This is a Python-based Telegram chatbot that uses the OpenAI API with gpt-3.5-turbo-* models to hold context-aware conversations with users.
You can easily configure the OpenAI model parameters for the chatbot through the models.yml file. This allows quick adjustments of settings such as temperature, max_tokens, and voice without altering the code. Simply edit the models.yml file to change the behavior and response style of your chatbot as needed.
Clone or download the repository.
git clone [email protected]:welel/dialog-chat-bot.git
Check out the gpt-3.5-turbo branch.
git checkout gpt-3.5-turbo
Create a virtual environment, activate it, and install the dependencies.
python -m venv env
source env/bin/activate
pip install --upgrade pip && pip install -r requirements.txt
To use voice messages, please install ffmpeg.
# on Ubuntu or Debian
sudo apt update && sudo apt install ffmpeg
# on Arch Linux
sudo pacman -S ffmpeg
# on macOS using Homebrew (https://brew.sh/)
brew install ffmpeg
# on Windows using Chocolatey (https://chocolatey.org/)
choco install ffmpeg
# on Windows using Scoop (https://scoop.sh/)
scoop install ffmpeg
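After installing, you can verify that ffmpeg is reachable from Python. This is a minimal check; the function name is illustrative and not part of the bot's code:

```python
import shutil


def ffmpeg_available() -> bool:
    """Return True if the ffmpeg binary is found on PATH."""
    return shutil.which("ffmpeg") is not None


if __name__ == "__main__":
    if ffmpeg_available():
        print("ffmpeg found")
    else:
        print("ffmpeg missing - voice messages will not work")
```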
Copy/rename .env.dist to .env and fill it with data.
cp .env.dist .env
Set up a Telegram bot and obtain a bot token (see https://medium.com/geekculture/generate-telegram-token-for-bot-api-d26faf9bf064 for instructions).
Set up an OpenAI account and obtain an API key (see https://beta.openai.com/docs/quickstart for instructions).
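A filled-in .env might look roughly like the sketch below. The exact variable names are defined by .env.dist, so check that file; the names here (apart from MODEL_CONFIG_NAME, which is mentioned in the configuration section) are assumptions:

```ini
# Illustrative .env - check .env.dist for the exact variable names
TELEGRAM_BOT_TOKEN=123456:ABC-your-telegram-bot-token
OPENAI_API_KEY=sk-your-openai-api-key
MODEL_CONFIG_NAME=default
```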
Run the bot.
python bot.py
Install Docker and Docker Compose (see the official Docker documentation if you are unsure how).
Copy/rename .env.dist to .env and fill it with data.
cp .env.dist .env
Now simply build the image and run it with docker compose:
docker compose build
docker compose up -d
To start interacting with the bot, send it any message in Telegram.
The chatbot uses configurations specified in the models.yml file to tailor its responses. This file allows for detailed customization of the OpenAI model parameters, offering flexibility to adjust the bot's behavior according to different needs or contexts.
The models.yml file in the project directory contains configurations for different models or scenarios. Here's how to configure it:
Selecting a Model: Under the models key, you can define multiple configurations.
Each configuration can specify a different OpenAI model. For example, the default configuration uses gpt-3.5-turbo with a max_tokens limit of 100. Set the environment variable MODEL_CONFIG_NAME to the name of the configuration you want the bot to use.
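A models.yml along these lines illustrates the structure. This is only a sketch built from the parameters and sections mentioned in this document (chat_model, chatbot, voice); check the file shipped with the repository for the exact keys:

```yaml
models:
  default:
    chat_model:
      model: gpt-3.5-turbo
      max_tokens: 100
      temperature: 0.7
    chatbot:
      max_context_len: 3000
    voice:
      voice: alloy
```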
Configuring OpenAI chat model (chat_model section):
gpt-3.5-turbo models, each with different capabilities and context window sizes.

Configuring chatbot behaviour (chatbot section):
The max_context_len parameter defines the total number of tokens (user inputs and bot responses) considered in a single conversation window. Adjusting this helps manage the depth of conversational history and can impact computational requirements and billing.

Configuring the bot's voice (voice section):
Available voices: alloy, echo, fable, onyx, nova, and shimmer. Each voice has a unique tone and style.
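The max_context_len behaviour described above can be sketched as a simple history trimmer. This is a rough illustration that assumes a token count is already known per message; the actual bot may count tokens with the model's tokenizer instead:

```python
def trim_history(messages, max_context_len):
    """Keep the most recent messages whose combined token counts fit
    within max_context_len. Each message is a (text, n_tokens) pair."""
    kept, total = [], 0
    # Walk backwards from the newest message, stopping when the budget is spent.
    for text, n_tokens in reversed(messages):
        if total + n_tokens > max_context_len:
            break
        kept.append((text, n_tokens))
        total += n_tokens
    return list(reversed(kept))


history = [
    ("hi", 1),
    ("hello!", 2),
    ("tell me a story", 4),
    ("once upon a time...", 6),
]
print(trim_history(history, 10))  # keeps the two most recent messages
```

Lowering max_context_len in models.yml shrinks this window, which reduces token usage (and cost) at the expense of how much earlier conversation the bot remembers.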