# Fjerkroa bot

A simple Discord bot that uses OpenAI's GPT to chat with users.
## Installation

- Install the package using pip:

  ```shell
  pip install fjerkroa-bot
  ```

- Create a `config.toml` file with the following content, replacing the tokens with your own:
  ```toml
  openai-key = "OPENAIKEY"
  discord-token = "DISCORDTOKEN"
  model = "gpt-3.5-turbo"
  max-tokens = 1024
  temperature = 0.9
  top-p = 1.0
  presence-penalty = 1.0
  frequency-penalty = 1.0
  history-limit = 20
  history-per-channel = 5
  history-directory = "history"
  welcome-channel = "chat"
  staff-channel = "mods"
  join-message = "Hi! My name is {name} and I'm new here! How are you all doing?"
  short-path = [['^news$', '^news-bot$'], ['^mods$', '.*']]
  ignore-channels = ["blengon"]
  fix-model = "gpt-3.5-turbo"
  fix-description = "You are an AI which fixes JSON documents. The user sends you a JSON document, possibly invalid, and you fix it as well as you can and return the result as your answer. Even when the document is valid, return it pretty-formatted."
  additional-responders = []
  system = "You are a smart AI"
  ```
- Run the bot:

  ```shell
  python -m fjerkroa_bot --config config.toml
  ```
## Configuration

Create a `config.toml` file with the following configuration options:
```toml
openai-token = "your_openai_api_key"
model = "gpt-3.5-turbo"
temperature = 0.3
max-tokens = 100
top-p = 0.9
presence-penalty = 0
frequency-penalty = 0
history-limit = 50
history-per-channel = 3
history-directory = "./history"
system = "You are conversing with an AI assistant designed to answer questions and provide helpful information."
short-path = [
    ["channel_regex_1", "user_regex_1"],
    ["channel_regex_2", "user_regex_2"],
]
fix-model = "text-davinci-002"
fix-description = "Please fix the text to a valid JSON format."
```
- `openai-token`: Your OpenAI API key.
- `model`: The OpenAI GPT model to use.
- `temperature`: Controls the randomness of the generated responses.
- `max-tokens`: Maximum number of tokens allowed in a response.
- `top-p`: Controls the diversity of the generated responses.
- `presence-penalty`: Controls the penalty for new token occurrences.
- `frequency-penalty`: Controls the penalty for frequent token occurrences.
- `history-limit`: Maximum number of messages to store in the conversation history.
- `history-per-channel`: Maximum number of messages per channel in the conversation history.
- `history-directory`: Directory in which the conversation history is stored as files.
- `system`: System message to be included in the conversation.
- `short-path`: List of channel and user regex pattern pairs for which the short path applies (the message is added to the history but not sent to the AI).
- `fix-model`: OpenAI GPT model to use for fixing invalid JSON responses.
- `fix-description`: Description of the fixing process.
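A short-path rule pairs a channel regex with a user regex, and a message takes the short path when both match. A minimal sketch of that matching logic (the function name and the use of anchored `re.match` are assumptions for illustration, not the bot's actual code):

```python
import re

# Example rules from the config above: [channel regex, user regex] pairs.
SHORT_PATH = [["^news$", "^news-bot$"], ["^mods$", ".*"]]

def takes_short_path(channel: str, user: str, rules=SHORT_PATH) -> bool:
    """Return True when some rule's channel and user patterns both match."""
    return any(
        re.match(chan_re, channel) and re.match(user_re, user)
        for chan_re, user_re in rules
    )
```

With the example rules, a message from `news-bot` in `news` takes the short path, while a message from an ordinary user in `news` does not; in `mods`, every user matches `.*`.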