Improve README.md

commit 0791825e01 (parent 92c6d8474b)

.gitignore (vendored), 2 additions:
````diff
@@ -6,3 +6,5 @@ dist/
 build/
 .temp
 history/
+.config.yaml
+.db
````
README.md, 71 changes:
````diff
@@ -9,17 +9,70 @@ A simple Discord bot that uses OpenAI's GPT to chat with users.
 pip install fjerkroa-bot
 ```
 
-2. Create a `bot.py` file with the following content, replacing the tokens with your own:
-```python
-from discord_gpt_bot import main
-
-main.DISCORD_BOT_TOKEN = "your_discord_bot_token"
-main.OPENAI_API_KEY = "your_openai_api_key"
-
-main.run_bot()
-```
+2. Create a `config.toml` file with the following content, replacing the tokens with your own:
+```toml
+openai-key = "OPENAIKEY"
+discord-token = "DISCORDTOKEN"
+model = "gpt-3.5-turbo"
+max-tokens = 1024
+temperature = 0.9
+top-p = 1.0
+presence-penalty = 1.0
+frequency-penalty = 1.0
+history-limit = 20
+history-per-channel = 5
+history-directory = "history"
+welcome-channel = "chat"
+staff-channel = "mods"
+join-message = "Hi! Ich heiße {name} und ich bin neu hier! Wie geht es euch?"
+short-path = [['^news$', '^news-bot$'], ['^mods$', '.*']]
+ignore-channels = ["blengon"]
+fix-model = "gpt-3.5-turbo"
+fix-description = "You are an AI which fixes JSON documents. User send you JSON document, possibly invalid, and you fix it as good as you can and return as answer. Even when document is valid, return it pretty formated."
+additional-responders = []
+system = "You are an smart AI"
+```
 
 3. Run the bot:
 ```
-python bot.py
+python -m fjerkroa_bot --config config.toml
 ```
+
+## Configuration
+
+Create a `config.toml` file with the following configuration options:
+
+```toml
+openai-token = "your_openai_api_key"
+model = "gpt-3.5-turbo"
+temperature = 0.3
+max-tokens = 100
+top-p = 0.9
+presence-penalty = 0
+frequency-penalty = 0
+history-limit = 50
+history-per-channel = 3
+history-directory = "./history"
+system = "You are conversing with an AI assistant designed to answer questions and provide helpful information."
+short-path = [
+    ["channel_regex_1", "user_regex_1"],
+    ["channel_regex_2", "user_regex_2"],
+]
+fix-model = "text-davinci-002"
+fix-description = "Please fix the text to a valid JSON format."
+```
+
+- `openai-token`: Your OpenAI API key.
+- `model`: The OpenAI GPT model to use.
+- `temperature`: Controls the randomness of the generated responses.
+- `max-tokens`: Maximum number of tokens allowed in a response.
+- `top-p`: Controls the diversity of the generated responses.
+- `presence-penalty`: Controls the penalty for new token occurrences.
+- `frequency-penalty`: Controls the penalty for frequent token occurrences.
+- `history-limit`: Maximum number of messages to store in the conversation history.
+- `history-per-channel`: Maximum number of messages per channel in the conversation history.
+- `history-directory`: Directory in which the conversation history is stored.
+- `system`: System message to be included in the conversation.
+- `short-path`: List of channel and user regex pairs that take the short path (the message is not sent to the AI; it only fills the history).
+- `fix-model`: OpenAI GPT model to use for fixing invalid JSON responses.
+- `fix-description`: Description of the fixing process.
````
Loading…
Reference in New Issue
Block a user
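The `short-path` option pairs a channel regex with a user regex. A hypothetical sketch of how such pairs could be evaluated with Python's `re` module, using the patterns committed in the diff — the `is_short_path` helper is illustrative, not the bot's real code:

```python
import re

# Pairs from the commit's config.toml: [channel_regex, user_regex].
SHORT_PATH = [["^news$", "^news-bot$"], ["^mods$", ".*"]]

def is_short_path(channel: str, user: str) -> bool:
    """True if any pair matches both the channel and the user name.

    Per the README, a matching message is not sent to the AI; it is
    only recorded in the conversation history.
    """
    return any(
        re.match(chan_re, channel) and re.match(user_re, user)
        for chan_re, user_re in SHORT_PATH
    )

print(is_short_path("news", "news-bot"))  # True
print(is_short_path("news", "alice"))     # False
```

The `^...$` anchors make the channel patterns exact-name matches, while `.*` on the user side matches every user in that channel.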