# Fjerkroa bot

A simple Discord bot that uses OpenAI's GPT to chat with users.

## Installation

- Install the package using pip:

```shell
pip install fjerkroa-bot
```

- Create a `config.toml` file with the following content, replacing the tokens with your own:
```toml
openai-key = "OPENAIKEY"
discord-token = "DISCORDTOKEN"
model = "gpt-3.5-turbo"
max-tokens = 1024
temperature = 0.9
top-p = 1.0
presence-penalty = 1.0
frequency-penalty = 1.0
history-limit = 20
history-per-channel = 5
history-directory = "history"
welcome-channel = "chat"
staff-channel = "mods"
join-message = "Hi! My name is {name} and I am new here! How are you all doing?"
short-path = [['^news$', '^news-bot$'], ['^mods$', '.*']]
ignore-channels = ["blengon"]
fix-model = "gpt-3.5-turbo"
fix-description = "You are an AI which fixes JSON documents. Users send you a JSON document, possibly invalid, and you fix it as well as you can and return it as your answer. Even when the document is valid, return it pretty formatted."
additional-responders = []
system = "You are a smart AI"
```
- Run the bot:

```shell
python -m fjerkroa_bot --config config.toml
```
## Configuration

Create a `config.toml` file with the following configuration options:
```toml
openai-key = "OPENAIKEY"
discord-token = "DISCORDTOKEN"
model = "gpt-3.5-turbo"
max-tokens = 1024
temperature = 0.9
top-p = 1.0
presence-penalty = 1.0
frequency-penalty = 1.0
history-limit = 10
welcome-channel = "welcome"
staff-channel = "staff"
join-message = "Hi! I am {name}, and I am new here."
short-path = [['^news$', '^news-bot$'], ['^mod$', '.*']]
system = "You are a smart AI"
```
- `discord-token`: The token for the Discord bot account.
- `openai-key`: The API key for the OpenAI API.
- `model`: The OpenAI model name to be used for generating AI responses.
- `temperature`: The sampling temperature for the AI model's output.
- `history-limit`: The maximum number of messages to keep in the conversation history.
- `history-directory`: The directory where the conversation history is stored.
- `history-per-channel`: The number of history items to keep per channel.
- `max-tokens`: The maximum number of tokens in the generated AI response.
- `top-p`: The top-p (nucleus) sampling value for the AI model's output.
- `presence-penalty`: The presence penalty for the AI model's output.
- `frequency-penalty`: The frequency penalty for the AI model's output.
- `staff-channel`: The name of the channel where staff messages are sent.
- `welcome-channel`: The name of the channel where welcome messages are sent.
- `join-message`: The message template sent to the AI when a user joins the server, prompting the AI to write something to the new user.
- `ignore-channels`: A list of channels to be ignored by the bot.
- `additional-responders`: A list of channels that should have a separate AI responder with its own history.
- `short-path`: A list of channel and user regex pattern pairs for which the short path applies (the message is added to the history without being sent to the AI).
- `system`: The system message template for the AI conversation.
- `fix-model`: The OpenAI model name to be used for fixing the AI responses.
- `fix-description`: The system description for the fix-model's conversation.
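Each `short-path` entry pairs a channel regex with a user regex; when an incoming message matches both patterns of any pair, the bot takes the short path and skips the AI call. A hedged sketch of how that check might look, assuming the pattern pairs from the example config above (the helper name `matches_short_path` is illustrative, not the bot's actual function):

```python
import re

# Pairs of [channel-pattern, user-pattern], as in the short-path option above.
SHORT_PATH = [["^news$", "^news-bot$"], ["^mods$", ".*"]]

def matches_short_path(channel: str, user: str) -> bool:
    """Return True if any pattern pair matches both the channel and the user."""
    return any(
        re.fullmatch(chan_re, channel) and re.fullmatch(user_re, user)
        for chan_re, user_re in SHORT_PATH
    )
```

With the patterns above, a post by `news-bot` in `#news` or any user in `#mods` would be recorded in the history without triggering an AI response.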