Machine shell, human heart.
YesImBot/Athena is a Koishi plugin that lets AI models take part in group chat discussions.
The new documentation site is online: https://yesimbot.ccalliance.tech
Easy to customize: the bot's name, personality, emotions, and other attributes can all be adjusted in the plugin configuration.
Load balancing: you can configure multiple large-model API endpoints, and Athena will distribute calls among them evenly.
Immersive perception: the model perceives current background information such as the date and time, the group chat's name, @ mentions, and so on.
Anti-prompt-injection: Athena blocks messages that look like prompt-injection attempts, preventing the bot from being hijacked.
Automatic prompt fetching: no need to write your own; a variety of high-quality prompts work out of the box.
And more...
Important
Before continuing, make sure you are using the latest version of Athena.
Caution
Please read this section carefully; it is important.
The following explains how the configuration file is used:
# Session settings
Group:
  # Memory slots. Each slot can hold one or more session ids (a group number, or private:<account id>
  # for direct messages); session ids in the same slot share context.
  AllowedGroups:
    - 114514 # Messages from 114514 use this slot first, meaning the bot has no memory of other sessions in this group
    - 114514, private:1919810 # Private messages from 1919810 use this slot first, meaning the bot then has the memory of both sessions
    - private:1919810, 12085141, 2551991321520
  # How many context messages the bot is allowed to read
  SendQueueSize: 100
  # Number of messages required before the bot speaks for the first time in a session
  TriggerCount: 2
  # After each bot message, the cooldown (counted in messages) is chosen by the LLM or drawn at random from the range below
  # Maximum cooldown count
  MaxPopNum: 4
  # Minimum cooldown count
  MinPopNum: 2
  # Probability that the bot replies immediately when it receives an @ mention. Range: [0, 1]
  AtReactPossibility: 0.50 # This used to be misspelled as AtReactPossiblilty; it has since been fixed
  # Filtered messages. Messages containing these keywords are not added to the context.
  # This mainly protects the bot from prompt-injection attacks.
  Filter:
    - You are
    - 呢
    - 大家
# LLM API settings
API:
  # This is a list; you can configure multiple APIs for load balancing.
  APIList:
    # API response format; OpenAI or Cloudflare
    - APIType: OpenAI
      # API base URL, using OpenAI as the example here
      # If you use Cloudflare, set it to https://api.cloudflare.com/client/v4
      BaseURL: https://api.openai.com/
      # Your API key
      APIKey: sk-XXXXXXXXXXXXXXXXXXXXXXXXXXXX
      # Model
      AIModel: gpt-4o-mini
      # If you are using Cloudflare, don't forget the setting below
      # Cloudflare Account ID; if unsure, check the URL of your Cloudflare dashboard
      UID: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
# Bot settings
Bot:
  # Name
  BotName: 胡梨
  # Genshin Impact mode (what?)
  CuteMode: true
  # Download URLs or file names of the prompt file. If the download fails, download the file
  # manually and place it in the directory containing koishi.yml
  # Extremely important! If you don't understand what this is, do not modify it
  PromptFileUrl:
    - "https://raw.githubusercontent.com/HydroGest/promptHosting/main/src/prompt.mdt" # First-generation prompt, works with all AI models
    - "https://raw.githubusercontent.com/HydroGest/promptHosting/main/src/prompt-next.mdt" # Next-generation prompt, best results; recommended if you can afford Claude 3.5 / GPT-4 and the like
    - "https://raw.githubusercontent.com/HydroGest/promptHosting/main/src/prompt-next-short.mdt" # Trimmed version of the next-generation prompt, suited to lower-end models such as GPT-4o-mini
  # Index of the currently selected prompt, starting from 0
  PromptFileSelected: 2
  # The bot's self-image
  WhoAmI: 一个普通群友
  # The bot's personality
  BotPersonality: 冷漠/高傲/网络女神
  # Suppress other commands (experimental)
  SendDirectly: true
  # The bot's habits; you can also put other small reminders here
  BotHabbits: 辩论
  # The bot's background
  BotBackground: 校辩论队选手
  ... # Other persona settings applied to the prompt. Settings that are not written into the prompt file have no effect
  # Message post-processing: replaces content in the bot's message at the last moment before it is sent; regular expressions are supported
  BotSentencePostProcess:
    - replacethis: 。$
      tothis: ' '
    - replacethis: 哈哈哈哈
      tothis: 嘎哈哈哈
  # The bot's typing speed
  WordsPerSecond: 30 # 30 characters per second
  ... # For the remaining options, see the documentation site
Then pull the bot into the target group. The bot will lurk for a while at first, depending on the value of Group.TriggerCount: once the number of new messages reaches that value, the bot starts taking part in the discussion (quite faithful to how a real human behaves, isn't it?).
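The BotSentencePostProcess rules shown in the configuration are plain regex substitutions applied just before a message is sent. A hypothetical Python sketch of that behavior (Athena itself is a TypeScript plugin; the rules below mirror the example config, with the trailing full stop simply deleted instead of replaced by a space):

```python
import re

# Rules mirroring the BotSentencePostProcess example: strip a trailing
# Chinese full stop, and swap one style of laughter for another.
rules = [
    {"replacethis": "。$", "tothis": ""},
    {"replacethis": "哈哈哈哈", "tothis": "嘎哈哈哈"},
]

def post_process(message: str) -> str:
    """Apply every replacement rule, in order, to the outgoing message."""
    for rule in rules:
        message = re.sub(rule["replacethis"], rule["tothis"], message)
    return message

print(post_process("我觉得你说得对。"))   # -> 我觉得你说得对
print(post_process("哈哈哈哈,太好笑了"))  # -> 嘎哈哈哈,太好笑了
```

Because `replacethis` is treated as a regular expression, anchors like `$` work as expected.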
Tip
If you find the bot too active, you can also turn up the Group.MinPopNum value.
Warning
The frequency settings must satisfy Group.MinPopNum < Group.MaxPopNum < Group.SendQueueSize; otherwise problems will occur.
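The inequality above can be checked mechanically before deploying. A hypothetical sketch of such a sanity check (the key names come from the Group section; the validator itself is not part of Athena):

```python
def validate_group_config(group: dict) -> list:
    """Return a list of violations of MinPopNum < MaxPopNum < SendQueueSize."""
    errors = []
    if not group["MinPopNum"] < group["MaxPopNum"]:
        errors.append("Group.MinPopNum must be less than Group.MaxPopNum")
    if not group["MaxPopNum"] < group["SendQueueSize"]:
        errors.append("Group.MaxPopNum must be less than Group.SendQueueSize")
    return errors

# The example configuration satisfies 2 < 4 < 100, so no errors are reported.
print(validate_group_config({"MinPopNum": 2, "MaxPopNum": 4, "SendQueueSize": 100}))  # -> []
```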
Next, adjust the options under the bot settings to suit your situation; feel free to experiment here. Note, however, that with Cloudflare Workers AI you may find your bot talking nonsense. This is caused by the poor Chinese corpus of Cloudflare Workers AI's free models. If you want an AI model that is economical while still ensuring the quality of the bot's speech, GPT-4o-mini may be a wise choice. You don't necessarily have to use OpenAI's official API, either: Athena supports any API that uses OpenAI's official format.
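"Any API that uses OpenAI's official format" means the endpoint accepts a standard chat-completions request at `<BaseURL>/v1/chat/completions`. A sketch of the payload such a service expects (the URL and key are placeholders mirroring the APIList example above; nothing here performs a network call):

```python
import json

base_url = "https://api.openai.com"           # placeholder: your BaseURL
api_key = "sk-XXXXXXXXXXXXXXXXXXXXXXXXXXXX"   # placeholder: your APIKey

# A standard OpenAI-format chat-completions request body.
payload = {
    "model": "gpt-4o-mini",  # the AIModel field
    "messages": [
        {"role": "system", "content": "<contents of the selected prompt file>"},
        {"role": "user", "content": "<the serialized chat context>"},
    ],
}
headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
}
print(json.dumps(payload, ensure_ascii=False))
```

Any third-party gateway that accepts this request shape and returns the matching response shape will work as a drop-in replacement.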
Note
After testing, the Claude 3.5 model performed best in this scenario.
After downloading the prompt.mdt file locally, if you think our writing falls short, or you have novel ideas of your own, you may want to customize this part. Here is how to do it.
First, turn off the option 每次启动时尝试更新Prompt 文件 (try to update the prompt file on every startup) in the plugin configuration; it sits in the debugging-tools section at the bottom of the configuration page. After that, you can find prompt.mdt in Koishi's file explorer and modify it freely in Koishi's built-in editor, but there are a few points to pay attention to:
${config.Bot.BotName} -> the bot's name
${config.Bot.WhoAmI} -> the bot's self-image
${config.Bot.BotHometown} -> the bot's hometown
${config.Bot.BotYearold} -> the bot's age
${config.Bot.BotPersonality} -> the bot's personality
${config.Bot.BotGender} -> the bot's gender
${config.Bot.BotHabbits} -> the bot's habits
${config.Bot.BotBackground} -> the bot's background
${config.Bot.CuteMode} -> on|off
${curYear} -> current year # 2024
${curMonth} -> current month # 11
${curDate} -> current day of the month # 25
${curHour} -> current hour # 10
${curMinute} -> current minute # 30
${curSecond} -> current second # 15
${curGroupName} -> name of the session containing the message that triggered this call; for a private chat, "bot与xxx的私聊" (the bot's private chat with xxx)
<img src="https://xxxxx.jpg" base64="xx_xxxx"> -> an image handed to the image viewer # didn't expect it, huh? You can even embed images in the system prompt
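Substituting these ${...} placeholders is ordinary string templating. A hypothetical Python sketch (Athena's real implementation is in TypeScript; the renderer below is illustrative, with names taken from the list above):

```python
import re

def render_prompt(template: str, variables: dict) -> str:
    """Replace each ${name} placeholder with its value; unknown names are left untouched."""
    return re.sub(
        r"\$\{([^}]+)\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )

variables = {
    "config.Bot.BotName": "胡梨",
    "curYear": 2024,
}
print(render_prompt("我是${config.Bot.BotName},现在是${curYear}年", variables))
# -> 我是胡梨,现在是2024年
```

Leaving unknown placeholders untouched means a typo in the prompt file shows up verbatim in the output, which makes it easy to spot.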
[
  {
    time: "", // timestamp, formatted yyyy/mm/dd/hh/min/sec
    session_id: "", // id of the session this message belongs to, e.g. "123456789" or "private:9876543210"
    id: "", // message id; when the bot needs to quote a message, it uses this to determine the value to fill into select
    author: "", // name of the message's sender
    author_id: "", // id of the message's sender
    msg: "" // the message itself
  },
  {
    time: "",
    session_id: "",
    id: "",
    author: "",
    author_id: "",
    msg: ""
  },
  ...
]
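The array above is the chat-history format handed to the model. As a hypothetical illustration of how one incoming message maps onto an entry of that array (the helper function is invented for this sketch; field names come from the format above):

```python
import json
from datetime import datetime

def to_context_entry(session_id, msg_id, author, author_id, text, when=None):
    """Format one incoming chat message as an entry of the context array."""
    when = when or datetime.now()
    return {
        "time": when.strftime("%Y/%m/%d/%H/%M/%S"),  # yyyy/mm/dd/hh/min/sec
        "session_id": session_id,
        "id": msg_id,
        "author": author,
        "author_id": author_id,
        "msg": text,
    }

entry = to_context_entry(
    "123456789", "1", "胡梨", "10001", "大家好",
    when=datetime(2024, 11, 25, 10, 30, 15),
)
print(json.dumps([entry], ensure_ascii=False))
```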
{
  "status": "success", // "success" or "skip" (skip replying)
  "session_id": "123456789", // id of the session that finReply should be sent to
  "nextReplyIn": 2, // cooldown count before the next reply, letting the LLM help control how often it speaks
  "logic": "", // the LLM's reasoning process
  "reply": "", // first-draft reply
  "check": "", // the checking logic used while verifying that the draft reply complies with the "message generation rules"
  "finReply": "", // final reply; the LLM prepends <quote id=""/> to specify the id of the message it quotes
  "execute": [] // list of commands to run
}
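Since Athena parses this object out of the model's reply, it helps to check a parsed reply against the schema. A minimal, hypothetical validator (key names are taken from the format above; Athena's real parser may be more lenient):

```python
import json

# Every top-level key in the reply format above.
REQUIRED_KEYS = {"status", "session_id", "nextReplyIn", "logic",
                 "reply", "check", "finReply", "execute"}

def validate_reply(raw: str):
    """Parse the LLM's JSON reply; report missing keys and reject a bad status."""
    obj = json.loads(raw)
    missing = sorted(REQUIRED_KEYS - obj.keys())
    if obj.get("status") not in ("success", "skip"):
        raise ValueError(f"unexpected status: {obj.get('status')!r}")
    return obj, missing

obj, missing = validate_reply(
    '{"status": "success", "session_id": "123456789", "nextReplyIn": 2,'
    ' "logic": "", "reply": "", "check": "", "finReply": "早上好", "execute": []}'
)
print(missing)  # -> []
```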
Note
When modifying the prompt yourself, please make sure the LLM's reply conforms to the required JSON format. That said, missing a few entries doesn't seem to matter? Σ(っ°Д °;)っ
We strongly recommend using APIs that are not billed per token, because the prompt Athena prepends to each conversation consumes a large number of tokens. You can use APIs that bill per call, such as:
Our ultimate goal is that even if your account is hooked up to Athena one day, group members will not be able to find any clues; all of our improvements work toward this.
Thanks to the contributors, it was you who made Athena possible.
Feel free to open an issue, or join the official Athena chat & test group directly: 857518324. You are welcome any time!