galah
v1.0.0

TL;DR: Galah (/ɡəˈlɑː/ - pronounced "guh-laa") is an LLM-powered web honeypot designed to mimic various applications and dynamically respond to arbitrary HTTP requests. Galah supports major LLM providers, including OpenAI, Google AI, GCP's Vertex AI, Anthropic, Cohere, and Ollama.
Unlike traditional web honeypots that manually emulate a specific web application or vulnerability, Galah dynamically crafts a relevant response, including HTTP headers and body content, to any HTTP request. Responses generated by the LLM are cached for a configurable period, which prevents identical requests from triggering repeated generations and keeps API costs down. The cache is port-specific: a response generated for one port is never reused for the same request arriving on a different port.
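You can picture the port-scoped cache as a lookup keyed on both the listening port and a digest of the request. A minimal Go sketch of that idea (the key derivation here is illustrative, not Galah's actual scheme):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// cacheKey derives a port-scoped cache key: the same request arriving on a
// different port produces a different key, so cached responses never leak
// across ports. Illustrative only; Galah's real key derivation may differ.
func cacheKey(port, method, uri string) string {
	sum := sha256.Sum256([]byte(method + " " + uri))
	return port + ":" + hex.EncodeToString(sum[:])
}

func main() {
	fmt.Println(cacheKey("8080", "GET", "/.aws/credentials"))
	fmt.Println(cacheKey("8888", "GET", "/.aws/credentials")) // different key, same request
}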
Prompt configuration is key to this honeypot. While you can update the prompt in the configuration file, it is crucial to keep the part that instructs the LLM to produce its response in the specified JSON format.
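Judging by the httpResponse object in the event log below, that JSON pairs a map of headers with a body string. A hypothetical Go rendering of the shape (the struct and its field names are inferred from the log, not taken from Galah's source):

package main

import (
	"encoding/json"
	"fmt"
)

// LLMResponse mirrors the JSON shape the prompt asks the LLM to emit: a
// header map plus a body string, as seen in the event log's httpResponse
// field. The struct name and layout are illustrative assumptions.
type LLMResponse struct {
	Headers map[string]string `json:"headers"`
	Body    string            `json:"body"`
}

func main() {
	raw := `{"headers":{"Content-Type":"text/plain"},"body":"hello"}`
	var r LLMResponse
	if err := json.Unmarshal([]byte(raw), &r); err != nil {
		panic(err)
	}
	fmt.Println(r.Headers["Content-Type"], r.Body)
}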
Note: Galah was developed as a fun weekend project to explore the capabilities of LLMs in crafting HTTP messages, and it is not intended for production use. The honeypot can be identified through various methods, such as network fingerprinting techniques, prolonged response times (depending on the LLM provider and model), and non-standard responses. To safeguard against denial-of-wallet attacks, make sure you set usage limits on your LLM API.
The default prompt and settings live in the config.yaml file.

% git clone [email protected]:0x4D31/galah.git
% cd galah
% go mod download
% go build -o galah ./cmd/galah
% export LLM_API_KEY=your-api-key
% ./galah --help
 ██████   █████  ██       █████  ██   ██
██       ██   ██ ██      ██   ██ ██   ██
██   ███ ███████ ██      ███████ ███████
██    ██ ██   ██ ██      ██   ██ ██   ██
 ██████  ██   ██ ███████ ██   ██ ██   ██
llm-based web honeypot // version 1.0
author: Adel "0x4D31" Karimi
Usage: galah --provider PROVIDER --model MODEL [--server-url SERVER-URL] [--temperature TEMPERATURE] [--api-key API-KEY] [--cloud-location CLOUD-LOCATION] [--cloud-project CLOUD-PROJECT] [--interface INTERFACE] [--config-file CONFIG-FILE] [--event-log-file EVENT-LOG-FILE] [--cache-db-file CACHE-DB-FILE] [--cache-duration CACHE-DURATION] [--log-level LOG-LEVEL]
Options:
--provider PROVIDER, -p PROVIDER
LLM provider (openai, googleai, gcp-vertex, anthropic, cohere, ollama) [env: LLM_PROVIDER]
--model MODEL, -m MODEL
LLM model (e.g. gpt-3.5-turbo-1106, gemini-1.5-pro-preview-0409) [env: LLM_MODEL]
--server-url SERVER-URL, -u SERVER-URL
LLM Server URL (required for Ollama) [env: LLM_SERVER_URL]
--temperature TEMPERATURE, -t TEMPERATURE
LLM sampling temperature (0-2). Higher values make the output more random [default: 1, env: LLM_TEMPERATURE]
--api-key API-KEY, -k API-KEY
LLM API Key [env: LLM_API_KEY]
--cloud-location CLOUD-LOCATION
LLM cloud location region (required for GCP's Vertex AI) [env: LLM_CLOUD_LOCATION]
--cloud-project CLOUD-PROJECT
LLM cloud project ID (required for GCP's Vertex AI) [env: LLM_CLOUD_PROJECT]
--interface INTERFACE, -i INTERFACE
interface to serve on
--config-file CONFIG-FILE, -c CONFIG-FILE
Path to config file [default: config/config.yaml]
--event-log-file EVENT-LOG-FILE, -o EVENT-LOG-FILE
Path to event log file [default: event_log.json]
--cache-db-file CACHE-DB-FILE, -f CACHE-DB-FILE
Path to database file for response caching [default: cache.db]
--cache-duration CACHE-DURATION, -d CACHE-DURATION
Cache duration for generated responses (in hours). Use 0 to disable caching, and -1 for unlimited caching (no expiration). [default: 24]
--log-level LOG-LEVEL, -l LOG-LEVEL
Log level (debug, info, error, fatal) [default: info]
--help, -h             display this help and exit

% git clone [email protected]:0x4D31/galah.git
% cd galah
% mkdir logs
% export LLM_API_KEY=your-api-key
% docker build -t galah-image .
% docker run -d --name galah-container -p 8080:8080 -v $(pwd)/logs:/galah/logs -e LLM_API_KEY galah-image -o logs/galah.json -p openai -m gpt-3.5-turbo-1106

Example:
% ./galah -p gcp-vertex -m gemini-1.0-pro-002 --cloud-project galah-test --cloud-location us-central1 --temperature 0.2 --cache-duration 0
% curl -i http://localhost:8080/.aws/credentials
HTTP/1.1 200 OK
Date: Sun, 26 May 2024 16:37:26 GMT
Content-Length: 116
Content-Type: text/plain; charset=utf-8
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
JSON event log:
{
"eventTime": "2024-05-26T18:37:26.742418+02:00",
"httpRequest": {
"body": "",
"bodySha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"headers": "User-Agent: [curl/7.71.1], Accept: [*/*]",
"headersSorted": "Accept,User-Agent",
"headersSortedSha256": "cf69e186169279bd51769f29d122b07f1f9b7e51bf119c340b66fbd2a1128bc9",
"method": "GET",
"protocolVersion": "HTTP/1.1",
"request": "/.aws/credentials",
"userAgent": "curl/7.71.1"
},
"httpResponse": {
"headers": {
"Content-Length": "127",
"Content-Type": "text/plain"
},
"body": "[default]naws_access_key_id = AKIAIOSFODNN7EXAMPLEnaws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEYn"
},
"level": "info",
"llm": {
"model": "gemini-1.0-pro-002",
"provider": "gcp-vertex",
"temperature": 0.2
},
"msg": "successfulResponse",
"port": "8080",
"sensorName": "mbp.local",
"srcHost": "localhost",
"srcIP": "::1",
"srcPort": "51725",
"tags": null,
"time": "2024-05-26T18:37:26.742447+02:00"
}
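Each event is written as a single JSON object like the one above, so downstream tooling can unmarshal just the fields it needs. A minimal Go sketch, assuming one event per line in event_log.json (the struct covers only a subset of fields, with names taken from the log above):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// Event models a subset of the fields in Galah's JSON event log.
type Event struct {
	EventTime   string `json:"eventTime"`
	SrcIP       string `json:"srcIP"`
	Port        string `json:"port"`
	HTTPRequest struct {
		Method    string `json:"method"`
		Request   string `json:"request"`
		UserAgent string `json:"userAgent"`
	} `json:"httpRequest"`
}

func main() {
	f, err := os.Open("event_log.json")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Print a one-line summary of each logged request.
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		var e Event
		if err := json.Unmarshal(scanner.Bytes(), &e); err != nil {
			continue // skip lines that are not complete JSON events
		}
		fmt.Printf("%s %s :%s %s %s\n", e.EventTime, e.SrcIP, e.Port, e.HTTPRequest.Method, e.HTTPRequest.Request)
	}
}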
See more examples here.