You can run humanscript in a sandboxed environment via Docker:
```shell
docker run -it lukechilds/humanscript
```

Alternatively you can install it natively on your system with Homebrew:
```shell
brew install lukechilds/tap/humanscript
```

Or manually install by downloading this repository and copying/symlinking humanscript into your PATH.
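For example, a manual install might look like this (a sketch; `~/.local/bin` is an assumed directory on your PATH, and the script is assumed to live at the repository root — substitute your own paths):

```shell
# Clone the repository and symlink the humanscript script onto your PATH
git clone https://github.com/lukechilds/humanscript.git
ln -s "$PWD/humanscript/humanscript" ~/.local/bin/humanscript  # target path is an assumption
```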
Be careful if you're running humanscript unsandboxed. The inferpreter can sometimes do weird and dangerous things. Speaking from experience, unless you want to be doing a system restore at 2am on a Saturday night, you should at least run humanscripts initially with
```shell
HUMANSCRIPT_EXECUTE="false"
```

so you can check the resulting code before executing it.
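For example (`./myscript` is a placeholder for any humanscript):

```shell
# Print the generated code to stdout for review instead of running it
HUMANSCRIPT_EXECUTE="false" ./myscript
```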
humanscript is configured out of the box to use OpenAI's GPT-4; you just need to add your API key.

We need to add it to `~/.humanscript/config`:
```shell
mkdir -p ~/.humanscript/
echo 'HUMANSCRIPT_API_KEY="<your-openai-api-key>"' >> ~/.humanscript/config
```

Now you can create a humanscript and make it executable.
```shell
echo '#!/usr/bin/env humanscript
print an ascii art human' > asciiman

chmod +x asciiman
```

And then execute it.
```shell
./asciiman

 O
/|\
/ \
```

All environment variables can be added to `~/.humanscript/config` to be applied globally to all humanscripts:
```shell
$ cat ~/.humanscript/config
HUMANSCRIPT_API_KEY="sk-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
HUMANSCRIPT_MODEL="gpt-4"
```

or on a per-script basis:
```shell
$ HUMANSCRIPT_REGENERATE="true" ./asciiman
```

#### HUMANSCRIPT_API

Default: `https://api.openai.com/v1`
A server following OpenAI's Chat Completion API.
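In other words, the backend just has to answer requests of this shape (a rough curl sketch; the key, model, and prompt are only examples):

```shell
# An OpenAI-style chat completion request — the shape the backend must accept
curl "https://api.openai.com/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $HUMANSCRIPT_API_KEY" \
  -d '{
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "print an ascii art human"}]
  }'
```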
Many local proxies exist that implement this API in front of locally running LLMs like Llama 2. LM Studio is a good option.
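For example, a config pointing humanscript at a local LM Studio server might look like this (a sketch; `local-model` is a placeholder for whatever model identifier your local server expects):

```shell
# ~/.humanscript/config — use a local OpenAI-compatible backend instead of OpenAI
# No HUMANSCRIPT_API_KEY is needed for a local backend
HUMANSCRIPT_API="http://localhost:1234/v1"
HUMANSCRIPT_MODEL="local-model"
```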
HUMANSCRIPT_API="http://localhost:1234/v1"HUMANSCRIPT_API_KEYDefault: unset
The API key to be sent to the LLM backend. Only needed when using OpenAI.
HUMANSCRIPT_API_KEY="sk-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"HUMANSCRIPT_MODELDefault: gpt-4
The model to use for inference.
HUMANSCRIPT_MODEL="gpt-3.5"HUMANSCRIPT_EXECUTEDefault: true
Whether or not the humanscript inferpreter should automatically execute the generated code on the fly.
If `false`, the generated code will not be executed and will instead be streamed to stdout.
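This makes it easy to capture the generated code and review it before running it yourself (a sketch; the file name is arbitrary, and running it with `sh` assumes the backend generated shell code):

```shell
# Capture the generated code, inspect it, then run it manually if it looks safe
HUMANSCRIPT_EXECUTE="false" ./asciiman > asciiman.generated
cat asciiman.generated   # review the code
sh asciiman.generated    # assumes the generated code is shell
```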
HUMANSCRIPT_EXECUTE="false"HUMANSCRIPT_REGENERATEDefault: false
Whether or not the humanscript inferpreter should regenerate a cached humanscript.
If `true`, the humanscript will be reinferpreted and the cache entry will be replaced with the newly generated code. Due to the nondeterministic nature of LLMs, each time you reinferpret a humanscript you will get a similar but slightly different output.
HUMANSCRIPT_REGENERATE="true"MIT © Luke Childs