# LLM IK

Version 1.0.0
This repository generates and tests inverse kinematics solutions produced by large language models (LLMs) for kinematic chains with a single "end effector".
## Setup

1. Create a virtual environment: `python3 -m venv .venv`
2. Activate it:
   - Windows (Command Prompt): `.venv\Scripts\activate.bat`
   - Windows (PowerShell): `.venv\Scripts\activate.ps1`
   - Linux/macOS: `source .venv/bin/activate`
3. Install the requirements: `pip install -r requirements.txt`
4. Optionally, copy our `Robots`, `Models`, and `Providers` folders if you wish to use some of the same robots or LLMs as we have.
5. Make a folder named `Robots` and place the URDF files of the robots you wish to use inside.
6. Make a folder named `Models` and place all the LLM specification files you wish to use inside, as detailed in the Models section.
7. Make a folder named `Providers` and place the OpenAI API compatible specification files you wish to use inside, as detailed in the Providers section.
8. Make a folder named `Keys`, create `.txt` files named the same as the OpenAI API compatible specification files in the `Providers` folder, and paste the appropriate API key into each.
9. Run `llm_ik` with the parameters outlined in the Usage section.

## Results

Results are saved to the `Results` folder in the root directory.

## Models

Models are defined as `.txt` files in the `Models` folder in the root directory. Each model specifies:

- Whether it is a reasoning model: `True` or `False`, defaulting to `False`. If not a reasoning model, the prompts will include a statement to "think step by step and show all your work" to elicit some benefits of chain-of-thought thinking. Otherwise, this is omitted, as reasoning models already perform a process like this internally.
- The provider file (without the `.txt` extension) to use from the `Providers` folder. See the Providers section for how to configure these files themselves.
- Whether to use function calling: `True` or `False`, defaulting to whether its provider supports functions. This is useful as some providers, such as OpenRouter, support function calling, but not all of the models they serve do, giving you a per-model override. However, if the provider does not support function calls and this is set to `True`, the provider's configuration will override it to `False`, so this can only be used to disable function calling, not enable it.
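The reasoning-model option's effect on prompting can be sketched roughly as follows. This is an illustrative snippet, not code from this repository; `build_prompt` is a hypothetical helper name:

```python
def build_prompt(base_prompt: str, reasoning: bool) -> str:
    """Append a chain-of-thought nudge for non-reasoning models."""
    if reasoning:
        # Reasoning models already perform step-by-step work internally,
        # so the extra instruction is omitted.
        return base_prompt
    return base_prompt + "\nThink step by step and show all your work."
```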
If function calling is disabled for a model, additional details are added to the prompt so the model can still call methods, just not through the OpenAI API function mechanism; instead, the regular message response is parsed. Each file's name (without the `.txt` extension) will be used as the name of the model.

## Providers

Providers are defined as `.txt` files in the `Providers` folder in the root directory. Each provider specifies whether it supports function calls: `True` or `False`, defaulting to `False`. If the provider supports methods but a model does not, as explained in the Models section, this will be overwritten to `False` for that model only.

## Usage

- `-r` or `--robots` - The names of the robots. Defaults to `None`, which loads all robot URDF files in the `Robots` folder.
- `-m` or `--max` - The maximum chain length to run. Defaults to `0`, meaning there is no limit.
- `-o` or `--orientation` - Whether to solve for orientation in addition to position. Defaults to `True`.
- `-t` or `--types` - The highest solving type to run. Defaults to `Transfer`, meaning all are run.
- `-f` or `--feedbacks` - The maximum number of times to give feedback. Defaults to `5`.
- `-e` or `--examples` - The number of examples to give with feedback. Defaults to `10`.
- `-a` or `--training` - The number of training samples. Defaults to `1000`.
- `-v` or `--evaluating` - The number of evaluating samples. Defaults to `1000`.
- `-s` or `--seed` - The sample generation seed. Defaults to `42`.
- `-d` or `--distance` - The acceptable distance error. Defaults to `0.001`.
- `-n` or `--angle` - The acceptable angle error. Defaults to `0.001`.
- `-c` or `--cwd` - The working directory. Defaults to `None`, which uses the current working directory.
- `-l` or `--logging` - The logging level. Defaults to `INFO`.
- `-w` or `--wait` - How long to wait between API calls. Defaults to `1` second.
- `-u` or `--run` - Flag - enable API running.
- `-b` or `--bypass` - Flag - bypass the confirmation for API running.

## Interactions

Navigate the `Interactions` folder until you find the robot, model, and solving type you are looking for. Copy the contents of `X-Prompt.txt`, `X-Feedback.txt`, `X-Forward.txt`, or `X-Test.txt` into your chat interface and wait for a response, where `X` is a number.
Save the model's reply as `X-Response.txt`, where `X` is the next number in the chat history, and run the program again. Repeat the previous step and this one until a file named `X-Done.txt` appears, where `X` is a number.
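The numbered-file convention described above can be automated with a small helper. This is a sketch under the assumption that interaction files follow the `X-Kind.txt` naming shown; `next_response_file` is a hypothetical function, not part of the repository:

```python
import os
import re


def next_response_file(folder):
    """Return the name of the next X-Response.txt to create, or None if
    an X-Done.txt file indicates the interaction is complete."""
    names = os.listdir(folder)
    if any(re.fullmatch(r"\d+-Done\.txt", n) for n in names):
        return None
    # Collect the numbers of existing interaction files.
    numbers = [int(m.group(1)) for n in names
               if (m := re.fullmatch(r"(\d+)-\w+\.txt", n))]
    # The response takes the next number in the chat history.
    return f"{max(numbers, default=0) + 1}-Response.txt"
```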