Note: this is a simple toy LLM model-builder framework for playing around with language models and understanding their working principles.
It is not an LLM itself, but you can bring your own data and train it to build LLM models.
An experimental implementation of large language model (LLM) architecture for research and development: exploring LLM architectures and the design process of building, training, and fine-tuning efficient Generative Pre-trained Transformer (GPT) models.
For more AI-related tools and frameworks, look into OX-AI, an open-source AI project.
A git pull should give you a clean, working copy; if you encounter bugs, please report them as issues.
To install the latest development version directly from GitHub:

pip install git+https://github.com/Lokeshwaran-M/jam-gpt.git

Note: jam-gpt==0.0.4 may not include fine-tuning, since that part is still under development and may contain bugs; please report any issues.
To install the stable release:

pip install jam-gpt

Refer to the docs and test-gptLM.ipynb for code examples.
from jam_gpt.tokenizer import Tokenizer
from jam_gpt import config
from jam_gpt import lm
from jam_gpt.model import Model
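# config and lm are used when configuring and training a model (see the docs);
# this inference-only example needs just Tokenizer and Model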
md_name = "md-name"
tok = Tokenizer()
tok.get_encoding(md_name)
# model initialization
model = Model()
# load a pretrained model
model.load_model(md_name)
# generate text using the model
pmt = tok.encode("user prompt")
res = tok.decode(model.generate(pmt))
print(res)
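Under the hood, model.generate follows the standard autoregressive GPT decoding loop: the model turns the token ids seen so far into a probability distribution over the next token, one token is sampled, appended, and the loop repeats. The following is a minimal, self-contained sketch of that principle with a dummy uniform model standing in for the transformer; it is an illustration of the idea, not jam-gpt's actual implementation.

import random

VOCAB_SIZE = 8  # toy vocabulary size for this sketch

def next_token_probs(token_ids):
    # a real GPT conditions on token_ids via a transformer forward pass;
    # this stub ignores them and returns a uniform distribution
    return [1.0 / VOCAB_SIZE] * VOCAB_SIZE

def generate(prompt_ids, max_new_tokens=10):
    ids = list(prompt_ids)
    for _ in range(max_new_tokens):
        probs = next_token_probs(ids)  # distribution over the next token
        next_id = random.choices(range(VOCAB_SIZE), weights=probs)[0]  # sample one token
        ids.append(next_id)  # append it and feed the longer sequence back in
    return ids

print(generate([0, 1, 2]))  # the prompt ids followed by 10 sampled ids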
The jam-gpt docs give the complete usage and explanation of the jam-gpt library:

1 Setup
2 Collecting data
3 Tokenization (see the conceptual sketch after this list)
4 Configuration
5 Language Model (LM, Model)
6 Model fine-tuning
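Tokenization (step 3 above) is what tok.encode and tok.decode do: map text to the integer ids the model consumes, and map generated ids back to text. As a conceptual sketch of the idea, here is a minimal character-level tokenizer; jam-gpt's Tokenizer manages its own encoding per model name (tok.get_encoding(md_name) above), so treat this only as an illustration.

class CharTokenizer:
    # minimal character-level tokenizer: one id per unique character

    def __init__(self, text):
        chars = sorted(set(text))  # vocabulary derived from the training text
        self.stoi = {ch: i for i, ch in enumerate(chars)}  # char -> id
        self.itos = {i: ch for i, ch in enumerate(chars)}  # id -> char

    def encode(self, s):
        return [self.stoi[ch] for ch in s]  # text -> list of ids

    def decode(self, ids):
        return "".join(self.itos[i] for i in ids)  # list of ids -> text

tok = CharTokenizer("hello world")
ids = tok.encode("hello")
print(ids)              # [3, 2, 4, 4, 5]
print(tok.decode(ids))  # hello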
For contribution guidelines and the terms and conditions of contributing, refer to jam-contribution; by raising a PR you accept those terms and conditions.
Any form of contribution is accepted here.
Submitting:
Issues
Pull requests
Feature requests
Bug reports
Documentation