fastLLaMa is an experimental high-performance framework built to tackle the challenges of deploying large language models (LLMs) in production.

It provides a user-friendly Python interface to the C++ library llama.cpp, letting developers create custom workflows, implement adaptable logging, and seamlessly switch contexts between sessions. The framework aims to make running LLMs at scale more efficient, and ongoing development focuses on features such as optimized cold-boot times, INT4 support for NVIDIA GPUs, model artifact management, and support for multiple programming languages.
___ __ _ _ __ __
| | '___ ___ _| |_ | | | | ___ | ___
| |-<_> |<_-< | | | |_ | |_ <_> || |<_> |
|_| <___|/__/ |_| |___||___|<___||_|_|_|<___|
.+*+-.
-%#--
:=***%*++=.
:+=+**####%+
++=+*%#
.*+++==-
::--:. .**++=::
#%##*++=...... =*+==-::
.@@@*@%*==-==-==---:::::------::==*+==--::
%@@@@+--====+===---=---==+=======+++----:
.%@@*++*##***+===-=====++++++*++*+====++.
:@@%*##%@@%#*%#+==++++++=++***==-=+==+=-
%@%%%%%@%#+=*%*##%%%@###**++++==--==++
#@%%@%@@##**%@@@%#%%%%**++*++=====-=*-
-@@@@@@@%*#%@@@@@@@%%%%#+*%#++++++=*+.
+@@@@@%%*-#@@@@@@@@@@@%%@%**#*#+=-.
#%%###%: ..+#%@@@@%%@@@@%#+-
:***#*- ... *@@@%*+:
=***= -@%##**.
:#*++ -@#-:*=.
=##- .%*..##
+*- *: +-
:+- :+ =.
=-. *+ =-
:-:- =-- :::
Model loading can use `aio_read` or `io_uring` under the hood.

Requirements:

CMake

For Linux:

```bash
sudo apt-get -y install cmake
```

For OS X:

```bash
brew install cmake
```

For Windows:

Download the cmake-*.exe installer from the download page and run it.

GCC 11 or greater

Minimum C++17

Python 3.x
Install fastLLaMa using pip:

```bash
pip install git+https://github.com/PotatoSpudowski/fastLLaMa.git@main
```

To import fastLLaMa and load a model, just run:

```python
from fastllama import Model

MODEL_PATH = "./models/7B/ggml-model-q4_0.bin"

model = Model(
    path=MODEL_PATH,    # path to model
    num_threads=8,      # number of threads to use
    n_ctx=512,          # context size of model
    last_n_size=64,     # size of last n tokens (used for repetition penalty) (Optional)
    seed=0,             # seed for random number generator (Optional)
    n_batch=128,        # batch size (Optional)
    use_mmap=False,     # use mmap to load model (Optional)
)
```

Ingest a system prompt to set up the conversation:

```python
prompt = """Transcript of a dialog, where the User interacts with an Assistant named Bob. Bob is helpful, kind, honest, good at writing, and never fails to answer the User's requests immediately and with precision.
User: Hello, Bob.
Bob: Hello. How may I help you today?
User: Please tell me the largest city in Europe.
Bob: Sure. The largest city in Europe is Moscow, the capital of Russia.
User: """
res = model.ingest(prompt, is_system_prompt=True)  # ingest model with prompt
```

Define a streaming callback and generate a response:

```python
def stream_token(x: str) -> None:
    """
    This function is called by the library to stream tokens
    """
    print(x, end='', flush=True)

res = model.generate(
    num_tokens=100,
    top_p=0.95,                  # top p sampling (Optional)
    temp=0.8,                    # temperature (Optional)
    repeat_penalty=1.0,          # repetition penalty (Optional)
    streaming_fn=stream_token,   # streaming function
    stop_words=["User:", "\n"],  # stop generation when these words are encountered (Optional)
)
```
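The `ingest`/`generate` pair above can be combined into a simple interactive loop. This is only a sketch of one possible wiring; the loop structure, prompt formatting, and token budget are illustrative and not part of the library:

```python
# Illustrative chat loop built from the documented ingest/generate calls.
while True:
    user_input = input("\nUser: ")
    if user_input.strip().lower() in ("exit", "quit"):
        break

    # Append the user's turn to the model's context...
    model.ingest(f"{user_input}\nBob:")

    # ...and stream Bob's reply until a stop word appears.
    model.generate(
        num_tokens=200,
        temp=0.8,
        streaming_fn=stream_token,
        stop_words=["User:"],
    )
```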
To load the model in parallel, pass `load_parallel=True`:

```python
model = Model(
    path=MODEL_PATH,    # path to model
    num_threads=8,      # number of threads to use
    n_ctx=512,          # context size of model
    last_n_size=64,     # size of last n tokens (used for repetition penalty) (Optional)
    seed=0,             # seed for random number generator (Optional)
    n_batch=128,        # batch size (Optional)
    load_parallel=True,
)
```
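One simple way to see what `load_parallel` buys you is to time the constructor. A minimal illustrative sketch; the timing harness is not part of fastLLaMa:

```python
import time

start = time.perf_counter()
model = Model(
    path=MODEL_PATH,
    num_threads=8,
    n_ctx=512,
    load_parallel=True,  # flip to False to compare with the default loader
)
print(f"Model loaded in {time.perf_counter() - start:.2f}s")
```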
To cache a session, you can use the `save_state` method:

```python
res = model.save_state("./models/fast_llama.bin")
```

To load a session, use the `load_state` method:

```python
res = model.load_state("./models/fast_llama.bin")
```

To reset a session, use the `reset` method:

```python
model.reset()
```
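Together, `save_state`, `load_state`, and `reset` let you switch contexts between sessions, as mentioned in the introduction. A minimal sketch; the session file names and the second prompt are hypothetical:

```python
# Persist the current "Bob" conversation and start a fresh one.
model.save_state("./models/bob_session.bin")
model.reset()
model.ingest("Transcript of a dialog about cooking.\nUser: ", is_system_prompt=True)

# ...chat about cooking, save that session too, then jump back to Bob.
model.save_state("./models/cooking_session.bin")
model.load_state("./models/bob_session.bin")
```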
To attach a LoRA adapter at runtime, use the `attach_lora` method:

```python
LORA_ADAPTER_PATH = "./models/ALPACA-7B-ADAPTER/ggml-adapter-model.bin"

model.attach_lora(LORA_ADAPTER_PATH)
```

Note: It is a good idea to reset the model state after attaching a LoRA adapter.

To detach a LoRA adapter at runtime, use the `detach_lora` method:

```python
model.detach_lora()
```

To calculate perplexity, use the `perplexity` method:
```python
with open("test.txt", "r") as f:
    data = f.read(8000)

total_perplexity = model.perplexity(data)
print(f"Total Perplexity: {total_perplexity:.4f}")
```

To get the embeddings of the model, use the `get_embeddings` method:

```python
embeddings = model.get_embeddings()
```
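If you want a concrete use for the embeddings, here is a rough sketch of comparing two prompts by cosine similarity. It assumes that `get_embeddings` returns a flat sequence of floats describing the most recently ingested text; that is an assumption to verify against examples/python/, not a documented guarantee:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    model.reset()       # start from a clean context
    model.ingest(text)  # evaluate the text
    # Assumption: the returned values describe the text just ingested.
    return np.array(model.get_embeddings(), dtype=np.float32)

a = embed("The cat sat on the mat.")
b = embed("A kitten rested on the rug.")
cosine = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
print(f"Cosine similarity: {cosine:.3f}")
```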
To get the logits of the model, use the `get_logits` method:

```python
logits = model.get_logits()
```

To use a custom logger, subclass `Logger`:

```python
from fastLLaMa import Logger

class MyLogger(Logger):
    def __init__(self):
        super().__init__()
        self.file = open("logs.log", "w")

    def log_info(self, func_name: str, message: str) -> None:
        # Modify this to do whatever you want when you see info logs
        print(f"[Info]: Func('{func_name}') {message}", flush=True, end='', file=self.file)

    def log_err(self, func_name: str, message: str) -> None:
        # Modify this to do whatever you want when you see error logs
        print(f"[Error]: Func('{func_name}') {message}", flush=True, end='', file=self.file)

    def log_warn(self, func_name: str, message: str) -> None:
        # Modify this to do whatever you want when you see warning logs
        print(f"[Warn]: Func('{func_name}') {message}", flush=True, end='', file=self.file)
```

For more clarity, check the examples/python/ folder.
```bash
# obtain the original LLaMA model weights and place them in ./models
ls ./models
65B 30B 13B 7B tokenizer_checklist.chk tokenizer.model

# convert the 7B model to ggml FP16 format
# python [PythonFile] [ModelPath] [Floattype] [Vocab Only] [SplitType]
python3 scripts/convert-pth-to-ggml.py models/7B/ 1 0

# quantize the model to 4-bits
./build/src/quantize models/7B/ggml-model-f16.bin models/7B/ggml-model-q4_0.bin 2

# run the inference
# Run the scripts from the root dir of the project for now!
python ./examples/python/example.py
```
```bash
# Before running this command
# You need to provide the HF model paths here
python ./scripts/export-from-huggingface.py

# Alternatively you can just download the ggml models from huggingface directly and run them!
python3 ./scripts/convert-pth-to-ggml.py models/ALPACA-LORA-7B 1 0

./build/src/quantize models/ALPACA-LORA-7B/ggml-model-f16.bin models/ALPACA-LORA-7B/alpaca-lora-q4_0.bin 2

python ./examples/python/example-alpaca.py
```
```bash
# Download lora adapters and paste them inside the models folder
# https://huggingface.co/tloen/alpaca-lora-7b

python scripts/convert-lora-to-ggml.py models/ALPACA-7B-ADAPTER/ -t fp32
# Change -t to fp16 to use fp16 weights

# In order to use LoRA adapters without caching, pass the --no-cache flag
# - Only supported for fp32 adapter weights
python examples/python/example-lora-adapter.py
# Make sure to set paths correctly for the base model and adapter inside the example

# Commands:
# load_lora: Attaches the adapter to the base model
# unload_lora: Detaches the adapter (detach for fp16 is yet to be added!)
# reset: Resets the model state
```

To run the WebSocket server and the WebUI, follow the instructions on their respective branches.
As the models are currently fully loaded into memory, you will need enough disk space to store them and enough RAM to load them. At the moment, memory and disk requirements are the same.
| Model | Original size | Quantized size (4-bit) |
|-------|---------------|------------------------|
| 7B    | 13 GB         | 3.9 GB                 |
| 13B   | 24 GB         | 7.8 GB                 |
| 30B   | 60 GB         | 19.5 GB                |
| 65B   | 120 GB        | 38.5 GB                |
Info: the runtime may require additional memory during inference!
(This depends on the hyperparameters used during model initialization.)
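For intuition about where that extra memory goes, the key/value cache alone grows with the context size chosen at initialization. A back-of-the-envelope sketch, using published LLaMA 7B architecture figures (32 layers, 4096-dimensional embeddings) that are assumptions here rather than values read from fastLLaMa:

```python
# Rough KV-cache estimate for a 7B model; all figures are approximate.
n_layers = 32        # transformer layers in LLaMA 7B
n_embd = 4096        # embedding width in LLaMA 7B
n_ctx = 512          # context size passed to Model(n_ctx=...)
bytes_per_value = 2  # assuming 16-bit cache entries

# Each layer keeps one key and one value vector per context position.
kv_cache_bytes = n_layers * n_ctx * 2 * n_embd * bytes_per_value
print(f"~{kv_cache_bytes / 2**20:.0f} MiB of extra memory for the KV cache")  # ~256 MiB
```

Doubling `n_ctx` doubles this estimate, which is why the extra memory depends on the values chosen at model initialization.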