Dabarqus is a stand-alone application that implements a complete RAG solution. It is designed to be easy to use and easy to integrate with your existing applications. Dabarqus includes a REST API, a command-line interface, and an admin dashboard.
If you're a developer, building a basic RAG solution is pretty straightforward: there are plenty of tutorials, how-tos, and Python code to reuse. But if you're deploying your RAG solution within a company, or to end-user PCs, you also have to solve some potentially tricky deployment and maintenance issues. That means shipping Python, a vector database, and the right embedding AI model, and possibly dealing with licensing challenges. Dabarqus was created to address these issues as a stand-alone, all-in-one solution with no dependencies. It's written in low-level C++ with built-in vector search, the flexibility to use whichever embedding AI model best fits your use case, and a REST API for easy integration into your own applications.
Dabarqus runs on CPU only, or can use NVIDIA CUDA for higher performance. The CUDA (a.k.a. NVIDIA cuBLAS) version requires an NVIDIA GPU with CUDA support, the NVIDIA driver, and the CUDA build of Dabarqus; the CPU version requires no additional software.
To install NVIDIA drivers on Ubuntu (if you have an NVIDIA GPU), run the following command:
sudo ubuntu-drivers install

Unzip the Dabarqus file into a folder
unzip Dabarqus-linux-DOWNLOADED_VERSION.zip
cd Dabarqus-linux-DOWNLOADED_VERSION
chmod +x ./bin/*
./bin/barq service install

Open a browser and go to http://localhost:6568/admin
For package file downloads, do the following:
Open a browser and go to http://localhost:6568/admin

For zip file downloads, do the following:
Unzip the Dabarqus file into a folder
unzip Dabarqus-linux-DOWNLOADED_VERSION.zip
cd Dabarqus-linux-DOWNLOADED_VERSION
./bin/barq service install

Open a browser and go to http://localhost:6568/admin
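Once the service is installed, you can also confirm it is running programmatically by calling the health endpoint listed in the REST API table further down. Here is a minimal Python sketch, assuming the default port 6568 and the third-party `requests` package (response payload shape may vary by version):

```python
import requests

# Dabarqus listens on localhost:6568 by default (the same port as the admin dashboard).
BASE_URL = "http://localhost:6568"

try:
    response = requests.get(f"{BASE_URL}/api/health", timeout=5)
    # Print the raw response; the exact payload format may differ between versions.
    print(response.status_code, response.text)
except requests.ConnectionError:
    print("Dabarqus service is not reachable. Check that the service was installed and started.")
```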
Dabarqus offers the following features:

- Ingest documents, databases, and APIs: Ingest diverse data sources like PDFs*, emails, and raw data.
- LLM-Style Prompting: Use simple, LLM-style prompts when speaking to your memory banks.
- REST API: Comprehensive control interface for downloading models, prompting semantic indexes, and even LLM inference.
- Multiple Semantic Indexes (Memory Banks): Group your data into separate semantic indexes (memory banks); a REST sketch follows this list.
- SDKs: Native SDKs in Python and JavaScript.
- LLM-Friendly Output: Produces LLM-ready output that works with ChatGPT, Ollama, and any other LLM provider.
- Admin Dashboard: Monitor performance, test memory banks, and make changes in an easy-to-use UI.
- Mac, Linux, and Windows Support: Runs natively with zero dependencies on all platforms: macOS (Intel or Metal), Linux, and Windows (CPU or GPU).
- LLM Inference: Chat with LLM models right through the Dabarqus API/SDKs.
*Dabarqus Professional Edition is required for email, messaging and API support.
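As noted in the feature list above, data is grouped into memory banks that can be inspected and activated over the REST API (see the endpoint table later in this document). A hedged Python sketch, assuming the default port and a memory bank named `documents` that you created earlier with `barq store`:

```python
import requests

BASE_URL = "http://localhost:6568"  # default Dabarqus port

# List the memory banks the service currently knows about.
banks = requests.get(f"{BASE_URL}/api/silk/memorybanks", timeout=10)
print(banks.json())  # response shape may differ between versions

# Activate a memory bank so subsequent queries run against it.
# "documents" is a placeholder name for a memory bank you have already stored.
requests.get(
    f"{BASE_URL}/api/silk/memorybank/activate",
    params={"memorybank": "documents"},
    timeout=10,
).raise_for_status()
```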
To install: barq service install
To uninstall: barq service uninstall
Usage: barq store --input-path <path to folder> --memory-bank "<memory bank name>"
Example: barq store --input-path C:\docs --memory-bank documents
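If you need to drive ingestion from application code rather than a terminal, one simple option is to invoke the same CLI command. A minimal Python sketch; the folder path and memory bank name are placeholders:

```python
import subprocess

# Placeholder values: point these at your own folder and memory bank name.
input_path = "/path/to/docs"
memory_bank = "documents"

# Equivalent to: barq store --input-path <path to folder> --memory-bank "<memory bank name>"
subprocess.run(
    ["barq", "store", "--input-path", input_path, "--memory-bank", memory_bank],
    check=True,  # raise if barq exits with a non-zero status
)
```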
Usage: barq retrieve --memory-bank "<memory bank name>"
barq retrieve --memory-bank documents

barq retrieve --memory-bank documents --query "Tell me about the documents" --query-limit 3
This will display three answers to the query from the 'documents' memory bank.

The Dabarqus REST API exposes the following endpoints:

| Method | Endpoint | Description | Parameters |
|---|---|---|---|
| GET | /health or /api/health | Check the health status of the service | None |
| GET | /admin/* | Serve the admin application | None |
| GET | /odobo/* | Serve the Odobo application | None |
| GET | /api/models | Retrieve available AI models | None |
| GET | /api/model/metadata | Get metadata for a specific model | modelRepo, filePath (optional) |
| GET | /api/downloads | Get information about downloaded items | modelRepo (optional), filePath (optional) |
| GET | /api/downloads/enqueue | Enqueue a new download | modelRepo, filePath |
| GET | /api/downloads/cancel | Cancel a download | modelRepo, filePath |
| GET | /api/downloads/remove | Remove a downloaded item | modelRepo, filePath |
| GET | /api/inference | Get information about inference items | alias (optional) |
| GET | /api/inference/start | Start an inference | alias, modelRepo, filePath, address (optional), port (optional), contextSize (optional), gpuLayers (optional), chatTemplate (optional) |
| GET | /api/inference/stop | Stop an inference | alias |
| GET | /api/inference/status | Get the status of an inference | alias (optional) |
| GET | /api/inference/reset | Reset an inference | alias |
| GET | /api/inference/restart | Restart the current inference | None |
| GET | /api/hardware or /api/hardwareinfo | Get hardware information | None |
| GET | /api/silk | Get memory status | None |
| GET | /api/silk/enable | Enable memories | None |
| GET | /api/silk/disable | Disable memories | None |
| GET | /api/silk/memorybanks | Get memory banks information | None |
| GET | /api/silk/memorybank/activate | Activate a memory bank | memorybank |
| GET | /api/silk/memorybank/deactivate | Deactivate a memory bank | memorybank, all |
| GET | /api/silk/query | Perform a semantic query | (Parameters handled by Silk retriever) |
| GET | /api/silk/health | Check the health of the Silk retriever | None |
| GET | /api/silk/model/metadata | Get model metadata from the Silk retriever | (Parameters handled by Silk retriever) |
| GET | /api/shutdown | Initiate server shutdown | None |
| POST | /api/utils/log | Write to log | JSON body with log details |
| POST | /api/silk/embedding | Get an embedding from the Silk retriever | (Parameters handled by Silk retriever) |
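To illustrate how the inference endpoints above fit together, here is a hedged Python sketch that starts a model, polls its status, and stops it, assuming parameters are passed as query-string values as the GET endpoints suggest. The alias, modelRepo, and filePath values are placeholders, not real model names; substitute a model you have actually downloaded through Dabarqus:

```python
import requests

BASE_URL = "http://localhost:6568"

# Placeholder model identifiers: replace with a model you have downloaded.
start_params = {
    "alias": "my-model",
    "modelRepo": "example-org/example-model",
    "filePath": "example-model.Q4_K_M.gguf",
}

# Start an inference instance for the chosen model.
requests.get(f"{BASE_URL}/api/inference/start", params=start_params, timeout=60).raise_for_status()

# Check its status (payload shape may vary by version).
status = requests.get(f"{BASE_URL}/api/inference/status", params={"alias": "my-model"}, timeout=10)
print(status.json())

# Stop the instance when finished.
requests.get(f"{BASE_URL}/api/inference/stop", params={"alias": "my-model"}, timeout=10)
```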
For example, to perform a semantic query with curl:

curl "http://localhost:6568/api/silk/query?q=Tell%20me%20about%20the%20documents&limit=3&memorybank=docs"

Examples of Dabarqus in action can be found in this repo under examples.
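For a programmatic equivalent of the curl call above, here is a minimal Python sketch; the response is printed as-is since its exact shape may vary by version:

```python
import requests

BASE_URL = "http://localhost:6568"

# Mirrors the curl example above: query the "docs" memory bank for up to 3 results.
response = requests.get(
    f"{BASE_URL}/api/silk/query",
    params={"q": "Tell me about the documents", "limit": 3, "memorybank": "docs"},
    timeout=30,
)
response.raise_for_status()
print(response.json())  # LLM-ready output that can be passed along to ChatGPT, Ollama, or another LLM provider
```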