Quickstart
Levia is still under active development. If you find issues or bugs, please report them in the repository's issue tracker.
Levia's main repository lives in the main branch at https://github.com/Levia-is-us/Levia-us
Prerequisites
Python 3.11 or higher
Virtual environment tool (venv, conda, etc.)
Installation
Clone the repository:
git clone https://github.com/Levia-is-us/Levia-us.git levia-protocol
cd levia-protocol
Create and activate a virtual environment:
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
Install dependencies:
pip install -r requirements.txt
Configuration
Create a .env file in the root directory with the following required environment variables:
# Selected Model for running Levia_engine
# Supported models are listed in engine/llm_provider/models.json
MODEL_NAME=your_model_name_from_llm_provider
# OpenAI Configuration
OPENAI_API_KEY=your_openai_api_key
OPENAI_BASE_URL=your_openai_base_url
# Azure OpenAI Configuration (if using Azure)
AZURE_OPENAI_API_KEY=your_azure_openai_api_key
AZURE_OPENAI_BASE_URL=your_azure_endpoint
# DeepSeek Configuration (if using DeepSeek)
DEEPSEEK_API_KEY=your_deepseek_api_key
DEEPSEEK_BASE_URL=your_deepseek_base_url
# Claude Configuration (if using Claude)
ANTHROPIC_API_KEY=your_anthropic_api_key
# Pinecone Configuration (other vector database support is coming soon)
PINECONE_API_KEY=your_pinecone_api_key
PINECONE_HOST=your_pinecone_host
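To confirm the configuration loads before starting the engine, a quick sanity check like the one below can help. This is a sketch only; it assumes the python-dotenv package is available, which this guide does not confirm:

```python
# Sanity check only: assumes the python-dotenv package is installed.
import os

from dotenv import load_dotenv

load_dotenv()  # read .env from the current working directory
for var in ("MODEL_NAME", "PINECONE_API_KEY", "PINECONE_HOST"):
    if not os.getenv(var):
        raise RuntimeError(f"Missing required environment variable: {var}")
```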
Running the Application
Start the main application:
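Assuming the entry point is main.py in the repository root (check the repository if yours differs):

python main.py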
The application will initialize with available tools from the tools/ directory and start an interactive chat session.
Core Features
Tool Integration
The engine automatically scans and loads tools from the tools/ directory. Here's how it works:
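In outline, the loader imports every module it finds under tools/ and registers the callables they expose. The sketch below illustrates that idea with hypothetical names (load_tools, a per-module run function); it is not the engine's actual API:

```python
# Illustrative sketch of tool discovery; names here are hypothetical.
import importlib
import pkgutil


def load_tools(package_name: str = "tools") -> dict:
    """Import every module in the tools/ package and collect its entry points."""
    tools = {}
    package = importlib.import_module(package_name)
    for module_info in pkgutil.iter_modules(package.__path__):
        module = importlib.import_module(f"{package_name}.{module_info.name}")
        # Assume each tool module exposes a callable named `run`.
        if hasattr(module, "run"):
            tools[module_info.name] = module.run
    return tools
```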
Memory Management
The engine includes short-term memory capabilities for maintaining context during conversations:
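A minimal sketch of what such a short-term memory can look like, assuming a simple bounded buffer of conversation turns (illustrative only, not the engine's implementation):

```python
# Illustrative sketch: a bounded buffer of recent conversation turns.
from collections import deque


class ShortTermMemory:
    """Keep only the most recent turns so prompts stay within the model's context window."""

    def __init__(self, max_turns: int = 20):
        self._turns = deque(maxlen=max_turns)  # oldest turns are evicted automatically

    def add(self, role: str, content: str) -> None:
        self._turns.append({"role": role, "content": content})

    def context(self) -> list[dict]:
        return list(self._turns)
```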
Stream Processing
The engine supports multiple output streams including HTTP, WebSocket, and local file logging:
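One common way to structure this is a broadcaster that fans each output chunk out to every registered sink. The sketch below shows that pattern under assumed names; it is not the engine's actual classes:

```python
# Illustrative sketch: fan output chunks out to several sinks (HTTP, WebSocket, file, ...).
from typing import Protocol


class OutputStream(Protocol):
    def write(self, chunk: str) -> None: ...


class FileLogStream:
    """Append every chunk to a local log file."""

    def __init__(self, path: str):
        self._file = open(path, "a", encoding="utf-8")

    def write(self, chunk: str) -> None:
        self._file.write(chunk)
        self._file.flush()


class StreamBroadcaster:
    """Forward each chunk to every registered stream."""

    def __init__(self, streams: list[OutputStream]):
        self._streams = streams

    def write(self, chunk: str) -> None:
        for stream in self._streams:
            stream.write(chunk)
```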
Testing
Run tests using pytest:
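pytest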