Quickstart
Levia is still in active development. You are encouraged to submit any issues or bugs you find here.
Levia's source code is maintained in the main branch at https://github.com/Levia-is-us/Levia-us.
Prerequisites
Python 3.11 or higher
Virtual environment tool (venv, conda, etc.)
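To confirm that your interpreter meets the version requirement, here is a quick illustrative check:

import sys

# Fails fast if the interpreter is older than 3.11
assert sys.version_info >= (3, 11), f"Python 3.11+ required, found {sys.version}"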
Installation
Clone the repository:
git clone https://github.com/Levia-is-us/Levia-us.git levia-protocol
cd levia-protocol
Create and activate a virtual environment:
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
Install dependencies:
pip install -r requirements.txt
Configuration
Create a .env file in the root directory with the following required environment variables:
# Selected Model for running Levia_engine
# Supported models are listed in engine/llm_provider/models.json
MODEL_NAME=your_model_name_from_llm_provider
# OpenAI Configuration
OPENAI_API_KEY=your_openai_api_key
OPENAI_BASE_URL=your_openai_base_url
# Azure OpenAI Configuration (if using Azure)
AZURE_OPENAI_API_KEY=your_azure_openai_api_key
AZURE_OPENAI_BASE_URL=your_azure_endpoint
# DeepSeek Configuration (if using DeepSeek)
DEEPSEEK_API_KEY=your_deepseek_api_key
DEEPSEEK_BASE_URL=your_deepseek_base_url
# Claude Configuration (if using Claude)
ANTHROPIC_API_KEY=your_anthropic_api_key
# Pinecone Configuration (other vector database support is coming soon)
PINECONE_API_KEY=your_pinecone_api_key
PINECONE_HOST=your_pinecone_host
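Once the file is in place, you can sanity-check that the variables are being picked up. This is a minimal sketch assuming the common python-dotenv package; Levia's own loading code may differ:

# a minimal sketch, assuming python-dotenv; Levia's loader may differ
import os
from dotenv import load_dotenv

load_dotenv()  # copies variables from .env into the process environment
model_name = os.getenv("MODEL_NAME")
if not model_name:
    raise RuntimeError("MODEL_NAME is not set; check your .env file")
print(f"Configured model: {model_name}")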
Running the Application
Start the main application:
python main.py
The application will load the available tools from the tools/ directory and start an interactive chat session.
Core Features
Tool Integration
The engine automatically scans and loads tools from the tools/ directory. Here's how it works:
import os

# ToolRegistry and ToolCaller are defined elsewhere in the engine
def init_tools():
    """Initialize tool registry and caller"""
    registry = ToolRegistry()
    project_root = os.path.dirname(
        os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
    )
    tools_dir = os.path.join(project_root, "tools")
    print(f"Scanning tools from: {tools_dir}")
    registry.scan_directory(tools_dir)
    return ToolCaller(registry)
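A minimal usage sketch follows. The call_tool method name and its arguments are hypothetical, used here only for illustration; consult the ToolCaller class for the actual invocation interface.

caller = init_tools()  # scans the tools/ directory and wraps the registry
# `call_tool` and its signature are hypothetical placeholders
result = caller.call_tool("web_search", {"query": "Levia protocol"})
print(result)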
Memory Management
The engine includes short-term memory capabilities for maintaining context during conversations:
class ContextStore:
    def __init__(self, max_length=5):
        """
        Initialize the context store.
        :param max_length: Maximum length of context history to maintain
        """
        self.max_length = max_length
        self.history = []

    def add(self, user_input, model_output):
        """
        Add new conversation to history.
        :param user_input: User input
        :param model_output: Model output
        """
        self.history.append({"user": user_input, "model": model_output})
        # If history exceeds max length, remove oldest entry
        if len(self.history) > self.max_length:
            self.history.pop(0)

    def get_context(self):
        """
        Get current conversation context formatted as string.
        :return: Current conversation context
        """
        context = ""
        for exchange in self.history:
            context += f"User: {exchange['user']}\n"
            context += f"Model: {exchange['model']}\n"
        return context

    def clear(self):
        """
        Clear all history.
        """
        self.history = []
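For example, with max_length=2 the oldest exchange is evicted as soon as a third one is added:

store = ContextStore(max_length=2)
store.add("Hello", "Hi there!")
store.add("What is Levia?", "An agent engine.")
store.add("Thanks", "You're welcome.")  # "Hello" is evicted here
print(store.get_context())
# User: What is Levia?
# Model: An agent engine.
# User: Thanks
# Model: You're welcome.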
Stream Processing
The engine supports multiple output streams including HTTP, WebSocket, and local file logging:
class Stream:
    """
    Stream class that manages multiple output streams.
    Supports HTTP, local file, and WebSocket output streams.
    """

    def __init__(self, stream_type="local"):
        """
        Initialize Stream with specified stream type.

        Args:
            stream_type (str): Type of stream to initialize ("http", "local", or "websocket")
        """
        self.streams = []
        if stream_type == "http":
            self.add_stream(HTTPStream("http://localhost:8000"))
        elif stream_type == "local":
            self.add_stream(LocalStream())
        elif stream_type == "websocket":
            self.add_stream(WebsocketStream("ws://localhost:8765"))
        else:
            raise ValueError(f"Invalid stream type: {stream_type}")
        # Always add log stream as secondary output
        self.add_stream(LogStream())
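For example, the default local type attaches a LocalStream plus the secondary LogStream, while an unsupported type raises ValueError. This assumes add_stream simply appends to self.streams:

stream = Stream(stream_type="local")
print(len(stream.streams))  # 2: LocalStream plus the LogStream

try:
    Stream(stream_type="udp")  # not a supported type
except ValueError as err:
    print(err)  # Invalid stream type: udp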
Testing
Run tests using pytest:
pytest test/
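For example, a small test for the ContextStore above could live under test/. The import path below is hypothetical; adjust it to wherever ContextStore is actually defined:

# test/test_context_store.py
from engine.memory.context_store import ContextStore  # hypothetical import path

def test_history_is_bounded():
    store = ContextStore(max_length=2)
    store.add("a", "1")
    store.add("b", "2")
    store.add("c", "3")
    assert len(store.history) == 2  # oldest entry was evicted
    assert store.history[0]["user"] == "b"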