Prerequisites
Before starting with OpenViking, ensure your environment meets the following requirements:
- Python: version 3.10 or higher
- Network access: a stable connection for dependencies and model services
- AI model access: VLM and embedding model API credentials
- Operating system: Linux, macOS, or Windows

Advanced requirements (only for building from source):
- Go 1.22+ for AGFS components
- GCC 9+ or Clang 11+ for C++ extensions
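Before installing, you can confirm your interpreter meets the Python 3.10+ requirement. A minimal check (the helper name is illustrative, not part of OpenViking):

```python
import sys

def meets_minimum(version=(3, 10)):
    """Return True if the running interpreter satisfies the stated minimum."""
    return sys.version_info[:2] >= version

if __name__ == "__main__":
    print("Python version OK" if meets_minimum() else "Python 3.10+ required")
```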
Installation
Install OpenViking
Install the Python package using pip:

```bash
pip install openviking --upgrade --force-reinstall
```

Or use uv for a faster installation:

```bash
uv pip install openviking --upgrade
```
Verify Installation
Check that OpenViking is installed correctly:

```bash
python -c "import openviking; print(openviking.__version__)"
```
Optional: Rust CLI
For advanced users, OpenViking provides a high-performance Rust CLI:
```bash
# Quick install
curl -fsSL https://raw.githubusercontent.com/volcengine/OpenViking/main/crates/ov_cli/install.sh | bash

# Or build from source
cargo install --git https://github.com/volcengine/OpenViking ov_cli
```
Model Configuration
OpenViking requires two types of models:
- VLM model: for image and content understanding
- Embedding model: for vectorization and semantic retrieval
Supported VLM Providers
OpenViking supports three VLM providers: Volcengine (Doubao), OpenAI, and LiteLLM.

Volcengine (Doubao)
Recommended: cost-effective with good performance, and new users receive a free quota.

```json
{
  "vlm": {
    "provider": "volcengine",
    "model": "doubao-seed-2-0-pro-260215",
    "api_key": "your-api-key",
    "api_base": "https://ark.cn-beijing.volces.com/api/v3"
  }
}
```

You can also use endpoint IDs:

```json
{
  "vlm": {
    "provider": "volcengine",
    "model": "ep-20241220174930-xxxxx",
    "api_key": "your-api-key",
    "api_base": "https://ark.cn-beijing.volces.com/api/v3"
  }
}
```

OpenAI
Use OpenAI’s official API or an OpenAI-compatible endpoint:

```json
{
  "vlm": {
    "provider": "openai",
    "model": "gpt-4o",
    "api_key": "your-api-key",
    "api_base": "https://api.openai.com/v1"
  }
}
```

LiteLLM
Unified access to many models, including Anthropic, DeepSeek, Gemini, Qwen, vLLM, Ollama, and more.

Anthropic (Claude):

```json
{
  "vlm": {
    "provider": "litellm",
    "model": "claude-3-5-sonnet-20240620",
    "api_key": "your-anthropic-api-key"
  }
}
```

Qwen (DashScope):

```json
{
  "vlm": {
    "provider": "litellm",
    "model": "dashscope/qwen-turbo",
    "api_key": "your-dashscope-api-key",
    "api_base": "https://dashscope.aliyuncs.com/compatible-mode/v1"
  }
}
```

Local models (Ollama):

```json
{
  "vlm": {
    "provider": "litellm",
    "model": "ollama/llama3.1",
    "api_base": "http://localhost:11434"
  }
}
```
The system auto-detects common models such as `claude-*`, `deepseek-*`, `gemini-*`, and `ollama/*`. See the LiteLLM Providers documentation for the complete list.
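The prefix matching described above can be pictured with a small sketch. The prefix list and helper below are illustrative only, not OpenViking's actual detection code:

```python
# Illustrative sketch of prefix-based model detection (not OpenViking's code).
AUTO_DETECT_PREFIXES = ("claude-", "deepseek-", "gemini-", "ollama/")

def is_auto_detected(model: str) -> bool:
    """Return True if a model name matches one of the common LiteLLM prefixes."""
    return model.startswith(AUTO_DETECT_PREFIXES)
```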
Configuration File
Create Configuration Directory
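The directory can be created with a single command, assuming the default `~/.openviking` location used in the next step:

```shell
# Create the default OpenViking configuration directory (idempotent).
mkdir -p ~/.openviking
```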
Create Configuration File
Create ~/.openviking/ov.conf with your model settings:

```json
{
  "storage": {
    "workspace": "/home/your-name/openviking_workspace"
  },
  "log": {
    "level": "INFO",
    "output": "stdout"
  },
  "embedding": {
    "dense": {
      "api_base": "https://ark.cn-beijing.volces.com/api/v3",
      "api_key": "your-embedding-api-key",
      "provider": "volcengine",
      "dimension": 1024,
      "model": "doubao-embedding-vision-250615"
    },
    "max_concurrent": 10
  },
  "vlm": {
    "api_base": "https://ark.cn-beijing.volces.com/api/v3",
    "api_key": "your-vlm-api-key",
    "provider": "volcengine",
    "model": "doubao-seed-2-0-pro-260215",
    "max_concurrent": 100
  }
}
```
Replace `your-name`, `your-embedding-api-key`, and `your-vlm-api-key` with your actual values.
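A quick sanity check before starting the client can catch a malformed ov.conf early. This sketch verifies the file parses as JSON and contains the top-level sections shown above; the helper name is ours, not part of OpenViking's API:

```python
import json
from pathlib import Path

# Top-level sections used in the example configuration above.
REQUIRED_SECTIONS = {"storage", "embedding", "vlm"}

def missing_sections(conf_path):
    """Parse an ov.conf file and return any missing top-level sections."""
    conf = json.loads(Path(conf_path).read_text())
    return sorted(REQUIRED_SECTIONS - conf.keys())
```

An empty list means every required section is present; `json.loads` raises an error if the file is not valid JSON.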
Set Environment Variable (Optional)
If your config file is not at the default location, point the OPENVIKING_CONFIG_FILE environment variable at it. On Linux/macOS:

```bash
export OPENVIKING_CONFIG_FILE=~/.openviking/ov.conf
```
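On Windows, plausible equivalents are as follows (these exact commands are our assumption, not taken from this page):

```powershell
# PowerShell
$env:OPENVIKING_CONFIG_FILE = "$HOME\.openviking\ov.conf"
```

```bat
:: CMD
set OPENVIKING_CONFIG_FILE=%USERPROFILE%\.openviking\ov.conf
```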
Your First Example
Let’s create a complete example that demonstrates OpenViking’s core features.
Create Python Script
Create a file named example.py:

```python
import openviking as ov

# Initialize OpenViking client with data directory
client = ov.OpenViking(path="./data")

try:
    # Initialize the client
    client.initialize()
    print("✓ Client initialized")

    # Add resource (supports URL, file, or directory)
    add_result = client.add_resource(
        path="https://raw.githubusercontent.com/volcengine/OpenViking/refs/heads/main/README.md"
    )
    root_uri = add_result['root_uri']
    print(f"✓ Resource added: {root_uri}")

    # Explore the resource tree structure
    ls_result = client.ls(root_uri)
    print(f"\n📁 Directory structure:\n{ls_result}")

    # Use glob to find markdown files
    glob_result = client.glob(pattern="**/*.md", uri=root_uri)
    if glob_result['matches']:
        first_file = glob_result['matches'][0]
        content = client.read(first_file)
        print(f"\n📄 Content preview ({first_file}):\n{content[:200]}...")

    # Wait for semantic processing to complete
    print("\n⏳ Waiting for semantic processing...")
    client.wait_processed()
    print("✓ Processing complete")

    # Get abstract and overview of the resource (L0 and L1 layers)
    abstract = client.abstract(root_uri)
    overview = client.overview(root_uri)
    print(f"\n📝 L0 Abstract:\n{abstract}")
    print(f"\n📋 L1 Overview:\n{overview[:500]}...")

    # Perform semantic search
    print("\n🔍 Semantic search: 'what is openviking'")
    results = client.find("what is openviking", target_uri=root_uri)
    print("\nSearch results:")
    for r in results.resources:
        print(f"  • {r.uri} (score: {r.score:.4f})")

    # Close the client
    client.close()
    print("\n✓ Client closed successfully")
except Exception as e:
    print(f"❌ Error: {e}")
    import traceback
    traceback.print_exc()
```
Expected Output
You should see output similar to:

```text
✓ Client initialized
✓ Resource added: viking://resources/github.com/volcengine/OpenViking/README.md

📁 Directory structure:
[...]

📄 Content preview:
<div align="center">
  <picture>
    <img alt="OpenViking" src="docs/images/banner.jpg" width="100%" height="auto">
  </picture>...

⏳ Waiting for semantic processing...
✓ Processing complete

📝 L0 Abstract:
OpenViking is an open-source context database for AI Agents...

📋 L1 Overview:
# Overview
OpenViking unifies the management of context through a file system paradigm...

🔍 Semantic search: 'what is openviking'

Search results:
  • viking://resources/github.com/volcengine/OpenViking/README.md (score: 0.8523)

✓ Client closed successfully
```
Congratulations! You’ve successfully:
- Added a resource to OpenViking
- Explored the filesystem structure
- Retrieved hierarchical context (L0/L1 layers)
- Performed semantic search
Key API Operations
Here are the essential operations you’ll use with OpenViking:
Resource Management
```python
# Add resources (URL, local file, or directory)
client.add_resource(path="https://example.com/doc.pdf")
client.add_resource(path="/path/to/local/file.md")
client.add_resource(path="/path/to/directory/")

# Wait for semantic processing
client.wait_processed()

# Remove resources
client.rm("viking://resources/example/", recursive=True)
```
Filesystem Operations
```python
# List directory contents
client.ls("viking://resources/")

# Read file content
content = client.read("viking://resources/doc.md")

# Create directory
client.mkdir("viking://agent/skills/custom/")

# Find files by pattern
matches = client.glob("**/*.py", uri="viking://resources/project/")

# Tree view
tree = client.tree("viking://resources/", max_depth=2)
```
Hierarchical Context Access
```python
# Get L0 abstract (~100 tokens)
abstract = client.abstract("viking://resources/project/")

# Get L1 overview (~2k tokens)
overview = client.overview("viking://resources/project/")

# Get L2 full content
content = client.read("viking://resources/project/docs/api.md")
```
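The three layers trade token cost for detail: roughly ~100 tokens at L0, ~2k at L1, and full content at L2. A common pattern is to request the cheapest layer that fits your remaining context budget. The helper below is a sketch of that idea; the thresholds and name are ours, not part of OpenViking's API:

```python
def pick_layer(token_budget: int) -> str:
    """Choose the cheapest context layer fitting a rough token budget."""
    if token_budget < 100:
        raise ValueError("budget too small for even an L0 abstract")
    if token_budget < 2000:
        return "abstract"   # L0: ~100-token summary
    if token_budget < 20000:
        return "overview"   # L1: ~2k-token structured overview
    return "read"           # L2: full content
```

You would then call the corresponding client method (`client.abstract`, `client.overview`, or `client.read`) on the chosen URI.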
Semantic Search
```python
# Search across all resources
results = client.find("user authentication", limit=10)

# Search within a specific URI
results = client.find(
    query="API endpoints",
    target_uri="viking://resources/project/",
    limit=5,
)

# Access results
for resource in results.resources:
    print(f"{resource.uri} - Score: {resource.score}")
    print(f"Content: {resource.content[:200]}...")
```
Session Management
```python
# Create a session
session = client.create_session()
session_id = session["session_id"]

# Add messages
client.add_message(session_id, "user", "How do I configure OpenViking?")
client.add_message(session_id, "assistant", "To configure OpenViking...")

# Get session info
info = client.get_session(session_id)

# List all sessions
sessions = client.list_sessions()

# Commit session (extract memories)
client.commit_session(session_id)
```
Next Steps
- Server Deployment: learn how to deploy OpenViking as a production HTTP service
- API Reference: explore the complete API documentation
- Configuration Guide: advanced configuration options and model providers
- Examples: browse code examples and integration patterns
Troubleshooting
Import Error: Cannot find module 'openviking'
Make sure OpenViking is installed in your current Python environment:

```bash
pip install openviking --upgrade
python -c "import openviking; print(openviking.__version__)"
```
Configuration file not found
Ensure your config file exists at ~/.openviking/ov.conf, or set the OPENVIKING_CONFIG_FILE environment variable:

```bash
export OPENVIKING_CONFIG_FILE=/path/to/your/ov.conf
```
API authentication errors
Verify that the `api_key` and `api_base` values in your configuration are correct for the chosen provider, and that the key has access to the configured model.
Semantic processing takes too long
Semantic processing time depends on:
- Resource size and complexity
- Model API response time
- Network latency

You can check processing status:

```python
status = client.wait_processed()
print(status)
```