System Metrics

GET /api/v1/observer/system
Get detailed metrics and status for all system components, including the queue system, VikingDB, and VLM token usage.
The observer API provides component-level monitoring for production deployments.

Authentication

Requires API key authentication via X-API-Key header.
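As a sketch of attaching this header programmatically, the key can be set once on a shared `requests` session so every call carries it (the URL and `"your-key"` below are placeholders, matching the curl examples):

```python
import requests

# Placeholder deployment URL and key; substitute your own values.
BASE_URL = "http://localhost:1933"

session = requests.Session()
session.headers["X-API-Key"] = "your-key"  # sent with every request on this session

# Prepare (without sending) a request to confirm the header is applied.
prepared = session.prepare_request(
    requests.Request("GET", f"{BASE_URL}/api/v1/observer/system")
)
print(prepared.headers["X-API-Key"])  # your-key
```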

Available Metrics

Queue System

Get status of embedding and semantic processing queues.
curl -X GET http://localhost:1933/api/v1/observer/queue \
  -H "X-API-Key: your-key"

VikingDB Status

Get VikingDB collection and vector count information.
curl -X GET http://localhost:1933/api/v1/observer/vikingdb \
  -H "X-API-Key: your-key"

VLM Token Usage

Get Vision Language Model token usage statistics.
curl -X GET http://localhost:1933/api/v1/observer/vlm \
  -H "X-API-Key: your-key"

Overall System Status

Get combined status of all components.
curl -X GET http://localhost:1933/api/v1/observer/system \
  -H "X-API-Key: your-key"

Response Schemas

Component Status Response

status
string
Response status (ok or error)
result
object
Component status information
time
number
Request processing time in seconds

System Status Response

status
string
Response status (ok or error)
result
object
Overall system status
time
number
Request processing time in seconds

Response Examples

{
  "status": "ok",
  "result": {
    "name": "queue",
    "is_healthy": true,
    "has_errors": false,
    "status": "Queue\tPending\tIn Progress\tProcessed\tErrors\tTotal\nEmbedding\t0\t0\t10\t0\t10\nSemantic\t0\t0\t10\t0\t10\nTOTAL\t0\t0\t20\t0\t20"
  },
  "time": 0.03
}
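The `result.status` field above is itself a small tab-separated table. A minimal helper to turn it into per-queue dicts (a sketch based only on the example response shown here):

```python
def parse_queue_status(status: str) -> list[dict[str, str]]:
    """Split the tab-separated queue table into one dict per row."""
    lines = status.split("\n")
    header = lines[0].split("\t")
    return [dict(zip(header, row.split("\t"))) for row in lines[1:]]

status = ("Queue\tPending\tIn Progress\tProcessed\tErrors\tTotal\n"
          "Embedding\t0\t0\t10\t0\t10\n"
          "Semantic\t0\t0\t10\t0\t10\n"
          "TOTAL\t0\t0\t20\t0\t20")
for row in parse_queue_status(status):
    print(row["Queue"], row["Processed"])
# Embedding 10
# Semantic 10
# TOTAL 20
```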

Wait for Processing

Wait for all asynchronous processing (embedding, semantic generation) to complete.
curl -X POST http://localhost:1933/api/v1/system/wait \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your-key" \
  -d '{"timeout": 60.0}'
# Assumes `client` is an already-initialized SDK client.
# Add resources
client.add_resource("./docs/")

# Wait for all processing to complete
status = client.wait_processed(timeout=60.0)
print(f"Pending: {status['pending']}")
print(f"Processed: {status['processed']}")
print(f"Errors: {status['errors']}")
Response:
{
  "status": "ok",
  "result": {
    "pending": 0,
    "in_progress": 0,
    "processed": 20,
    "errors": 0
  },
  "time": 0.1
}
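When scripting against this endpoint it helps to fail fast on a bad result. A small sketch; the checks mirror the fields in the response above, and `check_wait_result` is a hypothetical helper, not part of the SDK:

```python
def check_wait_result(result: dict) -> int:
    """Return the processed count, raising if the wait did not finish cleanly."""
    if result["errors"]:
        raise RuntimeError(f"{result['errors']} item(s) failed processing")
    if result["pending"] or result["in_progress"]:
        raise TimeoutError("processing did not finish within the timeout")
    return result["processed"]

print(check_wait_result(
    {"pending": 0, "in_progress": 0, "processed": 20, "errors": 0}
))  # 20
```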

Health Check

Quick boolean health check:
curl -X GET http://localhost:1933/api/v1/debug/health \
  -H "X-API-Key: your-key"
# Assumes `client` is an already-initialized SDK client.
if client.observer.is_healthy():
    print("System OK")
else:
    print(client.observer.system)
Response:
{
  "status": "ok",
  "result": {
    "healthy": true
  },
  "time": 0.02
}

Monitoring Best Practices

Configure the /ready endpoint (no auth required) as your readiness probe:
readinessProbe:
  httpGet:
    path: /ready
    port: 1933
  initialDelaySeconds: 10
  periodSeconds: 5
If pending or in_progress counts remain high:
  • Check embedding service health
  • Verify VikingDB connectivity
  • Consider scaling workers
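One way to act on a lingering backlog is a bounded polling loop. A sketch, assuming you supply a `fetch_counts` callable that returns the queue counts from the observer endpoint:

```python
import time

def wait_for_drain(fetch_counts, timeout: float = 60.0, interval: float = 1.0) -> bool:
    """Poll until pending and in_progress hit zero; False if the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        counts = fetch_counts()
        if counts["pending"] == 0 and counts["in_progress"] == 0:
            return True
        time.sleep(interval)
    return False

# Stubbed fetcher that drains after two polls, for illustration:
state = iter([{"pending": 2, "in_progress": 1},
              {"pending": 0, "in_progress": 1},
              {"pending": 0, "in_progress": 0}])
print(wait_for_drain(lambda: next(state), timeout=5.0, interval=0.0))  # True
```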
Monitor token consumption to:
  • Predict costs
  • Identify usage spikes
  • Detect potential abuse