Overview

While RDK auto-instruments LLM calls, you may want to trace other operations — database queries, API calls, or business logic. Use the span() context manager to create custom spans.

Creating Manual Spans

from rdk import init, observe, span
from rdk.models import SpanType

init(endpoint="...", api_key="...")

@observe(name="process-order")
def process_order(order_id: str) -> dict:
    with span("fetch_order", span_type=SpanType.FUNCTION, input_data={"order_id": order_id}) as s:
        order = database.get_order(order_id)
        s.metadata["found"] = order is not None
    return order
The span() context manager handles timing and status automatically:
  • On normal exit: span is marked SUCCESS
  • On exception: span is marked ERROR, then the exception re-raises
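For example, an error inside the block is recorded and then propagates normally. A minimal sketch; payments.charge, PaymentError, and handle_failure are placeholders for your own code:

try:
    with span("payments.charge", span_type=SpanType.TOOL, input_data={"order_id": order_id}):
        # If charge() raises, the span is marked ERROR before the
        # exception reaches the except block below.
        payments.charge(order_id)
except PaymentError:
    handle_failure(order_id)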

Span Types

Choose the type that best describes the operation:
Type                 Use Case
SpanType.LLM         Language model calls
SpanType.CHAIN       Orchestration / workflow logic
SpanType.TOOL        External tool or API calls
SpanType.FUNCTION    Internal functions or business logic
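The one type not shown elsewhere on this page is CHAIN. The sketch below uses it to wrap a multi-step workflow; retrieve and rerank are placeholders for your own pipeline steps:

with span("rag-pipeline", span_type=SpanType.CHAIN, input_data={"question": question}) as s:
    docs = retrieve(question)      # placeholder retrieval step
    docs = rerank(question, docs)  # placeholder rerank step
    s.metadata["doc_count"] = len(docs)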

Complete Example

Multiple manual spans in one function:
from anthropic import Anthropic
from rdk import init, observe, span, shutdown
from rdk.models import SpanType

init(endpoint="...", api_key="...")

@observe(name="customer-lookup")
def lookup_customer(email: str) -> dict:
    # Span for database lookup
    with span("database.query", span_type=SpanType.FUNCTION, input_data={"query": "SELECT ... WHERE email = ?"}):
        customer = {"id": "123", "name": "John", "email": email}

    # Span for external API call
    with span("crm.enrich", span_type=SpanType.TOOL, input_data={"customer_id": "123"}) as s:
        enriched = {"customer": customer, "score": 85}
        s.metadata["score"] = 85

    # LLM call is auto-traced
    client = Anthropic()
    response = client.messages.create(
        model="claude-sonnet-4-6",
        max_tokens=256,
        messages=[{
            "role": "user",
            "content": f"Summarize this customer: {enriched}"
        }]
    )

    return {
        "customer": customer,
        "summary": response.content[0].text
    }

result = lookup_customer("john@example.com")
shutdown()

Attaching Data During Execution

Write to s.metadata inside the block to record data discovered during execution:
with span("vector-search", input_data={"query": query}) as s:
    results = vector_db.search(query)
    s.metadata["result_count"] = len(results)
    s.metadata["top_score"] = results[0].score if results else None

Outside a Trace

If span() is called outside an active trace context, it yields a no-op dummy span. No error is raised, and no data is sent. This makes it safe to call span() in library code that may or may not be traced by the caller.
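A sketch of that pattern in library code; db.fetch_user is a placeholder for your own data layer:

def get_user(db, user_id: str) -> dict | None:
    # Works whether or not the caller wrapped this call in @observe:
    # with no active trace, span() yields a dummy span and sends nothing.
    with span("users.get", span_type=SpanType.FUNCTION, input_data={"user_id": user_id}) as s:
        user = db.fetch_user(user_id)
        s.metadata["found"] = user is not None
        return user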

Best Practices

Only create manual spans for operations you actually want to trace. Too many spans can make traces hard to read.
  1. Use descriptive names — database.query.customers, not db1
  2. Include relevant input — but avoid sensitive data (see the sketch after this list)
  3. Set appropriate types — Helps with filtering and visualization
  4. Don’t over-instrument — Auto-instrumented LLM calls don’t need manual spans
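The sketch below combines points 1 and 2: a descriptive, dot-separated span name and input with the sensitive value masked. mask_email and find_customer are hypothetical helpers:

with span(
    "database.query.customers",               # descriptive, filterable name
    span_type=SpanType.FUNCTION,
    input_data={"email": mask_email(email)},  # mask before recording, e.g. "j***@example.com"
) as s:
    customer = find_customer(email)           # hypothetical lookup
    s.metadata["found"] = customer is not None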

See Also