Core Concepts

Understanding Praval's architecture and design philosophy.

The Coral Reef Metaphor

Praval is inspired by coral reef ecosystems:

Coral polyps are simple organisms with specialized functions. Individually, they're not complex. But when thousands of polyps collaborate, they create magnificent coral reefs - some of the most complex and productive ecosystems on Earth.

Similarly, in Praval:

  • Agents are like coral polyps - simple, specialized functions

  • The Reef is the communication substrate connecting them

  • Spores are the messages carrying knowledge between agents

  • Complex intelligence emerges from agent collaboration

Design Principles

1. Specialization Over Generalization

Each agent excels at one thing.

Good:

@agent("researcher")
def research_agent(spore):
    """I research topics in depth."""
    topic = spore.knowledge.get("topic")
    return {"research": chat(f"Research: {topic}")}

Avoid:

@agent("super_agent")
def do_everything(spore):
    """I research, analyze, summarize, format, and deploy."""
    # Too many responsibilities!

Why? Specialized agents:

  • Are easier to understand and maintain

  • Can run concurrently

  • Fail independently (resilience)

  • Can be reused across projects

2. Declarative Design

Define what agents ARE, not what they DO.

The @agent decorator is declarative - you specify:

  • Agent's identity (name)

  • What it responds to (responds_to)

  • Its capabilities (system_message)

  • Its resources (memory, knowledge_base)

You don't specify:

  • When it runs (agents self-organize)

  • How it coordinates (handled by Reef)

  • Order of execution (emergent from message flow)
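
For illustration, a minimal sketch of a fully declared agent - parameter names follow the @agent signature shown under Agents below, and the specific values are illustrative, not prescribed defaults:

@agent("market_researcher",
       responds_to=["research_requested"],
       system_message="You research markets and cite your sources.",
       memory=True,
       knowledge_base="./reports/")
def market_researcher(spore):
    # Declares identity, triggers, capabilities, and resources -
    # never when it runs or in what order.
    topic = spore.knowledge.get("topic")
    return {"research": chat(f"Research the market for: {topic}")}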

3. Emergent Intelligence

Complex behaviors emerge from simple agent interactions.

Example: A business analysis system doesn't need a "master orchestrator". Instead:

  1. Interviewer asks questions → broadcasts question_asked

  2. Researcher hears it → researches → broadcasts research_ready

  3. Analyst hears research → analyzes → broadcasts analysis_ready

  4. Reporter hears analysis → generates report → broadcasts report_ready

Each agent only knows its own job. The workflow emerges naturally.
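
A hedged sketch of that flow (agent and spore names mirror the steps above; chat() stands in for each agent's real work):

@agent("interviewer")
def interviewer(spore):
    question = chat("Ask one probing business question.")
    broadcast({"type": "question_asked", "question": question})

@agent("researcher", responds_to=["question_asked"])
def researcher(spore):
    findings = chat(f"Research: {spore.knowledge.get('question')}")
    broadcast({"type": "research_ready", "findings": findings})

@agent("analyst", responds_to=["research_ready"])
def analyst(spore):
    analysis = chat(f"Analyze: {spore.knowledge.get('findings')}")
    broadcast({"type": "analysis_ready", "analysis": analysis})

@agent("reporter", responds_to=["analysis_ready"])
def reporter(spore):
    report = chat(f"Write a report from: {spore.knowledge.get('analysis')}")
    broadcast({"type": "report_ready", "report": report})

No agent references any other agent by name; remove one and the rest keep working.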

4. Zero Configuration

Sensible defaults, progressive enhancement.

Basic agent:

@agent("simple")
def simple_agent(spore):
    return chat("Hello")

No configuration needed. It just works.

Enhanced agent:

@agent("advanced",
       channel="knowledge",
       responds_to=["specific_events"],
       memory=True,
       knowledge_base="./docs/")
def advanced_agent(spore):
    # All features enabled
    pass

You add features as needed, not upfront.

5. Composability

Agents combine naturally through standard interfaces.

All agents:

  • Receive Spores (standard message format)

  • Use chat() (standard LLM interface)

  • Return dictionaries (standard data format)

  • Communicate via broadcast() (standard messaging)

This means any agent can work with any other agent.
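
As a sketch of what that uniformity buys you, here is a generic summarizer that depends only on the standard contract (the agent and spore names are illustrative):

@agent("summarizer", responds_to=["text_ready"])
def summarizer(spore):
    # Spore in, dict out, chat() for the LLM, broadcast() to pass results on.
    text = spore.knowledge.get("text", "")
    summary = chat(f"Summarize in two sentences: {text}")
    broadcast({"type": "summary_ready", "summary": summary})
    return {"summary": summary}

Any agent that broadcasts a text_ready spore can feed it; any agent listening for summary_ready can consume it.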

Core Components

Agents

What: Functions decorated with @agent() that become autonomous agents.

Signature:

@agent(name, channel=None, system_message=None,
       auto_broadcast=True, responds_to=None,
       memory=False, knowledge_base=None)
def agent_function(spore):
    return {"result": "..."}

Key attributes:

  • name: Unique identifier

  • responds_to: List of message types to handle

  • memory: Enable persistent memory

  • knowledge_base: Auto-index documents

Agent capabilities:

  • chat(prompt): Talk to LLM

  • broadcast(message): Send to other agents

  • remember(text): Store in memory (if enabled)

  • recall(query): Retrieve from memory (if enabled)
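
A sketch combining the four capabilities in one agent (names are illustrative; the remember/recall calls follow the agent_name.remember / agent_name.recall style used in the Memory System section below):

@agent("support", responds_to=["ticket_opened"], memory=True)
def support(spore):
    issue = spore.knowledge.get("issue", "")
    past = support.recall(issue, limit=3)                      # recall similar tickets
    reply = chat(f"Known issues: {past}\nNew issue: {issue}")  # ask the LLM
    support.remember(f"Ticket: {issue} -> {reply}")            # store for next time
    broadcast({"type": "ticket_answered", "reply": reply})     # notify other agents
    return {"reply": reply}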

Spores

What: Structured messages carrying knowledge between agents.

Structure:

{
    "type": "message_type",      # Required: message category
    "knowledge": {               # Optional: data payload
        "key": "value",
        ...
    },
    "sender": "agent_name",      # Auto-filled: who sent it
    "timestamp": 1234567890,     # Auto-filled: when sent
    "metadata": {...}            # Optional: extra context
}

Accessing spore data:

def my_agent(spore):
    msg_type = spore.type
    data = spore.knowledge.get("key")
    sender = spore.sender

Spore types are how agents filter messages:

@agent("listener", responds_to=["event_a", "event_b"])
def listener(spore):
    if spore.type == "event_a":
        ...  # Handle event A
    elif spore.type == "event_b":
        ...  # Handle event B

The Reef

What: The communication substrate connecting all agents.

Key features:

  • Message routing: Delivers spores to interested agents

  • Channels: Organize communication streams

  • Async delivery: Non-blocking message passing

  • History tracking: Maintains message logs

The Reef is automatic - you rarely interact with it directly:

# This happens automatically when you:
broadcast({"type": "event"})

# Behind the scenes:
# 1. Reef receives the spore
# 2. Finds all agents listening to "event"
# 3. Delivers to each one asynchronously
# 4. Logs the transaction

Manual Reef access (advanced):

from praval import get_reef

reef = get_reef()
messages = reef.get_history(channel="main")

Registry

What: Catalog of all agents in the system.

Automatic registration:

@agent("worker")  # Automatically registered
def worker(spore):
    pass

Discovery:

from praval import get_registry

registry = get_registry()
all_agents = registry.list_agents()
worker = registry.get_agent("worker")

Use cases:

  • Debugging: See all active agents

  • Monitoring: Track agent states

  • Dynamic dispatch: Route to agents by capability
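
For example, a hedged dispatch sketch built only on the calls shown above (it assumes list_agents() returns agent names):

from praval import get_registry

def dispatch(task):
    registry = get_registry()
    # Route to the specialist if it is registered, otherwise fall back.
    if "researcher" in registry.list_agents():
        broadcast({"type": "research_task", "topic": task})
    else:
        broadcast({"type": "generic_task", "topic": task})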

Communication Patterns

Pattern 1: Broadcast & Filter

Most common pattern in Praval.

@agent("listener1", responds_to=["event"])
def listener1(spore):
    print("Listener 1 heard event")

@agent("listener2", responds_to=["event"])
def listener2(spore):
    print("Listener 2 heard event")

@agent("listener3", responds_to=["other_event"])
def listener3(spore):
    print("Listener 3 won't hear 'event'")

broadcast({"type": "event"})
# Output:
# Listener 1 heard event
# Listener 2 heard event

Pattern 2: Request-Response

One agent makes a request; another responds.

@agent("requester")
def requester(spore):
    broadcast({"type": "data_request", "query": "user_data"})
    # Continue with other work...

@agent("responder", responds_to=["data_request"])
def responder(spore):
    query = spore.knowledge.get("query")
    data = fetch_data(query)
    broadcast({"type": "data_response", "data": data})
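
To close the loop, a third agent can listen for the response; a minimal sketch:

@agent("consumer", responds_to=["data_response"])
def consumer(spore):
    data = spore.knowledge.get("data")
    # Use the answer, e.g. summarize it or continue the workflow.
    return {"summary": chat(f"Summarize this data: {data}")}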

Pattern 3: Pipeline

A chain of agents, each processing data and passing it along.

@agent("ingestion", responds_to=["raw_data"])
def ingestion(spore):
    clean = clean_data(spore.knowledge.get("data"))
    broadcast({"type": "clean_data", "data": clean})

@agent("analysis", responds_to=["clean_data"])
def analysis(spore):
    results = analyze(spore.knowledge.get("data"))
    broadcast({"type": "analyzed_data", "results": results})

@agent("reporting", responds_to=["analyzed_data"])
def reporting(spore):
    report = generate_report(spore.knowledge.get("results"))
    broadcast({"type": "final_report", "report": report})

Pattern 4: Coordinator

One agent orchestrates others.

@agent("coordinator")
def coordinator(spore):
    task = spore.knowledge.get("task")

    # Dispatch to specialists
    broadcast({"type": "research_task", "topic": task})
    broadcast({"type": "analysis_task", "subject": task})
    broadcast({"type": "summary_task", "item": task})

    # Collect results in another agent...

@agent("researcher", responds_to=["research_task"])
def researcher(spore):
    # Do research
    broadcast({"type": "research_complete", "findings": "..."})
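
A hedged sketch of the collection step noted in the coordinator above: a collector agent that accumulates specialist results in a module-level dict (the *_complete spore types are illustrative):

results = {}

@agent("collector", responds_to=["research_complete",
                                 "analysis_complete",
                                 "summary_complete"])
def collector(spore):
    results[spore.type] = spore.knowledge   # one entry per specialist report
    if len(results) == 3:                   # every specialist has reported
        broadcast({"type": "task_complete", "results": dict(results)})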

Memory System

Praval provides multi-layered memory for agents that need to remember.

Memory Types

  1. Short-term Memory: Working memory, temporary

  2. Long-term Memory: Persistent vector storage

  3. Episodic Memory: Conversation history

  4. Semantic Memory: Facts and knowledge

Enabling Memory

@agent("learner", memory=True)
def learner(spore):
    question = spore.knowledge.get("question")

    # Store
    learner.remember(f"Asked: {question}")

    # Retrieve
    context = learner.recall(question, limit=5)

    # Use context
    answer = chat(f"Context: {context}\nQuestion: {question}")
    return {"answer": answer}

Knowledge Base

Auto-index documents for instant agent knowledge:

@agent("expert", memory=True, knowledge_base="./docs/")
def expert(spore):
    # Agent automatically has access to all documents in ./docs/
    query = spore.knowledge.get("query")

    # Semantic search across documents
    relevant = expert.recall(query)

    return {"answer": chat(f"Based on: {relevant}\nAnswer: {query}")}

See Memory System Guide for details.

Tool System

Agents can use external tools and APIs.

Defining Tools

from praval import tool

@tool("calculator", description="Performs mathematical calculations")
def calculator(expression: str) -> float:
    """Evaluates a mathematical expression."""
    return eval(expression)  # Simplified for demo

@tool("web_search")
def search_web(query: str) -> str:
    """Searches the web and returns results."""
    results = ""  # Implementation... (call your search API here)
    return results

Using Tools in Agents

@agent("assistant")
def assistant(spore):
    # Agent automatically discovers registered tools
    question = spore.knowledge.get("question")

    # LLM can suggest tool usage via chat
    result = chat(f"Answer this using available tools: {question}")

    return {"answer": result}

See Tool System Guide for details.

Storage System

Unified interface for data persistence across providers.

Supported Providers

  • FileSystem: Local file storage

  • PostgreSQL: Relational database

  • Redis: In-memory cache

  • S3: Cloud object storage

  • Qdrant: Vector database

Using Storage

from praval import get_data_manager

@agent("data_agent")
def data_agent(spore):
    dm = get_data_manager()

    # Store data
    ref = dm.store(
        data={"user": "alice", "score": 95},
        storage_type="postgresql",
        metadata={"category": "user_data"}
    )

    # Retrieve data
    data = dm.retrieve(ref)

    return {"stored_ref": ref, "data": data}

See Storage Guide for details.

LLM Provider System

Praval supports multiple LLM providers with automatic selection.

Supported Providers

  • OpenAI: GPT-4, GPT-3.5-turbo, etc.

  • Anthropic: Claude models

  • Cohere: Command and Generate models

Provider Selection

Automatic (based on API keys):

# Just use chat() - Praval picks the provider
result = chat("Hello, world!")

Explicit:

from praval.providers import get_provider

provider = get_provider("openai", model="gpt-4-turbo")
result = provider.generate("Hello, world!")

Configuration

Via environment:

export PRAVAL_DEFAULT_PROVIDER=anthropic
export PRAVAL_DEFAULT_MODEL=claude-3-opus-20240229

Programmatic:

from praval import configure

configure({
    "default_provider": "openai",
    "default_model": "gpt-4-turbo"
})

Agent Lifecycle

How an agent moves through its lifecycle:

1. Definition

@agent("worker")
def worker(spore):
    return {"status": "done"}

When Python executes this:

  • Decorator creates Agent instance

  • Wraps the function

  • Registers with Registry

  • Subscribes to Reef
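
You can confirm those side effects with the Registry introduced earlier; a small sketch:

from praval import get_registry

registry = get_registry()
print(registry.list_agents())         # the new "worker" agent appears here
print(registry.get_agent("worker"))   # the Agent instance wrapping the function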

2. Activation

start_agents()

This:

  • Initializes the Reef

  • Activates all registered agents

  • Prepares message routing

3. Execution

broadcast({"type": "task"})

For each matching agent:

  • Reef delivers spore

  • Agent function executes

  • Return value captured

  • Auto-broadcast if enabled

4. Communication

Agents can:

  • Receive spores (automatic via responds_to)

  • Send broadcasts (explicit via broadcast())

  • Chat with LLM (via chat())

  • Store/retrieve data (via storage system)

Error Handling

Agent Resilience

Key principle: One agent's failure doesn't crash the system.

@agent("risky")
def risky_agent(spore):
    try:
        # Potentially failing operation
        result = dangerous_operation()
        return {"result": result}
    except Exception as e:
        # Handle gracefully
        broadcast({"type": "error", "error": str(e)})
        return {"status": "failed", "error": str(e)}

Reef Guarantees

The Reef ensures:

  • Messages are logged even if delivery fails

  • Agent failures are isolated

  • Other agents continue operating

  • Errors are traceable
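
A minimal sketch of what that isolation means in practice (illustrative only): even if one handler raises, a sibling listening to the same spore type still runs.

@agent("fragile", responds_to=["job"])
def fragile(spore):
    raise RuntimeError("boom")   # this failure stays contained

@agent("steady", responds_to=["job"])
def steady(spore):
    # Still receives the spore and keeps working.
    return {"status": "ok"}

broadcast({"type": "job"})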

Performance Considerations

Concurrency

Agents run concurrently by default:

  • Each agent in separate execution context

  • Messages delivered asynchronously

  • No blocking between agents

Memory Usage

For memory-enabled agents:

  • Short-term memory is RAM-based (fast, limited)

  • Long-term memory is disk-based (slower, unlimited)

  • Configure limits based on your needs

Scaling

Vertical (single machine):

configure({
    "max_concurrent_agents": 20  # More parallel agents
})

Horizontal (multiple machines):

  • Use external Reef (Redis, RabbitMQ)

  • Shared storage backend

  • See advanced deployment guides

Best Practices

1. Keep Agents Small

# Good
@agent("parser")
def parse_data(spore):
    return {"parsed": parse(spore.knowledge.get("raw"))}

# Too big
@agent("everything")
def do_everything(spore):
    ...  # 500 lines of code doing 10 different things

2. Use Descriptive Names

# Good
@agent("user_data_validator")
@agent("email_notification_sender")

# Unclear
@agent("thing1")
@agent("processor")

3. Document System Messages

@agent("analyzer", system_message="""
You are a financial data analyzer specializing in:
- Revenue trend analysis
- Cost optimization
- Profit margin calculation

Be precise and cite data sources.
""")
def analyzer(spore):
    # Agent has clear instructions
    pass

4. Filter Messages Specifically

# Good - specific filtering
@agent("handler", responds_to=["user_login", "user_logout"])

# Too broad - receives everything
@agent("handler")  # No filtering

5. Handle Errors Gracefully

@agent("robust")
def robust_agent(spore):
    try:
        result = risky_operation()
        return {"result": result}
    except ValueError as e:
        return {"error": "invalid_input", "detail": str(e)}
    except Exception as e:
        return {"error": "unknown", "detail": str(e)}

Next Steps

Now that you understand core concepts:

  • Tutorials: Build real applications

  • API Reference: Detailed function documentation

  • Examples: Production-ready patterns

  • Advanced Guides: Memory, Tools, Storage systems