Getting Started with Praval
Welcome to Praval! This guide will help you get up and running with the framework in minutes.
What is Praval?
Praval is a Python framework for building multi-agent AI systems. Instead of creating monolithic AI applications, you create ecosystems of specialized agents that collaborate intelligently.
The name: Praval (प्रवाल) is Sanskrit for coral, representing how simple agents collaborate to create complex, intelligent ecosystems.
Installation
Minimal Installation
For basic agent functionality with LLM support:
pip install praval
With Memory System
To enable persistent memory with vector search:
pip install praval[memory]
This adds:
ChromaDB for vector storage
Sentence Transformers for embeddings
scikit-learn for similarity search
With All Features
For the complete Praval experience:
pip install praval[all]
This includes:
Memory system
Secure messaging (enterprise features)
PDF knowledge base support
All storage providers (PostgreSQL, Redis, S3, Qdrant)
For Development
If you’re contributing to Praval:
git clone https://github.com/aiexplorations/praval.git
cd praval
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
pip install -e ".[dev]"
Prerequisites
Python Version
Praval requires Python 3.9 or higher. We support:
Python 3.9
Python 3.10
Python 3.11
Python 3.12
API Keys
You’ll need at least one LLM provider API key. Praval supports:
OpenAI (recommended for beginners):
export OPENAI_API_KEY="sk-..."
Anthropic (Claude models):
export ANTHROPIC_API_KEY="sk-ant-..."
Cohere:
export COHERE_API_KEY="..."
Praval automatically detects which provider to use based on available API keys.
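Conceptually, that detection amounts to checking environment variables in a priority order. The sketch below is illustrative only; `PROVIDER_KEYS` and `detect_provider` are hypothetical names, not Praval's internals:

```python
import os

# Illustrative provider detection: return the first provider whose
# API key is present in the environment. Not Praval's actual code.
PROVIDER_KEYS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "cohere": "COHERE_API_KEY",
}

def detect_provider(env=None):
    """Return the first provider whose key is set, or None."""
    env = os.environ if env is None else env
    for provider, key_name in PROVIDER_KEYS.items():
        if env.get(key_name):
            return provider
    return None
```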
Your First Agent
Let’s create a simple research agent:
from praval import agent, chat, broadcast, start_agents

@agent("researcher")
def research_agent(spore):
    """I research topics and provide insights."""
    topic = spore.knowledge.get("topic", "AI")
    result = chat(f"Provide a brief overview of: {topic}")
    return {"summary": result}

# Start the agent system
start_agents()

# Interact with the agent
result = research_agent({"topic": "quantum computing"})
print(result["summary"])
That’s it! You’ve created your first Praval agent.
Understanding the Code
Let’s break down what’s happening:
1. The @agent Decorator
@agent("researcher")
def research_agent(spore):
    ...
This transforms a regular Python function into an intelligent agent. The agent:
Has a unique name: "researcher"
Receives messages through the spore parameter
Can communicate with other agents
Has access to LLM capabilities
2. The chat() Function
result = chat(f"Provide a brief overview of: {topic}")
This sends a prompt to your configured LLM provider and returns the response. It automatically:
Selects the appropriate LLM provider
Handles API communication
Manages errors and retries
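Error handling of this kind usually amounts to a retry loop with exponential backoff. Below is a plain-Python sketch of that general pattern, not Praval's actual implementation; `with_retries` is a hypothetical helper:

```python
import time

def with_retries(call, max_attempts=3, base_delay=0.5):
    """Retry a callable with exponential backoff between attempts."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            # Re-raise once the attempt budget is exhausted
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

With the default `base_delay=0.5`, a failing call waits 0.5s, then 1s, before the final attempt.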
3. The Spore Object
topic = spore.knowledge.get("topic", "AI")
A Spore is Praval’s message format. It’s a structured container carrying:
knowledge: Data dictionary
type: Message type
sender: Who sent it
metadata: Additional context
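To make the structure concrete, here is an illustrative stand-in modeled on the fields listed above. `SimpleSpore` is a hypothetical dataclass for explanation only, not Praval's real Spore class:

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for illustration; Praval's real Spore
# class may be structured differently.
@dataclass
class SimpleSpore:
    knowledge: dict = field(default_factory=dict)  # payload data
    type: str = "message"                          # message type
    sender: str = "unknown"                        # originating agent
    metadata: dict = field(default_factory=dict)   # additional context

spore = SimpleSpore(knowledge={"topic": "AI"},
                    type="research_request",
                    sender="user")
```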
4. Starting Agents
start_agents()
This initializes the agent communication system. It:
Starts the Reef (message bus)
Registers all agents
Prepares them to receive messages
Multi-Agent Communication
Now let’s create agents that collaborate:
from praval import agent, chat, broadcast, start_agents

@agent("researcher", responds_to=["research_request"])
def researcher(spore):
    """Research topics in depth."""
    topic = spore.knowledge.get("topic")
    findings = chat(f"Research this deeply: {topic}")
    # Broadcast findings to other agents
    broadcast({
        "type": "research_complete",
        "topic": topic,
        "findings": findings
    })
    return {"status": "research_complete"}

@agent("summarizer", responds_to=["research_complete"])
def summarizer(spore):
    """Create concise summaries."""
    findings = spore.knowledge.get("findings")
    summary = chat(f"Summarize this in 3 bullet points: {findings}")
    print(f"Summary:\n{summary}")
    return {"summary": summary}

# Start the system
start_agents()

# Trigger the workflow
broadcast({
    "type": "research_request",
    "topic": "neural networks"
})

# Give agents time to process
import time
time.sleep(3)
What happens:
1. You broadcast a research_request
2. The researcher agent responds (it listens to research_request)
3. The researcher does its work and broadcasts research_complete
4. The summarizer agent responds (it listens to research_complete)
5. The summarizer creates and prints a summary
Key insight: Agents coordinate themselves. You don't orchestrate the workflow; you just declare what each agent responds to.
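This routing idea can be sketched in a few lines of plain Python. The sketch is a conceptual illustration of type-based subscription, not the Reef's actual implementation:

```python
# Conceptual sketch of responds_to-style routing (not Praval's code):
# each message type maps to the handlers subscribed to it.
handlers = {}

def subscribe(msg_type, handler):
    handlers.setdefault(msg_type, []).append(handler)

def publish(message):
    # Deliver the message to every handler registered for its type
    for handler in handlers.get(message["type"], []):
        handler(message)

log = []
subscribe("research_request", lambda m: log.append(("researcher", m["topic"])))
subscribe("research_complete", lambda m: log.append(("summarizer", m["topic"])))

publish({"type": "research_request", "topic": "neural networks"})
# Only the subscribed "researcher" handler runs; no central orchestrator.
```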
Adding Memory
Give your agents persistent memory:
@agent("expert", memory=True)
def expert_agent(spore):
    """An expert that learns from conversations."""
    question = spore.knowledge.get("question")

    # Recall similar past questions
    past_context = expert_agent.recall(question, limit=3)

    # Generate answer with context
    answer = chat(f"Question: {question}\nContext: {past_context}")

    # Remember this interaction
    expert_agent.remember(f"Q: {question}\nA: {answer}")

    return {"answer": answer}
Memory features:
remember(text): Store information
recall(query, limit=5): Retrieve similar memories
forget(): Clear memory
Memory works across sessions (persistent)
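The remember/recall interface can be illustrated with a toy in-memory version that ranks stored text by word overlap with the query. This is purely illustrative; Praval's actual memory system uses vector embeddings (ChromaDB and Sentence Transformers), not word matching:

```python
# Toy illustration of the remember/recall/forget interface.
memories = []

def remember(text):
    """Store a piece of text."""
    memories.append(text)

def recall(query, limit=5):
    """Return stored memories ranked by naive word overlap with the query."""
    q_words = set(query.lower().split())
    scored = [(len(q_words & set(m.lower().split())), m) for m in memories]
    matches = [(s, m) for s, m in scored if s > 0]
    matches.sort(key=lambda sm: sm[0], reverse=True)
    return [m for _, m in matches[:limit]]

def forget():
    """Clear all stored memories."""
    memories.clear()

remember("Q: what is a neural network A: a layered model")
remember("Q: capital of France A: Paris")
```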
Next Steps
Now that you have the basics:
Read Core Concepts - Understand Praval’s architecture
Follow Tutorials - Build real applications step-by-step
Explore Examples - See production-ready patterns
Check API Reference - Deep dive into all capabilities
Recommended Learning Path
Beginners:
Tutorial: Creating Your First Agent
Tutorial: Agent Communication
Example: Simple Calculator
Intermediate:
Tutorial: Memory-Enabled Agents
Tutorial: Tool Integration
Example: Knowledge Graph Miner
Advanced:
Tutorial: Multi-Agent Systems
Guide: Storage System
Guide: Secure Spores
Common Patterns
Pattern 1: Request-Response
@agent("responder", responds_to=["request"])
def responder(spore):
    return {"response": "done"}

broadcast({"type": "request"})
Pattern 2: Pipeline
@agent("step1", responds_to=["start"])
def step1(spore):
    broadcast({"type": "step2_input", "data": "processed"})

@agent("step2", responds_to=["step2_input"])
def step2(spore):
    broadcast({"type": "final_output", "result": "complete"})
Pattern 3: Fan-Out/Fan-In
# One trigger, multiple responders
@agent("worker1", responds_to=["task"])
def worker1(spore):
    broadcast({"type": "result", "from": "worker1"})

@agent("worker2", responds_to=["task"])
def worker2(spore):
    broadcast({"type": "result", "from": "worker2"})

@agent("aggregator", responds_to=["result"])
def aggregator(spore):
    # Collect all results
    pass
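The aggregator above is left as a stub. One common fan-in strategy is to collect results until every expected worker has reported; the plain-Python sketch below shows the idea (`EXPECTED_WORKERS` and `aggregate` are illustrative names, not Praval API):

```python
# Illustrative fan-in logic: gather results until all workers report.
EXPECTED_WORKERS = {"worker1", "worker2"}
results = []

def aggregate(message):
    """Record one result; report completion once all workers are in."""
    results.append(message["from"])
    if set(results) >= EXPECTED_WORKERS:
        return {"status": "complete", "workers": sorted(set(results))}
    return {"status": "waiting"}
```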
Troubleshooting
No API Key Found
Error: No valid API key found for any LLM provider
Solution: Set at least one API key:
export OPENAI_API_KEY="your-key-here"
Import Errors
ImportError: cannot import name 'MemoryManager'
Solution: Install memory dependencies:
pip install praval[memory]
Agents Not Responding
If agents aren’t receiving messages:
Check you called start_agents()
Verify the responds_to types match the broadcast types
Add debug prints to see message flow
Configuration
Environment Variables
# LLM Provider Selection
export PRAVAL_DEFAULT_PROVIDER=openai
export PRAVAL_DEFAULT_MODEL=gpt-4-turbo
# Memory Configuration
export QDRANT_URL=http://localhost:6333
# Logging
export PRAVAL_LOG_LEVEL=INFO
Programmatic Configuration
from praval import configure
configure({
    "default_provider": "openai",
    "default_model": "gpt-4-turbo",
    "max_concurrent_agents": 10,
    "memory_config": {
        "embedding_model": "all-MiniLM-L6-v2"
    }
})
Getting Help
Documentation: You’re reading it!
GitHub Issues: Report bugs
Examples: See the examples/ directory
API Reference: Complete function documentation
Ready to dive deeper? Head to Core Concepts to understand Praval’s architecture!