Prerequisites
Before starting, ensure you have:
- Installed Handler (see the installation guide)
- Python 3.11+ installed on your system
- An A2A agent URL to connect to (or use the local server setup below)
If you don’t have an A2A agent to test with, you can run Handler’s built-in local agent (see Running a local agent below).
Your first message
Let’s send a simple message to an A2A agent.

Send a message
Use the message send command to send text to an agent. Handler will connect to the agent, send your message, and display the response.
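A first message might look like the following sketch. Only the message send subcommand is confirmed above; the binary name handler, the --url flag, and the example URL are assumptions:

```shell
# Hypothetical invocation; check the CLI's own help for actual flags.
handler message send --url https://agent.example.com "What can you do?"
```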
The context_id and task_id are automatically saved to your session for conversation continuity.

Continue the conversation
Send a follow-up message using the saved session.
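A follow-up using the saved session might look like this sketch; the --continue flag is confirmed above, while the binary name and message text are illustrative:

```shell
# Reuses the context_id and task_id saved from the previous message.
handler message send --continue "Tell me more about that."
```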
The --continue flag (or -C) uses the saved context and task IDs from your previous interaction.

Exploring agent capabilities
Before interacting with an agent, you can inspect its capabilities.

Get the agent card
The agent card describes the agent’s:
- Name and description
- Supported skills
- Capabilities (streaming, push notifications)
- Content types (text, markdown, images)
- Authentication requirements
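Fetching the card might look like this sketch; the card get subcommand name and the URL are assumptions:

```shell
# Hypothetical subcommand; fetches and displays the agent card.
handler card get https://agent.example.com
```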
Validate the agent card
Ensure an agent card is valid according to the A2A protocol.

Using the terminal UI (TUI)
For a more interactive experience, launch Handler’s terminal user interface. The TUI provides:
- Visual agent card display with syntax highlighting
- Real-time message streaming with markdown rendering
- Task and artifact management in dedicated tabs
- Session persistence across restarts
- Debug logs for troubleshooting
Key bindings:
- Ctrl+Q - Quit
- / - Command palette
- Ctrl+M - Maximize focused panel
- Tab - Switch focus between panels
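Launching the TUI might look like this; the tui subcommand name is an assumption:

```shell
# Hypothetical subcommand; opens the interactive terminal interface.
handler tui
```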
Working with sessions
Handler automatically manages sessions for conversation continuity.

List all sessions
View a specific session
Clear session data
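The three session operations above might be sketched as follows; every subcommand name here is an assumption:

```shell
handler session list          # list all saved sessions
handler session show <id>     # view a specific session
handler session clear         # clear saved session data
```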
Authentication
Many A2A agents require authentication. Handler supports bearer tokens and API keys.

Set credentials
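Setting credentials might look like this sketch; the auth set subcommand and both flag names are assumptions, though bearer tokens and API keys are confirmed above:

```shell
# Hypothetical flags for the two supported credential types.
handler auth set --bearer <token>
handler auth set --api-key <key>
```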
Credentials are stored in ~/.handler/sessions.json and automatically used for future requests.
Inline authentication
You can also provide credentials per request.

Clear credentials
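Per-request credentials and clearing stored ones might be sketched as follows; the flag and subcommand names are assumptions:

```shell
handler message send --token <token> "Hello"   # inline credential for one request
handler auth clear                             # remove stored credentials
```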
Running a local agent
Handler includes a reference A2A agent implementation for testing. It supports:
- OpenAI, Anthropic, Google, and other LLM providers
- Ollama for local models
- Custom system prompts
- Streaming responses
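Starting the reference agent might look like this sketch; the serve subcommand, the --port flag, and the port number are assumptions:

```shell
# Start the local reference agent, then point the client at it.
handler serve --port 8080

# In another terminal:
handler message send --url http://localhost:8080 "Hello"
```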
Configure the local agent
Set environment variables to customize the agent.

For Ollama:
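An Ollama configuration might look like this sketch; all three variable names are assumptions, not documented values:

```shell
export HANDLER_PROVIDER=ollama
export HANDLER_MODEL=llama3
export OLLAMA_HOST=http://localhost:11434
```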
Task management
For long-running tasks, you can check status and cancel if needed.

Get task status
Cancel a task
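The two task operations might be sketched as follows; the subcommand names are assumptions, and the task ID comes from an earlier response:

```shell
handler task get <task_id>      # check the status of a long-running task
handler task cancel <task_id>   # cancel it if needed
```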
Set up push notifications
Get notified when a task completes.

Next steps
Now that you’ve sent your first messages, explore more advanced features:

CLI reference
Complete reference for all CLI commands and options
TUI guide
Master the terminal user interface
MCP integration
Connect Handler to Claude Desktop and other AI assistants
Local server
Configure and deploy your own A2A agent