Chapter 4: Usage and Advanced Examples
This chapter explains how to use the T20 Multi-Agent System effectively, covering basic usage patterns, the artifacts generated during a run, and advanced scenarios. It also provides guidance on debugging and troubleshooting common issues encountered when working with the system. The aim is to help users apply the full potential of T20 to complex task automation.
4.1 Basic Usage Patterns
The T20 system is primarily operated through its Command-Line Interface (CLI), `t20-cli`. The fundamental interaction involves providing a high-level goal as a string argument.
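A minimal invocation might look like the sketch below; quoting the goal ensures the shell passes it to `t20-cli` as a single string argument (the placeholder goal is illustrative):

```bash
# Quote the goal so it reaches t20-cli as one string argument
t20-cli "<your high-level goal>"
```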
Crafting Effective High-Level Goals
The quality and specificity of your goal directly influence the outcome. Aim for clarity and provide enough context for the Orchestrator (`Meta-AI`) to understand the desired end state.
- Be Specific: Instead of "Make a webpage," try "Design and create the HTML and CSS for a modern, minimalist landing page for a new SaaS product called 'Innovate'."
- Define Constraints: If there are specific requirements (e.g., color palette, technology stack), include them.
- State the Objective: Clearly articulate what the final output should be.
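Putting these tips together, a fully specified run might look like the following sketch; the goal text is illustrative and simply combines the specificity, constraints, and objective described above:

```bash
# A specific goal with explicit constraints and a clear objective, passed as one string
t20-cli "Design and create the HTML and CSS for a modern, minimalist landing page \
for a new SaaS product called 'Innovate'. Use a neutral color palette, plain HTML and CSS \
only, and make the page responsive."
```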
Interpreting CLI Output
When you run `t20-cli <your_goal>`, the system provides real-time feedback:
- Session Information: It will indicate the creation of a new session directory (e.g., `sessions/session_xyz...`).
- Agent Actions: You'll see which agent is currently performing a task (e.g., "Meta-AI is planning...", "Aurora is generating design...", "Kodax is coding...").
- Artifacts: You might see notifications about generated artifacts being saved.
- Completion: A message indicating the successful completion of the task or any errors encountered.
Understanding this output helps you track the progress and identify potential bottlenecks or issues.
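If you want to keep this console feedback for later review alongside the session artifacts, ordinary shell redirection works; this is generic shell usage, not a T20-specific flag:

```bash
# Watch progress live while also saving the console output to a file
t20-cli "<your high-level goal>" 2>&1 | tee t20-run.log
```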
4.2 Exploring Session Artifacts
Each run of the T20 system generates a dedicated session directory (e.g., `sessions/session_xyz...`) containing all the artifacts produced during the execution. These artifacts are invaluable for understanding the system's process and for debugging.
Navigating Session Directories
Each session folder typically contains:
- `initial_plan.json`: The structured plan generated by the Orchestrator, detailing the sequence of tasks and agent assignments.
- Agent Output Files: Files named according to the step and agent (e.g., `0__step_0_Lyra_result.txt`, `1__step_1_Aurora_result.md`). These contain the direct output of each agent's execution.
- Log Files: Detailed logs capturing the interactions, prompts, and system operations.
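As a quick sketch, you can browse a session directly from the shell; the session ID below is a placeholder, and the exact file names follow the patterns described above:

```bash
# Replace with the session directory printed by the CLI
SESSION="sessions/<session_id>"

# List everything the run produced, then read the Orchestrator's plan
ls -l "$SESSION"
cat "$SESSION/initial_plan.json"
```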
Analyzing Key Artifacts
- `initial_plan.json`: Review this to understand how the Orchestrator decomposed your goal. It shows the logic and steps the system intended to follow.
- Agent Results: Examine the output files from each agent. For example, `Aurora`'s output might contain design descriptions, color palettes, and layout ideas, while `Kodax`'s output would be the generated code.
- Logs: Use logs to trace the flow of information, identify specific prompts sent to agents, and diagnose errors if the process fails.
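For example, you can page through an agent's result and scan the logs for the prompts that were sent; the file names below are illustrative, and the `.log` extension is an assumption that may differ in your installation:

```bash
SESSION="sessions/<session_id>"   # replace with your actual session directory

# Read one agent's output, then look for prompts recorded in the logs
less "$SESSION/1__step_1_Aurora_result.md"
grep -in "prompt" "$SESSION"/*.log
```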
4.3 Advanced Scenarios
The T20 system's flexibility allows for sophisticated applications beyond simple tasks.
Complex Goal Decomposition
For highly complex goals, the Orchestrator's dynamic planning is crucial. You might see plans with many steps, involving multiple iterations or conditional logic (though explicit conditional logic in the plan structure itself might depend on future enhancements).
- Example: "Develop a full-stack web application for a task management system, including user authentication, task creation, progress tracking, and a responsive UI." This would likely result in a multi-step plan involving agents for requirements analysis, database design, backend API development, frontend implementation, and testing.
Leveraging Lyra for Prompt Optimization
`Lyra`, the Prompt Engineer agent, can dynamically refine system prompts to improve agent performance. While this is often handled automatically, understanding this capability is key.
- Scenario: If `Aurora` (Designer) is not producing designs that align with a "modern minimalist" aesthetic, `Lyra` might adjust the system prompt given to `Aurora` to emphasize these qualities more strongly in subsequent steps or reruns.
- Illustrative Example: Imagine a scenario where initial design prompts for `Aurora` lack emphasis on "whitespace". `Lyra` could dynamically modify the prompt to include: "Ensure ample whitespace and clean lines in the layout, adhering to a minimalist aesthetic." This refinement helps guide `Aurora` towards the desired output without manual intervention.
Customizing Agent Configurations
Agents are defined in YAML files (e.g., `agents/designer.yaml`). You can customize these definitions to:
- Change Underlying Models: Swap `gemini-2.5-flash` with another compatible model if needed.
- Adjust Agent Goals: Fine-tune the specific objectives of an agent.
- Modify Prompts: Directly edit the default system prompts associated with an agent (though `Lyra` can override these dynamically).
Example: To use a different model for `Kodax`, you would edit its YAML file, changing the `model:` field. For instance:
```yaml
# agents/kodax.yaml (example modification)
name: Kodax
role: Engineer
model: gemini-2.5-flash-alt # Changed from default
goal: Implement designs into clean, modular, and performant code.
```
4.4 Debugging and Troubleshooting
When things don't go as planned, the session artifacts are your best resource.
Common Issues and Solutions
- API Key Errors:
  - Symptom: Errors related to authentication or missing API keys.
  - Solution: Ensure your `.env` file is correctly formatted and contains a valid `GOOGLE_API_KEY`, and verify that the key is active (a minimal `.env` sketch follows this list).
- Plan Generation Failures:
  - Symptom: The Orchestrator fails to create `initial_plan.json` or produces an invalid plan.
  - Solution: Check the Orchestrator's logs for specific error messages. The goal might be too complex, ambiguous, or require capabilities not understood by the planning LLM. Try simplifying the goal or providing more context.
- Agent Execution Errors:
  - Symptom: A specific agent fails during its task (e.g., `Kodax` produces broken code).
  - Solution: Examine the session logs and the output artifact for the failed agent. The error message might indicate issues with the input it received, limitations of its model, or problems with its prompt. Review `initial_plan.json` to ensure the task delegation was appropriate.
- Unexpected Output:
  - Symptom: The final output doesn't meet expectations (e.g., suboptimal design, incorrect code logic).
  - Solution: Analyze the artifacts from preceding steps. Was the input to the final agent appropriate? Did `Lyra` optimize prompts effectively? Consider refining the initial goal or exploring agent configurations.
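For the API key issue above, the `.env` file usually needs only the key itself. A minimal sketch with a placeholder value follows; the variable name `GOOGLE_API_KEY` comes from this guide, while the file's exact location depends on your setup:

```bash
# .env — replace the placeholder with your actual API key
GOOGLE_API_KEY=your-api-key-here
```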
Using Session Logs for Debugging
Session logs provide a chronological record of the system's execution. By tracing the prompts sent to agents and their responses, you can pinpoint where the process deviated or failed. Look for specific error messages, timeouts, or unexpected content in the logs to diagnose the problem.
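A practical starting point is to search the session's log files for error indicators; the commands below assume plain-text logs with a `.log` extension, which may differ in your installation:

```bash
SESSION="sessions/<session_id>"   # replace with your actual session directory

# Surface errors and timeouts recorded during the run
grep -inE "error|timeout" "$SESSION"/*.log
```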
By mastering these usage patterns and troubleshooting techniques, you can effectively harness the power of the T20 Multi-Agent System for a wide range of complex tasks.