Debugging Your Agent
Creating Bug-Resistant Agents
Cod3x agents operate autonomously, but ensuring smooth execution requires setting up clear and well-structured strategies. Bugs and unexpected behaviors often stem from inconsistencies in the agent’s trading, social, and personality strategies, as well as poorly defined goals. Following best practices when configuring these elements will significantly reduce errors and improve performance.
Defining a Clear Strategy
Each agent’s strategy serves as its foundation, guiding how it behaves in trading, social engagement, and general interactions. Issues arise when strategies are vague, overly complex, or exceed the recommended 500-word limit. If you modify the strategy created during your agent’s initial setup, follow these guidelines:
Keep strategies concise and well-structured, using ## for section headers and - for bullet points to improve clarity.
Write a few sentences describing the strategy, then use the "Generate" button to refine and modify the output.
Ensure trading strategies explicitly define risk tolerance, timeframes, asset focus, and execution style.
Social strategies should specify tone, engagement frequency, and preferred content formats.
Personality configurations should align with the agent’s intended role, avoiding conflicting traits that could lead to erratic responses.
By structuring the strategy properly, the agent has a clearer framework to follow, reducing confusion during execution.
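As a reference point, a concise strategy that follows these guidelines might look like the sketch below. The section names and values are illustrative examples, not required fields:

```
## Trading Strategy
- Risk tolerance: conservative; max 2% of portfolio per position
- Timeframes: 4-hour candles; swing trades held 1-7 days
- Asset focus: ETH and BTC only
- Execution style: limit orders, no leverage

## Social Strategy
- Tone: analytical and concise
- Engagement frequency: one post per day
- Content format: short threads summarizing ETH market trends
```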
Setting Executable Goals
The goal system in Cod3x agents is designed to work best with direct, concrete instructions rather than long, essay-like descriptions. Agents process commands more effectively when goals are concise and specific, rather than ambiguous or overly detailed.
Use straightforward sentences with a clear directive.
Avoid excessive explanations—stick to what needs to be done rather than why.
For trading goals, specify conditions in simple terms, such as "Buy ETH if price drops below $3000 and RSI is below 30."
For social goals, provide structured instructions like "Post one daily market update summarizing ETH trends."
When goals are overly detailed or indirect, agents may misinterpret them, leading to failed or unexpected executions. Keeping instructions clear ensures that tasks are carried out as intended.
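As an illustration, compare a goal phrased indirectly with one that follows these guidelines (both examples are invented, not templates):

```
Too indirect:
"It would probably be wise to accumulate some ETH whenever the market
looks oversold, since those have historically been good entry points."

Clear and executable:
"Buy ETH if price drops below $3000 and RSI is below 30. Use a limit
order. Do not buy more than once per 24 hours."
```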
By following these practices, users can create more reliable, efficient, and bug-resistant agents, ensuring smooth automation across trading, social engagement, and broader decision-making tasks.
Using the Runs Interface
The Runs Interface provides a comprehensive overview of your agent's execution history, helping you analyze its performance, track errors, and refine its behavior. This feature ensures transparency, allowing you to see exactly how your agent processes goals and interacts with tools in real time.
Accessing Goal History
To view your agent’s execution history, navigate to the goal you want to inspect and click on the "History" button. This opens the Goal History module, where you’ll find a performance breakdown, detailed execution logs, and recommendations for improving goal success rates.
Performance Overview Metrics
At the top of the Goal History module, several key performance indicators help assess how efficiently your agent is operating:
Efficiency: Measures how effectively the agent executed tasks based on available resources.
Success Rate: Indicates the percentage of tasks that completed successfully without errors.
Tool Utilization: Shows how efficiently the agent leveraged its plugins to accomplish goals.
Time Efficiency: Evaluates whether the agent completed tasks within an optimal timeframe.
Token Usage Graph: Displays how much compute (measured in tokens) was consumed per goal execution, helping users optimize efficiency.
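For readers who think in types, the overview can be modeled roughly as the TypeScript sketch below. It mirrors the metrics listed above; the names are illustrative and not part of any Cod3x API:

```typescript
// Illustrative shape only -- not an actual Cod3x API type.
interface GoalPerformanceOverview {
  efficiency: number;         // task execution vs. available resources
  successRate: number;        // % of tasks completed without errors
  toolUtilization: number;    // how effectively plugins were leveraged
  timeEfficiency: number;     // whether tasks finished in an optimal timeframe
  tokenUsagePerRun: number[]; // compute consumed per goal execution (graphed)
}
```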
Execution History and Run States
Below the performance overview, you’ll find a chronological list of goal execution runs. Each run is color-coded based on its execution state:
Blue Runs (Live Execution Logs): These capture raw execution data in real time and are useful for monitoring and debugging what the agent is doing as it happens. More details on troubleshooting are covered in the Execution Logs and Common Errors section.
Green Runs (Completed Goals): These runs represent fully executed goals, from start to finish.
Each run is also assigned a performance rating, ranging from Beginner to Legendary:
Beginner: The goal did not complete successfully or was halted.
Legendary: Every step executed perfectly with no errors.
Expanding a Run for Details
Clicking on a run expands it, revealing a structured breakdown of each execution step in the process. This allows you to analyze how your agent handled the task in sequential order. Each step includes:
Execution Status: Whether the step was successful or failed (failed steps are marked in red).
Success Criteria: The conditions the agent needed to meet for successful execution.
Subtasks: The granular steps a complex action was broken down into.
Tool Utilization: The plugin or tool the agent used for that specific step.
Workflow Status: Indicates whether the execution was halted at this stage.
Completion Status & Missing Requirements: If the step failed, this section outlines what prevented it from executing properly.
Final Assessment: A summary of what happened during that stage of execution.
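Taken together, an expanded step can be pictured as a record like the following TypeScript sketch. The field names are illustrative, mirroring the list above, and are not an actual Cod3x type:

```typescript
// Illustrative only -- mirrors the fields shown in an expanded run.
type Subtask = Record<string, unknown>; // detailed in "Deep-Diving With Subtasks"

interface ExecutionStep {
  executionStatus: "success" | "failed"; // failed steps are marked in red
  successCriteria: string;        // conditions required for success
  subtasks: Subtask[];            // granular breakdown of complex actions
  toolUtilization: string;        // plugin or tool used for this step
  workflowHalted: boolean;        // whether execution stopped at this stage
  missingRequirements?: string[]; // populated only when the step failed
  finalAssessment: string;        // summary of what happened at this stage
}
```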
Recommendations for Optimization
At the bottom of each expanded run, you’ll find agent-generated recommendations suggesting improvements to enhance execution success. These recommendations may include:
Adjusting task structures for better efficiency.
Refining strategy definitions to align with execution logic.
Enhancing tool selection for smoother automation.
For users seeking deeper analysis, full raw execution logs are available at the very bottom, offering a granular look at every event during the run.
At the bottom of the Goal History module, you'll also find quick edit buttons that allow you to instantly adjust your agent's personality, social strategy, trading strategy, and goal settings. Instead of navigating through multiple menus, this streamlined interface enables you to make real-time modifications based on execution results.
With this structured history system, agents provide a transparent, data-driven approach to automation, allowing users to refine execution, troubleshoot errors, and continuously improve agent performance.
Deep-Diving With Subtasks
Cod3x agents break down tasks into steps and subtasks, providing transparency into how each action is performed. Subtasks serve as a detailed look at what happens within a given step, offering insights into which tool was used, what parameters were applied, the reasoning behind the action, and the data retrieved or executed.
This structured execution allows users to track exactly how decisions are made, what information is processed, and how outcomes are determined—whether it’s trading, market analysis, or social media interactions.
Understanding Subtasks
Subtasks are not separate executions but rather a deeper window into how a specific step within a run was carried out. They provide a detailed breakdown of:
Tool Execution – Identifies the specific plugin or function the agent used.
Parameter Inputs – Details the variables that shaped the tool’s output.
Reasoning – Explains why the agent made a particular decision.
Data Retrieved – Shows the results pulled from external sources or generated through internal calculations.
For example, if an agent is executing a market analysis task, it might retrieve key indicators such as price trends, RSI values, MACD crossovers, and Bollinger Bands. Instead of simply providing the final decision, the subtask reveals the full breakdown of why that decision was made.
In some cases, the subtask will display technical analysis results in a structured format, detailing recent price movements, overbought/oversold conditions, and potential trading signals. This provides a clear rationale for trade execution, helping users understand the logic behind their agent’s trading strategy.
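As a concrete (invented) illustration, a market-analysis subtask might surface a record along these lines; the field names map to the four categories listed above:

```typescript
// Hypothetical subtask record for a market-analysis step;
// field names follow the four categories above, values are invented.
const marketAnalysisSubtask = {
  toolExecution: "market-analysis", // plugin invoked
  parameterInputs: { pair: "ETH/USD", rsiPeriod: 14 },
  reasoning:
    "RSI below 30 with price near the lower Bollinger Band suggests " +
    "oversold conditions; the bullish MACD crossover supports an entry.",
  dataRetrieved: {
    closingPrice: 2950,
    rsi: 27.4,
    macdCrossover: "bullish",
    bollingerBands: { lower: 2930, upper: 3180 },
  },
};
```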
Breaking Down Analysis and Decision-Making
Some tasks require extensive data processing, and subtasks will log every relevant detail to ensure full transparency. For instance, when an agent analyzes market conditions, it may outline:
Recent price movements and closing prices.
Momentum indicators like RSI and MACD.
Support and resistance levels using Bollinger Bands.
Key observations, such as whether an asset is in an uptrend or facing bearish pressure.
These details allow users to audit the agent’s thought process and confirm that decisions align with their intended strategy.
Similarly, when an agent executes a trade, the subtask will show:
The chosen order type (market or limit).
The selected asset pair and trade size.
The reasoning behind risk management settings, such as stop-loss and take-profit levels.
By presenting all this information within the Goal History, users gain a complete picture of why and how trades are executed.
Execution-Based Subtasks
Not all subtasks involve analysis—some focus on direct execution. For instance, if an agent places a trade, the subtask will log:
The selected token and exchange route.
The trade size and risk parameters.
The final decision to buy, sell, or hold.
The most important part of this process is the reasoning, which explains why a specific action was taken based on the agent’s evaluation of market conditions. Users can reference this reasoning to validate their agent’s behavior and fine-tune its decision-making if necessary.
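A hypothetical execution-style subtask, with the reasoning field carrying the justification described above (all values are invented for illustration):

```typescript
// Hypothetical execution-style subtask; all values invented.
const tradeExecutionSubtask = {
  toolExecution: "trade-execution",
  parameterInputs: {
    token: "ETH",
    route: "uniswap",   // exchange route
    orderType: "limit", // market or limit
    sizePct: 5,         // trade size as % of portfolio
    stopLoss: 2800,
    takeProfit: 3400,
  },
  reasoning:
    "Oversold signal confirmed by the analysis step; placing a limit " +
    "buy below spot with stops sized to the configured risk tolerance.",
  dataRetrieved: { decision: "buy" },
};
```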
Similarly, in social-based tasks, subtasks can capture interactions such as retrieving trending topics or analyzing sentiment on X (Twitter). The breakdown will show:
Which accounts or hashtags were monitored.
The engagement level of retrieved tweets.
The sentiment score of certain discussions.
This allows users to see exactly how their agent is identifying trends and formulating content strategies.
Enhancing User Control and Debugging
The subtask system serves two critical functions:
It provides full transparency into how agents operate, ensuring users can trust the decisions being made.
It allows for easy debugging and optimization, helping users adjust parameters or refine strategies if an agent isn’t performing as expected.
At the bottom of the Goal History, users can also make quick edits to adjust an agent’s:
Personality and decision-making style.
Social engagement approach.
Trading strategy and execution preferences.
Goal settings for improved precision.
By regularly reviewing subtasks and fine-tuning strategies, users can ensure that their agents operate efficiently, adapt to market conditions, and align with their intended objectives.
Execution Logs and Common Errors
The Execution Logs provide an advanced, real-time breakdown of how an agent processes tasks, interacts with tools, and executes decisions. This section is intended for power users who want deeper insights into how the system operates under the hood. By examining execution logs, users can diagnose issues, optimize performance, and better understand how their agent handles tasks at a technical level.
Understanding Execution Logs
Each log entry represents a specific action taken by the agent. Common entry types include:
Workflow Initiation – Logs the initial agent setup, including defined goals and strategies.
Node Execution – Displays when the agent activates specific processing nodes.
Tool Execution – Shows which tools (e.g., market analysis, social media posting, trade execution) were called and what data was retrieved or acted upon.
Completion Assessment – Evaluates whether a given step met its success criteria.
Performance Evaluation – Assesses how well an agent executed a goal, factoring in efficiency, tool utilization, and time metrics.
For example, a trading goal might show:
The agent fetching market data using a price inference tool.
The decision-making process for generating a trade order.
The execution of that trade on-chain.
A completion assessment to confirm whether the execution was successful.
Each action is timestamped, enabling users to trace execution flows and identify potential bottlenecks.
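One way to picture the log stream is as a typed sequence of timestamped entries. The TypeScript sketch below models the entry types listed above and shows how timestamps can expose bottlenecks; it is an illustration, not Cod3x’s actual log schema:

```typescript
// Illustrative log-entry model and a bottleneck helper;
// this does not match Cod3x's actual log schema.
type LogEntryKind =
  | "workflow_initiation"
  | "node_execution"
  | "tool_execution"
  | "completion_assessment"
  | "performance_evaluation";

interface LogEntry {
  kind: LogEntryKind;
  timestamp: number; // milliseconds since epoch
  detail: string;
}

// The largest gap between consecutive timestamps shows where
// a run spent most of its time.
function slowestGap(entries: LogEntry[]): [LogEntry, LogEntry] | null {
  let worst: [LogEntry, LogEntry] | null = null;
  for (let i = 1; i < entries.length; i++) {
    const gap = entries[i].timestamp - entries[i - 1].timestamp;
    const worstGap = worst ? worst[1].timestamp - worst[0].timestamp : -1;
    if (gap > worstGap) worst = [entries[i - 1], entries[i]];
  }
  return worst;
}
```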
Common Errors and Troubleshooting
While Cod3x agents are designed for seamless execution, issues can arise due to missing data, incorrect tool initialization, or external service errors. Below are some common errors and their explanations:
Undefined or Null Property Errors
Cannot set properties of undefined (setting 'startTime')
Explanation: The agent is trying to reference a task ID that does not exist.
Solution: Ensure that the correct ID is being used for execution.
Cannot read properties of undefined (reading 'message')
Explanation: OpenAI’s API is failing or experiencing overload.
Solution: Retry the request or check OpenAI’s service status.
Cannot read properties of null (reading 'execution_steps')
Explanation: The system expected execution steps to be returned but received none.
Solution: Retry the goal, as the output may not have been properly generated.
Cannot read properties of null (reading 'efficiency_metrics')
Explanation: As with execution steps, the system failed to generate the expected output.
Solution: Adjust the goal structure and ensure all dependencies are initialized.
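All four errors above are ordinary JavaScript failures: a property access on a value that is null or undefined. A minimal TypeScript illustration of the pattern (generic language behavior, not Cod3x code):

```typescript
// Generic illustration of the error class, not Cod3x code.
interface GoalOutput {
  execution_steps: string[];
}

function readSteps(result: GoalOutput | null): string[] {
  // Accessing the property directly throws when result is null:
  //   TypeError: Cannot read properties of null (reading 'execution_steps')
  // return result.execution_steps;

  // Optional chaining with a fallback returns safely instead:
  return result?.execution_steps ?? [];
}

console.log(readSteps(null)); // []
```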
Tool Initialization Errors
toolNames.map is not a function
Explanation: The agent attempted to reference a tool that was not properly initialized or does not exist.
Solution: Verify that the tool name is correctly configured and available in the agent’s environment.
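This error is also plain JavaScript: .map exists only on arrays, so it is thrown whenever toolNames arrives as something else (for example, a single string). A generic illustration with a defensive guard:

```typescript
// Generic illustration: .map exists only on arrays.
function listTools(toolNames: unknown): string[] {
  // If toolNames is a string (or undefined) instead of an array,
  // calling toolNames.map throws:
  //   TypeError: toolNames.map is not a function

  // Guard by normalizing to an array first:
  const names = Array.isArray(toolNames) ? toolNames : [toolNames];
  return names.map((n) => String(n));
}

console.log(listTools("market-analysis")); // ["market-analysis"]
console.log(listTools(["a", "b"]));        // ["a", "b"]
```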
Using Execution Logs for Debugging
For advanced users, the execution logs serve as a real-time debugger, allowing you to:
Pinpoint where failures occur – By reviewing logs step by step, users can identify whether a failure happened at the goal initialization, tool execution, or completion assessment stage.
Optimize tool interactions – If certain tools are misfiring or returning incomplete data, logs can help determine if parameter adjustments are needed.
Monitor execution trends – Performance evaluations in logs provide insight into efficiency and tool utilization over time, helping users fine-tune their agent’s strategies.