# Settings


![](https://2516265397-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FJHPweZaUGqPeoTwjADbq%2Fuploads%2FuO6DOKJj2hf6dV3VsJzd%2Fsgf.png?alt=media\&token=684f4067-17f5-409a-a251-6d8ca111b395)\
The **Settings** tab is where you define, refine, and fine-tune how your agent behaves—both as a trader and as an AI personality.

<figure><img src="https://2516265397-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FJHPweZaUGqPeoTwjADbq%2Fuploads%2FFE3ZuauoK7d6XgOgaXP8%2Fimage.png?alt=media&#x26;token=b9bfff81-4a78-4f79-a2c4-b1f94607b8a3" alt="" width="563"><figcaption></figcaption></figure>

***

### Agent Status & Early Access

At the top of the page you’ll see the **Agent Status** card:

* Shows whether your agent is **Active** or **Deactivated**
* Displays your **CDX balance**
* Lets early supporters with **100,000 CDX on Base** claim **free early-access credits** to run the app

This is the operational control center for starting or pausing your agent.

***

### Core Personality

Shape how your agent communicates and responds:

* **Personality Description** – The fundamental character (e.g., professional, degen, humorous).
* **Communication Tone** – Formal, conversational, or anything in between.
* **Response Length** – Concise or more elaborate, depending on your needs.

Adjust these to give your trading agent a unique voice and presence.

***

### Thinking & Learning

Define the way your agent reasons and improves:

* **Direct** – Fast, simple tool calling. Best for straightforward tasks like checking a price or closing a position.
* **ReAct** – Think → Act → Observe loop. The agent reasons, takes an action, observes the result, then decides the next step. Great for multi-step tasks.
* **Chain of Thought** – Analyze → Plan → Execute. Sequential reasoning where each step builds on the last, producing a clear logical chain. Ideal for complex analytical tasks.
* **Graph of Thought** – Multi-perspective branching analysis. The agent explores the problem from multiple angles simultaneously, then synthesizes the best insights into a final answer. Best for nuanced decisions.
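To make the ReAct pattern concrete, here is a minimal, hypothetical sketch of its Think → Act → Observe loop. The tool and the decision policy are illustrative stubs, not Cod3x APIs:

```python
# Hypothetical sketch of a ReAct loop: the agent reasons about the next
# action, executes it, observes the result, and repeats until it decides
# to stop. Tool names and the decision logic are illustrative only.

def check_price(asset):
    # Stub tool: a real agent would query an exchange here.
    prices = {"ETH": 3000.0}
    return prices[asset]

def react_loop(task, max_steps=5):
    observations = []
    for _ in range(max_steps):
        # Think: choose the next action from the task and past observations.
        if not observations:
            action, arg = "check_price", "ETH"
        else:
            action, arg = "finish", None
        # Act / Observe: stop, or run the tool and record the result.
        if action == "finish":
            return observations
        result = check_price(arg)
        observations.append((action, arg, result))
    return observations

trace = react_loop("What is the ETH price?")
```

The loop terminates either when the agent decides it has enough information ("finish") or when the step budget runs out, which is what makes ReAct suited to open-ended multi-step tasks.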

***

### Trading Approach

This is where you control the **guardrails for every trade**.\
The trading style and strategy defined during agent creation are stored here for easy editing:

* Update risk parameters and execution logic
* Select your preferred exchange
* Refine trigger commands
* Enforce TPs and SLs
* Adjust growth targets and trading philosophy

> **Important:** If you want to modify your agent’s trading strategy later, this is the place to do it.

***

### Optimizations

Cod3x lets you fine-tune how each goal uses compute resources.\
These **Credit Usage Optimization** settings help reduce costs without affecting overall performance.

* **Skip Final Chat Formatting**\
  Returns raw model output instead of formatted text, producing much simpler output.
* **Skip Step Completion Assessment**\
  Bypasses additional verification logic after each step. This slightly increases speed and lowers credit use at the cost of skipping LLM self-checks.
* **Enable LLM Model Auto-Selection**\
  Allows Cod3x to automatically choose the most efficient model for each task based on complexity — using smaller models for simple steps and larger ones when needed.
* **Enable AI Commit Messages**\
  Allows AI to generate commit messages when making code changes, improving consistency across versions.
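The four toggles above can be pictured as a simple settings object. The field names below are hypothetical, not the actual Cod3x configuration schema:

```python
# Hypothetical representation of the Credit Usage Optimization toggles.
# Field names are illustrative; they do not reflect a real Cod3x schema.
optimizations = {
    "skip_final_chat_formatting": True,        # raw output, fewer credits
    "skip_step_completion_assessment": False,  # keep LLM self-checks
    "enable_llm_model_auto_selection": True,   # smaller models for simple steps
    "enable_ai_commit_messages": True,         # AI-written commit messages
}
```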

***

### Advanced Parameters

The **Advanced Parameters** section lets you fine-tune how your selected model behaves when running a goal.\
Cod3x supports hundreds of LLMs across providers like Anthropic, OpenAI, and xAI — each optimized for different types of reasoning and execution.

<figure><img src="https://2516265397-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FJHPweZaUGqPeoTwjADbq%2Fuploads%2FYaVy4SOwZLdeJisHujsw%2Fadv.png?alt=media&#x26;token=a0575e00-6f4f-44a8-8600-3238d1c42899" alt=""><figcaption></figcaption></figure>

If you have **Enable LLM Model Auto-Selection** enabled in the **Optimizations** modal, you can select the LLM model used for each of the three main reasoning steps in the Cod3x workflow.

**Claude Haiku 4.5** as the Primary and Simple models with **Sonnet 4.5** as the Main Analysis model offers a good balance between cost and quality. For optimal results, use **Sonnet 4.5** in all three slots.\
Our internal testing shows that Sonnet offers the best balance between **analytical precision**, **execution stability**, and **cost efficiency**.\
However, users can freely switch to alternative models depending on preference, cost targets, or desired response style.

<figure><img src="https://2516265397-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FJHPweZaUGqPeoTwjADbq%2Fuploads%2FhyQyyVnN6inJ3VMLjQKt%2Fazv.png?alt=media&#x26;token=01b2b2bc-f48d-4d11-b5af-6f7b54a2ef6c" alt=""><figcaption></figcaption></figure>

Our research has also shown that **Haiku 4.5** + **Kimi** is a very cost-effective combination with solid results.

***

**Model Settings**

* **Temperature** – Controls output creativity.\
  Lower values produce focused, deterministic responses; higher values encourage more exploratory ideas.
* **Top P** – Adjusts the probability range of word sampling, influencing output diversity and tone.
* **Frequency Penalty** – Reduces repetition across responses, ensuring less echoing or redundancy.
* **Presence Penalty** – Encourages the model to explore new topics and avoid looping around previous ideas.

These parameters allow advanced users to shape model behavior for **consistent execution**, **custom style**, or **maximized reasoning depth** — depending on the task.
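As a rough illustration, here is where these four parameters typically sit in a generic chat-completion request. The endpoint shape and model id are placeholders, not Cod3x-specific values:

```python
# Illustrative request payload showing how the Advanced Parameters map
# onto common LLM sampling settings. Model id and message content are
# placeholders, not actual Cod3x values.
payload = {
    "model": "example-model-id",  # placeholder
    "messages": [{"role": "user", "content": "Summarize today's market action."}],
    "temperature": 0.3,        # low = focused, deterministic output
    "top_p": 0.9,              # sample only from the top 90% probability mass
    "frequency_penalty": 0.2,  # discourage repeating the same tokens
    "presence_penalty": 0.1,   # nudge the model toward new topics
}
```

Lowering temperature and top_p together pushes the model toward consistent execution, while raising them favors more exploratory, varied responses.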

> ⚙️ *Tip:* If you’re unsure where to start, use the default settings. Cod3x automatically optimizes these values for balanced performance.

{% hint style="info" %}
These settings won't be live during the beta version.
{% endhint %}

***

### Feature Availability

Many personality and reasoning features are still being finalized.\
Except for **Trading Approach**, which is fully live, some options (like deep adaptive learning or full social engagement) may remain only partially functional during the beta.
