---
sidebar_position: 2
slug: /agent_component
---

# Agent component

The component equipped with reasoning, tool usage, and multi-agent collaboration capabilities.

---

An **Agent** component configures the LLM and sets its prompts. From v0.20.1 onwards, an **Agent** component can work independently, with the following capabilities:

- Autonomous reasoning, with reflection and adjustment based on environmental feedback.
- Use of tools or subagents to complete tasks.

## Scenarios

An **Agent** component is essential when you need the LLM to assist with tasks such as summarizing, translating, or coordinating multi-step operations.

## Configurations

### Model

Click the dropdown menu of **Model** to show the model configuration window.

- **Model**: The chat model to use.
  - Ensure you set the chat model correctly on the **Model providers** page.
  - You can use different models for different components to increase flexibility or improve overall performance.
- **Freedom**: A shortcut to the **Temperature**, **Top P**, **Presence penalty**, and **Frequency penalty** settings, indicating the model's level of freedom. Each preset corresponds to a unique combination of **Temperature**, **Top P**, **Presence penalty**, and **Frequency penalty**.
  This parameter has three options:
  - **Improvise**: Produces more creative responses.
  - **Precise**: (Default) Produces more conservative responses.
  - **Balance**: A middle ground between **Improvise** and **Precise**.
- **Temperature**: The randomness level of the model's output.
  Defaults to 0.1.
  - Lower values lead to more deterministic and predictable outputs.
  - Higher values lead to more creative and varied outputs.
  - A temperature of zero results in the same output for the same prompt.
- **Top P**: Nucleus sampling.
  - Reduces the likelihood of generating repetitive or unnatural text by setting a threshold *P* and restricting sampling to the smallest set of most probable tokens whose cumulative probability exceeds *P*.
  - Defaults to 0.3.
- **Presence penalty**: Encourages the model to include a more diverse range of tokens in the response.
  - A higher **presence penalty** value makes the model more likely to generate tokens that have not yet appeared in the generated text.
  - Defaults to 0.4.
- **Frequency penalty**: Discourages the model from repeating the same words or phrases too frequently in the generated text.
  - A higher **frequency penalty** value makes the model more conservative in its use of repeated tokens.
  - Defaults to 0.7.
- **Max tokens**: The maximum number of tokens the model can generate in a single response.

:::tip NOTE
- It is not necessary to stick with the same model for all components. If a specific model is not performing well for a particular task, consider using a different one.
- If you are uncertain about the mechanism behind **Temperature**, **Top P**, **Presence penalty**, and **Frequency penalty**, simply choose one of the three **Freedom** options.
:::
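
To make these settings concrete, the sketch below shows how they would map onto an OpenAI-compatible chat-completions request. This is illustrative only, not RAGFlow's internal API; the model name and prompts are placeholders, and the parameter values mirror the defaults listed above.

```python
# Illustrative only: how the sampling settings above map onto an
# OpenAI-compatible chat-completions request. This is NOT RAGFlow's
# internal API; the model name and endpoint are placeholder assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder chat model
    messages=[
        {"role": "system", "content": "You are a concise summarizer."},
        {"role": "user", "content": "Summarize: ..."},
    ],
    temperature=0.1,        # randomness; 0.1 is the component's default
    top_p=0.3,              # nucleus sampling threshold; default 0.3
    presence_penalty=0.4,   # favors tokens not yet seen; default 0.4
    frequency_penalty=0.7,  # discourages repeated tokens; default 0.7
    max_tokens=512,         # cap on the length of the generated reply
)
print(response.choices[0].message.content)
```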

### System prompt

Typically, you use the system prompt to describe the task for the LLM, specify how it should respond, and outline other miscellaneous requirements. We do not elaborate on this topic here, as it can be as extensive as prompt engineering itself. However, be aware that the system prompt is often used in conjunction with keys (variables), which serve as various data inputs for the LLM.

:::danger IMPORTANT
An **Agent** component relies on keys (variables) to specify its data inputs. Its immediate upstream component is *not* necessarily its data input; the arrows in the workflow indicate *only* the processing sequence. Keys in an **Agent** component are used in conjunction with the system prompt to specify data inputs for the LLM. Type a forward slash `/` or click the **(x)** button to show the keys to use.
:::
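
As an illustration of the pattern, the sketch below combines a system prompt template with a key. The key name `retrieval_output` is hypothetical; in the canvas, you insert the actual keys available in your workflow via `/` or the **(x)** button rather than typing them by hand.

```python
# Illustrative only: a system prompt combined with a key (variable).
# The key name "retrieval_output" is hypothetical; insert real keys
# through the UI rather than typing them manually.
SYSTEM_PROMPT_TEMPLATE = """You are a customer-support assistant.
Answer the user's question using ONLY the reference material below.

Reference material:
{retrieval_output}
"""

def build_system_prompt(inputs: dict) -> str:
    """Substitute each key's current value into the prompt template."""
    return SYSTEM_PROMPT_TEMPLATE.format(**inputs)

print(build_system_prompt({"retrieval_output": "...chunks from a Retrieval tool..."}))
```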

### User prompt

The user-defined prompt. Defaults to `sys.query`, the user query.

### Tools

You can use an **Agent** component as a collaborator that reasons and reflects with the aid of other tools; for instance, **Retrieval** can serve as one such tool for an **Agent**.

### Agent

You can use an **Agent** component as a collaborator that reasons and reflects with the aid of subagents or other tools, forming a multi-agent system.

### Message window size

An integer specifying the number of previous dialogue rounds to feed into the LLM. For example, if it is set to 12, the tokens from the last 12 dialogue rounds are fed to the LLM. This feature consumes additional tokens.

:::tip IMPORTANT
This feature is used for multi-turn dialogue *only*.
:::
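
For intuition, here is a minimal sketch of how a message window might be applied before calling the model, assuming one dialogue round is a (user, assistant) message pair. It illustrates the idea only and is not RAGFlow's implementation.

```python
# Illustrative only: applying a message window of N dialogue rounds
# before a model call. One "round" is assumed to be a (user, assistant)
# message pair; this is a sketch, not RAGFlow's code.
def apply_message_window(history: list[dict], window_size: int) -> list[dict]:
    # Keep only the most recent `window_size` rounds (2 messages per round).
    return history[-2 * window_size:]

history = [
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello! How can I help?"},
    {"role": "user", "content": "Summarize my last report."},
    {"role": "assistant", "content": "Here is a summary: ..."},
]
print(apply_message_window(history, window_size=1))  # last round only
```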

### Max retries

Defines the maximum number of attempts the agent will make to retry a failed task or operation before stopping or reporting failure.

### Delay after error

The waiting period in seconds that the agent observes before retrying a failed task, helping to prevent immediate repeated attempts and allowing system conditions to improve. Defaults to 1 second.
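
Taken together, **Max retries** and **Delay after error** behave like a standard retry loop. The sketch below shows that pattern under the assumption that a failing task raises an exception; `task` is a hypothetical callable, and this is not RAGFlow's implementation.

```python
# Illustrative only: how "Max retries" and "Delay after error" might
# interact. `task` is a hypothetical callable; this sketches the retry
# pattern, not RAGFlow's internals.
import time

def run_with_retries(task, max_retries: int = 3, delay_after_error: float = 1.0):
    last_error = None
    for attempt in range(max_retries + 1):  # initial attempt + retries
        try:
            return task()
        except Exception as exc:
            last_error = exc
            if attempt < max_retries:
                time.sleep(delay_after_error)  # wait before the next attempt
    raise RuntimeError(f"Task failed after {max_retries} retries") from last_error
```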

### Max rounds

Defines the maximum number of reflection rounds for the selected chat model. Defaults to 1 round.

:::tip NOTE
Increasing this value will significantly extend your agent's response time.
:::
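
To see why extra rounds cost time, here is a minimal sketch of a bounded reflection loop. `generate` and `critique` are hypothetical stand-ins for chat-model calls; each additional round adds up to two more model calls.

```python
# Illustrative only: a bounded reflection loop. `generate` and `critique`
# are hypothetical stand-ins for chat-model calls; `max_rounds` caps how
# many reflection passes run, so higher values mean more model calls.
def reflect(prompt: str, generate, critique, max_rounds: int = 1) -> str:
    answer = generate(prompt)
    for _ in range(max_rounds):
        feedback = critique(prompt, answer)
        if feedback is None:  # the critique is satisfied; stop early
            break
        answer = generate(f"{prompt}\n\nRevise using this feedback:\n{feedback}")
    return answer
```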

### Output

The global variable name for the output of the **Agent** component, which can be referenced by other components in the workflow.
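
As a mental model, you can think of the output as a named entry in a shared context that downstream components read. The sketch below is purely illustrative; the context dict and the name `agent_summary` are hypothetical, and on the canvas you reference the output by its variable name instead.

```python
# Illustrative only: a toy workflow context showing why a named output
# matters. The context dict and component names are hypothetical.
context: dict[str, str] = {}

# An upstream Agent component writes its result under its output name...
context["agent_summary"] = "The report covers Q3 revenue and churn."

# ...and a downstream component reads it by that same name.
downstream_prompt = f"Translate into French:\n{context['agent_summary']}"
print(downstream_prompt)
```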