This article provides a comprehensive guide to understanding and configuring AI model nodes within the AgentRunner platform. AgentRunner is an agent builder platform that uses a node editor, offering a flexible way to design and deploy AI-powered agents. This guide focuses on the settings and functionalities available for integrating various AI models, including OpenAI, Gemini, and Anthropic, into your agents.
Configuring AI Model Nodes: A Deep Dive
AI Model Nodes are a foundational feature in AgentRunner, enabling users to harness the power of large language models (LLMs) from providers like OpenAI, Gemini, and Anthropic. Configuring these nodes correctly is essential for building effective agents. This section will walk you through the different settings available for these nodes, explaining how each one affects the behavior of your AI agent. Proper configuration ensures that your agent behaves as expected, generating accurate and relevant responses. Whether you're a seasoned AI developer or just starting out, understanding these configurations is key to unlocking the full potential of AgentRunner.
Understanding Prompts: System, User, and Assistant (Model)
Prompts are a crucial aspect of AI model nodes, guiding the behavior and output of the AI model. AgentRunner supports three types of prompts: system, user, and assistant, each serving a distinct purpose in shaping the AI's response.
System Prompt: The system prompt sets the context for the AI model. It provides high-level instructions, defining the role the AI should assume, the overall task it should perform, and any constraints or guidelines it should follow. For example, you might instruct the AI to act as a helpful customer service representative or a knowledgeable research assistant.
User Prompt: The user prompt represents the specific input or query from the user that the AI model needs to respond to. This is the direct question, request, or instruction that the user is providing to the agent. The AI model will use this prompt, in conjunction with the system prompt, to generate an appropriate response.
Assistant (Model) Prompt: The assistant prompt supplies example outputs that show the AI model what a good answer should look like, in both content and format.
Using the three prompt types together effectively, as illustrated in the sketch below, helps improve the accuracy and consistency of the model’s responses.
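For readers who also work with the provider APIs directly, the sketch below shows how the three prompt types map onto chat message roles. It uses the OpenAI Python SDK as an assumption about tooling; the model name and the wording of the prompts are purely illustrative.

```python
# Minimal sketch: mapping the three prompt types onto chat roles when calling
# a provider API directly (OpenAI Python SDK shown; the same idea applies to
# Gemini and Anthropic). Model name and prompt text are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        # System prompt: role, task, and constraints for the model
        {"role": "system", "content": "You are a concise customer service assistant."},
        # Assistant (model) prompt: an example exchange showing the desired output format
        {"role": "user", "content": "Where is my order #1234?"},
        {"role": "assistant", "content": "Order #1234 shipped on May 2 and arrives May 6."},
        # User prompt: the actual query the agent must answer
        {"role": "user", "content": "Can I change the delivery address?"},
    ],
)
print(response.choices[0].message.content)
```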
Controls: Model Selection and Settings
The "controls" section of the node settings allows you to specify which AI model to use and configure its settings. This is where you choose the specific model from OpenAI, Gemini, or Anthropic that best suits your needs, and fine-tune its behavior.
Model Selection: AgentRunner supports a variety of models (see the identifier sketch after this list), including:
OpenAI: GPT-4o, GPT-4o mini, o1-mini, o1-preview, GPT-4.5-preview
Gemini: Gemini 2.0 Flash, Gemini 2.0 Flash-Lite, Gemini 1.5 Pro, Gemini 1.5 Flash, Gemini 1.5 Flash-8B
Anthropic: Claude 3.7 Sonnet: Latest, Claude 3.7 Sonnet: 20250219, Claude 3.5 Haiku: Latest, Claude 3.5 Haiku: 20241022, Claude 3.5 Sonnet: Latest, Claude 3.5 Sonnet: 20241022, Claude 3.5 Sonnet: 20240620, Claude 3 Opus: Latest, Claude 3 Opus: 20240229, Claude 3 Haiku: 20240307
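As a point of reference, the display names above generally correspond to the providers’ own API model identifiers. The mapping below is illustrative and covers only a few entries; AgentRunner’s internal naming may differ.

```python
# Illustrative mapping from a few display names to the providers' API model IDs.
MODEL_IDS = {
    "GPT-4o": "gpt-4o",
    "GPT-4o mini": "gpt-4o-mini",
    "Gemini 2.0 Flash": "gemini-2.0-flash",
    "Gemini 1.5 Pro": "gemini-1.5-pro",
    "Claude 3.7 Sonnet: Latest": "claude-3-7-sonnet-latest",
    "Claude 3.5 Sonnet: 20241022": "claude-3-5-sonnet-20241022",
}
```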
Model Settings: Each model exposes its own set of required settings and parameters (see the sketch after this list), such as:
Temperature: Controls the randomness of the output. Lower values result in more predictable responses, while higher values introduce more creativity.
TopP: Also known as nucleus sampling, this setting controls the diversity of the output by sampling only from the smallest set of tokens whose cumulative probability reaches the threshold P.
TopK: Limits the selection of the next token to the K most likely tokens.
Maximum Tokens: Sets the maximum length of the generated response.
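To make these settings concrete, here is a minimal sketch of the same four parameters passed to a provider API directly, using the Anthropic Python SDK as an assumption about tooling. The values are illustrative, not recommendations, and the node’s own fields may spell the parameters slightly differently.

```python
# Sketch of the four settings above in a direct Anthropic API call.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    system="You are a knowledgeable research assistant.",
    messages=[{"role": "user", "content": "Summarize nucleus sampling in two sentences."}],
    temperature=0.3,  # lower values give more predictable output
    top_p=0.9,        # nucleus sampling: keep the most likely tokens up to 90% cumulative probability
    top_k=40,         # consider at most the 40 most likely next tokens
    max_tokens=256,   # upper bound on the length of the generated response
)
print(response.content[0].text)
```

Providers generally advise adjusting Temperature or TopP rather than both at once; both appear here only to show where each setting goes.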
Each model exposes the same settings it has in its provider’s native interface, ensuring a consistent experience. You will need to use your own API key to access these models. You can also store API keys and other sensitive information as environment variables or secrets via private inputs, which cannot be seen by the people who run the agent.
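A small sketch of that pattern, assuming the provider’s conventional OPENAI_API_KEY environment variable: the key never appears in the node configuration itself.

```python
import os
from openai import OpenAI

# Read the key from the environment (or from AgentRunner's private inputs/secrets)
# rather than hard-coding it in a prompt or node setting.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
```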
Testing and History: Ensuring Optimal Performance
Testing and history logs are vital tools for refining and maintaining the performance of AI model nodes within AgentRunner. These features offer a way to experiment with different inputs and configurations while keeping a record of past node executions for analysis and debugging. By leveraging these tools, you can optimize your AI agents for accuracy, reliability, and overall effectiveness, ensuring that they deliver consistent and relevant responses. This section provides a detailed look at the "test" and "history" settings, highlighting their importance in the development process.
Testing Nodes
The "test" setting allows you to add inputs and test the node directly within the AgentRunner interface. This feature enables you to experiment with different prompts and settings, observing how the AI model responds in real-time.
Input Data: The supported data type for inputs is text.
Iteration and Refinement: This testing process lets you refine your prompts and model settings iteratively, ensuring that the AI model behaves as expected across a range of inputs (a scripted equivalent is sketched below).
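If you prefer to prototype prompt variations outside the editor, a short script can approximate what the node’s test feature does: run the same system prompt and settings over several text inputs and compare the outputs. The snippet below is a sketch under the assumption that you call the OpenAI API directly; the inputs and prompt are illustrative.

```python
from openai import OpenAI

client = OpenAI()
SYSTEM = "You are a concise customer service assistant."

# Test inputs are plain text, matching the node's supported input type.
test_inputs = [
    "What is your return policy?",
    "My package arrived damaged.",
    "Do you ship internationally?",
]

for text in test_inputs:
    result = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0.2,
        messages=[{"role": "system", "content": SYSTEM},
                  {"role": "user", "content": text}],
    )
    print(f"IN : {text}\nOUT: {result.choices[0].message.content}\n")
```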
History Logs
The "history" log provides a record of all node runs, allowing you to track performance and debug any issues.
Information Included: The history log includes:
Timestamp of the run
User who initiated the run
Input values used
Prompts used
Output generated by the node
Accessing Logs: Individual node histories cannot be filtered or searched directly, but you can access the logs for the entire agent by clicking "Logs" in the agent editor, where you can filter, search, and see more detailed data about each run.
Agent Logs: The agent logs can be searched by Run ID and filtered in various ways, and they let you inspect each individual node’s history within that run in more detail (a hypothetical log entry is sketched below).
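AgentRunner’s exact log schema is not documented here, but based on the fields listed above, a single history entry might look roughly like the hypothetical record below, and filtering exported entries by Run ID is a one-liner.

```python
# Hypothetical history entry; field names are assumptions based on the list above.
history_entry = {
    "timestamp": "2025-03-01T14:22:05Z",
    "user": "jane.doe",
    "run_id": "run_8f3a2c",
    "inputs": {"text": "Where is my order #1234?"},
    "prompts": {"system": "You are a concise customer service assistant.",
                "user": "Where is my order #1234?"},
    "output": "Order #1234 shipped on May 2 and arrives May 6.",
}

def entries_for_run(entries, run_id):
    """Return only the entries that belong to a given Run ID."""
    return [e for e in entries if e["run_id"] == run_id]
```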
Additional Considerations for Advanced Usage
As you become more experienced with AgentRunner, you may encounter more complex scenarios that require advanced techniques. While there are no limits on the length or complexity of prompts, it is advisable to split complex agents into multiple steps, with each AI Model Node handling a narrower part of the task. This approach helps reduce the likelihood of hallucination.
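A sketch of that decomposition, expressed as two direct API calls rather than two connected nodes: the first step extracts only the relevant facts, and the second answers strictly from those facts. The OpenAI SDK, model name, and prompts are assumptions for illustration.

```python
from openai import OpenAI

client = OpenAI()

def ask(system: str, user: str) -> str:
    r = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0.2,
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return r.choices[0].message.content

document = "..."  # placeholder for the source text the agent should rely on
question = "When does the warranty expire?"

# Step 1: pull out only the facts needed to answer, without inventing any.
facts = ask("Extract the facts relevant to the user's question as a bullet list. "
            "If a fact is not in the document, do not invent it.",
            f"Document:\n{document}\n\nQuestion: {question}")

# Step 2: answer using only the extracted facts.
answer = ask("Answer the question using only the provided facts.",
             f"Facts:\n{facts}\n\nQuestion: {question}")
print(answer)
```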
Currently, the node cannot be configured to automatically retry failed requests to the AI model; if a request fails, you must run it again manually. There are also no rate limits or usage restrictions associated with using these AI models through the node, and anyone with access to the agent editor can view and configure these node settings.
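Because the node itself will not retry, any retry logic has to live outside it. For code that calls a provider API directly, a simple wrapper like the sketch below (OpenAI SDK assumed; delays and attempt counts are arbitrary) is a common pattern.

```python
import time
from openai import OpenAI

client = OpenAI()

def call_with_retries(messages, attempts=3, base_delay=2.0):
    """Retry a chat completion a few times with a growing delay between attempts."""
    for attempt in range(1, attempts + 1):
        try:
            r = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
            return r.choices[0].message.content
        except Exception:  # broad catch for the sketch; narrow this in real code
            if attempt == attempts:
                raise
            time.sleep(base_delay * attempt)  # wait longer after each failure
```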