Workflow tutorial

This is an experimental feature and is not recommended for use in a production environment. For updates on the progress of the feature or if you want to leave feedback, see the associated GitHub issue.

You can automate the setup of common use cases, such as conversational chat, using a Chain-of-Thought (CoT) agent. An agent orchestrates and runs ML models and tools, and a tool performs a specific set of tasks. This page presents a complete example of setting up a CoT agent. For more information about agents and tools, see Agents and tools.

The setup requires the following sequence of API requests, with provisioned resources used in subsequent requests. The following list provides an overview of the steps required for this workflow. The step names correspond to the names in the template:

  1. Deploy a model on the cluster:
    • create_connector_1: Create a connector to an externally hosted model.
    • register_model_2: Register the model using the connector.
    • deploy_model_3: Deploy the registered model to the cluster.
  2. Use the deployed model for inference:
    • Set up several tools that perform specific tasks:
      • cat_index_tool: Set up a tool that obtains index information.
      • ml_model_tool: Set up a tool that runs an ML model.
    • Set up one or more agents that use some combination of the tools:
      • sub_agent: Create an agent that uses the cat_index_tool.
    • Set up tools representing these agents:
      • agent_tool: Wrap the sub_agent so that you can use it as a tool.
    • root_agent: Set up a root agent that may delegate the task to either a tool or another agent.

The following sections describe the steps in detail. For the complete workflow template, see Complete YAML workflow template.

Workflow graph

The workflow described in the previous section is organized into a template. Note that you can order the steps in several ways. In the example template, the ml_model_tool step is specified right before the root_agent step, but you can specify it at any point after the deploy_model_3 step and before the root_agent step. The following diagram shows the directed acyclic graph (DAG) that OpenSearch creates for all of the steps in the order specified in the template.

Example workflow steps graph
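To make the DAG concrete, the ordering constraints in this workflow can be checked with a short standalone Python sketch. This uses the standard library's graphlib and is purely an illustration of the graph, not part of OpenSearch:

```python
from graphlib import TopologicalSorter

# Edges copied from the example template: (source, dest) means that
# dest consumes an output of source, so source must run first.
edges = [
    ("create_connector_1", "register_model_2"),
    ("register_model_2", "deploy_model_3"),
    ("cat_index_tool", "sub_agent"),
    ("deploy_model_3", "sub_agent"),
    ("sub_agent", "agent_tool"),
    ("deploy_model_3", "ml_model_tool"),
    ("deploy_model_3", "root_agent"),
    ("ml_model_tool", "root_agent"),
    ("agent_tool", "root_agent"),
]

# TopologicalSorter expects a mapping of node -> set of predecessors.
graph = {}
for source, dest in edges:
    graph.setdefault(dest, set()).add(source)
    graph.setdefault(source, set())

# Any order returned here is a valid provisioning order for the steps.
order = list(TopologicalSorter(graph).static_order())
print(order)
```

Because every other step ultimately feeds the root agent, root_agent is always last in any valid ordering, which is why the template has some freedom in where steps like ml_model_tool appear.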

1. Deploy a model on the cluster

To deploy a model on the cluster, you need to create a connector to the model, register the model, and deploy the model.

create_connector_1

The first step in the workflow is to create a connector to an externally hosted model (in the following example, this step is called create_connector_1). The content of the user_inputs field exactly matches the ML Commons Create Connector API:

nodes:
- id: create_connector_1
  type: create_connector
  user_inputs:
    name: OpenAI Chat Connector
    description: The connector to public OpenAI model service for GPT 3.5
    version: '1'
    protocol: http
    parameters:
      endpoint: api.openai.com
      model: gpt-3.5-turbo
    credential:
      openAI_key: '12345'
    actions:
    - action_type: predict
      method: POST
      url: https://${parameters.endpoint}/v1/chat/completions

When you create a connector, OpenSearch returns a connector_id, which you need in order to register the model.
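For reference, the response body contains the new connector ID; a representative response looks like the following (the ID is a placeholder):

```json
{
  "connector_id": "<connector_id>"
}
```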

register_model_2

When registering a model, the previous_node_inputs field tells OpenSearch to obtain the required connector_id from the output of the create_connector_1 step. Other inputs required by the Register Model API are included in the user_inputs field:

- id: register_model_2
  type: register_remote_model
  previous_node_inputs:
    create_connector_1: connector_id
  user_inputs:
    name: openAI-gpt-3.5-turbo
    function_name: remote
    description: test model

The output of this step is a model_id. You must then deploy the registered model to the cluster.

deploy_model_3

The Deploy Model API requires the model_id from the previous step, as specified in the previous_node_inputs field:

- id: deploy_model_3
  type: deploy_model
  # This step needs the model_id produced as an output of the previous step
  previous_node_inputs:
    register_model_2: model_id

When you call the Deploy Model API directly, it returns a task ID, and you must poll the Tasks API to determine when the deployment is complete. The automated workflow eliminates this manual status check and returns the final model_id directly.
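For comparison, the manual sequence when calling the APIs directly looks roughly like the following (ML Commons endpoints; IDs and response bodies are abbreviated placeholders):

```
POST /_plugins/_ml/models/<model_id>/_deploy
# Returns: { "task_id": "<task_id>", "status": "CREATED" }

GET /_plugins/_ml/tasks/<task_id>
# Returns: { "state": "COMPLETED", "model_id": "<model_id>" }
```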

Ordering steps

To order these steps in a sequence, you must connect them by an edge in the graph. When a previous_node_inputs field is present in a step, OpenSearch automatically creates an edge with source and dest fields for this step. The output of the source is required as input for the dest. For example, the register_model_2 step requires the connector_id from the create_connector_1 step. Similarly, the deploy_model_3 step requires the model_id from the register_model_2 step. Thus, OpenSearch creates the first two edges in the graph as follows, matching each output with the required input and raising an error if a required input is missing:

edges:
- source: create_connector_1
  dest: register_model_2
- source: register_model_2
  dest: deploy_model_3

If you define previous_node_inputs, then defining edges is optional.
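Explicit edges are mainly useful for forcing an order between steps that share no inputs. For example, to require that the otherwise independent cat_index_tool step run before ml_model_tool, you could add a manual edge (a hypothetical ordering, shown only to illustrate the syntax):

```yaml
edges:
- source: cat_index_tool
  dest: ml_model_tool
```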

2. Use the deployed model for inference

A CoT agent can use the deployed model in a tool. This step doesn’t strictly correspond to an API but represents a component of the body required by the Register Agent API. This simplifies the register request and allows reuse of the same tool in multiple agents. For more information about agents and tools, see Agents and tools.

cat_index_tool

You can configure other tools to be used by the CoT agent. For example, you can configure a cat_index_tool as follows. This tool does not depend on any previous steps:

- id: cat_index_tool
  type: create_tool
  user_inputs:
    name: CatIndexTool
    type: CatIndexTool
    parameters:
      max_iteration: 5

sub_agent

To use the cat_index_tool in the agent configuration, specify it as one of the tools in the previous_node_inputs field of the agent. You can add other tools to previous_node_inputs as necessary. The agent also needs a large language model (LLM) in order to reason with the tools. The LLM is defined by the llm.model_id field. This example assumes that the model_id from the deploy_model_3 step will be used. However, if another model is already deployed, the model_id of that previously deployed model could be included in the user_inputs field instead:

- id: sub_agent
  type: register_agent
  previous_node_inputs:
    # When llm.model_id is not present this can be used as a fallback value
    deploy_model_3: model_id
    cat_index_tool: tools
  user_inputs:
    name: Sub Agent
    type: conversational
    description: this is a test agent
    parameters:
      hello: world
    llm.parameters:
      max_iteration: '5'
      stop_when_no_tool_found: 'true'
    memory:
      type: conversation_index
    app_type: chatbot

OpenSearch will automatically create the following edges so that the agent can retrieve the required fields from the previous nodes:

- source: cat_index_tool
  dest: sub_agent
- source: deploy_model_3
  dest: sub_agent

agent_tool

You can use an agent as a tool for another agent. Registering an agent produces an agent_id in the output. The following step defines a tool that uses the agent_id from the previous step:

- id: agent_tool
  type: create_tool
  previous_node_inputs:
    sub_agent: agent_id
  user_inputs:
    name: AgentTool
    type: AgentTool
    description: Agent Tool
    parameters:
      max_iteration: 5

OpenSearch automatically creates an edge connection because this step specifies previous_node_inputs:

- source: sub_agent
  dest: agent_tool

ml_model_tool

A tool may reference an ML model. This example gets the required model_id from the model deployed in a previous step:

- id: ml_model_tool
  type: create_tool
  previous_node_inputs:
    deploy_model_3: model_id
  user_inputs:
    name: MLModelTool
    type: MLModelTool
    alias: language_model_tool
    description: A general tool to answer any question.
    parameters:
      prompt: Answer the question as best you can.
      response_filter: choices[0].message.content

OpenSearch automatically creates an edge for the previous_node_inputs source:

- source: deploy_model_3
  dest: ml_model_tool

root_agent

A conversational chat application will communicate with a single root agent that includes the ML model tool and the agent tool in its tools field. It will also obtain the llm.model_id from the deployed model. Some agents require tools to be in a specific order, which can be enforced by including the tools_order field in the user inputs:

- id: root_agent
  type: register_agent
  previous_node_inputs:
    deploy_model_3: model_id
    ml_model_tool: tools
    agent_tool: tools
  user_inputs:
    name: DEMO-Test_Agent_For_CoT
    type: conversational
    description: this is a test agent
    parameters:
      prompt: Answer the question as best you can.
    llm.parameters:
      max_iteration: '5'
      stop_when_no_tool_found: 'true'
    tools_order: ['agent_tool', 'ml_model_tool']
    memory:
      type: conversation_index
    app_type: chatbot

OpenSearch automatically creates edges for the previous_node_inputs sources:

- source: deploy_model_3
  dest: root_agent
- source: ml_model_tool
  dest: root_agent
- source: agent_tool
  dest: root_agent

For the complete DAG that OpenSearch creates for this workflow, see the workflow graph.

Complete YAML workflow template

The following is the final template including all of the provision workflow steps in YAML format:

YAML template

# This template demonstrates provisioning the resources for a
# Chain-of-Thought chat bot
name: tool-register-agent
description: test case
use_case: REGISTER_AGENT
version:
  template: 1.0.0
  compatibility:
  - 2.12.0
  - 3.0.0
workflows:
  # This workflow defines the actions to be taken when the Provision Workflow API is used
  provision:
    nodes:
    # The first three nodes create a connector to a remote model, then register and deploy that model
    - id: create_connector_1
      type: create_connector
      user_inputs:
        name: OpenAI Chat Connector
        description: The connector to public OpenAI model service for GPT 3.5
        version: '1'
        protocol: http
        parameters:
          endpoint: api.openai.com
          model: gpt-3.5-turbo
        credential:
          openAI_key: '12345'
        actions:
        - action_type: predict
          method: POST
          url: https://${parameters.endpoint}/v1/chat/completions
    - id: register_model_2
      type: register_remote_model
      previous_node_inputs:
        create_connector_1: connector_id
      user_inputs:
        # deploy: true could be added here instead of the deploy step below
        name: openAI-gpt-3.5-turbo
        description: test model
    - id: deploy_model_3
      type: deploy_model
      previous_node_inputs:
        register_model_2: model_id
    # For example purposes, the model_id obtained as the output of the deploy_model_3 step will be used
    # for several steps below. However, any other deployed model_id can be used for those steps.
    # This is one example tool from the Agent Framework.
    - id: cat_index_tool
      type: create_tool
      user_inputs:
        name: CatIndexTool
        type: CatIndexTool
        parameters:
          max_iteration: 5
    # This simple agent only has one tool, but could be configured with many tools
    - id: sub_agent
      type: register_agent
      previous_node_inputs:
        deploy_model_3: model_id
        cat_index_tool: tools
      user_inputs:
        name: Sub Agent
        type: conversational
        parameters:
          hello: world
        llm.parameters:
          max_iteration: '5'
          stop_when_no_tool_found: 'true'
        memory:
          type: conversation_index
        app_type: chatbot
    # An agent can itself be used as a tool in a nested relationship
    - id: agent_tool
      type: create_tool
      previous_node_inputs:
        sub_agent: agent_id
      user_inputs:
        name: AgentTool
        type: AgentTool
        parameters:
          max_iteration: 5
    # An ML model can be used as a tool
    - id: ml_model_tool
      type: create_tool
      previous_node_inputs:
        deploy_model_3: model_id
      user_inputs:
        name: MLModelTool
        type: MLModelTool
        alias: language_model_tool
        parameters:
          prompt: Answer the question as best you can.
          response_filter: choices[0].message.content
    # This final agent will be the interface for the CoT chat user
    # When using a flow agent type, tools_order matters
    - id: root_agent
      type: register_agent
      previous_node_inputs:
        deploy_model_3: model_id
        ml_model_tool: tools
        agent_tool: tools
      user_inputs:
        name: DEMO-Test_Agent
        type: flow
        parameters:
          prompt: Answer the question as best you can.
        llm.parameters:
          max_iteration: '5'
          stop_when_no_tool_found: 'true'
        tools_order: ['agent_tool', 'ml_model_tool']
        memory:
          type: conversation_index
        app_type: chatbot
    # These edges are all automatically created with previous_node_inputs
    edges:
    - source: create_connector_1
      dest: register_model_2
    - source: register_model_2
      dest: deploy_model_3
    - source: cat_index_tool
      dest: sub_agent
    - source: deploy_model_3
      dest: sub_agent
    - source: sub_agent
      dest: agent_tool
    - source: deploy_model_3
      dest: ml_model_tool
    - source: deploy_model_3
      dest: root_agent
    - source: ml_model_tool
      dest: root_agent
    - source: agent_tool
      dest: root_agent

Complete JSON workflow template

The following is the same template in JSON format:

JSON template

{
  "name": "tool-register-agent",
  "description": "test case",
  "use_case": "REGISTER_AGENT",
  "version": {
    "template": "1.0.0",
    "compatibility": [
      "2.12.0",
      "3.0.0"
    ]
  },
  "workflows": {
    "provision": {
      "nodes": [
        {
          "id": "create_connector_1",
          "type": "create_connector",
          "user_inputs": {
            "name": "OpenAI Chat Connector",
            "description": "The connector to public OpenAI model service for GPT 3.5",
            "version": "1",
            "protocol": "http",
            "parameters": {
              "endpoint": "api.openai.com",
              "model": "gpt-3.5-turbo"
            },
            "credential": {
              "openAI_key": "12345"
            },
            "actions": [
              {
                "action_type": "predict",
                "method": "POST",
                "url": "https://${parameters.endpoint}/v1/chat/completions"
              }
            ]
          }
        },
        {
          "id": "register_model_2",
          "type": "register_remote_model",
          "previous_node_inputs": {
            "create_connector_1": "connector_id"
          },
          "user_inputs": {
            "name": "openAI-gpt-3.5-turbo",
            "description": "test model"
          }
        },
        {
          "id": "deploy_model_3",
          "type": "deploy_model",
          "previous_node_inputs": {
            "register_model_2": "model_id"
          }
        },
        {
          "id": "cat_index_tool",
          "type": "create_tool",
          "user_inputs": {
            "name": "CatIndexTool",
            "type": "CatIndexTool",
            "parameters": {
              "max_iteration": 5
            }
          }
        },
        {
          "id": "sub_agent",
          "type": "register_agent",
          "previous_node_inputs": {
            "deploy_model_3": "model_id",
            "cat_index_tool": "tools"
          },
          "user_inputs": {
            "name": "Sub Agent",
            "type": "conversational",
            "parameters": {
              "hello": "world"
            },
            "llm.parameters": {
              "max_iteration": "5",
              "stop_when_no_tool_found": "true"
            },
            "memory": {
              "type": "conversation_index"
            },
            "app_type": "chatbot"
          }
        },
        {
          "id": "agent_tool",
          "type": "create_tool",
          "previous_node_inputs": {
            "sub_agent": "agent_id"
          },
          "user_inputs": {
            "name": "AgentTool",
            "type": "AgentTool",
            "parameters": {
              "max_iteration": 5
            }
          }
        },
        {
          "id": "ml_model_tool",
          "type": "create_tool",
          "previous_node_inputs": {
            "deploy_model_3": "model_id"
          },
          "user_inputs": {
            "name": "MLModelTool",
            "type": "MLModelTool",
            "alias": "language_model_tool",
            "parameters": {
              "prompt": "Answer the question as best you can.",
              "response_filter": "choices[0].message.content"
            }
          }
        },
        {
          "id": "root_agent",
          "type": "register_agent",
          "previous_node_inputs": {
            "deploy_model_3": "model_id",
            "ml_model_tool": "tools",
            "agent_tool": "tools"
          },
          "user_inputs": {
            "name": "DEMO-Test_Agent",
            "type": "flow",
            "parameters": {
              "prompt": "Answer the question as best you can."
            },
            "llm.parameters": {
              "max_iteration": "5",
              "stop_when_no_tool_found": "true"
            },
            "tools_order": [
              "agent_tool",
              "ml_model_tool"
            ],
            "memory": {
              "type": "conversation_index"
            },
            "app_type": "chatbot"
          }
        }
      ],
      "edges": [
        {
          "source": "create_connector_1",
          "dest": "register_model_2"
        },
        {
          "source": "register_model_2",
          "dest": "deploy_model_3"
        },
        {
          "source": "cat_index_tool",
          "dest": "sub_agent"
        },
        {
          "source": "deploy_model_3",
          "dest": "sub_agent"
        },
        {
          "source": "sub_agent",
          "dest": "agent_tool"
        },
        {
          "source": "deploy_model_3",
          "dest": "ml_model_tool"
        },
        {
          "source": "deploy_model_3",
          "dest": "root_agent"
        },
        {
          "source": "ml_model_tool",
          "dest": "root_agent"
        },
        {
          "source": "agent_tool",
          "dest": "root_agent"
        }
      ]
    }
  }
}

Next steps

To learn more about agents and tools, see Agents and tools.
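Once the template is complete, a typical end-to-end sequence looks like the following (Flow Framework and ML Commons endpoints; all IDs and the question are placeholders):

```
# Create the workflow and provision its resources in one request
POST /_plugins/_flow_framework/workflow?provision=true
<template body>

# Poll until provisioning completes; the response lists the created
# resources, including the agent ID of root_agent
GET /_plugins/_flow_framework/workflow/<workflow_id>/_status

# Send a question to the root agent
POST /_plugins/_ml/agents/<agent_id>/_execute
{
  "parameters": {
    "question": "How many indexes are in my cluster?"
  }
}
```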