RAG chatbot with a conversational flow agent

This tutorial explains how to use a conversational flow agent to build a retrieval-augmented generation (RAG) application with your OpenSearch data as a knowledge base.

Replace the placeholders beginning with the prefix your_ with your own values.

An alternative way to build RAG conversational search is to use a RAG pipeline. For more information, see Conversational search using the Cohere Command model.

Prerequisite

In this tutorial, you’ll build a RAG application that provides an OpenSearch k-NN index as a knowledge base for a large language model (LLM). For data retrieval, you’ll use semantic search. For a comprehensive semantic search tutorial, see Neural search tutorial.

First, you’ll need to update your cluster settings. If you don’t have a dedicated machine learning (ML) node, set "plugins.ml_commons.only_run_on_ml_node": false. To avoid triggering a native memory circuit breaker, set "plugins.ml_commons.native_memory_threshold" to 100%:

  PUT _cluster/settings
  {
    "persistent": {
      "plugins.ml_commons.only_run_on_ml_node": false,
      "plugins.ml_commons.native_memory_threshold": 100,
      "plugins.ml_commons.agent_framework_enabled": true
    }
  }

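If the update succeeds, OpenSearch returns a response similar to the following:

  {
    "acknowledged": true,
    "persistent": {
      "plugins": {
        "ml_commons": {
          "only_run_on_ml_node": "false",
          "native_memory_threshold": "100",
          "agent_framework_enabled": "true"
        }
      }
    },
    "transient": {}
  }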

Step 1: Prepare the knowledge base

Use the following steps to prepare the knowledge base that will supplement the LLM’s knowledge.

Step 1.1: Register a text embedding model

Register a text embedding model that will translate text into vector embeddings:

  POST /_plugins/_ml/models/_register
  {
    "name": "huggingface/sentence-transformers/all-MiniLM-L12-v2",
    "version": "1.0.1",
    "model_format": "TORCH_SCRIPT"
  }


Note the text embedding model ID; you'll use it in the following steps.

Because model registration is asynchronous, the registration response contains a task ID. You can retrieve the model ID by calling the Get Task API:

  GET /_plugins/_ml/tasks/your_task_id

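Once the task completes, the response contains the model ID. The following is an illustrative, truncated example (your IDs will differ):

  {
    "model_id": "aVeif4oB5Vm0Tdw8zYO2",
    "task_type": "REGISTER_MODEL",
    "function_name": "TEXT_EMBEDDING",
    "state": "COMPLETED",
    ...
  }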

Deploy the model:

  POST /_plugins/_ml/models/your_text_embedding_model_id/_deploy

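Deployment also runs as an asynchronous task, so OpenSearch returns a response similar to the following (task ID illustrative):

  {
    "task_id": "ale6f4oB5Vm0Tdw8NINO",
    "task_type": "DEPLOY_MODEL",
    "status": "CREATED"
  }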

Test the model:

  POST /_plugins/_ml/models/your_text_embedding_model_id/_predict
  {
    "text_docs": ["today is sunny"],
    "return_number": true,
    "target_response": ["sentence_embedding"]
  }

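If the model is working, the response contains a 384-dimensional embedding, similar to the following (truncated; values are illustrative):

  {
    "inference_results": [
      {
        "output": [
          {
            "name": "sentence_embedding",
            "data_type": "FLOAT32",
            "shape": [384],
            "data": [0.06843, 0.03382, ...]
          }
        ]
      }
    ]
  }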

For more information about using models within your OpenSearch cluster, see Pretrained models.

Step 1.2: Create an ingest pipeline

Create an ingest pipeline with a text embedding processor, which can invoke the model created in the previous step to generate embeddings from text fields:

  PUT /_ingest/pipeline/test_population_data_pipeline
  {
    "description": "text embedding pipeline",
    "processors": [
      {
        "text_embedding": {
          "model_id": "your_text_embedding_model_id",
          "field_map": {
            "population_description": "population_description_embedding"
          }
        }
      }
    ]
  }

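Optionally, you can verify the pipeline with the Simulate Pipeline API before creating the index. For example, the response to the following request should contain a generated population_description_embedding field:

  POST /_ingest/pipeline/test_population_data_pipeline/_simulate
  {
    "docs": [
      {
        "_source": {
          "population_description": "The current metro area population of Seattle in 2023 is 3,519,000."
        }
      }
    ]
  }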

For more information about ingest pipelines, see Ingest pipelines.

Step 1.3: Create a k-NN index

Create a k-NN index specifying the ingest pipeline as a default pipeline:

  PUT test_population_data
  {
    "mappings": {
      "properties": {
        "population_description": {
          "type": "text"
        },
        "population_description_embedding": {
          "type": "knn_vector",
          "dimension": 384
        }
      }
    },
    "settings": {
      "index": {
        "knn.space_type": "cosinesimil",
        "default_pipeline": "test_population_data_pipeline",
        "knn": "true"
      }
    }
  }

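OpenSearch confirms the index creation:

  {
    "acknowledged": true,
    "shards_acknowledged": true,
    "index": "test_population_data"
  }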

For more information about k-NN indexes, see k-NN index.

Step 1.4: Ingest data

Ingest test data into the k-NN index:

  POST _bulk
  {"index": {"_index": "test_population_data"}}
  {"population_description": "Chart and table of population level and growth rate for the Ogden-Layton metro area from 1950 to 2023. United Nations population projections are also included through the year 2035.\nThe current metro area population of Ogden-Layton in 2023 is 750,000, a 1.63% increase from 2022.\nThe metro area population of Ogden-Layton in 2022 was 738,000, a 1.79% increase from 2021.\nThe metro area population of Ogden-Layton in 2021 was 725,000, a 1.97% increase from 2020.\nThe metro area population of Ogden-Layton in 2020 was 711,000, a 2.16% increase from 2019."}
  {"index": {"_index": "test_population_data"}}
  {"population_description": "Chart and table of population level and growth rate for the New York City metro area from 1950 to 2023. United Nations population projections are also included through the year 2035.\nThe current metro area population of New York City in 2023 is 18,937,000, a 0.37% increase from 2022.\nThe metro area population of New York City in 2022 was 18,867,000, a 0.23% increase from 2021.\nThe metro area population of New York City in 2021 was 18,823,000, a 0.1% increase from 2020.\nThe metro area population of New York City in 2020 was 18,804,000, a 0.01% decline from 2019."}
  {"index": {"_index": "test_population_data"}}
  {"population_description": "Chart and table of population level and growth rate for the Chicago metro area from 1950 to 2023. United Nations population projections are also included through the year 2035.\nThe current metro area population of Chicago in 2023 is 8,937,000, a 0.4% increase from 2022.\nThe metro area population of Chicago in 2022 was 8,901,000, a 0.27% increase from 2021.\nThe metro area population of Chicago in 2021 was 8,877,000, a 0.14% increase from 2020.\nThe metro area population of Chicago in 2020 was 8,865,000, a 0.03% increase from 2019."}
  {"index": {"_index": "test_population_data"}}
  {"population_description": "Chart and table of population level and growth rate for the Miami metro area from 1950 to 2023. United Nations population projections are also included through the year 2035.\nThe current metro area population of Miami in 2023 is 6,265,000, a 0.8% increase from 2022.\nThe metro area population of Miami in 2022 was 6,215,000, a 0.78% increase from 2021.\nThe metro area population of Miami in 2021 was 6,167,000, a 0.74% increase from 2020.\nThe metro area population of Miami in 2020 was 6,122,000, a 0.71% increase from 2019."}
  {"index": {"_index": "test_population_data"}}
  {"population_description": "Chart and table of population level and growth rate for the Austin metro area from 1950 to 2023. United Nations population projections are also included through the year 2035.\nThe current metro area population of Austin in 2023 is 2,228,000, a 2.39% increase from 2022.\nThe metro area population of Austin in 2022 was 2,176,000, a 2.79% increase from 2021.\nThe metro area population of Austin in 2021 was 2,117,000, a 3.12% increase from 2020.\nThe metro area population of Austin in 2020 was 2,053,000, a 3.43% increase from 2019."}
  {"index": {"_index": "test_population_data"}}
  {"population_description": "Chart and table of population level and growth rate for the Seattle metro area from 1950 to 2023. United Nations population projections are also included through the year 2035.\nThe current metro area population of Seattle in 2023 is 3,519,000, a 0.86% increase from 2022.\nThe metro area population of Seattle in 2022 was 3,489,000, a 0.81% increase from 2021.\nThe metro area population of Seattle in 2021 was 3,461,000, a 0.82% increase from 2020.\nThe metro area population of Seattle in 2020 was 3,433,000, a 0.79% increase from 2019."}

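To confirm that all six documents were ingested and embedded, you can, for example, check the document count (it should return 6 once the index refreshes):

  GET test_population_data/_count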

Step 2: Prepare an LLM

This tutorial uses the Amazon Bedrock Claude model for conversational search. You can also use other LLMs. For more information about using externally hosted models, see Connecting to externally hosted models.

Step 2.1: Create a connector

Create a connector for the Claude model:

  POST /_plugins/_ml/connectors/_create
  {
    "name": "Bedrock Claude Instant-v1 connector",
    "description": "Connector to the Amazon Bedrock service for the Claude model",
    "version": 1,
    "protocol": "aws_sigv4",
    "parameters": {
      "region": "us-east-1",
      "service_name": "bedrock",
      "anthropic_version": "bedrock-2023-05-31",
      "max_tokens_to_sample": 8000,
      "temperature": 0.0001,
      "response_filter": "$.completion"
    },
    "credential": {
      "access_key": "your_aws_access_key",
      "secret_key": "your_aws_secret_key",
      "session_token": "your_aws_session_token"
    },
    "actions": [
      {
        "action_type": "predict",
        "method": "POST",
        "url": "https://bedrock-runtime.us-east-1.amazonaws.com/model/anthropic.claude-instant-v1/invoke",
        "headers": {
          "content-type": "application/json",
          "x-amz-content-sha256": "required"
        },
        "request_body": "{\"prompt\":\"${parameters.prompt}\", \"max_tokens_to_sample\":${parameters.max_tokens_to_sample}, \"temperature\":${parameters.temperature}, \"anthropic_version\":\"${parameters.anthropic_version}\" }"
      }
    ]
  }

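The response contains the connector ID (the ID shown is illustrative):

  {
    "connector_id": "eJ75lI0BHcHmo_czdqcD"
  }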

Note the connector ID; you’ll use it to register the model.

Step 2.2: Register the model

Register the Claude model hosted on Amazon Bedrock:

  POST /_plugins/_ml/models/_register
  {
    "name": "Bedrock Claude Instant model",
    "function_name": "remote",
    "description": "Bedrock Claude Instant-v1 model",
    "connector_id": "your_LLM_connector_id"
  }

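OpenSearch returns a response similar to the following (IDs illustrative). Depending on your OpenSearch version, the model ID is returned directly or can be retrieved from the task, as in Step 1.1:

  {
    "task_id": "fw75lI0BHcHmo_czdqcA",
    "status": "CREATED",
    "model_id": "gA75lI0BHcHmo_czdqcB"
  }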

Note the LLM model ID; you’ll use it in the following steps.

Step 2.3: Deploy the model

Deploy the Claude model:

  POST /_plugins/_ml/models/your_LLM_model_id/_deploy


Step 2.4: Test the model

To test the model, send a Predict API request:

  POST /_plugins/_ml/models/your_LLM_model_id/_predict
  {
    "parameters": {
      "prompt": "\n\nHuman: how are you? \n\nAssistant:"
    }
  }

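Because the connector specifies the response_filter $.completion, the model's completion text is returned in the response. The following is an illustrative sketch; the exact structure may vary by OpenSearch version:

  {
    "inference_results": [
      {
        "output": [
          {
            "name": "response",
            "dataAsMap": {
              "response": " I'm doing well, thank you for asking!"
            }
          }
        ]
      }
    ]
  }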

Step 3: Register an agent

OpenSearch provides the following agent types: flow, conversational_flow, and conversational. For more information about agents, see Agents.

You will use a conversational_flow agent in this tutorial. The agent consists of the following:

  • Meta info: name, type, and description.
  • app_type: Differentiates between application types.
  • memory: Stores user questions and LLM responses as a conversation so that an agent can retrieve conversation history from memory and continue the same conversation.
  • tools: Defines a list of tools to use. The agent will run these tools sequentially.

To register an agent, send the following request:

  POST /_plugins/_ml/agents/_register
  {
    "name": "population data analysis agent",
    "type": "conversational_flow",
    "description": "This is a demo agent for population data analysis",
    "app_type": "rag",
    "memory": {
      "type": "conversation_index"
    },
    "tools": [
      {
        "type": "VectorDBTool",
        "name": "population_knowledge_base",
        "parameters": {
          "model_id": "your_text_embedding_model_id",
          "index": "test_population_data",
          "embedding_field": "population_description_embedding",
          "source_field": [
            "population_description"
          ],
          "input": "${parameters.question}"
        }
      },
      {
        "type": "MLModelTool",
        "name": "bedrock_claude_model",
        "description": "A general tool to answer any question",
        "parameters": {
          "model_id": "your_LLM_model_id",
          "prompt": "\n\nHuman:You are a professional data analyst. You will always answer questions based on the given context first. If the answer is not directly shown in the context, you will analyze the data and find the answer. If you don't know the answer, just say you don't know. \n\nContext:\n${parameters.population_knowledge_base.output:-}\n\n${parameters.chat_history:-}\n\nHuman:${parameters.question}\n\nAssistant:"
        }
      }
    ]
  }


OpenSearch responds with an agent ID:

  {
    "agent_id": "fQ75lI0BHcHmo_czdqcJ"
  }

Note the agent ID; you’ll use it in the next step.

Step 4: Run the agent

You’ll run the agent to analyze the increase in Seattle’s population. When you run this agent, the agent will create a new conversation. Later, you can continue this conversation by asking other questions.

Step 4.1: Start a new conversation

First, start a new conversation by asking the LLM a question:

  POST /_plugins/_ml/agents/your_agent_id/_execute
  {
    "parameters": {
      "question": "what's the population increase of Seattle from 2021 to 2023?"
    }
  }


The response contains the answer generated by the LLM:

  {
    "inference_results": [
      {
        "output": [
          {
            "name": "memory_id",
            "result": "gQ75lI0BHcHmo_cz2acL"
          },
          {
            "name": "parent_message_id",
            "result": "gg75lI0BHcHmo_cz2acZ"
          },
          {
            "name": "bedrock_claude_model",
            "result": """ Based on the context given:
  - The metro area population of Seattle in 2021 was 3,461,000
  - The current metro area population of Seattle in 2023 is 3,519,000
  - So the population increase of Seattle from 2021 to 2023 is 3,519,000 - 3,461,000 = 58,000"""
          }
        ]
      }
    ]
  }

The response contains the following fields:

  • memory_id is the identifier for the memory (conversation) that groups all messages within a single conversation. Note this ID; you’ll use it in the next step.
  • parent_message_id is the identifier for the current message (one question/answer) between the human and the LLM. One memory can contain multiple messages.

To obtain memory details, call the Get Memory API:

  GET /_plugins/_ml/memory/gQ75lI0BHcHmo_cz2acL


To obtain all messages within a memory, call the Get Messages API:

  GET /_plugins/_ml/memory/gQ75lI0BHcHmo_cz2acL/messages


To obtain message details, call the Get Message API:

  GET /_plugins/_ml/memory/message/gg75lI0BHcHmo_cz2acZ


For debugging purposes, you can obtain trace data for a message by calling the Get Message Traces API:

  GET /_plugins/_ml/memory/message/gg75lI0BHcHmo_cz2acZ/traces


Step 4.2: Continue a conversation by asking new questions

To continue the same conversation, provide the memory ID from the previous step.

Additionally, you can provide the following parameters:

  • message_history_limit: The number of historical messages to include in the new question/answer round.
  • prompt: Use this parameter to customize the LLM prompt. For example, the following request adds a new instruction, "always learn useful information from chat history", and a new parameter, next_action:
  POST /_plugins/_ml/agents/your_agent_id/_execute
  {
    "parameters": {
      "question": "What's the population of New York City in 2023?",
      "next_action": "then compare with Seattle population of 2023",
      "memory_id": "gQ75lI0BHcHmo_cz2acL",
      "message_history_limit": 5,
      "prompt": "\n\nHuman:You are a professional data analyst. You will always answer questions based on the given context first. If the answer is not directly shown in the context, you will analyze the data and find the answer. If you don't know the answer, just say you don't know. \n\nContext:\n${parameters.population_knowledge_base.output:-}\n\n${parameters.chat_history:-}\n\nHuman:always learn useful information from chat history\nHuman:${parameters.question}, ${parameters.next_action}\n\nAssistant:"
    }
  }


The response contains the answer generated by the LLM:

  {
    "inference_results": [
      {
        "output": [
          {
            "name": "memory_id",
            "result": "gQ75lI0BHcHmo_cz2acL"
          },
          {
            "name": "parent_message_id",
            "result": "wQ4JlY0BHcHmo_cz8Kc-"
          },
          {
            "name": "bedrock_claude_model",
            "result": """ Based on the context given:
  - The current metro area population of New York City in 2023 is 18,937,000
  - The current metro area population of Seattle in 2023 is 3,519,000
  - So the population of New York City in 2023 (18,937,000) is much higher than the population of Seattle in 2023 (3,519,000)"""
          }
        ]
      }
    ]
  }

If you know which tool the agent should use to execute a particular Predict API request, you can specify the tool when executing the agent. For example, if you want to translate the preceding answer into Chinese, you don’t need to retrieve any data from the knowledge base. To run only the Claude model, specify the bedrock_claude_model tool in the selected_tools parameter:

  POST /_plugins/_ml/agents/your_agent_id/_execute
  {
    "parameters": {
      "question": "Translate last answer into Chinese?",
      "selected_tools": ["bedrock_claude_model"]
    }
  }


The agent will run only the specified tools, one by one, in the order defined in selected_tools.

Configuring multiple knowledge bases

You can configure multiple knowledge bases for an agent. For example, if you have both product description and comment data, you can configure the agent with the following two tools:

  {
    "name": "My product agent",
    "type": "conversational_flow",
    "description": "This is an agent with product description and comments knowledge bases.",
    "memory": {
      "type": "conversation_index"
    },
    "app_type": "rag",
    "tools": [
      {
        "type": "VectorDBTool",
        "name": "product_description_vectordb",
        "parameters": {
          "model_id": "your_embedding_model_id",
          "index": "product_description_data",
          "embedding_field": "product_description_embedding",
          "source_field": [
            "product_description"
          ],
          "input": "${parameters.question}"
        }
      },
      {
        "type": "VectorDBTool",
        "name": "product_comments_vectordb",
        "parameters": {
          "model_id": "your_embedding_model_id",
          "index": "product_comments_data",
          "embedding_field": "product_comment_embedding",
          "source_field": [
            "product_comment"
          ],
          "input": "${parameters.question}"
        }
      },
      {
        "type": "MLModelTool",
        "description": "A general tool to answer any question",
        "parameters": {
          "model_id": "your_llm_model_id",
          "prompt": "\n\nHuman:You are a professional product recommendation engine. You will always recommend products based on the given context. If you don't have enough context, you will ask the Human to provide more information. If you don't see any related product to recommend, just say we don't have such a product. \n\n Context:\n${parameters.product_description_vectordb.output}\n\n${parameters.product_comments_vectordb.output}\n\nHuman:${parameters.question}\n\nAssistant:"
        }
      }
    ]
  }


When you run the agent, the agent will query product description and comment data and then send the query results and the question to the LLM.

To query a specific knowledge base, specify it in selected_tools. For example, if the question relates only to product comments, you can retrieve information only from product_comments_vectordb:

  POST /_plugins/_ml/agents/your_agent_id/_execute
  {
    "parameters": {
      "question": "What feature people like the most for Amazon Echo Dot",
      "selected_tools": ["product_comments_vectordb", "MLModelTool"]
    }
  }


Running queries on an index

Use the SearchIndexTool to run any OpenSearch query on any index.

Setup: Register an agent

  POST /_plugins/_ml/agents/_register
  {
    "name": "Demo agent",
    "type": "conversational_flow",
    "description": "This agent supports running any search query",
    "memory": {
      "type": "conversation_index"
    },
    "app_type": "rag",
    "tools": [
      {
        "type": "SearchIndexTool",
        "parameters": {
          "input": "{\"index\": \"${parameters.index}\", \"query\": ${parameters.query} }"
        }
      },
      {
        "type": "MLModelTool",
        "description": "A general tool to answer any question",
        "parameters": {
          "model_id": "your_llm_model_id",
          "prompt": "\n\nHuman:You are a professional data analyst. You will always answer questions based on the given context first. If the answer is not directly shown in the context, you will analyze the data and find the answer. If you don't know the answer, just say you don't know. \n\n Context:\n${parameters.SearchIndexTool.output:-}\n\nHuman:${parameters.question}\n\nAssistant:"
        }
      }
    ]
  }


Run a BM25 query

  POST /_plugins/_ml/agents/your_agent_id/_execute
  {
    "parameters": {
      "question": "what's the population increase of Seattle from 2021 to 2023?",
      "index": "test_population_data",
      "query": {
        "query": {
          "match": {
            "population_description": "${parameters.question}"
          }
        },
        "size": 2,
        "_source": "population_description"
      }
    }
  }


Exposing only the question parameter

To expose only the question parameter, define the agent as follows:

  POST /_plugins/_ml/agents/_register
  {
    "name": "Demo agent",
    "type": "conversational_flow",
    "description": "This is a test agent that supports running any search query",
    "memory": {
      "type": "conversation_index"
    },
    "app_type": "rag",
    "tools": [
      {
        "type": "SearchIndexTool",
        "parameters": {
          "input": "{\"index\": \"${parameters.index}\", \"query\": ${parameters.query} }",
          "index": "test_population_data",
          "query": {
            "query": {
              "match": {
                "population_description": "${parameters.question}"
              }
            },
            "size": 2,
            "_source": "population_description"
          }
        }
      },
      {
        "type": "MLModelTool",
        "description": "A general tool to answer any question",
        "parameters": {
          "model_id": "your_llm_model_id",
          "prompt": "\n\nHuman:You are a professional data analyst. You will always answer questions based on the given context first. If the answer is not directly shown in the context, you will analyze the data and find the answer. If you don't know the answer, just say you don't know. \n\n Context:\n${parameters.SearchIndexTool.output:-}\n\nHuman:${parameters.question}\n\nAssistant:"
        }
      }
    ]
  }


Now you can run the agent specifying only the question parameter:

  POST /_plugins/_ml/agents/your_agent_id/_execute
  {
    "parameters": {
      "question": "what's the population increase of Seattle from 2021 to 2023?"
    }
  }


Run a neural search query

  POST /_plugins/_ml/agents/your_agent_id/_execute
  {
    "parameters": {
      "question": "what's the population increase of Seattle from 2021 to 2023?",
      "index": "test_population_data",
      "query": {
        "query": {
          "neural": {
            "population_description_embedding": {
              "query_text": "${parameters.question}",
              "model_id": "your_embedding_model_id",
              "k": 10
            }
          }
        },
        "size": 2,
        "_source": ["population_description"]
      }
    }
  }


To expose the question parameter, see Exposing only the question parameter.

Run a hybrid search query

Hybrid search combines keyword and neural search to improve search relevance. For more information, see Hybrid search.

Configure a search pipeline:

  PUT /_search/pipeline/nlp-search-pipeline
  {
    "description": "Post processor for hybrid search",
    "phase_results_processors": [
      {
        "normalization-processor": {
          "normalization": {
            "technique": "min_max"
          },
          "combination": {
            "technique": "arithmetic_mean",
            "parameters": {
              "weights": [
                0.3,
                0.7
              ]
            }
          }
        }
      }
    ]
  }

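The agent's SearchIndexTool query does not reference the pipeline directly, so one way to apply it is to set it as the index's default search pipeline. The following is a setup sketch; adjust it for your deployment:

  PUT /test_population_data/_settings
  {
    "index.search.default_pipeline": "nlp-search-pipeline"
  }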

Run an agent with a hybrid query:

  POST /_plugins/_ml/agents/your_agent_id/_execute
  {
    "parameters": {
      "question": "what's the population increase of Seattle from 2021 to 2023?",
      "index": "test_population_data",
      "query": {
        "_source": {
          "exclude": [
            "population_description_embedding"
          ]
        },
        "size": 2,
        "query": {
          "hybrid": {
            "queries": [
              {
                "match": {
                  "population_description": {
                    "query": "${parameters.question}"
                  }
                }
              },
              {
                "neural": {
                  "population_description_embedding": {
                    "query_text": "${parameters.question}",
                    "model_id": "your_embedding_model_id",
                    "k": 10
                  }
                }
              }
            ]
          }
        }
      }
    }
  }


To expose the question parameter, see Exposing only the question parameter.

Natural language query

The PPLTool can translate a natural language query (NLQ) to Piped Processing Language (PPL) and execute the generated PPL query.

Setup

Before you start, go to the OpenSearch Dashboards home page, select Add sample data, and then add Sample eCommerce orders.

Step 1: Register an agent with the PPLTool

The PPLTool has the following parameters:

  • model_type (Enum): CLAUDE, OPENAI, or FINETUNE.
  • execute (Boolean): If true, executes the generated PPL query.
  • input (String): You must provide the index and question as inputs.

For this tutorial, you’ll use Bedrock Claude, so set the model_type to CLAUDE:

  POST /_plugins/_ml/agents/_register
  {
    "name": "Demo agent for NLQ",
    "type": "conversational_flow",
    "description": "This is a test flow agent for NLQ",
    "memory": {
      "type": "conversation_index"
    },
    "app_type": "rag",
    "tools": [
      {
        "type": "PPLTool",
        "parameters": {
          "model_id": "your_ppl_model_id",
          "model_type": "CLAUDE",
          "execute": true,
          "input": "{\"index\": \"${parameters.index}\", \"question\": ${parameters.question} }"
        }
      },
      {
        "type": "MLModelTool",
        "description": "A general tool to answer any question",
        "parameters": {
          "model_id": "your_llm_model_id",
          "prompt": "\n\nHuman:You are a professional data analyst. You will always answer questions based on the given context first. If the answer is not directly shown in the context, you will analyze the data and find the answer. If you don't know the answer, just say you don't know. \n\n Context:\n${parameters.PPLTool.output:-}\n\nHuman:${parameters.question}\n\nAssistant:"
        }
      }
    ]
  }

Step 2: Run the agent with an NLQ

Run the agent:

  POST /_plugins/_ml/agents/your_agent_id/_execute
  {
    "parameters": {
      "question": "How many orders do I have in last week",
      "index": "opensearch_dashboards_sample_data_ecommerce"
    }
  }


The response contains the answer generated by the LLM:

  {
    "inference_results": [
      {
        "output": [
          {
            "name": "memory_id",
            "result": "sqIioI0BJhBwrVXYeYOM"
          },
          {
            "name": "parent_message_id",
            "result": "s6IioI0BJhBwrVXYeYOW"
          },
          {
            "name": "MLModelTool",
            "result": " Based on the given context, the number of orders in the last week is 3992. The data shows a query that counts the number of orders where the order date is greater than 1 week ago. The query result shows the count as 3992."
          }
        ]
      }
    ]
  }

For more information, obtain trace data by calling the Get Message Traces API:

  GET _plugins/_ml/memory/message/s6IioI0BJhBwrVXYeYOW/traces
