Function Description

The LLM structured response plugin structures AI responses according to a default or user-configured Json Schema so that subsequent plugins can process them. Note that only non-streaming responses are currently supported.

Running Attributes

  • Plugin execution phase: default phase
  • Plugin execution priority: 150

Configuration Description

| Name | Type | Requirement | Default | Description |
|------|------|-------------|---------|-------------|
| serviceName | str | required | - | Name of the AI service or gateway service that supports AI-Proxy |
| serviceDomain | str | optional | - | Domain or IP address of the AI service or gateway service that supports AI-Proxy |
| servicePath | str | optional | /v1/chat/completions | Base path of the AI service or gateway service that supports AI-Proxy |
| serviceUrl | str | optional | - | URL of the AI service or gateway service that supports AI-Proxy; the plugin automatically extracts the domain and path to fill in an unconfigured serviceDomain or servicePath |
| servicePort | int | optional | 443 | Gateway service port |
| serviceTimeout | int | optional | 50000 | Default request timeout, in milliseconds |
| maxRetry | int | optional | 3 | Number of retries when the answer cannot be extracted and formatted correctly |
| contentPath | str | optional | choices.0.message.content | gjson path used to extract the response content from the LLM answer |
| jsonSchema | str (json) | optional | - | Json Schema against which the response is validated; if empty, only a response in valid Json format is required |
| enableSwagger | bool | optional | false | Whether to use the Swagger protocol for validation |
| enableOas3 | bool | optional | true | Whether to use the OAS3 protocol for validation |
| enableContentDisposition | bool | optional | true | Whether to add the Content-Disposition header; if enabled, the response header includes Content-Disposition: attachment; filename="response.json" |

For performance reasons, the maximum supported Json Schema depth is 6 by default. Json Schemas exceeding this depth will not be used to validate responses; the plugin will only check whether the returned response is in valid Json format.
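For illustration, a minimal configuration using serviceUrl could look like the sketch below (the values are borrowed from the Qwen example later in this document and are not prescriptive); the plugin splits the URL into serviceDomain and servicePath, and all unspecified fields keep their defaults.

  # Sketch only: serviceDomain and servicePath are left unset and are derived from serviceUrl
  serviceName: qwen
  serviceUrl: https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions
  apiKey: [Your API Key]
  maxRetry: 3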

Request and Return Parameter Description

  • Request Parameters: This plugin accepts the OpenAI request format, which includes the model and messages fields. model is the AI model name; messages is the list of conversation messages, each containing a role field (the message role) and a content field (the message content). For example:

    {
      "model": "gpt-4",
      "messages": [
        {"role": "user", "content": "give me a api doc for add the variable x to x+5"}
      ]
    }

    For other request parameters, refer to the documentation of the configured AI service or gateway service.

  • Return Parameters:

    • Returns a Json format response that satisfies the constraints of the defined Json Schema.
    • If no Json Schema is defined, returns a valid Json format response.
    • If an internal error occurs, returns { "Code": 10XX, "Msg": "Error message" }.

Request Example

  -H 'Content-Type: application/json' \
  -d '{
    "model": "gpt-4",
    "messages": [
      {"role": "user", "content": "give me a api doc for add the variable x to x+5"}
    ]
  }'

Return Example

Normal Return

Under normal circumstances, the system returns JSON data that has been validated against the JSON Schema. If no JSON Schema is configured, the system returns valid JSON that complies with the JSON standard.

  {
    "apiVersion": "1.0",
    "request": {
      "endpoint": "/add_to_five",
      "method": "POST",
      "port": 8080,
      "headers": {
        "Content-Type": "application/json"
      },
      "body": {
        "x": 7
      }
    }
  }

Exception Return

In case of an error, the returned status code is 500 and the response body is a JSON-formatted error message containing two fields: the error code Code and the error message Msg.

  {
    "Code": 1006,
    "Msg": "retry count exceed max retry count"
  }

Error Code Description

| Error Code | Description |
|------------|-------------|
| 1001 | The configured Json Schema is not valid Json |
| 1002 | The configured Json Schema failed to compile; it is not a valid Json Schema, or its depth exceeds jsonSchemaMaxDepth while rejectOnDepthExceeded is true |
| 1003 | Unable to extract valid Json from the response |
| 1004 | The response is an empty string |
| 1005 | The response does not conform to the Json Schema definition |
| 1006 | The retry count exceeds the maximum limit |
| 1007 | Unable to retrieve the response content, possibly because of an upstream service configuration error or an incorrect contentPath |
| 1008 | serviceDomain is empty; note that serviceDomain and serviceUrl cannot both be empty |

Service Configuration Description

This plugin requires an upstream service to be configured so that it can automatically retry when an exception occurs. The supported configurations mainly include AI services with OpenAI-compatible interfaces and local gateway services.

AI Services Supporting OpenAI Interfaces

Taking Qwen as an example, the basic configuration is as follows:

  serviceName: qwen
  serviceDomain: dashscope.aliyuncs.com
  apiKey: [Your API Key]
  servicePath: /compatible-mode/v1/chat/completions
  jsonSchema:
    title: ReasoningSchema
    type: object
    properties:
      reasoning_steps:
        type: array
        items:
          type: string
        description: The reasoning steps leading to the final conclusion.
      answer:
        type: string
        description: The final answer, taking into account the reasoning steps.
    required:
      - reasoning_steps
      - answer
    additionalProperties: false
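
With this schema configured, a successful plugin response is a Json object containing the two required fields; the values below are purely illustrative:

  {
    "reasoning_steps": [
      "Identify the input variable x.",
      "Add 5 to x and return the result."
    ],
    "answer": "The service returns x + 5 for the given x."
  }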

Local Gateway Services

To reuse already configured services, this plugin also supports configuring local gateway services. For example, if the gateway has already configured the AI-proxy service, it can be directly configured as follows:

  1. Create a service with a fixed address of 127.0.0.1:80, for example localservice.static.

  2. Add the configuration for localservice.static to the plugin configuration:

     serviceName: localservice
     serviceDomain: 127.0.0.1
     servicePort: 80

  3. The plugin will automatically extract the request Path, Header, and other information, avoiding repetitive configuration for the AI service. A combined configuration sketch follows below.
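
Putting the two pieces together, a complete plugin configuration that reuses the local gateway service could look like the sketch below (the schema is the abridged ReasoningSchema example from the previous section; adjust it to your own needs):

  serviceName: localservice
  serviceDomain: 127.0.0.1
  servicePort: 80
  jsonSchema:
    title: ReasoningSchema
    type: object
    properties:
      reasoning_steps:
        type: array
        items:
          type: string
      answer:
        type: string
    required:
      - reasoning_steps
      - answer
    additionalProperties: false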