Advanced functionality

OpenSearch Learning to Rank (LTR) offers additional functionality. It is recommended that you have a foundational understanding of OpenSearch LTR before working with these features.

Reusable features

Building a feature set involves uploading a list of features. To avoid repeating common features across multiple sets, you can maintain a library of reusable features.

For example, if a title field query is frequently used in your feature sets, then you can create a reusable title query using the feature API:

  POST _ltr/_feature/titleSearch
  {
      "feature": {
          "params": [
              "keywords"
          ],
          "template": {
              "match": {
                  "title": "{{keywords}}"
              }
          }
      }
  }

Normal CRUD operations apply, so you can delete a feature by using the following operation:

  DELETE _ltr/_feature/titleSearch

To fetch an individual feature, you can use the following request:

  GET _ltr/_feature/titleSearch

To view a list of all features filtered by name prefix, you can use the following request:

  GET /_ltr/_feature?prefix=t

When creating or updating a feature set, you can refer to the shared titleSearch feature by using the following request:

  POST /_ltr/_featureset/my_featureset/_addfeatures/titleSearch

This adds the titleSearch feature to the next ordinal position within the my_featureset feature set.
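Because feature sets support the same CRUD operations as features, you can verify the result by fetching the set back. A minimal check, assuming the my_featureset feature set from the previous request:

  GET _ltr/_featureset/my_featureset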

Derived features

Derived features are features that build upon other features. These can be expressed as Lucene expressions and are identified by "template_language": "derived_expression".

Additionally, derived features can accept query-time variables of type Number, as described in Creating feature sets.
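For example, the following is a minimal sketch of a derived feature, assuming a feature set that already contains the title_query feature from the earlier example and a hypothetical some_multiplier query-time parameter:

  {
      "name": "title_query_boost",
      "params": [
          "some_multiplier"
      ],
      "template_language": "derived_expression",
      "template": "title_query * some_multiplier"
  }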

Script features

Script features are a type of derived feature. These features have access to the feature_vector, but they are implemented as native or Painless OpenSearch scripts rather than as Lucene expressions.

To identify these features, set "template_language": "script_feature". The custom script can access the feature_vector through a Java Map, as described in Creating feature sets.

Script-based features may impact the performance of your OpenSearch cluster, so it is best to avoid them if you require highly performant queries.

Script feature parameters

Script features are native or Painless scripts within the context of LTR. These script features can accept parameters as described in the OpenSearch script documentation. When working with LTR scripts, you can override parameter values and names. The priority for parameterization, in increasing order, is as follows:

  • The parameter name and value are passed directly to the source script but are not included in the LTR script parameters. These cannot be configured at query time.
  • The parameter name is passed to both the sltr query and the source script, allowing the script parameter values to be overridden at query time.
  • The LTR script parameter name to native script parameter name indirection allows you to use different parameter names in your LTR feature definition than those in the underlying native script. This gives you flexibility in how you define and use scripts within the LTR context.

For example, to set up a customizable way to rank movies in search results, considering both the title match and other adjustable factors, you can use the following request:

  POST _ltr/_featureset/more_movie_features
  {
      "featureset": {
          "features": [
              {
                  "name": "title_query",
                  "params": [
                      "keywords"
                  ],
                  "template_language": "mustache",
                  "template": {
                      "match": {
                          "title": "{{keywords}}"
                      }
                  }
              },
              {
                  "name": "custom_title_query_boost",
                  "params": [
                      "some_multiplier",
                      "ltr_param_foo"
                  ],
                  "template_language": "script_feature",
                  "template": {
                      "lang": "painless",
                      "source": "(long)params.default_param * params.feature_vector.get('title_query') * (long)params.some_multiplier * (long)params.param_foo",
                      "params": {
                          "default_param": 10,
                          "some_multiplier": "some_multiplier",
                          "extra_script_params": {
                              "ltr_param_foo": "param_foo"
                          }
                      }
                  }
              }
          ]
      }
  }
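At query time, you can then supply or override the declared parameters in the sltr query. The following sketch assumes a hypothetical model named my_model trained from this feature set; some_multiplier and ltr_param_foo are passed alongside keywords:

  {
      "sltr": {
          "params": {
              "keywords": "rambo",
              "some_multiplier": 2,
              "ltr_param_foo": 5
          },
          "model": "my_model"
      }
  }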

Multiple feature stores

A feature store corresponds to an independent LTR system, including features, feature sets, and models backed by a single index and cache. A feature store typically represents a single search problem or application, like Wikipedia or Wiktionary. To use multiple feature stores in your OpenSearch cluster, you can create and manage them using the provided API. For example, you can create a feature set for the wikipedia feature store as follows:

  PUT _ltr/wikipedia

  POST _ltr/wikipedia/_featureset/attempt_1
  {
      "featureset": {
          "features": [
              {
                  "name": "title_query",
                  "params": [
                      "keywords"
                  ],
                  "template_language": "mustache",
                  "template": {
                      "match": {
                          "title": "{{keywords}}"
                      }
                  }
              }
          ]
      }
  }
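Store-level CRUD operations work the same way; for example, mirroring the feature-level operations shown earlier, you can delete the entire wikipedia store, along with its features, feature sets, and models, as follows:

  DELETE _ltr/wikipedia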

When logging features, you can specify the feature store using the store parameter in the sltr section of your query, as shown in the following example structure. If you do not provide a store parameter, the default store is used to look up the feature set.

  {
      "sltr": {
          "_name": "logged_featureset",
          "featureset": "attempt_1",
          "store": "wikipedia",
          "params": {
              "keywords": "star"
          }
      }
  }

To delete the feature set, you can use the following operation:

  DELETE _ltr/wikipedia/_featureset/attempt_1

Model caching

The LTR plugin uses an internal cache for compiled models. To force models to be recompiled, you can clear the cache for a feature store:

  POST /_ltr/_clearcache
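If you maintain multiple feature stores, a store-scoped form of this route should allow clearing a single store's cache. This mirrors the store-prefixed APIs shown earlier and is an assumption to verify against your plugin version:

  POST /_ltr/wikipedia/_clearcache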

To get cluster-wide cache statistics for a specific store, use the following request:

  GET /_ltr/_cachestats

You can control the characteristics of the internal cache by using the following node settings:

  # Limit cache usage to 12 megabytes (defaults to 10mb, or max_heap/10 if lower)
  ltr.caches.max_mem: 12mb
  # Evict cache entries 10 minutes after insertion (defaults to 1hour, set to 0 to disable)
  ltr.caches.expire_after_write: 10m
  # Evict cache entries 10 minutes after access (defaults to 1hour, set to 0 to disable)
  ltr.caches.expire_after_read: 10m

Extra logging

As described in Logging features, you can use the logging extension to return feature values with each document. For native scripts, you can also return additional arbitrary information along with the logged features.

For native scripts, the extra_logging parameter is injected into the script parameters. This parameter is a Supplier<Map<String,Object>>, which provides a non-null Map<String,Object> only during the logging fetch phase. Any values you add to this map are returned alongside the logged features:

  @Override
  public double runAsDouble() {
      ...
      Map<String,Object> extraLoggingMap = ((Supplier<Map<String,Object>>) getParams().get("extra_logging")).get();
      if (extraLoggingMap != null) {
          extraLoggingMap.put("extra_float", 10.0f);
          extraLoggingMap.put("extra_string", "additional_info");
      }
      ...
  }

If the extra logging map is accessed, it is returned as an additional entry with the logged features. The format of the logged features, including the extra logging information, will appear similar to the following example:

  {
      "log_entry1": [
          {
              "name": "title_query",
              "value": 9.510193
          },
          {
              "name": "body_query",
              "value": 10.7808075
          },
          {
              "name": "user_rating",
              "value": 7.8
          },
          {
              "name": "extra_logging",
              "value": {
                  "extra_float": 10.0,
                  "extra_string": "additional_info"
              }
          }
      ]
  }

Feature score caching

By default, the LTR plugin calculates feature scores for both model inference and feature score logging. For example, if you write a query to rescore the top 100 documents and return the top 10 with feature scores, then the plugin calculates the feature scores of the top 100 documents for model inference and then calculates and logs the scores for the top 10 documents.

The following query shows this behavior:

  POST tmdb/_search
  {
      "size": 10,
      "query": {
          "match": {
              "_all": "rambo"
          }
      },
      "rescore": {
          "window_size": 100,
          "query": {
              "rescore_query": {
                  "sltr": {
                      "params": {
                          "keywords": "rambo"
                      },
                      "model": "my_model"
                  }
              }
          }
      },
      "ext": {
          "ltr_log": {
              "log_specs": {
                  "name": "log_entry1",
                  "rescore_index": 0
              }
          }
      }
  }

In some environments, it may be faster to cache the feature scores for model inference and reuse them for logging. To enable feature score caching, add the "cache": true flag to the sltr query that is the target of feature score logging, as shown in the following example:

  {
      "sltr": {
          "cache": true,
          "params": {
              "keywords": "rambo"
          },
          "model": "my_model"
      }
  }

Stats

You can use the Stats API to retrieve the plugin’s overall status and statistics. To do this, send the following request:

  GET /_ltr/_stats

The response includes information about the cluster, configured stores, and cache statistics for various plugin components:

  {
      "_nodes": {
          "total": 1,
          "successful": 1,
          "failed": 0
      },
      "cluster_name": "es-cluster",
      "stores": {
          "_default_": {
              "model_count": 10,
              "featureset_count": 1,
              "feature_count": 0,
              "status": "green"
          }
      },
      "status": "green",
      "nodes": {
          "2QtMvxMvRoOTymAsoQbxhw": {
              "cache": {
                  "feature": {
                      "eviction_count": 0,
                      "miss_count": 0,
                      "hit_count": 0,
                      "entry_count": 0,
                      "memory_usage_in_bytes": 0
                  },
                  "featureset": {
                      "eviction_count": 0,
                      "miss_count": 0,
                      "hit_count": 0,
                      "entry_count": 0,
                      "memory_usage_in_bytes": 0
                  },
                  "model": {
                      "eviction_count": 0,
                      "miss_count": 0,
                      "hit_count": 0,
                      "entry_count": 0,
                      "memory_usage_in_bytes": 0
                  }
              }
          }
      }
  }

You can use filters to retrieve a single statistic by sending the following request:

  GET /_ltr/_stats/{stat}

You can limit the information to a single node in the cluster by sending the following requests:

  GET /_ltr/_stats/nodes/{nodeId}
  GET /_ltr/_stats/{stat}/nodes/{nodeId}
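For example, using the node ID from the sample response shown earlier and stores as a hypothetical stat name (stat names are assumed here to match the top-level keys of the stats response, which you should verify against your cluster):

  GET /_ltr/_stats/stores
  GET /_ltr/_stats/stores/nodes/2QtMvxMvRoOTymAsoQbxhw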

TermStat query

Experimental

The TermStatQuery is in an experimental stage, and the Domain-Specific Language (DSL) may change as the code advances. For stable term-statistic access, see ExplorerQuery.

The TermStatQuery is a reimagined version of the legacy ExplorerQuery. It provides a clearer way to specify terms and offers more flexibility for experimentation. This query surfaces the same data as the ExplorerQuery, but it allows you to specify a custom Lucene expression to retrieve the desired data, as in the following example:

  POST tmdb/_search
  {
      "query": {
          "term_stat": {
              "expr": "df",
              "aggr": "max",
              "terms": ["rambo", "rocky"],
              "fields": ["title"]
          }
      }
  }

The expr parameter is used to specify a Lucene expression. This expression is run on a per-term basis. The expression can be a simple stat type or a custom formula with multiple stat types, such as (tf * idf) / 2. The following stat types are available in the Lucene expression context:

  • df: The direct document frequency for a term. For example, if rambo occurs in three movie titles across multiple documents, then the value would be 3.
  • idf: The inverse document frequency (IDF), calculated using the formula log((NUM_DOCS+1)/(raw_df+1)) + 1.
  • tf: The term frequency for a document. For example, if rambo occurs three times in a movie synopsis in the same document, then the value would be 3.
  • tp: The term positions for a document. Multiple positions can be returned for a single term, so you should review the behavior of the pos_aggr parameter.
  • ttf: The total term frequency for a term across an index. For example, if rambo is mentioned a total of 100 times in the overview field across all documents, then the value would be 100.

The aggr parameter specifies the type of aggregation to be applied to the collected statistics from the expr. For example, if you specify the terms rambo and rocky, then the query gathers statistics for both terms. Because you can only return a single value, you need to decide which statistical calculation to use. The available aggregation types are min, max, avg, sum, and stddev. The query also provides the following counts: matches (the number of terms that matched in the current document) and unique (the unique number of terms that were passed in the query).
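For example, the following sketch reuses the earlier query but computes the custom (tf * idf) / 2 formula per term and averages the results across terms:

  POST tmdb/_search
  {
      "query": {
          "term_stat": {
              "expr": "(tf * idf) / 2",
              "aggr": "avg",
              "terms": ["rambo", "rocky"],
              "fields": ["title"]
          }
      }
  }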

The terms parameter specifies an array of terms for which you want to gather statistics. Only single terms are supported, with no support for phrases or span queries. If your field is tokenized, you can pass multiple terms in one string in the array.

The fields parameter specifies the fields to check for the specified terms. If no analyzer is specified, then the configured search_analyzer for each field is used.

The following optional parameters are also supported:

  • analyzer: If specified, this analyzer is used instead of the configured search_analyzer for each field.
  • pos_aggr: Because each term can have multiple positions, you can use this parameter to specify the aggregation to apply to the term positions. This parameter supports the same values as aggr and defaults to avg.
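For example, the following sketch scores documents by term position, taking the maximum position for each term instead of the default average:

  POST tmdb/_search
  {
      "query": {
          "term_stat": {
              "expr": "tp",
              "aggr": "avg",
              "pos_aggr": "max",
              "terms": ["rambo", "rocky"],
              "fields": ["title"]
          }
      }
  }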

Script injection

Script injection provides the ability to inject term statistics into a scripting context. When working with ScriptFeatures, you can pass a term_stat object with the terms, fields, and analyzer parameters. An injected variable named termStats then provides access to the raw values in your custom script. This enables advanced feature engineering by giving you access to all the underlying data.

To access the count of matched tokens, use params.matchCount.get(). To access the unique token count, use params.uniqueTerms.
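For example, a script feature source that combines both counts might look like the following one-line sketch (hypothetical; the surrounding feature definition is the same as in the requests below):

  "source": "params.matchCount.get() + params.uniqueTerms"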

You can either hardcode the term_stat parameter in your script definition or pass the parameter to be set at query time. For example, the following request defines a feature set with a script feature that uses hardcoded term_stat parameters:

  POST _ltr/_featureset/test
  {
      "featureset": {
          "features": [
              {
                  "name": "injection",
                  "template_language": "script_feature",
                  "template": {
                      "lang": "painless",
                      "source": "params.termStats['df'].size()",
                      "params": {
                          "term_stat": {
                              "analyzer": "!standard",
                              "terms": ["rambo rocky"],
                              "fields": ["overview"]
                          }
                      }
                  }
              }
          ]
      }
  }

Analyzer names must be prefixed with a bang (!) when specified locally. Otherwise, they are treated as parameter lookup values.

To set parameter lookups, you can pass the name of the parameter from which you want to pull the value, as shown in the following example request:

  POST _ltr/_featureset/test
  {
      "featureset": {
          "features": [
              {
                  "name": "injection",
                  "template_language": "script_feature",
                  "template": {
                      "lang": "painless",
                      "source": "params.termStats['df'].size()",
                      "params": {
                          "term_stat": {
                              "analyzer": "analyzerParam",
                              "terms": "termsParam",
                              "fields": "fieldsParam"
                          }
                      }
                  }
              }
          ]
      }
  }

Alternatively, you can pass the term_stat parameters as query-time parameters, as shown in the following request:

  POST tmdb/_search
  {
      "query": {
          "bool": {
              "filter": [
                  {
                      "terms": {
                          "_id": ["7555", "1370", "1369"]
                      }
                  },
                  {
                      "sltr": {
                          "_name": "logged_featureset",
                          "featureset": "test",
                          "params": {
                              "analyzerParam": "standard",
                              "termsParam": ["troutman"],
                              "fieldsParam": ["overview"]
                          }
                      }
                  }
              ]
          }
      },
      "ext": {
          "ltr_log": {
              "log_specs": {
                  "name": "log_entry1",
                  "named_query": "logged_featureset"
              }
          }
      }
  }