Generator
The Generator pipeline takes an input prompt and generates follow-on text.
Example
The following shows a simple example using this pipeline.
from txtai.pipeline import Generator
# Create and run pipeline
generator = Generator()
generator("Hello, how are you?")
Configuration-driven example
Pipelines are run with Python or configuration. Pipelines can be instantiated in configuration using the lower case name of the pipeline. Configuration-driven pipelines are run with workflows or the API.
config.yml
# Create pipeline using lower case class name
generator:

# Run pipeline with workflow
workflow:
  generator:
    tasks:
      - action: generator
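Pipeline configuration accepts the same keyword arguments as the Python constructor. A minimal sketch, assuming a specific model should be loaded (the model name is illustrative only):

# Create pipeline with an explicit model path
generator:
  path: gpt2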
Run with Workflows
from txtai.app import Application
# Create and run pipeline with workflow
app = Application("config.yml")
list(app.workflow("generator", ["Hello, how are you?"]))
Run with API
CONFIG=config.yml uvicorn "txtai.api:app" &
curl \
-X POST "http://localhost:8000/workflow" \
-H "Content-Type: application/json" \
-d '{"name":"generator", "elements": ["Hello, how are you?"]}'
Methods
Python documentation for the pipeline.
__init__(self, path=None, quantize=False, gpu=True, model=None)
Source code in txtai/pipeline/text/generator.py
def __init__(self, path=None, quantize=False, gpu=True, model=None):
    super().__init__(self.task(), path, quantize, gpu, model)
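The parameters are passed through to the parent pipeline class, which loads the underlying model. A usage sketch based on the signature above (the model name is illustrative; quantize and gpu simply demonstrate the available flags):

# Load an example model with quantization enabled and GPU disabled
generator = Generator(path="gpt2", quantize=True, gpu=False)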
__call__(self, text, prefix=None, maxlength=512, workers=0)
Generates text using input text
Parameters:
Name | Type | Description | Default |
---|---|---|---|
text | | text\|list | required |
prefix | | optional prefix to prepend to text elements | None |
maxlength | | maximum sequence length | 512 |
workers | | number of concurrent workers to use for processing data, defaults to None | 0 |
Returns:
Type | Description |
---|---|
 | generated text |
Source code in txtai/pipeline/text/generator.py
def __call__(self, text, prefix=None, maxlength=512, workers=0):
"""
Generates text using input text
Args:
text: text|list
prefix: optional prefix to prepend to text elements
maxlength: maximum sequence length
workers: number of concurrent workers to use for processing data, defaults to None
Returns:
generated text
"""
# List of texts
texts = text if isinstance(text, list) else [text]
# Add prefix, if necessary
if prefix:
texts = [f"{prefix}{x}" for x in texts]
# Run pipeline
results = self.pipeline(texts, max_length=maxlength, num_workers=workers)
# Get generated text
results = [self.clean(x) for x in results]
return results[0] if isinstance(text, str) else results
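As a usage sketch based on the signature above, prefix and maxlength can be passed when calling the pipeline; the prompts below are illustrative:

# Prepend a prefix to each input and cap the generated sequence length
generator(["tell me about txtai", "tell me a story"], prefix="Answer the following: ", maxlength=128)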