# Segmentation
The Segmentation pipeline segments text into semantic units.
## Example
The following shows a simple example using this pipeline.
```python
from txtai.pipeline import Segmentation

# Create and run pipeline
segment = Segmentation(sentences=True)
segment("This is a test. And another test.")
```
## Configuration-driven example
Pipelines are run with Python or configuration. Pipelines can be instantiated in configuration using the lower case name of the pipeline. Configuration-driven pipelines are run with workflows or the API.
**config.yml**

```yaml
# Create pipeline using lower case class name
segmentation:
  sentences: true

# Run pipeline with workflow
workflow:
  segment:
    tasks:
      - action: segmentation
```
### Run with Workflows
```python
from txtai.app import Application

# Create and run pipeline with workflow
app = Application("config.yml")
list(app.workflow("segment", ["This is a test. And another test."]))
```
### Run with API
```bash
CONFIG=config.yml uvicorn "txtai.api:app" &

curl \
  -X POST "http://localhost:8000/workflow" \
  -H "Content-Type: application/json" \
  -d '{"name":"segment", "elements":["This is a test. And another test."]}'
```
## Methods
Python documentation for the pipeline.
Creates a new Segmentation pipeline.
Parameters:

Name | Description | Default
---|---|---
sentences | tokenize text into sentences if True | `False`
lines | tokenize text into lines if True | `False`
paragraphs | tokenize text into paragraphs if True | `False`
minlength | require at least minlength characters per text element | `None`
join | join tokenized sections back together if True | `False`
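As an illustration of the `minlength` option, the filter it implies can be sketched in plain Python. This is an assumption about its semantics based on the description above, not the actual txtai code:

```python
def apply_minlength(sections, minlength=None):
    """Keep only sections with at least minlength characters (illustrative sketch)."""
    # None disables the filter, matching the parameter's default
    return [s for s in sections if minlength is None or len(s) >= minlength]

apply_minlength(["Hi.", "This is a longer sentence."], minlength=10)
# returns ["This is a longer sentence."]
```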
Source code in `txtai/pipeline/data/segmentation.py`

```python
def __init__(self, sentences=False, lines=False, paragraphs=False, minlength=None, join=False):
    """
    Creates a new Segmentation pipeline.

    Args:
        sentences: tokenize text into sentences if True, defaults to False
        lines: tokenizes text into lines if True, defaults to False
        paragraphs: tokenizes text into paragraphs if True, defaults to False
        minlength: require at least minlength characters per text element, defaults to None
        join: joins tokenized sections back together if True, defaults to False
    """

    if not NLTK:
        raise ImportError('Segmentation pipeline is not available - install "pipeline" extra to enable')

    self.sentences = sentences
    self.lines = lines
    self.paragraphs = paragraphs
    self.minlength = minlength
    self.join = join
```
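The `NLTK` flag checked in the constructor is typically set by an import guard elsewhere in the module. A common pattern for such an optional-dependency check looks like this (a sketch of the general idiom, not the exact txtai source):

```python
# Optional dependency guard: record availability instead of failing at import time
try:
    import nltk  # noqa: F401

    NLTK = True
except ImportError:
    NLTK = False
```

The constructor can then raise a clear `ImportError` with installation instructions only when the pipeline is actually used, rather than breaking the whole package import.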
Segments text into semantic units.

This method supports text as a string or a list. If the input is a string, the return type is text|list. If the input is a list, a list is returned; this could be a list of text or a list of lists depending on the tokenization strategy.
Parameters:

Name | Type | Default
---|---|---
text | text \| list | required

Returns:

Type | Description
---|---
text \| list | segmented text
Source code in `txtai/pipeline/data/segmentation.py`

```python
def __call__(self, text):
    """
    Segments text into semantic units.

    This method supports text as a string or a list. If the input is a string, the return
    type is text|list. If the input is a list, a list is returned; this could be a
    list of text or a list of lists depending on the tokenization strategy.

    Args:
        text: text|list

    Returns:
        segmented text
    """

    # Get inputs
    texts = [text] if not isinstance(text, list) else text

    # Extract text for each input file
    results = []
    for value in texts:
        # Get text
        value = self.text(value)

        # Parse and add extracted results
        results.append(self.parse(value))

    return results[0] if isinstance(text, str) else results
```
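The string-versus-list dispatch above can be demonstrated standalone. This sketch mirrors the control flow of `__call__` with a naive regex sentence splitter standing in for the pipeline's real tokenizer (the splitter is an assumption for illustration only):

```python
import re

def segment(text):
    """Mirror __call__'s dispatch: wrap strings, parse each item, unwrap for strings."""
    def parse(value):
        # Naive split after sentence-ending punctuation; stands in for the real tokenizer
        return [s for s in re.split(r"(?<=[.!?])\s+", value) if s]

    # Normalize a single string into a one-element list
    texts = [text] if not isinstance(text, list) else text

    results = [parse(value) for value in texts]

    # A string input returns a single result; a list input returns a list of results
    return results[0] if isinstance(text, str) else results

segment("This is a test. And another test.")
# returns ["This is a test.", "And another test."]
segment(["One. Two.", "Three."])
# returns [["One.", "Two."], ["Three."]]
```

This shows why callers iterating over batch input always receive one result per input element, while single-string callers get the segmented sections directly.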