Baidu PaddlePaddle v2.0 Deep Learning Tutorials
paddle.text
Source: Baidu PaddlePaddle
2021-03-02 20:09:08
Overview
datasets
The copyright of this content belongs to Baidu PaddlePaddle or its affiliates. To follow or support the content or its associated open-source projects, please visit Baidu PaddlePaddle.
Installation Guide
Pip Installation
PIP Installation on Linux
PIP Installation on macOS
PIP Installation on Windows
Conda Installation
Conda Installation on Linux
Conda Installation on macOS
Conda Installation on Windows
Docker Installation
Docker Installation on Linux
Docker Installation on macOS
Compile from Source
Compile from Source on Linux
Compile from Source on macOS
Compile from Source on Windows
Compile from Source on Phytium/Kunpeng
Compile from Source on Sunway
Compile from Source on Zhaoxin
Install and Run PaddlePaddle on Kunlun XPU
Appendix
Tutorials
Overall Introduction
Basic Concepts
Introduction to Tensor
Broadcasting
Upgrade Guide
Version Migration Tool
Model Development
Get Started with PaddlePaddle in 10 Minutes
Dataset Definition and Loading
Data Preprocessing
Building the Model
Training and Prediction
Resource Configuration
Custom Metrics
Model Saving and Loading
Exporting Models to ONNX
VisualDL Tool
Introduction to VisualDL
VisualDL User Guide
Dynamic-to-Static Graph Conversion
Basic Usage
Internal Architecture
Supported Syntax
Introduction to InputSpec
Error Message Handling
Debugging Methods
Inference Deployment
Server-Side Deployment
Install and Compile the Inference Library on Linux
Install and Compile the Inference Library on Windows
C++ Inference API
C Inference API
Python Inference API
Mobile Deployment
Paddle-Lite
Model Compression
Distributed Training
Distributed Training Quick Start
Distributed Training with the Fleet API
Running PaddlePaddle on Kunlun XPU
PaddlePaddle Support for Kunlun XPU
Installation Guide for the Kunlun XPU Build
Training Example on Kunlun XPU
Custom Operators
How to Write a New C++ OP
Notes on C++ OPs
How to Write a New Python OP
How to Define a C++ OP Outside the Framework
Contributing
Local Development Guide
Notes on Submitting PRs
FAQ
Other Notes
Hardware Support
PaddlePaddle API Mapping Table
Practical Applications
Quick Start
hello paddle: From an Ordinary Program to a Machine Learning Program
Dynamic Graph
Guide to PaddlePaddle High-Level APIs
Model Saving and Loading
Predicting Boston Housing Prices with Linear Regression
Computer Vision
Image Classification on MNIST with LeNet
Image Classification with Convolutional Neural Networks
Image Search Based on Image Similarity
Pet Image Segmentation with a U-Net Convolutional Neural Network
CAPTCHA Recognition with OCR
Facial Keypoint Detection
Image Super-Resolution with Sub-Pixel
Natural Language Processing
Training Word Embeddings on the Shakespeare Corpus with an N-Gram Model
Text Classification on the IMDB Dataset with a BOW Network
Machine Translation with an Attention-Based LSTM
Digit Addition with a Sequence-to-Sequence Model
Time-Series Data
Time-Series Anomaly Detection with an AutoEncoder
API Documentation
paddle
Overview
abs
acos
add
add_n
addmm
all
allclose
any
arange
argmax
argmin
argsort
asin
assign
atan
bernoulli
bmm
broadcast_shape
broadcast_to
cast
ceil
cholesky
chunk
clip
concat
conj
cos
cosh
CPUPlace
cross
CUDAPinnedPlace
CUDAPlace
cumsum
DataParallel
diag
disable_static
dist
divide
dot
empty
empty_like
enable_static
equal
equal_all
erf
exp
expand
expand_as
eye
flatten
flip
floor
floor_divide
flops
full
full_like
gather
gather_nd
get_cuda_rng_state
get_cudnn_version
get_default_dtype
get_device
grad
greater_equal
greater_than
histogram
imag
in_dynamic_mode
increment
index_sample
index_select
inverse
is_compiled_with_cuda
is_compiled_with_xpu
is_empty
is_tensor
isfinite
isinf
isnan
kron
less_equal
less_than
linspace
load
log
log10
log1p
log2
logical_and
logical_not
logical_or
logical_xor
logsumexp
masked_select
matmul
max
maximum
mean
median
meshgrid
min
minimum
mm
mod
Model
multinomial
multiplex
multiply
mv
no_grad
nonzero
norm
normal
not_equal
numel
ones
ones_like
ParamAttr
pow
prod
rand
randint
randn
randperm
rank
real
reciprocal
reshape
reshape_
roll
round
rsqrt
save
scale
scatter
scatter_
scatter_nd
scatter_nd_add
seed
set_cuda_rng_state
set_default_dtype
set_device
shape
shard_index
sign
sin
sinh
slice
sort
split
sqrt
square
squeeze
squeeze_
stack
stanh
std
strided_slice
subtract
sum
summary
t
tan
tanh
tanh_
Tensor
tile
to_tensor
topk
trace
transpose
tril
triu
unbind
uniform
unique
unsqueeze
unsqueeze_
unstack
var
where
XPUPlace
zeros
zeros_like
paddle.amp
Overview
auto_cast
GradScaler
paddle.callbacks
Overview
Callback
EarlyStopping
LRScheduler
ModelCheckpoint
ProgBarLogger
ReduceLROnPlateau
VisualDL
paddle.compat
floor_division
get_exception_message
long_type
round
to_bytes
to_text
paddle.distributed
all_gather
all_reduce
barrier
broadcast
fleet
DistributedStrategy
Fleet
PaddleCloudRoleMaker
UserDefinedRoleMaker
UtilBase
utils
fs
HDFSClient
LocalFS
get_rank
get_world_size
init_parallel_env
InMemoryDataset
ParallelEnv
QueueDataset
reduce
ReduceOp
scatter
spawn
split
paddle.distribution
Overview
Categorical
Distribution
Normal
Uniform
paddle.fluid
clip
ErrorClipByValue
set_gradient_clip
create_lod_tensor
create_random_int_lodtensor
cuda_pinned_places
data
DataFeedDesc
DataFeeder
dataset
DatasetFactory
InMemoryDataset
QueueDataset
device_guard
DistributeTranspiler
DistributeTranspilerConfig
dygraph
BilinearTensorProduct
checkpoint
load_dygraph
save_dygraph
Conv2D
Conv2DTranspose
Conv3D
Conv3DTranspose
Dropout
Embedding
enabled
GroupNorm
GRUCell
GRUUnit
guard
LambdaDecay
LayerNorm
Linear
LinearLrWarmup
LSTMCell
MultiStepDecay
NCE
parallel
prepare_context
Pool2D
PRelu
ReduceLROnPlateau
StepDecay
TreeConv
evaluator
ChunkEvaluator
DetectionMAP
EditDistance
get_flags
initializer
Bilinear
Constant
MSRA
Normal
NumpyArrayInitializer
TruncatedNormal
Uniform
Xavier
io
get_program_parameter
get_program_persistable_vars
load_params
load_persistables
load_vars
PyReader
save_params
save_persistables
save_vars
shuffle
layers
adaptive_pool2d
adaptive_pool3d
add_position_encoding
affine_channel
affine_grid
anchor_generator
argmax
argmin
argsort
array_length
array_read
array_write
assign
autoincreased_step_counter
BasicDecoder
beam_search
beam_search_decode
bipartite_match
box_clip
box_coder
box_decoder_and_assign
bpr_loss
brelu
Categorical
center_loss
clip
clip_by_norm
collect_fpn_proposals
concat
cond
continuous_value_model
cosine_decay
create_array
create_py_reader_by_data
create_tensor
crop
crop_tensor
cross_entropy
ctc_greedy_decoder
cumsum
data
DecodeHelper
Decoder
deformable_conv
deformable_roi_pooling
density_prior_box
detection_output
diag
distribute_fpn_proposals
double_buffer
dropout
dynamic_gru
dynamic_lstm
dynamic_lstmp
DynamicRNN
edit_distance
elementwise_add
elementwise_div
elementwise_floordiv
elementwise_max
elementwise_min
elementwise_mod
elementwise_pow
elementwise_sub
elu
embedding
equal
expand
expand_as
exponential_decay
eye
fc
fill_constant
filter_by_instag
flatten
fsp_matrix
gather
gather_nd
gaussian_random
gelu
generate_mask_labels
generate_proposal_labels
generate_proposals
get_tensor_from_selected_rows
greater_equal
greater_than
GreedyEmbeddingHelper
grid_sampler
gru_unit
GRUCell
hard_shrink
hard_sigmoid
hard_swish
has_inf
has_nan
hash
hsigmoid
huber_loss
IfElse
im2sequence
image_resize
image_resize_short
increment
inplace_abn
inverse_time_decay
iou_similarity
isfinite
kldiv_loss
l2_normalize
label_smooth
leaky_relu
less_equal
less_than
linear_chain_crf
linear_lr_warmup
locality_aware_nms
lod_append
lod_reset
logsigmoid
lrn
lstm
lstm_unit
LSTMCell
margin_rank_loss
matmul
matrix_nms
maxout
mean
merge_selected_rows
mse_loss
mul
multiclass_nms
MultivariateNormalDiag
natural_exp_decay
noam_decay
Normal
not_equal
one_hot
ones
ones_like
pad
pad2d
pad_constant_like
piecewise_decay
pixel_shuffle
polygon_box_transform
polynomial_decay
pool2d
pool3d
pow
prior_box
prroi_pool
psroi_pool
py_reader
random_crop
range
rank_loss
read_file
reduce_all
reduce_any
reduce_max
reduce_mean
reduce_min
reduce_prod
reduce_sum
relu
relu6
reorder_lod_tensor_by_rank
reshape
resize_bilinear
resize_nearest
resize_trilinear
retinanet_detection_output
retinanet_target_assign
reverse
rnn
RNNCell
roi_align
roi_perspective_transform
roi_pool
rpn_target_assign
sampled_softmax_with_cross_entropy
SampleEmbeddingHelper
sampling_id
scatter
selu
sequence_concat
sequence_conv
sequence_enumerate
sequence_expand
sequence_expand_as
sequence_first_step
sequence_last_step
sequence_mask
sequence_pad
sequence_pool
sequence_reshape
sequence_reverse
sequence_scatter
sequence_slice
sequence_softmax
sequence_unpad
shuffle_channel
sigmoid_cross_entropy_with_logits
sigmoid_focal_loss
sign
similarity_focus
size
smooth_l1
soft_relu
softmax
softplus
softshrink
softsign
space_to_depth
split
squeeze
ssd_loss
stack
StaticRNN
strided_slice
sum
sums
swish
Switch
tanh
tanh_shrink
target_assign
teacher_student_sigmoid_loss
tensor_array_to_tensor
thresholded_relu
topk
TrainingHelper
unbind
Uniform
uniform_random
unique
unique_with_counts
unsqueeze
warpctc
where
While
while_loop
yolo_box
yolov3_loss
zeros
zeros_like
load_op_library
LoDTensor
LoDTensorArray
memory_optimize
metrics
Accuracy
Auc
ChunkEvaluator
CompositeMetric
DetectionMAP
EditDistance
MetricBase
Precision
Recall
nets
glu
img_conv_group
scaled_dot_product_attention
sequence_conv_pool
simple_img_conv_pool
one_hot
optimizer
Momentum
MomentumOptimizer
SGD
SGDOptimizer
profiler
cuda_profiler
profiler
reset_profiler
start_profiler
stop_profiler
reader
PyReader
regularizer
L1Decay
L1DecayRegularizer
L2Decay
L2DecayRegularizer
release_memory
require_version
set_flags
Tensor
transpiler
HashName
paddle.io
Overview
BatchSampler
ChainDataset
ComposeDataset
DataLoader
Dataset
default_collate_fn
default_convert_fn
DistributedBatchSampler
get_worker_info
IterableDataset
random_split
RandomSampler
Sampler
SequenceSampler
SubsetDataset
TensorDataset
WeightedRandomSampler
paddle.jit
Overview
load
ProgramTranslator
save
set_code_level
set_verbosity
to_static
TracedLayer
TranslatedLayer
paddle.metric
Overview
accuracy
Auc
Metric
Precision
Recall
paddle.nn
Overview
AdaptiveAvgPool1D
AdaptiveAvgPool2D
AdaptiveAvgPool3D
AdaptiveMaxPool1D
AdaptiveMaxPool2D
AdaptiveMaxPool3D
AlphaDropout
AvgPool1D
AvgPool2D
AvgPool3D
BatchNorm
BatchNorm1D
BatchNorm2D
BatchNorm3D
BCELoss
BCEWithLogitsLoss
BeamSearchDecoder
Bilinear
BiRNN
ClipGradByGlobalNorm
ClipGradByNorm
ClipGradByValue
Conv1D
Conv1DTranspose
Conv2D
Conv2DTranspose
Conv3D
Conv3DTranspose
CosineSimilarity
CrossEntropyLoss
CTCLoss
Dropout
Dropout2D
Dropout3D
dynamic_decode
ELU
Embedding
Flatten
functional
adaptive_avg_pool1d
adaptive_avg_pool2d
adaptive_avg_pool3d
adaptive_max_pool1d
adaptive_max_pool2d
adaptive_max_pool3d
affine_grid
alpha_dropout
avg_pool1d
avg_pool2d
avg_pool3d
batch_norm
bilinear
binary_cross_entropy
binary_cross_entropy_with_logits
conv1d
conv1d_transpose
conv2d
conv2d_transpose
conv3d
conv3d_transpose
cosine_similarity
cross_entropy
ctc_loss
diag_embed
dice_loss
dropout
dropout2d
dropout3d
elu
elu_
embedding
gather_tree
gelu
grid_sample
hardshrink
hardsigmoid
hardswish
hardtanh
hsigmoid_loss
instance_norm
interpolate
kl_div
l1_loss
label_smooth
layer_norm
leaky_relu
linear
local_response_norm
log_loss
log_sigmoid
log_softmax
margin_ranking_loss
max_pool1d
max_pool2d
max_pool3d
maxout
mse_loss
nll_loss
normalize
npair_loss
one_hot
pad
pixel_shuffle
prelu
relu
relu6
relu_
selu
sigmoid
sigmoid_focal_loss
smooth_l1_loss
softmax
softmax_
softmax_with_cross_entropy
softplus
softshrink
softsign
square_error_cost
swish
tanhshrink
temporal_shift
thresholded_relu
unfold
upsample
GELU
GroupNorm
GRU
GRUCell
Hardshrink
Hardsigmoid
Hardswish
Hardtanh
HSigmoidLoss
initializer
Assign
Bilinear
Constant
KaimingNormal
KaimingUniform
Normal
set_global_initializer
TruncatedNormal
Uniform
XavierNormal
XavierUniform
InstanceNorm1D
InstanceNorm2D
InstanceNorm3D
KLDivLoss
L1Loss
Layer
LayerList
LayerNorm
LeakyReLU
Linear
LocalResponseNorm
LogSigmoid
LogSoftmax
LSTM
LSTMCell
MarginRankingLoss
Maxout
MaxPool1D
MaxPool2D
MaxPool3D
MSELoss
MultiHeadAttention
NLLLoss
Pad1D
Pad2D
Pad3D
PairwiseDistance
ParameterList
PixelShuffle
PReLU
ReLU
ReLU6
RNN
RNNCellBase
SELU
Sequential
Sigmoid
SimpleRNN
SimpleRNNCell
SmoothL1Loss
Softmax
Softplus
Softshrink
Softsign
SpectralNorm
Swish
SyncBatchNorm
Tanh
Tanhshrink
ThresholdedReLU
Transformer
TransformerDecoder
TransformerDecoderLayer
TransformerEncoder
TransformerEncoderLayer
Upsample
UpsamplingBilinear2D
UpsamplingNearest2D
utils
remove_weight_norm
weight_norm
paddle.onnx
export
paddle.optimizer
Overview
Adadelta
Adagrad
Adam
Adamax
AdamW
Lamb
lr
CosineAnnealingDecay
ExponentialDecay
InverseTimeDecay
LambdaDecay
LinearWarmup
LRScheduler
MultiStepDecay
NaturalExpDecay
NoamDecay
PiecewiseDecay
PolynomialDecay
ReduceOnPlateau
StepDecay
Momentum
Optimizer
RMSProp
SGD
paddle.regularizer
L1Decay
L2Decay
paddle.static
accuracy
append_backward
auc
BuildStrategy
CompiledProgram
cpu_places
create_global_var
create_parameter
cuda_places
data
default_main_program
default_startup_program
deserialize_persistables
deserialize_program
device_guard
ExecutionStrategy
Executor
global_scope
gradients
InputSpec
load
load_from_file
load_inference_model
load_program_state
name_scope
nn
batch_norm
bilinear_tensor_product
case
conv2d
conv2d_transpose
conv3d
conv3d_transpose
crf_decoding
data_norm
deform_conv2d
embedding
fc
group_norm
instance_norm
layer_norm
multi_box_head
nce
prelu
row_conv
spectral_norm
switch_case
ParallelExecutor
Print
Program
program_guard
py_func
save
save_inference_model
save_to_file
scope_guard
serialize_persistables
serialize_program
set_program_state
Variable
WeightNormParamAttr
xpu_places
paddle.sysconfig
get_include
get_lib
paddle.text
Overview
datasets
Conll05st
Imdb
Imikolov
Movielens
UCIHousing
WMT14
WMT16
paddle.utils
Overview
deprecated
download
get_weights_path_from_url
run_check
unique_name
generate
guard
switch
paddle.vision
Overview
datasets
Cifar10
Cifar100
DatasetFolder
FashionMNIST
Flowers
ImageFolder
MNIST
VOC2012
get_image_backend
image_load
models
LeNet
mobilenet_v1
mobilenet_v2
MobileNetV1
MobileNetV2
ResNet
resnet101
resnet152
resnet18
resnet34
resnet50
VGG
vgg11
vgg13
vgg16
vgg19
ops
deform_conv2d
DeformConv2D
yolo_box
yolo_loss
set_image_backend
transforms
adjust_brightness
adjust_contrast
adjust_hue
BaseTransform
BrightnessTransform
center_crop
CenterCrop
ColorJitter
Compose
ContrastTransform
crop
Grayscale
hflip
HueTransform
Normalize
normalize
Pad
pad
RandomCrop
RandomHorizontalFlip
RandomResizedCrop
RandomRotation
RandomVerticalFlip
Resize
resize
rotate
SaturationTransform
to_grayscale
to_tensor
ToTensor
Transpose
vflip
FAQ
Installation FAQ
Framework FAQ
Other FAQ
Release Note
This document was built with BookStack.