Command-Line Interface
Flink provides a Command-Line Interface (CLI) to run programs that are packaged as JAR files and to control their execution. The CLI is part of any Flink setup, available in local single node setups and in distributed setups. It is located under <flink-home>/bin/flink
and connects by default to the running JobManager that was started from the same installation directory.
The command line can be used to
- submit jobs for execution,
- cancel a running job,
- provide information about a job,
- list running and waiting jobs,
- trigger and dispose savepoints.
A prerequisite to using the command line interface is that the JobManager has been started (via <flink-home>/bin/start-cluster.sh
) or that another deployment target such as YARN or Kubernetes is available.
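For a local setup, a quick sanity check is to start the cluster and run a first CLI action against it (a minimal sketch; the list action is described below):
./bin/start-cluster.sh
./bin/flink list
# an empty job list confirms that the CLI can reach the JobManager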
Deployment targets
Flink has the concept of executors for defining available deployment targets. You can see the available executors in the output of bin/flink --help
, for example:
Options for Generic CLI mode:
-D <property=value> Generic configuration options for
execution/deployment and for the configured executor.
The available options can be found at
https://ci.apache.org/projects/flink/flink-docs-stabl
e/ops/config.html
-t,--target <arg> The deployment target for the given application,
which is equivalent to the "execution.target" config
option. The currently available targets are:
"remote", "local", "kubernetes-session", "yarn-per-job",
"yarn-session", "yarn-application" and "kubernetes-application".
When running one of the bin/flink
actions, the executor is specified using the --executor
option.
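For example, assuming a working YARN environment, the WordCount example could be submitted to a per-job YARN cluster via the generic CLI mode; the -D memory setting is illustrative, not required:
./bin/flink run -t yarn-per-job \
-Dtaskmanager.memory.process.size=2g \
./examples/batch/WordCount.jar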
Examples
Job Submission Examples
These examples show how to submit a job using the CLI script.
Run example program with no arguments:
./bin/flink run ./examples/batch/WordCount.jar
Run example program with arguments for input and result files:
./bin/flink run ./examples/batch/WordCount.jar \
--input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out
Run example program with parallelism 16 and arguments for input and result files:
./bin/flink run -p 16 ./examples/batch/WordCount.jar \
--input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out
Run example program with flink log output disabled:
./bin/flink run -q ./examples/batch/WordCount.jar
Run example program in detached mode:
./bin/flink run -d ./examples/batch/WordCount.jar
Run example program on a specific JobManager:
./bin/flink run -m myJMHost:8081 \
./examples/batch/WordCount.jar \
--input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out
Run example program with a specific class as an entry point:
./bin/flink run -c org.apache.flink.examples.java.wordcount.WordCount \
./examples/batch/WordCount.jar \
--input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out
Run example program using a per-job YARN cluster with 2 TaskManagers:
./bin/flink run -m yarn-cluster \
./examples/batch/WordCount.jar \
--input hdfs:///user/hamlet.txt --output hdfs:///user/wordcount_out
Note: When submitting a Python job via flink run
, Flink will invoke the "python" command. Please run the following command to confirm that the "python" command in the current environment points to Python 3.5, 3.6 or 3.7:
$ python --version
# the version printed here must be 3.5, 3.6 or 3.7
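If "python" does not point to a supported version, one common approach (an assumption, not the only option) is to activate a virtual environment built on a supported interpreter before submitting:
virtualenv --python /usr/bin/python3.7 venv # interpreter path is an example
. venv/bin/activate
python --version # should now print 3.7.x
./bin/flink run -py WordCount.py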
Submit a Python Table job:
./bin/flink run -py WordCount.py
Submit a Python Table job with multiple dependencies:
./bin/flink run -py examples/python/table/batch/word_count.py \
-pyfs file:///user.txt,hdfs:///$namenode_address/username.txt
Submit a Python Table job, specifying the dependent JAR file:
./bin/flink run -py examples/python/table/batch/word_count.py -j <jarFile>
Submit a Python Table job with multiple dependencies, where the main entry point of the Python job is specified via the pym option:
./bin/flink run -pym batch.word_count -pyfs examples/python/table/batch
Submit a Python Table job with parallelism 16:
./bin/flink run -p 16 -py examples/python/table/batch/word_count.py
Submit a Python Table job with Flink log output disabled:
./bin/flink run -q -py examples/python/table/batch/word_count.py
Submit a Python Table job in detached mode:
./bin/flink run -d -py examples/python/table/batch/word_count.py
Submit a Python Table job to a specific JobManager:
./bin/flink run -m myJMHost:8081 \
-py examples/python/table/batch/word_count.py
Submit a Python Table job to a per-job YARN cluster with 2 TaskManagers:
./bin/flink run -m yarn-cluster \
-py examples/python/table/batch/word_count.py
Job Management Examples
Display the optimized execution plan for the WordCount example program as JSON:
./bin/flink info ./examples/batch/WordCount.jar \
--input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out
List scheduled and running jobs (including their JobIDs):
./bin/flink list
List scheduled jobs (including their JobIDs):
./bin/flink list -s
List running jobs (including their JobIDs):
./bin/flink list -r
List all existing jobs (including their JobIDs):
./bin/flink list -a
List running Flink jobs inside Flink YARN session:
./bin/flink list -m yarn-cluster -yid <yarnApplicationID> -r
Cancel a job:
./bin/flink cancel <jobID>
Cancel a job with a savepoint (deprecated; use “stop” instead):
./bin/flink cancel -s [targetDirectory] <jobID>
Gracefully stop a job with a savepoint (streaming jobs only):
./bin/flink stop [-p targetDirectory] [-d] <jobID>
Savepoints
Savepoints are controlled via the command line client:
Trigger a Savepoint
./bin/flink savepoint <jobId> [savepointDirectory]
This will trigger a savepoint for the job with ID jobId
and return the path of the created savepoint. You need this path to restore and dispose savepoints.
Furthermore, you can optionally specify a target file system directory to store the savepoint in. The directory needs to be accessible by the JobManager.
If you don’t specify a target directory, you need to have configured a default directory. Otherwise, triggering the savepoint will fail.
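The default directory is configured via the state.savepoints.dir option (see the "stop" action options below); for example, in conf/flink-conf.yaml, with an illustrative HDFS path:
state.savepoints.dir: hdfs:///flink/savepoints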
Trigger a Savepoint with YARN
./bin/flink savepoint <jobId> [savepointDirectory] -yid <yarnAppId>
This will trigger a savepoint for the job with ID jobId
and YARN application ID yarnAppId
, and return the path of the created savepoint.
Everything else is the same as described in the above Trigger a Savepoint section.
Stop
Use the stop
action to gracefully stop a running streaming job with a savepoint.
./bin/flink stop [-p targetDirectory] [-d] <jobID>
A “stop” call is a more graceful way of stopping a running streaming job, as the “stop” signal flows from source to sink. When the user requests to stop a job, all sources will be requested to send the last checkpoint barrier that will trigger a savepoint, and after the successful completion of that savepoint, they will finish by calling their cancel()
method. If the -d
flag is specified, then a MAX_WATERMARK
will be emitted before the last checkpoint barrier. This will cause all registered event-time timers to fire, thus flushing out any state that is waiting for a specific watermark, e.g. windows. The job will keep running until all sources have properly shut down. This allows the job to finish processing all in-flight data.
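Putting this together, a drained stop that writes its savepoint to an explicit directory could look like this; the target directory is illustrative:
./bin/flink stop -d -p hdfs:///flink/savepoints <jobID>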
Cancel with a savepoint (deprecated)
You can atomically trigger a savepoint and cancel a job.
./bin/flink cancel -s [savepointDirectory] <jobID>
If no savepoint directory is specified, a default savepoint directory needs to be configured for the Flink installation (see Savepoints).
The job will only be cancelled if the savepoint succeeds.
Note: Cancelling a job with savepoint is deprecated. Use “stop” instead.
Restore a savepoint
./bin/flink run -s <savepointPath> ...
The run command has a savepoint flag to submit a job, which restores its state from a savepoint. The savepoint path is returned by the savepoint trigger command.
By default, we try to match all savepoint state to the job being submitted. If you want to allow skipping savepoint state that cannot be restored with the new job, you can set the allowNonRestoredState
flag. You need to allow this if you removed an operator from your program that was part of the program when the savepoint was triggered and you still want to use the savepoint.
./bin/flink run -s <savepointPath> -n ...
This is useful if your program dropped an operator that was part of the savepoint.
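For instance, restoring the WordCount example from a savepoint while skipping non-restorable state could look like this; the savepoint path is illustrative:
./bin/flink run -s hdfs:///flink/savepoint-1537 -n ./examples/batch/WordCount.jar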
Dispose a savepoint
./bin/flink savepoint -d <savepointPath>
Disposes the savepoint at the given path. The savepoint path is returned by the savepoint trigger command.
If you use custom state instances (for example custom reducing state or RocksDB state), you have to specify the path to the program JAR with which the savepoint was triggered in order to dispose the savepoint with the user code class loader:
./bin/flink savepoint -d <savepointPath> -j <jarFile>
Otherwise, you will run into a ClassNotFoundException
.
Usage
The command line syntax is as follows:
./flink <ACTION> [OPTIONS] [ARGUMENTS]
The following actions are available:
Action "run" compiles and runs a program.
Syntax: run [OPTIONS] <jar-file> <arguments>
"run" action options:
-c,--class <classname> Class with the program entry point
("main()" method). Only needed if the
JAR file does not specify the class in
its manifest.
-C,--classpath <url> Adds a URL to each user code
classloader on all nodes in the
cluster. The paths must specify a
protocol (e.g. file://) and be
accessible on all nodes (e.g. by means
of a NFS share). You can use this
option multiple times for specifying
more than one URL. The protocol must
be supported by the {@link
java.net.URLClassLoader}.
-d,--detached If present, runs the job in detached
mode
-n,--allowNonRestoredState Allow to skip savepoint state that
cannot be restored. You need to allow
this if you removed an operator from
your program that was part of the
program when the savepoint was
triggered.
-p,--parallelism <parallelism> The parallelism with which to run the
program. Optional flag to override the
default value specified in the
configuration.
-py,--python <pythonFile> Python script with the program entry
point. The dependent resources can be
configured with the `--pyFiles`
option.
-pyarch,--pyArchives <arg> Add python archive files for job. The
archive files will be extracted to the
working directory of python UDF
worker. Currently only zip-format is
supported. For each archive file, a
target directory can be specified. If
the target directory name is specified,
the archive file will be extracted to a
directory with the specified name.
Otherwise, the archive file will be
extracted to a directory with the same
name as the archive
file. The files uploaded via this
option are accessible via relative
path. '#' could be used as the
separator of the archive file path and
the target directory name. Comma (',')
could be used as the separator to
specify multiple archive files. This
option can be used to upload the
virtual environment, the data files
used in Python UDF (e.g.: --pyArchives
file:///tmp/py37.zip,file:///tmp/data.
zip#data --pyExecutable
py37.zip/py37/bin/python). The data
files could be accessed in Python UDF,
e.g.: f = open('data/data.txt', 'r').
-pyexec,--pyExecutable <arg> Specify the path of the python
interpreter used to execute the python
UDF worker (e.g.: --pyExecutable
/usr/local/bin/python3). The python
UDF worker depends on Python 3.5, 3.6
or 3.7, Apache Beam
(version == 2.19.0), Pip (version >= 7.1.0)
and SetupTools (version >= 37.0.0).
Please ensure that the specified environment
meets the above requirements.
-pyfs,--pyFiles <pythonFiles> Attach custom python files for job.
These files will be added to the
PYTHONPATH of both the local client
and the remote python UDF worker. The
standard python resource file suffixes
such as .py/.egg/.zip or directory are
all supported. Comma (',') could be
used as the separator to specify
multiple files (e.g.: --pyFiles
file:///tmp/myresource.zip,hdfs:///$na
menode_address/myresource2.zip).
-pym,--pyModule <pythonModule> Python module with the program entry
point. This option must be used in
conjunction with `--pyFiles`.
-pyreq,--pyRequirements <arg> Specify a requirements.txt file which
defines the third-party dependencies.
These dependencies will be installed
and added to the PYTHONPATH of the
python UDF worker. A directory which
contains the installation packages of
these dependencies could be specified
optionally. Use '#' as the separator
if the optional parameter exists
(e.g.: --pyRequirements
file:///tmp/requirements.txt#file:///t
mp/cached_dir).
-s,--fromSavepoint <savepointPath> Path to a savepoint to restore the job
from (for example
hdfs:///flink/savepoint-1537).
-sae,--shutdownOnAttachedExit If the job is submitted in attached
mode, perform a best-effort cluster
shutdown when the CLI is terminated
abruptly, e.g., in response to a user
interrupt, such as typing Ctrl + C.
Options for yarn-cluster mode:
-d,--detached If present, runs the job in detached
mode
-m,--jobmanager <arg> Address of the JobManager to
which to connect. Use this flag to
connect to a different JobManager than
the one specified in the
configuration.
-yat,--yarnapplicationType <arg> Set a custom application type for the
application on YARN
-yD <property=value> use value for given property
-yd,--yarndetached If present, runs the job in detached
mode (deprecated; use non-YARN
specific option instead)
-yh,--yarnhelp Help for the Yarn session CLI.
-yid,--yarnapplicationId <arg> Attach to running YARN session
-yj,--yarnjar <arg> Path to Flink jar file
-yjm,--yarnjobManagerMemory <arg> Memory for JobManager Container with
optional unit (default: MB)
-ynl,--yarnnodeLabel <arg> Specify YARN node label for the YARN
application
-ynm,--yarnname <arg> Set a custom name for the application
on YARN
-yq,--yarnquery Display available YARN resources
(memory, cores)
-yqu,--yarnqueue <arg> Specify YARN queue.
-ys,--yarnslots <arg> Number of slots per TaskManager
-yt,--yarnship <arg> Ship files in the specified directory
(t for transfer)
-ytm,--yarntaskManagerMemory <arg> Memory per TaskManager Container with
optional unit (default: MB)
-yz,--yarnzookeeperNamespace <arg> Namespace to create the Zookeeper
sub-paths for high availability mode
-z,--zookeeperNamespace <arg> Namespace to create the Zookeeper
sub-paths for high availability mode
Options for Generic CLI mode:
-D <property=value> Generic configuration options for
execution/deployment and for the configured executor.
The available options can be found at
https://ci.apache.org/projects/flink/flink-docs-stabl
e/ops/config.html
-t,--target <arg> The deployment target for the given application,
which is equivalent to the "execution.target" config
option. The currently available targets are:
"remote", "local", "kubernetes-session", "yarn-per-job",
"yarn-session", "yarn-application" and "kubernetes-application".
Options for default mode:
-m,--jobmanager <arg> Address of the JobManager to which
to connect. Use this flag to connect to a
different JobManager than the one specified
in the configuration.
-z,--zookeeperNamespace <arg> Namespace to create the Zookeeper sub-paths
for high availability mode
Action "info" shows the optimized execution plan of the program (JSON).
Syntax: info [OPTIONS] <jar-file> <arguments>
"info" action options:
-c,--class <classname> Class with the program entry point
("main()" method). Only needed if the JAR
file does not specify the class in its
manifest.
-p,--parallelism <parallelism> The parallelism with which to run the
program. Optional flag to override the
default value specified in the
configuration.
Action "list" lists running and scheduled programs.
Syntax: list [OPTIONS]
"list" action options:
-a,--all Show all programs and their JobIDs
-r,--running Show only running programs and their JobIDs
-s,--scheduled Show only scheduled programs and their JobIDs
Options for yarn-cluster mode:
-m,--jobmanager <arg> Address of the JobManager to
which to connect. Use this flag to connect
to a different JobManager than the one
specified in the configuration.
-yid,--yarnapplicationId <arg> Attach to running YARN session
-z,--zookeeperNamespace <arg> Namespace to create the Zookeeper
sub-paths for high availability mode
Options for Generic CLI mode:
-D <property=value> Generic configuration options for
execution/deployment and for the configured executor.
The available options can be found at
https://ci.apache.org/projects/flink/flink-docs-stabl
e/ops/config.html
-t,--target <arg> The deployment target for the given application,
which is equivalent to the "execution.target" config
option. The currently available targets are:
"remote", "local", "kubernetes-session", "yarn-per-job",
"yarn-session", "yarn-application" and "kubernetes-application".
Options for default mode:
-m,--jobmanager <arg> Address of the JobManager to which
to connect. Use this flag to connect to a
different JobManager than the one specified
in the configuration.
-z,--zookeeperNamespace <arg> Namespace to create the Zookeeper sub-paths
for high availability mode
Action "stop" stops a running program with a savepoint (streaming jobs only).
Syntax: stop [OPTIONS] <Job ID>
"stop" action options:
-d,--drain Send MAX_WATERMARK before taking the
savepoint and stopping the pipeline.
-p,--savepointPath <savepointPath> Path to the savepoint (for example
hdfs:///flink/savepoint-1537). If no
directory is specified, the configured
default will be used
("state.savepoints.dir").
Options for yarn-cluster mode:
-m,--jobmanager <arg> Address of the JobManager to
which to connect. Use this flag to connect
to a different JobManager than the one
specified in the configuration.
-yid,--yarnapplicationId <arg> Attach to running YARN session
-z,--zookeeperNamespace <arg> Namespace to create the Zookeeper
sub-paths for high availability mode
Options for Generic CLI mode:
-D <property=value> Generic configuration options for
execution/deployment and for the configured executor.
The available options can be found at
https://ci.apache.org/projects/flink/flink-docs-stabl
e/ops/config.html
-t,--target <arg> The deployment target for the given application,
which is equivalent to the "execution.target" config
option. The currently available targets are:
"remote", "local", "kubernetes-session", "yarn-per-job",
"yarn-session", "yarn-application" and "kubernetes-application".
Options for default mode:
-m,--jobmanager <arg> Address of the JobManager to which
to connect. Use this flag to connect to a
different JobManager than the one specified
in the configuration.
-z,--zookeeperNamespace <arg> Namespace to create the Zookeeper sub-paths
for high availability mode
Action "cancel" cancels a running program.
Syntax: cancel [OPTIONS] <Job ID>
"cancel" action options:
-s,--withSavepoint <targetDirectory> **DEPRECATION WARNING**: Cancelling
a job with savepoint is deprecated.
Use "stop" instead.
Trigger savepoint and cancel job.
The target directory is optional. If
no directory is specified, the
configured default directory
(state.savepoints.dir) is used.
Options for yarn-cluster mode:
-m,--jobmanager <arg> Address of the JobManager to
which to connect. Use this flag to connect
to a different JobManager than the one
specified in the configuration.
-yid,--yarnapplicationId <arg> Attach to running YARN session
-z,--zookeeperNamespace <arg> Namespace to create the Zookeeper
sub-paths for high availability mode
Options for Generic CLI mode:
-D <property=value> Generic configuration options for
execution/deployment and for the configured executor.
The available options can be found at
https://ci.apache.org/projects/flink/flink-docs-stabl
e/ops/config.html
-t,--target <arg> The deployment target for the given application,
which is equivalent to the "execution.target" config
option. The currently available targets are:
"remote", "local", "kubernetes-session", "yarn-per-job",
"yarn-session", "yarn-application" and "kubernetes-application".
Options for default mode:
-m,--jobmanager <arg> Address of the JobManager to which
to connect. Use this flag to connect to a
different JobManager than the one specified
in the configuration.
-z,--zookeeperNamespace <arg> Namespace to create the Zookeeper sub-paths
for high availability mode
Action "savepoint" triggers savepoints for a running job or disposes existing ones.
Syntax: savepoint [OPTIONS] <Job ID> [<target directory>]
"savepoint" action options:
-d,--dispose <arg> Path of savepoint to dispose.
-j,--jarfile <jarfile> Flink program JAR file.
Options for yarn-cluster mode:
-m,--jobmanager <arg> Address of the JobManager to
which to connect. Use this flag to connect
to a different JobManager than the one
specified in the configuration.
-yid,--yarnapplicationId <arg> Attach to running YARN session
-z,--zookeeperNamespace <arg> Namespace to create the Zookeeper
sub-paths for high availability mode
Options for Generic CLI mode:
-D <property=value> Generic configuration options for
execution/deployment and for the configured executor.
The available options can be found at
https://ci.apache.org/projects/flink/flink-docs-stabl
e/ops/config.html
-t,--target <arg> The deployment target for the given application,
which is equivalent to the "execution.target" config
option. The currently available targets are:
"remote", "local", "kubernetes-session", "yarn-per-job",
"yarn-session", "yarn-application" and "kubernetes-application".
Options for default mode:
-m,--jobmanager <arg> Address of the JobManager to which
to connect. Use this flag to connect to a
different JobManager than the one specified
in the configuration.
-z,--zookeeperNamespace <arg> Namespace to create the Zookeeper sub-paths
for high availability mode