Pulsar configuration
You can manage Pulsar configuration through the configuration files in the `conf` directory of a Pulsar installation.
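As a quick sketch of what editing one of these files looks like, the snippet below updates a single key in a scratch properties file; the key name mirrors a real bookie setting, but the file path and new value are illustrative only:

```shell
# Illustrative only: write a sample value into a scratch conf file,
# then update one key the way you might edit conf/bookkeeper.conf.
conf=$(mktemp)
printf 'bookiePort=3181\njournalDirectory=data/bookkeeper/journal\n' > "$conf"

# Rewrite the bookiePort line in place (sed keeps a .bak backup of the original).
sed -i.bak 's/^bookiePort=.*/bookiePort=3182/' "$conf"

grep '^bookiePort=' "$conf"
```

Note that most settings only take effect after the corresponding service is restarted.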
BookKeeper
BookKeeper is a replicated log storage system that Pulsar uses for durable storage of all messages.
Name | Description | Default |
---|---|---|
bookiePort | The port on which the bookie server listens. | 3181 |
allowLoopback | Whether the bookie is allowed to use a loopback interface as its primary interface (i.e. the interface used to establish its identity). By default, loopback interfaces are not allowed as primary interfaces. Using a loopback interface as the primary interface usually indicates a misconfiguration. For example, in some VPS setups the hostname is often not configured, or resolves to 127.0.0.1. If that is the case, then all bookies in the cluster will establish their identities as 127.0.0.1:3181, and only one will be able to join the cluster. For VPSs configured like this, you should explicitly set the listening interface. | false |
listeningInterface | The network interface on which to listen. By default, the bookie listens on all interfaces. | eth0 |
advertisedAddress | Configure a specific hostname or IP address that the bookie should use to advertise itself to clients. By default, the bookie advertises either its own IP address or hostname, depending on the `listeningInterface` and `useHostNameAsBookieID` settings. | N/A |
allowMultipleDirsUnderSameDiskPartition | Configure the bookie to enable/disable multiple ledger/index/journal directories in the same file-system disk partition. | false |
minUsableSizeForIndexFileCreation | The minimum safe usable size (in bytes) available in the index directory for the bookie to create index files while replaying the journal when starting in read-only mode. | 1073741824 |
journalDirectory | The directory to which BookKeeper outputs its write-ahead log (WAL). | data/bookkeeper/journal |
journalDirectories | The directories to which BookKeeper outputs its write-ahead log. Multiple directories are available, separated by `,`. For example: `journalDirectories=/tmp/bk-journal1,/tmp/bk-journal2`. If `journalDirectories` is set, bookies skip `journalDirectory` and use this setting. | /tmp/bk-journal |
ledgerDirectories | The directory to which BookKeeper outputs ledger snapshots. Multiple directories can be defined to store snapshots, separated by `,`, for example: `ledgerDirectories=/tmp/bk1-data,/tmp/bk2-data`. Ideally, ledger directories and the journal directory are on different devices, which reduces contention between random I/O and sequential writes. It is possible to run with a single disk, but performance is significantly lower. | data/bookkeeper/ledgers |
ledgerManagerType | The type of ledger manager used to manage how ledgers are stored, managed, and garbage collected. See BookKeeper Internals for more information. | hierarchical |
zkLedgersRootPath | The ZooKeeper path used to store ledger metadata. This parameter is used only by the ZooKeeper metadata manager, as the root znode to store all ledger information. | /ledgers |
ledgerStorageClass | Ledger storage implementation class | org.apache.bookkeeper.bookie.storage.ldb.DbLedgerStorage |
entryLogFilePreallocationEnabled | Enable or disable entry logger preallocation | true |
logSizeLimit | Maximum entry log file size in bytes. A new entry log file is created when the old one reaches this size limit. | 2147483648 |
minorCompactionThreshold | Threshold of minor compaction. Entry log files whose remaining size percentage is below this threshold are compacted in a minor compaction. Setting this to 0 disables minor compaction. | 0.2 |
minorCompactionInterval | The time interval, in seconds, at which minor compaction runs. Setting this to 0 disables minor compaction. Note: should be greater than `gcWaitTime`. | 3600 |
majorCompactionThreshold | Threshold of major compaction. Entry log files whose remaining size percentage is below this threshold are compacted in a major compaction. Entry log files whose remaining size percentage is still above the threshold are never compacted. Setting this to 0 disables major compaction. | 0.5 |
majorCompactionInterval | The time interval, in seconds, at which major compaction runs. Setting this to 0 disables major compaction. Note: should be greater than `gcWaitTime`. | 86400 |
readOnlyModeEnabled | If `readOnlyModeEnabled=true`, then when all ledger disks are full, the bookie is converted to read-only mode and serves only read requests. Otherwise the bookie is shut down. | true |
forceReadOnlyBookie | Whether the bookie is forced to start in read-only mode. | false |
persistBookieStatusEnabled | Persist the bookie status locally on disk, so that bookies can keep their status upon restarts. | false |
compactionMaxOutstandingRequests | Sets the maximum number of entries that can be compacted without flushing. During compaction, entries are written to the entry log and new offsets are cached in memory. Once the entry log is flushed, the index is updated with the new offsets. This parameter controls the number of entries added to the entry log before a flush is forced. A higher value for this parameter means more memory is used for offsets. Each offset consists of three longs. This parameter should not be modified unless you fully understand the consequences. | 100000 |
compactionRate | The rate at which compaction reads entries, in adds per second. | 1000 |
isThrottleByBytes | Throttle compaction by bytes or by entries. | false |
compactionRateByEntries | The rate at which compaction reads entries, in adds per second. | 1000 |
compactionRateByBytes | Set the rate at which compaction reads entries, in bytes added per second. | 1000000 |
journalMaxSizeMB | Maximum journal file size in MB. A new journal file is created when the old one reaches this size limit. | 2048 |
journalMaxBackups | The max number of old journal files to keep. Keeping a number of old journal files can help with data recovery in special cases. | 5 |
journalPreAllocSizeMB | How much space to pre-allocate at a time in the journal. | 16 |
journalWriteBufferSizeKB | Size of the write buffer used for the journal. | 64 |
journalRemoveFromPageCache | Whether pages should be removed from the page cache after a force write. | true |
journalAdaptiveGroupWrites | Whether to group journal force writes, which optimizes group commit for higher throughput. | true |
journalMaxGroupWaitMSec | Maximum latency to impose on a journal write to achieve grouping. | 1 |
journalAlignmentSize | All journal writes and commits should be aligned to the given size. | 4096 |
journalBufferedWritesThreshold | Maximum writes to buffer to achieve grouping. | 524288 |
journalFlushWhenQueueEmpty | Whether to flush the journal when the journal queue is empty. | false |
numJournalCallbackThreads | The number of threads that handle journal callbacks. | 8 |
openLedgerRereplicationGracePeriod | The grace period, in milliseconds, that the replication worker waits before fencing and replicating a ledger fragment that's still being written to upon bookie failure. | 30000 |
rereplicationEntryBatchSize | The max number of entries to keep in a fragment for re-replication. | 100 |
autoRecoveryDaemonEnabled | Whether the bookie itself can start the auto-recovery service. | true |
lostBookieRecoveryDelay | How long to wait, in seconds, before starting auto-recovery of a lost bookie. | 0 |
gcWaitTime | The interval, in milliseconds, to trigger the next garbage collection. Since garbage collection runs in the background, too frequent garbage collection degrades performance. It is better to use a larger GC interval if there is enough disk capacity. | 900000 |
gcOverreplicatedLedgerWaitTime | The interval, in milliseconds, to trigger the next garbage collection of over-replicated ledgers. This should not happen frequently, since we read the metadata of all ledgers on the bookie from ZooKeeper. | 86400000 |
flushInterval | The interval, in milliseconds, to flush ledger index pages to disk. Flushing index files introduces much random disk I/O. If the journal directory and ledger directories are on different devices, flushing does not affect performance. But if they are on the same device, performance degrades significantly with frequent flushing. You can consider increasing the flush interval for better performance, but you will need more time to recover when the bookie server restarts after a failure. | 60000 |
bookieDeathWatchInterval | Interval, in milliseconds, to watch whether a bookie is dead. | 1000 |
allowStorageExpansion | Allow the bookie storage to expand. Newly added ledger and index dirs must be empty. | false |
zkServers | A list of servers on which ZooKeeper is running. The server list is comma separated, for example: `zkServers=zk1:2181,zk2:2181,zk3:2181`. | localhost:2181 |
zkTimeout | ZooKeeper client session timeout in milliseconds. The bookie server exits if it receives SESSION_EXPIRED, because it has been partitioned from ZooKeeper for longer than the session timeout. JVM garbage collection and disk I/O can cause SESSION_EXPIRED. Increasing this value can help avoid the issue. | 30000 |
zkRetryBackoffStartMs | The start time, in milliseconds, of the ZooKeeper client backoff retry. | 1000 |
zkRetryBackoffMaxMs | The maximum time, in milliseconds, of the ZooKeeper client backoff retry. | 10000 |
zkEnableSecurity | Set ACLs on every node written on ZooKeeper, allowing users to read and write BookKeeper metadata stored on ZooKeeper. In order for ACLs to work, you need to set up ZooKeeper JAAS authentication. All bookies and clients need to share the same user, which is usually done using Kerberos authentication. See the ZooKeeper documentation. | false |
httpServerEnabled | Enable/disable starting the admin HTTP server. | false |
httpServerPort | The HTTP server port to listen on. By default, the value is `8080`. If you want to keep it consistent with the Prometheus stats provider, you can set it to `8000`. | 8080 |
httpServerClass | The HTTP server implementation class. | org.apache.bookkeeper.http.vertx.VertxHttpServer |
serverTcpNoDelay | This setting is used to enable/disable Nagle's algorithm, which improves the efficiency of TCP/IP networks by reducing the number of packets sent over the network. If you are sending many small messages, such that more than one can fit in a single IP packet, setting server.tcpnodelay to false to enable Nagle's algorithm can provide better performance. | true |
serverSockKeepalive | This setting is used to send keep-alive messages on connection-oriented sockets. | true |
serverTcpLinger | The socket linger timeout on close. When enabled, a close or shutdown does not return until all queued messages for the socket have been successfully sent or the linger timeout has been reached. Otherwise, the call returns immediately and the closing is done in the background. | 0 |
byteBufAllocatorSizeMax | The maximum buf size that the ByteBuf allocator can accept. | 1048576 |
nettyMaxFrameSizeBytes | The maximum netty frame size in bytes. Any message received larger than this is rejected. | 5253120 |
openFileLimit | Max number of ledger index files that can be opened on the bookie server. If the number of ledger index files reaches this limit, the bookie server starts to swap some ledgers from memory to disk. Too frequent swapping affects performance. You can tune this number according to your requirements to gain performance. | 0 |
pageSize | Size of a ledger-cache index page in bytes. A larger index page can improve the performance of writing pages to disk. It is efficient when you have a small number of ledgers and those ledgers have a similar number of entries. If you have a large number of ledgers and each ledger has fewer entries, smaller index pages improve memory usage. | 8192 |
pageLimit | How many index pages are provided in the ledger cache. If the number of index pages reaches this limit, the bookie server starts to swap some ledgers from memory to disk. You can increase this value when you find that swapping becomes more frequent. But make sure pageLimit*pageSize does not exceed the JVM max memory limit, or you will get an OutOfMemoryException. In general, increasing pageLimit and using a smaller index page gives better performance for a large number of ledgers with fewer entries. If pageLimit is -1, the bookie server computes the limit on the number of index pages based on 1/3 of the JVM memory. | 0 |
readOnlyModeEnabled | If all configured ledger directories are full, only clients' read requests are served. If `readOnlyModeEnabled=true`, then when all ledger disks are full, the bookie is converted to read-only mode and serves only read requests. Otherwise the bookie is shut down. By default, this is disabled. | true |
diskUsageThreshold | For each ledger directory, the maximum disk space that can be used. The default value is 0.95f, meaning at most 95% of the disk can be used, after which nothing is written to that partition. If all ledger directory partitions are full, the bookie turns to read-only mode if 'readOnlyModeEnabled=true' is set; otherwise it shuts down. Valid values should be between 0 and 1 (exclusive). | 0.95 |
diskCheckInterval | Disk check interval in milliseconds; the interval at which to check the ledger directories' usage. | 10000 |
auditorPeriodicCheckInterval | The interval at which the auditor checks all ledgers in the cluster. By default, this runs once a week. The interval is set in seconds. To disable the periodic check completely, set this to 0. Note that periodic checking puts extra load on the cluster, so it should not run more frequently than once a day. | 604800 |
sortedLedgerStorageEnabled | Whether sorted-ledger storage is enabled. | true |
auditorPeriodicBookieCheckInterval | The interval of the auditor bookie check. The auditor bookie check checks ledger metadata to see which bookies should contain entries for each ledger. If a bookie that should contain entries is unavailable, the ledger containing those entries is marked for recovery. Setting this to 0 disables the periodic check. Bookie checks still run when a bookie fails. The interval is in seconds. | 86400 |
numAddWorkerThreads | The number of threads that handle write requests. If set to 0, write requests are handled directly by the netty threads. | 0 |
numReadWorkerThreads | The number of threads that handle read requests. If set to 0, read requests are handled directly by the netty threads. | 8 |
numHighPriorityWorkerThreads | The number of threads used for high-priority requests (e.g. recovery reads and adds, and fencing). | 8 |
maxPendingReadRequestsPerThread | If read worker threads are enabled, limit the number of pending requests to avoid the executor queue growing indefinitely. | 2500 |
maxPendingAddRequestsPerThread | If add worker threads are enabled, limit the number of pending add requests per thread to avoid the executor queue growing indefinitely. | 10000 |
isForceGCAllowWhenNoSpace | Whether force compaction is allowed when the disk is full or almost full. Forcing GC may get some space back, but it may also fill up disk space more quickly. This is because new log files are created before GC, while old garbage log files are deleted after GC. | false |
verifyMetadataOnGC | True if the bookie should double-check readMetadata prior to GC. | false |
flushEntrylogBytes | Entry log flush interval in bytes. Flushing in smaller chunks but more frequently reduces spikes in disk I/O. Flushing too frequently may also affect performance negatively. | 268435456 |
readBufferSizeBytes | The number of bytes used as capacity for BufferedReadChannel. | 4096 |
writeBufferSizeBytes | The number of bytes used as capacity for the write buffer. | 65536 |
useHostNameAsBookieID | Whether the bookie should use its hostname to register with the coordination service (e.g. ZooKeeper). When false, the bookie uses its IP address for the registration. | false |
bookieId | If you want to customize a bookie ID or use a dynamic network address for a bookie, you can set the `bookieId`. The bookie advertises itself using the `bookieId` rather than the BookieSocketAddress (`hostname:port` or `IP:port`). If you set the `bookieId`, then `useHostNameAsBookieID` does not take effect. The `bookieId` is a non-empty string that can contain ASCII digits and letters ([a-zA-Z0-9]), colons, dashes, and dots. For more information about `bookieId`, see here. | N/A |
allowEphemeralPorts | Whether the bookie is allowed to use an ephemeral port (port 0) as its server port. By default, an ephemeral port is not allowed. Using an ephemeral port as the service port usually indicates a misconfiguration. However, in unit tests, using an ephemeral port resolves port conflicts and allows tests to run in parallel. | false |
enableLocalTransport | Whether the bookie is allowed to listen for BookKeeper clients running on the local JVM. | false |
disableServerSocketBind | Whether the bookie is allowed to disable bind on network interfaces. This bookie will be available only to BookKeeper clients running on the local JVM. | false |
skipListArenaChunkSize | The chunk size to allocate in the `org.apache.bookkeeper.bookie.SkipListArena`. | 4194304 |
skipListArenaMaxAllocSize | The maximum size that we should allocate from the skiplist arena. Allocations larger than this should be allocated directly by the VM to avoid fragmentation. | 131072 |
bookieAuthProviderFactoryClass | The factory class name of the bookie authentication provider. If this is null, then there is no authentication. | null |
dbStorage_writeCacheMaxSizeMb | 写入缓存的大小。 所使用的内存是 JVM 直接内存。 写入缓存用于在冲入条目日志之前对条目进行缓冲。 为了获得良好的性能,它应该足够大,以便在冲刷间隔内容纳足够多的条目。 | 25% 的直接内存 |
dbStorage_readAheadCacheMaxSizeMb | 读取缓存的大小。 所使用的内存是 JVM 直接内存。 每当发生缓存缺失时,这个读缓存就会被预先填满,进行读预处理。 默认情况下,它被分配到可用直接内存的25%。 | N/A |
dbStorage_readAheadCacheBatchSize | 当读缓存 miss 发生后预装填的条目数量 | 1000 |
dbStorage_rocksDB_blockCacheSize | RocksDB 的 block-cache 大小。 为了获得最佳性能,这个缓存应该大到足以容纳索引数据库的绝大数部分,在某些情况下可以达到~2GB。 默认情况下,它使用10%的直接内存。 | N/A |
dbStorage_rocksDB_writeBufferSizeMB | 64 | |
dbStorage_rocksDB_sstSizeInMB | 64 | |
dbStorage_rocksDB_blockSize | 65536 | |
dbStorage_rocksDB_bloomFilterBitsPerKey | 10 | |
dbStorage_rocksDB_numLevels | -1 | |
dbStorage_rocksDB_numFilesInLevel0 | 4 | |
dbStorage_rocksDB_maxSizeInLevel1MB | 256 |
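Putting a few of the options above together, a minimal sketch of a `conf/bookkeeper.conf` tuning fragment could look like the following; the directory paths and values are examples under the assumption of a dedicated journal device, not recommendations:

```properties
# Keep the journal and the ledger storage on different devices to avoid
# mixing sequential journal writes with random ledger I/O (example paths).
journalDirectories=/mnt/journal/bk-journal
ledgerDirectories=/mnt/ledgers/bk1-data,/mnt/ledgers/bk2-data

# Turn the bookie read-only instead of shutting it down when ledger disks fill up.
readOnlyModeEnabled=true
diskUsageThreshold=0.95

# Compaction tuning: disable minor compaction, keep daily major compaction.
minorCompactionThreshold=0
majorCompactionThreshold=0.5
majorCompactionInterval=86400
```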
Broker
The Pulsar broker is responsible for handling messages published by producers, dispatching messages to consumers, replicating data between clusters, and so on.
Name | Description | Default |
---|---|---|
advertisedListeners | Specify multiple advertised listeners for the broker. The format is `<listener_name>:pulsar://<host>:<port>`. If there are multiple listeners, separate them with commas. **Note**: do not use this configuration with `advertisedAddress` and `brokerServicePort`. If the value of this configuration is empty, the broker uses `advertisedAddress` and `brokerServicePort` | / |
internalListenerName | Specify the internal listener name for the broker. **Note**: the listener name must be contained in `advertisedListeners`. If the value of this configuration is empty, the broker uses the first listener as the internal listener. | / |
authenticateOriginalAuthData | If this flag is set to `true`, the broker authenticates the original Auth data; else it just accepts the originalPrincipal and authorizes it (if required). | false |
enablePersistentTopics | Whether persistent topics are enabled on the broker | true |
enableNonPersistentTopics | Whether non-persistent topics are enabled on the broker | true |
functionsWorkerEnabled | Whether the Pulsar Functions worker service is enabled in the broker | false |
exposePublisherStats | Whether to enable topic level metrics. | true |
statsUpdateFrequencyInSecs | | 60 |
statsUpdateInitialDelayInSecs | | 60 |
zookeeperServers | Zookeeper quorum connection string | |
zooKeeperCacheExpirySeconds | ZooKeeper cache expiry time in seconds | 300 |
configurationStoreServers | Configuration store connection string (as a comma-separated list) | |
brokerServicePort | Broker data port | 6650 |
brokerServicePortTls | Broker data port for TLS | 6651 |
webServicePort | Port to use to serve HTTP requests | 8080 |
webServicePortTls | Port to use to serve HTTPS requests | 8443 |
webSocketServiceEnabled | Enable the WebSocket API service in the broker | false |
webSocketNumIoThreads | The number of IO threads in the Pulsar Client used in the WebSocket proxy. | 8 |
webSocketConnectionsPerBroker | The number of connections per broker in the Pulsar Client used in the WebSocket proxy. | 8 |
webSocketSessionIdleTimeoutMillis | Time in milliseconds after which an idle WebSocket session times out. | 300000 |
webSocketMaxTextFrameSize | The maximum size of a text message during parsing in the WebSocket proxy. | 1048576 |
exposeTopicLevelMetricsInPrometheus | Whether to enable topic level metrics. | true |
exposeConsumerLevelMetricsInPrometheus | Whether to enable consumer level metrics. | false |
exposeConsumerLevelMetricsInPrometheus-placeholder-removed | | |
jvmGCMetricsLoggerClassName | Classname of a pluggable JVM GC metrics logger that can log GC-specific metrics. | N/A |
bindAddress | Hostname or IP address the service binds on; the default is 0.0.0.0. | 0.0.0.0 |
advertisedAddress | Hostname or IP address the service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostName()` is used. | |
clusterName | Name of the cluster to which this broker belongs | |
maxTenants | The maximum number of tenants that can be created in each Pulsar cluster. When the number of tenants reaches this threshold, the broker rejects requests to create new tenants. The default value 0 disables the check. | 0 |
maxNamespacesPerTenant | The maximum number of namespaces that can be created in each tenant. When the number of namespaces reaches this threshold, the broker rejects requests to create a new namespace. The default value 0 disables the check. | 0 |
brokerDeduplicationEnabled | Sets the default behavior for message deduplication in the broker. If enabled, the broker rejects messages that were already stored in the topic. This setting can be overridden on a per-namespace basis. | false |
brokerDeduplicationMaxNumberOfProducers | The maximum number of producers for which information is stored for deduplication purposes. | 10000 |
brokerDeduplicationEntriesInterval | The number of entries after which a deduplication informational snapshot is taken. A larger interval leads to fewer snapshots being taken, though this also lengthens the topic recovery time (the time required for entries published after the snapshot to be replayed). | 1000 |
brokerDeduplicationSnapshotIntervalSeconds | The time period after which a deduplication informational snapshot is taken. It runs simultaneously with `brokerDeduplicationEntriesInterval`. | 120 |
brokerDeduplicationProducerInactivityTimeoutMinutes | The time of inactivity (in minutes) after which the broker discards deduplication information related to a disconnected producer. | 360 |
dispatchThrottlingRatePerReplicatorInMsg | The default messages-per-second dispatch throttling limit for every replicator in replication. A value of `0` disables replication message dispatch throttling | 0 |
dispatchThrottlingRatePerReplicatorInByte | The default bytes-per-second dispatch throttling limit for every replicator in replication. A value of `0` disables replication message-byte dispatch throttling | 0 |
zooKeeperSessionTimeoutMillis | Zookeeper session timeout in milliseconds | 30000 |
brokerShutdownTimeoutMs | How long to wait for the broker to gracefully shut down. After this time, the process is killed directly. | 60000 |
skipBrokerShutdownOnOOM | Flag to skip broker shutdown when the broker handles an out-of-memory error. | false |
backlogQuotaCheckEnabled | Enable the backlog quota check. Enforces action on a topic once the quota is reached. | true |
backlogQuotaCheckIntervalInSeconds | How often to check for topics that have reached the quota | 60 |
backlogQuotaDefaultLimitGB | The default per-topic backlog quota limit. Less than 0 means no limitation. By default, it is -1. | -1 |
backlogQuotaDefaultRetentionPolicy | The default backlog quota retention policy. By default, it is `producer_request_hold`. <li>`producer_request_hold`: hold the producer's send request until resources become available (or the hold times out).</li> <li>`producer_exception`: throw `javax.jms.ResourceAllocationException` to the producer.</li> <li>`consumer_backlog_eviction`: evict the oldest messages from the slowest consumer's backlog.</li> | producer_request_hold |
allowAutoTopicCreation | Enable topic auto-creation when a new producer or consumer connects | true |
allowAutoTopicCreationType | The type of topic that is allowed to be automatically created (partitioned/non-partitioned) | non-partitioned |
allowAutoSubscriptionCreation | Enable subscription auto-creation when a new consumer connects | true |
defaultNumPartitions | The number of partitions with which a partitioned topic is automatically created if `allowAutoTopicCreationType` is partitioned | 1 |
brokerDeleteInactiveTopicsEnabled | Enable the deletion of inactive topics | true |
brokerDeleteInactiveTopicsFrequencySeconds | How often to check for inactive topics | 60 |
brokerDeleteInactiveTopicsMode | Set the mode to delete inactive topics. <li>`delete_when_no_subscriptions`: delete topics that have no subscriptions and no active producers.</li> <li>`delete_when_subscriptions_caught_up`: delete topics whose subscriptions have no backlogs and which have no active producers or consumers.</li> | `delete_when_no_subscriptions` |
brokerDeleteInactiveTopicsMaxInactiveDurationSeconds | Set the maximum duration for inactive topics. If it is not specified, the `brokerDeleteInactiveTopicsFrequencySeconds` parameter is adopted. | N/A |
forceDeleteTenantAllowed | Enable you to delete a tenant forcefully. | false |
forceDeleteNamespaceAllowed | Enable you to delete a namespace forcefully. | false |
messageExpiryCheckIntervalInMinutes | The frequency of proactively checking and purging expired messages. | 5 |
brokerServiceCompactionMonitorIntervalInSeconds | Interval between checks to determine whether topics with compaction policies need compaction. | 60 |
brokerServiceCompactionThresholdInBytes | If the estimated backlog size is greater than this threshold, compaction is triggered. Setting this threshold to 0 disables the compaction check. | N/A |
delayedDeliveryEnabled | Whether to enable delayed delivery for messages. If disabled, messages are dispatched immediately and there is no tracking overhead. | true |
delayedDeliveryTickTimeMillis | Controls the tick time for retrying on delayed delivery, affecting the accuracy of the delivery time compared to the scheduled time. By default, it is 1 second. | 1000 |
activeConsumerFailoverDelayTimeMillis | How long to delay rewinding the cursor and dispatching messages when the active consumer changes. | 1000 |
clientLibraryVersionCheckEnabled | Enable checking the minimum allowed client library version | false |
clientLibraryVersionCheckAllowUnversioned | Allow client libraries with no version information | true |
statusFilePath | Path for the file used to determine the rotation status for the broker when responding to service discovery health checks | |
preferLaterVersions | If true (and ModularLoadManagerImpl is being used), the load manager will attempt to use only brokers running the latest software version (to minimize the impact on bundles) | false |
maxNumPartitionsPerPartitionedTopic | Max number of partitions per partitioned topic. Use 0 or a negative number to disable the check. | |
maxSubscriptionsPerTopic | Maximum number of subscriptions allowed on a topic. Once this limit is reached, the broker rejects new subscriptions until the number of subscriptions decreases. When the value is set to 0, the limit check is disabled. | 0 |
maxProducersPerTopic | Max number of producers allowed to connect to a topic. Once this limit is reached, the broker rejects new producers until the number of connected producers decreases. When the value is set to 0, the limit check is disabled. | 0 |
maxConsumersPerTopic | Max number of consumers allowed to connect to a topic. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 |
maxConsumersPerSubscription | Max number of consumers allowed to connect to a subscription. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit check is disabled. | 0 |
tlsEnabled | Deprecated - use `webServicePortTls` and `brokerServicePortTls` instead. | false |
tlsCertificateFilePath | Path for the TLS certificate file | |
tlsKeyFilePath | Path for the TLS private key file | |
tlsTrustCertsFilePath | Path for the trusted TLS certificate file. This cert is used to verify that any certs presented by connecting clients are signed by a certificate authority. If this verification fails, then the certs are untrusted and the connections are dropped. | |
tlsAllowInsecureConnection | Accept untrusted TLS certificates from clients. If it is set to true, a client with a cert that cannot be verified with the 'tlsTrustCertsFilePath' cert is allowed to connect to the server, though the cert is not used for client authentication. | false |
tlsProtocols | Specify the TLS protocols the broker uses to negotiate during the TLS handshake. Multiple values can be specified, separated by commas. Example: `TLSv1.3`, `TLSv1.2` | |
tlsCiphers | Specify the TLS cipher the broker uses to negotiate during the TLS handshake. Multiple values can be specified, separated by commas. Example: `TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256` | |
tlsEnabledWithKeyStore | Enable TLS with KeyStore type configuration in the broker | false |
tlsProvider | TLS Provider for KeyStore type | |
tlsKeyStoreType | TLS KeyStore type configuration in the broker: JKS, PKCS12 | JKS |
tlsKeyStore | TLS KeyStore path in the broker | |
tlsKeyStorePassword | TLS KeyStore password for the broker | |
brokerClientTlsEnabledWithKeyStore | Whether the internal client uses KeyStore type to authenticate with Pulsar brokers | false |
brokerClientSslProvider | The TLS Provider used by the internal client to authenticate with other Pulsar brokers | |
brokerClientTlsTrustStoreType | TLS TrustStore type configuration for the internal client: JKS, PKCS12, used by the internal client to authenticate with Pulsar brokers | JKS |
brokerClientTlsTrustStore | TLS TrustStore path for the internal client, used by the internal client to authenticate with Pulsar brokers | |
brokerClientTlsTrustStorePassword | TLS TrustStore password for the internal client, used by the internal client to authenticate with Pulsar brokers | |
brokerClientTlsCiphers | Specify the TLS cipher the internal client uses to negotiate during the TLS handshake (a comma-separated list of ciphers), e.g. [TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256] | |
brokerClientTlsProtocols | Specify the TLS protocols the broker uses to negotiate during the TLS handshake (a comma-separated list of protocol names), e.g. `TLSv1.3`, `TLSv1.2` | |
ttlDurationDefaultInSeconds | The default Time to Live (TTL) for namespaces if the TTL is not configured in namespace policies. When the value is set to `0`, TTL is disabled. By default, TTL is disabled. | 0 |
tokenSecretKey | Configure the secret key to be used to validate auth tokens. The key can be specified like: `tokenSecretKey=data:;base64,xxxxxxxxx` or `tokenSecretKey=file:///my/secret.key`. Note: the key file must be DER-encoded. | |
tokenPublicKey | Configure the public key to be used to validate auth tokens. The key can be specified like: `tokenPublicKey=data:;base64,xxxxxxxxx` or `tokenPublicKey=file:///my/secret.key`. Note: the key file must be DER-encoded. | |
tokenPublicAlg | Configure the algorithm to be used to validate auth tokens. This can be any of the asymmetric algorithms supported by Java JWT (https://github.com/jwtk/jjwt#signature-algorithms-keys) | RS256 |
tokenAuthClaim | Specify which of the token's claims will be used as the authentication "principal" or "role". If left blank, the default "sub" claim is used. | |
tokenAudienceClaim | The token audience "claim" name, e.g. "aud", used to get the audience from the token. If not set, the audience is not verified. | |
tokenAudience | The token audience for this broker. The field `tokenAudienceClaim` of a valid token needs to contain this. | |
maxUnackedMessagesPerConsumer | Max number of unacknowledged messages allowed to be received by consumers on a shared subscription. The broker stops sending messages to a consumer once this limit is reached, until the consumer starts acknowledging messages back. Use 0 to disable the unacked message limit check, so a consumer can receive messages without any restriction. | |
maxUnackedMessagesPerSubscription | Max number of unacknowledged messages allowed per shared subscription. The broker stops dispatching messages to all consumers of the subscription once this limit is reached, until consumers start acknowledging messages back and the unack count drops to limit/2. Use 0 to disable the unackedMessage-limit check, so the dispatcher can dispatch messages without any restriction. | 200000 |
subscriptionRedeliveryTrackerEnabled | Enable subscription message redelivery tracking | true |
subscriptionExpirationTimeMinutes | How long to wait before deleting inactive subscriptions since the last consumption. <br><br>Setting this configuration to a value greater than 0 deletes inactive subscriptions automatically. <br>Setting this configuration to 0 does not delete inactive subscriptions automatically. <br><br>Since this configuration takes effect on all topics, if there is even one topic whose subscriptions should not be deleted automatically, you need to set it to 0. Instead, you can set a subscription expiration time for each namespace using the `pulsar-admin namespaces set-subscription-expiration-time options` command. | 0 |
maxConcurrentLookupRequest | Max number of concurrent lookup requests the broker allows, to throttle heavy incoming lookup traffic | 50000 |
maxConcurrentTopicLoadRequest | Max number of concurrent topic loading requests the broker allows, to control the number of zk-operations | 5000 |
authenticationEnabled | Enable authentication | false |
authenticationProviders | Authentication provider name list, a comma-separated list of class names | |
authenticationRefreshCheckSeconds | Interval of time for checking for expired authentication credentials | 60 |
authorizationEnabled | Enforce authorization | false |
superUserRoles | Role names that are treated as "super-users", meaning they are able to perform all admin operations and publish/consume on all topics | |
brokerClientAuthenticationPlugin | Authentication settings of the broker itself. Used when the broker connects to other brokers, either in the same or other clusters | |
brokerClientAuthenticationParameters | | |
athenzDomainNames | Supported Athenz provider domain names (comma separated) for authentication | |
exposePreciseBacklogInPrometheus | Enable exposing the precise backlog stats; set false to use the published counter and consumed counter to calculate instead, which is more efficient but may be inaccurate. | false |
schemaRegistryStorageClassName | The schema storage implementation used by this broker. | org.apache.pulsar.broker.service.schema.BookkeeperSchemaStorageFactory |
isSchemaValidationEnforced | Enforce schema validation in the following case: if a producer without a schema attempts to produce to a topic with a schema, the producer fails to connect. PLEASE be careful using this, since non-Java clients don't support schema. If this setting is enabled, non-Java clients fail to produce. | false |
topicFencingTimeoutSeconds | If a topic remains fenced for a certain time period (in seconds), it is closed forcefully. If set to 0 or a negative number, fenced topics are not closed. | 0 |
offloadersDirectory | The directory for all offloader implementations. | ./offloaders |
bookkeeperMetadataServiceUri | Metadata service URI that bookkeeper uses to load the corresponding metadata driver and resolve its metadata service location. This value can be fetched using the `bookkeeper shell whatisinstanceid` command in a BookKeeper cluster. For example: `zk+hierarchical://localhost:2181/ledgers`. The metadata service URI list can also be semicolon-separated values like: `zk+hierarchical://zk1:2181;zk2:2181;zk3:2181/ledgers` | |
bookkeeperClientAuthenticationPlugin | Authentication plugin to be used when connecting to bookies | |
bookkeeperClientAuthenticationParametersName | BookKeeper auth plugin implementation-specific parameter name and values | |
bookkeeperClientAuthenticationParameters | | |
bookkeeperClientNumWorkerThreads | Number of BookKeeper client worker threads. The default is `Runtime.getRuntime().availableProcessors()` | |
bookkeeperClientTimeoutInSeconds | Timeout for BK add/read operations | 30 |
bookkeeperClientSpeculativeReadTimeoutInMillis | Speculative reads are initiated if a read request doesn't complete within a certain time. A value of 0 disables speculative reads | 0 |
bookkeeperNumberOfChannelsPerBookie | Number of channels per bookie | 16 |
bookkeeperClientHealthCheckEnabled | Enable bookie health check. Bookies that have more than the configured number of failures within the interval are quarantined for some time. During this period, new ledgers won't be created on these bookies | true |
bookkeeperClientHealthCheckIntervalSeconds | | 60 |
bookkeeperClientHealthCheckErrorThresholdPerInterval | | 5 |
bookkeeperClientHealthCheckQuarantineTimeInSeconds | | 1800 |
bookkeeperClientRackawarePolicyEnabled | Enable rack-aware bookie selection policy. BK will choose bookies from different racks when forming a new bookie ensemble | true |
bookkeeperClientRegionawarePolicyEnabled | Enable region-aware bookie selection policy. BK will choose bookies from different regions and racks when forming a new bookie ensemble. If enabled, the value of bookkeeperClientRackawarePolicyEnabled is ignored | false |
bookkeeperClientMinNumRacksPerWriteQuorum | Minimum number of racks per write quorum. The BK rack-aware bookie selection policy tries to get bookies from at least 'bookkeeperClientMinNumRacksPerWriteQuorum' racks for a write quorum. | 2 |
bookkeeperClientEnforceMinNumRacksPerWriteQuorum | Enforce the rack-aware bookie selection policy to pick bookies from 'bookkeeperClientMinNumRacksPerWriteQuorum' racks for a write quorum. If BK can't find such bookies, it throws BKNotEnoughBookiesException instead of picking a random one. | false |
bookkeeperClientReorderReadSequenceEnabled | Enable/disable reordering the read sequence when reading entries. | false |
bookkeeperClientIsolationGroups | Enable bookie isolation by specifying a list of bookie groups to choose from. Any bookie outside the specified groups is not used by the broker | |
bookkeeperClientSecondaryIsolationGroups | Enable a bookie secondary-isolation group if bookkeeperClientIsolationGroups doesn't have enough bookies available. | |
bookkeeperClientMinAvailableBookiesInIsolationGroups | Minimum number of bookies that should be available as part of bookkeeperClientIsolationGroups, else the broker includes bookkeeperClientSecondaryIsolationGroups bookies in the isolated list. | |
bookkeeperClientGetBookieInfoIntervalSeconds | Set the interval to periodically check bookie info | 86400 |
bookkeeperClientGetBookieInfoRetryIntervalSeconds | Set the interval to retry a failed bookie info lookup | 60 |
bookkeeperEnableStickyReads | Enable/disable having read operations for a ledger be sticky to a single bookie. If this flag is enabled, the client uses one single bookie (by preference) to read all entries for a ledger. | true |
managedLedgerDefaultEnsembleSize | Number of bookies to use when creating a ledger | 2 |
managedLedgerDefaultWriteQuorum | Number of copies to store for each message | 2 |
managedLedgerDefaultAckQuorum | Number of guaranteed copies (acks to wait for before a write is complete) | 2 |
managedLedgerCacheSizeMB | Amount of memory to use for caching data payloads in managed ledgers. This memory is allocated from JVM direct memory and is shared across all the topics running on the same broker. By default, uses 1/5th of the available direct memory | |
managedLedgerCacheCopyEntries | Whether to make a copy of the entry payloads when inserting into the cache | false |
managedLedgerCacheEvictionWatermark | Threshold to which to bring down the cache level when eviction is triggered | 0.9 |
managedLedgerCacheEvictionFrequency | Configure the cache eviction frequency for the managed ledger cache (evictions/sec) | 100.0 |
managedLedgerCacheEvictionTimeThresholdMillis | All entries that have stayed in the cache for more than the configured time are evicted | 1000 |
managedLedgerCursorBackloggedThreshold | Configure the threshold (in number of entries) from which a cursor should be considered 'backlogged' and thus set as inactive. | 1000 |
managedLedgerDefaultMarkDeleteRateLimit | Rate limit the amount of writes per second generated by consumers acking messages | 1.0 |
managedLedgerMaxEntriesPerLedger | Max number of entries to append to a ledger before triggering a rollover. A ledger rollover is triggered after the min rollover time has passed and one of the following conditions is true: <br>- the max rollover time has been reached <br>- the max entries have been written to the ledger <br>- the max ledger size has been written to the ledger | 50000 |
managedLedgerMinLedgerRolloverTimeMinutes | Minimum time between ledger rollovers for a topic | 10 |
managedLedgerMaxLedgerRolloverTimeMinutes | Maximum time before forcing a ledger rollover for a topic | 240 |
managedLedgerCursorMaxEntriesPerLedger | Max number of entries to append to a cursor ledger | 50000 |
managedLedgerCursorRolloverTimeInSeconds | Max time before triggering a rollover on a cursor ledger | 14400 |
managedLedgerMaxUnackedRangesToPersist | Max number of "acknowledgment holes" that are going to be persistently stored. When acknowledging out of order, a consumer leaves holes that are supposed to be quickly filled by acking all the messages. The information about which messages are acknowledged is persisted by compressing it into "ranges" of messages that were acknowledged. After the max number of ranges is reached, the information is only tracked in memory and messages are redelivered in case of crashes. | 1000 |
autoSkipNonRecoverableData | Skip reading non-recoverable/unreadable data ledgers in a managed ledger's list. This helps when data ledgers get corrupted in bookkeeper and the managed cursor is stuck at that ledger. | false |
loadBalancerEnabled | Enable load balancer | true |
loadBalancerPlacementStrategy | Strategy to assign a new bundle | weightedRandomSelection |
loadBalancerReportUpdateThresholdPercentage | Percentage of change to trigger a load report update | 10 |
loadBalancerReportUpdateMaxIntervalMinutes | Maximum interval to update the load report | 15 |
loadBalancerHostUsageCheckIntervalMinutes | Frequency of report to collect | 1 |
loadBalancerSheddingIntervalMinutes | Load shedding interval. The broker periodically checks whether some traffic should be offloaded from over-loaded brokers to under-loaded brokers | 30 |
loadBalancerSheddingGracePeriodMinutes | Prevent the same topics from being shed and moved to other brokers more than once within this timeframe | 30 |
loadBalancerBrokerMaxTopics | Usage threshold to allocate the max number of topics to a broker | 50000 |
loadBalancerBrokerUnderloadedThresholdPercentage | Usage threshold to determine a broker as under-loaded | 1 |
loadBalancerBrokerOverloadedThresholdPercentage | Usage threshold to determine a broker as over-loaded | 85 |
loadBalancerResourceQuotaUpdateIntervalMinutes | Interval to update the namespace bundle resource quota | 15 |
loadBalancerBrokerComfortLoadLevelPercentage | Usage threshold to determine a broker is having just the right level of load | 65 |
loadBalancerAutoBundleSplitEnabled | Enable/disable namespace bundle auto split | false |
loadBalancerNamespaceBundleMaxTopics | Maximum number of topics in a bundle, otherwise a bundle split is triggered | 1000 |
loadBalancerNamespaceBundleMaxSessions | Maximum number of sessions (producers + consumers) in a bundle, otherwise a bundle split is triggered | 1000 |
loadBalancerNamespaceBundleMaxMsgRate | Maximum msgRate (in + out) in a bundle, otherwise a bundle split is triggered | 1000 |
loadBalancerNamespaceBundleMaxBandwidthMbytes | Maximum bandwidth (in + out) in a bundle, otherwise a bundle split is triggered | 100 |
loadBalancerNamespaceMaximumBundles | Maximum number of bundles in a namespace | 128 |
replicationMetricsEnabled | Enable replication metrics | true |
replicationConnectionsPerBroker | Max number of connections to open for each broker in a remote cluster. More connections host-to-host lead to better throughput over high-latency links. | 16 |
replicationProducerQueueSize | Replicator producer queue size | 1000 |
replicatorPrefix | Replicator prefix used for the replicator producer name and cursor name | pulsar.repl |
replicationTlsEnabled | Enable TLS when talking with other clusters to replicate messages | false |
brokerServicePurgeInactiveFrequencyInSeconds | Deprecated. Use `brokerDeleteInactiveTopicsFrequencySeconds` | |
.|60| |transactionCoordinatorEnabled|Whether to enable transaction coordinator in broker.|true| |transactionMetadataStoreProviderClassName| |org.apache.pulsar.transaction.coordinator.impl.InMemTransactionMetadataStoreProvider| |defaultRetentionTimeInMinutes| Default message retention time |0| |defaultRetentionSizeInMB| Default retention size |0| |keepAliveIntervalSeconds| How often to check whether the connections are still alive |30| |bootstrapNamespaces| The bootstrap name. | N/A | |loadManagerClassName| Name of load manager to use |org.apache.pulsar.broker.loadbalance.impl.SimpleLoadManagerImpl| |supportedNamespaceBundleSplitAlgorithms| Supported algorithms name for namespace bundle split |[range_equally_divide,topic_count_equally_divide]| |defaultNamespaceBundleSplitAlgorithm| Default algorithm name for namespace bundle split |range_equally_divide| |managedLedgerOffloadDriver| The directory for all the offloader implementations offloadersDirectory=./offloaders
. Driver to use to offload old data to long term storage (Possible values: S3, aws-s3, google-cloud-storage). When using google-cloud-storage, Make sure both Google Cloud Storage and Google Cloud Storage JSON API are enabled for the project (check from Developers Console -> Api&auth -> APIs). || |managedLedgerOffloadMaxThreads| Maximum number of thread pool threads for ledger offloading |2| |managedLedgerOffloadPrefetchRounds|The maximum prefetch rounds for ledger reading for offloading.|1| |managedLedgerUnackedRangesOpenCacheSetEnabled| Use Open Range-Set to cache unacknowledged messages |true| |managedLedgerOffloadDeletionLagMs|Delay between a ledger being successfully offloaded to long term storage and the ledger being deleted from bookkeeper | 14400000| |managedLedgerOffloadAutoTriggerSizeThresholdBytes|The number of bytes before triggering automatic offload to long term storage |-1 (disabled)| |s3ManagedLedgerOffloadRegion| For Amazon S3 ledger offload, AWS region || |s3ManagedLedgerOffloadBucket| For Amazon S3 ledger offload, Bucket to place offloaded ledger into || |s3ManagedLedgerOffloadServiceEndpoint| For Amazon S3 ledger offload, Alternative endpoint to connect to (useful for testing) || |s3ManagedLedgerOffloadMaxBlockSizeInBytes| For Amazon S3 ledger offload, Max block size in bytes. (64MB by default, 5MB minimum) |67108864| |s3ManagedLedgerOffloadReadBufferSizeInBytes| For Amazon S3 ledger offload, Read buffer size in bytes (1MB by default) |1048576| |gcsManagedLedgerOffloadRegion|For Google Cloud Storage ledger offload, region where offload bucket is located. Go to this page for more details: https://cloud.google.com/storage/docs/bucket-locations .|N/A| |gcsManagedLedgerOffloadBucket|For Google Cloud Storage ledger offload, Bucket to place offloaded ledger into.|N/A| |gcsManagedLedgerOffloadMaxBlockSizeInBytes|For Google Cloud Storage ledger offload, the maximum block size in bytes. 
(64MB by default, 5MB minimum)|67108864| |gcsManagedLedgerOffloadReadBufferSizeInBytes|For Google Cloud Storage ledger offload, Read buffer size in bytes. (1MB by default)|1048576| |gcsManagedLedgerOffloadServiceAccountKeyFile|For Google Cloud Storage, path to json file containing service account credentials. For more details, see the “Service Accounts” section of https://support.google.com/googleapi/answer/6158849 .|N/A| |fileSystemProfilePath|For File System Storage, file system profile path.|../conf/filesystem_offload_core_site.xml| |fileSystemURI|For File System Storage, file system uri.|N/A| |s3ManagedLedgerOffloadRole| For Amazon S3 ledger offload, provide a role to assume before writing to s3 || |s3ManagedLedgerOffloadRoleSessionName| For Amazon S3 ledger offload, provide a role session name when using a role |pulsar-s3-offload| | acknowledgmentAtBatchIndexLevelEnabled | Enable or disable the batch index acknowledgement. | false | |enableReplicatedSubscriptions|Whether to enable tracking of replicated subscriptions state across clusters.|true| |replicatedSubscriptionsSnapshotFrequencyMillis|The frequency of snapshots for replicated subscriptions tracking.|1000| |replicatedSubscriptionsSnapshotTimeoutSeconds|The timeout for building a consistent snapshot for tracking replicated subscriptions state.|30| |replicatedSubscriptionsSnapshotMaxCachedPerSubscription|The maximum number of snapshot to be cached per subscription.|10| |maxMessagePublishBufferSizeInMB|The maximum memory size for a broker to handle messages that are sent by producers. If the processing message size exceeds this value, the broker stops reading data from the connection. The processing messages refer to the messages that are sent to the broker but the broker has not sent response to the client. Usually the messages are waiting to be written to bookies. It is shared across all the topics running in the same broker. The value -1
disables the memory limitation. By default, it is 50% of direct memory.|N/A| |messagePublishBufferCheckIntervalInMillis|Interval between checks to see if message publish buffer size exceeds the maximum. Use 0
or negative number to disable the max publish buffer limiting.|100| |retentionCheckIntervalInSeconds|Check between intervals to see if consumed ledgers need to be trimmed. Use 0 or negative number to disable the check.|120| | maxMessageSize | Set the maximum size of a message. | 5242880 | | preciseTopicPublishRateLimiterEnable | Enable precise topic publish rate limiting. | false | | lazyCursorRecovery | Whether to recover cursors lazily when trying to recover a managed ledger backing a persistent topic. It can improve write availability of topics. The caveat is now when recovered ledger is ready to write we’re not sure if all old consumers’ last mark delete position(ack position) can be recovered or not. So user can make the trade off or have custom logic in application to checkpoint consumer state.| false |
|haProxyProtocolEnabled | Enable or disable the HAProxy protocol. |false| | maxTopicsPerNamespace | The maximum number of persistent topics that can be created in the namespace. When the number of topics reaches this threshold, the broker rejects the request of creating a new topic, including the auto-created topics by the producer or consumer, until the number of connected consumers decreases. The default value 0 disables the check. | 0 | |subscriptionTypesEnabled| Enable all subscription types, which are exclusive, shared, failover, and key_shared. | Exclusive, Shared, Failover, Key_Shared | / managedLedgerInfoCompressionType / ManagedLedgerInfo compression type, option values (NONE, LZ4, ZLIB, ZSTD, SNAPPY), if value is NONE or invalid, the managedLedgerInfo will not be compressed. Notice, after enable this configuration, if you want to degrade broker, you should change the configuration to NONE
and make sure all ledger metadata are saved without compression. | None |
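As a sketch of how the rollover and offload parameters above work together, the following broker.conf fragment rolls ledgers over and then moves aged ledgers to Amazon S3; the region and bucket values are illustrative placeholders, not defaults:

```properties
# Sketch of a broker.conf fragment (region and bucket are placeholders).
# Roll over ledgers after 50k entries or 4 hours, whichever comes first,
# once the 10-minute minimum rollover time has passed.
managedLedgerMaxEntriesPerLedger=50000
managedLedgerMinLedgerRolloverTimeMinutes=10
managedLedgerMaxLedgerRolloverTimeMinutes=240

# Offload closed ledgers to Amazon S3 once a topic exceeds 1 GB, and
# delete them from BookKeeper 4 hours after a successful offload.
managedLedgerOffloadDriver=aws-s3
s3ManagedLedgerOffloadRegion=eu-west-3
s3ManagedLedgerOffloadBucket=pulsar-topic-offload
managedLedgerOffloadAutoTriggerSizeThresholdBytes=1073741824
managedLedgerOffloadDeletionLagMs=14400000
```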
Client
You can use the pulsar-client CLI tool to publish messages to and consume messages from Pulsar topics. You can use this tool in place of a client library.
Name | Description | Default |
---|---|---|
webServiceUrl | The web URL for the cluster. | http://localhost:8080/ |
brokerServiceUrl | The Pulsar protocol URL for the cluster. | pulsar://localhost:6650/ |
authPlugin | The authentication plugin. | |
authParams | The authentication parameters for the cluster, as a comma-separated string. | |
useTls | Whether to enforce TLS authentication in the cluster. | false |
tlsAllowInsecureConnection | Allow TLS connections to servers whose certificate cannot be verified to have been signed by a trusted certificate authority. | false |
tlsEnableHostnameVerification | Whether the server hostname must match the common name of the certificate that is used by the server. | false |
tlsTrustCertsFilePath | Path for the trusted TLS certificate file. | |
useKeyStoreTls | Enable TLS with KeyStore type configuration in the broker. | false |
tlsTrustStoreType | TLS TrustStore type configuration. | JKS |
tlsTrustStore | TLS TrustStore path. | |
tlsTrustStorePassword | TLS TrustStore password. | |
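Putting the client parameters together, a client.conf for a TLS-enabled cluster might look like the following sketch; the hostname and certificate path are illustrative placeholders:

```properties
# Sketch of a client.conf for a TLS-enabled cluster; hostname and file
# path are placeholders for your own deployment.
webServiceUrl=https://pulsar.example.com:8443/
brokerServiceUrl=pulsar+ssl://pulsar.example.com:6651/
useTls=true
tlsAllowInsecureConnection=false
tlsEnableHostnameVerification=true
tlsTrustCertsFilePath=/etc/pulsar/certs/ca.cert.pem
```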
Service discovery
Name | Description | Default |
---|---|---|
zookeeperServers | Zookeeper quorum connection string (comma-separated) | |
zooKeeperCacheExpirySeconds | ZooKeeper cache expiry time in seconds | 300 |
configurationStoreServers | Configuration store connection string (as a comma-separated list) | |
zookeeperSessionTimeoutMs | ZooKeeper session timeout | 30000 |
servicePort | Port to use to serve binary-proto requests | 6650 |
servicePortTls | Port to use to serve binary-proto-tls requests | 6651 |
webServicePort | Port that the discovery service listens on | 8080 |
webServicePortTls | Port to use to serve HTTPS requests | 8443 |
bindOnLocalhost | Control whether to bind directly on localhost rather than on normal hostname | false |
authenticationEnabled | Enable authentication | false |
authenticationProviders | Authentication provider name list, which is comma separated list of class names (comma-separated) | |
authorizationEnabled | Enforce authorization | false |
superUserRoles | Role names that are treated as “super-user”, meaning they will be able to do all admin operations and publish/consume from all topics (comma-separated) | |
tlsEnabled | Enable TLS | false |
tlsCertificateFilePath | Path for the TLS certificate file. | |
tlsKeyFilePath | Path for the TLS private key file. | |
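As a minimal sketch, a discovery service configuration wiring the service to a three-node ZooKeeper quorum could look like this; the ZooKeeper host names are placeholders:

```properties
# Sketch of a discovery service configuration; ZooKeeper hosts are
# placeholders for your own quorum.
zookeeperServers=zk1:2181,zk2:2181,zk3:2181
configurationStoreServers=zk1:2181,zk2:2181,zk3:2181
servicePort=6650
webServicePort=8080
```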
Log4j
You can set the log level and configuration in the log4j2.yaml file. The following logging configuration parameters are available:
Name | Default |
---|---|
pulsar.root.logger | WARN,CONSOLE |
pulsar.log.dir | logs |
pulsar.log.file | pulsar.log |
log4j.rootLogger | ${pulsar.root.logger} |
log4j.appender.CONSOLE | org.apache.log4j.ConsoleAppender |
log4j.appender.CONSOLE.Threshold | DEBUG |
log4j.appender.CONSOLE.layout | org.apache.log4j.PatternLayout |
log4j.appender.CONSOLE.layout.ConversionPattern | %d{ISO8601} - %-5p - [%t:%C{1}@%L] - %m%n |
log4j.appender.ROLLINGFILE | org.apache.log4j.DailyRollingFileAppender |
log4j.appender.ROLLINGFILE.Threshold | DEBUG |
log4j.appender.ROLLINGFILE.File | ${pulsar.log.dir}/${pulsar.log.file} |
log4j.appender.ROLLINGFILE.layout | org.apache.log4j.PatternLayout |
log4j.appender.ROLLINGFILE.layout.ConversionPattern | %d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n |
log4j.appender.TRACEFILE | org.apache.log4j.FileAppender |
log4j.appender.TRACEFILE.Threshold | TRACE |
log4j.appender.TRACEFILE.File | pulsar-trace.log |
log4j.appender.TRACEFILE.layout | org.apache.log4j.PatternLayout |
log4j.appender.TRACEFILE.layout.ConversionPattern | %d{ISO8601} - %-5p [%t:%C{1}@%L][%x] - %m%n |
Note: the "topic" in the log4j2 appender is configurable. - If you want to append all logs to a single topic, set the same topic name. - If you want to append logs to different topics, set different topic names.
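Taken together, the console-logging rows above correspond to a properties fragment like the following sketch, which uses the parameter names from the table to log to the console at INFO instead of the default WARN:

```properties
# Sketch assembled from the logging parameters in the table above.
pulsar.root.logger=INFO,CONSOLE
log4j.rootLogger=${pulsar.root.logger}
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.Threshold=DEBUG
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} - %-5p - [%t:%C{1}@%L] - %m%n
```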
Log4j shell
Name | Default |
---|---|
bookkeeper.root.logger | ERROR,CONSOLE |
log4j.rootLogger | ${bookkeeper.root.logger} |
log4j.appender.CONSOLE | org.apache.log4j.ConsoleAppender |
log4j.appender.CONSOLE.Threshold | DEBUG |
log4j.appender.CONSOLE.layout | org.apache.log4j.PatternLayout |
log4j.appender.CONSOLE.layout.ConversionPattern | %d{ABSOLUTE} %-5p %m%n |
log4j.logger.org.apache.zookeeper | ERROR |
log4j.logger.org.apache.bookkeeper | ERROR |
log4j.logger.org.apache.bookkeeper.bookie.BookieShell | INFO |
Standalone
Name | Description | Default |
---|---|---|
authenticateOriginalAuthData | If this flag is set to true , the broker authenticates the original Auth data; else it just accepts the originalPrincipal and authorizes it (if required). | false |
zookeeperServers | The quorum connection string for local ZooKeeper | |
zooKeeperCacheExpirySeconds | ZooKeeper cache expiry time in seconds | 300 |
configurationStoreServers | Configuration store connection string (as a comma-separated list) | |
brokerServicePort | The port on which the standalone broker listens for connections | 6650 |
webServicePort | The port used by the standalone broker for HTTP requests | 8080 |
bindAddress | The hostname or IP address on which the standalone service binds | 0.0.0.0 |
advertisedAddress | The hostname or IP address that the standalone service advertises to the outside world. If not set, the value of InetAddress.getLocalHost().getHostName() is used. | |
numAcceptorThreads | Number of threads to use for Netty Acceptor | 1 |
numIOThreads | Number of threads to use for Netty IO | 2 * Runtime.getRuntime().availableProcessors() |
numHttpServerThreads | Number of threads to use for HTTP request processing | 2 * Runtime.getRuntime().availableProcessors() |
isRunningStandalone | This flag controls features that are meant to be used when running in standalone mode. | N/A |
clusterName | The name of the cluster that this broker belongs to. | standalone |
failureDomainsEnabled | Enable cluster’s failure-domain which can distribute brokers into logical region. | false |
zooKeeperSessionTimeoutMillis | The ZooKeeper session timeout, in milliseconds. | 30000 |
zooKeeperOperationTimeoutSeconds | ZooKeeper operation timeout in seconds. | 30 |
brokerShutdownTimeoutMs | The time to wait for graceful broker shutdown. After this time elapses, the process will be killed. | 60000 |
skipBrokerShutdownOnOOM | Flag to skip broker shutdown when broker handles Out of memory error. | false |
backlogQuotaCheckEnabled | Enable the backlog quota check, which enforces a specified action when the quota is reached. | true |
backlogQuotaCheckIntervalInSeconds | How often to check for topics that have reached the backlog quota. | 60 |
backlogQuotaDefaultLimitGB | The default per-topic backlog quota limit. A value less than 0 means no limitation. By default, it is -1. | -1 |
ttlDurationDefaultInSeconds | The default Time to Live (TTL) for namespaces if the TTL is not configured at namespace policies. When the value is set to 0 , TTL is disabled. By default, TTL is disabled. | 0 |
brokerDeleteInactiveTopicsEnabled | Enable the deletion of inactive topics. | true |
brokerDeleteInactiveTopicsFrequencySeconds | How often to check for inactive topics, in seconds. | 60 |
maxPendingPublishRequestsPerConnection | Maximum pending publish requests per connection to avoid keeping large number of pending requests in memory | 1000 |
messageExpiryCheckIntervalInMinutes | How often to proactively check and purge expired messages. | 5 |
activeConsumerFailoverDelayTimeMillis | How long to delay rewinding cursor and dispatching messages when active consumer is changed. | 1000 |
subscriptionExpirationTimeMinutes | How long to wait after the last consumption before deleting an inactive subscription. When it is set to 0, inactive subscriptions are not deleted automatically. | 0 |
subscriptionRedeliveryTrackerEnabled | Enable subscription message redelivery tracker to send redelivery count to consumer. | true |
subscriptionKeySharedEnable | Whether to enable the Key_Shared subscription. | true |
subscriptionKeySharedUseConsistentHashing | In Key_Shared subscription type, with default AUTO_SPLIT mode, use splitting ranges or consistent hashing to reassign keys to new consumers. | false |
subscriptionKeySharedConsistentHashingReplicaPoints | In Key_Shared subscription type, the number of points in the consistent-hashing ring. The greater the number, the more equal the assignment of keys to consumers. | 100 |
subscriptionExpiryCheckIntervalInMinutes | How frequently to proactively check and purge expired subscription | 5 |
brokerDeduplicationEnabled | Set the default behavior for message deduplication in the broker. This can be overridden per-namespace. If it is enabled, the broker rejects messages that are already stored in the topic. | false |
brokerDeduplicationMaxNumberOfProducers | Maximum number of producer information entries to persist for deduplication purposes | 10000 |
brokerDeduplicationEntriesInterval | Number of entries after which a deduplication information snapshot is taken. A greater interval leads to fewer snapshots being taken, though it increases the topic recovery time when the entries published after the snapshot need to be replayed. | 1000 |
brokerDeduplicationProducerInactivityTimeoutMinutes | Time of inactivity (in minutes) after which the deduplication information related to a disconnected producer is discarded. | 360 |
defaultNumberOfNamespaceBundles | When a namespace is created without specifying the number of bundles, this value is used as the default setting. | 4 |
clientLibraryVersionCheckEnabled | Enable checks for minimum allowed client library version. | false |
clientLibraryVersionCheckAllowUnversioned | Allow client libraries with no version information | true |
statusFilePath | The path for the file used to determine the rotation status for the broker when responding to service discovery health checks | /usr/local/apache/htdocs |
maxUnackedMessagesPerConsumer | The maximum number of unacknowledged messages allowed to be received by consumers on a shared subscription. Once this limit is reached, the broker stops sending messages to the consumer until it begins acknowledging messages. A value of 0 disables the unacked message limit check and thus allows consumers to receive messages without any restrictions. | 50000 |
maxUnackedMessagesPerSubscription | The same as above, except per subscription rather than per consumer. | 200000 |
maxUnackedMessagesPerBroker | Maximum number of unacknowledged messages allowed per broker. Once this limit is reached, the broker stops dispatching messages to all shared subscriptions that have a higher number of unacknowledged messages, until subscriptions start acknowledging messages back and the unacknowledged message count drops to limit/2. When the value is set to 0, the unacknowledged message limit check is disabled and the broker does not block dispatchers. | 0 |
maxUnackedMessagesPerSubscriptionOnBrokerBlocked | Once the broker reaches the maxUnackedMessagesPerBroker limit, it blocks subscriptions whose unacknowledged messages exceed this percentage of the limit, and those subscriptions do not receive any new messages until they acknowledge messages back. | 0.16 |
unblockStuckSubscriptionEnabled | The broker periodically checks if a subscription is stuck and unblocks it if this flag is enabled. | false |
zookeeperSessionExpiredPolicy | There are two policies for handling ZooKeeper session expiry: "shutdown" and "reconnect". With the "shutdown" policy, the broker shuts down when the ZooKeeper session expires. With the "reconnect" policy, the broker tries to reconnect to the ZooKeeper server and re-register its metadata with ZooKeeper. Note: the "reconnect" policy is an experimental feature. | shutdown |
topicPublisherThrottlingTickTimeMillis | Tick time for the task that checks the topic publish rate limit across all topics. A lower value can improve throttling accuracy but uses more CPU to perform frequent checks. When the value is set to 0, publish throttling is disabled. | 10 |
brokerPublisherThrottlingTickTimeMillis | Tick time for the task that checks the broker publish rate limit across all topics. A lower value can improve throttling accuracy but uses more CPU to perform frequent checks. When the value is set to 0, publish throttling is disabled. | 50 |
brokerPublisherThrottlingMaxMessageRate | Maximum rate (in 1 second) of messages allowed to publish for a broker if the message rate limiting is enabled. When the value is set to 0, message rate limiting is disabled. | 0 |
brokerPublisherThrottlingMaxByteRate | Maximum rate (in 1 second) of bytes allowed to publish for a broker if the byte rate limiting is enabled. When the value is set to 0, the byte rate limiting is disabled. | 0 |
subscribeThrottlingRatePerConsumer | Too many subscribe requests from a consumer can cause the broker to rewind consumer cursors and load data from bookies, resulting in high network bandwidth usage. When a positive value is set, the broker throttles the subscribe requests of a consumer. Otherwise, throttling is disabled. By default, it is disabled. | 0 |
subscribeRatePeriodPerConsumerInSecond | Rate period for {subscribeThrottlingRatePerConsumer}. By default, it is 30s. | 30 |
dispatchThrottlingRatePerTopicInMsg | Default messages (per second) dispatch throttling-limit for every topic. When the value is set to 0, default message dispatch throttling-limit is disabled. | 0 |
dispatchThrottlingRatePerTopicInByte | Default byte (per second) dispatch throttling-limit for every topic. When the value is set to 0, default byte dispatch throttling-limit is disabled. | 0 |
dispatchThrottlingRateRelativeToPublishRate | Enable dispatch rate-limiting relative to publish rate. | false |
dispatchThrottlingRatePerSubscriptionInMsg | The default message dispatch throttling-limit for a subscription. The value of 0 disables message dispatch-throttling. | 0 |
dispatchThrottlingRatePerSubscriptionInByte | The default number of message-bytes dispatching throttling-limit for a subscription. The value of 0 disables message-byte dispatch-throttling. | 0 |
dispatchThrottlingOnNonBacklogConsumerEnabled | Enable dispatch-throttling for both caught up consumers as well as consumers who have backlogs. | true |
dispatcherMaxReadBatchSize | The maximum number of entries to read from BookKeeper. By default, it is 100 entries. | 100 |
dispatcherMaxReadSizeBytes | The maximum size in bytes of entries to read from BookKeeper. By default, it is 5MB. | 5242880 |
dispatcherMinReadBatchSize | The minimum number of entries to read from BookKeeper. By default, it is 1 entry. When an error occurs while reading entries from BookKeeper, the broker falls back to this minimum batch size. | 1 |
dispatcherMaxRoundRobinBatchSize | The maximum number of entries to dispatch for a shared subscription. By default, it is 20 entries. | 20 |
preciseDispatcherFlowControl | Precise dispatcher flow control based on the history message count of each entry. | false |
streamingDispatch | Whether to use the streaming read dispatcher. When there is a large backlog to drain, streaming reads from BookKeeper can keep consumers saturated until the BookKeeper read limit or the consumer flow-control limit is reached, after which consumer flow control regulates the speed. This feature is currently in preview and may change in subsequent releases. | false |
maxConcurrentLookupRequest | Maximum number of concurrent lookup request that the broker allows to throttle heavy incoming lookup traffic. | 50000 |
maxConcurrentTopicLoadRequest | Maximum number of concurrent topic loading request that the broker allows to control the number of zk-operations. | 5000 |
maxConcurrentNonPersistentMessagePerConnection | Maximum number of concurrent non-persistent message that can be processed per connection. | 1000 |
numWorkerThreadsForNonPersistentTopic | Number of worker threads to serve non-persistent topic. | 8 |
enablePersistentTopics | Enable broker to load persistent topics. | true |
enableNonPersistentTopics | Enable broker to load non-persistent topics. | true |
maxSubscriptionsPerTopic | Maximum number of subscriptions allowed to subscribe to a topic. Once this limit is reached, the broker rejects new subscriptions until the number of subscriptions decreases. When the value is set to 0, the limit is disabled. | 0 |
maxProducersPerTopic | Maximum number of producers allowed to connect to a topic. Once this limit is reached, the broker rejects new producers until the number of connected producers decreases. When the value is set to 0, the limit is disabled. | 0 |
maxConsumersPerTopic | Maximum number of consumers allowed to connect to a topic. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit is disabled. | 0 |
maxConsumersPerSubscription | Maximum number of consumers allowed to connect to a subscription. Once this limit is reached, the broker rejects new consumers until the number of connected consumers decreases. When the value is set to 0, the limit is disabled. | 0 |
maxNumPartitionsPerPartitionedTopic | Maximum number of partitions per partitioned topic. When the value is set to a negative number or is set to 0, the check is disabled. | 0 |
tlsCertRefreshCheckDurationSec | TLS certificate refresh duration in seconds. When the value is set to 0, check the TLS certificate on every new connection. | 300 |
tlsCertificateFilePath | Path for the TLS certificate file. | |
tlsKeyFilePath | Path for the TLS private key file. | |
tlsTrustCertsFilePath | Path for the trusted TLS certificate file. | |
tlsAllowInsecureConnection | Accept untrusted TLS certificates from clients. If set to true, a client with a certificate that cannot be verified against the 'tlsTrustCertsFilePath' certificate is allowed to connect to the server, though the certificate is not used for client authentication. | false |
tlsProtocols | Specify the TLS protocols the broker uses to negotiate during TLS handshake. | |
tlsCiphers | Specify the TLS cipher the broker uses to negotiate during TLS Handshake. | |
tlsRequireTrustedClientCertOnConnect | Trusted client certificates are required to connect over TLS. The connection is rejected if the client certificate is not trusted. In effect, this requires all connecting clients to perform TLS client authentication. | false |
tlsEnabledWithKeyStore | Enable TLS with KeyStore type configuration in broker. | false |
tlsProvider | TLS Provider for KeyStore type. | |
tlsKeyStoreType | TLS KeyStore type configuration in the broker. | JKS |
tlsKeyStore | TLS KeyStore path in the broker. | |
tlsKeyStorePassword | TLS KeyStore password for the broker. | |
tlsTrustStoreType | TLS TrustStore type configuration in the broker | JKS |
tlsTrustStore | TLS TrustStore path in the broker. | |
tlsTrustStorePassword | TLS TrustStore password for the broker. | |
brokerClientTlsEnabledWithKeyStore | Configure whether the internal client uses the KeyStore type to authenticate with Pulsar brokers. | false |
brokerClientSslProvider | The TLS Provider used by the internal client to authenticate with other Pulsar brokers. | |
brokerClientTlsTrustStoreType | TLS TrustStore type configuration for the internal client to authenticate with Pulsar brokers. | JKS |
brokerClientTlsTrustStore | TLS TrustStore path for the internal client to authenticate with Pulsar brokers. | |
brokerClientTlsTrustStorePassword | TLS TrustStore password for the internal client to authenticate with Pulsar brokers. | |
brokerClientTlsCiphers | Specify the TLS cipher that the internal client uses to negotiate during TLS Handshake. | |
brokerClientTlsProtocols | Specify the TLS protocols that the broker uses to negotiate during TLS handshake. | |
systemTopicEnabled | Enable/Disable system topics. | false |
topicLevelPoliciesEnabled | Enable or disable topic level policies. Topic level policies depend on the system topic, so enable the system topic first. | false |
topicFencingTimeoutSeconds | If a topic remains fenced for a certain time period (in seconds), it is closed forcefully. If set to 0 or a negative number, fenced topics are not closed. | 0 |
proxyRoles | Role names that are treated as “proxy roles”. If the broker sees a request with role as proxyRoles, it demands to see a valid original principal. | |
authenticationEnabled | Enable authentication for the broker. | false |
authenticationProviders | A comma-separated list of class names for authentication providers. | false |
authorizationEnabled | Enforce authorization in brokers. | false |
authorizationProvider | Authorization provider fully qualified class-name. | org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider |
authorizationAllowWildcardsMatching | Allow wildcard matching in authorization. Wildcard matching is applicable only when the wildcard-character (*) presents at the first or last position. | false |
superUserRoles | Role names that are treated as “superusers.” Superusers are authorized to perform all admin tasks. | |
brokerClientAuthenticationPlugin | The authentication settings of the broker itself. Used when the broker connects to other brokers either in the same cluster or from other clusters. | |
brokerClientAuthenticationParameters | The parameters that go along with the plugin specified using brokerClientAuthenticationPlugin. | |
athenzDomainNames | Supported Athenz authentication provider domain names as a comma-separated list. | |
anonymousUserRole | When this parameter is not empty, unauthenticated users perform as anonymousUserRole. | |
tokenSecretKey | Configure the secret key to be used to validate auth tokens. The key can be specified like: tokenSecretKey=data:;base64,xxxxxxxxx or tokenSecretKey=file:///my/secret.key . Note: key file must be DER-encoded. | |
tokenPublicKey | Configure the public key to be used to validate auth tokens. The key can be specified like: tokenPublicKey=data:;base64,xxxxxxxxx or tokenPublicKey=file:///my/secret.key . Note: key file must be DER-encoded. | |
tokenAuthClaim | Specify the token claim that will be used as the authentication “principal” or “role”. The “subject” field will be used if this is left blank | |
tokenAudienceClaim | The token audience “claim” name, e.g. “aud”. It is used to get the audience from token. If it is not set, the audience is not verified. | |
tokenAudience | The token audience that stands for this broker. The tokenAudienceClaim field of a valid token must contain this parameter. | |
saslJaasClientAllowedIds | This is a regexp that limits the range of possible ids that can connect to the broker using SASL. By default, it is set to SaslConstants.JAAS_CLIENT_ALLOWED_IDS_DEFAULT , which is ".*pulsar.*", so only clients whose id contains 'pulsar' are allowed to connect. | N/A |
saslJaasBrokerSectionName | Service Principal, for login context name. By default, it is set to SaslConstants.JAAS_DEFAULT_BROKER_SECTION_NAME , which is “Broker”. | N/A |
httpMaxRequestSize | If the value is larger than 0, the broker rejects all HTTP requests with bodies larger than the configured limit. | -1 |
exposePreciseBacklogInPrometheus | Enable exposing precise backlog stats. Set to false to calculate backlog from the published counter and consumed counter, which is more efficient but may be inaccurate. | false |
bookkeeperMetadataServiceUri | Metadata service uri is what BookKeeper used for loading corresponding metadata driver and resolving its metadata service location. This value can be fetched using bookkeeper shell whatisinstanceid command in BookKeeper cluster. For example: zk+hierarchical://localhost:2181/ledgers . The metadata service uri list can also be semicolon separated values like: zk+hierarchical://zk1:2181;zk2:2181;zk3:2181/ledgers . | N/A |
bookkeeperClientAuthenticationPlugin | Authentication plugin to be used when connecting to bookies (BookKeeper servers). | |
bookkeeperClientAuthenticationParametersName | BookKeeper authentication plugin implementation parameters and values. | |
bookkeeperClientAuthenticationParameters | Parameters associated with the bookkeeperClientAuthenticationParametersName | |
bookkeeperClientNumWorkerThreads | Number of BookKeeper client worker threads. Default is Runtime.getRuntime().availableProcessors() | |
bookkeeperClientTimeoutInSeconds | Timeout for BookKeeper add and read operations. | 30 |
bookkeeperClientSpeculativeReadTimeoutInMillis | Speculative reads are initiated if a read request doesn’t complete within a certain time. A value of 0 disables speculative reads. | 0 |
bookkeeperUseV2WireProtocol | Use older Bookkeeper wire protocol with bookie. | true |
bookkeeperClientHealthCheckEnabled | Enable bookie health checks. | true |
bookkeeperClientHealthCheckIntervalSeconds | The time interval, in seconds, at which health checks are performed. New ledgers are not created during health checks. | 60 |
bookkeeperClientHealthCheckErrorThresholdPerInterval | Error threshold for health checks. | 5 |
bookkeeperClientHealthCheckQuarantineTimeInSeconds | If a bookie exceeds the allowed number of failures within the interval specified by bookkeeperClientHealthCheckIntervalSeconds, it is quarantined for this period, in seconds. | 1800 |
bookkeeperClientGetBookieInfoIntervalSeconds | The interval, in seconds, at which the GetBookieInfo check runs. This setting helps keep the list of bookies on the brokers up to date. | 86400 |
bookkeeperClientGetBookieInfoRetryIntervalSeconds | The retry interval, in seconds, for the GetBookieInfo check. This setting helps keep the list of bookies on the brokers up to date. | 60 |
bookkeeperClientRackawarePolicyEnabled | | true |
bookkeeperClientRegionawarePolicyEnabled | | false |
bookkeeperClientMinNumRacksPerWriteQuorum | | 2 |
bookkeeperClientEnforceMinNumRacksPerWriteQuorum | | false |
bookkeeperClientReorderReadSequenceEnabled | | false |
bookkeeperClientIsolationGroups | | |
bookkeeperClientSecondaryIsolationGroups | Enable the bookie secondary-isolation group if bookkeeperClientIsolationGroups doesn’t have enough bookies available. | |
bookkeeperClientMinAvailableBookiesInIsolationGroups | Minimum number of bookies that should be available as part of bookkeeperClientIsolationGroups; otherwise, the broker includes bookkeeperClientSecondaryIsolationGroups bookies in the isolated list. | |
bookkeeperTLSProviderFactoryClass | Set the client security provider factory class name. | org.apache.bookkeeper.tls.TLSContextFactory |
bookkeeperTLSClientAuthentication | Enable TLS authentication with bookie. | false |
bookkeeperTLSKeyFileType | Supported types: PEM, JKS, PKCS12. | PEM |
bookkeeperTLSTrustCertTypes | Supported types: PEM, JKS, PKCS12. | PEM |
bookkeeperTLSKeyStorePasswordPath | Path to file containing keystore password, if the client keystore is password protected. | |
bookkeeperTLSTrustStorePasswordPath | Path to file containing truststore password, if the client truststore is password protected. | |
bookkeeperTLSKeyFilePath | Path for the TLS private key file. | |
bookkeeperTLSCertificateFilePath | Path for the TLS certificate file. | |
bookkeeperTLSTrustCertsFilePath | Path for the trusted TLS certificates file. | |
bookkeeperDiskWeightBasedPlacementEnabled | Enable/Disable disk weight based placement. | false |
bookkeeperExplicitLacIntervalInMills | Set the interval to check the need for sending an explicit LAC. When the value is set to 0, no explicit LAC is sent. | 0 |
bookkeeperClientExposeStatsToPrometheus | Expose BookKeeper client managed ledger stats to Prometheus. | false |
managedLedgerDefaultEnsembleSize | | 1 |
managedLedgerDefaultWriteQuorum | | 1 |
managedLedgerDefaultAckQuorum | | 1 |
managedLedgerDigestType | Default type of checksum to use when writing to BookKeeper. | CRC32C |
managedLedgerNumSchedulerThreads | Number of threads to be used for managed ledger scheduled tasks. | 8 |
managedLedgerCacheSizeMB | | N/A |
managedLedgerCacheCopyEntries | Whether to copy the entry payloads when inserting in cache. | false |
managedLedgerCacheEvictionWatermark | | 0.9 |
managedLedgerCacheEvictionFrequency | Configure the cache eviction frequency for the managed ledger cache (evictions/sec) | 100.0 |
managedLedgerCacheEvictionTimeThresholdMillis | All entries that have stayed in the cache for longer than the configured time are evicted. | 1000 |
managedLedgerCursorBackloggedThreshold | Configure the threshold (in number of entries) from where a cursor should be considered ‘backlogged’ and thus should be set as inactive. | 1000 |
managedLedgerUnackedRangesOpenCacheSetEnabled | Use Open Range-Set to cache unacknowledged messages | true |
managedLedgerDefaultMarkDeleteRateLimit | | 0.1 |
managedLedgerMaxEntriesPerLedger | | 50000 |
managedLedgerMinLedgerRolloverTimeMinutes | | 10 |
managedLedgerMaxLedgerRolloverTimeMinutes | | 240 |
managedLedgerCursorMaxEntriesPerLedger | | 50000 |
managedLedgerCursorRolloverTimeInSeconds | | 14400 |
managedLedgerMaxSizePerLedgerMbytes | Maximum ledger size before triggering a rollover for a topic. | 2048 |
managedLedgerMaxUnackedRangesToPersist | Maximum number of “acknowledgment holes” that are going to be persistently stored. When acknowledging out of order, a consumer leaves holes that are supposed to be quickly filled by acknowledging all the messages. The information of which messages are acknowledged is persisted by compressing in “ranges” of messages that were acknowledged. After the max number of ranges is reached, the information is only tracked in memory and messages are redelivered in case of crashes. | 10000 |
managedLedgerMaxUnackedRangesToPersistInZooKeeper | Maximum number of “acknowledgment holes” that can be stored in ZooKeeper. If the number of unacknowledged message ranges is higher than this limit, the broker persists unacknowledged ranges into BookKeeper to avoid additional data overhead in ZooKeeper. | 1000 |
autoSkipNonRecoverableData | | false |
managedLedgerMetadataOperationsTimeoutSeconds | Operation timeout while updating managed-ledger metadata. | 60 |
managedLedgerReadEntryTimeoutSeconds | Read entries timeout when the broker tries to read messages from BookKeeper. | 0 |
managedLedgerAddEntryTimeoutSeconds | Add entry timeout when the broker tries to publish message to BookKeeper. | 0 |
managedLedgerNewEntriesCheckDelayInMillis | New-entries check delay for the cursor under the managed ledger. If there are no new messages in the topic, the cursor tries to check again after this delay. For consumption-latency-sensitive scenarios, you can set the value lower or to 0, although a smaller value may degrade consumption throughput. | 10 |
managedLedgerPrometheusStatsLatencyRolloverSeconds | Managed ledger prometheus stats latency rollover seconds. | 60 |
managedLedgerTraceTaskExecution | Whether to trace managed ledger task execution time. | true |
loadBalancerEnabled | | false |
loadBalancerPlacementStrategy | | weightedRandomSelection |
loadBalancerReportUpdateThresholdPercentage | | 10 |
loadBalancerReportUpdateMaxIntervalMinutes | | 15 |
loadBalancerHostUsageCheckIntervalMinutes | | 1 |
loadBalancerSheddingIntervalMinutes | | 30 |
loadBalancerSheddingGracePeriodMinutes | | 30 |
loadBalancerBrokerMaxTopics | | 50000 |
loadBalancerBrokerUnderloadedThresholdPercentage | | 1 |
loadBalancerBrokerOverloadedThresholdPercentage | | 85 |
loadBalancerResourceQuotaUpdateIntervalMinutes | | 15 |
loadBalancerBrokerComfortLoadLevelPercentage | | 65 |
loadBalancerAutoBundleSplitEnabled | | false |
loadBalancerAutoUnloadSplitBundlesEnabled | Enable/Disable automatic unloading of split bundles. | true |
loadBalancerNamespaceBundleMaxTopics | | 1000 |
loadBalancerNamespaceBundleMaxSessions | | 1000 |
loadBalancerNamespaceBundleMaxMsgRate | | 1000 |
loadBalancerNamespaceBundleMaxBandwidthMbytes | | 100 |
loadBalancerNamespaceMaximumBundles | | 128 |
loadBalancerBrokerThresholdShedderPercentage | The broker resource usage threshold. When a broker's resource usage exceeds the Pulsar cluster's average resource usage by more than this percentage, the threshold shedder is triggered to offload bundles from the broker. It only takes effect in the ThresholdShedder strategy. | 10 |
loadBalancerHistoryResourcePercentage | The history usage when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 0.9 |
loadBalancerBandwithInResourceWeight | The inbound bandwidth usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 |
loadBalancerBandwithOutResourceWeight | The outbound bandwidth usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 |
loadBalancerCPUResourceWeight | The CPU usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 |
loadBalancerMemoryResourceWeight | The heap memory usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 |
loadBalancerDirectMemoryResourceWeight | The direct memory usage weight when calculating new resource usage. It only takes effect in the ThresholdShedder strategy. | 1.0 |
loadBalancerBundleUnloadMinThroughputThreshold | Bundle unload minimum throughput threshold. Avoids frequent bundle unloading. It only takes effect in the ThresholdShedder strategy. | 10 |
replicationMetricsEnabled | | true |
replicationConnectionsPerBroker | | 16 |
replicationProducerQueueSize | | 1000 |
replicationPolicyCheckDurationSeconds | Interval, in seconds, at which the replication policy is checked to avoid replicator inconsistency due to a missing ZooKeeper watch. Setting the value to 0 disables the check. | 600 |
defaultRetentionTimeInMinutes | | 0 |
defaultRetentionSizeInMB | | 0 |
keepAliveIntervalSeconds | | 30 |
haProxyProtocolEnabled | Enable or disable the HAProxy protocol. | false |
bookieId | If you want to customize a bookie's ID, or use a dynamic network address for a bookie, you can set the bookieId value. The bookie advertises itself using the bookieId rather than the BookieSocketAddress (hostname:port or IP:port ). The bookieId is a non-empty string that can contain ASCII digits and letters ([a-zA-Z0-9]), colons, dashes, and dots. For more information about bookieId , see here. | / |
maxTopicsPerNamespace | Maximum number of persistent topics that can be created in a namespace. When the number of topics reaches this threshold, the broker rejects requests to create new topics, including topics auto-created by producers or consumers, until the number of connected consumers decreases. The default value 0 disables the check. | 0 |
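The ThresholdShedder parameters above interact as follows: each resource usage is scaled by its weight, the maximum of the weighted values becomes the broker's overall usage, and loadBalancerHistoryResourcePercentage blends that figure with the previous one. The sketch below is a simplified, hypothetical illustration of that calculation; the function names and resource keys are our own, not Pulsar's actual implementation.

```python
# Simplified sketch of how a ThresholdShedder-style strategy combines the
# documented resource weights. Illustrative only, not Pulsar's code.

def weighted_max_usage(usage, weights):
    """Overall usage = max of the individually weighted resource usages."""
    return max(usage[r] * weights.get(r, 1.0) for r in usage)

def smoothed_usage(current, history, history_percentage=0.9):
    """Blend previous usage with the current sample, as controlled by
    loadBalancerHistoryResourcePercentage (default 0.9)."""
    if history is None:
        return current
    return history * history_percentage + current * (1.0 - history_percentage)

# Example: inbound bandwidth dominates, so it drives the broker's usage figure.
usage = {"cpu": 60.0, "memory": 40.0, "bandwidthIn": 80.0, "bandwidthOut": 30.0}
weights = {"cpu": 1.0, "memory": 1.0, "bandwidthIn": 1.0, "bandwidthOut": 1.0}
current = weighted_max_usage(usage, weights)     # 80.0
overall = smoothed_usage(current, history=50.0)  # ~53.0: history damps spikes
```

Raising a resource's weight makes spikes in that resource more likely to push the broker over the shedding threshold, while a higher history percentage makes the load balancer react more slowly.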
WebSocket
Name | Description | Default |
---|---|---|
configurationStoreServers | | |
zooKeeperSessionTimeoutMillis | | 30000 |
zooKeeperCacheExpirySeconds | ZooKeeper cache expiry time, in seconds | 300 |
serviceUrl | | |
serviceUrlTls | | |
brokerServiceUrl | | |
brokerServiceUrlTls | | |
webServicePort | | 8080 |
webServicePortTls | | 8443 |
bindAddress | | 0.0.0.0 |
clusterName | | |
authenticationEnabled | | false |
authenticationProviders | | |
authorizationEnabled | | false |
superUserRoles | | |
brokerClientAuthenticationPlugin | | |
brokerClientAuthenticationParameters | | |
tlsEnabled | | false |
tlsAllowInsecureConnection | | false |
tlsCertificateFilePath | | |
tlsKeyFilePath | | |
tlsTrustCertsFilePath | | |
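Putting a few of the options above together, a minimal conf/websocket.conf fragment might look like the following sketch. The hostnames and cluster name are placeholders, not defaults.

```
configurationStoreServers=zk1:2181,zk2:2181,zk3:2181
serviceUrl=http://pulsar.example.com:8080
brokerServiceUrl=pulsar://pulsar.example.com:6650
webServicePort=8080
clusterName=my-cluster
authenticationEnabled=false
```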
Pulsar proxy
The Pulsar proxy can be configured in the conf/proxy.conf file.
Name | Description | Default |
---|---|---|
forwardAuthorizationCredentials | Forward client authorization credentials to the broker for re-authorization. Make sure authentication is enabled for this to take effect. | false |
zookeeperServers | ZooKeeper quorum connection string (a comma-separated list) | |
configurationStoreServers | Configuration store connection string (a comma-separated list) | |
brokerServiceURL | The service URL pointing to the broker cluster. Must begin with pulsar:// . | |
brokerServiceURLTLS | The TLS service URL pointing to the broker cluster. Must begin with pulsar+ssl:// . | |
brokerWebServiceURL | The Web service URL pointing to the broker cluster | |
brokerWebServiceURLTLS | The TLS Web service URL pointing to the broker cluster | |
functionWorkerWebServiceURL | The Web service URL pointing to the function worker cluster. It is only configured when you set up function workers in a separate cluster. | |
functionWorkerWebServiceURLTLS | The TLS Web service URL pointing to the function worker cluster. It is only configured when you set up function workers in a separate cluster. | |
zookeeperSessionTimeoutMs | ZooKeeper session timeout, in milliseconds | 30000 |
zooKeeperCacheExpirySeconds | ZooKeeper cache expiry time, in seconds | 300 |
advertisedAddress | Hostname or IP address the service advertises to the outside world. If not set, the value of InetAddress.getLocalHost().getHostname() is used. | N/A |
servicePort | The port to use for server binary Protobuf requests | 6650 |
servicePortTls | The port to use for server binary Protobuf TLS requests | 6651 |
statusFilePath | Path for the file used to determine the rotation status of the proxy instance when responding to service discovery health checks | |
proxyLogLevel | Proxy log level | 0 |
authenticationEnabled | Whether authentication is enabled for the Pulsar proxy | false |
authenticateMetricsEndpoint | Whether the ‘/metrics’ endpoint requires authentication. authenticationEnabled must also be set for this to take effect. | true |
authenticationProviders | Authentication provider names (a comma-separated list of class names) | |
authorizationEnabled | Whether authorization is enforced by the Pulsar proxy | false |
authorizationProvider | Fully qualified class name of the authorization provider | org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider |
anonymousUserRole | When this parameter is not empty, unauthenticated users perform as anonymousUserRole. | |
brokerClientAuthenticationPlugin | The authentication plugin used by the Pulsar proxy to authenticate with Pulsar brokers | |
brokerClientAuthenticationParameters | The parameters used by the Pulsar proxy to authenticate with Pulsar brokers | |
brokerClientTrustCertsFilePath | Path to the trusted certificates used by the Pulsar proxy to authenticate with Pulsar brokers | |
superUserRoles | Role names of the “super-users,” meaning they are able to perform all admin operations | |
maxConcurrentInboundConnections | Max concurrent inbound connections. The proxy will reject requests beyond that. | 10000 |
maxConcurrentLookupRequests | Max concurrent outbound connections. The proxy will error out requests beyond that. | 50000 |
tlsEnabledInProxy | Deprecated - use servicePortTls and webServicePortTls instead. | false |
tlsEnabledWithBroker | Whether TLS is enabled when communicating with Pulsar brokers. | false |
tlsCertRefreshCheckDurationSec | TLS certificate refresh check duration in seconds. If the value is set to 0, the TLS certificate is checked on every new connection. | 300 |
tlsCertificateFilePath | Path to the TLS certificate file | |
tlsKeyFilePath | Path to the TLS private key file | |
tlsTrustCertsFilePath | Path to the trusted TLS certificate PEM file | |
tlsHostnameVerificationEnabled | Whether the hostname is validated when the proxy creates a TLS connection with brokers | false |
tlsRequireTrustedClientCertOnConnect | Whether client certificates are required for TLS. Connections are rejected if the client certificate isn’t trusted. | false |
tlsProtocols | Specify the TLS protocols the broker uses to negotiate during the TLS handshake. Multiple values can be specified, separated by commas. Example: TLSv1.3,TLSv1.2 |
tlsCiphers | Specify the TLS ciphers the broker uses to negotiate during the TLS handshake. Multiple values can be specified, separated by commas. Example: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 |
httpReverseProxyConfigs | HTTP paths to redirect to non-Pulsar services | |
httpOutputBufferSize | HTTP output buffer size. The amount of data that will be buffered for HTTP requests before it is flushed to the channel. A larger buffer size may result in higher HTTP throughput though it may take longer for the client to see data. If using HTTP streaming via the reverse proxy, this should be set to the minimum value (1) so that clients see the data as soon as possible. | 32768 |
httpNumThreads | Number of threads to use for HTTP requests processing | 2 * Runtime.getRuntime().availableProcessors() |
tokenSecretKey | Configure the secret key to be used to validate auth tokens. The key can be specified like: tokenSecretKey=data:;base64,xxxxxxxxx or tokenSecretKey=file:///my/secret.key . Note: key file must be DER-encoded. | |
tokenPublicKey | Configure the public key to be used to validate auth tokens. The key can be specified like: tokenPublicKey=data:;base64,xxxxxxxxx or tokenPublicKey=file:///my/secret.key . Note: key file must be DER-encoded. | |
tokenAuthClaim | Specify the token claim that will be used as the authentication “principal” or “role”. The “subject” field will be used if this is left blank. | |
tokenAudienceClaim | The token audience “claim” name, e.g. “aud”. It is used to get the audience from token. If it is not set, the audience is not verified. | |
tokenAudience | The token audience stands for this broker. The field tokenAudienceClaim of a valid token needs to contain this parameter. | |
haProxyProtocolEnabled | Enable or disable the HAProxy protocol. | false |
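As an illustration, a minimal conf/proxy.conf for a proxy that talks to its brokers over TLS might combine the options above as follows. The hostnames and file paths are placeholders.

```
brokerServiceURL=pulsar://brokers.example.com:6650
brokerServiceURLTLS=pulsar+ssl://brokers.example.com:6651
servicePort=6650
servicePortTls=6651
tlsEnabledWithBroker=true
brokerClientTrustCertsFilePath=/path/to/ca.cert.pem
tlsHostnameVerificationEnabled=true
```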
ZooKeeper
ZooKeeper handles a broad range of essential configuration- and coordination-related tasks for Pulsar. The default configuration file for ZooKeeper is conf/zookeeper.conf in your Pulsar installation. The following parameters are available:
Name | Description | Default |
---|---|---|
tickTime | The tick is the basic unit of time in ZooKeeper, measured in milliseconds and used to regulate things like heartbeats and timeouts. tickTime is the length of a single tick. | 2000 |
initLimit | The maximum time, in ticks, that the leader ZooKeeper server allows follower ZooKeeper servers to successfully connect and sync. The tick time is set in milliseconds using the tickTime parameter. | 10 |
syncLimit | The maximum time, in ticks, that a follower ZooKeeper server is allowed to sync with other ZooKeeper servers. The tick time is set in milliseconds using the tickTime parameter. | 5 |
dataDir | The location where ZooKeeper will store in-memory database snapshots as well as the transaction log of updates to the database. | data/zookeeper |
clientPort | The port on which the ZooKeeper server will listen for connections. | 2181 |
admin.enableServer | Whether the ZooKeeper admin server is enabled. | true |
admin.serverPort | The port at which the admin listens. | 9990 |
autopurge.snapRetainCount | When auto-purging is enabled via the autopurge.purgeInterval parameter, determines how many database snapshots are retained in dataDir ; older snapshots are deleted. | 3 |
autopurge.purgeInterval | The time interval, in hours, by which the ZooKeeper database purge task is triggered. Setting to a non-zero number will enable auto purge; setting to 0 will disable. Read this guide before enabling auto purge. | 1 |
forceSync | Requires updates to be synced to media of the transaction log before finishing processing the update. If this option is set to ‘no’, ZooKeeper will not require updates to be synced to the media. WARNING: it’s not recommended to run a production ZooKeeper cluster with forceSync disabled. | yes |
maxClientCnxns | The maximum number of client connections. Increase this if you need to handle more ZooKeeper clients. | 60 |
In addition to the parameters in the table above, configuring ZooKeeper for Pulsar involves adding a server.N line to the conf/zookeeper.conf file for each node in the ZooKeeper cluster, where N is the number of the ZooKeeper node. Here’s an example for a three-node ZooKeeper cluster:
```
server.1=zk1.us-west.example.com:2888:3888
server.2=zk2.us-west.example.com:2888:3888
server.3=zk3.us-west.example.com:2888:3888
```
We strongly recommend consulting the ZooKeeper Administrator’s Guide for a more thorough and comprehensive introduction to ZooKeeper configuration.
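Combining the table above with the server.N lines, a complete conf/zookeeper.conf for the three-node example might look like the following sketch. The values shown are the defaults from the table plus placeholder hostnames.

```
tickTime=2000
initLimit=10
syncLimit=5
dataDir=data/zookeeper
clientPort=2181
autopurge.snapRetainCount=3
autopurge.purgeInterval=1
server.1=zk1.us-west.example.com:2888:3888
server.2=zk2.us-west.example.com:2888:3888
server.3=zk3.us-west.example.com:2888:3888
```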