Reference architecture: up to 5,000 users
Source: https://docs.gitlab.com/ee/administration/reference_architectures/5k_users.html
- Setup components
- Configure the external load balancer
- Configure Redis
- Configure Consul and Sentinel
- Configure PostgreSQL
- Configure PgBouncer
- Configure Gitaly
- Configure Sidekiq
- Configure GitLab Rails
- Configure Prometheus
- Configure the object storage
- Configure NFS (optional)
- Troubleshooting
This page describes the GitLab reference architecture for up to 5,000 users. For a full list of reference architectures, see Available reference architectures.
NOTE: The 5,000-user reference architecture documented below is designed to help your organization achieve a highly available GitLab deployment. If you do not have the expertise or do not need to maintain a highly available environment, you can have a simpler and less costly-to-operate environment by following the 2,000-user reference architecture.
- Supported users (approximate): 5,000
- High Availability: Yes
- Test requests per second (RPS) rates: API: 100 RPS, Web: 10 RPS, Git: 10 RPS
Service | Nodes | Configuration | GCP | AWS | Azure |
---|---|---|---|---|---|
External load balancing node | 1 | 2 vCPU, 1.8GB memory | n1-highcpu-2 | c5.large | F2s v2 |
Redis | 3 | 2 vCPU, 7.5GB memory | n1-standard-2 | m5.large | D2s v3 |
Consul + Sentinel | 3 | 2 vCPU, 1.8GB memory | n1-highcpu-2 | c5.large | F2s v2 |
PostgreSQL | 3 | 2 vCPU, 7.5GB memory | n1-standard-2 | m5.large | D2s v3 |
PgBouncer | 3 | 2 vCPU, 1.8GB memory | n1-highcpu-2 | c5.large | F2s v2 |
Internal load balancing node | 1 | 2 vCPU, 1.8GB memory | n1-highcpu-2 | c5.large | F2s v2 |
Gitaly | 2 (minimum) | 8 vCPU, 30GB memory | n1-standard-8 | m5.2xlarge | D8s v3 |
Sidekiq | 4 | 2 vCPU, 7.5GB memory | n1-standard-2 | m5.large | D2s v3 |
GitLab Rails | 3 | 16 vCPU, 14.4GB memory | n1-highcpu-16 | c5.4xlarge | F16s v2 |
Monitoring node | 1 | 2 vCPU, 1.8GB memory | n1-highcpu-2 | c5.large | F2s v2 |
Object storage | n/a | n/a | n/a | n/a | n/a |
NFS server (optional, not recommended) | 1 | 4 vCPU, 3.6GB memory | n1-highcpu-4 | c5.xlarge | F4s v2 |
These architectures were built and tested on GCP using the Intel Xeon E5 v3 (Haswell) CPU platform. On different hardware you may find that adjustments, either lower or higher, are required for your CPU or node counts. For more information, a Sysbench benchmark of the CPU can be found here.
For data objects such as LFS, Uploads, and Artifacts, an object storage service is recommended over NFS where possible, due to better performance and availability. Since this doesn't require a node to be set up, it is marked as not applicable (n/a) in the table above.
Setup components
To set up GitLab and its components to accommodate up to 5,000 users:
- Configure the external load balancing node that will handle the load balancing of the GitLab application services nodes.
- Configure Redis.
- Configure Consul and Sentinel.
- Configure PostgreSQL, the database for GitLab.
- Configure PgBouncer.
- Configure the internal load balancing node
- Configure Gitaly, which provides access to the Git repositories.
- Configure Sidekiq.
- Configure the main GitLab Rails application to run Puma/Unicorn, Workhorse, and GitLab Shell, and to serve all frontend requests (UI, API, and Git over HTTP/SSH).
- Configure Prometheus to monitor your GitLab environment.
- Configure the object storage used for shared data objects.
- Configure NFS (optional) to have shared disk storage service as an alternative to Gitaly or object storage (although not recommended). NFS is required for GitLab Pages; you can skip this step if you're not using that feature.
The servers start on the same 10.6.0.0/16 private network range and can connect to each other freely on these addresses.
Here is a list and description of each machine and its assigned IP:
- 10.6.0.10: external load balancer
- 10.6.0.61: Redis primary
- 10.6.0.62: Redis replica 1
- 10.6.0.63: Redis replica 2
- 10.6.0.11: Consul/Sentinel 1
- 10.6.0.12: Consul/Sentinel 2
- 10.6.0.13: Consul/Sentinel 3
- 10.6.0.31: PostgreSQL primary
- 10.6.0.32: PostgreSQL secondary 1
- 10.6.0.33: PostgreSQL secondary 2
- 10.6.0.21: PgBouncer 1
- 10.6.0.22: PgBouncer 2
- 10.6.0.23: PgBouncer 3
- 10.6.0.20: internal load balancer
- 10.6.0.51: Gitaly 1
- 10.6.0.52: Gitaly 2
- 10.6.0.71: Sidekiq 1
- 10.6.0.72: Sidekiq 2
- 10.6.0.73: Sidekiq 3
- 10.6.0.74: Sidekiq 4
- 10.6.0.41: GitLab application 1
- 10.6.0.42: GitLab application 2
- 10.6.0.43: GitLab application 3
- 10.6.0.81: Prometheus
Configure the external load balancer
NOTE: This architecture has been tested and validated with HAProxy as the load balancer. Other load balancers with similar feature sets could also be used, but those load balancers have not been validated.
In an active/active GitLab configuration, you will need a load balancer to route traffic to the application servers. The specifics on which load balancer to use or its exact configuration are beyond the scope of the GitLab documentation. We hope that if you're managing multi-node systems like GitLab you already have a load balancer of choice. Some examples include HAProxy (open-source), F5 Big-IP LTM, and Citrix NetScaler. This documentation outlines which ports and protocols you need to use with GitLab.
The next question is how you will handle SSL in your environment. There are several different options:
- The application node terminates SSL.
- The load balancer(s) terminate SSL without backend SSL, and communication is not secure between the load balancer(s) and the application node.
- The load balancer(s) terminate SSL with backend SSL, and communication is secure between the load balancer(s) and the application node.
Application node terminates SSL
Configure your load balancer(s) to pass connections on port 443 as TCP rather than HTTP(S). This will pass the connection to the application node's NGINX service untouched. NGINX will have the SSL certificate and listen on port 443.
See the NGINX HTTPS documentation for details on managing SSL certificates and configuring NGINX.
Load balancer terminates SSL without backend SSL
Configure your load balancer(s) to use the HTTP(S) protocol rather than TCP. The load balancer(s) will then be responsible for managing SSL certificates and terminating SSL.
Since communication between the load balancer(s) and GitLab will not be secure, some additional configuration is needed. See the NGINX proxied SSL documentation for details.
Load balancer terminates SSL with backend SSL
Configure your load balancer(s) to use the ‘HTTP(S)’ protocol rather than ‘TCP’. The load balancer(s) will be responsible for managing SSL certificates that end users will see.
Traffic will also be secure between the load balancer(s) and NGINX in this scenario. There is no need to add configuration for proxied SSL since the connection will be secure all the way. However, configuration will need to be added to GitLab to configure SSL certificates. See the NGINX HTTPS documentation for details on managing SSL certificates and configuring NGINX.
Ports
The basic ports to be used are shown in the table below.
LB Port | Backend Port | Protocol |
---|---|---|
80 | 80 | HTTP (1) |
443 | 443 | TCP or HTTPS (1) (2) |
22 | 22 | TCP |
- (1): Web terminal support requires your load balancer to correctly handle WebSocket connections. When using HTTP or HTTPS proxying, this means your load balancer must be configured to pass through the Connection and Upgrade hop-by-hop headers. See the web terminal integration guide for more details.
- (2): When using the HTTPS protocol for port 443, you will need to add an SSL certificate to the load balancers. If you wish to terminate SSL at the GitLab application server instead, use the TCP protocol.
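For illustration only, a minimal HAProxy sketch for the basic ports above could look like the following. It assumes SSL is terminated at the application nodes (plain TCP passthrough, which also passes WebSocket traffic through untouched) and uses the example application node IPs 10.6.0.41-10.6.0.43 from the list above; adapt names, checks, and timeouts to your own environment and load balancer:
frontend gitlab-http-in
    bind *:80
    mode tcp
    default_backend gitlab-http
frontend gitlab-https-in
    bind *:443
    mode tcp
    default_backend gitlab-https
frontend gitlab-ssh-in
    bind *:22
    mode tcp
    default_backend gitlab-ssh
backend gitlab-http
    mode tcp
    server app1 10.6.0.41:80 check
    server app2 10.6.0.42:80 check
    server app3 10.6.0.43:80 check
backend gitlab-https
    mode tcp
    server app1 10.6.0.41:443 check
    server app2 10.6.0.42:443 check
    server app3 10.6.0.43:443 check
backend gitlab-ssh
    mode tcp
    server app1 10.6.0.41:22 check
    server app2 10.6.0.42:22 check
    server app3 10.6.0.43:22 check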
If you're using GitLab Pages with custom domain support you will need some additional port configurations. GitLab Pages requires a separate virtual IP address. Configure DNS to point the pages_external_url from /etc/gitlab/gitlab.rb at the new virtual IP address. See the GitLab Pages documentation for more information.
LB Port | Backend Port | Protocol |
---|---|---|
80 | Varies (1) | HTTP |
443 | Varies (1) | TCP (2) |
- (1): The backend port for GitLab Pages depends on the gitlab_pages['external_http'] and gitlab_pages['external_https'] settings. See the GitLab Pages documentation for more details.
- (2): Port 443 for GitLab Pages should always use the TCP protocol. Users can configure custom domains with custom SSL, which would not be possible if SSL was terminated at the load balancer.
Alternate SSH Port
Some organizations have policies against opening SSH port 22. In this case, it may be helpful to configure an alternate SSH hostname that allows users to use SSH on port 443. An alternate SSH hostname will require a new virtual IP address compared to the other GitLab HTTP configuration above.
Configure DNS for an alternate SSH hostname such as altssh.gitlab.example.com.
LB Port | Backend Port | Protocol |
---|---|---|
443 | 22 | TCP |
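As a sketch only, the corresponding HAProxy listener for the alternate SSH hostname could look like this, assuming the alternate SSH virtual IP is bound on the load balancer (the <ALTSSH_VIP> placeholder and backend names are illustrative) and reusing the example application node IPs:
frontend altssh-in
    bind <ALTSSH_VIP>:443
    mode tcp
    default_backend gitlab-altssh
backend gitlab-altssh
    mode tcp
    server app1 10.6.0.41:22 check
    server app2 10.6.0.42:22 check
    server app3 10.6.0.43:22 check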
Configure Redis
Using Redis in a scalable environment is possible using a Primary x Replica topology with a Redis Sentinel service to watch and automatically start the failover procedure.
Redis requires authentication if used with Sentinel. See the Redis Security documentation for more information. We recommend using a combination of a Redis password and tight firewall rules to secure your Redis service. You are highly encouraged to read the Redis Sentinel documentation before configuring Redis with GitLab, to fully understand the topology and architecture.
In this section, you'll be guided through configuring an external Redis instance to be used with GitLab. The following IPs will be used as an example:
- 10.6.0.61: Redis primary
- 10.6.0.62: Redis replica 1
- 10.6.0.63: Redis replica 2
Provide your own Redis instance
A managed Redis service from a cloud provider, such as AWS ElastiCache, will work. If these services support high availability, be sure it is not of the Redis Cluster type.
Redis version 5.0 or higher is required, as this is what ships with Omnibus GitLab packages starting with GitLab 13.0. Older Redis versions do not support an optional count argument to SPOP, which is now required for Merge Trains.
Note the Redis node's IP address or hostname, port, and password (if required). These will be necessary later when configuring the GitLab application servers.
Standalone Redis using Omnibus GitLab
This is the section where we install and set up the new Redis instances.
The requirements for a Redis setup are the following:
- All Redis nodes must be able to talk to each other and accept incoming connections over the Redis (6379) and Sentinel (26379) ports (unless you change the default ones).
- The server that hosts the GitLab application must be able to access the Redis nodes.
- Protect the nodes from access from external networks (the Internet) using a firewall.
NOTE: Redis nodes (both primary and replica) need the same password defined in redis['password']. At any time during a failover, the Sentinels can reconfigure a node and change its status from primary to replica and vice versa.
Configuring the primary Redis instance
- SSH into the primary Redis server.
- Download/install the Omnibus GitLab package you want using steps 1 and 2 from the GitLab downloads page.
- Make sure you select the correct Omnibus package, with the same version and type (Community or Enterprise Edition) as your current install.
- Do not complete any other steps on the download page.
Edit /etc/gitlab/gitlab.rb and add the contents:
# Specify server role as 'redis_master_role'
roles ['redis_master_role']
# IP address pointing to a local IP that the other machines can reach.
# You can also set bind to '0.0.0.0' which listens on all interfaces.
# If you really need to bind to an externally accessible IP, make
# sure you add extra firewall rules to prevent unauthorized access.
redis['bind'] = '10.6.0.61'
# Define a port so Redis can listen for TCP requests which will allow other
# machines to connect to it.
redis['port'] = 6379
# Set up password authentication for Redis (use the same password in all nodes).
redis['password'] = 'redis-password-goes-here'
## Enable service discovery for Prometheus
consul['enable'] = true
consul['monitoring_service_discovery'] = true
## The IPs of the Consul server nodes
## You can also use FQDNs and intermix them with IPs
consul['configuration'] = {
retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
}
# Set the network addresses that the exporters will listen on
node_exporter['listen_address'] = '0.0.0.0:9100'
redis_exporter['listen_address'] = '0.0.0.0:9121'
redis_exporter['flags'] = {
'redis.addr' => 'redis://10.6.0.61:6379',
'redis.password' => 'redis-password-goes-here',
}
# Disable auto migrations
gitlab_rails['auto_migrate'] = false
Reconfigure Omnibus GitLab for the changes to take effect.
NOTE: You can specify multiple roles, like Sentinel and Redis, as: roles ['redis_sentinel_role', 'redis_master_role']. Read more about roles.
You can list the current Redis primary/replica status via:
/opt/gitlab/embedded/bin/redis-cli -h <host> -a 'redis-password-goes-here' info replication
Show the running GitLab services via:
gitlab-ctl status
The output should be similar to the following:
run: consul: (pid 30043) 76863s; run: log: (pid 29691) 76892s
run: logrotate: (pid 31152) 3070s; run: log: (pid 29595) 76908s
run: node-exporter: (pid 30064) 76862s; run: log: (pid 29624) 76904s
run: redis: (pid 30070) 76861s; run: log: (pid 29573) 76914s
run: redis-exporter: (pid 30075) 76861s; run: log: (pid 29674) 76896s
Configuring the replica Redis instances
- SSH into the replica Redis server.
- Download/install the Omnibus GitLab package you want using steps 1 and 2 from the GitLab downloads page.
- Make sure you select the correct Omnibus package, with the same version and type (Community or Enterprise Edition) as your current install.
- Do not complete any other steps on the download page.
Edit /etc/gitlab/gitlab.rb and add the contents:
# Specify server role as 'redis_replica_role'
roles ['redis_replica_role']
# IP address pointing to a local IP that the other machines can reach.
# You can also set bind to '0.0.0.0' which listens on all interfaces.
# If you really need to bind to an externally accessible IP, make
# sure you add extra firewall rules to prevent unauthorized access.
redis['bind'] = '10.6.0.62'
# Define a port so Redis can listen for TCP requests which will allow other
# machines to connect to it.
redis['port'] = 6379
# The same password for Redis authentication you set up for the primary node.
redis['password'] = 'redis-password-goes-here'
# The IP of the primary Redis node.
redis['master_ip'] = '10.6.0.61'
# Port of primary Redis server, uncomment to change to non default. Defaults
# to `6379`.
#redis['master_port'] = 6379
## Enable service discovery for Prometheus
consul['enable'] = true
consul['monitoring_service_discovery'] = true
## The IPs of the Consul server nodes
## You can also use FQDNs and intermix them with IPs
consul['configuration'] = {
retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
}
# Set the network addresses that the exporters will listen on
node_exporter['listen_address'] = '0.0.0.0:9100'
redis_exporter['listen_address'] = '0.0.0.0:9121'
redis_exporter['flags'] = {
'redis.addr' => 'redis://10.6.0.62:6379',
'redis.password' => 'redis-password-goes-here',
}
# Disable auto migrations
gitlab_rails['auto_migrate'] = false
Reconfigure Omnibus GitLab for the changes to take effect.
- Go through the steps again for all the other replica nodes, and make sure to set up the IPs correctly.
NOTE: You can specify multiple roles, like Sentinel and Redis, as: roles ['redis_sentinel_role', 'redis_master_role']. Read more about roles.
These values don't have to be changed again in /etc/gitlab/gitlab.rb after a failover, as the nodes will be managed by the Sentinels, and even after a gitlab-ctl reconfigure, they will get their configuration restored by the same Sentinels.
Advanced configuration options are supported and can be added if needed.
Configure Consul and Sentinel
NOTE: If you are using an external Redis Sentinel instance, be sure to exclude the requirepass parameter from the Sentinel configuration. This parameter causes clients to report NOAUTH Authentication required. Redis Sentinel 3.2.x does not support password authentication.
Now that the Redis servers are all set up, let's configure the Sentinel servers. The following IPs will be used as an example:
- 10.6.0.11: Consul/Sentinel 1
- 10.6.0.12: Consul/Sentinel 2
- 10.6.0.13: Consul/Sentinel 3
To configure the Sentinel:
- SSH into the server that will host Consul/Sentinel.
- Download/install the Omnibus GitLab Enterprise Edition package using steps 1 and 2 from the GitLab downloads page.
- Make sure you select the correct Omnibus package, with the same version the GitLab application is running.
- Do not complete any other steps on the download page.
Edit /etc/gitlab/gitlab.rb and add the contents:
roles ['redis_sentinel_role', 'consul_role']
# Must be the same in every sentinel node
redis['master_name'] = 'gitlab-redis'
# The same password for Redis authentication you set up for the primary node.
redis['master_password'] = 'redis-password-goes-here'
# The IP of the primary Redis node.
redis['master_ip'] = '10.6.0.61'
# Define a port so Redis can listen for TCP requests which will allow other
# machines to connect to it.
redis['port'] = 6379
# Port of primary Redis server, uncomment to change to non default. Defaults
# to `6379`.
#redis['master_port'] = 6379
## Configure Sentinel
sentinel['bind'] = '10.6.0.11'
# Port that Sentinel listens on, uncomment to change to non default. Defaults
# to `26379`.
# sentinel['port'] = 26379
## Quorum must reflect the number of voting Sentinels it takes to start a failover.
## Value must NOT be greater than the number of Sentinels.
##
## The quorum can be used to tune Sentinel in two ways:
## 1. If the quorum is set to a value smaller than the majority of Sentinels
## we deploy, we are basically making Sentinel more sensitive to primary failures,
## triggering a failover as soon as even just a minority of Sentinels is no longer
## able to talk with the primary.
## 2. If the quorum is set to a value greater than the majority of Sentinels, we are
## making Sentinel able to failover only when there are a very large number (larger
## than majority) of well connected Sentinels which agree about the primary being down.
sentinel['quorum'] = 2
## Consider unresponsive server down after x amount of ms.
# sentinel['down_after_milliseconds'] = 10000
## Specifies the failover timeout in milliseconds. It is used in many ways:
##
## - The time needed to re-start a failover after a previous failover was
## already tried against the same primary by a given Sentinel, is two
## times the failover timeout.
##
## - The time needed for a replica replicating to a wrong primary according
## to a Sentinel current configuration, to be forced to replicate
## with the right primary, is exactly the failover timeout (counting since
## the moment a Sentinel detected the misconfiguration).
##
## - The time needed to cancel a failover that is already in progress but
## did not produce any configuration change (REPLICAOF NO ONE yet not
## acknowledged by the promoted replica).
##
## - The maximum time a failover in progress waits for all the replica to be
## reconfigured as replicas of the new primary. However even after this time
## the replicas will be reconfigured by the Sentinels anyway, but not with
## the exact parallel-syncs progression as specified.
# sentinel['failover_timeout'] = 60000
## Enable service discovery for Prometheus
consul['enable'] = true
consul['monitoring_service_discovery'] = true
## The IPs of the Consul server nodes
## You can also use FQDNs and intermix them with IPs
consul['configuration'] = {
server: true,
retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
}
# Set the network addresses that the exporters will listen on
node_exporter['listen_address'] = '0.0.0.0:9100'
redis_exporter['listen_address'] = '0.0.0.0:9121'
# Disable auto migrations
gitlab_rails['auto_migrate'] = false
Reconfigure Omnibus GitLab for the changes to take effect.
- Go through the steps again for all the other Consul/Sentinel nodes, and make sure you set up the correct IPs.
NOTE: A Consul leader will be elected when the provisioning of the third Consul server is complete. Viewing the Consul logs with sudo gitlab-ctl tail consul will display ...[INFO] consul: New leader elected: ...
You can list the current Consul members (server, client):
sudo /opt/gitlab/embedded/bin/consul members
You can verify the GitLab services are running:
sudo gitlab-ctl status
The output should be similar to the following:
run: consul: (pid 30074) 76834s; run: log: (pid 29740) 76844s
run: logrotate: (pid 30925) 3041s; run: log: (pid 29649) 76861s
run: node-exporter: (pid 30093) 76833s; run: log: (pid 29663) 76855s
run: sentinel: (pid 30098) 76832s; run: log: (pid 29704) 76850s
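Optionally, you can also ask one of the Sentinels directly which node it currently considers the primary. A quick check, using the example IPs and master name from above, might be:
/opt/gitlab/embedded/bin/redis-cli -h 10.6.0.11 -p 26379 SENTINEL get-master-addr-by-name gitlab-redis
If everything is healthy, this should print the primary Redis node's address and port, in this example 10.6.0.61 and 6379.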
Configure PostgreSQL
In this section, you'll be guided through configuring an external PostgreSQL database to be used with GitLab.
Provide your own PostgreSQL instance
If you're hosting GitLab on a cloud provider, you can optionally use a managed service for PostgreSQL. For example, AWS offers a managed Relational Database Service (RDS) that runs PostgreSQL.
If you use a cloud-managed service, or provide your own PostgreSQL:
- Set up PostgreSQL according to the database requirements document.
- Set up a gitlab username with a password of your choice. The gitlab user needs privileges to create the gitlabhq_production database.
- Configure the GitLab application servers with the appropriate details. This step is covered in Configure GitLab Rails.
Standalone PostgreSQL using Omnibus GitLab
The following IPs will be used as an example:
- 10.6.0.31: PostgreSQL primary
- 10.6.0.32: PostgreSQL secondary 1
- 10.6.0.33: PostgreSQL secondary 2
First, make sure to install the Linux GitLab package on each node. Following the steps, install the necessary dependencies from step 1, and add the GitLab package repository from step 2. When installing GitLab in the second step, do not supply the EXTERNAL_URL value.
PostgreSQL primary node
- SSH into the PostgreSQL primary node.
Generate a password hash for the PostgreSQL username/password pair. This assumes you will use the default username of gitlab (recommended). The command will request a password and confirmation. Use the value that is output by this command in the next step as the value of <postgresql_password_hash>:
sudo gitlab-ctl pg-password-md5 gitlab
Generate a password hash for the PgBouncer username/password pair. This assumes you will use the default username of pgbouncer (recommended). The command will request a password and confirmation. Use the value that is output by this command in the next step as the value of <pgbouncer_password_hash>:
sudo gitlab-ctl pg-password-md5 pgbouncer
Generate a password hash for the Consul database username/password pair. This assumes you will use the default username of gitlab-consul (recommended). The command will request a password and confirmation. Use the value that is output by this command in the next step as the value of <consul_password_hash>:
sudo gitlab-ctl pg-password-md5 gitlab-consul
On the primary database node, edit /etc/gitlab/gitlab.rb replacing the values noted in the # START user configuration section:
# Disable all components except PostgreSQL and Repmgr and Consul
roles ['postgres_role']
# PostgreSQL configuration
postgresql['listen_address'] = '0.0.0.0'
postgresql['hot_standby'] = 'on'
postgresql['wal_level'] = 'replica'
postgresql['shared_preload_libraries'] = 'repmgr_funcs'
# Disable automatic database migrations
gitlab_rails['auto_migrate'] = false
# Configure the Consul agent
consul['services'] = %w(postgresql)
# START user configuration
# Please set the real values as explained in Required Information section
#
# Replace PGBOUNCER_PASSWORD_HASH with a generated md5 value
postgresql['pgbouncer_user_password'] = '<pgbouncer_password_hash>'
# Replace POSTGRESQL_PASSWORD_HASH with a generated md5 value
postgresql['sql_user_password'] = '<postgresql_password_hash>'
# Set `max_wal_senders` to one more than the number of database nodes in the cluster.
# This is used to prevent replication from using up all of the
# available database connections.
postgresql['max_wal_senders'] = 4
postgresql['max_replication_slots'] = 4
# Replace XXX.XXX.XXX.XXX/YY with Network Address
postgresql['trust_auth_cidr_addresses'] = %w(127.0.0.1/32 10.6.0.0/24)
repmgr['trust_auth_cidr_addresses'] = %w(127.0.0.1/32 10.6.0.0/24)
## Enable service discovery for Prometheus
consul['enable'] = true
consul['monitoring_service_discovery'] = true
# Set the network addresses that the exporters will listen on for monitoring
node_exporter['listen_address'] = '0.0.0.0:9100'
postgres_exporter['listen_address'] = '0.0.0.0:9187'
postgres_exporter['dbname'] = 'gitlabhq_production'
postgres_exporter['password'] = '<postgresql_password_hash>'
## The IPs of the Consul server nodes
## You can also use FQDNs and intermix them with IPs
consul['configuration'] = {
retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
}
#
# END user configuration
Reconfigure GitLab for the changes to take effect.
You can list the current PostgreSQL primary and secondary node status via:
sudo /opt/gitlab/bin/gitlab-ctl repmgr cluster show
Verify the GitLab services are running:
sudo gitlab-ctl status
The output should be similar to the following:
run: consul: (pid 30593) 77133s; run: log: (pid 29912) 77156s
run: logrotate: (pid 23449) 3341s; run: log: (pid 29794) 77175s
run: node-exporter: (pid 30613) 77133s; run: log: (pid 29824) 77170s
run: postgres-exporter: (pid 30620) 77132s; run: log: (pid 29894) 77163s
run: postgresql: (pid 30630) 77132s; run: log: (pid 29618) 77181s
run: repmgrd: (pid 30639) 77132s; run: log: (pid 29985) 77150s
PostgreSQL secondary nodes
On both of the secondary nodes, add the same configuration specified above for the primary node, with an additional setting that will inform gitlab-ctl that they are standby nodes initially and that there's no need to attempt to register them as a primary node:
# Disable all components except PostgreSQL and Repmgr and Consul
roles ['postgres_role']
# PostgreSQL configuration
postgresql['listen_address'] = '0.0.0.0'
postgresql['hot_standby'] = 'on'
postgresql['wal_level'] = 'replica'
postgresql['shared_preload_libraries'] = 'repmgr_funcs'
# Disable automatic database migrations
gitlab_rails['auto_migrate'] = false
# Configure the Consul agent
consul['services'] = %w(postgresql)
# Specify if a node should attempt to be primary on initialization.
repmgr['master_on_initialization'] = false
# START user configuration
# Please set the real values as explained in Required Information section
#
# Replace PGBOUNCER_PASSWORD_HASH with a generated md5 value
postgresql['pgbouncer_user_password'] = '<pgbouncer_password_hash>'
# Replace POSTGRESQL_PASSWORD_HASH with a generated md5 value
postgresql['sql_user_password'] = '<postgresql_password_hash>'
# Set `max_wal_senders` to one more than the number of database nodes in the cluster.
# This is used to prevent replication from using up all of the
# available database connections.
postgresql['max_wal_senders'] = 4
postgresql['max_replication_slots'] = 4
# Replace XXX.XXX.XXX.XXX/YY with Network Address
postgresql['trust_auth_cidr_addresses'] = %w(127.0.0.1/32 10.6.0.0/24)
repmgr['trust_auth_cidr_addresses'] = %w(127.0.0.1/32 10.6.0.0/24)
## Enable service discovery for Prometheus
consul['enable'] = true
consul['monitoring_service_discovery'] = true
# Set the network addresses that the exporters will listen on for monitoring
node_exporter['listen_address'] = '0.0.0.0:9100'
postgres_exporter['listen_address'] = '0.0.0.0:9187'
postgres_exporter['dbname'] = 'gitlabhq_production'
postgres_exporter['password'] = '<postgresql_password_hash>'
## The IPs of the Consul server nodes
## You can also use FQDNs and intermix them with IPs
consul['configuration'] = {
retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
}
# END user configuration
Reconfigure GitLab for the changes to take effect.
Advanced configuration options are supported and can be added if needed.
PostgreSQL post-configuration
SSH into the primary node:
Open a database prompt:
gitlab-psql -d gitlabhq_production
Enable the pg_trgm extension:
CREATE EXTENSION pg_trgm;
Exit the database prompt by typing \q and Enter.
Verify the cluster is initialized with one node:
gitlab-ctl repmgr cluster show
The output should be similar to the following:
Role | Name | Upstream | Connection String
----------+----------|----------|----------------------------------------
* master | HOSTNAME | | host=HOSTNAME user=gitlab_repmgr dbname=gitlab_repmgr
Note down the hostname or IP address in the connection string: host=HOSTNAME. We will refer to the hostname in the next section as <primary_node_name>. If the value is not an IP address, it must be a resolvable name (via DNS or /etc/hosts).
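If you rely on /etc/hosts rather than DNS, an entry on each database and PgBouncer node might look roughly like the following, where <primary_node_name> is a placeholder for your actual primary node name and 10.6.0.31 is the example primary IP from above:
# /etc/hosts
10.6.0.31   <primary_node_name>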
SSH into the secondary node:
Set up the repmgr standby:
gitlab-ctl repmgr standby setup <primary_node_name>
Do note that this will remove the existing data on the node. The command has a wait time.
The output should be similar to the following:
Doing this will delete the entire contents of /var/opt/gitlab/postgresql/data
If this is not what you want, hit Ctrl-C now to exit
To skip waiting, rerun with the -w option
Sleeping for 30 seconds
Stopping the database
Removing the data
Cloning the data
Starting the database
Registering the node with the cluster
ok: run: repmgrd: (pid 19068) 0s
Before proceeding, make sure the databases are configured correctly. Run the following command on the primary node to verify that replication is working properly and the secondary nodes appear in the cluster:
gitlab-ctl repmgr cluster show
The output should be similar to the following:
Role | Name | Upstream | Connection String
----------+---------|-----------|------------------------------------------------
* master | MASTER | | host=<primary_node_name> user=gitlab_repmgr dbname=gitlab_repmgr
standby | STANDBY | MASTER | host=<secondary_node_name> user=gitlab_repmgr dbname=gitlab_repmgr
standby | STANDBY | MASTER | host=<secondary_node_name> user=gitlab_repmgr dbname=gitlab_repmgr
If the 'Role' column for any node says 'FAILED', check the Troubleshooting section before proceeding.
Also, check that the repmgr-check-master command works successfully on each node:
su - gitlab-consul
gitlab-ctl repmgr-check-master || echo 'This node is a standby repmgr node'
This command relies on exit codes to tell Consul whether a particular node is a master or a secondary. The most important thing here is that the command does not produce errors. If there are errors, it's most likely due to incorrect gitlab-consul database user permissions. Check the Troubleshooting section before proceeding.
Configure PgBouncer
Now that the PostgreSQL servers are all set up, let's configure PgBouncer. The following IPs will be used as an example:
- 10.6.0.21: PgBouncer 1
- 10.6.0.22: PgBouncer 2
- 10.6.0.23: PgBouncer 3
On each PgBouncer node, edit /etc/gitlab/gitlab.rb, replacing <consul_password_hash> and <pgbouncer_password_hash> with the password hashes you set up previously:
# Disable all components except Pgbouncer and Consul agent
roles ['pgbouncer_role']
# Configure PgBouncer
pgbouncer['admin_users'] = %w(pgbouncer gitlab-consul)
pgbouncer['users'] = {
'gitlab-consul': {
password: '<consul_password_hash>'
},
'pgbouncer': {
password: '<pgbouncer_password_hash>'
}
}
# Configure Consul agent
consul['watchers'] = %w(postgresql)
consul['enable'] = true
consul['configuration'] = {
retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13)
}
# Enable service discovery for Prometheus
consul['monitoring_service_discovery'] = true
# Set the network addresses that the exporters will listen on
node_exporter['listen_address'] = '0.0.0.0:9100'
pgbouncer_exporter['listen_address'] = '0.0.0.0:9188'
Reconfigure Omnibus GitLab for the changes to take effect.
Create a .pgpass file so Consul is able to reload PgBouncer. Enter the PgBouncer password twice when asked:
gitlab-ctl write-pgpass --host 127.0.0.1 --database pgbouncer --user pgbouncer --hostuser gitlab-consul
Ensure each node is talking to the current primary:
gitlab-ctl pgb-console # You will be prompted for PGBOUNCER_PASSWORD
If you get the error psql: ERROR: Auth failed after typing in the password, make sure you previously generated the MD5 password hashes in the correct format. The correct format is the concatenation of the password and username: PASSWORDUSERNAME. For example, Sup3rS3cr3tpgbouncer would be the text needed to generate an MD5 password hash for the pgbouncer user.
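For instance, assuming a Linux shell with md5sum available, you could inspect the raw digest of that concatenation by hand (gitlab-ctl pg-password-md5 remains the supported way to generate these values, and note that PostgreSQL-style hashes are typically stored with an md5 prefix, so compare accordingly):
echo -n 'Sup3rS3cr3tpgbouncer' | md5sum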
Once the console prompt is available, run the following queries:
show databases ; show clients ;
The output should be similar to the following:
name | host | port | database | force_user | pool_size | reserve_pool | pool_mode | max_connections | current_connections
---------------------+-------------+------+---------------------+------------+-----------+--------------+-----------+-----------------+---------------------
gitlabhq_production | MASTER_HOST | 5432 | gitlabhq_production | | 20 | 0 | | 0 | 0
pgbouncer | | 6432 | pgbouncer | pgbouncer | 2 | 0 | statement | 0 | 0
(2 rows)
type | user | database | state | addr | port | local_addr | local_port | connect_time | request_time | ptr | link | remote_pid | tls
------+-----------+---------------------+---------+----------------+-------+------------+------------+---------------------+---------------------+-----------+------+------------+-----
C | pgbouncer | pgbouncer | active | 127.0.0.1 | 56846 | 127.0.0.1 | 6432 | 2017-08-21 18:09:59 | 2017-08-21 18:10:48 | 0x22b3880 | | 0 |
(2 rows)
Verify the GitLab services are running:
sudo gitlab-ctl status
The output should be similar to the following:
run: consul: (pid 31530) 77150s; run: log: (pid 31106) 77182s
run: logrotate: (pid 32613) 3357s; run: log: (pid 30107) 77500s
run: node-exporter: (pid 31550) 77149s; run: log: (pid 30138) 77493s
run: pgbouncer: (pid 32033) 75593s; run: log: (pid 31117) 77175s
run: pgbouncer-exporter: (pid 31558) 77148s; run: log: (pid 31498) 77156s
Configure the internal load balancer
If you're running more than one PgBouncer node as recommended, then at this time you'll need to set up a TCP internal load balancer to serve each correctly.
The following IP will be used as an example:
- 10.6.0.20: internal load balancer
Here's how you could do it with HAProxy:
global
    log /dev/log local0
    log localhost local1 notice
    log stdout format raw local0
defaults
    log global
    default-server inter 10s fall 3 rise 2
    balance leastconn
frontend internal-pgbouncer-tcp-in
    bind *:6432
    mode tcp
    option tcplog
    default_backend pgbouncer
backend pgbouncer
    mode tcp
    option tcp-check
    server pgbouncer1 10.6.0.21:6432 check
    server pgbouncer2 10.6.0.22:6432 check
    server pgbouncer3 10.6.0.23:6432 check
Refer to your preferred load balancer's documentation for further guidance.
Configure Gitaly
Deploying Gitaly on its own server can benefit GitLab installations that are larger than a single machine.
The Gitaly node requirements are dependent on customer data, specifically the number of projects and their repository sizes. Two nodes are recommended as an absolute minimum. Each Gitaly node should store no more than 5TB of data and have the number of gitaly-ruby workers set to 20% of the available CPUs. Additional nodes should be considered in conjunction with a review of expected data size and spread, based on these recommendations.
It is also strongly recommended that all Gitaly nodes be set up with SSD disks with a throughput of at least 8,000 IOPS for read operations and 2,000 IOPS for write operations, as Gitaly has heavy I/O. These IOPS values are recommended only as a starting point, as with time they may be adjusted higher or lower depending on the scale of your environment's workload. If you're running the environment on a cloud provider, you may need to refer to their documentation on how to configure IOPS correctly.
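As a rough example of the worker sizing above: on the 8 vCPU Gitaly nodes in this architecture, 20% of the available CPUs works out to about 2 gitaly-ruby workers. Assuming the Omnibus gitaly['ruby_num_workers'] setting (check the Gitaly documentation for your version to confirm the attribute name), that could be expressed in /etc/gitlab/gitlab.rb as:
# Roughly 20% of 8 vCPUs; adjust to your node size and workload.
gitaly['ruby_num_workers'] = 2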
Some things to note:
- The GitLab Rails application shards repositories into repository storages.
- A Gitaly server can host one or more storages.
- A GitLab server can use one or more Gitaly servers.
- Gitaly addresses must be specified in such a way that they resolve correctly for ALL Gitaly clients.
- Gitaly servers must not be exposed to the public internet, as Gitaly's network traffic is unencrypted by default. The use of a firewall is highly recommended to restrict access to the Gitaly server. Another option is to use TLS.
TIP: For more information about Gitaly's history and network architecture, see the standalone Gitaly documentation.
NOTE: The token referred to throughout the Gitaly documentation is just an arbitrary password selected by the administrator. It is unrelated to tokens created for the GitLab API or other similar web API tokens.
Below we describe how to configure two Gitaly servers, with the following IPs and domain names:
- 10.6.0.51: Gitaly 1 (gitaly1.internal)
- 10.6.0.52: Gitaly 2 (gitaly2.internal)
The secret token is assumed to be gitalysecret, and that your GitLab installation has three repository storages:
- default on Gitaly 1
- storage1 on Gitaly 1
- storage2 on Gitaly 2
On each node:
- Download/install the Omnibus GitLab package you want using steps 1 and 2 from the GitLab downloads page, but do not provide the EXTERNAL_URL value.
Edit /etc/gitlab/gitlab.rb to configure the storage paths, enable the network listener, and configure the token:
# /etc/gitlab/gitlab.rb
# Gitaly and GitLab use two shared secrets for authentication, one to authenticate gRPC requests
# to Gitaly, and a second for authentication callbacks from GitLab-Shell to the GitLab internal API.
# The following two values must be the same as their respective values
# of the GitLab Rails application setup
gitaly['auth_token'] = 'gitalysecret'
gitlab_shell['secret_token'] = 'shellsecret'
# Avoid running unnecessary services on the Gitaly server
postgresql['enable'] = false
redis['enable'] = false
nginx['enable'] = false
puma['enable'] = false
unicorn['enable'] = false
sidekiq['enable'] = false
gitlab_workhorse['enable'] = false
grafana['enable'] = false
gitlab_exporter['enable'] = false
# If you run a separate monitoring node you can disable these services
alertmanager['enable'] = false
prometheus['enable'] = false
# Prevent database connections during 'gitlab-ctl reconfigure'
gitlab_rails['rake_cache_clear'] = false
gitlab_rails['auto_migrate'] = false
# Configure the gitlab-shell API callback URL. Without this, `git push` will
# fail. This can be your 'front door' GitLab URL or an internal load
# balancer.
# Don't forget to copy `/etc/gitlab/gitlab-secrets.json` from web server to Gitaly server.
gitlab_rails['internal_api_url'] = 'https://gitlab.example.com'
# Make Gitaly accept connections on all network interfaces. You must use
# firewalls to restrict access to this address/port.
# Comment out following line if you only want to support TLS connections
gitaly['listen_addr'] = "0.0.0.0:8075"
## Enable service discovery for Prometheus
consul['enable'] = true
consul['monitoring_service_discovery'] = true
# Set the network addresses that the exporters will listen on for monitoring
gitaly['prometheus_listen_addr'] = "0.0.0.0:9236"
node_exporter['listen_address'] = '0.0.0.0:9100'
gitlab_rails['prometheus_address'] = '10.6.0.81:9090'
## The IPs of the Consul server nodes
## You can also use FQDNs and intermix them with IPs
consul['configuration'] = {
retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
}
Append the following to /etc/gitlab/gitlab.rb for each respective server:
On gitaly1.internal:
git_data_dirs ({ 'default' => { 'path' => '/var/opt/gitlab/git-data' }, 'storage1' => { 'path' => '/mnt/gitlab/git-data' }, })
On gitaly2.internal:
git_data_dirs ({ 'storage2' => { 'path' => '/mnt/gitlab/git-data' }, })
Save the file and reconfigure GitLab.
Confirm that Gitaly can perform callbacks to the internal API:
sudo /opt/gitlab/embedded/service/gitlab-shell/bin/check -config /opt/gitlab/embedded/service/gitlab-shell/config.yml
Verify the GitLab services are running:
sudo gitlab-ctl status
The output should be similar to the following:
run: consul: (pid 30339) 77006s; run: log: (pid 29878) 77020s
run: gitaly: (pid 30351) 77005s; run: log: (pid 29660) 77040s
run: logrotate: (pid 7760) 3213s; run: log: (pid 29782) 77032s
run: node-exporter: (pid 30378) 77004s; run: log: (pid 29812) 77026s
Gitaly TLS support
Gitaly supports TLS encryption. To be able to communicate with a Gitaly instance that listens for secure connections, you will need to use the tls:// URL scheme in the gitaly_address of the corresponding storage entry in the GitLab configuration.
You will need to bring your own certificates, as this isn't provided automatically. The certificate, or its certificate authority, must be installed on all Gitaly nodes (including the Gitaly node using the certificate) and on all client nodes that communicate with it, following the procedure described in GitLab custom certificate configuration.
NOTE: The self-signed certificate must specify the address you use to access the Gitaly server. If you are addressing the Gitaly server by a hostname, you can either use the Common Name field for this, or add it as a Subject Alternative Name. If you are addressing the Gitaly server by its IP address, you must add it as a Subject Alternative Name to the certificate. gRPC does not support using an IP address as the Common Name in a certificate.
NOTE: It is possible to configure Gitaly servers with both an unencrypted listening address listen_addr and an encrypted listening address tls_listen_addr at the same time. This allows you to do a gradual transition from unencrypted to encrypted traffic, if necessary.
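As an illustration only, a self-signed certificate that includes both the hostname and the IP address as Subject Alternative Names could be generated roughly as follows, using gitaly1.internal and 10.6.0.51 from the example environment (the -addext flag requires OpenSSL 1.1.1 or later; the resulting key.pem and cert.pem match the file names used in the steps below):
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout key.pem -out cert.pem \
  -subj "/CN=gitaly1.internal" \
  -addext "subjectAltName=DNS:gitaly1.internal,IP:10.6.0.51"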
To configure Gitaly with TLS:
Create the /etc/gitlab/ssl directory and copy your key and certificate there:
sudo mkdir -p /etc/gitlab/ssl
sudo chmod 755 /etc/gitlab/ssl
sudo cp key.pem cert.pem /etc/gitlab/ssl/
sudo chmod 644 key.pem cert.pem
Copy the cert to /etc/gitlab/trusted-certs so Gitaly will trust the cert when calling into itself:
sudo cp /etc/gitlab/ssl/cert.pem /etc/gitlab/trusted-certs/
Edit /etc/gitlab/gitlab.rb and add:
gitaly['tls_listen_addr'] = "0.0.0.0:9999"
gitaly['certificate_path'] = "/etc/gitlab/ssl/cert.pem"
gitaly['key_path'] = "/etc/gitlab/ssl/key.pem"
Delete gitaly['listen_addr'] to allow only encrypted connections.
- Save the file and reconfigure GitLab.
Configure Sidekiq
Sidekiq requires connection to the Redis, PostgreSQL, and Gitaly instances. The following IPs will be used as an example:
- 10.6.0.71: Sidekiq 1
- 10.6.0.72: Sidekiq 2
- 10.6.0.73: Sidekiq 3
- 10.6.0.74: Sidekiq 4
To configure the Sidekiq nodes, on each one:
- SSH into the Sidekiq server.
- Download/install the Omnibus GitLab package you want using steps 1 and 2 from the GitLab downloads page. Do not complete any other steps on the download page.
Open /etc/gitlab/gitlab.rb with your editor:
########################################
##### Services Disabled ###
########################################
nginx['enable'] = false
grafana['enable'] = false
prometheus['enable'] = false
gitlab_rails['auto_migrate'] = false
alertmanager['enable'] = false
gitaly['enable'] = false
gitlab_workhorse['enable'] = false
nginx['enable'] = false
puma['enable'] = false
postgres_exporter['enable'] = false
postgresql['enable'] = false
redis['enable'] = false
redis_exporter['enable'] = false
gitlab_exporter['enable'] = false
########################################
#### Redis ###
########################################
## Must be the same in every sentinel node
redis['master_name'] = 'gitlab-redis'
## The same password for Redis authentication you set up for the master node.
redis['master_password'] = '<redis_primary_password>'
## A list of sentinels with `host` and `port`
gitlab_rails['redis_sentinels'] = [
{'host' => '10.6.0.11', 'port' => 26379},
{'host' => '10.6.0.12', 'port' => 26379},
{'host' => '10.6.0.13', 'port' => 26379},
]
#######################################
### Gitaly ###
#######################################
git_data_dirs({
'default' => { 'gitaly_address' => 'tcp://gitaly1.internal:8075' },
'storage1' => { 'gitaly_address' => 'tcp://gitaly1.internal:8075' },
'storage2' => { 'gitaly_address' => 'tcp://gitaly2.internal:8075' },
})
gitlab_rails['gitaly_token'] = 'YOUR_TOKEN'
#######################################
### Postgres ###
#######################################
gitlab_rails['db_host'] = '10.6.0.20' # internal load balancer IP
gitlab_rails['db_port'] = 6432
gitlab_rails['db_password'] = '<postgresql_user_password>'
gitlab_rails['db_adapter'] = 'postgresql'
gitlab_rails['db_encoding'] = 'unicode'
gitlab_rails['auto_migrate'] = false
#######################################
### Sidekiq configuration ###
#######################################
sidekiq['listen_address'] = "0.0.0.0"
#######################################
### Monitoring configuration ###
#######################################
consul['enable'] = true
consul['monitoring_service_discovery'] = true
consul['configuration'] = {
retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13)
}
# Set the network addresses that the exporters will listen on
node_exporter['listen_address'] = '0.0.0.0:9100'
# Rails Status for prometheus
gitlab_rails['monitoring_whitelist'] = ['10.6.0.81/32', '127.0.0.0/8']
gitlab_rails['prometheus_address'] = '10.6.0.81:9090'
Save the file and reconfigure GitLab.
Verify the GitLab services are running:
sudo gitlab-ctl status
The output should be similar to the following:
run: consul: (pid 30114) 77353s; run: log: (pid 29756) 77367s
run: logrotate: (pid 9898) 3561s; run: log: (pid 29653) 77380s
run: node-exporter: (pid 30134) 77353s; run: log: (pid 29706) 77372s
run: sidekiq: (pid 30142) 77351s; run: log: (pid 29638) 77386s
TIP: You can also run multiple Sidekiq processes.
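For instance, a sketch of running four Sidekiq processes on each Sidekiq node, assuming the Omnibus sidekiq['queue_groups'] setting described in the multiple Sidekiq processes documentation, would be to add the following to /etc/gitlab/gitlab.rb and reconfigure:
# One entry per Sidekiq process; '*' means the process works all queues.
sidekiq['queue_groups'] = ['*'] * 4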
Configure GitLab Rails
NOTE: In our architectures we run each GitLab Rails node using the Puma webserver, with its worker count set to 90% of the available CPUs along with four threads. For nodes that are running Rails alongside other components, the worker value should be reduced accordingly; we've found 50% achieves a good balance, but this is dependent on workload.
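As a concrete sketch for the 16 vCPU Rails nodes in this architecture, 90% of the available CPUs is roughly 14 workers. If you want to set these values explicitly in /etc/gitlab/gitlab.rb rather than rely on the defaults, the settings would look something like:
# ~90% of 16 vCPUs, with 4 threads per worker
puma['worker_processes'] = 14
puma['min_threads'] = 4
puma['max_threads'] = 4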
This section describes how to configure the GitLab application (Rails) component. On each node perform the following:
If you're using NFS:
If necessary, install the NFS client utility packages using the following commands:
# Ubuntu/Debian
apt-get install nfs-common
# CentOS/Red Hat
yum install nfs-utils nfs-utils-lib
Specify the necessary NFS mounts in /etc/fstab (a hypothetical example is sketched after the directory-creation command below). The exact contents of /etc/fstab depend on how you chose to configure your NFS server. See the NFS documentation for examples and the various options.
Create the shared directories. These may vary depending on your NFS setup.
mkdir -p /var/opt/gitlab/.ssh /var/opt/gitlab/gitlab-rails/uploads /var/opt/gitlab/gitlab-rails/shared /var/opt/gitlab/gitlab-ci/builds /var/opt/gitlab/git-data
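For reference, /etc/fstab entries for these mounts might look roughly like the following, assuming a hypothetical NFS server reachable at 10.6.0.90 exporting /gitlab-data (paths, NFS version, and mount options are placeholders; adjust them to your actual NFS setup, and repeat the pattern for the remaining shared directories):
# /etc/fstab
10.6.0.90:/gitlab-data/git-data  /var/opt/gitlab/git-data              nfs4 defaults,hard,noatime 0 2
10.6.0.90:/gitlab-data/shared    /var/opt/gitlab/gitlab-rails/shared   nfs4 defaults,hard,noatime 0 2
10.6.0.90:/gitlab-data/uploads   /var/opt/gitlab/gitlab-rails/uploads  nfs4 defaults,hard,noatime 0 2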
Download/install Omnibus GitLab using steps 1 and 2 from the GitLab downloads page. Do not complete the other steps on the download page.
Create or edit /etc/gitlab/gitlab.rb and use the following configuration. To maintain uniformity of links across nodes, the external_url on the application server should point to the external URL that users will use to access GitLab. This would be the URL of the external load balancer, which will route traffic to the GitLab application servers:
external_url 'https://gitlab.example.com'
# Gitaly and GitLab use two shared secrets for authentication, one to authenticate gRPC requests
# to Gitaly, and a second for authentication callbacks from GitLab-Shell to the GitLab internal API.
# The following two values must be the same as their respective values
# of the Gitaly setup
gitlab_rails['gitaly_token'] = 'gitalysecret'
gitlab_shell['secret_token'] = 'shellsecret'
git_data_dirs({
'default' => { 'gitaly_address' => 'tcp://gitaly1.internal:8075' },
'storage1' => { 'gitaly_address' => 'tcp://gitaly1.internal:8075' },
'storage2' => { 'gitaly_address' => 'tcp://gitaly2.internal:8075' },
})
## Disable components that will not be on the GitLab application server
roles ['application_role']
gitaly['enable'] = false
nginx['enable'] = true
sidekiq['enable'] = false
## PostgreSQL connection details
# Disable PostgreSQL on the application node
postgresql['enable'] = false
gitlab_rails['db_host'] = '10.6.0.20' # internal load balancer IP
gitlab_rails['db_port'] = 6432
gitlab_rails['db_password'] = '<postgresql_user_password>'
gitlab_rails['auto_migrate'] = false
## Redis connection details
## Must be the same in every sentinel node
redis['master_name'] = 'gitlab-redis'
## The same password for Redis authentication you set up for the Redis primary node.
redis['master_password'] = '<redis_primary_password>'
## A list of sentinels with `host` and `port`
gitlab_rails['redis_sentinels'] = [
{'host' => '10.6.0.11', 'port' => 26379},
{'host' => '10.6.0.12', 'port' => 26379},
{'host' => '10.6.0.13', 'port' => 26379}
]
## Enable service discovery for Prometheus
consul['enable'] = true
consul['monitoring_service_discovery'] = true
# Set the network addresses that the exporters used for monitoring will listen on
node_exporter['listen_address'] = '0.0.0.0:9100'
gitlab_workhorse['prometheus_listen_addr'] = '0.0.0.0:9229'
sidekiq['listen_address'] = "0.0.0.0"
puma['listen'] = '0.0.0.0'
## The IPs of the Consul server nodes
## You can also use FQDNs and intermix them with IPs
consul['configuration'] = {
retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
}
# Add the monitoring node's IP address to the monitoring whitelist and allow it to
# scrape the NGINX metrics
gitlab_rails['monitoring_whitelist'] = ['10.6.0.81/32', '127.0.0.0/8']
nginx['status']['options']['allow'] = ['10.6.0.81/32', '127.0.0.0/8']
gitlab_rails['prometheus_address'] = '10.6.0.81:9090'
## Uncomment and edit the following options if you have set up NFS
##
## Prevent GitLab from starting if NFS data mounts are not available
##
#high_availability['mountpoint'] = '/var/opt/gitlab/git-data'
##
## Ensure UIDs and GIDs match between servers for permissions via NFS
##
#user['uid'] = 9000
#user['gid'] = 9000
#web_server['uid'] = 9001
#web_server['gid'] = 9001
#registry['uid'] = 9002
#registry['gid'] = 9002
If you're using Gitaly with TLS support, make sure the git_data_dirs entry is configured with tls instead of tcp:
git_data_dirs({
'default' => { 'gitaly_address' => 'tls://gitaly1.internal:9999' },
'storage1' => { 'gitaly_address' => 'tls://gitaly1.internal:9999' },
'storage2' => { 'gitaly_address' => 'tls://gitaly2.internal:9999' },
})
Copy the cert into /etc/gitlab/trusted-certs:
sudo cp cert.pem /etc/gitlab/trusted-certs/
Save the file and reconfigure GitLab.
- Run sudo gitlab-rake gitlab:gitaly:check to confirm the node can connect to Gitaly.
Tail the logs to see the requests:
sudo gitlab-ctl tail gitaly
Verify the GitLab services are running:
sudo gitlab-ctl status
The output should be similar to the following:
run: consul: (pid 4890) 8647s; run: log: (pid 29962) 79128s
run: gitlab-exporter: (pid 4902) 8647s; run: log: (pid 29913) 79134s
run: gitlab-workhorse: (pid 4904) 8646s; run: log: (pid 29713) 79155s
run: logrotate: (pid 12425) 1446s; run: log: (pid 29798) 79146s
run: nginx: (pid 4925) 8646s; run: log: (pid 29726) 79152s
run: node-exporter: (pid 4931) 8645s; run: log: (pid 29855) 79140s
run: puma: (pid 4936) 8645s; run: log: (pid 29656) 79161s
NOTE: When you specify https in the external_url, as in the example above, GitLab assumes you have SSL certificates in /etc/gitlab/ssl/. If certificates are not present, NGINX will fail to start. See the NGINX documentation for more information.
GitLab Rails post-configuration
Ensure that all migrations ran:
gitlab-rake gitlab:db:configure
NOTE: If you encounter a rake aborted! error stating that PgBouncer is failing to connect to PostgreSQL, it may be that your PgBouncer node's IP address is missing from PostgreSQL's trust_auth_cidr_addresses in gitlab.rb on your database nodes. See PgBouncer error ERROR: pgbouncer cannot connect to server in the Troubleshooting section before proceeding.
- Configure fast lookup of authorized SSH keys in the database.
Configure Prometheus
The Omnibus GitLab package can be used to configure a standalone Monitoring node running Prometheus and Grafana:
- SSH into the Monitoring node.
- Download/install the Omnibus GitLab package you want using steps 1 and 2 from the GitLab downloads page. Do not complete any other steps on the download page.
Edit /etc/gitlab/gitlab.rb and add the contents:
external_url 'http://gitlab.example.com'
# Disable all other services
gitlab_rails['auto_migrate'] = false
alertmanager['enable'] = false
gitaly['enable'] = false
gitlab_exporter['enable'] = false
gitlab_workhorse['enable'] = false
nginx['enable'] = true
postgres_exporter['enable'] = false
postgresql['enable'] = false
redis['enable'] = false
redis_exporter['enable'] = false
sidekiq['enable'] = false
puma['enable'] = false
unicorn['enable'] = false
node_exporter['enable'] = false
gitlab_exporter['enable'] = false
# Enable Prometheus
prometheus['enable'] = true
prometheus['listen_address'] = '0.0.0.0:9090'
prometheus['monitor_kubernetes'] = false
# Enable Login form
grafana['disable_login_form'] = false
# Enable Grafana
grafana['enable'] = true
grafana['admin_password'] = '<grafana_password>'
# Enable service discovery for Prometheus
consul['enable'] = true
consul['monitoring_service_discovery'] = true
consul['configuration'] = {
retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13)
}
Save the file and reconfigure GitLab.
- In the GitLab UI, set admin/application_settings/metrics_and_profiling > Metrics - Grafana to http[s]://<MONITOR NODE>/-/grafana.
Verify the GitLab services are running:
sudo gitlab-ctl status
The output should be similar to the following:
run: consul: (pid 31637) 17337s; run: log: (pid 29748) 78432s
run: grafana: (pid 31644) 17337s; run: log: (pid 29719) 78438s
run: logrotate: (pid 31809) 2936s; run: log: (pid 29581) 78462s
run: nginx: (pid 31665) 17335s; run: log: (pid 29556) 78468s
run: prometheus: (pid 31672) 17335s; run: log: (pid 29633) 78456s
Configure the object storage
GitLab supports using an object storage service for holding numerous types of data. It's recommended over NFS and is generally better in larger setups, as object storage is typically much more performant, reliable, and scalable.
Object storage options that GitLab has tested, or that are known to work for customers, include:
- SaaS/Cloud solutions such as Amazon S3 and Google Cloud Storage.
- On-premises hardware and appliances from various storage vendors.
- MinIO. We have a guide to deploying this within our Helm Chart documentation.
To configure GitLab to use object storage, refer to the following guides based on which features you intend to use:
- Configure object storage for backups.
- Configure object storage for job artifacts including incremental logging.
- Configure object storage for LFS objects.
- Configure object storage for uploads.
- Configure object storage for merge request diffs.
- Configure object storage for the container registry (optional feature).
- Configure object storage for Mattermost (optional feature).
- Configure object storage for packages (optional feature).
- Configure object storage for the dependency proxy (optional feature).
- Configure object storage for Pseudonymizer (optional feature).
- Configure object storage for autoscale Runner caching (optional, for improved performance).
- Configure object storage for Terraform state files.
Using separate buckets for each data type is the recommended approach for GitLab.
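For example, a minimal /etc/gitlab/gitlab.rb sketch for keeping LFS objects in a dedicated S3 bucket could look like the following (bucket name, region, and credentials are placeholders; other data types follow the same pattern with their own settings and buckets, so check the guide for each feature above for the exact option names in your GitLab version):
gitlab_rails['lfs_object_store_enabled'] = true
gitlab_rails['lfs_object_store_remote_directory'] = "gitlab-lfs"
gitlab_rails['lfs_object_store_connection'] = {
  'provider' => 'AWS',
  'region' => 'us-east-1',
  'aws_access_key_id' => '<AWS_ACCESS_KEY_ID>',
  'aws_secret_access_key' => '<AWS_SECRET_ACCESS_KEY>'
}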
A limitation of our configuration is that each use of object storage is separately configured. We have an issue for improving this, and easily using one bucket with separate folders is one improvement that this might bring.
There is at least one specific issue with using the same bucket: when GitLab is deployed with the Helm chart, restore from backup will not work properly unless separate buckets are used.
One risk of using a single bucket would be if your organization decided to migrate GitLab to a Helm deployment in the future. GitLab would run, but the backup situation might not be realized until the organization had a critical requirement for the backups to work.
Configure NFS (optional)
For improved performance, object storage, along with Gitaly, is recommended over using NFS whenever possible. However, if you intend to use GitLab Pages, this currently requires NFS.
See how to configure NFS.
Troubleshooting
See the troubleshooting documentation.