1 - Requirements
I. Operating System
RKE runs on almost any Linux OS with Docker installed, but Ubuntu 16.04 is recommended, as most RKE development and testing happens on Ubuntu 16.04.
1. Some operating systems have limitations and specific requirements:
- SSH user - The SSH user used to access the nodes must be a member of the docker group:
usermod -aG docker <user_name>
See Manage Docker as a non-root user to learn how to configure access to Docker without using the root user.
- Swap must be disabled on worker nodes.
- The following kernel modules must be loaded. To check whether a module is loaded, use:
modprobe module_name
lsmod | grep module_name
# For built-in modules:
grep module_name /lib/modules/$(uname -r)/modules.builtin
Required modules: br_netfilter, ip6_udp_tunnel, ip_set, ip_set_hash_ip, ip_set_hash_net, iptable_filter, iptable_nat, iptable_mangle, iptable_raw, nf_conntrack_netlink, nf_conntrack, nf_conntrack_ipv4, nf_defrag_ipv4, nf_nat, nf_nat_ipv4, nf_nat_masquerade_ipv4, nfnetlink, udp_tunnel, veth, vxlan, x_tables, xt_addrtype, xt_conntrack, xt_comment, xt_mark, xt_multiport, xt_nat, xt_recent, xt_set, xt_statistic, xt_tcpudp
- The following sysctl setting must be applied:
net.bridge.bridge-nf-call-iptables=1
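These checks can be scripted. The sketch below loops over the required modules listed above and reports any that are neither loaded nor built in, then prints the sysctl value (which should be 1):
#!/bin/sh
# Report required kernel modules that are neither loaded nor built in
for module in br_netfilter ip6_udp_tunnel ip_set ip_set_hash_ip ip_set_hash_net \
  iptable_filter iptable_nat iptable_mangle iptable_raw nf_conntrack_netlink \
  nf_conntrack nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat nf_nat_ipv4 \
  nf_nat_masquerade_ipv4 nfnetlink udp_tunnel veth vxlan x_tables xt_addrtype \
  xt_conntrack xt_comment xt_mark xt_multiport xt_nat xt_recent xt_set \
  xt_statistic xt_tcpudp; do
  if ! lsmod | grep -q "^${module} " && \
     ! grep -q "/${module}.ko" "/lib/modules/$(uname -r)/modules.builtin"; then
    echo "missing: ${module}"
  fi
done
# Verify the required sysctl setting (should print 1)
sysctl -n net.bridge.bridge-nf-call-iptables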
2. Red Hat Enterprise Linux (RHEL) / Oracle Enterprise Linux (OEL) / CentOS
If you are using Red Hat Enterprise Linux, Oracle Enterprise Linux, or CentOS, you cannot use the root user as the SSH user due to Bugzilla 1527565. Follow the instructions below to set up Docker correctly, depending on how you installed Docker on the node.
- Using docker-ce
To check whether docker-ce or docker-ee is installed, run the following command to query the installed package:
rpm -q docker-ce
- Using RHEL/CentOS maintained Docker
If you are using the Docker package supplied by Red Hat/CentOS, the package name is docker. You can check the installed package by running:
rpm -q docker
If you are using the Docker package supplied by Red Hat/CentOS, the dockerroot group is automatically added to the system. You will need to edit (or create) /etc/docker/daemon.json to include the following:
{
"group": "dockerroot"
}
Restart Docker after editing or creating the file. After Docker restarts, check the group permission of the Docker socket (/var/run/docker.sock); it should show dockerroot as the group:
srw-rw----. 1 root dockerroot 0 Jul 4 09:57 /var/run/docker.sock
Add the SSH user you want to use to this group; this cannot be the root user.
usermod -aG dockerroot <user_name>
To verify that the user is configured correctly, log out of the node, log back in with the SSH user, and run docker ps:
ssh <user_name>@node
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
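For convenience, the steps above can be run as one root session; a minimal sketch, assuming systemd manages the Docker service and <user_name> is your non-root SSH user:
# Configure the dockerroot group, restart Docker, and grant the SSH user access
cat > /etc/docker/daemon.json <<'EOF'
{
"group": "dockerroot"
}
EOF
systemctl restart docker
ls -l /var/run/docker.sock   # group should now show dockerroot
usermod -aG dockerroot <user_name>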
3. Red Hat Atomic
Before trying to use RKE with Red Hat Atomic nodes, a couple of OS updates are needed for RKE to work.
- OpenSSH version
By default, Atomic installs OpenSSH 6.4, which does not support SSH tunneling, a core RKE requirement. OpenSSH must be upgraded.
- Creating a Docker group
By default, Atomic does not come with a Docker group. Instead, update the ownership of the Docker socket to the specific user that will run RKE:
chown <user> /var/run/docker.sock
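You can verify both prerequisites afterwards; a quick sketch:
ssh -V                       # should report a version that supports SSH tunneling after the upgrade
ls -l /var/run/docker.sock   # owner should be the user that will run RKE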
II. Software
- Docker - Each Kubernetes version supports different Docker versions.
Kubernetes version | Supported Docker versions |
---|---|
v1.13.x | RHEL Docker 1.13, 17.03.2, 18.06.2, 18.09.2 |
v1.12.x | RHEL Docker 1.13, 17.03.2, 18.06.2, 18.09.2 |
v1.11.x | RHEL Docker 1.13, 17.03.2, 18.06.2, 18.09.2 |
You can either follow the Docker installation instructions or use one of Rancher's install scripts to install Docker. For RHEL, see How to install Docker on Red Hat Enterprise Linux 7.
Docker version | Install script |
---|---|
18.09.2 | curl https://releases.rancher.com/install-docker/18.09.2.sh | sh |
18.06.2 | curl https://releases.rancher.com/install-docker/18.06.2.sh | sh |
17.03.2 | curl https://releases.rancher.com/install-docker/17.03.2.sh | sh |
Confirm the installed Docker version:
docker version --format '{{.Server.Version}}'
17.03.2-ce
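To script this check, the sketch below compares the running server version against the supported releases from the table above, after stripping the -ce/-ee suffix; the version list is copied from the table and would need updating for other Kubernetes versions:
# Fail if the running Docker version is not a supported release
SUPPORTED="1.13 17.03.2 18.06.2 18.09.2"
VERSION=$(docker version --format '{{.Server.Version}}' | cut -d- -f1)
case " ${SUPPORTED} " in
  *" ${VERSION} "*) echo "Docker ${VERSION} is supported" ;;
  *) echo "Docker ${VERSION} is not in the supported list" >&2 ;;
esac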
- OpenSSH 7.0+ - OpenSSH must be installed on each node.
III. Ports
RKE node: the node that runs the rke commands
RKE node - Outbound rules
Protocol | Port | Source | Destination | Description |
---|---|---|---|---|
TCP | 22 | RKE node | Any node configured in the Cluster Configuration File | SSH provisioning of the node by RKE |
TCP | 6443 | RKE node | controlplane nodes | Kubernetes apiserver |
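You can confirm these outbound rules from the RKE node before provisioning; a sketch assuming nc (netcat) is available, with node1.example.com standing in for a node from your cluster configuration file:
# Check reachability of the provisioning and apiserver ports
nc -zv node1.example.com 22     # SSH provisioning
nc -zv node1.example.com 6443   # Kubernetes apiserver (controlplane nodes only)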
etcd nodes: nodes with the role etcd
etcd nodes - Inbound rules
Protocol | Port | Source | Description |
---|---|---|---|
TCP | 2376 | Rancher nodes | Docker daemon TLS port used by Docker Machine (only needed when using Node Driver/Templates) |
TCP | 2379 | etcd nodes, controlplane nodes | etcd client requests |
TCP | 2380 | etcd nodes, controlplane nodes | etcd peer communication |
UDP | 8472 | etcd nodes, controlplane nodes, worker nodes | Canal/Flannel VXLAN overlay networking |
TCP | 9099 | etcd node itself (local traffic, not across nodes); see Local node traffic below | Canal/Flannel livenessProbe/readinessProbe |
TCP | 10250 | controlplane nodes | kubelet |
etcd nodes - Outbound rules
Protocol | Port | Destination | Description |
---|---|---|---|
TCP | 443 | Rancher nodes | Rancher agent |
TCP | 2379 | etcd nodes | etcd client requests |
TCP | 2380 | etcd nodes | etcd peer communication |
TCP | 6443 | controlplane nodes | Kubernetes apiserver |
UDP | 8472 | etcd nodes, controlplane nodes, worker nodes | Canal/Flannel VXLAN overlay networking |
TCP | 9099 | etcd node itself (local traffic, not across nodes); see Local node traffic below | Canal/Flannel livenessProbe/readinessProbe |
controlplane nodes: nodes with the role controlplane
controlplane nodes - Inbound rules
Protocol | Port | Source | Description |
---|---|---|---|
TCP | 80 | Any source that consumes Ingress services | Ingress controller (HTTP) |
TCP | 443 | Any source that consumes Ingress services | Ingress controller (HTTPS) |
TCP | 2376 | Rancher nodes | Docker daemon TLS port used by Docker Machine (only needed when using Node Driver/Templates) |
TCP | 6443 | etcd nodes, controlplane nodes, worker nodes | Kubernetes apiserver |
UDP | 8472 | etcd nodes, controlplane nodes, worker nodes | Canal/Flannel VXLAN overlay networking |
TCP | 9099 | controlplane node itself (local traffic, not across nodes); see Local node traffic below | Canal/Flannel livenessProbe/readinessProbe |
TCP | 10250 | controlplane nodes | kubelet |
TCP | 10254 | controlplane node itself (local traffic, not across nodes); see Local node traffic below | Ingress controller livenessProbe/readinessProbe |
TCP/UDP | 30000-32767 | Any source that consumes NodePort services | NodePort port range |
controlplane nodes - Outbound rules
Protocol | Port | Destination | Description |
---|---|---|---|
TCP | 443 | Rancher nodes | Rancher agent |
TCP | 2379 | etcd nodes | etcd client requests |
TCP | 2380 | etcd nodes | etcd peer communication |
UDP | 8472 | etcd nodes, controlplane nodes, worker nodes | Canal/Flannel VXLAN overlay networking |
TCP | 9099 | controlplane node itself (local traffic, not across nodes); see Local node traffic below | Canal/Flannel livenessProbe/readinessProbe |
TCP | 10250 | etcd nodes, controlplane nodes, worker nodes | kubelet |
TCP | 10254 | controlplane node itself (local traffic, not across nodes); see Local node traffic below | Ingress controller livenessProbe/readinessProbe |
worker nodes: nodes with the role worker
worker nodes - Inbound rules
Protocol | Port | Source | Description |
---|---|---|---|
TCP | 22 | Any network that you want to be able to remotely access this node from (Linux worker nodes only) | Remote access over SSH |
TCP | 3389 | Any network that you want to be able to remotely access this node from (Windows worker nodes only) | Remote access over RDP |
TCP | 80 | Any source that consumes Ingress services | Ingress controller (HTTP) |
TCP | 443 | Any source that consumes Ingress services | Ingress controller (HTTPS) |
TCP | 2376 | Rancher nodes | Docker daemon TLS port used by Docker Machine (only needed when using Node Driver/Templates) |
UDP | 8472 | etcd nodes, controlplane nodes, worker nodes | Canal/Flannel VXLAN overlay networking |
TCP | 9099 | worker node itself (local traffic, not across nodes); see Local node traffic below | Canal/Flannel livenessProbe/readinessProbe |
TCP | 10250 | controlplane nodes | kubelet |
TCP | 10254 | worker node itself (local traffic, not across nodes); see Local node traffic below | Ingress controller livenessProbe/readinessProbe |
TCP/UDP | 30000-32767 | Any source that consumes NodePort services | NodePort port range |
worker nodes - Outbound rules
Protocol | Port | Destination | Description |
---|---|---|---|
TCP | 443 | Rancher nodes | Rancher agent |
TCP | 6443 | controlplane nodes | Kubernetes apiserver |
UDP | 8472 | etcd nodes, controlplane nodes, worker nodes | Canal/Flannel VXLAN overlay networking |
TCP | 9099 | worker node itself (local traffic, not across nodes); see Local node traffic below | Canal/Flannel livenessProbe/readinessProbe |
TCP | 10254 | worker node itself (local traffic, not across nodes); see Local node traffic below | Ingress controller livenessProbe/readinessProbe |
Information on local node traffic
Kubernetes healthchecks (livenessProbe and readinessProbe) are executed on the host itself. On most nodes this is allowed by default. If you have applied strict host firewall policies (e.g. iptables) on the node, or if your nodes have multiple interfaces (multihomed), this traffic gets blocked. In that case you have to explicitly allow it in your host firewall, or, for public/private cloud hosted machines (e.g. AWS or OpenStack), in your security group configuration. Keep in mind that when a security group is used as Source or Destination in a security group rule, it only applies to the private interface of the nodes/instances.
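For example, on an iptables-based host firewall you could allow the probe ports explicitly; a sketch, with <node_internal_ip> as a placeholder for the node's own address, to be placed before any DROP rules:
# Allow local healthcheck traffic (Canal/Flannel and Ingress controller probes)
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -p tcp -s <node_internal_ip> --dport 9099 -j ACCEPT
iptables -A INPUT -p tcp -s <node_internal_ip> --dport 10254 -j ACCEPT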
If you are using an external firewall, make sure port TCP/6443 is open between the machine you are using to run rke and the nodes that you are going to use in the cluster.
Opening port TCP/6443 using iptables
# Open TCP/6443 for all
iptables -A INPUT -p tcp --dport 6443 -j ACCEPT
# Open TCP/6443 for one specific IP
iptables -A INPUT -p tcp -s your_ip_here --dport 6443 -j ACCEPT
Opening port TCP/6443 using firewalld
# Open TCP/6443 for all
firewall-cmd --zone=public --add-port=6443/tcp --permanent
firewall-cmd --reload
# Open TCP/6443 for one specific IP
firewall-cmd --permanent --zone=public --add-rich-rule='
rule family="ipv4"
source address="your_ip_here/32"
port protocol="tcp" port="6443" accept'
firewall-cmd --reload
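As a convenience, the sketch below opens every inbound port from the tables above with firewalld, for a node that combines the etcd, controlplane, and worker roles; trim the list for single-role nodes:
# Open all RKE inbound ports for a node with all three roles
for port in 22/tcp 80/tcp 443/tcp 2376/tcp 2379/tcp 2380/tcp 6443/tcp \
  8472/udp 9099/tcp 10250/tcp 10254/tcp 30000-32767/tcp 30000-32767/udp; do
  firewall-cmd --zone=public --permanent --add-port=${port}
done
firewall-cmd --reload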