High-Availability Cluster Deployment

Preliminary Preparation

1. Check the CentOS version and hostname

# Check the CentOS version
cat /etc/centos-release
>> CentOS Linux release 7.7.1908 (Core)

# The output of hostname will be this machine's node name in the Kubernetes cluster
hostname

# Change the hostname
hostnamectl set-hostname NAME
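
For example, a name such as k8s-master-1 (a hypothetical name, used only for illustration) satisfies the hostname rules listed under "Basic environment requirements" below:

hostnamectl set-hostname k8s-master-1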

2. Check the network

# ip route show
default via 111.111.111.1 dev ens192 proto static metric 100
111.111.111.0/24 dev ens192 proto kernel scope link src 111.111.111.81 metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:8c:5b:38 brd ff:ff:ff:ff:ff:ff
    inet 111.111.111.81/24 brd 111.111.111.255 scope global noprefixroute ens192
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe8c:5b38/64 scope link
       valid_lft forever preferred_lft forever
3: ens224: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:8c:1e:41 brd ff:ff:ff:ff:ff:ff

IP address used by kubelet

  • The output of ip route show identifies the machine's default network interface, usually the first NIC, e.g. default via 111.111.111.1 dev ens192
  • The output of ip a shows the IP address bound to that default NIC; Kubernetes will use this address to communicate with the other nodes in the cluster, e.g. 111.111.111.81 (see the sketch after this list)
  • The IP addresses used by Kubernetes on all nodes must be mutually reachable (no NAT mapping, and no security-group or firewall isolation)
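
If you prefer to confirm these two values from a script rather than by eye, the following sketch derives the default NIC and its IPv4 address with the same ip commands shown above; on the example host it would print ens192 and 111.111.111.81/24.

# Sketch: find the default NIC, then the IPv4 address kubelet will use
DEFAULT_IF=$(ip route show default | awk '{print $5; exit}')
echo "default interface: ${DEFAULT_IF}"
ip -4 addr show "${DEFAULT_IF}" | awk '/inet /{print "node ip: " $2}'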

3. Basic environment requirements

  • Every node runs CentOS 7.6, 7.7 or 7.8
  • Every node has at least 2 CPU cores and at least 4 GB of memory
  • No node's hostname is localhost, and no hostname contains underscores, dots or uppercase letters (the sketch after this list checks these first three items)
  • Every node has a fixed internal IP address
  • Every node has exactly one network interface; if you have a special need for more, you can add additional NICs after the K8S installation is complete
  • The IP addresses used by kubelet are mutually reachable across all nodes (reachable without NAT mapping), with no firewall or security-group isolation
  • No node runs containers directly with docker run or docker-compose
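
The first three items lend themselves to a quick scripted check. The sketch below only reports problems and changes nothing; the 3800 MB memory threshold is an assumption that leaves room for memory reserved by firmware on a 4 GB host.

# Pre-flight sketch: report violations of the CPU / memory / hostname requirements above
[ "$(nproc)" -ge 2 ] || echo "FAIL: fewer than 2 CPU cores"
[ "$(free -m | awk '/^Mem:/{print $2}')" -ge 3800 ] || echo "FAIL: less than 4 GB of memory"
case "$(hostname)" in
  localhost|*_*|*.*|*[A-Z]*) echo "FAIL: hostname '$(hostname)' breaks the naming rules" ;;
esac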

Install docker / kubelet

Execute the following as root on all nodes to install the software:

  • docker
  • nfs-utils
  • kubectl / kubeadm / kubelet

1. Quick install

# Execute on both master and worker nodes
curl -sSL https://kuboard.cn/install-script/v1.16.2/install_kubelet.sh | sh
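
If you prefer to review the script before running it, it can be downloaded first and executed afterwards; this is the same script, just not piped straight into sh:

curl -sSL https://kuboard.cn/install-script/v1.16.2/install_kubelet.sh -o install_kubelet.sh
less install_kubelet.sh
sh install_kubelet.sh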

2. Manual install

#!/bin/bash

# Execute on both master and worker nodes

# Install docker
# Reference documentation:
# https://docs.docker.com/install/linux/docker-ce/centos/
# https://docs.docker.com/install/linux/linux-postinstall/

# Remove old versions
yum remove -y docker \
              docker-client \
              docker-client-latest \
              docker-common \
              docker-latest \
              docker-latest-logrotate \
              docker-logrotate \
              docker-selinux \
              docker-engine-selinux \
              docker-engine

# Set up the yum repository
yum install -y yum-utils \
               device-mapper-persistent-data \
               lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Install and start docker
yum install -y docker-ce-18.09.7 docker-ce-cli-18.09.7 containerd.io
systemctl enable docker
systemctl start docker

# Install nfs-utils
# nfs-utils must be installed before NFS network storage can be mounted
yum install -y nfs-utils
yum install -y wget

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

# Disable swap
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak | grep -v swap > /etc/fstab

# Modify /etc/sysctl.conf
# If the settings are already present, update them in place
sed -i "s#^net.ipv4.ip_forward.*#net.ipv4.ip_forward=1#g" /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-ip6tables.*#net.bridge.bridge-nf-call-ip6tables=1#g" /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-iptables.*#net.bridge.bridge-nf-call-iptables=1#g" /etc/sysctl.conf
# They may not exist yet, so append them as well
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
# Apply the settings
sysctl -p

# Configure the Kubernetes yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Remove old versions
yum remove -y kubelet kubeadm kubectl

# Install kubelet, kubeadm and kubectl
yum install -y kubelet-1.16.2 kubeadm-1.16.2 kubectl-1.16.2

# Change the docker Cgroup Driver to systemd
# # i.e. in /usr/lib/systemd/system/docker.service, change the line
# #   ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
# # to
# #   ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd
# Without this change, you may hit the following error when adding worker nodes:
# [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd".
# Please follow the guide at https://kubernetes.io/docs/setup/cri/
sed -i "s#^ExecStart=/usr/bin/dockerd.*#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd#g" /usr/lib/systemd/system/docker.service

# Configure a docker registry mirror to speed up and stabilize image downloads
# If your access to https://hub.docker.io is already fast and stable, you can skip this step
curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io

# Restart docker and start kubelet
systemctl daemon-reload
systemctl restart docker
systemctl enable kubelet && systemctl start kubelet

docker version
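
As a quick sanity check after the script finishes, the pinned versions and the cgroup driver can be verified; the expected values in the comments simply restate what the script above installs and configures.

# Expected: 18.09.7
docker version --format '{{.Server.Version}}'
# Expected: v1.16.2
kubeadm version -o short
# Expected: Cgroup Driver: systemd
docker info 2>/dev/null | grep -i 'cgroup driver'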

WARNING

If you run systemctl status kubelet at this point, it will report that kubelet has failed to start. Ignore this error for now: kubelet cannot start normally until the kubeadm init step in the later sections has been completed.
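
To confirm that the failure is the expected one rather than something else, the unit status and the most recent log lines can be inspected with the standard systemd tools; the exact error text may vary with the kubelet version.

systemctl status kubelet --no-pager | head -n 5
journalctl -u kubelet --no-pager | tail -n 5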

Install haproxy / keepalived

1. keepalived configuration

Install keepalived with yum, then edit its configuration file using the template below.
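
A minimal sketch of that install step, assuming the stock CentOS package and its default configuration path /etc/keepalived/keepalived.conf:

yum install -y keepalived
systemctl enable keepalived
# After writing /etc/keepalived/keepalived.conf from the template below:
systemctl restart keepalived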

! Configuration File for keepalived

global_defs {
    router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state {{ state }}
    interface {{ interface }}
    virtual_router_id 51
    priority {{ priority }}
    advert_int 1
    track_interface {
    {% for item in track_interface %}
        {{ item }}
    {% endfor %}
    }
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        {{ vip }}
    }
}

  • state is MASTER on one host and BACKUP on all the others, hence the virtual IP will initially be assigned to the MASTER.
  • interface is the network interface taking part in the negotiation of the virtual IP, e.g. ens192 in the examples above.
  • virtual_router_id should be the same on all keepalived hosts of this cluster, while unique amongst all clusters in the same subnet; 51 is a commonly used default.
  • priority should be higher on the MASTER than on the BACKUPs, e.g. 101 and 100 respectively.
  • track_interface is a list of network interfaces (not listening ports) whose link state keepalived monitors; if a tracked interface goes down, the node gives up the virtual IP.
  • auth_pass should be the same on all keepalived hosts of the cluster, e.g. 1111.
  • vip is the virtual IP address negotiated between the keepalived hosts (a rendered example with concrete values follows this list).
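
For concreteness, this is how the template might render on the MASTER node, reusing the ens192 interface from the earlier examples; the priority, auth_pass and, in particular, the virtual IP 111.111.111.200 are hypothetical values chosen only for illustration.

! Rendered example for the MASTER node (illustrative values)
global_defs {
    router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER
    interface ens192
    virtual_router_id 51
    priority 101
    advert_int 1
    track_interface {
        ens192
    }
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        111.111.111.200
    }
}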

2. haproxy configuration

Install haproxy with yum, then edit its configuration file using the template below.
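
The corresponding install step, again assuming the stock CentOS package and its default configuration path /etc/haproxy/haproxy.cfg:

yum install -y haproxy
systemctl enable haproxy
# After writing /etc/haproxy/haproxy.cfg from the template below:
systemctl restart haproxy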

global
    log 127.0.0.1 local2 info
    pidfile /var/run/haproxy.pid
    maxconn 10240
    daemon

defaults
    log global
    mode http
    timeout connect 5000
    timeout client 5000
    timeout server 5000
    timeout check 2000

########## K8s-APIServer LB #####################
listen k8s-apiserver-lb
    bind 0.0.0.0:8443
    mode tcp
    balance roundrobin
    {% for item in groups.master %}
    server {{ item }} {{ hostvars[item].ansible_ssh_host }}:6443 check port 6443 inter 5000 fall 5
    {% endfor %}

########## K8s-Web #####################
listen k8s-web
    bind 0.0.0.0:32000
    mode tcp
    balance roundrobin
    {% for item in groups.master %}
    server {{ item }} {{ hostvars[item].ansible_ssh_host }}:30000 check port 30000 inter 5000 fall 5
    {% endfor %}

######## stats ############################
listen admin_stats
    bind 0.0.0.0:8080
    mode http
    option httplog
    maxconn 10
    stats refresh 30s
    stats uri /stats

  • maxconn: the maximum number of concurrent connections
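
Before (re)starting the service, the rendered configuration can be validated with haproxy's built-in check mode, and the stats page declared above gives a quick view of backend health. The final curl uses the hypothetical VIP 111.111.111.200 from the keepalived example and will only answer once kubeadm init has been completed on the first master.

# Validate the configuration, then restart haproxy
haproxy -c -f /etc/haproxy/haproxy.cfg && systemctl restart haproxy

# The stats page configured above
curl -s http://127.0.0.1:8080/stats | head -n 5

# After kubeadm init: the apiserver behind the VIP should answer on port 8443
curl -k https://111.111.111.200:8443/healthz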

The remaining steps of the deployment are covered in the follow-up documents.

References