Kubernetes Binary Installation - Worker Nodes

Planning

Working directory: /opt/work
x: an incrementing digit

IP Address      Hostname   OS       Role     Installed Software
192.168.1.13x   masterx    ubuntu   master   kube-apiserver, kube-controller-manager, kube-scheduler, kubectl
192.168.1.14x   nodex      ubuntu   worker   kubelet, kube-proxy

Kubernetes-internal IPs

Purpose            IP              Configured In    Used By
service IP range   10.255.0.0/24   kube-apiserver
pod IP range       192.10.0.0/16   calico           kube-proxy's clusterCIDR
DNS clusterIP      10.255.0.2      coredns          kubelet's clusterDNS

Version choice

  1. kubernetes: v1.23.4, download link: https://github.com/kubernetes/kubernetes/releases/

Prerequisites

Disable swap on the worker node.

# Disable for the current boot
swapoff -a
# Disable permanently (comment out the swap entries in fstab)
sed -i '/swap/s/^/#/' /etc/fstab
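To confirm swap is really off, check /proc/swaps: after `swapoff -a` it should contain only its header line. A small sketch of that check, run here against a sample file so it is self-contained (on the node, point it at /proc/swaps itself):

```shell
# Count active swap devices: every line after the header of a swaps table.
active_swaps() {
  tail -n +2 "$1" | grep -c . || true
}

# A header-only table, i.e. what /proc/swaps looks like once swap is disabled.
printf 'Filename\tType\tSize\tUsed\tPriority\n' > /tmp/swaps.sample
echo "active swap devices: $(active_swaps /tmp/swaps.sample)"   # prints 0
```

`free -h` works too; the Swap row should read all zeros.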

Create the working directory

mkdir -p /opt/work

Install and configure Docker

apt install docker.io -y

Some Docker installations come up with the cgroupfs cgroup driver; it must be changed to systemd so it matches the kubelet configuration below.

Check:

docker info | grep "Cgroup Driver"

Modify:

cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": [
    "https://docker.mirrors.ustc.edu.cn/",
    "https://reg-mirror.qiniu.com/",
    "https://registry.docker-cn.com"
  ]
}
EOF

systemctl daemon-reload
systemctl restart docker
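A typo in daemon.json keeps the Docker daemon from starting at all, so it is worth validating the JSON before the restart. A minimal sketch, assuming python3 is present (it ships with Ubuntu); it checks a sample copy here, but pointing it at /etc/docker/daemon.json is the real use:

```shell
# Any JSON syntax error makes python's parser exit non-zero.
cat > /tmp/daemon.json << 'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

if python3 -m json.tool /tmp/daemon.json > /dev/null; then
  echo "daemon.json: valid JSON"
else
  echo "daemon.json: syntax error, fix it before restarting docker"
fi
```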

Pull in the binaries

mkdir /opt/work/bin
cd /opt/work/bin/

wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd64

mv cfssljson_1.6.1_linux_amd64 cfssljson
mv cfssl_1.6.1_linux_amd64 cfssl
mv cfssl-certinfo_1.6.1_linux_amd64 cfssl-certinfo

chmod +x cfssl*

mkdir /opt/work/download
cd /opt/work/download/

wget https://storage.googleapis.com/kubernetes-release/release/v1.23.4/kubernetes-server-linux-amd64.tar.gz
tar zxf kubernetes-server-linux-amd64.tar.gz

mkdir -p /opt/work/bin/worker
cd kubernetes/server/bin/
cp kubelet kube-proxy kubectl /opt/work/bin/worker/
shopt -s extglob   # bash extended globbing, needed for the !(worker) pattern below
ln -sf /opt/work/bin/!(worker) /usr/local/bin/
ln -sf /opt/work/bin/worker/* /usr/local/bin/
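Before moving on, it is worth confirming that every tool actually resolves through the PATH; a broken symlink here surfaces later as a confusing "command not found". A quick sanity loop:

```shell
# Report where each expected binary resolves, or flag it as missing.
for bin in cfssl cfssljson cfssl-certinfo kubelet kube-proxy kubectl; do
  if command -v "$bin" > /dev/null; then
    echo "$bin -> $(command -v "$bin")"
  else
    echo "$bin: NOT FOUND in PATH"
  fi
done
```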

Certificate generation

Copy ca.pem, ca-key.pem, and ca-config.json over from the master node:

mkdir -p /opt/work/ssl/ca
scp root@192.168.1.131:/opt/work/ssl/ca/ca.pem /opt/work/ssl/ca/
scp root@192.168.1.131:/opt/work/ssl/ca/ca-key.pem /opt/work/ssl/ca/
scp root@192.168.1.131:/opt/work/ssl/ca/ca-config.json /opt/work/ssl/ca/

mkdir -p /opt/work/ssl/{kubelet,proxy}

/opt/work/ssl/kubelet must exist, but you do not generate certificates in it yourself. Point kubelet.service at this path; once the master approves this node's join request, the certificates are generated there automatically.

kube-proxy

cd /opt/work/ssl/proxy/
cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "GuangZhou",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
EOF

cfssl gencert -ca=../ca/ca.pem -ca-key=../ca/ca-key.pem -config=../ca/ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
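The RBAC layer keys off the certificate's CN (system:kube-proxy), so it is worth reading the subject back out of the finished kube-proxy.pem; `cfssl-certinfo -cert kube-proxy.pem` shows it. The equivalent openssl check, demonstrated on a throwaway self-signed certificate carrying the same subject fields so the commands are self-contained:

```shell
# Generate a throwaway cert with the same subject as kube-proxy-csr.json,
# then read its subject back, as you would for the real kube-proxy.pem.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/C=CN/ST=GuangDong/L=GuangZhou/O=k8s/OU=system/CN=system:kube-proxy" \
  -keyout /tmp/demo-key.pem -out /tmp/demo.pem 2> /dev/null

openssl x509 -in /tmp/demo.pem -noout -subject
```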

Running the components

mkdir -p /opt/work/conf
mkdir -p /opt/work/kubernetes/{conf,systemd}
mkdir -p /opt/work/logs/kubernetes/{kubelet,proxy}

kubelet

Configuration files and the systemd service file:

scp root@192.168.1.131:/opt/work/kubernetes/conf/token.csv /opt/work/

cd /opt/work/conf/
BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /opt/work/token.csv)
## Set the cluster parameters
kubectl config set-cluster kubernetes --certificate-authority=/opt/work/ssl/ca/ca.pem --embed-certs=true --server=https://192.168.1.131:6443 --kubeconfig=kubelet-bootstrap.kubeconfig
## Set the client credentials
kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.kubeconfig
## Set the context
kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig
## Use it as the default context
kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig

rm -f /opt/work/token.csv
cd /opt/work/kubernetes/conf/
cat > kubelet.json << EOF
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/opt/work/ssl/ca/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "192.168.1.141",
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "systemd",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.255.0.2"]
}
EOF
cat > kubelet.conf << EOF
KUBELET_OPTS=" \\
--bootstrap-kubeconfig=/opt/work/conf/kubelet-bootstrap.kubeconfig \\
--cert-dir=/opt/work/ssl/kubelet \\
--kubeconfig=/opt/work/conf/kubelet.kubeconfig \\
--config=/opt/work/kubernetes/conf/kubelet.json \\
--network-plugin=cni \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 \\
--alsologtostderr=true \\
--logtostderr=false \\
--log-dir=/opt/work/logs/kubernetes/kubelet \\
--v=2"
EOF

cd /opt/work/kubernetes/systemd/
cat > kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=-/opt/work/kubernetes/conf/kubelet.conf
ExecStart=/usr/local/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

ln -sf $PWD/kubelet.service /lib/systemd/system/
systemctl enable --now kubelet

At this point the kubelet cannot start successfully; it reports:

"Failed to run kubelet" err="failed to run Kubelet: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User \"kubelet-bootstrap\" cannot create resource \"certificatesigningrequests\" in API group \"certificates.k8s.io\" at the cluster scope"

Run the following on the master node:

## Create the cluster role binding
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

Back on the worker node, restart the kubelet:

systemctl restart kubelet

Once it has started, kubectl get csr on the master shows a record with CONDITION=Pending. Approve it.

Suppose NAME=node-csr-pZBXWyeVQfp7JG_Y5e3bo1FW7yrginWe97vIVwXLhR8

kubectl certificate approve node-csr-pZBXWyeVQfp7JG_Y5e3bo1FW7yrginWe97vIVwXLhR8

Query again; when the state shows CONDITION=Approved,Issued, you are done.
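When several workers join at once, approving CSRs one at a time gets tedious. One approach is to feed every reported CSR name through xargs. The sketch below echoes the commands against a sample name list so it is safe to run anywhere; on the master, the real pipeline (shown in the comment) should only be run after reviewing kubectl get csr:

```shell
# On the master, after reviewing the pending list:
#   kubectl get csr -o name | xargs -r -n1 kubectl certificate approve
# Demonstrated here with echo against hypothetical sample names:
printf '%s\n' \
  'certificatesigningrequest.certificates.k8s.io/node-csr-aaa' \
  'certificatesigningrequest.certificates.k8s.io/node-csr-bbb' > /tmp/csr.list
xargs -r -n1 echo kubectl certificate approve < /tmp/csr.list
```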

kube-proxy

Configuration files and the systemd service file:
cd /opt/work/conf/
kubectl config set-cluster kubernetes --certificate-authority=/opt/work/ssl/ca/ca.pem --embed-certs=true --server=https://192.168.1.131:6443 --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy --client-certificate=/opt/work/ssl/proxy/kube-proxy.pem --client-key=/opt/work/ssl/proxy/kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

cd /opt/work/kubernetes/conf/
cat > kube-proxy.conf << EOF
KUBE_PROXY_OPTS=" \\
--config=/opt/work/kubernetes/conf/kube-proxy.yaml \\
--alsologtostderr=true \\
--logtostderr=false \\
--log-dir=/opt/work/logs/kubernetes/proxy \\
--v=2"
EOF
cat > kube-proxy.yaml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 192.168.1.141
clientConnection:
  kubeconfig: /opt/work/conf/kube-proxy.kubeconfig
clusterCIDR: 192.10.0.0/16
healthzBindAddress: 192.168.1.141:10256
kind: KubeProxyConfiguration
metricsBindAddress: 192.168.1.141:10249
mode: "ipvs"
EOF

cd /opt/work/kubernetes/systemd/
cat > kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
EnvironmentFile=-/opt/work/kubernetes/conf/kube-proxy.conf
ExecStart=/usr/local/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

ln -sf $PWD/kube-proxy.service /lib/systemd/system/
systemctl enable --now kube-proxy
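mode: "ipvs" only takes effect if the ip_vs kernel modules are available (modprobe ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh on the node); otherwise kube-proxy falls back to iptables. A small check that scans a modules table for them, shown here against a sample file so it is self-contained (on the node, point it at /proc/modules):

```shell
# List the ip_vs* modules present in a /proc/modules-style table.
loaded_ipvs() {
  awk '$1 ~ /^ip_vs/ { print $1 }' "$1"
}

# Sample table standing in for /proc/modules:
printf 'ip_vs 180224 4 - Live\nip_vs_rr 16384 1 - Live\nnf_conntrack 172032 2 - Live\n' > /tmp/modules.sample
loaded_ipvs /tmp/modules.sample   # prints ip_vs and ip_vs_rr
```

Once kube-proxy is running in ipvs mode, `ipvsadm -Ln` on the node shows the virtual-server rules it programs.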

Checking the result

On master1, kubectl get node shows the node list; once the new node's STATUS becomes Ready, it has joined successfully.
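The Ready check can be scripted for multiple nodes by parsing the STATUS column of kubectl get nodes --no-headers. A sketch of that parse, run against sample output so it is self-contained:

```shell
# Succeed only when every line's second column (STATUS) reads Ready.
all_ready() {
  awk 'NF && $2 != "Ready" { bad = 1 } END { exit bad }' "$1"
}

# Sample `kubectl get nodes --no-headers` output: node1 is still coming up.
printf 'master1  Ready     <none>  5d   v1.23.4\nnode1    NotReady  <none>  1m   v1.23.4\n' > /tmp/nodes.sample
if all_ready /tmp/nodes.sample; then echo "all nodes Ready"; else echo "still waiting"; fi   # prints: still waiting
```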

Testing

Deploy an nginx to try things out.

Run the following on master1:

cd /opt/work/kubernetes/components/

cat > nginx.yaml << EOF
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: default
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-nodeport
  namespace: default
spec:
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30001
    protocol: TCP
  type: NodePort
  selector:
    app: nginx
EOF

kubectl apply -f nginx.yaml

Then visit http://192.168.1.141:30001/ and check that the nginx welcome page comes up. If it does not, use kubectl get all -A to see which piece is not running, then kubectl logs xxxxxx to track down the problem.

Extra touches

If you have also joined masters into the working cluster, it may nag at you that kubectl get nodes shows every ROLES value as <none>. To label nodes master / worker, or to keep pods from being scheduled onto the masters, use the tweaks below.

## Label the node with node-role.kubernetes.io/master="" to mark it as control plane
kubectl label node master1 node-role.kubernetes.io/master=
## You can also label worker nodes; kubeadm does not do this for you
kubectl label node node1 node-role.kubernetes.io/worker=

## Taint the node with node-role.kubernetes.io/master:NoSchedule so nothing is scheduled onto it
kubectl taint nodes master1 node-role.kubernetes.io/master:NoSchedule

kubeadm does this for you automatically; for the details see: https://kubernetes.io/zh/docs/reference/setup-tools/kubeadm/implementation-details/#mark-the-node-as-control-plane
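If some workload genuinely has to run on a tainted master (a monitoring agent, say), give its pod spec a matching toleration instead of removing the taint. A minimal fragment; the pod name and image are illustrative:

```yaml
# Hypothetical pod that tolerates the master NoSchedule taint set above.
apiVersion: v1
kind: Pod
metadata:
  name: on-master-demo
spec:
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
  containers:
  - name: pause
    image: registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
```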