Plan

Working directory: /opt/work. In the tables below, x is an incrementing digit.

| IP address | Hostname | OS | Role | Installed software |
| --- | --- | --- | --- | --- |
| 192.168.1.13x | masterx | ubuntu | master | kube-apiserver, kube-controller-manager, kube-scheduler, kubectl |
| 192.168.1.14x | nodex | ubuntu | worker | kubelet, kube-proxy |
Kubernetes internal IPs

| Purpose | IP | Configured in | Consumed by |
| --- | --- | --- | --- |
| IPs allocated to services | 10.255.0.0/24 | kube-apiserver | |
| IPs allocated to pods | 192.10.0.0/16 | calico, kube-controller-manager | kube-proxy's clusterCIDR |
| clusterIP | 10.255.0.2 | coredns | kubelet's clusterDNS |
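As a quick sanity check on the sizes of these ranges: a CIDR block holds 2^(32 - prefix) addresses, so the /24 service range has room for 256 addresses and the /16 pod range for 65536. A small shell sketch (the `prefix_to_count` helper is a name made up here for illustration):

```shell
# Addresses in a CIDR block: 2^(32 - prefix length)
prefix_to_count() { echo $(( 1 << (32 - $1) )); }

prefix_to_count 24   # service range 10.255.0.0/24 -> 256 addresses
prefix_to_count 16   # pod range 192.10.0.0/16 -> 65536 addresses
```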
Version choice

kubernetes: v1.23.4, download link: https://github.com/kubernetes/kubernetes/releases/
Create the working directory

Pull in the binaries:

```shell
mkdir /opt/work/bin
cd /opt/work/bin/
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd64
mv cfssljson_1.6.1_linux_amd64 cfssljson
mv cfssl_1.6.1_linux_amd64 cfssl
mv cfssl-certinfo_1.6.1_linux_amd64 cfssl-certinfo
chmod +x cfssl*
mkdir /opt/work/download
cd /opt/work/download/
wget https://storage.googleapis.com/kubernetes-release/release/v1.23.4/kubernetes-server-linux-amd64.tar.gz
tar zxf kubernetes-server-linux-amd64.tar.gz
mkdir -p /opt/work/bin/master
cd kubernetes/server/bin/
cp kube-apiserver kube-controller-manager kubectl kube-scheduler /opt/work/bin/master/
# !(master) requires bash's extglob option: shopt -s extglob
ln -sf /opt/work/bin/!(master) /usr/local/bin/
ln -sf /opt/work/bin/master/* /usr/local/bin/
```
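One thing worth calling out: `!(master)` in the last two `ln` commands is a bash extended-glob pattern meaning "every entry except master", and it only expands when the extglob shell option is on. A throwaway-directory sketch of the behavior (directory names here are made up):

```shell
shopt -s extglob                      # !(pattern) needs bash extglob enabled
tmp=$(mktemp -d)
mkdir "$tmp"/master "$tmp"/alpha "$tmp"/beta

# !(master) expands to every entry except "master"
matched=$(cd "$tmp" && printf '%s\n' !(master) | sort | xargs)
echo "$matched"                       # -> alpha beta
rm -rf "$tmp"
```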
Certificate generation

```shell
mkdir -p /opt/work/ssl/{ca,apiserver,controller-manager,kubectl,scheduler}
```
ca

```shell
cd /opt/work/ssl/ca/
cat > ca-csr.json << EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "GuangDong",
            "L": "GuangZhou",
            "O": "k8s",
            "OU": "system"
        }
    ],
    "ca": {
        "expiry": "87600h"
    }
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
cat > ca-config.json << EOF
{
    "signing": {
        "default": {
            "expiry": "87600h"
        },
        "profiles": {
            "kubernetes": {
                "expiry": "87600h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}
EOF
```
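`cfssl gencert -initca` drops ca.pem, ca-key.pem, and ca.csr into the current directory. If you want to double-check what was issued, plain openssl can read the certificate. The sketch below first builds a comparable self-signed CA with openssl (assuming openssl is installed; this is a stand-in, not the cfssl output) purely so the inspection commands have something to run against:

```shell
tmp=$(mktemp -d)

# Stand-in self-signed CA (10 years) with the same subject fields as
# ca-csr.json -- NOT cfssl's output, just for demonstration
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout "$tmp/ca-key.pem" -out "$tmp/ca.pem" -days 3650 \
  -subj "/CN=kubernetes/O=k8s" 2>/dev/null

# The same inspection commands work on the cfssl-generated ca.pem
subject=$(openssl x509 -in "$tmp/ca.pem" -noout -subject)
echo "$subject"
openssl x509 -in "$tmp/ca.pem" -noout -dates
rm -rf "$tmp"
```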
kube-apiserver

```shell
cd /opt/work/ssl/apiserver/
cat > kube-apiserver-csr.json << EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "GuangDong",
            "L": "GuangZhou",
            "O": "k8s",
            "OU": "system"
        }
    ],
    "hosts": [
        "127.0.0.1",
        "10.255.0.1",
        "192.168.1.130",
        "192.168.1.131",
        "192.168.1.132",
        "192.168.1.133",
        "192.168.1.140",
        "192.168.1.141",
        "192.168.1.142",
        "192.168.1.143",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local"
    ]
}
EOF
cfssl gencert -ca=../ca/ca.pem -ca-key=../ca/ca-key.pem -config=../ca/ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver
```
kube-controller-manager

```shell
cd /opt/work/ssl/controller-manager/
cat > kube-controller-manager-csr.json << EOF
{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "GuangDong",
            "L": "GuangZhou",
            "O": "system:kube-controller-manager",
            "OU": "system"
        }
    ],
    "hosts": [
        "127.0.0.1",
        "192.168.1.131",
        "192.168.1.132",
        "192.168.1.133"
    ]
}
EOF
cfssl gencert -ca=../ca/ca.pem -ca-key=../ca/ca-key.pem -config=../ca/ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
```
kube-scheduler

```shell
cd /opt/work/ssl/scheduler/
cat > kube-scheduler-csr.json << EOF
{
    "CN": "system:kube-scheduler",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "GuangDong",
            "L": "GuangZhou",
            "O": "system:kube-scheduler",
            "OU": "system"
        }
    ],
    "hosts": [
        "127.0.0.1",
        "192.168.1.131",
        "192.168.1.132",
        "192.168.1.133"
    ]
}
EOF
cfssl gencert -ca=../ca/ca.pem -ca-key=../ca/ca-key.pem -config=../ca/ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
```
kubectl

```shell
cd /opt/work/ssl/kubectl/
cat > admin-csr.json << EOF
{
    "CN": "admin",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "GuangDong",
            "L": "GuangZhou",
            "O": "system:masters",
            "OU": "system"
        }
    ],
    "hosts": []
}
EOF
cfssl gencert -ca=../ca/ca.pem -ca-key=../ca/ca-key.pem -config=../ca/ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
```
Running the components

- /opt/work/conf: kubeconfig files
- /opt/work/kubernetes/conf: startup commands and startup config files
- /opt/work/kubernetes/systemd: systemd service files
- /opt/work/kubernetes/components: component YAML files, e.g. calico
- /opt/work/logs/kubernetes: logs

```shell
mkdir -p /opt/work/conf
mkdir -p /opt/work/kubernetes/{conf,systemd,components}
mkdir -p /opt/work/logs/kubernetes/{apiserver,controller-manager,scheduler}
```
kube-apiserver

Copy the etcd certificates. etcd serves HTTPS, so its certificates have to be copied over. For installing etcd, refer to this.

```shell
mkdir /opt/work/ssl/etcd
scp root@192.168.1.151:/opt/work/ssl/ca/ca.pem /opt/work/ssl/etcd/
scp root@192.168.1.151:/opt/work/ssl/etcd/etcd.pem /opt/work/ssl/etcd/
scp root@192.168.1.151:/opt/work/ssl/etcd/etcd-key.pem /opt/work/ssl/etcd/
```
Create a token:

```shell
cd /opt/work/kubernetes/conf/
cat > token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
```
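The command substitution in that heredoc should yield a 32-character lowercase hex string: 16 random bytes, hex-dumped by od, with the spaces stripped by tr. The recipe can be checked in isolation:

```shell
# Same token recipe as used in token.csv
token=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "$token"
echo "${#token}"   # -> 32
```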
Config file and systemd service file:
```shell
cd /opt/work/kubernetes/conf/
cat > kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS=" \\
  --bind-address=192.168.1.131 \\
  --secure-port=6443 \\
  --advertise-address=192.168.1.131 \\
  --service-cluster-ip-range=10.255.0.0/24 \\
  --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
  --authorization-mode=Node,RBAC \\
  --enable-bootstrap-token-auth=true \\
  --token-auth-file=/opt/work/kubernetes/conf/token.csv \\
  --service-node-port-range=30000-32767 \\
  --kubelet-client-certificate=/opt/work/ssl/apiserver/kube-apiserver.pem \\
  --kubelet-client-key=/opt/work/ssl/apiserver/kube-apiserver-key.pem \\
  --tls-cert-file=/opt/work/ssl/apiserver/kube-apiserver.pem \\
  --tls-private-key-file=/opt/work/ssl/apiserver/kube-apiserver-key.pem \\
  --requestheader-client-ca-file=/opt/work/ssl/ca/ca.pem \\
  --client-ca-file=/opt/work/ssl/ca/ca.pem \\
  --service-account-key-file=/opt/work/ssl/ca/ca-key.pem \\
  --service-account-signing-key-file=/opt/work/ssl/ca/ca-key.pem \\
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
  --etcd-servers=https://192.168.1.151:2379 \\
  --etcd-cafile=/opt/work/ssl/etcd/ca.pem \\
  --etcd-certfile=/opt/work/ssl/etcd/etcd.pem \\
  --etcd-keyfile=/opt/work/ssl/etcd/etcd-key.pem \\
  --proxy-client-cert-file=/opt/work/ssl/etcd/etcd.pem \\
  --proxy-client-key-file=/opt/work/ssl/etcd/etcd-key.pem \\
  --requestheader-allowed-names=kubernetes \\
  --requestheader-extra-headers-prefix=X-Remote-Extra- \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-username-headers=X-Remote-User \\
  --enable-aggregator-routing=true \\
  --allow-privileged=true \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/opt/work/logs/kubernetes/kube-apiserver-audit.log \\
  --logtostderr=false \\
  --log-dir=/opt/work/logs/kubernetes/apiserver \\
  --v=4"
EOF
cd /opt/work/kubernetes/systemd/
cat > kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=-/opt/work/kubernetes/conf/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
ln -sf /opt/work/kubernetes/systemd/kube-apiserver.service /lib/systemd/system/
systemctl enable --now kube-apiserver
```
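The unit file relies on systemd's EnvironmentFile mechanism: the conf file defines one multi-line variable, and the unquoted `$KUBE_APISERVER_OPTS` in ExecStart is word-split into individual flags (the leading `-` in `EnvironmentFile=-` just means "ignore if the file is missing"). The same split can be mimicked in plain shell with a miniature, hypothetical conf file:

```shell
tmp=$(mktemp -d)
# Miniature stand-in for kube-apiserver.conf (two hypothetical flags)
cat > "$tmp/demo.conf" << 'EOF'
DEMO_OPTS=" \
  --secure-port=6443 \
  --v=4"
EOF

. "$tmp/demo.conf"     # roughly what EnvironmentFile= does
set -- $DEMO_OPTS      # unquoted expansion word-splits into separate flags,
                       # like \$KUBE_APISERVER_OPTS in ExecStart
count=$#; first=$1
echo "$count flags, first: $first"   # -> 2 flags, first: --secure-port=6443
rm -rf "$tmp"
```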
Check the result; any response means it is working:

```shell
curl --insecure https://192.168.1.131:6443/
```
kube-controller-manager

Config file and systemd service file:
```shell
cd /opt/work/conf/
kubectl config set-cluster kubernetes --certificate-authority=/opt/work/ssl/ca/ca.pem --embed-certs=true --server=https://192.168.1.131:6443 --kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager --client-certificate=/opt/work/ssl/controller-manager/kube-controller-manager.pem --client-key=/opt/work/ssl/controller-manager/kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
cd /opt/work/kubernetes/conf/
cat > kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS=" \\
  --bind-address=127.0.0.1 \\
  --leader-elect=true \\
  --kubeconfig=/opt/work/conf/kube-controller-manager.kubeconfig \\
  --allocate-node-cidrs=true \\
  --service-cluster-ip-range=10.255.0.0/24 \\
  --cluster-cidr=192.10.0.0/16 \\
  --cluster-signing-cert-file=/opt/work/ssl/ca/ca.pem \\
  --cluster-signing-key-file=/opt/work/ssl/ca/ca-key.pem \\
  --root-ca-file=/opt/work/ssl/ca/ca.pem \\
  --service-account-private-key-file=/opt/work/ssl/ca/ca-key.pem \\
  --cluster-name=kubernetes \\
  --experimental-cluster-signing-duration=87600h \\
  --feature-gates=RotateKubeletServerCertificate=true \\
  --controllers=*,bootstrapsigner,tokencleaner \\
  --tls-cert-file=/opt/work/ssl/controller-manager/kube-controller-manager.pem \\
  --tls-private-key-file=/opt/work/ssl/controller-manager/kube-controller-manager-key.pem \\
  --use-service-account-credentials=true \\
  --alsologtostderr=true \\
  --logtostderr=false \\
  --log-dir=/opt/work/logs/kubernetes/controller-manager \\
  --v=2"
EOF
cd /opt/work/kubernetes/systemd/
cat > kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/work/kubernetes/conf/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
ln -sf /opt/work/kubernetes/systemd/kube-controller-manager.service /lib/systemd/system/
systemctl enable --now kube-controller-manager
```
Check the result:

```shell
systemctl status kube-controller-manager
```
kube-scheduler

Config file and systemd service file:
```shell
cd /opt/work/conf/
kubectl config set-cluster kubernetes --certificate-authority=/opt/work/ssl/ca/ca.pem --embed-certs=true --server=https://192.168.1.131:6443 --kubeconfig=kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler --client-certificate=/opt/work/ssl/scheduler/kube-scheduler.pem --client-key=/opt/work/ssl/scheduler/kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig
kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
cd /opt/work/kubernetes/conf/
cat > kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS=" \\
  --address=127.0.0.1 \\
  --kubeconfig=/opt/work/conf/kube-scheduler.kubeconfig \\
  --leader-elect=true \\
  --alsologtostderr=true \\
  --logtostderr=false \\
  --log-dir=/opt/work/logs/kubernetes/scheduler \\
  --v=2"
EOF
cd /opt/work/kubernetes/systemd/
cat > kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/work/kubernetes/conf/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
ln -sf /opt/work/kubernetes/systemd/kube-scheduler.service /lib/systemd/system/
systemctl enable --now kube-scheduler
```
Check the result:

```shell
systemctl status kube-scheduler
```
kubectl

If using kubectl produces the following error, it can be fixed with the steps below:

```
The connection to the server localhost:8080 was refused - did you specify the right host or port?
```

```shell
cd /opt/work/conf/
kubectl config set-cluster kubernetes --certificate-authority=/opt/work/ssl/ca/ca.pem --embed-certs=true --server=https://192.168.1.131:6443 --kubeconfig=kube.config
kubectl config set-credentials admin --client-certificate=/opt/work/ssl/kubectl/admin.pem --client-key=/opt/work/ssl/kubectl/admin-key.pem --embed-certs=true --kubeconfig=kube.config
kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config
kubectl config use-context kubernetes --kubeconfig=kube.config
mkdir ~/.kube/
cp kube.config ~/.kube/config
kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
```
If kubectl has no tab completion, it can be enabled with the following commands:

```shell
apt install -y bash-completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
```
Network components

calico

```shell
cd /opt/work/kubernetes/components/
wget https://docs.projectcalico.org/v3.22/manifests/calico.yaml
```
The default 192.168.0.0/16 conflicts with my LAN, so I changed it to 192.10.0.0/16. Locate CALICO_IPV4POOL_CIDR, remove the comment markers, and change it to the following.
```yaml
- name: CALICO_IPV4POOL_CIDR
  value: "192.10.0.0/16"
- name: CALICO_DISABLE_FILE_LOGGING
  value: "true"
```
Apply it:

```shell
kubectl apply -f calico.yaml
```
coredns

```shell
cd /opt/work/kubernetes/components/
vim coredns.yaml
```

Copy the following into coredns.yaml.
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
                - key: k8s-app
                  operator: In
                  values: ["kube-dns"]
            topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        image: coredns/coredns:1.9.0
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.255.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
```
Run it:

```shell
kubectl apply -f coredns.yaml
```
Of course, you can also download a copy and modify it yourself:

```shell
cd /opt/work/kubernetes/components/
wget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/coredns.yaml.sed
mv coredns.yaml.sed coredns.yaml
sed -i 's/CLUSTER_DOMAIN/cluster.local/g' coredns.yaml
sed -i 's/REVERSE_CIDRS/in-addr.arpa ip6.arpa/g' coredns.yaml
sed -i 's/UPSTREAMNAMESERVER/\/etc\/resolv.conf/g' coredns.yaml
sed -i 's/STUBDOMAINS//g' coredns.yaml
sed -i 's/CLUSTER_DNS_IP/10.255.0.2/g' coredns.yaml
```
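Those sed commands just fill in the placeholders that the coredns.yaml.sed template ships with. Their effect can be seen on a toy template (placeholder names are the real ones; the sample lines themselves are made up for illustration):

```shell
tmp=$(mktemp -d)
# Tiny stand-in for coredns.yaml.sed with the same placeholders
cat > "$tmp/sample.yaml" << 'EOF'
kubernetes CLUSTER_DOMAIN REVERSE_CIDRS
forward . UPSTREAMNAMESERVER
clusterIP: CLUSTER_DNS_IP
EOF

sed -i 's/CLUSTER_DOMAIN/cluster.local/g' "$tmp/sample.yaml"
sed -i 's/REVERSE_CIDRS/in-addr.arpa ip6.arpa/g' "$tmp/sample.yaml"
sed -i 's/UPSTREAMNAMESERVER/\/etc\/resolv.conf/g' "$tmp/sample.yaml"
sed -i 's/CLUSTER_DNS_IP/10.255.0.2/g' "$tmp/sample.yaml"

cat "$tmp/sample.yaml"
# -> kubernetes cluster.local in-addr.arpa ip6.arpa
# -> forward . /etc/resolv.conf
# -> clusterIP: 10.255.0.2
result=$(cat "$tmp/sample.yaml")
rm -rf "$tmp"
```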