Kubernetes

Kubernetes in Action reading notes 002: installing a Kubernetes cluster with kubeadm

1 Environment Overview

We are going to build a 3-node Kubernetes cluster: 1 master node and 2 worker nodes. The OS is Linux, CentOS 7.4 or later (CentOS 8 also works; just avoid versions that are too old). If you have the resources, use 3 physical machines; otherwise create 3 Linux VMs. Alternatively, buy ECS instances from a cloud vendor, or use their managed Kubernetes offerings directly (e.g. Google Kubernetes Engine, or the equivalents from Amazon and Microsoft); if you already have a Kubernetes environment, you can skip this chapter. If machine resources are tight, you can also use minikube to create a single-node Kubernetes environment and follow this series with that.

2 Prepare the Machines

IP address      OS                                     hostname      kernel version           role
172.16.11.168   CentOS Linux release 7.4.1708 (Core)   master-node   3.10.0-693.el7.x86_64    master
172.16.11.148   CentOS Linux release 7.5.1804 (Core)   node-1        3.10.0-862.el7.x86_64    worker
172.16.11.161   CentOS Linux release 7.5.1804 (Core)   node-2        3.10.0-862.el7.x86_64    worker

3 Configuration Steps

0 Set the hostname on each of the 3 machines

[root@localhost ~]# hostnamectl set-hostname master-node
[root@localhost ~]# hostname
master-node
[root@localhost ~]#

#On the other 2 machines
hostnamectl set-hostname node-1
hostnamectl set-hostname node-2

1 Modify /etc/hosts on each of the 3 machines (optional)

172.16.11.168 master-node
172.16.11.148 node-1
172.16.11.161 node-2
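The three entries above can be appended idempotently, so re-running a setup script does not duplicate them. A minimal sketch (demoed on a scratch file rather than the real /etc/hosts):

```shell
# Demo on a scratch file; on the real machines, point HOSTS_FILE at /etc/hosts.
HOSTS_FILE=$(mktemp)

add_host() {
  # Append "IP hostname" only if that hostname is not already present.
  grep -q " $2$" "$HOSTS_FILE" || echo "$1 $2" >> "$HOSTS_FILE"
}

add_host 172.16.11.168 master-node
add_host 172.16.11.148 node-1
add_host 172.16.11.161 node-2
add_host 172.16.11.148 node-1   # re-running is a no-op

cat "$HOSTS_FILE"
```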

2 Disable SELinux on each of the 3 machines

sed -i edits the file in place; --follow-symlinks makes sed follow a symbolic link and modify its target (instead of replacing the link with a regular file); 's/xxx/yy/g' is a regular-expression substitution: every match of the pattern xxx is replaced with yy.

[root@localhost ~]# setenforce 0
[root@localhost ~]# sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
[root@localhost ~]#
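The substitution can be tried out safely on a scratch copy before touching the real config file (a sketch; on the real machines the target is /etc/sysconfig/selinux):

```shell
# Demo the SELINUX substitution on a scratch copy instead of the real
# /etc/sysconfig/selinux.
CFG=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$CFG"

sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' "$CFG"

grep '^SELINUX=' "$CFG"   # prints SELINUX=disabled
```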

3 Stop the firewall on each of the 3 machines

[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]#

Adjust this to your actual situation: you may need to keep firewalld running and open the required ports instead (e.g. 6443 for the API server and 10250 for the kubelet).

4 Disable swap on each of the 3 machines

[root@localhost ~]# free -m
            total       used       free     shared buff/cache   available
Mem:           7983        1129        5459          92        1394        6445
Swap:             0           0           0
[root@localhost ~]#
swapoff -a
free -m
#Comment out the SWAP mount entry in /etc/fstab so the change persists across reboots
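The fstab edit can be scripted with sed as well. A sketch, demoed on a scratch copy of /etc/fstab:

```shell
# Comment out swap mount entries; demoed on a scratch copy of /etc/fstab.
FSTAB=$(mktemp)
cat > "$FSTAB" <<'EOF'
/dev/mapper/centos-root /    xfs     defaults 0 0
/dev/mapper/centos-swap swap swap    defaults 0 0
EOF

# Prefix not-yet-commented lines containing a "swap" field with '#'.
sed -i '/^[^#].*\sswap\s/ s/^/#/' "$FSTAB"

cat "$FSTAB"
```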

5 Reboot all 3 machines

reboot

6 Add the Kubernetes repo on each of the 3 machines

Because of network restrictions in mainland China, we use the Alibaba Cloud mirror:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

7 Install kubeadm, kubelet, kubectl and docker on each of the 3 machines

yum install kubeadm-1.23.1 kubelet-1.23.1 kubectl-1.23.1 docker -y
#Alternative, if the repo GPG signature check fails:
yum install kubeadm docker --nogpgcheck

[root@master-node ~]# yum install kubeadm-1.23.1 kubelet-1.23.1 kubectl-1.23.1 docker -y
已加载插件:fastestmirror, langpacks, product-id, search-disabled-repos, subscription-manager

This system is not registered with an entitlement server. You can use subscription-manager to register.

base                                                                                                                                         | 3.6 kB  00:00:00    
draios                                                                                                                                       | 3.0 kB  00:00:00    
epel/x86_64/metalink                                                                                                                         | 6.6 kB  00:00:00    
extras                                                                                                                                       | 2.9 kB  00:00:00    
kubernetes/signature                                                                                                                         |  844 B  00:00:00    
kubernetes/signature                                                                                                                         | 1.4 kB  00:00:00 !!!
nodesource                                                                                                                                   | 2.5 kB  00:00:00    
percona-release-noarch                                                                                                                       | 1.5 kB  00:00:00    
percona-release-x86_64                                                                                                                       | 2.9 kB  00:00:00    
updates                                                                                                                                       | 2.9 kB  00:00:00    
Loading mirror speeds from cached hostfile
* base: mirrors.cn99.com
* epel: hkg.mirror.rackspace.com
* extras: mirrors.cn99.com
* updates: mirrors.ustc.edu.cn
正在解决依赖关系
--> 正在检查事务
---> 软件包 docker.x86_64.2.1.13.1-209.git7d71120.el7.centos 将被 安装
--> 正在处理依赖关系 docker-common = 2:1.13.1-209.git7d71120.el7.centos,它被软件包 2:docker-1.13.1-209.git7d71120.el7.centos.x86_64 需要
--> 正在处理依赖关系 docker-client = 2:1.13.1-209.git7d71120.el7.centos,它被软件包 2:docker-1.13.1-209.git7d71120.el7.centos.x86_64 需要
---> 软件包 kubeadm.x86_64.0.1.23.1-0 将被 安装
--> 正在处理依赖关系 kubernetes-cni >= 0.8.6,它被软件包 kubeadm-1.23.1-0.x86_64 需要
---> 软件包 kubectl.x86_64.0.1.23.1-0 将被 安装
---> 软件包 kubelet.x86_64.0.1.23.1-0 将被 安装
--> 正在检查事务
---> 软件包 docker-client.x86_64.2.1.13.1-209.git7d71120.el7.centos 将被 安装
---> 软件包 docker-common.x86_64.2.1.13.1-209.git7d71120.el7.centos 将被 安装
---> 软件包 kubernetes-cni.x86_64.0.0.8.7-0 将被 安装
--> 解决依赖关系完成

依赖关系解决

=====================================================================================================================================================================
Package                             架构                         版本                                                       源                               大小
=====================================================================================================================================================================
正在安装:
docker                               x86_64                       2:1.13.1-209.git7d71120.el7.centos                         extras                            17 M
kubeadm                             x86_64                       1.23.1-0                                                   kubernetes                       9.0 M
kubectl                             x86_64                       1.23.1-0                                                   kubernetes                       9.5 M
kubelet                             x86_64                       1.23.1-0                                                   kubernetes                        21 M
为依赖而安装:
docker-client                       x86_64                       2:1.13.1-209.git7d71120.el7.centos                         extras                           3.9 M
docker-common                       x86_64                       2:1.13.1-209.git7d71120.el7.centos                         extras                           101 k
kubernetes-cni                       x86_64                       0.8.7-0                                                   kubernetes                        19 M

事务概要
=====================================================================================================================================================================
安装  4 软件包 (+3 依赖软件包)

总下载量:79 M
安装大小:338 M
Downloading packages:
(1/7): docker-client-1.13.1-209.git7d71120.el7.centos.x86_64.rpm                                                                             | 3.9 MB  00:00:01    
(2/7): docker-common-1.13.1-209.git7d71120.el7.centos.x86_64.rpm                                                                             | 101 kB  00:00:05    
(3/7): docker-1.13.1-209.git7d71120.el7.centos.x86_64.rpm                                                                                     |  17 MB  00:00:05    
(4/7): 0ec1322286c077c3dd975de1098d4c938b359fb59d961f0c7ce1b35bdc98a96c-kubeadm-1.23.1-0.x86_64.rpm                                           | 9.0 MB  00:00:12    
(5/7): 8d4a11b0303bf2844b69fc4740c2e2f3b14571c0965534d76589a4940b6fafb6-kubectl-1.23.1-0.x86_64.rpm                                           | 9.5 MB  00:00:13    
(6/7): db7cb5cb0b3f6875f54d10f02e625573988e3e91fd4fc5eef0b1876bb18604ad-kubernetes-cni-0.8.7-0.x86_64.rpm                                     |  19 MB  00:00:25    
(7/7): 7a203c8509258e0c79c8c704406b2d8f7d1af8ff93eadaa76b44bb8e9f9cbabd-kubelet-1.23.1-0.x86_64.rpm                                           |  21 MB  00:00:27    
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
总计                                                                                                                                 1.9 MB/s |  79 MB  00:00:40    
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
警告:RPM 数据库已被非 yum 程序修改。
正在安装   : kubernetes-cni-0.8.7-0.x86_64                                                                                                                    1/7
正在安装   : kubelet-1.23.1-0.x86_64                                                                                                                          2/7
正在安装   : 2:docker-common-1.13.1-209.git7d71120.el7.centos.x86_64                                                                                          3/7
正在安装   : 2:docker-client-1.13.1-209.git7d71120.el7.centos.x86_64                                                                                          4/7
正在安装   : kubectl-1.23.1-0.x86_64                                                                                                                          5/7
正在安装   : kubeadm-1.23.1-0.x86_64                                                                                                                          6/7
正在安装   : 2:docker-1.13.1-209.git7d71120.el7.centos.x86_64                                                                                                 7/7
验证中     : kubeadm-1.23.1-0.x86_64                                                                                                                          1/7
验证中     : kubelet-1.23.1-0.x86_64                                                                                                                          2/7
验证中     : kubernetes-cni-0.8.7-0.x86_64                                                                                                                    3/7
验证中     : 2:docker-common-1.13.1-209.git7d71120.el7.centos.x86_64                                                                                          4/7
验证中     : kubectl-1.23.1-0.x86_64                                                                                                                          5/7
验证中     : 2:docker-client-1.13.1-209.git7d71120.el7.centos.x86_64                                                                                          6/7
验证中     : 2:docker-1.13.1-209.git7d71120.el7.centos.x86_64                                                                                                 7/7

已安装:
docker.x86_64 2:1.13.1-209.git7d71120.el7.centos         kubeadm.x86_64 0:1.23.1-0         kubectl.x86_64 0:1.23.1-0         kubelet.x86_64 0:1.23.1-0        

作为依赖被安装:
docker-client.x86_64 2:1.13.1-209.git7d71120.el7.centos       docker-common.x86_64 2:1.13.1-209.git7d71120.el7.centos       kubernetes-cni.x86_64 0:0.8.7-0      

完毕!
[root@master-node ~]# rpm -qa|grep kube
kubelet-1.23.1-0.x86_64
kubeadm-1.23.1-0.x86_64
kubectl-1.23.1-0.x86_64
kubernetes-cni-0.8.7-0.x86_64
[root@master-node ~]# rpm -qa|grep docker
docker-client-1.13.1-209.git7d71120.el7.centos.x86_64
docker-common-1.13.1-209.git7d71120.el7.centos.x86_64
docker-1.13.1-209.git7d71120.el7.centos.x86_64
[root@master-node ~]#

8 Enable and start the docker and kubelet services on each of the 3 machines

systemctl enable docker
systemctl start docker
systemctl status docker
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet

9 A note on kubelet startup errors at this stage

Right after installation the kubelet fails to start and systemd keeps restarting it. This is expected: the kubelet has no configuration yet, and it will start cleanly once kubeadm init (on the master) or kubeadm join (on a worker) generates one. A typical status looks like:

[root@master-node ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
  Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
          └─10-kubeadm.conf
  Active: activating (auto-restart) (Result: exit-code) since 五 2022-07-29 23:53:51 CST; 8s ago
    Docs: https://kubernetes.io/docs/
Process: 2293 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
Main PID: 2293 (code=exited, status=1/FAILURE)

7月 29 23:53:51 master-node kubelet[2293]: Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_3DES_ED...
7月 29 23:53:51 master-node kubelet[2293]: --tls-min-version string                                   Minimum TLS version supported. Possible values: VersionTLS1...
7月 29 23:53:51 master-node kubelet[2293]: --tls-private-key-file string                             File containing x509 private key matching --tls-cert-file. ...
7月 29 23:53:51 master-node kubelet[2293]: --topology-manager-policy string                           Topology Manager policy to use. Possible values: 'none', 'b...
7月 29 23:53:51 master-node kubelet[2293]: --topology-manager-scope string                           Scope to which topology hints applied. Topology Manager col...
7月 29 23:53:51 master-node kubelet[2293]: -v, --v Level                                                 number for the log level verbosity
7月 29 23:53:51 master-node kubelet[2293]: --version version[=true]                                   Print version information and quit
7月 29 23:53:51 master-node kubelet[2293]: --vmodule pattern=N,...                                   comma-separated list of pattern=N settings for fi...og format)
7月 29 23:53:51 master-node kubelet[2293]: --volume-plugin-dir string                                 The full path of the directory in which to search for addit...
7月 29 23:53:51 master-node kubelet[2293]: --volume-stats-agg-period duration                         Specifies interval for kubelet to calculate and cache the v...
Hint: Some lines were ellipsized, use -l to show in full.
[root@master-node ~]#

10 Initialize Kubernetes Master and Setup Default User

Run this step only on the master node. Do not simply run a bare kubeadm init:

⚠️⚠️⚠️By default kubeadm pulls the required images from k8s.gcr.io (probing https://k8s.gcr.io/v1/_ping first), which is normally unreachable from machines in mainland China.

The workaround is to point kubeadm at the Alibaba Cloud mirror: kubeadm init --image-repository registry.aliyuncs.com/google_containers

⚠️⚠️⚠️In addition, we must specify the pod network CIDR. Otherwise the resulting Kubernetes cluster is still broken: the coredns pods among the cluster components fail, as in the events below!!!

Events:
Type     Reason                 Age                   From               Message
 ----     ------                  ----                   ----               -------
Warning FailedScheduling       21m (x10 over 31m)     default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
Normal   Scheduled               20m                   default-scheduler Successfully assigned kube-system/coredns-6d8c4cb4d-nfkvp to master21
Warning FailedCreatePodSandBox 20m                   kubelet           Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "c719af249ca09b2c0c1597b0f36ebd50f7e8790abcc81539c7529e14fd0e771a" network for pod "coredns-6d8c4cb4d-nfkvp": networkPlugin cni failed to set up pod "coredns-6d8c4cb4d-nfkvp_kube-system" network: open /run/flannel/subnet.env: no such file or directory
  Normal   SandboxChanged         5m38s (x464 over 20m) kubelet           Pod sandbox changed, it will be killed and re-created.
Warning FailedCreatePodSandBox 38s (x608 over 20m)   kubelet           (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "5d6735416999f70d47c419c31c6da15699d3987094d6a61c2169efed74b22463" network for pod "coredns-6d8c4cb4d-nfkvp": networkPlugin cni failed to set up pod "coredns-6d8c4cb4d-nfkvp_kube-system" network: open /run/flannel/subnet.env: no such file or directory
[root@master21 ~]#

So the final initialization command is: kubeadm init --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16

[root@master-node ~]# kubeadm init --image-repository registry.aliyuncs.com/google_containers  --pod-network-cidr=10.244.0.0/16
I0730 10:38:20.938532    2629 version.go:255] remote version is much newer: v1.24.3; falling back to: stable-1.23
[init] Using Kubernetes version: v1.23.9
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master-node] and IPs [10.96.0.1 172.16.11.168]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master-node] and IPs [172.16.11.168 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master-node] and IPs [172.16.11.168 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 10.504594 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master-node as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master-node as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: gt378n.ghjnbyxhw9z52v2y
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

 export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.11.168:6443 --token gt378n.ghjnbyxhw9z52v2y \
       --discovery-token-ca-cert-hash sha256:090c4734dd4dd6dfae04cb1beb3e9b9352fd62f0c3502e52bffb5bc28dfa807f
[root@master-node ~]#

At this point the kubelet service on the master is healthy, and the images pulled from the Alibaba Cloud mirror site are present locally:

systemctl status kubelet
docker images

Following the prompt in the init output, configure the default user.

To use the cluster as a regular (non-root) user, run:

 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config

Here I want to run Kubernetes directly as root, so it is enough to add the environment variable KUBECONFIG=/etc/kubernetes/admin.conf to .bash_profile.

vi .bash_profile 
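Instead of editing the file by hand, the export line can be appended idempotently. A sketch, demoed on a scratch file:

```shell
# Append the KUBECONFIG export only if it is not already there (idempotent).
# Demo on a scratch file; on the real master node use ~/.bash_profile.
PROFILE=$(mktemp)
LINE='export KUBECONFIG=/etc/kubernetes/admin.conf'

grep -qxF "$LINE" "$PROFILE" || echo "$LINE" >> "$PROFILE"
grep -qxF "$LINE" "$PROFILE" || echo "$LINE" >> "$PROFILE"   # second run: no-op

cat "$PROFILE"
```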

At this point, the deployment.apps/coredns component on master-node has still not come up:

NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns   0/2     2            0           86s

The main reason is that the pod network has not been configured yet. The kubeadm init output states this explicitly and points to:

https://kubernetes.io/docs/concepts/cluster-administration/addons/

Following that page, we choose Flannel:

  • Flannel is an overlay network provider that can be used with Kubernetes.

11 Configure the Pod Network

Note: run this step only on the master node.

Open Flannel's GitHub page; the documented install command is:

For Kubernetes v1.17+: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

One problem here: https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml is not reachable from our network, so download it elsewhere and save it locally. The file's content is:

[root@master-node w01_kubeadm_install_kubernetes_3_nodes_cluster]# cat flannel.yml 
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
[root@master-node w01_kubeadm_install_kubernetes_3_nodes_cluster]#

Apply it:

[root@master-node w01_kubeadm_install_kubernetes_3_nodes_cluster]# kubectl apply -f flannel.yml 
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@master-node w01_kubeadm_install_kubernetes_3_nodes_cluster]#

At this point all cluster components are healthy: deployment.apps/coredns shows 2/2.

NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns   2/2     2            2           33m

And /run/flannel/subnet.env has been generated on the master node. Its content:

[root@master-node ~]# cat /run/flannel/subnet.env 
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
[root@master-node ~]#
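Since subnet.env is a plain KEY=VALUE file, scripts can source it directly to pick up the node's flannel settings. A sketch, demoed on a replica of the file shown above:

```shell
# subnet.env is plain KEY=VALUE, so shell scripts can source it directly.
# Demo with a replica of the file; on a real node, source /run/flannel/subnet.env.
SUBNET_ENV=$(mktemp)
cat > "$SUBNET_ENV" <<'EOF'
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF

. "$SUBNET_ENV"
echo "node pod subnet: $FLANNEL_SUBNET (MTU $FLANNEL_MTU)"
```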

12 Add the Worker Nodes

Join the Worker Nodes to the Kubernetes Cluster

On each worker node (and only there), run the kubeadm join command printed at the end of step 10. If that token has expired (bootstrap tokens are valid for 24 hours by default), first generate a fresh join command on the master with:

kubeadm token create --print-join-command
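The --discovery-token-ca-cert-hash value is just the SHA-256 of the cluster CA certificate's public key, so it can also be recomputed from /etc/kubernetes/pki/ca.crt on the master. A sketch of that pipeline, demonstrated on a throwaway self-signed certificate (since the real ca.crt only exists on the master):

```shell
# Compute a kubeadm-style discovery hash: sha256 of the CA cert's public key (DER).
# Demonstrated on a throwaway self-signed cert; on the master, feed
# /etc/kubernetes/pki/ca.crt into the second command instead.
DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout "$DIR/ca.key" -out "$DIR/ca.crt" 2>/dev/null

HASH=$(openssl x509 -pubkey -in "$DIR/ca.crt" \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')

echo "sha256:$HASH"
```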

#node1
[root@node-1 ~]# kubeadm join 172.16.11.168:6443 --token gt378n.ghjnbyxhw9z52v2y \
>         --discovery-token-ca-cert-hash sha256:090c4734dd4dd6dfae04cb1beb3e9b9352fd62f0c3502e52bffb5bc28dfa807f
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@node-1 ~]#

#node2
[root@node-2 ~]# kubeadm join 172.16.11.168:6443 --token gt378n.ghjnbyxhw9z52v2y \
>         --discovery-token-ca-cert-hash sha256:090c4734dd4dd6dfae04cb1beb3e9b9352fd62f0c3502e52bffb5bc28dfa807f
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@node-2 ~]#

After they join, the kubelet service on each worker node is up and running automatically, and /run/flannel/subnet.env has been generated there as well.

After a short wait, the master shows all nodes in Ready state:

[root@master-node ~]# kubectl get nodes 
NAME         STATUS   ROLES                 AGE   VERSION
master-node   Ready   control-plane,master   31m   v1.23.1
node-1       Ready   <none>                 26m   v1.23.1
node-2       Ready   <none>                 25m   v1.23.1
[root@master-node ~]#

13 Install bash-completion on the master

yum install bash-completion

[root@master-node ~]# yum install bash-completion
已加载插件:fastestmirror, langpacks, product-id, search-disabled-repos, subscription-manager

This system is not registered with an entitlement server. You can use subscription-manager to register.

Loading mirror speeds from cached hostfile
* base: mirrors.cn99.com
* epel: hkg.mirror.rackspace.com
* extras: mirrors.cn99.com
* updates: mirrors.ustc.edu.cn
正在解决依赖关系
--> 正在检查事务
---> 软件包 bash-completion.noarch.1.2.1-8.el7 将被 安装
--> 解决依赖关系完成

依赖关系解决

=====================================================================================================================================================================
Package                                     架构                               版本                                       源                                 大小
=====================================================================================================================================================================
正在安装:
bash-completion                             noarch                              1:2.1-8.el7                               base                               87 k

Transaction Summary
=====================================================================================================================================================================
Install  1 Package

Total download size: 87 k
Installed size: 263 k
Is this ok [y/d/N]: y
Downloading packages:
bash-completion-2.1-8.el7.noarch.rpm                                                                                                         |  87 kB  00:00:00    
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
Installing : 1:bash-completion-2.1-8.el7.noarch                                                                                                               1/1
Verifying  : 1:bash-completion-2.1-8.el7.noarch                                                                                                               1/1

Installed:
bash-completion.noarch 1:2.1-8.el7                                                                                                                                

Complete!
[root@master-node ~]#

Then add the following line to root's ~/.bash_profile: source <(kubectl completion bash)

[root@master-node ~]# cat .bash_profile 
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
      . ~/.bashrc
fi

# User specific environment and startup programs
GOROOT=/usr/local/go
GOPATH=/usr/local/go/gopackage

PATH=$PATH:$HOME/bin:/usr/local/go/bin
export LIBP2P_FORCE_PNET=1


export PATH GOPATH GOROOT
export KUBECONFIG=/etc/kubernetes/admin.conf
source <(kubectl completion bash)
[root@master-node ~]#
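On top of this, a short `k` alias can reuse the same completion. A sketch for ~/.bash_profile, assuming bash-completion is installed as above; the `k` alias is just a common convention, not required:

```shell
# Shorthand alias for kubectl that reuses kubectl's bash completion.
alias k=kubectl

# __start_kubectl is the entry function defined by the `kubectl completion
# bash` script sourced above; registering it for the alias makes `k <TAB>`
# behave like `kubectl <TAB>`. `complete` is a bash builtin, so guard it.
if [ -n "$BASH_VERSION" ]; then
  complete -o default -F __start_kubectl k
fi
```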

4 Test and verify the cluster

1 Create an NGINX Deployment

[root@master-node ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
[root@master-node ~]# kubectl get pods
NAME                     READY   STATUS             RESTARTS   AGE
nginx-85b98978db-2tdlk   0/1     ContainerCreating   0         3s
[root@master-node ~]# kubectl get pods -owide
NAME                     READY   STATUS   RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
nginx-85b98978db-2tdlk   1/1     Running   0         9s    10.244.2.2   node-2   <none>           <none>
[root@master-node ~]#

2 Create a NodePort Service

[root@master-node ~]# kubectl create service nodeport nginx --tcp=80:80
service/nginx created
[root@master-node ~]# kubectl get service
NAME         TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)       AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP       55m
nginx       NodePort    10.96.209.101   <none>        80:32007/TCP   9s
[root@master-node ~]#
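The node port (32007 here) is assigned at random from the 30000-32767 range, so a script should look it up rather than hard-code it. A sketch with two options, assuming the Service is named nginx; `node_port` is a hypothetical helper that parses the PORT(S) column as a fallback to jsonpath:

```shell
#!/bin/bash
# Preferred: ask the API server directly for the assigned nodePort.
if command -v kubectl >/dev/null 2>&1; then
  kubectl get service nginx -o jsonpath='{.spec.ports[0].nodePort}'
fi

# Fallback: parse the PORT(S) column ("80:32007/TCP") from table output
# piped in on stdin, for the service named in $1.
node_port() {
  awk -v svc="$1" '$1 == svc { split($5, p, /[:\/]/); print p[2] }'
}
```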

3 Access through the Service object

[root@master-node ~]# kubectl get service
NAME         TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)       AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP       55m
nginx       NodePort    10.96.209.101   <none>        80:32007/TCP   9s
[root@master-node ~]#

A client can reach the NodePort Service either through its ClusterIP on port 80, or through any cluster node's IP plus the node port (32007):

[root@master-node ~]# curl 10.96.209.101
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@master-node ~]# curl 172.16.11.168:32007
...(same "Welcome to nginx!" page as above)
[root@master-node ~]# curl 172.16.11.161:32007
...(same "Welcome to nginx!" page as above)
[root@master-node ~]# curl 172.16.11.148:32007
...(same "Welcome to nginx!" page as above)
[root@master-node ~]#

NGINX responds on every node, which confirms the cluster is working.

4 Delete the test Deployment and Service objects

[root@master-node ~]# kubectl delete service nginx 
service "nginx" deleted
[root@master-node ~]# kubectl delete deployments.apps nginx
deployment.apps "nginx" deleted
[root@master-node ~]#
