When deploying Kubernetes with kubeadm there is no hard requirement on how many master or slave nodes you have; at minimum a single master node is enough. To save resources, this article deploys only the master node.
| Hostname | OS | Role | Components deployed |
|---|---|---|---|
| k8s-master | CentOS 7 | master | etcd,kube-apiserver,kube-controller-manager,kubectl,kubeadm,kubelet,kube-proxy,flannel |
| k8s-slave1 | CentOS 7 | slave | kubectl,kubelet,kube-proxy,flannel |
| k8s-slave2 | CentOS 7 | slave | kubectl,kubelet,kube-proxy,flannel |
Ports to open
| Node | Ports |
|---|---|
| master | TCP: 6443, 2379, 2380, 60080, 60081; UDP: all ports open |
| slave | UDP: all ports open |
- [root@localhost ~]# hostnamectl set-hostname k8s-master
- [root@localhost ~]# bash
- [root@k8s-master ]#
- [root@k8s-master ~]# echo "192.168.18.7 k8s-master" >>/etc/hosts
iptables -P FORWARD ACCEPT
- sed -ri 's/.*swap.*/#&/' /etc/fstab
- swapoff -a && sysctl -w vm.swappiness=0
systemctl stop firewalld.service; systemctl disable firewalld.service; sed -i -r 's#(^SELINUX=)enforcing#\1disabled#g' /etc/selinux/config; setenforce 0
- [root@k8s-master ~]# cat >>/etc/sysctl.d/k8s.conf<<EOF
- net.bridge.bridge-nf-call-ip6tables = 1
- net.bridge.bridge-nf-call-iptables = 1
- net.ipv4.ip_forward = 1
- vm.max_map_count=262144
- EOF
-
- [root@k8s-master ~]# modprobe br_netfilter
- [root@k8s-master ~]# sysctl -p /etc/sysctl.d/k8s.conf
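You can then verify that the module is loaded and the parameters are in effect (a quick check; both sysctl values should come back as 1):
- [root@k8s-master ~]# lsmod | grep br_netfilter                                        # confirm the module is loaded
- [root@k8s-master ~]# sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward    # both should print "= 1"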
- [root@k8s-master ~]# wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
-
- [root@k8s-master ~]# mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo_bak
-
- [root@k8s-master ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
-
- [root@k8s-master ~]# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
-
- [root@k8s-master ~]# cat >>/etc/yum.repos.d/kubernetes.repo<<EOF
- [kubernetes]
- name=Kubernetes
- baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
- enabled=1
- gpgcheck=0
- EOF
-
- [root@k8s-master ~]# yum clean all && yum makecache
- [root@k8s-master ~]# yum install -y docker-ce
-
- [root@k8s-master ~]# mkdir /etc/docker/
- [root@k8s-master ~]# cat >>/etc/docker/daemon.json<<EOF
- {
- "graph": "/data/docker",
- "storage-driver": "overlay2",
- "insecure-registries": ["registry.access.redhat.com","quay.io","harbor.od.com","harbor.od.com:180"],
- "registry-mirrors": ["https://6kx4zyno.mirror.aliyuncs.com"],
- "exec-opts": ["native.cgroupdriver=systemd"],
- "live-restore": true
- }
- EOF
-
- [root@k8s-master ~]# mkdir -p /data/docker
- [root@k8s-master ~]# systemctl start docker ; systemctl enable docker
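Before continuing, it is worth confirming that Docker is running and actually using the systemd cgroup driver set in daemon.json, since a cgroup-driver mismatch with the kubelet is a common cause of kubeadm init failures (a quick check):
- [root@k8s-master ~]# systemctl is-active docker                          # should print "active"
- [root@k8s-master ~]# docker info 2>/dev/null | grep -i "cgroup driver"   # should print "Cgroup Driver: systemd"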
The same procedure has also been verified with v1.21.1 without issues.
- [root@k8s-master ~]# yum install -y kubelet-1.16.2 kubeadm-1.16.2 kubectl-1.16.2 --disableexcludes=kubernetes
-
- [root@k8s-master ~]# kubeadm version
- kubeadm version: &version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:15:39Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
-
- [root@k8s-master ~]# systemctl enable kubelet
- [root@k8s-master ~]# mkdir /opt/k8s-install/;cd /opt/k8s-install/
- [root@k8s-master k8s-install]# kubeadm config print init-defaults > kubeadm.yaml
-
- [root@k8s-master k8s-install]# vi kubeadm.yaml
- apiVersion: kubeadm.k8s.io/v1beta2
- bootstrapTokens:
- - groups:
-   - system:bootstrappers:kubeadm:default-node-token
-   token: abcdef.0123456789abcdef
-   ttl: 24h0m0s
-   usages:
-   - signing
-   - authentication
- kind: InitConfiguration
- localAPIEndpoint:
-   advertiseAddress: 192.168.18.7    # IP address of the apiserver (this master node)
-   bindPort: 6443                    # apiserver port
- nodeRegistration:
-   criSocket: /var/run/dockershim.sock
-   name: k8s-master                  # hostname of the apiserver (master) node; this is the NAME shown by kubectl get node
-   taints:                           # taint for this node; kubeadm uses this file to initialize the master, so the master is tainted by default (taints: null is also valid)
-   - effect: NoSchedule
-     key: node-role.kubernetes.io/master
- ---
- apiServer:
-   timeoutForControlPlane: 4m0s
- apiVersion: kubeadm.k8s.io/v1beta2
- certificatesDir: /etc/kubernetes/pki
- clusterName: kubernetes
- controllerManager: {}
- dns:
-   type: CoreDNS
- etcd:
-   local:
-     dataDir: /var/lib/etcd
- imageRepository: registry.aliyuncs.com/google_containers   # changed to the Aliyun mirror
- kind: ClusterConfiguration
- kubernetesVersion: v1.16.2
- networking:
-   dnsDomain: cluster.local          # no need to change
-   podSubnet: 172.7.0.0/16           # added field: pod CIDR, needed by the flannel plugin; once defined it takes precedence for container IPs, otherwise the bip setting in Docker's daemon.json is used
-   serviceSubnet: 10.96.0.0/12       # service CIDR; can be left unchanged
- scheduler: {}
-
- [root@k8s-master k8s-install]#
Pull the images locally in advance so that an image pull problem does not cause the deployment to fail (run on master and slave nodes).
- [root@k8s-master k8s-install]# kubeadm config images pull --config kubeadm.yaml
- [config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.16.2
- [config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.16.2
- [config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.16.2
- [config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.16.2
- [config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.1
- [config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.3.15-0
- [config/images] Pulled registry.aliyuncs.com/google_containers/coredns:1.6.2
-
- [root@k8s-master k8s-install]# docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.16.2
- [root@k8s-master k8s-install]# docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.16.2
- [root@k8s-master k8s-install]# docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.16.2
- [root@k8s-master k8s-install]# docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.16.2
- [root@k8s-master k8s-install]# docker pull registry.aliyuncs.com/google_containers/pause:3.1
- [root@k8s-master k8s-install]# docker pull registry.aliyuncs.com/google_containers/etcd:3.3.15-0
- [root@k8s-master k8s-install]# docker pull registry.aliyuncs.com/google_containers/coredns:1.6.2
[root@k8s-master k8s-install]# kubeadm init --config kubeadm.yaml    # the "...has initialized successfully!" message at the end of the output means it succeeded

The output then tells you to run:
- mkdir -p $HOME/.kube
- cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
- chown $(id -u):$(id -g) $HOME/.kube/config
- [root@k8s-master .kube]# kubectl get node
- NAME STATUS ROLES AGE VERSION
- k8s-master NotReady master 6m14s v1.16.2
Run the following command on each slave node; it is printed in the output after a successful kubeadm init:
- kubeadm join 192.168.18.7:6443 --token abcdef.0123456789abcdef \
- --discovery-token-ca-cert-hash sha256:7d9e753992ae96e0d5bd34e20129a5b9e5f65b355f69e8d59f89f578c1558a0d

The token generated by kubeadm init (used in `kubeadm join 192.168.18.7:6443`) is only valid for 24 hours; after that a new one must be issued. Running an expired kubeadm join to add a node to the master fails with `error execution phase preflight: couldn't validate the identity of the API Server: abort connecting to API servers after timeout of 5m0s`, i.e. the node cannot be registered and the connection attempt is abandoned after five minutes. Details below.

This error can have several causes, but the two main ones are:
1. The token has expired
In this case, generate a new token with kubeadm:
- # Fix:
- # regenerate the token on the master
- [root@master ~]# kubeadm token generate    # generate a new token
- 55rc0f.kdsdbg7vpymrkhr1                    # this value is used in the next command
- [root@master ~]# kubeadm token create 55rc0f.kdsdbg7vpymrkhr1 --print-join-command --ttl=0    # print the join command for this token (--ttl=0 means it never expires)
- kubeadm join 192.168.206.10:6443 --token 55rc0f.kdsdbg7vpymrkhr1 --discovery-token-ca-cert-hash sha256:73bb4deef64b7ceb47710d049e2f0d89f439f833baf3812d7868cf07da515fae
Then run the kubeadm join command printed above on the node you want to add.
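If you only want to check whether the original token is still valid, or need to recompute the CA certificate hash used by --discovery-token-ca-cert-hash, the following can be run on the master (the hash will of course differ in your cluster):
- [root@master ~]# kubeadm token list    # list existing bootstrap tokens and their expiry times
- [root@master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'    # recompute the sha256 CA cert hash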

2. The k8s API server is unreachable
In this case, check and disable firewalld and SELinux on all servers:
[root@master ~]#setenforce 0
[root@master ~]#sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
[root@master ~]#systemctl disable firewalld --now
The node status shows NotReady because the flannel plugin has not been deployed. Flannel is the component that allows pods on different nodes to reach each other, and pods to reach each node's host, so it is indispensable; without it the node stays NotReady. Even if you only have the single master node right now you still have to deploy it, because what k8s checks for is the presence of a CNI network plugin.
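To confirm that the missing CNI plugin is indeed the cause, the node condition and the CNI config directory can be checked (a quick sketch; the directory path is the kubelet default):
- [root@k8s-master ~]# kubectl describe node k8s-master | grep -i -A3 ready    # the Ready condition message mentions that the CNI config is uninitialized
- [root@k8s-master ~]# ls /etc/cni/net.d/                                       # empty until a CNI plugin such as flannel is installed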

https://github.com/flannel-io/flannel/tree/v0.12.0

Or:
- echo "199.232.68.133 raw.githubusercontent.com" >> /etc/hosts
- wget https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
Understanding flannel: how does flannel make pod-to-pod communication work? A pod is the smallest unit in k8s, a container is the smallest unit in Docker, and a pod may contain one or more containers, so what k8s ultimately manages is containers. How do containers talk to each other? On the same machine they communicate through docker0, which acts like a switch and gateway: one container reaches another via that gateway. But how does a container reach a container on another physical machine? docker0 cannot find the remote container, so it hands the traffic up to the host's physical NIC. How does that NIC know where the other machine's containers live? That is where flannel comes in.

Suppose there are two physical machines, A and B. A talks to B through ens33 with IP 10.4.7.21/24, so flannel is bound to ens33. In A's daemon.json the pod network for this host is configured as 172.7.21.0/24 (every pod started on A gets an IP from that range), while B's pod network is 172.7.22.0/24. Flannel then installs a route on A such as `172.7.22.0/24 via 10.4.7.22 dev ens33`, meaning: traffic for 172.7.22.0/24 is sent out of ens33 to 10.4.7.22. Concretely, a pod on A sends a packet to an address in 172.7.22.0/24; docker0 has no matching address and passes it up to ens33; the route says packets for 172.7.22.0/24 go to 10.4.7.22 via ens33; when B receives the packet it sees the destination is in 172.7.22.0/24, hands it to docker0, and docker0 delivers it to the pod.
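Under the behavior described above, the routing table on machine A would look roughly like this (a sketch using the example addresses from this paragraph; with the vxlan backend configured later in this article the next hop is the flannel.1 VXLAN device rather than the physical NIC, but the idea is the same):
- [root@A ~]# ip route
- 172.7.21.0/24 dev docker0 proto kernel scope link src 172.7.21.1   # local pod network on A, reached via docker0
- 172.7.22.0/24 via 10.4.7.22 dev ens33                              # pod network on B, forwarded to B's physical IP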
Given the above, first find out the name of the physical NIC.
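For example (a sketch; the NIC name and addresses will differ in your environment):
- [root@k8s-master ~]# ip route | grep default      # shows which NIC carries the default route, e.g. "default via 192.168.18.1 dev ens33"
- [root@k8s-master ~]# ip -4 addr show ens33        # confirm the host IP is bound to that NIC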

Open \flannel-0.12.0\Documentation\kube-flannel.yml and edit it, adding `--iface=<NIC name>`; without this option flannel defaults to the first physical NIC.


- ---
- kind: Namespace
- apiVersion: v1
- metadata:
-   name: kube-flannel
-   labels:
-     pod-security.kubernetes.io/enforce: privileged
- ---
- kind: ClusterRole
- apiVersion: rbac.authorization.k8s.io/v1
- metadata:
-   name: flannel
- rules:
- - apiGroups:
-   - ""
-   resources:
-   - pods
-   verbs:
-   - get
- - apiGroups:
-   - ""
-   resources:
-   - nodes
-   verbs:
-   - get
-   - list
-   - watch
- - apiGroups:
-   - ""
-   resources:
-   - nodes/status
-   verbs:
-   - patch
- - apiGroups:
-   - "networking.k8s.io"
-   resources:
-   - clustercidrs
-   verbs:
-   - list
-   - watch
- ---
- kind: ClusterRoleBinding
- apiVersion: rbac.authorization.k8s.io/v1
- metadata:
-   name: flannel
- roleRef:
-   apiGroup: rbac.authorization.k8s.io
-   kind: ClusterRole
-   name: flannel
- subjects:
- - kind: ServiceAccount
-   name: flannel
-   namespace: kube-flannel
- ---
- apiVersion: v1
- kind: ServiceAccount
- metadata:
-   name: flannel
-   namespace: kube-flannel
- ---
- kind: ConfigMap
- apiVersion: v1
- metadata:
-   name: kube-flannel-cfg
-   namespace: kube-flannel
-   labels:
-     tier: node
-     app: flannel
- data:
-   cni-conf.json: |
-     {
-       "name": "cbr0",
-       "cniVersion": "0.3.1",
-       "plugins": [
-         {
-           "type": "flannel",
-           "delegate": {
-             "hairpinMode": true,
-             "isDefaultGateway": true
-           }
-         },
-         {
-           "type": "portmap",
-           "capabilities": {
-             "portMappings": true
-           }
-         }
-       ]
-     }
-   net-conf.json: |
-     {
-       "Network": "172.7.0.0/16",
-       "Backend": {
-         "Type": "vxlan"
-       }
-     }
- ---
- apiVersion: apps/v1
- kind: DaemonSet
- metadata:
-   name: kube-flannel-ds
-   namespace: kube-flannel
-   labels:
-     tier: node
-     app: flannel
- spec:
-   selector:
-     matchLabels:
-       app: flannel
-   template:
-     metadata:
-       labels:
-         tier: node
-         app: flannel
-     spec:
-       affinity:
-         nodeAffinity:
-           requiredDuringSchedulingIgnoredDuringExecution:
-             nodeSelectorTerms:
-             - matchExpressions:
-               - key: kubernetes.io/os
-                 operator: In
-                 values:
-                 - linux
-       hostNetwork: true
-       priorityClassName: system-node-critical
-       tolerations:
-       - operator: Exists
-         effect: NoSchedule
-       serviceAccountName: flannel
-       initContainers:
-       - name: install-cni-plugin
-         image: docker.io/flannel/flannel-cni-plugin:v1.1.2
-         #image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.2
-         command:
-         - cp
-         args:
-         - -f
-         - /flannel
-         - /opt/cni/bin/flannel
-         volumeMounts:
-         - name: cni-plugin
-           mountPath: /opt/cni/bin
-       - name: install-cni
-         image: docker.io/flannel/flannel:v0.20.2
-         #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
-         command:
-         - cp
-         args:
-         - -f
-         - /etc/kube-flannel/cni-conf.json
-         - /etc/cni/net.d/10-flannel.conflist
-         volumeMounts:
-         - name: cni
-           mountPath: /etc/cni/net.d
-         - name: flannel-cfg
-           mountPath: /etc/kube-flannel/
-       containers:
-       - name: kube-flannel
-         image: docker.io/flannel/flannel:v0.20.2
-         #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
-         command:
-         - /opt/bin/flanneld
-         args:
-         - --ip-masq
-         - --kube-subnet-mgr
-         - --iface=ens33
-         - --public-ip=$(PUBLIC_IP)
-         resources:
-           requests:
-             cpu: "100m"
-             memory: "50Mi"
-         securityContext:
-           privileged: false
-           capabilities:
-             add: ["NET_ADMIN", "NET_RAW"]
-         env:
-         - name: PUBLIC_IP
-           valueFrom:
-             fieldRef:
-               fieldPath: status.podIP
-         - name: POD_NAME
-           valueFrom:
-             fieldRef:
-               fieldPath: metadata.name
-         - name: POD_NAMESPACE
-           valueFrom:
-             fieldRef:
-               fieldPath: metadata.namespace
-         - name: EVENT_QUEUE_DEPTH
-           value: "5000"
-         volumeMounts:
-         - name: run
-           mountPath: /run/flannel
-         - name: flannel-cfg
-           mountPath: /etc/kube-flannel/
-         - name: xtables-lock
-           mountPath: /run/xtables.lock
-       volumes:
-       - name: run
-         hostPath:
-           path: /run/flannel
-       - name: cni-plugin
-         hostPath:
-           path: /opt/cni/bin
-       - name: cni
-         hostPath:
-           path: /etc/cni/net.d
-       - name: flannel-cfg
-         configMap:
-           name: kube-flannel-cfg
-       - name: xtables-lock
-         hostPath:
-           path: /run/xtables.lock
-           type: FileOrCreate
Pull the images in advance; if you change the image names in the manifest, every occurrence has to be changed consistently. The images pulled must match the manifest you apply: the kube-flannel.yml above references flannel v0.20.2 and the flannel-cni-plugin, while the older v0.12.0 manifest uses quay.io/coreos/flannel:v0.12.0-amd64.


docker pull docker.io/flannel/flannel:v0.20.2
docker pull docker.io/flannel/flannel-cni-plugin:v1.1.2
docker pull quay.io/coreos/flannel:v0.12.0-amd64    # only needed if you apply the older v0.12.0 manifest instead
Apply flannel:
[root@k8s-master k8s-install]# kubectl create -f kube-flannel.yml
Check the node status:
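For example (a sketch; the manifest above creates the kube-flannel namespace, while the older v0.12.0 manifest uses kube-system; once the flannel pod is Running, the node should switch from NotReady to Ready after a short delay):
[root@k8s-master k8s-install]# kubectl -n kube-flannel get pod    # the kube-flannel-ds pod should be Running
[root@k8s-master k8s-install]# kubectl get node                   # k8s-master should now report Ready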


Test and verify
- [root@k8s-master k8s-install]# kubectl run test-nginx --image=nginx:alpine
-
- [root@k8s-master k8s-install]# kubectl get pod
- NAME READY STATUS RESTARTS AGE
- test-nginx-5bd8859b98-9bs2p 0/1 Pending 0 59s
By default, in a kubeadm-deployed cluster the master node cannot be scheduled, i.e. it will not run workload pods, because the master carries a taint by default. The taint can be removed:
[root@k8s-master nginx]# kubectl taint node k8s-master node-role.kubernetes.io/master:NoSchedule-

Check whether anything is still unschedulable, for example:
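The taint and the effect of removing it can be checked like this (a sketch; after the taint is gone, the Pending test-nginx pod should get scheduled):
[root@k8s-master nginx]# kubectl describe node k8s-master | grep -i taint    # should show Taints: <none>
[root@k8s-master nginx]# kubectl get pod -o wide                             # test-nginx should move from Pending to Running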



Locate \ingress-nginx-nginx-0.30.0\deploy\static\mandatory.yaml

[root@k8s-master opt]# cd k8s-install/
[root@k8s-master k8s-install]# rz mandatory.yaml
[root@k8s-master k8s-install]# mv mandatory.yaml ingress-nginx.yaml
[root@k8s-master k8s-install]# vi ingress-nginx.yaml
Modify mandatory.yaml in two places. First, enable hostNetwork so the pod shares the host's network namespace: traffic hitting port 80 or 443 on the host is handed straight to the ingress-nginx container and then, via the ingress-controller (which provides the nginx proxy function), on to the Ingress resources. Second, add ingress=true to the nodeSelector: kubernetes.io/os: linux is a label every node carries, and because this is a Deployment with replicas: 1, the controller would otherwise drift to a different node whenever the cluster restarts, making it unclear where to send ingress traffic. Adding the extra selector pins it to one node. The relevant part of the edit is sketched below.
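The two changes in the Deployment's pod template would look roughly like this (a sketch, not the complete mandatory.yaml; only the relevant lines are shown):
- spec:
-   template:
-     spec:
-       hostNetwork: true          # added: share the host network namespace so host ports 80/443 hit the controller directly
-       nodeSelector:
-         kubernetes.io/os: linux
-         ingress: "true"          # added: only schedule onto nodes labeled ingress=true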



On the node where the controller will run, check whether ports 80 and 443 are already occupied (a quick check is shown below). Since only the master node is deployed here, it is the node that gets the ingress=true label:
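A quick way to check the ports (no output means the ports are free):
[root@k8s-master k8s-install]# ss -lntp | grep -wE ':80|:443'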

[root@k8s-master k8s-install]# kubectl label node k8s-master ingress=true

Pull the image in advance:
[root@k8s-master k8s-install]# docker pull quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0

Apply the resource manifest:
[root@k8s-master k8s-install]# kubectl create -f ingress-nginx.yaml
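The controller can then be verified (a sketch; mandatory.yaml creates the ingress-nginx namespace, and because of the hostNetwork change the pod should run on k8s-master and listen on the host's ports 80 and 443):
[root@k8s-master k8s-install]# kubectl -n ingress-nginx get pod -o wide    # nginx-ingress-controller should be Running on k8s-master
[root@k8s-master k8s-install]# ss -lntp | grep -wE ':80|:443'              # ports 80/443 now bound on the host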



[root@k8s-master nginx]# vi nginx.yaml
- apiVersion: apps/v1
- kind: Deployment
- metadata:
-   name: nginx-dp
-   labels:
-     app: nginx-dp
- spec:
-   selector:
-     matchLabels:
-       app: nginx-dp
-   template:
-     metadata:
-       labels:
-         app: nginx-dp
-     spec:
-       containers:
-       - name: nginx-dp
-         image: nginx:latest
-         ports:
-         - containerPort: 80
-
- ---
-
- apiVersion: v1
- kind: Service
- metadata:
-   labels:
-     app: nginx-dp
-   name: nginx-dp
-   namespace: default
- spec:
-   ports:
-   - port: 9000
-     protocol: TCP
-     targetPort: 80
-   selector:
-     app: nginx-dp
-   type: ClusterIP
-
- ---
-
- apiVersion: extensions/v1beta1
- kind: Ingress
- metadata:
-   name: nginx
-   namespace: default
- spec:
-   rules:
-   - host: nginx.default.com
-     http:
-       paths:
-       - path: /
-         backend:
-           serviceName: nginx-dp
-           servicePort: 9000
- status:
-   loadBalancer: {}
[root@k8s-master nginx]# kubectl apply -f nginx.yaml
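To test the Ingress, the hostname nginx.default.com has to resolve to the node running the ingress controller; a simple way is a hosts entry on the machine you test from (a sketch using this article's master IP):
[root@k8s-master nginx]# echo "192.168.18.7 nginx.default.com" >> /etc/hosts
[root@k8s-master nginx]# curl -I http://nginx.default.com/    # should return HTTP/1.1 200 OK served by the nginx-dp pod through ingress-nginx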



If you run into other problems during deployment, the following commands can be used to reset the node:
- kubeadm reset
- ifconfig cni0 down && ip link delete cni0
- ifconfig flannel.1 down && ip link delete flannel.1
- rm -rf /var/lib/cni/