• Kubernetes Part 1: Environment Setup


    Kubernetes environment setup

    1. One master, multiple workers: a single Master node and several Node nodes. Easy to set up, but the master is a single point of failure; suitable for test environments.
    2. Multiple masters, multiple workers: several Master nodes and several Node nodes. More involved to set up, but much more resilient; suitable for production environments.

    Environment notes

    • 1. Build the Kubernetes environment on cloud servers

    • 2. Build the Kubernetes environment locally with virtual machines (the approach used here)

    Install CentOS 7.5+, prepare one clean base VM, and then clone three virtual machines from it.
    When powering them on, it is recommended to start the three VMs one at a time.
    Note: after cloning, be sure to change each Linux VM's MAC address.

    1. Cluster setup approach

    1.1 Environment preparation

    1. One master, multiple workers: a single Master node and several Node nodes; easy to set up, but with single-point-of-failure risk; suitable for test environments.
    2. Multiple masters, multiple workers: several Master nodes and several Node nodes; harder to set up, but more reliable; suitable for production environments.
    For beginners who are just practicing, one master with multiple workers is enough.

    IP address       Role     OS          Configuration
    192.168.75.163   Master   CentOS 7.6  2 CPU cores, 2 GB RAM, 20 GB disk (2 cores are required)
    192.168.75.164   Node1    CentOS 7.6  2 CPU cores, 2 GB RAM, 20 GB disk
    192.168.75.165   Node2    CentOS 7.6  2 CPU cores, 2 GB RAM, 20 GB disk

    1.2 Initialize the environment

    Perform the following steps on all three servers.

    1. Check the operating system version
     # Check the version; CentOS must be 7.5 or later
     cat /etc/redhat-release
    
    2. Hostname resolution

    Configure hostname resolution; in production an internal DNS server is recommended.

    # Edit the hosts file
    vi /etc/hosts
    
    # Hostname resolution entries
    192.168.75.173  master
    192.168.75.167  node1
    192.168.75.175  node2
    
    # Then ping each node from the others to verify the configuration
    ping master
    ping node1
    ping node2
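
    The hosts entries above only resolve the names; each machine also needs to be given the matching hostname. The original does not show this step, but a common way to do it is with hostnamectl (run the appropriate line on each VM):

    # On the master
    hostnamectl set-hostname master
    
    # On node1
    hostnamectl set-hostname node1
    
    # On node2
    hostnamectl set-hostname node2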
    
    3. Time synchronization

    Kubernetes requires the clocks of all cluster nodes to be closely synchronized. Here the chronyd service is used to sync time from the network; in production an internal time server is recommended.

    yum install -y chrony
    systemctl start chronyd
    systemctl enable chronyd
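
    To confirm that chrony is actually syncing (a verification step assumed here, not part of the original), check its sources and compare the clocks:

    # Show the NTP sources chronyd is using
    chronyc sources
    
    # The output should match across all three nodes
    date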
    
    4. Disable the iptables and firewalld services

    Kubernetes and Docker generate a large number of iptables rules at runtime. To keep the system's own firewall rules from interfering with them, simply disable the system firewall services.

    # Stop and disable the firewalld service
    systemctl stop firewalld
    systemctl disable firewalld
    
    # Stop and disable the iptables service
    systemctl stop iptables
    systemctl disable iptables
    
    5. Disable SELinux

    SELinux is a Linux security service. If it is left enabled, all sorts of strange problems can appear while installing the cluster. Note that the system must be rebooted after making this change.

    # Edit the SELinux configuration
    vi /etc/selinux/config 
    
    # Change the following line to
    SELINUX=disabled
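
    The config change only takes effect after a reboot. If you also want SELinux switched to permissive mode for the current session (an extra step assumed here, not in the original), you can run:

    # Put SELinux into permissive mode for the running system
    setenforce 0
    
    # Verify the current mode
    getenforce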
    
    6. Disable the swap partition

    A swap partition is virtual memory: once physical memory is exhausted, disk space is used as if it were RAM.
    Having swap enabled has a very negative effect on performance, so Kubernetes requires swap to be disabled on every node.
    If for some reason swap really cannot be turned off, this must be declared explicitly with parameters during cluster installation.
    # Edit the partition configuration file /etc/fstab

    vi /etc/fstab
    
    # Comment out the swap line, e.g.  /dev/mapper/centos-swap swap ...
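
    Editing /etc/fstab only disables swap from the next boot onward. To turn it off for the current session and confirm the result (a companion step assumed here, not in the original):

    # Disable all swap devices immediately
    swapoff -a
    
    # The Swap line should now show 0
    free -h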
    
    7. Modify Linux kernel parameters

    Modify the Linux kernel parameters to enable bridge filtering and IP forwarding.

    # Edit the configuration file
    vi /etc/sysctl.d/kubernetes.conf
    
    # Add the following settings
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    
    # Load the bridge netfilter module first, otherwise the net.bridge.* keys do not exist yet
    modprobe br_netfilter
    
    # Check that the module is loaded
    lsmod | grep br_netfilter
    
    # Reload the configuration
    sysctl -p /etc/sysctl.d/kubernetes.conf
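
    To double-check that the settings took effect (a verification assumed here, not in the original), read the values back:

    # Each of these should print "... = 1"
    sysctl net.bridge.bridge-nf-call-iptables
    sysctl net.bridge.bridge-nf-call-ip6tables
    sysctl net.ipv4.ip_forward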
    
    8. Configure ipvs support

    In Kubernetes a Service can be proxied in one of two modes: one based on iptables, the other based on ipvs.
    ipvs performs noticeably better, but to use it the ipvs kernel modules have to be loaded manually.
    kube-proxy will then use ipvs instead of plain iptables.

    # 1 - Install ipset and ipvsadm
    yum install ipset ipvsadm -y
    
    # 2 - Write the modules that need to be loaded into a script file
    cat <<EOF > /etc/sysconfig/modules/ipvs.modules
    #!/bin/bash
    modprobe -- ip_vs
    modprobe -- ip_vs_rr
    modprobe -- ip_vs_wrr
    modprobe -- ip_vs_sh
    modprobe -- nf_conntrack_ipv4
    EOF
    
    # 3 - Make the script executable
    chmod +x /etc/sysconfig/modules/ipvs.modules
    
    # 4 - Run the script
    bash /etc/sysconfig/modules/ipvs.modules
    
    # 5 - Check that the modules loaded successfully
    lsmod | grep -e ip_vs -e nf_conntrack_ipv4
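
    On a systemd-based system such as CentOS 7, one way to make sure these modules are also loaded automatically after a reboot (an extra step assumed here, not in the original) is a modules-load.d drop-in:

    # systemd-modules-load reads this file at boot and loads the listed modules
    cat <<EOF > /etc/modules-load.d/ipvs.conf
    ip_vs
    ip_vs_rr
    ip_vs_wrr
    ip_vs_sh
    nf_conntrack_ipv4
    EOF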
    

    9. Reboot the Linux system

    reboot
    

    2. Install Docker

    1. Install Docker

    # Install the wget command
    yum -y install wget
    
    # Switch to the Aliyun docker-ce repository
    wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
    
    # List the available docker-ce versions
    yum list docker-ce --showduplicates
    
    # Install Docker 18.06.3
    yum install --setopt=obsoletes=0 docker-ce-18.06.3.ce-3.el7 -y
    
    

    2. Add the Aliyun registry mirror and set the cgroup driver to systemd

    ## Create the configuration directory (no error if it already exists)
    mkdir -p /etc/docker
    
    ## Write the configuration file
    cat <<EOF > /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=systemd"],
      "registry-mirrors": ["https://66mzqrih.mirror.aliyuncs.com"]
    }
    EOF
    

    3. Start Docker and enable it at boot

    systemctl start docker
    systemctl enable docker
    
    ## Check the Docker version
    docker --version
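
    To confirm that the daemon.json settings from the previous step were picked up (a verification assumed here, not in the original), check the cgroup driver Docker reports:

    # Should print: Cgroup Driver: systemd
    docker info | grep -i "cgroup driver"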
    

    3. Install Kubernetes components

    1. Switch the Kubernetes yum repository to a domestic (Aliyun) mirror

    vi  /etc/yum.repos.d/kubernetes.repo
    
    [kubernetes]
    name=Kubernetes
    baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
          http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
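
    If you want to confirm that the new repository is reachable before installing (an optional check, not in the original), list the available kubeadm versions:

    # The 1.17.4-0 package used below should show up in this list
    yum list kubeadm --showduplicates | grep 1.17.4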
    

    2. Install kubeadm, kubelet, and kubectl

    yum install --setopt=obsoletes=0 kubeadm-1.17.4-0 kubelet-1.17.4-0 kubectl-1.17.4-0 -y
    

    3. Configure the kubelet cgroup driver

    vi /etc/sysconfig/kubelet
    
    # Append the following
    KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
    KUBE_PROXY_MODE="ipvs"
    

    4. Enable kubelet at boot

    systemctl enable kubelet
    

    5. Install Kubernetes

    Prepare the cluster images.
    The required images are available in the Aliyun registry; pull them from there and re-tag them with the official k8s.gcr.io names. This has to be done on every node.
    # Before installing the cluster, the required images must be prepared; list them with the command below

    kubeadm config images list
    

    2. Download the images
    # These images live in the k8s.gcr.io registry, which is usually unreachable because of network restrictions; the alternative below pulls them from Aliyun instead
    1. Define the image list

    images=(
    kube-apiserver:v1.17.4
    kube-controller-manager:v1.17.4
    kube-scheduler:v1.17.4
    kube-proxy:v1.17.4
    pause:3.1
    etcd:3.4.3-0 
    coredns:1.6.5
    )
    

    2. Start the download

    for imageName in ${images[@]} ; do
        docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
        docker tag  registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
        docker rmi  registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    done
    

    3. Check the downloaded images

    docker images
    

    6.1 Cluster: initialize the master node

    Run this only on the master node.

    • --image-repository must point to the Aliyun mirror registry
    • --apiserver-advertise-address must be the master node's IP
    ## Initialize the cluster
    kubeadm init \
    --kubernetes-version=v1.17.4 \
    --pod-network-cidr=10.244.0.0/16 \
    --image-repository registry.aliyuncs.com/google_containers \
    --service-cidr=10.96.0.0/12 \
    --apiserver-advertise-address=192.168.75.173
    

    If initialization fails, inspect the kubelet logs with journalctl -xeu kubelet.

    6.2 Join the worker nodes after the master initializes successfully

    After initialization completes, copy the kubeadm join command printed at the end of kubeadm init and run it on the other two nodes to join them to the cluster.

    ## If the join token has expired, regenerate the join command
    kubeadm token create --ttl 0 --print-join-command
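
    The join command printed by kubeadm init (and by the command above) looks roughly like the following; the token and hash here are placeholders, not real values:

    kubeadm join 192.168.75.173:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>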
    

    6.3 Point kubectl at the cluster and check the nodes

    Bind kubectl on the master to the cluster

    ## Use the admin kubeconfig generated during cluster initialization
    export KUBECONFIG=/etc/kubernetes/admin.conf
    
    ## Check the connection status of each node
    kubectl get nodes
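
    At this point the nodes usually show NotReady because no network plugin has been installed yet. To see how far the control plane has come up (a check assumed here, not in the original), list the system pods:

    # The coredns pods stay Pending until the flannel network plugin is applied
    kubectl get pods -n kube-system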
    

    7. Create the kubeconfig files for the current user

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    

    8. Install the network plugin

    Create the kube-flannel.yml file on the Master node.
    The quay.io registry referenced in the file has been replaced with quay-mirror.qiniu.com (already applied below).

    apiVersion: policy/v1beta1
    kind: PodSecurityPolicy
    metadata:
      name: psp.flannel.unprivileged
      annotations:
        seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
        seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
        apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
        apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
    spec:
      privileged: false
      volumes:
        - configMap
        - secret
        - emptyDir
        - hostPath
      allowedHostPaths:
        - pathPrefix: "/etc/cni/net.d"
        - pathPrefix: "/etc/kube-flannel"
        - pathPrefix: "/run/flannel"
      readOnlyRootFilesystem: false
      # Users and groups
      runAsUser:
        rule: RunAsAny
      supplementalGroups:
        rule: RunAsAny
      fsGroup:
        rule: RunAsAny
      # Privilege Escalation
      allowPrivilegeEscalation: false
      defaultAllowPrivilegeEscalation: false
      # Capabilities
      allowedCapabilities: ['NET_ADMIN']
      defaultAddCapabilities: []
      requiredDropCapabilities: []
      # Host namespaces
      hostPID: false
      hostIPC: false
      hostNetwork: true
      hostPorts:
      - min: 0
        max: 65535
      # SELinux
      seLinux:
        # SELinux is unused in CaaSP
        rule: 'RunAsAny'
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: flannel
    rules:
      - apiGroups: ['extensions']
        resources: ['podsecuritypolicies']
        verbs: ['use']
        resourceNames: ['psp.flannel.unprivileged']
      - apiGroups:
          - ""
        resources:
          - pods
        verbs:
          - get
      - apiGroups:
          - ""
        resources:
          - nodes
        verbs:
          - list
          - watch
      - apiGroups:
          - ""
        resources:
          - nodes/status
        verbs:
          - patch
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: flannel
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: flannel
    subjects:
    - kind: ServiceAccount
      name: flannel
      namespace: kube-system
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: flannel
      namespace: kube-system
    ---
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: kube-flannel-cfg
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    data:
      cni-conf.json: |
        {
          "cniVersion": "0.2.0",
          "name": "cbr0",
          "plugins": [
            {
              "type": "flannel",
              "delegate": {
                "hairpinMode": true,
                "isDefaultGateway": true
              }
            },
            {
              "type": "portmap",
              "capabilities": {
                "portMappings": true
              }
            }
          ]
        }
      net-conf.json: |
        {
          "Network": "10.244.0.0/16",
          "Backend": {
            "Type": "vxlan"
          }
        }
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds-amd64
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: beta.kubernetes.io/os
                        operator: In
                        values:
                          - linux
                      - key: beta.kubernetes.io/arch
                        operator: In
                        values:
                          - amd64
          hostNetwork: true
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni
            image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                 add: ["NET_ADMIN"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          volumes:
            - name: run
              hostPath:
                path: /run/flannel
            - name: cni
              hostPath:
                path: /etc/cni/net.d
            - name: flannel-cfg
              configMap:
                name: kube-flannel-cfg
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds-arm64
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: beta.kubernetes.io/os
                        operator: In
                        values:
                          - linux
                      - key: beta.kubernetes.io/arch
                        operator: In
                        values:
                          - arm64
          hostNetwork: true
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni
            image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-arm64
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-arm64
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                 add: ["NET_ADMIN"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          volumes:
            - name: run
              hostPath:
                path: /run/flannel
            - name: cni
              hostPath:
                path: /etc/cni/net.d
            - name: flannel-cfg
              configMap:
                name: kube-flannel-cfg
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds-arm
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: beta.kubernetes.io/os
                        operator: In
                        values:
                          - linux
                      - key: beta.kubernetes.io/arch
                        operator: In
                        values:
                          - arm
          hostNetwork: true
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni
            image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-arm
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-arm
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                 add: ["NET_ADMIN"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          volumes:
            - name: run
              hostPath:
                path: /run/flannel
            - name: cni
              hostPath:
                path: /etc/cni/net.d
            - name: flannel-cfg
              configMap:
                name: kube-flannel-cfg
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds-ppc64le
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: beta.kubernetes.io/os
                        operator: In
                        values:
                          - linux
                      - key: beta.kubernetes.io/arch
                        operator: In
                        values:
                          - ppc64le
          hostNetwork: true
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni
            image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-ppc64le
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-ppc64le
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                 add: ["NET_ADMIN"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          volumes:
            - name: run
              hostPath:
                path: /run/flannel
            - name: cni
              hostPath:
                path: /etc/cni/net.d
            - name: flannel-cfg
              configMap:
                name: kube-flannel-cfg
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds-s390x
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: beta.kubernetes.io/os
                        operator: In
                        values:
                          - linux
                      - key: beta.kubernetes.io/arch
                        operator: In
                        values:
                          - s390x
          hostNetwork: true
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni
            image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-s390x
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-s390x
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                 add: ["NET_ADMIN"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          volumes:
            - name: run
              hostPath:
                path: /run/flannel
            - name: cni
              hostPath:
                path: /etc/cni/net.d
            - name: flannel-cfg
              configMap:
                name: kube-flannel-cfg
    

    9. Bring up the pod network between master and workers

    Network plugin

    ## Run on both the master and the worker nodes
    docker pull quay.io/coreos/flannel:v0.12.0-amd64
    
    ## Run on the master node only
    kubectl apply -f kube-flannel.yml
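
    To watch the flannel DaemonSet roll out on every node (a check assumed here, not in the original):

    # One kube-flannel pod should reach Running status on each node
    kubectl get pods -n kube-system -l app=flannel -o wide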
    

    Remove the previous network-plugin setting on each node and restart the kubelet

    vi /var/lib/kubelet/kubeadm-flags.env

    # Remove  --network-plugin=cni
    
    # Restart the kubelet on every node
    systemctl daemon-reload
    systemctl restart kubelet
    

    Wait patiently for 30-60 seconds, then check whether every node reports Ready status.

    kubectl get nodes
    


    The Kubernetes cluster is now fully up and running!

    4. Troubleshooting

    Re-initializing

    If kubeadm init fails, or you need to run it again after a problem, use kubeadm reset to reset the node and then initialize again.

    kubeadm reset
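
    kubeadm reset does not clean up everything by itself; iptables/ipvs rules, leftover CNI configuration, and old kubeconfig files have to be removed manually. A possible cleanup, assumed here rather than taken from the original, looks like this:

    # Clear the ipvs tables and leftover CNI / kubeconfig state (adjust to your setup)
    ipvsadm --clear
    rm -rf /etc/cni/net.d
    rm -rf $HOME/.kube/config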
    
  • Original article: https://blog.csdn.net/qq_41463655/article/details/126920731