Cloud Native (28) | Kubernetes: Building a Self-Hosted Highly Available k8s Cluster



    Table of Contents

    Building a Self-Hosted Highly Available k8s Cluster

    I. Base Environment for All Nodes

    1. Environment Preparation and Kernel Upgrade

    2. Install Docker

    II. PKI

    III. Certificate Tooling

    1. Download the Certificate Tools

    2. CA Root Config

    3. CA Signing Request

    4. Generate the Certificate

    5. How the k8s Cluster Uses Certificates

    IV. Building a Highly Available etcd

    1. etcd Documentation

    2. Download etcd

    3. etcd Certificates

    4. etcd High-Availability Installation

    V. k8s Components and Certificates

    1. k8s Offline Installation Package

    2. Master Node Preparation

    3. Generating the apiserver Certificate

    4. Generating the front-proxy Certificate

    5. Generating and Configuring the controller-manager Certificate

    6. Generating and Configuring the scheduler Certificate

    7. Generating and Configuring the admin Certificate

    8. Generating the ServiceAccount Key

    9. Distribute Certificates to Other Nodes

    VI. High-Availability Configuration

    VII. Starting the Components

    1. Run on All Masters

    2. Configure the apiserver Service

    3. Configure the controller-manager Service

    4. Configure the scheduler

    VIII. TLS Bootstrapping

    1. Configure bootstrap on master1

    2. Grant kubectl Permissions on master1

    3. Create the Cluster Bootstrap Permission File

    IX. Bootstrapping the Node Kubelets

    1. Distribute Core Certificates to the Nodes

    2. Configure kubelet on All Nodes

    3. kube-proxy Configuration

    X. Deploy calico

    XI. Deploy coreDNS

    XII. Apply Role Labels to the Machines

    XIII. Cluster Verification


    Building a Self-Hosted Highly Available k8s Cluster

    I. Base Environment for All Nodes

    192.168.0.x : the machines' subnet

    10.96.0.0/16 : the Service CIDR

    196.16.0.0/16 : the Pod CIDR

    1. Environment Preparation and Kernel Upgrade

    First upgrade the kernel on all machines

    # My OS version
    cat /etc/redhat-release
    # CentOS Linux release 7.9.2009 (Core)
    # Set the hostname; it must not be localhost
    hostnamectl set-hostname k8s-xxx
    # Cluster plan:
    # k8s-master1 k8s-master2 k8s-master3 k8s-master-lb k8s-node01 k8s-node02 ... k8s-nodeN
    # Add the host entries on every machine
    vi /etc/hosts
    192.168.0.10 k8s-master1
    192.168.0.11 k8s-master2
    192.168.0.12 k8s-master3
    192.168.0.13 k8s-node1
    192.168.0.14 k8s-node2
    192.168.0.15 k8s-node3
    192.168.0.250 k8s-master-lb # not needed without HA; this VIP is managed by keepalived
    # Disable selinux
    setenforce 0
    sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
    sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
    # Disable swap
    swapoff -a && sysctl -w vm.swappiness=0
    sed -ri 's/.*swap.*/#&/' /etc/fstab
    # Raise resource limits
    ulimit -SHn 65535
    vi /etc/security/limits.conf
    # Append the following at the end (a hard limit must not be lower than its soft limit)
    * soft nofile 655360
    * hard nofile 655360
    * soft nproc 655350
    * hard nproc 655350
    * soft memlock unlimited
    * hard memlock unlimited
    # Configure passwordless ssh for convenience; run on master1
    ssh-keygen -t rsa
    for i in k8s-master1 k8s-master2 k8s-master3 k8s-node1 k8s-node2 k8s-node3;do ssh-copy-id -i .ssh/id_rsa.pub $i;done
    # Install some tools we will need later
    yum install wget git jq psmisc net-tools yum-utils device-mapper-persistent-data lvm2 -y
    # All nodes
    # Install ipvs tooling for working with ipvs, ipset, conntrack, etc.
    yum install ipvsadm ipset sysstat conntrack libseccomp -y
    # All nodes load the ipvs modules. On kernels 4.19+ use nf_conntrack; below 4.19 use nf_conntrack_ipv4
    modprobe -- ip_vs
    modprobe -- ip_vs_rr
    modprobe -- ip_vs_wrr
    modprobe -- ip_vs_sh
    modprobe -- nf_conntrack
    # Persist the ipvs module list with the following content
    vi /etc/modules-load.d/ipvs.conf
    ip_vs
    ip_vs_lc
    ip_vs_wlc
    ip_vs_rr
    ip_vs_wrr
    ip_vs_lblc
    ip_vs_lblcr
    ip_vs_dh
    ip_vs_sh
    ip_vs_fo
    ip_vs_nq
    ip_vs_sed
    ip_vs_ftp
    nf_conntrack
    ip_tables
    ip_set
    xt_set
    ipt_set
    ipt_rpfilter
    ipt_REJECT
    ipip
    # Apply
    systemctl enable --now systemd-modules-load.service # --now = enable+start
    # Check that the modules loaded
    lsmod | grep -e ip_vs -e nf_conntrack
    ## All nodes
    cat <<EOF > /etc/sysctl.d/k8s.conf
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    fs.may_detach_mounts = 1
    vm.overcommit_memory=1
    net.ipv4.conf.all.route_localnet = 1
    vm.panic_on_oom=0
    fs.inotify.max_user_watches=89100
    fs.file-max=52706963
    fs.nr_open=52706963
    net.netfilter.nf_conntrack_max=2310720
    net.ipv4.tcp_keepalive_time = 600
    net.ipv4.tcp_keepalive_probes = 3
    net.ipv4.tcp_keepalive_intvl = 15
    net.ipv4.tcp_max_tw_buckets = 36000
    net.ipv4.tcp_tw_reuse = 1
    net.ipv4.tcp_max_orphans = 327680
    net.ipv4.tcp_orphan_retries = 3
    net.ipv4.tcp_syncookies = 1
    net.ipv4.tcp_max_syn_backlog = 16768
    net.ipv4.ip_conntrack_max = 65536
    net.ipv4.tcp_timestamps = 0
    net.core.somaxconn = 16768
    EOF
    sysctl --system
    # After configuring the kernel parameters on all nodes, reboot and confirm the modules still load on boot
    reboot
    lsmod | grep -e ip_vs -e nf_conntrack

    2. Install Docker

    # Install docker
    yum remove docker*
    yum install -y yum-utils
    yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    yum install -y docker-ce-19.03.9 docker-ce-cli-19.03.9 containerd.io-1.4.4
    # Adjust the docker config. Newer kubelets recommend systemd, so switch docker's CgroupDriver to systemd
    mkdir /etc/docker
    cat > /etc/docker/daemon.json <<EOF
    {
      "exec-opts": ["native.cgroupdriver=systemd"],
      "registry-mirrors": ["https://82m9ayutr63.mirror.aliyuncs.com"]
    }
    EOF
    systemctl daemon-reload && systemctl enable --now docker
    # Alternatively, download the rpm packages for an offline install:
    # http://mirrors.aliyun.com/docker-ce/linux/centos/7.9/x86_64/stable/Packages/
    # yum localinstall xxxx
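
    To confirm that the daemon actually picked up the systemd cgroup driver from daemon.json, a quick check (a sanity test added here, not part of the original steps):

    # Should print "Cgroup Driver: systemd" once docker is running with the new daemon.json
    docker info 2>/dev/null | grep -i "cgroup driver"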

    II. PKI

    Baidu Baike: Public Key Infrastructure

    Kubernetes needs PKI in order to do the following:

    • Client certificates for the kubelets, used to authenticate to the API server

    • Server certificate for the API server endpoint

    • Client certificates for cluster administrators, used to authenticate to the API server

    • Client certificate for the API server, used to talk to the kubelets

    • Client certificate for the API server, used to talk to etcd

    • Client certificate/kubeconfig for the controller manager, used to talk to the API server

    • Client certificate/kubeconfig for the scheduler, used to talk to the API server

    • Client and server certificates for the front proxy

    Note: the front-proxy certificates are only required if you run kube-proxy to support an extension API server

    etcd also implements mutual TLS to authenticate both clients and peers

    PKI certificates and requirements | Kubernetes

    III. Certificate Tooling

    # Create a folder to hold all certificate material; this mirrors how kubeadm organizes it
    # Run on all three master nodes
    mkdir -p /etc/kubernetes/pki

    1. Download the Certificate Tools

    # Download the core cfssl components
    wget https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl-certinfo_1.5.0_linux_amd64
    wget https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl_1.5.0_linux_amd64
    wget https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssljson_1.5.0_linux_amd64
    # Make them executable
    chmod +x cfssl*
    # Batch rename
    for name in `ls cfssl*`; do mv $name ${name%_1.5.0_linux_amd64}; done
    # Move them onto the PATH
    mv cfssl* /usr/bin
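
    A quick sanity check that the binaries landed on the PATH and run (added here, not part of the original steps):

    cfssl version                        # should report version 1.5.0
    command -v cfssljson && echo "cfssljson on PATH"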

    2. CA Root Config

    ca-config.json

    mkdir -p /etc/kubernetes/pki
    cd /etc/kubernetes/pki
    vi ca-config.json
    {
        "signing": {
            "default": {
                "expiry": "87600h"
            },
            "profiles": {
                "server": {
                    "expiry": "87600h",
                    "usages": [
                        "signing",
                        "key encipherment",
                        "server auth"
                    ]
                },
                "client": {
                    "expiry": "87600h",
                    "usages": [
                        "signing",
                        "key encipherment",
                        "client auth"
                    ]
                },
                "peer": {
                    "expiry": "87600h",
                    "usages": [
                        "signing",
                        "key encipherment",
                        "server auth",
                        "client auth"
                    ]
                },
                "kubernetes": {
                    "expiry": "87600h",
                    "usages": [
                        "signing",
                        "key encipherment",
                        "server auth",
                        "client auth"
                    ]
                },
                "etcd": {
                    "expiry": "87600h",
                    "usages": [
                        "signing",
                        "key encipherment",
                        "server auth",
                        "client auth"
                    ]
                }
            }
        }
    }

    3. CA Signing Request

    A CSR (Certificate Signing Request) is the request file submitted to a CA for signing

    ca-csr.json

    vi /etc/kubernetes/pki/ca-csr.json
    {
        "CN": "kubernetes",
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "ST": "Beijing",
                "L": "Beijing",
                "O": "Kubernetes",
                "OU": "Kubernetes"
            }
        ],
        "ca": {
            "expiry": "87600h"
        }
    }
    • CN (Common Name):

      • The Common Name is required; for web certificates it is usually the site domain.

    • O (Organization):

      • The Organization is required. For OV/EV certificates it must exactly match the legal name registered with the government, generally the name on the business license; abbreviations or trademarks are not allowed. An English name requires a DUNS number or an attorney letter as proof.

    • OU (Organization Unit):

      • The department; there are few restrictions here, something like "IT DEPT" is fine.

    • L (Locality):

      • The city where the applicant is located.

    • ST (State/Province):

      • The province or state where the applicant is located.

    • C (Country Name):

      • The two-letter uppercase country code; CN for China.

    4. Generate the Certificate

    Generate the CA certificate and private key

    cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
    # Produces ca.csr, ca.pem (the CA public certificate) and ca-key.pem (the CA private key; keep it safe)
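
    To see how the CSR fields end up in the certificate (Kubernetes maps CN to the user name and O to the group), inspect the generated CA with openssl; a quick check added here, not part of the original steps:

    # Subject should show O=Kubernetes, OU=Kubernetes, CN=kubernetes; dates reflect 87600h = 10 years
    openssl x509 -in /etc/kubernetes/pki/ca.pem -noout -subject -dates
    # The full text dump shows CA:TRUE under X509v3 Basic Constraints
    openssl x509 -in /etc/kubernetes/pki/ca.pem -noout -text | grep -A1 "Basic Constraints"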


    5. How the k8s Cluster Uses Certificates

    See the official documentation: PKI certificates and requirements | Kubernetes

    IV. Building a Highly Available etcd

    1. etcd Documentation

    etcd examples: Demo | etcd — learn etcd usage from the demo

    etcd install: Install | etcd — follow the etcd/k8s cluster sizing guidance when building the cluster

    etcd operations: Operations guide | etcd — covers etcd configuration and cluster deployment

    2. Download etcd

    # Download the etcd package and send it to every master node for the HA etcd deployment
    wget https://github.com/etcd-io/etcd/releases/download/v3.4.16/etcd-v3.4.16-linux-amd64.tar.gz
    ## Copy to the other nodes
    for i in k8s-master1 k8s-master2 k8s-master3;do scp etcd-* root@$i:/root/;done
    ## Extract into /usr/local/bin
    tar -zxvf etcd-v3.4.16-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.4.16-linux-amd64/etcd{,ctl}
    ## Verify
    etcdctl # any output means it works

    3. etcd Certificates

    Install reference: Hardware recommendations | etcd

    Generate the etcd certificates

    etcd-ca-csr.json

    {
        "CN": "etcd",
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "ST": "Beijing",
                "L": "Beijing",
                "O": "etcd",
                "OU": "etcd"
            }
        ],
        "ca": {
            "expiry": "87600h"
        }
    }
    # Generate the etcd root CA certificate (create the target directory first)
    mkdir -p /etc/kubernetes/pki/etcd
    cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/etcd/ca -

    etcd-itdachang-csr.json

    {
        "CN": "etcd-itdachang",
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "hosts": [
            "127.0.0.1",
            "k8s-master1",
            "k8s-master2",
            "k8s-master3",
            "192.168.0.10",
            "192.168.0.11",
            "192.168.0.12"
        ],
        "names": [
            {
                "C": "CN",
                "L": "beijing",
                "O": "etcd",
                "ST": "beijing",
                "OU": "System"
            }
        ]
    }
    // Note: put your own hostnames and IPs in hosts
    // You can also add them at signing time with -hostname=127.0.0.1,k8s-master1,k8s-master2,k8s-master3,...
    // hosts is the list of names the certificate is trusted for, e.g.
    // "hosts": [
    //     "k8s-master1",
    //     "www.example.net"
    // ],
    # Sign the itdachang etcd certificate
    cfssl gencert \
        -ca=/etc/kubernetes/pki/etcd/ca.pem \
        -ca-key=/etc/kubernetes/pki/etcd/ca-key.pem \
        -config=/etc/kubernetes/pki/ca-config.json \
        -profile=etcd \
        etcd-itdachang-csr.json | cfssljson -bare /etc/kubernetes/pki/etcd/etcd
    # Copy the generated etcd certificates to the other machines
    for i in k8s-master2 k8s-master3;do scp -r /etc/kubernetes/pki/etcd root@$i:/etc/kubernetes/pki;done

    4. etcd High-Availability Installation

    etcd configuration reference: Configuration flags | etcd

    etcd HA installation reference: Clustering Guide | etcd

    To keep the startup configuration consistent, we write an etcd config file and run etcd as a systemd service

    # Example etcd config yaml:
    # This is the configuration file for the etcd server.
    # Human-readable name for this member.
    name: 'default'
    # Path to the data directory.
    data-dir:
    # Path to the dedicated wal directory.
    wal-dir:
    # Number of committed transactions to trigger a snapshot to disk.
    snapshot-count: 10000
    # Time (in milliseconds) of a heartbeat interval.
    heartbeat-interval: 100
    # Time (in milliseconds) for an election to timeout.
    election-timeout: 1000
    # Raise alarms when backend size exceeds the given quota. 0 means use the
    # default quota.
    quota-backend-bytes: 0
    # List of comma separated URLs to listen on for peer traffic.
    listen-peer-urls: http://localhost:2380
    # List of comma separated URLs to listen on for client traffic.
    listen-client-urls: http://localhost:2379
    # Maximum number of snapshot files to retain (0 is unlimited).
    max-snapshots: 5
    # Maximum number of wal files to retain (0 is unlimited).
    max-wals: 5
    # Comma-separated white list of origins for CORS (cross-origin resource sharing).
    cors:
    # List of this member's peer URLs to advertise to the rest of the cluster.
    # The URLs needed to be a comma-separated list.
    initial-advertise-peer-urls: http://localhost:2380
    # List of this member's client URLs to advertise to the public.
    # The URLs needed to be a comma-separated list.
    advertise-client-urls: http://localhost:2379
    # Discovery URL used to bootstrap the cluster.
    discovery:
    # Valid values include 'exit', 'proxy'
    discovery-fallback: 'proxy'
    # HTTP proxy to use for traffic to discovery service.
    discovery-proxy:
    # DNS domain used to bootstrap initial cluster.
    discovery-srv:
    # Initial cluster configuration for bootstrapping.
    initial-cluster:
    # Initial cluster token for the etcd cluster during bootstrap.
    initial-cluster-token: 'etcd-cluster'
    # Initial cluster state ('new' or 'existing').
    initial-cluster-state: 'new'
    # Reject reconfiguration requests that would cause quorum loss.
    strict-reconfig-check: false
    # Accept etcd V2 client requests
    enable-v2: true
    # Enable runtime profiling data via HTTP server
    enable-pprof: true
    # Valid values include 'on', 'readonly', 'off'
    proxy: 'off'
    # Time (in milliseconds) an endpoint will be held in a failed state.
    proxy-failure-wait: 5000
    # Time (in milliseconds) of the endpoints refresh interval.
    proxy-refresh-interval: 30000
    # Time (in milliseconds) for a dial to timeout.
    proxy-dial-timeout: 1000
    # Time (in milliseconds) for a write to timeout.
    proxy-write-timeout: 5000
    # Time (in milliseconds) for a read to timeout.
    proxy-read-timeout: 0
    client-transport-security:
      # Path to the client server TLS cert file.
      cert-file:
      # Path to the client server TLS key file.
      key-file:
      # Enable client cert authentication.
      client-cert-auth: false
      # Path to the client server TLS trusted CA cert file.
      trusted-ca-file:
      # Client TLS using generated certificates
      auto-tls: false
    peer-transport-security:
      # Path to the peer server TLS cert file.
      cert-file:
      # Path to the peer server TLS key file.
      key-file:
      # Enable peer client cert authentication.
      client-cert-auth: false
      # Path to the peer server TLS trusted CA cert file.
      trusted-ca-file:
      # Peer TLS using generated certificates.
      auto-tls: false
    # Enable debug-level logging for etcd.
    debug: false
    logger: zap
    # Specify 'stdout' or 'stderr' to skip journald logging even when running under systemd.
    log-outputs: [stderr]
    # Force to create a new one member cluster.
    force-new-cluster: false
    auto-compaction-mode: periodic
    auto-compaction-retention: "1"

    Create /etc/etcd on all three etcd machines to hold the etcd configuration

    # Run on all three masters
    mkdir -p /etc/etcd
    vi /etc/etcd/etcd.yaml
    # Our yaml
    name: 'etcd-master3' # each machine uses its own name; it must be unique
    data-dir: /var/lib/etcd
    wal-dir: /var/lib/etcd/wal
    snapshot-count: 5000
    heartbeat-interval: 100
    election-timeout: 1000
    quota-backend-bytes: 0
    listen-peer-urls: 'https://192.168.0.12:2380' # this machine's ip + port 2380, for cluster traffic
    listen-client-urls: 'https://192.168.0.12:2379,http://127.0.0.1:2379' # change to your own ip
    max-snapshots: 3
    max-wals: 5
    cors:
    initial-advertise-peer-urls: 'https://192.168.0.12:2380' # your own ip
    advertise-client-urls: 'https://192.168.0.12:2379' # your own ip
    discovery:
    discovery-fallback: 'proxy'
    discovery-proxy:
    discovery-srv:
    initial-cluster: 'etcd-master1=https://192.168.0.10:2380,etcd-master2=https://192.168.0.11:2380,etcd-master3=https://192.168.0.12:2380' # the full member list; identical on every machine
    initial-cluster-token: 'etcd-k8s-cluster'
    initial-cluster-state: 'new'
    strict-reconfig-check: false
    enable-v2: true
    enable-pprof: true
    proxy: 'off'
    proxy-failure-wait: 5000
    proxy-refresh-interval: 30000
    proxy-dial-timeout: 1000
    proxy-write-timeout: 5000
    proxy-read-timeout: 0
    client-transport-security:
      cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
      key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
      client-cert-auth: true
      trusted-ca-file: '/etc/kubernetes/pki/etcd/ca.pem'
      auto-tls: true
    peer-transport-security:
      cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
      key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
      client-cert-auth: true
      trusted-ca-file: '/etc/kubernetes/pki/etcd/ca.pem'
      auto-tls: true
    debug: false
    log-package-levels:
    log-outputs: [default]
    force-new-cluster: false

    Make etcd a systemd service on all three machines, started at boot

    vi /usr/lib/systemd/system/etcd.service
    [Unit]
    Description=Etcd Service
    Documentation=https://etcd.io/docs/v3.4/op-guide/clustering/
    After=network.target

    [Service]
    Type=notify
    ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.yaml
    Restart=on-failure
    RestartSec=10
    LimitNOFILE=65536

    [Install]
    WantedBy=multi-user.target
    Alias=etcd3.service

    # Load & enable at boot
    systemctl daemon-reload
    systemctl enable --now etcd
    # If startup fails, inspect with journalctl -u <service-name>
    journalctl -u etcd

    Test etcd access

    # Check the etcd cluster status
    etcdctl --endpoints="192.168.0.10:2379,192.168.0.11:2379,192.168.0.12:2379" --cacert=/etc/kubernetes/pki/etcd/ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table
    # For future testing
    export ETCDCTL_API=3
    HOST_1=192.168.0.10
    HOST_2=192.168.0.11
    HOST_3=192.168.0.12
    ENDPOINTS=$HOST_1:2379,$HOST_2:2379,$HOST_3:2379
    ## Export environment variables for convenient testing; see https://github.com/etcd-io/etcd/tree/main/etcdctl
    export ETCDCTL_DIAL_TIMEOUT=3s
    export ETCDCTL_CACERT=/etc/kubernetes/pki/etcd/ca.pem
    export ETCDCTL_CERT=/etc/kubernetes/pki/etcd/etcd.pem
    export ETCDCTL_KEY=/etc/kubernetes/pki/etcd/etcd-key.pem
    export ETCDCTL_ENDPOINTS=$HOST_1:2379,$HOST_2:2379,$HOST_3:2379
    # The certificate locations are now picked up from the environment automatically
    etcdctl member list --write-out=table
    # Without the environment variables you would have to pass everything explicitly:
    etcdctl --endpoints=$ENDPOINTS --cacert=/etc/kubernetes/pki/etcd/ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem member list --write-out=table
    ## More etcdctl commands: https://etcd.io/docs/v3.4/demo/#access-etcd
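
    Beyond listing members, a short read/write smoke test confirms the cluster has quorum and accepts writes (a minimal sketch added here, reusing the ETCDCTL_* variables exported above):

    # All three endpoints should report healthy
    etcdctl endpoint health --write-out=table

    # Write a key, read it back, then clean up
    etcdctl put /smoke-test "ok"
    etcdctl get /smoke-test        # should print: /smoke-test  ok
    etcdctl del /smoke-test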

    V. k8s Components and Certificates

    1. k8s Offline Installation Package

    Find the changelog for the matching release at https://github.com/kubernetes/kubernetes

    # Download the k8s package
    wget https://dl.k8s.io/v1.21.1/kubernetes-server-linux-amd64.tar.gz

    2. Master Node Preparation

    # Copy the kubernetes package to all nodes
    for i in k8s-master1 k8s-master2 k8s-master3 k8s-node1 k8s-node2 k8s-node3;do scp kubernetes-server-* root@$i:/root/;done
    # On all master nodes, extract kubelet, kubectl, etc. into /usr/local/bin
    tar -xvf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
    # Masters need every component; node machines only need kubelet and kube-proxy in /usr/local/bin

    3. Generating the apiserver Certificate

    3.1 apiserver-csr.json

    // 10.96.0.1 is the first IP of the service CIDR; customize it if you use a different range, e.g. 66.66.0.1
    // 192.168.0.250 is my load balancer address (build your own LB, or buy one from a cloud vendor)
    {
        "CN": "kube-apiserver",
        "hosts": [
            "10.96.0.1",
            "127.0.0.1",
            "192.168.0.250",
            "192.168.0.10",
            "192.168.0.11",
            "192.168.0.12",
            "192.168.0.13",
            "192.168.0.14",
            "192.168.0.15",
            "192.168.0.16",
            "kubernetes",
            "kubernetes.default",
            "kubernetes.default.svc",
            "kubernetes.default.svc.cluster",
            "kubernetes.default.svc.cluster.local"
        ],
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "BeiJing",
                "ST": "BeiJing",
                "O": "Kubernetes",
                "OU": "Kubernetes"
            }
        ]
    }

    3.2 Generate the apiserver Certificate

    # 10.96.0.1 is the first IP of the k8s service CIDR; change it if you change the service network
    # If this is not an HA cluster, use master01's IP in place of the load balancer address
    # Generate the CA first (skip this step if you already created it in section III)
    vi ca-csr.json
    {
        "CN": "kubernetes",
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "ST": "Beijing",
                "L": "Beijing",
                "O": "Kubernetes",
                "OU": "Kubernetes"
            }
        ],
        "ca": {
            "expiry": "87600h"
        }
    }
    cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
    cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=/etc/kubernetes/pki/ca-config.json -profile=kubernetes apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver
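
    Every IP and DNS name the apiserver will ever be reached by (VIP, node IPs, service IP, cluster DNS names) must be in the certificate's SANs; a quick verification, added here, not part of the original steps:

    openssl x509 -in /etc/kubernetes/pki/apiserver.pem -noout -text | grep -A1 "Subject Alternative Name"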

    4. Generating the front-proxy Certificate

    Official docs: Configure the Aggregation Layer | Kubernetes

    Note: signing front-proxy with a brand-new CA is not recommended; it can break the permissions of components proxied through it, such as metrics-server.

    If you do use a new CA, add --requestheader-allowed-names=front-proxy-client to the api-server configuration

    4.1 front-proxy-ca-csr.json

    The front-proxy root CA

    vi front-proxy-ca-csr.json
    {
        "CN": "kubernetes",
        "key": {
            "algo": "rsa",
            "size": 2048
        }
    }
    # Generate the front-proxy root CA
    cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca

    4.2 front-proxy-client Certificate

    vi front-proxy-client-csr.json  # the client certificate request
    {
        "CN": "front-proxy-client",
        "key": {
            "algo": "rsa",
            "size": 2048
        }
    }
    # Generate the front-proxy-client certificate
    cfssl gencert -ca=/etc/kubernetes/pki/front-proxy-ca.pem -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem -config=ca-config.json -profile=kubernetes front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client
    # Ignore the hosts warning; this certificate is not for a website

    5. Generating and Configuring the controller-manager Certificate

    5.1 controller-manager-csr.json

    vi controller-manager-csr.json
    {
        "CN": "system:kube-controller-manager",
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "ST": "Beijing",
                "L": "Beijing",
                "O": "system:kube-controller-manager",
                "OU": "Kubernetes"
            }
        ]
    }

    5.2 Generate the Certificate

    cfssl gencert \
        -ca=/etc/kubernetes/pki/ca.pem \
        -ca-key=/etc/kubernetes/pki/ca-key.pem \
        -config=ca-config.json \
        -profile=kubernetes \
        controller-manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager

    5.3 Generate the Config

    # Note: if this is not an HA cluster, replace 192.168.0.250:6443 with master01's address; 6443 is the apiserver default port
    # set-cluster: define a cluster entry
    kubectl config set-cluster kubernetes \
        --certificate-authority=/etc/kubernetes/pki/ca.pem \
        --embed-certs=true \
        --server=https://192.168.0.250:6443 \
        --kubeconfig=/etc/kubernetes/controller-manager.conf
    # Define a context entry
    kubectl config set-context system:kube-controller-manager@kubernetes \
        --cluster=kubernetes \
        --user=system:kube-controller-manager \
        --kubeconfig=/etc/kubernetes/controller-manager.conf
    # set-credentials: define a user entry
    kubectl config set-credentials system:kube-controller-manager \
        --client-certificate=/etc/kubernetes/pki/controller-manager.pem \
        --client-key=/etc/kubernetes/pki/controller-manager-key.pem \
        --embed-certs=true \
        --kubeconfig=/etc/kubernetes/controller-manager.conf
    # Make that context the default
    kubectl config use-context system:kube-controller-manager@kubernetes \
        --kubeconfig=/etc/kubernetes/controller-manager.conf
    # The controller-manager is also what later auto-approves kubelet certificates

    6. Generating and Configuring the scheduler Certificate

    6.1 scheduler-csr.json

    vi scheduler-csr.json
    {
        "CN": "system:kube-scheduler",
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "ST": "Beijing",
                "L": "Beijing",
                "O": "system:kube-scheduler",
                "OU": "Kubernetes"
            }
        ]
    }

    6.2 Sign the Certificate

    cfssl gencert \
        -ca=/etc/kubernetes/pki/ca.pem \
        -ca-key=/etc/kubernetes/pki/ca-key.pem \
        -config=/etc/kubernetes/pki/ca-config.json \
        -profile=kubernetes \
        scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler

    6.3 Generate the Config

    # Note: if this is not an HA cluster, replace 192.168.0.250:6443 with master01's address; 6443 is the apiserver default port
    kubectl config set-cluster kubernetes \
        --certificate-authority=/etc/kubernetes/pki/ca.pem \
        --embed-certs=true \
        --server=https://192.168.0.250:6443 \
        --kubeconfig=/etc/kubernetes/scheduler.conf
    kubectl config set-credentials system:kube-scheduler \
        --client-certificate=/etc/kubernetes/pki/scheduler.pem \
        --client-key=/etc/kubernetes/pki/scheduler-key.pem \
        --embed-certs=true \
        --kubeconfig=/etc/kubernetes/scheduler.conf
    kubectl config set-context system:kube-scheduler@kubernetes \
        --cluster=kubernetes \
        --user=system:kube-scheduler \
        --kubeconfig=/etc/kubernetes/scheduler.conf
    kubectl config use-context system:kube-scheduler@kubernetes \
        --kubeconfig=/etc/kubernetes/scheduler.conf

    7. Generating and Configuring the admin Certificate

    7.1 admin-csr.json

    vi admin-csr.json
    {
        "CN": "admin",
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "ST": "Beijing",
                "L": "Beijing",
                "O": "system:masters",
                "OU": "Kubernetes"
            }
        ]
    }

    7.2 Generate the Certificate

    cfssl gencert \
        -ca=/etc/kubernetes/pki/ca.pem \
        -ca-key=/etc/kubernetes/pki/ca-key.pem \
        -config=/etc/kubernetes/pki/ca-config.json \
        -profile=kubernetes \
        admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin

    7.3 Generate the Config

    # Note: if this is not an HA cluster, replace 192.168.0.250:6443 with master01's address; 6443 is the apiserver default port
    kubectl config set-cluster kubernetes \
        --certificate-authority=/etc/kubernetes/pki/ca.pem \
        --embed-certs=true \
        --server=https://192.168.0.250:6443 \
        --kubeconfig=/etc/kubernetes/admin.conf
    kubectl config set-credentials kubernetes-admin \
        --client-certificate=/etc/kubernetes/pki/admin.pem \
        --client-key=/etc/kubernetes/pki/admin-key.pem \
        --embed-certs=true \
        --kubeconfig=/etc/kubernetes/admin.conf
    kubectl config set-context kubernetes-admin@kubernetes \
        --cluster=kubernetes \
        --user=kubernetes-admin \
        --kubeconfig=/etc/kubernetes/admin.conf
    kubectl config use-context kubernetes-admin@kubernetes \
        --kubeconfig=/etc/kubernetes/admin.conf

    The kubelets use the bootstrap mechanism to have their certificates issued automatically, so we don't configure those by hand. Otherwise, with ten thousand machines and ten thousand kubelets, we would still be configuring certificates next year...

    8. Generating the ServiceAccount Key

    Under the hood, every ServiceAccount k8s creates is paired with a Secret, and that Secret holds a token signed with the sa key pair we generate now. So we create the sa keys up front.

    openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
    openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub

    9. Distribute Certificates to Other Nodes

    # Run on master1
    for NODE in k8s-master2 k8s-master3
    do
        for FILE in admin.conf controller-manager.conf scheduler.conf
        do
            scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE}
        done
    done

    VI. High-Availability Configuration

    • High-availability configuration

      • If you are not building an HA cluster, you do not need haproxy and keepalived

      • There are several options for HA (see the haproxy + keepalived sketch after this list)

        • nginx

        • haproxy

        • keepalived

        • a cloud vendor's load balancing product

    • Notes for cloud installs

      • On a cloud you can use the vendor's LB directly, e.g. Alibaba Cloud SLB or Tencent Cloud ELB

      • On public clouds, use the cloud's own load balancer (Alibaba Cloud SLB, Tencent Cloud ELB, etc.) in place of haproxy and keepalived, since most public clouds do not support keepalived.

      • On Alibaba Cloud, the kubectl client must not sit on a master node: SLB has a loopback problem, meaning servers behind the SLB cannot reach the SLB address themselves. Tencent Cloud has fixed this, so it is the easier choice.

    • On QingCloud

      • Create a load balancer and assign it the IP address we reserved earlier

      • Open the load balancer and create a listener

      • Choose TCP, port 6443

      • Add the backend server addresses and ports
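
    For bare-metal setups, here is a minimal haproxy + keepalived sketch. It assumes a pair of dedicated LB machines (or the masters themselves, provided haproxy binds a port other than 6443 there), the three masters from our plan, and the reserved VIP 192.168.0.250 on interface eth0 (an assumption; use your real NIC name). Treat it as an illustration of the options above, not a tuned production config.

    # /etc/haproxy/haproxy.cfg -- plain TCP passthrough to the three apiservers
    cat > /etc/haproxy/haproxy.cfg <<'EOF'
    defaults
        mode tcp
        timeout connect 5s
        timeout client  50s
        timeout server  50s

    frontend k8s-apiserver
        bind *:6443                  # on a master, pick another port to avoid clashing with the apiserver
        default_backend k8s-masters

    backend k8s-masters
        balance roundrobin
        server k8s-master1 192.168.0.10:6443 check
        server k8s-master2 192.168.0.11:6443 check
        server k8s-master3 192.168.0.12:6443 check
    EOF

    # /etc/keepalived/keepalived.conf -- floats the VIP 192.168.0.250 between the LB machines.
    # On the standby machine set state BACKUP and a lower priority.
    cat > /etc/keepalived/keepalived.conf <<'EOF'
    vrrp_instance VI_1 {
        state MASTER
        interface eth0
        virtual_router_id 51
        priority 100
        advert_int 1
        virtual_ipaddress {
            192.168.0.250
        }
    }
    EOF

    systemctl enable --now haproxy keepalived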

    VII. Starting the Components

    1. Run on All Masters

    mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes
    # All kube-* binaries on the three masters live in /usr/local/bin
    for NODE in k8s-master2 k8s-master3
    do
        scp -r /etc/kubernetes/* root@$NODE:/etc/kubernetes/
    done

    This copies every certificate generated on master1 over to master2 and master3.

    2. Configure the apiserver Service

    2.1 Configuration

    Create kube-apiserver.service on all master nodes

    Note: if this is not an HA cluster, replace 192.168.0.250 with master01's address

    This document uses 10.96.0.0/16 as the k8s service CIDR; it must not overlap the host network or the Pod CIDR

    In particular: docker's bridge defaults to 172.17.0.1/16; do not use that range

    # Run the following on every master node
    # --advertise-address: change to this master's own ip
    # --service-cluster-ip-range=10.96.0.0/16: change to your planned service CIDR
    # --etcd-servers: change to all of your etcd server addresses
    vi /usr/lib/systemd/system/kube-apiserver.service
    [Unit]
    Description=Kubernetes API Server
    Documentation=https://github.com/kubernetes/kubernetes
    After=network.target

    [Service]
    ExecStart=/usr/local/bin/kube-apiserver \
        --v=2 \
        --logtostderr=true \
        --allow-privileged=true \
        --bind-address=0.0.0.0 \
        --secure-port=6443 \
        --insecure-port=0 \
        --advertise-address=192.168.0.10 \
        --service-cluster-ip-range=10.96.0.0/16 \
        --service-node-port-range=30000-32767 \
        --etcd-servers=https://192.168.0.10:2379,https://192.168.0.11:2379,https://192.168.0.12:2379 \
        --etcd-cafile=/etc/kubernetes/pki/etcd/ca.pem \
        --etcd-certfile=/etc/kubernetes/pki/etcd/etcd.pem \
        --etcd-keyfile=/etc/kubernetes/pki/etcd/etcd-key.pem \
        --client-ca-file=/etc/kubernetes/pki/ca.pem \
        --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
        --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
        --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
        --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
        --service-account-key-file=/etc/kubernetes/pki/sa.pub \
        --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
        --service-account-issuer=https://kubernetes.default.svc.cluster.local \
        --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
        --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
        --authorization-mode=Node,RBAC \
        --enable-bootstrap-token-auth=true \
        --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
        --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
        --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
        --requestheader-allowed-names=aggregator,front-proxy-client \
        --requestheader-group-headers=X-Remote-Group \
        --requestheader-extra-headers-prefix=X-Remote-Extra- \
        --requestheader-username-headers=X-Remote-User
    # --token-auth-file=/etc/kubernetes/token.csv
    Restart=on-failure
    RestartSec=10s
    LimitNOFILE=65535

    [Install]
    WantedBy=multi-user.target

    2.2 Start the apiserver Service

    systemctl daemon-reload && systemctl enable --now kube-apiserver
    # Check the status
    systemctl status kube-apiserver
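
    Beyond the systemd status, you can probe the apiserver's health endpoint directly with the admin client certificate generated earlier (a quick check added here, not part of the original steps):

    # Expect "ok"
    curl --cacert /etc/kubernetes/pki/ca.pem \
         --cert /etc/kubernetes/pki/admin.pem \
         --key /etc/kubernetes/pki/admin-key.pem \
         https://127.0.0.1:6443/healthz && echo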

    3. Configure the controller-manager Service

    3.1 Configuration

    Configure kube-controller-manager.service on all master nodes

    This document uses 196.16.0.0/16 as the k8s Pod CIDR; it must not overlap the host network or the k8s service CIDR. Adjust as needed;

    In particular: docker's bridge defaults to 172.17.0.1/16; do not use that range

    # Run on all master nodes
    vi /usr/lib/systemd/system/kube-controller-manager.service
    ## --cluster-cidr=196.16.0.0/16 : the Pod CIDR; change it to your planned range
    [Unit]
    Description=Kubernetes Controller Manager
    Documentation=https://github.com/kubernetes/kubernetes
    After=network.target

    [Service]
    ExecStart=/usr/local/bin/kube-controller-manager \
        --v=2 \
        --logtostderr=true \
        --address=127.0.0.1 \
        --root-ca-file=/etc/kubernetes/pki/ca.pem \
        --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
        --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
        --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
        --kubeconfig=/etc/kubernetes/controller-manager.conf \
        --leader-elect=true \
        --use-service-account-credentials=true \
        --node-monitor-grace-period=40s \
        --node-monitor-period=5s \
        --pod-eviction-timeout=2m0s \
        --controllers=*,bootstrapsigner,tokencleaner \
        --allocate-node-cidrs=true \
        --cluster-cidr=196.16.0.0/16 \
        --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
        --node-cidr-mask-size=24
    Restart=always
    RestartSec=10s

    [Install]
    WantedBy=multi-user.target

    3.2 Start

    # Run on all master nodes
    systemctl daemon-reload && systemctl enable --now kube-controller-manager
    systemctl status kube-controller-manager

    4. Configure the scheduler

    4.1 Configuration

    Configure kube-scheduler.service on all master nodes

    vi /usr/lib/systemd/system/kube-scheduler.service
    [Unit]
    Description=Kubernetes Scheduler
    Documentation=https://github.com/kubernetes/kubernetes
    After=network.target

    [Service]
    ExecStart=/usr/local/bin/kube-scheduler \
        --v=2 \
        --logtostderr=true \
        --address=127.0.0.1 \
        --leader-elect=true \
        --kubeconfig=/etc/kubernetes/scheduler.conf
    Restart=always
    RestartSec=10s

    [Install]
    WantedBy=multi-user.target

    4.2 Start

    systemctl daemon-reload && systemctl enable --now kube-scheduler
    systemctl status kube-scheduler

    VIII. TLS Bootstrapping

    1. Configure bootstrap on master1

    Note: if this is not an HA cluster, replace 192.168.0.250:6443 with master1's address; 6443 is the apiserver default port

    # Prepare random token material. A bootstrap token has the form <token-id>.<token-secret>:
    # a 6-character id plus a 16-character secret
    head -c 16 /dev/urandom | od -An -t x | tr -d ' '
    # e.g.: 737b177d9823531a433e368fcdb16f5f (32 chars; we only need 16 for the secret)
    # Generate exactly 16 characters:
    head -c 8 /dev/urandom | od -An -t x | tr -d ' '
    # e.g.: d683399b7a553977

    # Set the cluster
    kubectl config set-cluster kubernetes \
        --certificate-authority=/etc/kubernetes/pki/ca.pem \
        --embed-certs=true \
        --server=https://192.168.0.250:6443 \
        --kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf
    # Set the credentials
    kubectl config set-credentials tls-bootstrap-token-user \
        --token=l6fy8c.d683399b7a553977 \
        --kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf
    # Set the context
    kubectl config set-context tls-bootstrap-token-user@kubernetes \
        --cluster=kubernetes \
        --user=tls-bootstrap-token-user \
        --kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf
    # Use the context
    kubectl config use-context tls-bootstrap-token-user@kubernetes \
        --kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf

    2. Grant kubectl Permissions on master1

    Whether kubectl can operate the cluster depends on whether /root/.kube contains a config file; that config is the admin.conf we generated earlier, which carries admin permissions

    # Only on master1: in production we let a single machine hold cluster-admin access, which is easier to control
    mkdir -p /root/.kube ;
    cp /etc/kubernetes/admin.conf /root/.kube/config

    # Verify
    kubectl get nodes
    # The load balancer's port 6443 must be open on the network
    [root@k8s-master1 ~]# kubectl get nodes
    No resources found
    # This means we can already reach the apiserver and fetch resources
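
    Now that kubectl works, you can also check the control-plane components and etcd in one shot. kubectl get cs (componentstatuses) is deprecated in 1.21 but still reports here; the exact output can vary by version:

    kubectl get cs
    # NAME                 STATUS    MESSAGE             ERROR
    # scheduler            Healthy   ok
    # controller-manager   Healthy   ok
    # etcd-0               Healthy   {"health":"true"}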

    3. Create the Cluster Bootstrap Permission File

    # Prepare this file on the master
    vi /etc/kubernetes/bootstrap.secret.yaml
    apiVersion: v1
    kind: Secret
    metadata:
      name: bootstrap-token-l6fy8c
      namespace: kube-system
    type: bootstrap.kubernetes.io/token
    stringData:
      description: "The default bootstrap token generated by 'kubelet '."
      token-id: l6fy8c
      token-secret: d683399b7a553977
      usage-bootstrap-authentication: "true"
      usage-bootstrap-signing: "true"
      auth-extra-groups: system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: kubelet-bootstrap
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:node-bootstrapper
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:bootstrappers:default-node-token
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: node-autoapprove-bootstrap
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:bootstrappers:default-node-token
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: node-autoapprove-certificate-rotation
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:nodes
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      annotations:
        rbac.authorization.kubernetes.io/autoupdate: "true"
      labels:
        kubernetes.io/bootstrapping: rbac-defaults
      name: system:kube-apiserver-to-kubelet
    rules:
    - apiGroups:
      - ""
      resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      verbs:
      - "*"
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: system:kube-apiserver
      namespace: ""
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:kube-apiserver-to-kubelet
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: User
      name: kube-apiserver

    # Apply the resources in this file
    kubectl create -f /etc/kubernetes/bootstrap.secret.yaml
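
    If kubelets later fail to bootstrap, the first thing to check is that this token exists and matches the id.secret pair embedded in bootstrap-kubelet.conf (a quick check added here, not part of the original steps):

    kubectl -n kube-system get secret bootstrap-token-l6fy8c
    # Compare with the token in the bootstrap kubeconfig (should be l6fy8c.d683399b7a553977)
    grep token /etc/kubernetes/bootstrap-kubelet.conf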

    IX. Bootstrapping the Node Kubelets

    The kubelet on every node is started through this bootstrap mechanism

    1. Distribute Core Certificates to the Nodes

    master1 sends the core certificates to the other nodes

    cd /etc/kubernetes/ # review what is here
    # Copy the certificates and bootstrap credentials to every other node
    for NODE in k8s-master2 k8s-master3 k8s-node1 k8s-node2 k8s-node3; do
        ssh $NODE mkdir -p /etc/kubernetes/pki/etcd
        for FILE in ca.pem etcd.pem etcd-key.pem; do
            scp /etc/kubernetes/pki/etcd/$FILE $NODE:/etc/kubernetes/pki/etcd/
        done
        for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.conf; do
            scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}
        done
    done

    2. Configure kubelet on All Nodes

    # Create the needed directories on all nodes
    mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/
    ## Every node must have kubelet and kube-proxy
    for NODE in k8s-master2 k8s-master3 k8s-node3 k8s-node1 k8s-node2; do
        scp -r /etc/kubernetes/* root@$NODE:/etc/kubernetes/
    done

    2.1 Create kubelet.service

    # Configure the kubelet service on all nodes
    vi /usr/lib/systemd/system/kubelet.service
    [Unit]
    Description=Kubernetes Kubelet
    Documentation=https://github.com/kubernetes/kubernetes
    After=docker.service
    Requires=docker.service

    [Service]
    ExecStart=/usr/local/bin/kubelet
    Restart=always
    StartLimitInterval=0
    RestartSec=10

    [Install]
    WantedBy=multi-user.target

    # Configure the kubelet service drop-in on all nodes
    vi /etc/systemd/system/kubelet.service.d/10-kubelet.conf
    [Service]
    Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
    Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
    Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/pause:3.4.1"
    Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
    ExecStart=
    ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS

    2.2 Create kubelet-conf.yml

    # Configure the kubelet-conf file on all nodes
    vi /etc/kubernetes/kubelet-conf.yml
    # clusterDNS is the 10th IP of the service network; change to your own, e.g. 10.96.0.10
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    address: 0.0.0.0
    port: 10250
    readOnlyPort: 10255
    authentication:
      anonymous:
        enabled: false
      webhook:
        cacheTTL: 2m0s
        enabled: true
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.pem
    authorization:
      mode: Webhook
      webhook:
        cacheAuthorizedTTL: 5m0s
        cacheUnauthorizedTTL: 30s
    cgroupDriver: systemd
    cgroupsPerQOS: true
    clusterDNS:
    - 10.96.0.10
    clusterDomain: cluster.local
    containerLogMaxFiles: 5
    containerLogMaxSize: 10Mi
    contentType: application/vnd.kubernetes.protobuf
    cpuCFSQuota: true
    cpuManagerPolicy: none
    cpuManagerReconcilePeriod: 10s
    enableControllerAttachDetach: true
    enableDebuggingHandlers: true
    enforceNodeAllocatable:
    - pods
    eventBurst: 10
    eventRecordQPS: 5
    evictionHard:
      imagefs.available: 15%
      memory.available: 100Mi
      nodefs.available: 10%
      nodefs.inodesFree: 5%
    evictionPressureTransitionPeriod: 5m0s # tune the eviction settings down for smaller machines
    failSwapOn: true
    fileCheckFrequency: 20s
    hairpinMode: promiscuous-bridge
    healthzBindAddress: 127.0.0.1
    healthzPort: 10248
    httpCheckFrequency: 20s
    imageGCHighThresholdPercent: 85
    imageGCLowThresholdPercent: 80
    imageMinimumGCAge: 2m0s
    iptablesDropBit: 15
    iptablesMasqueradeBit: 14
    kubeAPIBurst: 10
    kubeAPIQPS: 5
    makeIPTablesUtilChains: true
    maxOpenFiles: 1000000
    maxPods: 110
    nodeStatusUpdateFrequency: 10s
    oomScoreAdj: -999
    podPidsLimit: -1
    registryBurst: 10
    registryPullQPS: 5
    resolvConf: /etc/resolv.conf
    rotateCertificates: true
    runtimeRequestTimeout: 2m0s
    serializeImagePulls: true
    staticPodPath: /etc/kubernetes/manifests
    streamingConnectionIdleTimeout: 4h0m0s
    syncFrequency: 1m0s
    volumeStatsAggPeriod: 1m0s

    2.3 Start kubelet on All Nodes

    systemctl daemon-reload && systemctl enable --now kubelet
    systemctl status kubelet

    The status will report "Unable to update cni config".

    That is expected; we configure the cni network next
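
    At this point the bootstrap should already have registered the nodes; from master1 you can watch them appear, stuck in NotReady until calico is deployed (a quick check added here, not part of the original steps):

    kubectl get nodes
    # All nodes show NotReady here; that is expected before the CNI plugin (calico) is installed
    kubectl get csr
    # The node CSRs should show Approved,Issued thanks to the auto-approve bindings created above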

    3. kube-proxy Configuration

    Note: if this is not an HA cluster, replace 192.168.0.250:6443 with master1's address; 6443 is the apiserver default port

    3.1 Generate kube-proxy.conf

    Run the following on master1

    # Create the kube-proxy ServiceAccount
    kubectl -n kube-system create serviceaccount kube-proxy
    # Bind it to the node-proxier role
    kubectl create clusterrolebinding system:kube-proxy \
        --clusterrole system:node-proxier \
        --serviceaccount kube-system:kube-proxy
    # Export variables for the steps below
    SECRET=$(kubectl -n kube-system get sa/kube-proxy --output=jsonpath='{.secrets[0].name}')
    JWT_TOKEN=$(kubectl -n kube-system get secret/$SECRET --output=jsonpath='{.data.token}' | base64 -d)
    PKI_DIR=/etc/kubernetes/pki
    K8S_DIR=/etc/kubernetes
    # Generate the kube-proxy kubeconfig
    # --server: your apiserver or load balancer address
    kubectl config set-cluster kubernetes \
        --certificate-authority=/etc/kubernetes/pki/ca.pem \
        --embed-certs=true \
        --server=https://192.168.0.250:6443 \
        --kubeconfig=${K8S_DIR}/kube-proxy.conf
    # kube-proxy credentials (the ServiceAccount token)
    kubectl config set-credentials kubernetes \
        --token=${JWT_TOKEN} \
        --kubeconfig=/etc/kubernetes/kube-proxy.conf
    kubectl config set-context kubernetes \
        --cluster=kubernetes \
        --user=kubernetes \
        --kubeconfig=/etc/kubernetes/kube-proxy.conf
    kubectl config use-context kubernetes \
        --kubeconfig=/etc/kubernetes/kube-proxy.conf

    # Send the generated kube-proxy.conf to every node
    for NODE in k8s-master2 k8s-master3 k8s-node1 k8s-node2 k8s-node3; do
        scp /etc/kubernetes/kube-proxy.conf $NODE:/etc/kubernetes/
    done

    3.2 Configure kube-proxy.service

    # Configure the kube-proxy.service unit on all nodes; it will be enabled at boot shortly
    vi /usr/lib/systemd/system/kube-proxy.service
    [Unit]
    Description=Kubernetes Kube Proxy
    Documentation=https://github.com/kubernetes/kubernetes
    After=network.target

    [Service]
    ExecStart=/usr/local/bin/kube-proxy \
        --config=/etc/kubernetes/kube-proxy.yaml \
        --v=2
    Restart=always
    RestartSec=10s

    [Install]
    WantedBy=multi-user.target

    3.3 Prepare kube-proxy.yaml

    Be sure to change the Pod CIDR to your own range

    # Run on all machines
    vi /etc/kubernetes/kube-proxy.yaml
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    clientConnection:
      acceptContentTypes: ""
      burst: 10
      contentType: application/vnd.kubernetes.protobuf
      kubeconfig: /etc/kubernetes/kube-proxy.conf # the kube-proxy kubeconfig generated above
      qps: 5
    clusterCIDR: 196.16.0.0/16 # change to your own Pod CIDR
    configSyncPeriod: 15m0s
    conntrack:
      max: null
      maxPerCore: 32768
      min: 131072
      tcpCloseWaitTimeout: 1h0m0s
      tcpEstablishedTimeout: 24h0m0s
    enableProfiling: false
    healthzBindAddress: 0.0.0.0:10256
    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: 14
      minSyncPeriod: 0s
      syncPeriod: 30s
    ipvs:
      masqueradeAll: true
      minSyncPeriod: 5s
      scheduler: "rr"
      syncPeriod: 30s
    kind: KubeProxyConfiguration
    metricsBindAddress: 127.0.0.1:10249
    mode: "ipvs"
    nodePortAddresses: null
    oomScoreAdj: -999
    portRange: ""
    udpIdleTimeout: 250ms

    3.4 Start kube-proxy

    Start on all nodes

    systemctl daemon-reload && systemctl enable --now kube-proxy
    systemctl status kube-proxy
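
    Since kube-proxy runs in ipvs mode, the ipvsadm tool installed earlier lets you confirm it is programming virtual servers (a quick check added here, not part of the original steps):

    # Once services exist you should see the kubernetes service VIP (10.96.0.1:443)
    # forwarding to the three apiserver endpoints
    ipvsadm -Ln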

    X. Deploy calico

    You can follow the calico on-premises deployment guide

    # Download the official calico manifest
    curl https://docs.projectcalico.org/manifests/calico-etcd.yaml -o calico.yaml
    ## Optionally switch the images to a domestic mirror
    # Customize: point calico at our etcd cluster
    sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://192.168.0.10:2379,https://192.168.0.11:2379,https://192.168.0.12:2379"#g' calico.yaml
    # The etcd certificate contents must be base64-encoded into the yaml
    ETCD_CA=`cat /etc/kubernetes/pki/etcd/ca.pem | base64 -w 0 `
    ETCD_CERT=`cat /etc/kubernetes/pki/etcd/etcd.pem | base64 -w 0 `
    ETCD_KEY=`cat /etc/kubernetes/pki/etcd/etcd-key.pem | base64 -w 0 `
    # Substitute the base64-encoded certificate contents into the manifest
    sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico.yaml
    # Enable the etcd_ca etc. defaults (mounted from the secret calico creates)
    sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico.yaml
    # Set our own Pod CIDR 196.16.0.0/16
    POD_SUBNET="196.16.0.0/16"
    sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@# value: "192.168.0.0/16"@ value: '"${POD_SUBNET}"'@g' calico.yaml
    # Double-check that the edits took effect
    grep "CALICO_IPV4POOL_CIDR" calico.yaml -A 1

    # Apply the calico manifest
    kubectl apply -f calico.yaml
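
    calico pulls several images, so give it a minute; once the calico-node daemonset is running everywhere the nodes flip to Ready (a quick check added here, not part of the original steps):

    # Watch the calico pods come up in kube-system
    kubectl get pods -n kube-system -w
    # When calico-node is Running on every machine, the nodes turn Ready
    kubectl get nodes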

    XI. Deploy coreDNS

    git clone https://github.com/coredns/deployment.git
    cd deployment/kubernetes
    # 10.96.0.10 is the 10th IP of the service CIDR; change to your own
    ./deploy.sh -s -i 10.96.0.10 | kubectl apply -f -
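
    A quick way to confirm cluster DNS works end to end is to resolve the kubernetes service from a throwaway pod (a minimal check added here, assuming the busybox:1.28 image is pullable from your nodes; 1.28 is commonly used because newer busybox builds have a broken nslookup):

    kubectl run dns-test --image=busybox:1.28 --rm -it --restart=Never \
      -- nslookup kubernetes.default.svc.cluster.local
    # Expect an answer from 10.96.0.10 resolving the name to 10.96.0.1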

    XII. Apply Role Labels to the Machines

    kubectl label node k8s-master1 node-role.kubernetes.io/master=''
    kubectl label node k8s-master2 node-role.kubernetes.io/master=''
    kubectl label node k8s-master3 node-role.kubernetes.io/master=''

    XIII. Cluster Verification

    • Verify Pod network reachability

      • Pods in the same namespace and in different namespaces can reach each other by ip

      • Pods deployed on different machines can also reach each other

    • Verify Service network reachability

      • Cluster machines can reach a serviceIp, load-balanced across its endpoints

      • Pods can resolve service domains: serviceName.namespace

      • Pods can reach services in other namespaces

    # Deploy the following to test
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-01
      namespace: default
      labels:
        app: nginx-01
    spec:
      selector:
        matchLabels:
          app: nginx-01
      replicas: 1
      template:
        metadata:
          labels:
            app: nginx-01
        spec:
          containers:
          - name: nginx-01
            image: nginx
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-svc
      namespace: default
    spec:
      selector:
        app: nginx-01
      type: ClusterIP
      ports:
      - name: nginx-svc
        port: 80
        targetPort: 80
        protocol: TCP
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: hello
    spec: {}
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-hello
      namespace: hello
      labels:
        app: nginx-hello
    spec:
      selector:
        matchLabels:
          app: nginx-hello
      replicas: 1
      template:
        metadata:
          labels:
            app: nginx-hello
        spec:
          containers:
          - name: nginx-hello
            image: nginx
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-svc-hello
      namespace: hello
    spec:
      selector:
        app: nginx-hello
      type: ClusterIP
      ports:
      - name: nginx-svc-hello
        port: 80
        targetPort: 80
        protocol: TCP
    # Label the worker machines (including master3, which also runs workloads) with the worker role
    kubectl label node k8s-node1 node-role.kubernetes.io/worker=''
    kubectl label node k8s-node2 node-role.kubernetes.io/worker=''
    kubectl label node k8s-node3 node-role.kubernetes.io/worker=''
    kubectl label node k8s-master3 node-role.kubernetes.io/worker=''
    # Taint master1. A cluster deployed from binaries has no master taints by default, so anything can be
    # scheduled anywhere; tainting at least one master keeps a minimal dedicated control plane
    kubectl taint nodes k8s-master1 node-role.kubernetes.io/master=:NoSchedule
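
    With the test resources applied, the checklist above can be exercised from the command line (a sketch added here; the service IPs on your cluster will differ):

    # Find the pod and service addresses
    kubectl get pods -o wide -A | grep nginx
    kubectl get svc -A | grep nginx-svc

    # From any cluster machine: curl each service ClusterIP (substitute your own IPs)
    # curl <nginx-svc-cluster-ip>
    # curl <nginx-svc-hello-cluster-ip>

    # From inside a pod: reach a service in the same namespace and in another namespace by DNS name
    kubectl run net-test --image=busybox:1.28 --rm -it --restart=Never -- sh -c \
      'wget -qO- nginx-svc.default && wget -qO- nginx-svc-hello.hello'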
