K8s Installation Tutorial


Installing KubeSphere on Kubernetes

1. Installation Steps

• Provision three pay-as-you-go machines for this experiment: 4 cores / 8 GB (master), 8 cores / 16 GB (node1), 8 cores / 16 GB (node2), all running CentOS 7.9
• Install Docker
• Install Kubernetes
• Install the KubeSphere prerequisites
• Install KubeSphere

In my own testing, Tencent Cloud turned out to be cheaper; for just trying this out, 2-core machines across the board are enough.

1. Install Docker

sudo yum remove docker*
sudo yum install -y yum-utils

# Configure the Docker yum repository
sudo yum-config-manager \
  --add-repo \
  http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# To pin a specific version (heads-up: the versions below no longer exist in the repo):
# sudo yum install -y docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io-1.4.6
sudo yum install -y docker-ce docker-ce-cli containerd.io

# Start Docker and enable it at boot
systemctl enable docker --now

# Configure a registry mirror and the systemd cgroup driver
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://82m9ar63.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
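Optionally, confirm that Docker actually picked up the daemon.json settings. A minimal sketch; the parsing helper below is our own (not part of Docker), and on a real node you would feed it the output of docker info:

```shell
#!/usr/bin/env sh
# Helper (ours, for illustration): pull the "Cgroup Driver" value out of
# `docker info`-style text. With the daemon.json above it should be "systemd".
cgroup_driver() {
  printf '%s\n' "$1" | awk -F': ' '/Cgroup Driver/ {print $2; exit}'
}

# On a real node:
#   cgroup_driver "$(docker info 2>/dev/null)"
```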

2. Install Kubernetes

1. Basic Environment

Every machine must be reachable over its private IP (that is, the nodes can communicate with each other).

Give every machine its own hostname; do not use localhost.

# Set each machine's hostname: use "master" for the master node, "node1", "node2", ... for workers
hostnamectl set-hostname xxx

#====================== Run everything below on every machine ======================
# Set SELinux to permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Disable swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

# Let iptables see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
#====================== End of the section to run on every machine ======================
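Before moving on, it is worth verifying that swap really is off on each node (kubeadm refuses to run otherwise). A small sketch; the helper parses /proc/swaps-style text, so the logic itself can be tested anywhere:

```shell
#!/usr/bin/env sh
# Helper (ours, for illustration): /proc/swaps contains only its header line
# when no swap device is active.
swap_is_off() {
  [ "$(wc -l < "$1")" -le 1 ]
}

# On a real node:
#   swap_is_off /proc/swaps && echo "swap disabled"
#   sysctl net.bridge.bridge-nf-call-iptables   # should print "= 1"
```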

2. Install kubelet, kubeadm, and kubectl

Notes:

1. Run the commands below on every machine.
2. About the line: echo "172.31.0.2  master" >> /etc/hosts
   - 172.31.0.2 is the master node's private IP (look it up with: ip a)
   - master is the hostname we assigned to the master node above

# Configure the Kubernetes yum repository
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Install kubelet, kubeadm, kubectl
sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9

# Start kubelet
sudo systemctl enable --now kubelet

# Heads-up: 172.31.0.2 is the master's private IP (look it up with: ip a),
# and "master" is the hostname we assigned to the master node above.
# Map the master hostname on ALL machines.
echo "172.31.0.2 master" >> /etc/hosts

Note: run the commands above on every node, then ping between the nodes; the point is to verify that each worker node can reach the master through the hostname "master".
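One caveat with the plain echo ... >> /etc/hosts approach: re-running it appends a duplicate line every time. A hedged, idempotent alternative (the helper function and the 172.31.0.2 address are illustrative):

```shell
#!/usr/bin/env sh
# Helper (ours, for illustration): append "IP NAME" to a hosts file only if
# NAME is not already mapped in it.
add_host_entry() {
  file=$1 ip=$2 name=$3
  grep -qE "[[:space:]]$name\$" "$file" || echo "$ip $name" >> "$file"
}

# On a real node:
#   add_host_entry /etc/hosts 172.31.0.2 master
#   ping -c 1 master
```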

3. Initialize the Master Node

1. Initialization

Notes:

Run the command below only on the master node.

1. apiserver-advertise-address: the master node's private IP (look it up with: ip a).
2. Everything else can be left as-is.
3. service-cidr: Service resources get their IPs from this network range.
4. pod-network-cidr: Pod resources get their IPs from this network range.
5. control-plane-endpoint: the master node's hostname, master.

kubeadm init \
  --apiserver-advertise-address=172.31.0.2 \
  --control-plane-endpoint=master \
  --image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
  --kubernetes-version v1.20.9 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=192.168.0.0/16

2. What the Command Prints

After the init command finishes (give it a moment), kubeadm prints follow-up instructions. Save this log; you will need the commands in it.

Notes:

1. The first block of the output (copying the admin kubeconfig) is run on the master node only.
2. For the Pod network we use the Calico plugin, configured in the next step.
3. The control-plane join command is for machines that should become additional master nodes.
4. The worker join command is for machines that should join as worker nodes.
5. After running the first block, install the Calico network plugin: download its YAML manifest and apply it.


     

3. Install the Calico Network Plugin

Note: run both commands on the master node.

Heads-up: the download below fails for our setup, because the manifest at the default URL targets a newer Kubernetes version than v1.20.9:

curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml

Use this version instead:

curl https://docs.projectcalico.org/v3.18/manifests/calico.yaml -O
kubectl apply -f calico.yaml

After running the commands above, you can watch the Pods come up on the master node.

4. Join Worker Nodes

Run the worker join command (from the kubeadm init log) on each worker node, then check the nodes from the master.

Note: the join token has a limited lifetime. Once it expires, run the following on the master node to print a fresh join command:

# Print a new join command with a fresh token
kubeadm token create --print-join-command

If you want a highly available deployment, this is also the step where you would add more master nodes, using the control-plane join command.
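A pasted join command that gets rejected is often just copy/paste damage to the bootstrap token, which has the documented TokenID.TokenSecret shape: six lowercase alphanumeric characters, a dot, then sixteen. A quick format check (the helper below is our own sketch):

```shell
#!/usr/bin/env sh
# Helper (ours, for illustration): check that a string looks like a kubeadm
# bootstrap token, i.e. [a-z0-9]{6}.[a-z0-9]{16}.
valid_token() {
  printf '%s' "$1" | grep -qE '^[a-z0-9]{6}\.[a-z0-9]{16}$'
}

# Example:
#   valid_token "abcdef.0123456789abcdef" && echo "token format ok"
```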

Summary: the Kubernetes installation is now complete.

5. Deploy the Dashboard (the Kubernetes Web UI)

Note: this step is optional; the dashboard just makes it easier to inspect the cluster.

1. Deploy

The official Kubernetes web console:

https://github.com/kubernetes/dashboard

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml

Or apply the manifest below directly:

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.3.1
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
            # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.6
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

2. Expose an Access Port

Note: the command below opens the Service definition in an editor. Change "type: ClusterIP" to "type: NodePort": a ClusterIP Service is only reachable from inside the cluster, while a NodePort Service is reachable from outside.

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

Then find the NodePort that was assigned to the dashboard and open it in your server's security group:

kubectl get svc -A | grep kubernetes-dashboard
## Find the port, then allow it in the security group

Access: https://<any-node-IP>:<port>, for example https://139.198.165.238:32759 (substitute your own IP address and port).
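The assigned port can also be pulled out of the kubectl output programmatically. A sketch; the parsing helper is ours and assumes the default "443:NNNNN/TCP" column format of kubectl get svc:

```shell
#!/usr/bin/env sh
# Helper (ours, for illustration): extract the node port from a PORT(S) cell
# such as "443:32759/TCP".
node_port() {
  printf '%s\n' "$1" | sed -nE 's/.*:([0-9]+)\/TCP.*/\1/p'
}

# On a real cluster:
#   node_port "$(kubectl get svc -n kubernetes-dashboard kubernetes-dashboard --no-headers)"
# or, without parsing text at all:
#   kubectl get svc -n kubernetes-dashboard kubernetes-dashboard -o jsonpath='{.spec.ports[0].nodePort}'
```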

The next step shows how to obtain the login token.

3. Create an Access Account

Note: you need this account to be able to log in to the dashboard.

# Create an access account; save the following as a YAML file: vi dash.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

kubectl apply -f dash.yaml

4. Log In with a Token

# Get the access token
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"

This prints one long token string; copy it into the login page and ignore any other output.

Note: the dashboard must be accessed over https://.

5. The Console UI

3. Install the KubeSphere Prerequisites

1. NFS File System

1. Install nfs-server

Notes:

1. Only "yum install -y nfs-utils" runs on every machine; all other commands run on the master node only.
2. This makes the master node act as the NFS server.

# On every machine
yum install -y nfs-utils

# On the master only: export the shared directory
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports

# Create the shared directory
mkdir -p /nfs/data

# On the master: start the NFS services
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server

# Apply the export configuration
exportfs -r
# Verify that the configuration took effect
exportfs

2. Configure nfs-client (Optional)

Note: run these commands only on the worker nodes; they act as NFS clients (172.31.0.2 is the master's, i.e. the NFS server's, private IP):

showmount -e 172.31.0.2
mkdir -p /nfs/data
mount -t nfs 172.31.0.2:/nfs/data /nfs/data
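To confirm the mount actually succeeded, check the mount table. A sketch; the helper parses mount-style output and 172.31.0.2 is the example master IP from above:

```shell
#!/usr/bin/env sh
# Helper (ours, for illustration): check `mount`-style output for an NFS
# entry at the given mount point ("type nfs" also matches "type nfs4").
nfs_mounted() {
  printf '%s\n' "$1" | grep -q " on $2 type nfs"
}

# On a worker node:
#   nfs_mounted "$(mount)" /nfs/data && echo "NFS share mounted"
#   df -h /nfs/data
```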

3. Configure the Default Storage Class

Configure a default storage class with dynamic provisioning, so storage can be allocated on demand; KubeSphere requires this. Apply the manifest on the master node.

Replace every occurrence of the master node's private IP (172.31.0.2) in the manifest below with your own.

## Create a StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"  ## whether to archive the PV's contents when the PV is deleted
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
          # resources:
          #   limits:
          #     cpu: 10m
          #   requests:
          #     cpu: 10m
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 172.31.0.2  ## your NFS server address
            - name: NFS_PATH
              value: /nfs/data  ## the directory shared by the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.31.0.2
            path: /nfs/data
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
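After applying the manifest, the new class should show up as the cluster default. A sketch; the name nfs-storage comes from the manifest above, and the parsing helper is our own:

```shell
#!/usr/bin/env sh
# Helper (ours, for illustration): given `kubectl get sc` output, print the
# name of the class marked "(default)".
default_sc() {
  printf '%s\n' "$1" | awk '/\(default\)/ {print $1; exit}'
}

# On the master:
#   default_sc "$(kubectl get sc)"   # expected to print: nfs-storage
```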

Once the manifest above has been applied, continue with the commands below.

4. Install KubeSphere

KubeSphere is a container platform for cloud-native applications: a PaaS solution that supports multi-cluster Kubernetes management.

1. Download the Core Files

If the downloads fail, copy the file contents from the appendix instead.

wget https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/kubesphere-installer.yaml
wget https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/cluster-configuration.yaml

2. Modify cluster-configuration

In cluster-configuration.yaml, specify which features to enable.

Following "Enable Pluggable Components" in the official KubeSphere docs, the full configuration with every component switched to true looks like this:

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.1.1
spec:
  persistence:
    storageClass: ""        # If there is no default StorageClass in your cluster, you need to specify an existing StorageClass here.
  authentication:
    jwtSecret: ""           # Keep the jwtSecret consistent with the Host Cluster. Retrieve the jwtSecret by executing "kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret" on the Host Cluster.
  local_registry: ""        # Add your private registry address if it is needed.
  etcd:
    monitoring: true        # Enable or disable etcd monitoring dashboard installation. You have to create a Secret for etcd before you enable it.
    endpointIps: 172.31.0.2  # etcd cluster EndpointIps. It can be a bunch of IPs here.
    port: 2379              # etcd port.
    tlsEnable: true
  common:
    redis:
      enabled: true
    openldap:
      enabled: true
    minioVolumeSize: 20Gi      # Minio PVC size.
    openldapVolumeSize: 2Gi    # openldap PVC size.
    redisVolumSize: 2Gi        # Redis PVC size.
    monitoring:
      # type: external        # Whether to specify the external prometheus stack, and need to modify the endpoint at the next line.
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090 # Prometheus endpoint to get metrics data.
    es:   # Storage backend for logging, events and auditing.
      # elasticsearchMasterReplicas: 1   # The total number of master nodes. Even numbers are not allowed.
      # elasticsearchDataReplicas: 1     # The total number of data nodes.
      elasticsearchMasterVolumeSize: 4Gi   # The volume size of Elasticsearch master nodes.
      elasticsearchDataVolumeSize: 20Gi    # The volume size of Elasticsearch data nodes.
      logMaxAge: 7             # Log retention time in built-in Elasticsearch. It is 7 days by default.
      elkPrefix: logstash      # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchUrl: ""
      externalElasticsearchPort: ""
  console:
    enableMultiLogin: true  # Enable or disable simultaneous logins. It allows different users to log in with the same account at the same time.
    port: 30880
  alerting:            # (CPU: 0.1 Core, Memory: 100 MiB) It enables users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from.
    enabled: true      # Enable or disable the KubeSphere Alerting System.
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:            # Provide a security-relevant chronological set of records, recording the sequence of activities happening on the platform, initiated by different tenants.
    enabled: true      # Enable or disable the KubeSphere Auditing Log System.
  devops:              # (CPU: 0.47 Core, Memory: 8.6 G) Provide an out-of-the-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image.
    enabled: true      # Enable or disable the KubeSphere DevOps System.
    jenkinsMemoryLim: 2Gi       # Jenkins memory limit.
    jenkinsMemoryReq: 1500Mi    # Jenkins memory request.
    jenkinsVolumeSize: 8Gi      # Jenkins volume size.
    jenkinsJavaOpts_Xms: 512m   # The following three fields are JVM parameters.
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:              # Provide a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters.
    enabled: true      # Enable or disable the KubeSphere Events System.
    ruler:
      enabled: true
      replicas: 2
  logging:             # (CPU: 57 m, Memory: 2.76 G) Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd.
    enabled: true      # Enable or disable the KubeSphere Logging System.
    logsidecar:
      enabled: true
      replicas: 2
  metrics_server:      # (CPU: 56 m, Memory: 44.35 MiB) It enables HPA (Horizontal Pod Autoscaler).
    enabled: false     # Enable or disable metrics-server.
  monitoring:
    storageClass: ""   # If there is an independent StorageClass you need for Prometheus, you can specify it here. The default StorageClass is used by default.
    # prometheusReplicas: 1    # Prometheus replicas are responsible for monitoring different segments of data source and providing high availability.
    prometheusMemoryRequest: 400Mi   # Prometheus request memory.
    prometheusVolumeSize: 20Gi       # Prometheus PVC size.
    # alertmanagerReplicas: 1        # AlertManager Replicas.
  multicluster:
    clusterRole: none  # host | member | none  # You can install a solo cluster, or specify it as the Host or Member Cluster.
  network:
    networkpolicy:     # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods).
      # Make sure that the CNI network plugin used by the cluster supports NetworkPolicy. There are a number of CNI network plugins that support NetworkPolicy, including Calico, Cilium, Kube-router, Romana and Weave Net.
      enabled: true    # Enable or disable network policies.
    ippool:            # Use Pod IP Pools to manage the Pod network address space. Pods to be created can be assigned IP addresses from a Pod IP Pool.
      type: calico     # Specify "calico" for this field if Calico is used as your CNI plugin. "none" means that Pod IP Pools are disabled.
    topology:          # Use Service Topology to view Service-to-Service communication based on Weave Scope.
      type: none       # Specify "weave-scope" for this field to enable Service Topology. "none" means that Service Topology is disabled.
  openpitrix:          # An App Store that is accessible to all platform tenants. You can use it to manage apps across their entire lifecycle.
    store:
      enabled: true    # Enable or disable the KubeSphere App Store.
  servicemesh:         # (0.3 Core, 300 MiB) Provide fine-grained traffic management, observability and tracing, and visualized traffic topology.
    enabled: true      # Base component (pilot). Enable or disable KubeSphere Service Mesh (Istio-based).
  kubeedge:            # Add edge nodes to your cluster and deploy workloads on edge nodes.
    enabled: true      # Enable or disable KubeEdge.
    cloudCore:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      cloudhubPort: "10000"
      cloudhubQuicPort: "10001"
      cloudhubHttpsPort: "10002"
      cloudstreamPort: "10003"
      tunnelPort: "10004"
      cloudHub:
        advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided.
          - ""            # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided.
        nodeLimit: "100"
      service:
        cloudhubNodePort: "30000"
        cloudhubQuicNodePort: "30001"
        cloudhubHttpsNodePort: "30002"
        cloudstreamNodePort: "30003"
        tunnelNodePort: "30004"
    edgeWatcher:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      edgeWatcherAgent:
        nodeSelector: {"node-role.kubernetes.io/worker": ""}
        tolerations: []

3. Run the Installation

kubectl apply -f kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml

4. Watch the Installation Progress

Note: the command below tails the installer log; when installation finishes, it prints the console address and the default credentials.

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

When the log reaches the end, it shows the console address; open it in a browser.

Access port 30880 on any machine.

Account: admin

Password: P@88w0rd

If the Prometheus pod fails because the etcd monitoring certificates cannot be found, create them with the following command:

kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs  --from-file=etcd-client-ca.crt=/etc/kubernetes/pki/etcd/ca.crt  --from-file=etcd-client.crt=/etc/kubernetes/pki/apiserver-etcd-client.crt  --from-file=etcd-client.key=/etc/kubernetes/pki/apiserver-etcd-client.key

Appendix

1. kubesphere-installer.yaml

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: clusterconfigurations.installer.kubesphere.io
spec:
  group: installer.kubesphere.io
  versions:
  - name: v1alpha1
    served: true
    storage: true
  scope: Namespaced
  names:
    plural: clusterconfigurations
    singular: clusterconfiguration
    kind: ClusterConfiguration
    shortNames:
    - cc
---
apiVersion: v1
kind: Namespace
metadata:
  name: kubesphere-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ks-installer
  namespace: kubesphere-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ks-installer
rules:
- apiGroups:
  - ""
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apps
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - extensions
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - batch
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apiregistration.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - tenant.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - certificates.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - devops.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - monitoring.coreos.com
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - logging.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - jaegertracing.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - storage.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - policy
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - autoscaling
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - networking.istio.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - config.istio.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - iam.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - notification.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - auditing.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - events.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - core.kubefed.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - installer.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - storage.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - security.istio.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - monitoring.kiali.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - kiali.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - networking.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - kubeedge.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - types.kubefed.io
  resources:
  - '*'
  verbs:
  - '*'
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ks-installer
subjects:
- kind: ServiceAccount
  name: ks-installer
  namespace: kubesphere-system
roleRef:
  kind: ClusterRole
  name: ks-installer
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    app: ks-install
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ks-install
  template:
    metadata:
      labels:
        app: ks-install
    spec:
      serviceAccountName: ks-installer
      containers:
    261. - name: installer
    262. image: kubesphere/ks-installer:v3.1.1
    263. imagePullPolicy: "Always"
    264. resources:
    265. limits:
    266. cpu: "1"
    267. memory: 1Gi
    268. requests:
    269. cpu: 20m
    270. memory: 100Mi
    271. volumeMounts:
    272. - mountPath: /etc/localtime
    273. name: host-time
    274. volumes:
    275. - hostPath:
    276. path: /etc/localtime
    277. type: ""
    278. name: host-time

    2. cluster-configuration.yaml

    ---
    apiVersion: installer.kubesphere.io/v1alpha1
    kind: ClusterConfiguration
    metadata:
      name: ks-installer
      namespace: kubesphere-system
      labels:
        version: v3.1.1
    spec:
      persistence:
        storageClass: ""        # If there is no default StorageClass in your cluster, specify an existing StorageClass here.
      authentication:
        jwtSecret: ""           # Keep the jwtSecret consistent with the Host Cluster. Retrieve it by executing "kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v apiVersion | grep jwtSecret" on the Host Cluster.
      local_registry: ""        # Add your private registry address if it is needed.
      etcd:
        monitoring: true        # Enable or disable the etcd monitoring dashboard. You have to create a Secret for etcd before you enable it.
        endpointIps: 172.31.0.4 # etcd cluster endpoint IPs (this is the author's master-node internal IP; replace it with your own; it can be a list of IPs).
        port: 2379              # etcd port.
        tlsEnable: true
      common:
        redis:
          enabled: true
        openldap:
          enabled: true
        minioVolumeSize: 20Gi      # MinIO PVC size.
        openldapVolumeSize: 2Gi    # openldap PVC size.
        redisVolumSize: 2Gi        # Redis PVC size.
        monitoring:
          # type: external         # Whether to use an external Prometheus stack; if so, modify the endpoint on the next line.
          endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090 # Prometheus endpoint to get metrics data.
        es: # Storage backend for logging, events and auditing.
          # elasticsearchMasterReplicas: 1   # The total number of master nodes. Even numbers are not allowed.
          # elasticsearchDataReplicas: 1     # The total number of data nodes.
          elasticsearchMasterVolumeSize: 4Gi # The volume size of Elasticsearch master nodes.
          elasticsearchDataVolumeSize: 20Gi  # The volume size of Elasticsearch data nodes.
          logMaxAge: 7             # Log retention in days for the built-in Elasticsearch (7 days by default).
          elkPrefix: logstash      # The index name will be formatted as ks-<elk_prefix>-log.
          basicAuth:
            enabled: false
            username: ""
            password: ""
          externalElasticsearchUrl: ""
          externalElasticsearchPort: ""
      console:
        enableMultiLogin: true  # Enable or disable simultaneous logins; allows different users to log in with the same account at the same time.
        port: 30880
      alerting:                 # (CPU: 0.1 Core, Memory: 100 MiB) Lets users customize alerting policies to notify receivers, with configurable intervals and alerting levels.
        enabled: true           # Enable or disable the KubeSphere Alerting System.
        # thanosruler:
        #   replicas: 1
        #   resources: {}
      auditing:                 # A security-relevant chronological record of activities on the platform, initiated by different tenants.
        enabled: true           # Enable or disable the KubeSphere Auditing Log System.
      devops:                   # (CPU: 0.47 Core, Memory: 8.6 G) An out-of-the-box CI/CD system based on Jenkins, with automated workflow tools including Source-to-Image & Binary-to-Image.
        enabled: true           # Enable or disable the KubeSphere DevOps System.
        jenkinsMemoryLim: 2Gi   # Jenkins memory limit.
        jenkinsMemoryReq: 1500Mi # Jenkins memory request.
        jenkinsVolumeSize: 8Gi  # Jenkins volume size.
        jenkinsJavaOpts_Xms: 512m # The following three fields are JVM parameters.
        jenkinsJavaOpts_Xmx: 512m
        jenkinsJavaOpts_MaxRAM: 2g
      events:                   # A graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant clusters.
        enabled: true           # Enable or disable the KubeSphere Events System.
        ruler:
          enabled: true
          replicas: 2
      logging:                  # (CPU: 57 m, Memory: 2.76 G) Log query, collection and management in a unified console; additional collectors such as Elasticsearch, Kafka and Fluentd can be added.
        enabled: true           # Enable or disable the KubeSphere Logging System.
        logsidecar:
          enabled: true
          replicas: 2
      metrics_server:           # (CPU: 56 m, Memory: 44.35 MiB) Enables HPA (Horizontal Pod Autoscaler).
        enabled: false          # Enable or disable metrics-server.
      monitoring:
        storageClass: ""        # An independent StorageClass for Prometheus, if needed; otherwise the default StorageClass is used.
        # prometheusReplicas: 1 # Prometheus replicas monitor different segments of the data source and provide high availability.
        prometheusMemoryRequest: 400Mi # Prometheus memory request.
        prometheusVolumeSize: 20Gi     # Prometheus PVC size.
        # alertmanagerReplicas: 1      # AlertManager replicas.
      multicluster:
        clusterRole: none       # host | member | none # Install a solo cluster, or specify it as a Host or Member Cluster.
      network:
        networkpolicy:          # Network policies allow network isolation within the same cluster (firewalls between Pods). Make sure your CNI plugin supports NetworkPolicy (e.g. Calico, Cilium, Kube-router, Romana, Weave Net).
          enabled: true         # Enable or disable network policies.
        ippool:                 # Use Pod IP Pools to manage the Pod network address space; new Pods can be assigned IPs from a Pod IP Pool.
          type: calico          # Specify "calico" if Calico is your CNI plugin; "none" disables Pod IP Pools.
        topology:               # Use Service Topology to view Service-to-Service communication based on Weave Scope.
          type: none            # Specify "weave-scope" to enable Service Topology; "none" disables it.
      openpitrix:               # An App Store accessible to all platform tenants, managing apps across their entire lifecycle.
        store:
          enabled: true         # Enable or disable the KubeSphere App Store.
      servicemesh:              # (0.3 Core, 300 MiB) Fine-grained traffic management, observability, tracing and visualized traffic topology.
        enabled: true           # Base component (pilot). Enable or disable KubeSphere Service Mesh (Istio-based).
      kubeedge:                 # Add edge nodes to your cluster and deploy workloads on them.
        enabled: true           # Enable or disable KubeEdge.
        cloudCore:
          nodeSelector: {"node-role.kubernetes.io/worker": ""}
          tolerations: []
          cloudhubPort: "10000"
          cloudhubQuicPort: "10001"
          cloudhubHttpsPort: "10002"
          cloudstreamPort: "10003"
          tunnelPort: "10004"
          cloudHub:
            advertiseAddress:   # At least one public IP address, or an IP reachable by edge nodes, must be provided. Once KubeEdge is enabled, CloudCore will malfunction if this is left empty.
            - ""
            nodeLimit: "100"
          service:
            cloudhubNodePort: "30000"
            cloudhubQuicNodePort: "30001"
            cloudhubHttpsNodePort: "30002"
            cloudstreamNodePort: "30003"
            tunnelNodePort: "30004"
        edgeWatcher:
          nodeSelector: {"node-role.kubernetes.io/worker": ""}
          tolerations: []
          edgeWatcherAgent:
            nodeSelector: {"node-role.kubernetes.io/worker": ""}
            tolerations: []
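With both files saved locally, the install itself is a pair of kubectl apply calls. The sketch below assumes the two manifests are saved under the file names used in this tutorial, and that the cluster was built with kubeadm (the etcd certificate paths are the kubeadm defaults); the etcd Secret step is needed because `monitoring: true` is set under `etcd` above.

```shell
# etcd.monitoring is enabled above, so create the etcd client-cert Secret
# first (paths assume a kubeadm-built cluster; adjust if yours differ)
kubectl create ns kubesphere-monitoring-system 2>/dev/null || true
kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs \
  --from-file=etcd-client-ca.crt=/etc/kubernetes/pki/etcd/ca.crt \
  --from-file=etcd-client.crt=/etc/kubernetes/pki/apiserver-etcd-client.crt \
  --from-file=etcd-client.key=/etc/kubernetes/pki/apiserver-etcd-client.key

# Apply the installer, then its configuration
kubectl apply -f kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml

# Follow the installer log until "Welcome to KubeSphere" is printed
kubectl logs -n kubesphere-system \
  "$(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}')" -f
```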

    One-Click Kubernetes Install (via KubeKey)

    1. Install Docker

    sudo yum remove docker*
    sudo yum install -y yum-utils

    # Configure the Docker yum repository
    sudo yum-config-manager \
      --add-repo \
      http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

    # To pin specific versions instead:
    # sudo yum install -y docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io-1.4.6
    # (note: the author found the pinned versions above unavailable in the repo)
    sudo yum install -y docker-ce docker-ce-cli containerd.io

    # Start Docker now and enable it on boot
    systemctl enable docker --now

    # Registry mirror and daemon configuration
    sudo mkdir -p /etc/docker
    sudo tee /etc/docker/daemon.json <<-'EOF'
    {
      "registry-mirrors": ["https://82m9ar63.mirror.aliyuncs.com"],
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "100m"
      },
      "storage-driver": "overlay2"
    }
    EOF
    sudo systemctl daemon-reload
    sudo systemctl restart docker
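The daemon.json above switches Docker to the systemd cgroup driver, which must match the kubelet's driver for the cluster to come up cleanly. A quick sanity check after the restart:

```shell
# Docker should be running, enabled on boot, and using the systemd
# cgroup driver configured in daemon.json above
systemctl is-active docker                 # expect: active
systemctl is-enabled docker                # expect: enabled
docker info --format '{{.CgroupDriver}}'   # expect: systemd
```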

    2. Install Dependencies

    yum update -y
    # socat and conntrack are required by Kubernetes/KubeKey; curl and vim are conveniences
    yum install -y curl socat vim conntrack

    3. Disable Swap and the Firewall

    # Turn off swap (temporary; comment out the swap line in /etc/fstab to persist across reboots)
    swapoff -a
    # Check firewall status
    firewall-cmd --state
    # CentOS 7 uses firewalld by default; stop it
    systemctl stop firewalld.service
    # Prevent the firewall from starting on boot
    systemctl disable firewalld.service
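A quick check that both steps took effect (run after the commands above):

```shell
# Swap total should report 0 once swap is off
free -m | awk '/Swap/ {print $2}'        # expect: 0
# firewalld should report inactive once stopped
systemctl is-active firewalld || true    # expect: inactive
```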

    4. Install via KubeKey

    export KKZONE=cn
    curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.1 sh -
    chmod +x kk
    # Answer "yes" when prompted to confirm the node list
    ./kk create cluster --with-kubernetes v1.22.10 --with-kubesphere v3.3.0
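Once kk finishes, a minimal sanity check looks like this (the console port 30880 and the default account are KubeSphere defaults; change the password on first login):

```shell
# All nodes should be Ready and the kubesphere-system pods Running
kubectl get nodes -o wide
kubectl get pods -A

# KubeSphere console: http://<any-node-ip>:30880
# Default account: admin / P@88w0rd
```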

  • Source: https://blog.csdn.net/qq_36437693/article/details/126677700