• Enterprise Operations with Kubernetes (k8s): Storage


    Contents

    I. ConfigMap Configuration Management

    II. Secret Configuration Management

    III. Volumes Configuration Management

    IV. Kubernetes Scheduling

    V. Kubernetes Access Control


    I. ConfigMap Configuration Management

    A ConfigMap stores configuration data as key-value pairs.

    The ConfigMap resource provides a way to inject configuration data into Pods. It is meant to decouple images from configuration files, so that images stay portable and reusable.

    Typical use cases: populating the values of environment variables, setting command-line arguments inside a container, and populating configuration files in a volume.

    1. There are four ways to create a ConfigMap: from literal values, from a file, from a directory, or by writing a ConfigMap YAML manifest.

    1. ## Create from literal values, i.e. key-value pairs:
    2. [root@node22 ~]# mkdir configmap
    3. [root@node22 ~]# cd configmap/
    4. [root@node22 configmap]# ls
    5. [root@node22 configmap]# kubectl create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2
    6. configmap/my-config created
    7. [root@node22 configmap]# kubectl get cm
    8. NAME DATA AGE
    9. kube-root-ca.crt 1 3d5h
    10. my-config 2 8s
    11. [root@node22 configmap]# kubectl describe cm my-config
    12. Name: my-config
    13. Namespace: default
    14. Labels: <none>
    15. Annotations: <none>
    16. Data
    17. ====
    18. key2:
    19. ----
    20. config2
    21. key1:
    22. ----
    23. config1
    24. BinaryData
    25. ====
    26. Events: <none>
    27. ## Create from a file; the file name becomes the key and the file content becomes the value
    28. [root@node22 configmap]# kubectl create configmap my-config-2 --from-file=/etc/resolv.conf
    29. configmap/my-config-2 created
    30. [root@node22 configmap]# kubectl describe cm my-config-2
    31. Name: my-config-2
    32. Namespace: default
    33. Labels: <none>
    34. Annotations: <none>
    35. Data
    36. ====
    37. resolv.conf:
    38. ----
    39. # Generated by NetworkManager
    40. nameserver 114.114.114.114
    41. BinaryData
    42. ====
    43. Events: <none>
    44. ## Create from a directory; each file name becomes a key and its content the value
    45. [root@node22 configmap]# kubectl create configmap my-config-3 --from-file=test
    46. configmap/my-config-3 created
    47. [root@node22 configmap]# kubectl describe cm my-config-3
    48. Name: my-config-3
    49. Namespace: default
    50. Labels: <none>
    51. Annotations: <none>
    52. Data
    53. ====
    54. passwd:
    55. ----
    56. root:x:0:0:root:/root:/bin/bash
    57. bin:x:1:1:bin:/bin:/sbin/nologin
    58. daemon:x:2:2:daemon:/sbin:/sbin/nologin
    59. adm:x:3:4:adm:/var/adm:/sbin/nologin
    60. lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
    61. sync:x:5:0:sync:/sbin:/bin/sync
    62. shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
    63. fstab:
    64. ----
    65. #
    66. # /etc/fstab
    67. # Created by anaconda on Fri Aug 5 17:48:43 2022
    68. #
    69. # Accessible filesystems, by reference, are maintained under '/dev/disk'
    70. # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
    71. #
    72. /dev/mapper/rhel-root / xfs defaults 0 0
    73. UUID=d319ed7a-9c18-4cda-b34a-9e2399f1f1fc /boot xfs defaults 0 0
    74. #/dev/mapper/rhel-swap swap swap defaults 0 0
    75. BinaryData
    76. ====
    77. Events: <none>
    78. ## Create by writing a ConfigMap YAML manifest
    79. [root@node22 configmap]# vim cm1.yaml
    80. apiVersion: v1
    81. kind: ConfigMap
    82. metadata:
    83. name: cm1-config
    84. data:
    85. db_host: "192.168.0.1"
    86. db_port: "3306"
    87. [root@node22 configmap]# kubectl apply -f cm1.yaml
    88. configmap/cm1-config created
    89. [root@node22 configmap]# kubectl get cm
    90. NAME DATA AGE
    91. cm1-config 2 11s
    92. kube-root-ca.crt 1 3d5h
    93. my-config 2 9m45s
    94. my-config-2 1 5m54s
    95. my-config-3 2 3m23s
    96. [root@node22 configmap]# kubectl describe cm cm1-config
    97. Name: cm1-config
    98. Namespace: default
    99. Labels: <none>
    100. Annotations: <none>
    101. Data
    102. ====
    103. db_port:
    104. ----
    105. 3306
    106. db_host:
    107. ----
    108. 192.168.0.1
    109. BinaryData
    110. ====
    111. Events: <none>
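    As a complementary sketch (not part of the original walkthrough), on newer kubectl versions the same literal-value ConfigMap can also be emitted as a manifest with a client-side dry run, so it can be kept in version control and applied with kubectl apply -f:

    ## Generate the ConfigMap as YAML instead of creating it directly; the output file name is illustrative
    kubectl create configmap my-config \
      --from-literal=key1=config1 \
      --from-literal=key2=config2 \
      --dry-run=client -o yaml > my-config.yaml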

    2. How to use a ConfigMap:

    1. 1) Pass values to the Pod directly as environment variables
    2. [root@node22 configmap]# vim pod.yaml
    3. apiVersion: v1
    4. kind: Pod
    5. metadata:
    6. name: pod1
    7. spec:
    8. containers:
    9. - name: pod1
    10. image: busybox
    11. command: ["/bin/sh", "-c", "env"]
    12. env:
    13. - name: key1
    14. valueFrom:
    15. configMapKeyRef:
    16. name: cm1-config
    17. key: db_host
    18. - name: key2
    19. valueFrom:
    20. configMapKeyRef:
    21. name: cm1-config
    22. key: db_port
    23. restartPolicy: Never
    24. [root@node22 configmap]# kubectl apply -f pod.yaml
    25. pod/pod1 created
    26. [root@node22 configmap]# kubectl get pod
    27. NAME READY STATUS RESTARTS AGE
    28. demo 1/1 Running 1 (32m ago) 32m
    29. myapp-1-6666f57846-n8zgn 1/1 Running 0 87m
    30. pod1 0/1 Completed 0 101s
    31. [root@node22 configmap]# kubectl delete pod pod1
    32. pod "pod1" deleted
    33. [root@node22 configmap]# kubectl delete pod demo --force
    34. [root@node22 configmap]# kubectl delete deployments.apps myapp-1
    35. [root@node22 configmap]# kubectl -n test delete pod demo --force
    36. 2) Use the values on the Pod's command line
    37. ## Use a ConfigMap to set command-line arguments
    38. [root@node22 configmap]# vim pod2.yaml
    39. apiVersion: v1
    40. kind: Pod
    41. metadata:
    42. name: pod1
    43. spec:
    44. containers:
    45. - name: pod1
    46. image: busybox
    47. command: ["/bin/sh", "-c", "echo $(db_host) $(db_port)"]
    48. envFrom:
    49. - configMapRef:
    50. name: cm1-config
    51. restartPolicy: Never
    52. [root@node22 configmap]# kubectl apply -f pod2.yaml
    53. pod/pod1 created
    54. 3) Mount the ConfigMap into the Pod as a volume
    55. ## Consume a ConfigMap through a data volume
    56. [root@node22 configmap]# vim pod3.yaml
    57. apiVersion: v1
    kind: Pod
    58. metadata:
    59. name: pod2
    60. spec:
    61. containers:
    62. - name: pod2
    63. image: busybox
    64. command: ["/bin/sh", "-c", "cat /config/db_host"]
    65. volumeMounts:
    66. - name: config-volume
    67. mountPath: /config
    68. volumes:
    69. - name: config-volume
    70. configMap:
    71. name: cm1-config
    72. restartPolicy: Never
    73. [root@node22 configmap]# kubectl apply -f pod3.yaml
    74. pod/pod2 created
    75. [root@node22 configmap]# kubectl get pod
    76. NAME READY STATUS RESTARTS AGE
    77. pod2 0/1 Completed 0 57s
    78. [root@node22 configmap]# kubectl logs pod2
    79. 192.168.0.1

    Example: ConfigMap hot update

    [root@node22 configmap]# vim nginx.conf

    server {

        listen 8000;

        server_name _;

        location / {

            root /usr/share/nginx/html;

            index index.html index.htm;

        }

    }

    [root@node22 configmap]# kubectl create configmap nginxconf --from-file=nginx.conf

    configmap/nginxconf created

    [root@node22 configmap]# kubectl get cm

    NAME               DATA   AGE

    cm1-config         2      28m

    kube-root-ca.crt   1      3d6h

    my-config          2      37m

    my-config-2        1      33m

    my-config-3        2      31m

    nginxconf          1      15s

    [root@node22 configmap]# kubectl describe cm nginxconf

    Name:         nginxconf

    Namespace:    default

    Labels:       <none>

    Annotations:  <none>

    Data

    ====

    nginx.conf:

    ----

    server {

        listen 8000;

        server_name _;

        location / {

            root /usr/share/nginx/html;

            index index.html index.htm;

        }

    }

    BinaryData

    ====

    Events:  <none>

    [root@node22 configmap]# vim nginx.yaml

    apiVersion: apps/v1

    kind: Deployment

    metadata:

      name: my-nginx

    spec:

      replicas: 1

      selector:

        matchLabels:

          app: nginx

      template:

        metadata:

          labels:

            app: nginx

        spec:

          containers:

            - name: nginx

              image: nginx

              volumeMounts:

                - name: config-volume

                  mountPath: /etc/nginx/conf.d

          volumes:

            - name: config-volume

              configMap:

                name: nginxconf

    [root@node22 configmap]# kubectl apply -f nginx.yaml

    deployment.apps/my-nginx created

    [root@node22 ~]# cd calico/

    [root@node22 calico]# kubectl delete -f networkpolicy.yaml

    networkpolicy.networking.k8s.io "test-network-policy" deleted

    [root@node22 calico]# cd

    [root@node22 ~]# cd configmap/

    [root@node22 configmap]# curl 10.244.144.76

    curl: (7) Failed connect to 10.244.144.76:80; Connection refused

    [root@node22 configmap]# kubectl edit cm nginxconf

    ## Edit the nginxconf ConfigMap and change the listen port to 8080

    [root@node22 configmap]# kubectl exec my-nginx-7b84dc948c-f6vlb -- cat /etc/nginx/conf.d/nginx.conf

    server {

        listen 8080;

        server_name _;

        location / {

            root /usr/share/nginx/html;

            index index.html index.htm;

        }

    }

    [root@node22 configmap]# curl 10.244.144.76:8080

    curl: (7) Failed connect to 10.244.144.76:8080; Connection refused

    ## At this point the file inside the container does get updated (with some delay), but the new setting does not take effect; the Pods must be rolled manually so that nginx loads nginx.conf again

    [root@node22 configmap]# kubectl delete pod my-nginx-7b84dc948c-qmq9h

    pod "my-nginx-7b84dc948c-qmq9h" deleted

    [root@node22 configmap]# kubectl get pod -o wide

    NAME                        READY   STATUS    RESTARTS   AGE   IP              NODE     NOMINATED NODE   READINESS GATES

    my-nginx-7b84dc948c-qxgtb   1/1     Running   0          5s    10.244.144.80   node33   <none>           <none>

    [root@node22 configmap]# curl 10.244.144.80:8080

    Welcome to nginx!

    If you see this page, the nginx web server is successfully installed and
    working. Further configuration is required.

    For online documentation and support please refer to nginx.org.
    Commercial support is available at nginx.com.

    Thank you for using nginx.

    Change the port back to 8000:

    [root@node22 configmap]# kubectl edit cm nginxconf

    [root@node22 configmap]# kubectl patch deployments.apps my-nginx --patch '{"spec": {"template": {"metadata": {"annotations": {"version/config": "20220827"}}}}}'

    deployment.apps/my-nginx patched

    [root@node22 configmap]# kubectl get all

    NAME                            READY   STATUS    RESTARTS   AGE

    pod/my-nginx-645f5bbfd6-lrmtp   1/1     Running   0          10s

    NAME                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE

    service/kubernetes    ClusterIP   10.96.0.1        <none>        443/TCP   3d6h

    service/my-svc        ClusterIP   10.108.185.37    <none>        80/TCP    9h

    service/web-service   ClusterIP   10.109.238.119   <none>        80/TCP    38h

    NAME                       READY   UP-TO-DATE   AVAILABLE   AGE

    deployment.apps/my-nginx   1/1     1            1           30m

    NAME                                  DESIRED   CURRENT   READY   AGE

    replicaset.apps/my-nginx-645f5bbfd6   1         1         1       10s

    replicaset.apps/my-nginx-7b84dc948c   0         0         0       30m
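    A note on the patch above: bumping a template annotation changes the Pod template, which forces the Deployment to roll out new Pods that pick up the updated ConfigMap. On recent kubectl versions the same effect can be achieved with a dedicated subcommand; a minimal sketch:

    ## Equivalent way to recreate the Pods so the new ConfigMap content is loaded
    kubectl rollout restart deployment my-nginx
    kubectl rollout status deployment my-nginx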

    II. Secret Configuration Management

    The Secret object type is used to hold sensitive information such as passwords, OAuth tokens, and SSH keys. Putting this information in a Secret is safer and more flexible than putting it in a Pod definition or a container image.

    A Pod can use a Secret in two ways: mounted as files in a volume into one or more of its containers, or used by the kubelet when pulling images for the Pod.

    Secret types:

    Service Account: Kubernetes automatically creates Secrets containing credentials for accessing the API and automatically modifies Pods to use this type of Secret.

    Opaque: the data is stored base64-encoded and can be decoded with base64 --decode to recover the original values, so the protection is weak; the values of an Opaque Secret are simply base64-encoded strings.

    kubernetes.io/dockerconfigjson: used to store authentication information for a Docker registry.
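    The walkthrough below consumes Secrets as volume files and as imagePullSecrets; for completeness, here is a hedged sketch of the third common pattern, injecting a Secret key as an environment variable. The pod name is illustrative and the Secret referenced is the mysecret object created later in this section:

    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-env-demo          # illustrative name, not part of the original lab
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["/bin/sh", "-c", "echo $SECRET_USERNAME && sleep 3600"]
        env:
        - name: SECRET_USERNAME
          valueFrom:
            secretKeyRef:
              name: mysecret         # the Opaque Secret defined later in this section
              key: username
      restartPolicy: Never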

    1. 1. ## When a ServiceAccount is created, Kubernetes creates a corresponding Secret by default, and that Secret is automatically mounted into Pods at /var/run/secrets/kubernetes.io/serviceaccount
    2. [root@node22 ~]# kubectl get pod
    3. NAME READY STATUS RESTARTS AGE
    4. my-nginx-645f5bbfd6-lrmtp 1/1 Running 1 (49m ago) 13h
    5. [root@node22 ~]# kubectl describe pod
    6. Mounts:
    7. /etc/nginx/conf.d from config-volume (rw)
    8. /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r9k7t (ro)
    9. [root@node22 ~]# kubectl exec my-nginx-645f5bbfd6-lrmtp -- ls /var/run/secrets/kubernetes.io/serviceaccount
    10. ca.crt
    11. namespace
    12. token
    13. ## Every namespace has a default ServiceAccount object named default
    14. [root@node22 ~]# kubectl get sa
    15. NAME SECRETS AGE
    16. default 1 3d20h
    17. [root@node22 ~]# kubectl get sa -n test
    18. NAME SECRETS AGE
    19. default 1 16h
    20. 2. ## A ServiceAccount carries a token Secret that can be mounted into a Pod like a volume; when the Pod starts, this Secret is automatically mounted at the designated path and is used to authenticate processes in the Pod when they access the API server
    21. [root@node22 ~]# kubectl get pod my-nginx-645f5bbfd6-lrmtp -o yaml
    22. spec:
    23. containers:
    24. - image: nginx
    25. imagePullPolicy: Always
    26. name: nginx
    27. resources: {}
    28. terminationMessagePath: /dev/termination-log
    29. terminationMessagePolicy: File
    30. volumeMounts:
    31. - mountPath: /etc/nginx/conf.d
    32. name: config-volume
    33. - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
    34. name: kube-api-access-r9k7t
    35. readOnly: true
    36. 3. ## Create a Secret from files
    37. [root@node22 ~]# cd secret/
    38. [root@node22 secret]# echo -n 'admin' > ./username.txt
    39. [root@node22 secret]# echo -n 'westos' > ./password.txt
    40. [root@node22 secret]# kubectl create secret generic db-user-pass --from-file=./username.txt --from-file=./password.txt
    41. secret/db-user-pass created
    42. [root@node22 secret]# kubectl get secrets
    43. NAME TYPE DATA AGE
    44. basic-auth Opaque 1 23h
    45. db-user-pass Opaque 2 28s
    46. default-token-pf6bb kubernetes.io/service-account-token 3 3d20h
    47. tls-secret kubernetes.io/tls 2 23h
    48. [root@node22 secret]# kubectl describe secrets db-user-pass
    49. Name: db-user-pass
    50. Namespace: default
    51. Labels: <none>
    52. Annotations: <none>
    53. Type: Opaque
    54. Data
    55. ====
    56. password.txt: 6 bytes
    57. username.txt: 5 bytes
    58. By default, for safety kubectl get and kubectl describe do not display the password contents;
    59. they can be viewed as shown below.
    60. If the password contains special characters, they need to be escaped with the \ character, for example:
    61. kubectl create secret generic dev-db-secret --from-literal=username=devuser \
    62. --from-literal=password=S\!B\\*d\$zDs
    63. [root@node22 secret]# kubectl get secrets db-user-pass -o yaml
    64. apiVersion: v1
    65. data:
    66. password.txt: d2VzdG9z
    67. username.txt: YWRtaW4=
    68. kind: Secret
    69. metadata:
    70. creationTimestamp: "2022-08-28T09:59:53Z"
    71. name: db-user-pass
    72. namespace: default
    73. resourceVersion: "187151"
    74. uid: 2630512d-c437-4d41-bdc8-a4748df06835
    75. type: Opaque
    76. The passwords can be decoded as follows:
    77. [root@node22 secret]# echo d2VzdG9z | base64 -d
    78. westos
    79. [root@node22 secret]# echo YWRtaW4= | base64 -d
    80. admin
    81. 4. ## Create a Secret from a YAML manifest
    82. [root@node22 secret]# vim secret.yaml
    83. apiVersion: v1
    84. kind: Secret
    85. metadata:
    86. name: mysecret
    87. type: Opaque
    88. data:
    89. username: YWRtaW4=
    90. password: d2VzdG9z
    91. [root@node22 secret]# kubectl apply -f secret.yaml
    92. secret/mysecret created
    93. [root@node22 secret]# kubectl get secrets mysecret -o yaml
    94. apiVersion: v1
    95. data:
    96. password: d2VzdG9z
    97. username: YWRtaW4=
    98. kind: Secret
    99. metadata:
    100. annotations:
    101. kubectl.kubernetes.io/last-applied-configuration: |
    102. {"apiVersion":"v1","data":{"password":"d2VzdG9z","username":"YWRtaW4="},"kind":"Secret","metadata":{"annotations":{},"name":"mysecret","namespace":"default"},"type":"Opaque"}
    103. creationTimestamp: "2022-08-28T10:08:23Z"
    104. name: mysecret
    105. namespace: default
    106. resourceVersion: "188013"
    107. uid: cbba6019-bad6-4e95-8acb-b53b9d022bea
    108. type: Opaque
    109. ## Mount the Secret into a Pod as a volume
    110. [root@node22 secret]# vim pod.yaml
    111. apiVersion: v1
    112. kind: Pod
    113. metadata:
    114. name: mysecret
    115. spec:
    116. containers:
    117. - name: nginx
    118. image: nginx
    119. volumeMounts:
    120. - name: secrets
    121. mountPath: "/secret"
    122. readOnly: true
    123. volumes:
    124. - name: secrets
    125. secret:
    126. secretName: mysecret
    127. [root@node22 secret]# kubectl apply -f pod.yaml
    128. pod/mysecret created
    129. [root@node22 secret]# kubectl get pod
    130. NAME READY STATUS RESTARTS AGE
    131. my-nginx-645f5bbfd6-lrmtp 1/1 Running 1 (75m ago) 14h
    132. mysecret 1/1 Running 0 10s
    133. [root@node22 secret]# kubectl exec mysecret -- ls /secret
    134. password
    135. username
    136. 5. ## Map Secret keys to specific paths
    137. [root@node22 secret]# vim pod2.yaml
    138. apiVersion: v1
    139. kind: Pod
    140. metadata:
    141. name: mysecret
    142. spec:
    143. containers:
    144. - name: nginx
    145. image: nginx
    146. volumeMounts:
    147. - name: secrets
    148. mountPath: "/secret"
    149. readOnly: true
    150. volumes:
    151. - name: secrets
    152. secret:
    153. secretName: mysecret
    154. items:
    155. - key: username
    156. path: my-group/my-username
    157. [root@node22 secret]# kubectl apply -f pod2.yaml
    158. pod/mysecret created
    159. [root@node22 secret]# kubectl get pod
    160. NAME READY STATUS RESTARTS AGE
    161. my-nginx-645f5bbfd6-lrmtp 1/1 Running 1 (79m ago) 14h
    162. mysecret 1/1 Running 0 6s
    163. [root@node22 secret]# kubectl exec mysecret -- ls /secret
    164. my-group
    165. [root@node22 secret]# kubectl exec mysecret -- ls /secret/my-group
    166. my-username
    167. [root@node22 secret]# kubectl exec mysecret -- cat /secret/my-group/my-username
    168. admin
    169. ## kubernetes.io/dockerconfigjson stores authentication information for a Docker registry
    170. [root@node22 secret]# vim docker.yaml
    171. apiVersion: v1
    172. kind: Pod
    173. metadata:
    174. name: mypod
    175. spec:
    176. containers:
    177. - name: game2048
    178. image: reg.westos.org/westos/game2048
    179. #imagePullSecrets:        # registry authentication disabled for now
    180. # - name: myregistrykey
    181. [root@node22 secret]# kubectl apply -f docker.yaml
    182. pod/mypod created
    183. [root@node22 secret]# kubectl get pod        ## the image pull fails
    184. NAME READY STATUS RESTARTS AGE
    185. mypod 0/1 ImagePullBackOff 0 14s
    186. [root@node22 secret]# kubectl create secret docker-registry myregistrykey --docker-server=reg.westos.org --docker-username=admin --docker-password=westos --docker-email=zcx0216@westos.org
    187. secret/myregistrykey created        ## registry credentials created
    188. [root@node22 secret]# kubectl get secrets
    189. NAME TYPE DATA AGE
    190. basic-auth Opaque 1 23h
    191. db-user-pass Opaque 2 28m
    192. default-token-pf6bb kubernetes.io/service-account-token 3 3d21h
    193. myregistrykey kubernetes.io/dockerconfigjson 1 40s        ## registry credentials
    194. mysecret Opaque 2 20m
    195. tls-secret kubernetes.io/tls 2 23h
    196. [root@node22 secret]# vim docker.yaml
    197. apiVersion: v1
    198. kind: Pod
    199. metadata:
    200. name: mypod
    201. spec:
    202. containers:
    203. - name: game2048
    204. image: reg.westos.org/westos/game2048
    205. imagePullSecrets:
    206. - name: myregistrykey
    207. [root@node22 secret]# kubectl apply -f docker.yaml
    208. pod/mypod created
    209. [root@node22 secret]# kubectl get pod
    210. NAME READY STATUS RESTARTS AGE
    211. mypod 1/1 Running 0 6s
    212. [root@node22 secret]# kubectl delete -f docker.yaml
    213. pod "mypod" deleted

    III. Volumes Configuration Management

    Files in a container are stored on disk only temporarily, which causes problems for some applications. First, when a container crashes, the kubelet restarts it and the files in the container are lost, because the container is rebuilt in a clean state. Second, when several containers run in one Pod, they often need to share files. Kubernetes abstracts the Volume object to solve both problems.

    A Kubernetes volume has an explicit lifetime, the same as the Pod that encloses it. Consequently a volume outlives any container running in the Pod, and data is preserved across container restarts. When the Pod ceases to exist, the volume ceases to exist too. Perhaps more importantly, Kubernetes supports many types of volumes, and a Pod can use any number of them at the same time.

    A volume cannot be mounted inside another volume, nor can it have hard links to other volumes. Each container in a Pod must independently specify where each volume is mounted.

    Kubernetes supports the following volume types:

    awsElasticBlockStore, azureDisk, azureFile, cephfs, cinder, configMap, csi, downwardAPI, emptyDir, fc (fibre channel), flexVolume, flocker, gcePersistentDisk, gitRepo (deprecated), glusterfs, hostPath, iscsi, local, nfs, persistentVolumeClaim, projected, portworxVolume, quobyte, rbd, scaleIO, secret, storageos, vsphereVolume

    1) emptyDir volumes:

    When a Pod is assigned to a node, an emptyDir volume is created first, and it exists as long as the Pod runs on that node. As the name says, the volume is initially empty. The containers in the Pod may mount the emptyDir volume at the same or different paths, but they can all read and write the same files in it. When the Pod is removed from the node for any reason, the data in the emptyDir volume is deleted permanently.

    Use cases for emptyDir: scratch space, for example for a disk-based merge sort; checkpointing a long computation so it can recover conveniently from a crash; holding files that a content-manager container fetches while a web-server container serves them.

    By default, emptyDir volumes are stored on whatever medium backs the node, which may be disk, SSD, or network storage depending on your environment. You can, however, set emptyDir.medium to "Memory" to tell Kubernetes to mount a tmpfs (RAM-backed filesystem) for you. tmpfs is very fast, but note that unlike disk it is cleared when the node reboots, and everything you write counts against the container's memory consumption and is subject to the container's memory limit.

    1. [root@node22 secret]# cd
    2. [root@node22 ~]# mkdir volumes
    3. [root@node22 ~]# cd volumes/
    4. [root@node22 volumes]# vim vol1.yaml
    5. apiVersion: v1
    6. kind: Pod
    7. metadata:
    8. name: vol1
    9. spec:
    10. containers:
    11. - image: busyboxplus
    12. name: vm1
    13. command: ["sleep", "300"]
    14. volumeMounts:
    15. - mountPath: /cache
    16. name: cache-volume
    17. - name: vm2
    18. image: nginx
    19. volumeMounts:
    20. - mountPath: /usr/share/nginx/html
    21. name: cache-volume
    22. volumes:
    23. - name: cache-volume
    24. emptyDir:
    25. medium: Memory
    26. sizeLimit: 100Mi
    27. [root@node22 volumes]# kubectl apply -f vol1.yaml
    28. pod/vol1 created
    29. [root@node22 volumes]# kubectl get pod
    30. NAME READY STATUS RESTARTS AGE
    31. vol1 2/2 Running 0 4s
    32. [root@node22 volumes]# kubectl exec -it vol1 -c vm1 -- sh
    33. / # ls
    34. bin cache dev etc home lib lib64 linuxrc media mnt opt proc root run sbin sys tmp usr var
    35. / # cd cache/
    36. /cache # ls
    37. /cache # echo www.westos.org > index.html
    38. /cache # curl localhost
    39. www.westos.org
    40. /cache # dd if=/dev/zero of=bigfile bs=1M count=100
    41. 100+0 records in
    42. 99+1 records out
    43. /cache # dd if=/dev/zero of=bigfile bs=1M count=101
    44. dd: writing 'bigfile': No space left on device
    45. 101+0 records in
    46. 99+1 records out

    2) hostPath

    A hostPath volume mounts a file or directory from the host node's filesystem into the Pod. This is not something most Pods need, but it offers a powerful escape hatch for some applications.

    Some uses of hostPath: running a container that needs access to Docker internals, mounting /var/lib/docker; running cAdvisor in a container, mounting /sys; allowing a Pod to specify whether a given hostPath should exist before the Pod runs, whether it should be created, and in what form.

    In addition to the required path property, a type can optionally be specified for a hostPath volume.

    Drawbacks of hostPath volumes: Pods with identical configuration (for example created from a podTemplate) may behave differently on different nodes because the files on those nodes differ; when Kubernetes adds resource-aware scheduling as planned, that scheduling cannot account for resources used by a hostPath; files or directories created on the underlying host are writable only by root, so you either need to run your process as root in a privileged container or modify the file permissions on the host so the container can write to the hostPath volume.

    1. [root@node22 volumes]# kubectl delete -f vol1.yaml
    2. pod "vol1" deleted
    3. [root@node22 volumes]# vim hostpath.yaml
    4. apiVersion: v1
    5. kind: Pod
    6. metadata:
    7. name: test-pd
    8. spec:
    9. containers:
    10. - image: nginx
    11. name: test-container
    12. volumeMounts:
    13. - mountPath: /usr/share/nginx/html
    14. name: test-volume
    15. volumes:
    16. - name: test-volume
    17. hostPath:
    18. path: /data
    19. type: DirectoryOrCreate
    20. [root@node22 volumes]# kubectl apply -f hostpath.yaml        ## apply the manifest
    21. pod/test-pd created
    22. [root@node22 volumes]# kubectl get pod
    23. NAME READY STATUS RESTARTS AGE
    24. test-pd 1/1 Running 0 10s
    25. [root@node22 volumes]# kubectl get pod -o wide        ## scheduled to node33
    26. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    27. test-pd 1/1 Running 0 62s 10.244.144.90 node33 <none> <none>
    28. [root@node22 volumes]# curl 10.244.144.90        ## the request fails; creating an index page on host node33 fixes it
    29. <html>
    30. <head><title>403 Forbidden</title></head>
    31. <body>
    32. <center><h1>403 Forbidden</h1></center>
    33. <hr><center>nginx/1.21.5</center>
    34. </body>
    35. </html>
    36. [root@node33 ~]# cd /data
    37. [root@node33 data]# ls
    38. [root@node33 data]# pwd
    39. /data
    40. [root@node33 data]# echo www.westos.org > index.html
    41. Accessing it again now works:
    42. [root@node22 volumes]# curl 10.244.144.90
    43. www.westos.org
    44. If the pod is moved to node44:
    45. [root@node44 ~]# mkdir /data
    46. [root@node44 ~]# cd /data/
    47. [root@node44 data]# ls
    48. [root@node44 data]# echo bbs.westos.org > index.html
    49. [root@node22 volumes]# kubectl delete -f hostpath.yaml
    50. pod "test-pd" deleted
    51. [root@node22 volumes]# vim hostpath.yaml        ## pin the pod to the other node
    52. apiVersion: v1
    53. kind: Pod
    54. metadata:
    55. name: test-pd
    56. spec:
    57. containers:
    58. - image: nginx
    59. name: test-container
    60. volumeMounts:
    61. - mountPath: /usr/share/nginx/html
    62. name: test-volume
    63. volumes:
    64. - name: test-volume
    65. hostPath:
    66. path: /data
    67. type: DirectoryOrCreate
    68. nodeName: node44
    69. [root@node22 volumes]# kubectl apply -f hostpath.yaml
    70. pod/test-pd created
    71. [root@node22 volumes]# kubectl get pod -o wide        ## now running on node44
    72. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    73. test-pd 1/1 Running 0 38s 10.244.214.130 node44 <none> <none>
    74. [root@node22 volumes]# curl 10.244.214.130        ## this now serves the page from host node44
    75. bbs.westos.org
    76. [root@node22 volumes]# kubectl delete -f hostpath.yaml
    77. pod "test-pd" deleted

    3) nfs volumes

    Install nfs-utils on every node:

    1. [root@node22 volumes]# yum install -y nfs-utils
    2. [root@node33 data]# yum install -y nfs-utils
    3. [root@node44 data]# yum install -y nfs-utils
    4. [root@node11 harbor]# yum install -y nfs-utils        ## also install NFS on the registry host
    5. [root@node11 harbor]# cat /etc/exports
    6. /nfsdata *(rw,no_root_squash)
    7. [root@node11 harbor]# mkdir /nfsdata
    8. [root@node11 harbor]# cd /nfsdata/
    9. [root@node11 nfsdata]# chmod 777 .
    10. [root@node11 nfsdata]# ll -d .
    11. drwxrwxrwx 2 root root 6 Aug 28 20:40 .
    12. [root@node11 nfsdata]# systemctl start nfs
    13. [root@node11 nfsdata]# echo www.westos.org > index.html
    14. [root@node22 volumes]# showmount -e 192.168.0.11
    15. Export list for 192.168.0.11:
    16. /nfsdata *
    17. [root@node22 volumes]# vim nfs.yaml
    18. apiVersion: v1
    19. kind: Pod
    20. metadata:
    21. name: test-pd
    22. spec:
    23. containers:
    24. - image: nginx
    25. name: test-container
    26. volumeMounts:
    27. - mountPath: /usr/share/nginx/html
    28. name: test-volume
    29. volumes:
    30. - name: test-volume
    31. nfs:
    32. server: 192.168.0.11
    33. path: /nfsdata
    34. [root@node22 volumes]# kubectl apply -f nfs.yaml
    35. pod/test-pd created
    36. [root@node22 volumes]# kubectl get pod -o wide
    37. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    38. test-pd 1/1 Running 0 20s 10.244.144.91 node33 <none> <none>
    39. [root@node22 volumes]# curl 10.244.144.91
    40. www.westos.org
    41. [root@node22 volumes]# kubectl delete -f nfs.yaml        ## delete, then move the pod to another node
    42. pod "test-pd" deleted
    43. [root@node22 volumes]# vim nfs.yaml
    44. apiVersion: v1
    45. kind: Pod
    46. metadata:
    47. name: test-pd
    48. spec:
    49. containers:
    50. - image: nginx
    51. name: test-container
    52. volumeMounts:
    53. - mountPath: /usr/share/nginx/html
    54. name: test-volume
    55. volumes:
    56. - name: test-volume
    57. nfs:
    58. server: 192.168.0.11
    59. path: /nfsdata
    60. nodeName: node44        ## pin to node44
    61. [root@node22 volumes]# kubectl apply -f nfs.yaml
    62. pod/test-pd created
    63. [root@node22 volumes]# kubectl get pod
    64. NAME READY STATUS RESTARTS AGE
    65. test-pd 1/1 Running 0 39s
    66. [root@node22 volumes]# kubectl get pod -o wide
    67. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    68. test-pd 1/1 Running 0 2m17s 10.244.214.131 node44 <none> <none>
    69. [root@node22 volumes]# curl 10.244.214.131
    70. www.westos.org

    4) PersistentVolumes (PV) and PersistentVolumeClaims (PVC)

    A PersistentVolume is a piece of networked storage in the cluster, provisioned by an administrator. Like a node, a PV is a cluster resource. Like a Volume, it is a volume plugin, but its lifecycle is independent of any Pod that uses it. The PV API object captures the implementation details of NFS, iSCSI, or other cloud storage systems.

    A PersistentVolumeClaim is a user's request for storage. It is analogous to a Pod: Pods consume node resources, while PVCs consume PV resources. A Pod can request specific resources (CPU and memory); a PVC can request a specific size and access mode (for example, mounted read-write once or read-only many times).

    PVs can be provided in two ways: statically and dynamically.

    Static PVs: the cluster administrator creates a number of PVs that carry the details of the real backing storage. They are available to cluster users, exist in the Kubernetes API, and are ready to be consumed.

    Dynamic PVs: when none of the administrator's static PVs match a user's PVC, the cluster may try to provision a volume specifically for that PVC. This provisioning is based on a StorageClass.

    The binding between a PVC and a PV is a one-to-one mapping; if no matching PV is found, the PVC remains unbound indefinitely.

    Using a PV:

    A Pod uses a PVC just like a volume. The cluster inspects the PVC, finds the bound PV, and maps that PV into the Pod. For PVs that support multiple access modes, the user can specify the mode they want. Once a user has a PVC and it is bound, the PV belongs to that user for as long as they need it. The user schedules a Pod and accesses the PV by including the PVC in the Pod's volumes block.

    Releasing a PV:

    When users are done with a PV, they can delete the PVC object through the API. Once the PVC is deleted, the corresponding PV is considered "released", but it cannot yet be claimed by another PVC: the previous claimant's data is still on the volume and must be handled according to the reclaim policy.

    Reclaiming a PV:

    The reclaim policy tells the cluster what to do with a PV after it has been released. Currently a PV can be Retained, Recycled, or Deleted. Retain allows the resource to be reclaimed manually. For volumes that support the delete operation, deletion removes the PV object from Kubernetes as well as the associated external storage (such as AWS EBS, GCE PD, Azure Disk, or a Cinder volume). Dynamically provisioned volumes are always deleted.

    ## Create PVs with different capacities and access modes

    Access modes: ReadWriteOnce (RWO) == the volume can be mounted read-write by a single node; ReadOnlyMany (ROX) == the volume can be mounted read-only by many nodes; ReadWriteMany (RWX) == the volume can be mounted read-write by many nodes.

    Reclaim policies: Retain == keep the volume, manual reclamation required; Recycle == scrub, the data in the volume is deleted automatically; Delete == the associated storage asset (AWS EBS, GCE PD, Azure Disk, or OpenStack Cinder volume) is deleted.

    Currently only NFS and HostPath volumes support recycling; AWS EBS, GCE PD, Azure Disk, and OpenStack Cinder volumes support the delete operation.

    Phases: Available == a free resource, not yet bound to a PVC; Bound == bound to a PVC; Released == the PVC has been deleted, but the PV has not yet been reclaimed by the cluster; Failed == automatic reclamation of the PV failed.

    1. [root@node11 nfsdata]# mkdir pv1
    2. [root@node11 nfsdata]# mkdir pv2
    3. [root@node11 nfsdata]# mkdir pv3
    4. [root@node22 volumes]# vim pv.yaml
    5. apiVersion: v1
    6. kind: PersistentVolume
    7. metadata:
    8. name: pv0003
    9. spec:
    10. capacity:
    11. storage: 5Gi
    12. volumeMode: Filesystem
    13. accessModes:
    14. - ReadWriteOnce
    15. persistentVolumeReclaimPolicy: Recycle
    16. storageClassName: nfs
    17. mountOptions:
    18. nfs:
    19. path: /nfsdata/pv1
    20. server: 192.168.0.11
    21. [root@node22 volumes]# kubectl apply -f pv.yaml
    22. persistentvolume/pv0003 created
    23. [root@node22 volumes]# kubectl get pv
    24. NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
    25. pv0003 5Gi RWO Recycle Available nfs 7s
    26. ## Create a PVC; it is matched and bound to a suitable PV according to the storage class, capacity request, and access modes defined in the file
    [root@node22 volumes]# vim pvc.yaml
    27. apiVersion: v1
    28. kind: PersistentVolumeClaim
    29. metadata:
    30. name: pvc1
    31. spec:
    32. storageClassName: nfs
    33. accessModes:
    34. - ReadWriteOnce
    35. resources:
    36. requests:
    37. storage: 1Gi
    38. [root@node22 volumes]# kubectl apply -f pvc.yaml
    39. persistentvolumeclaim/pvc1 created
    40. [root@node22 volumes]# kubectl get pvc
    41. NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
    42. pvc1 Bound pv0003 5Gi RWO nfs 25s
    43. [root@node22 volumes]# kubectl get pv
    44. NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
    45. pv0003 5Gi RWO Recycle Bound default/pvc1 nfs 3m41s
    46. ## Mount the PV in a Pod
    47. [root@node22 volumes]# vim pod.yaml
    48. apiVersion: v1
    49. kind: Pod
    50. metadata:
    51. name: test-pd
    52. spec:
    53. containers:
    54. - image: nginx
    55. name: nginx
    56. volumeMounts:
    57. - mountPath: /usr/share/nginx/html
    58. name: vol1
    59. volumes:
    60. - name: vol1
    61. persistentVolumeClaim:
    62. claimName: pvc1
    63. [root@node22 volumes]# kubectl delete pod test-pd
    64. pod "test-pd" deleted
    65. [root@node22 volumes]# kubectl apply -f pod.yaml
    66. pod/test-pd created
    67. [root@node11 nfsdata]# cd pv1
    68. [root@node11 pv1]# echo pv1 > index.html
    69. [root@node22 volumes]# kubectl get pod -o wide
    70. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    71. test-pd 1/1 Running 0 2m37s 10.244.144.92 node33 <none> <none>
    72. [root@node22 volumes]# curl 10.244.144.92
    73. pv1
    74. Create pv2 and pv3:
    75. [root@node22 volumes]# vim pv.yaml
    76. apiVersion: v1
    77. kind: PersistentVolume
    78. metadata:
    79. name: pv0003
    80. spec:
    81. capacity:
    82. storage: 5Gi
    83. volumeMode: Filesystem
    84. accessModes:
    85. - ReadWriteOnce
    86. persistentVolumeReclaimPolicy: Recycle
    87. storageClassName: nfs
    88. mountOptions:
    89. nfs:
    90. path: /nfsdata/pv1
    91. server: 192.168.0.11
    92. ---
    93. apiVersion: v1
    94. kind: PersistentVolume
    95. metadata:
    96. name: pv2
    97. spec:
    98. capacity:
    99. storage: 10Gi
    100. volumeMode: Filesystem
    101. accessModes:
    102. - ReadWriteMany
    103. persistentVolumeReclaimPolicy: Recycle
    104. storageClassName: nfs
    105. mountOptions:
    106. nfs:
    107. path: /nfsdata/pv2
    108. server: 192.168.0.11
    109. ---
    110. apiVersion: v1
    111. kind: PersistentVolume
    112. metadata:
    113. name: pv3
    114. spec:
    115. capacity:
    116. storage: 20Gi
    117. volumeMode: Filesystem
    118. accessModes:
    119. - ReadOnlyMany
    120. persistentVolumeReclaimPolicy: Recycle
    121. storageClassName: nfs
    122. mountOptions:
    123. nfs:
    124. path: /nfsdata/pv3
    125. server: 192.168.0.11
    126. [root@node22 volumes]# kubectl apply -f pv.yaml
    127. persistentvolume/pv0003 configured
    128. persistentvolume/pv2 created
    129. persistentvolume/pv3 created
    130. [root@node22 volumes]# kubectl get pv
    131. NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
    132. pv0003 5Gi RWO Recycle Bound default/pvc1 nfs 15m
    133. pv2 10Gi RWX Recycle Available nfs 12s
    134. pv3 20Gi ROX Recycle Available nfs 12s
    135. Create pvc2 and pvc3:
    136. [root@node22 volumes]# vim pvc.yaml
    137. apiVersion: v1
    138. kind: PersistentVolumeClaim
    139. metadata:
    140. name: pvc1
    141. spec:
    142. storageClassName: nfs
    143. accessModes:
    144. - ReadWriteOnce
    145. resources:
    146. requests:
    147. storage: 1Gi
    148. ---
    149. apiVersion: v1
    150. kind: PersistentVolumeClaim
    151. metadata:
    152. name: pvc2
    153. spec:
    154. storageClassName: nfs
    155. accessModes:
    156. - ReadWriteMany
    157. resources:
    158. requests:
    159. storage: 10Gi
    160. ---
    161. apiVersion: v1
    162. kind: PersistentVolumeClaim
    163. metadata:
    164. name: pvc3
    165. spec:
    166. storageClassName: nfs
    167. accessModes:
    168. - ReadOnlyMany
    169. resources:
    170. requests:
    171. storage: 20Gi
    172. [root@node22 volumes]# kubectl apply -f pvc.yaml
    173. persistentvolumeclaim/pvc1 unchanged
    174. persistentvolumeclaim/pvc2 created
    175. persistentvolumeclaim/pvc3 created
    176. [root@node22 volumes]# kubectl get pvc
    177. NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
    178. pvc1 Bound pv0003 5Gi RWO nfs 14m
    179. pvc2 Bound pv2 10Gi RWX nfs 6s
    180. pvc3 Bound pv3 20Gi ROX nfs 6s
    181. [root@node22 volumes]# kubectl get pv
    182. NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
    183. pv0003 5Gi RWO Recycle Bound default/pvc1 nfs 18m
    184. pv2 10Gi RWX Recycle Bound default/pvc2 nfs 3m7s
    185. pv3 20Gi ROX Recycle Bound default/pvc3 nfs 3m7s
    186. Create test-pd-2:
    187. [root@node22 volumes]# cp pod.yaml pod2.yaml
    188. [root@node22 volumes]# vim pod2.yaml
    189. apiVersion: v1
    190. kind: Pod
    191. metadata:
    192. name: test-pd-2
    193. spec:
    194. containers:
    195. - image: nginx
    196. name: nginx
    197. volumeMounts:
    198. - mountPath: /usr/share/nginx/html
    199. name: vol2
    200. volumes:
    201. - name: vol2
    202. persistentVolumeClaim:
    203. claimName: pvc2
    204. [root@node22 volumes]# kubectl apply -f pod2.yaml
    205. pod/test-pd-2 created
    206. [root@node22 volumes]# kubectl get pod
    207. NAME READY STATUS RESTARTS AGE
    208. test-pd 1/1 Running 0 11m
    209. test-pd-2 1/1 Running 0 15s
    210. [root@node22 volumes]# kubectl get pod -o wide
    211. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    212. test-pd 1/1 Running 0 12m 10.244.144.92 node33 <none> <none>
    213. test-pd-2 1/1 Running 0 93s 10.244.144.93 node33 <none> <none>
    214. [root@node11 pv1]# cd ..
    215. [root@node11 nfsdata]# cd pv2
    216. [root@node11 pv2]# echo pv2 > index.html
    217. [root@node22 volumes]# curl 10.244.144.93
    218. pv2
    219. [root@node22 volumes]# curl 10.244.144.92
    220. pv1
    221. Clean up (when a pod is migrated to another node, it simply re-attaches the PVC):
    222. [root@node22 volumes]# kubectl delete -f pod2.yaml
    223. pod "test-pd-2" deleted
    224. [root@node22 volumes]# kubectl delete -f pod.yaml
    225. pod "test-pd" deleted
    226. [root@node22 volumes]# kubectl get pvc
    227. NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
    228. pvc1 Bound pv0003 5Gi RWO nfs 25m
    229. pvc2 Bound pv2 10Gi RWX nfs 11m
    230. pvc3 Bound pv3 20Gi ROX nfs 11m
    231. ## Delete the PVCs; the previously bound PVs go from Bound -> Released -> Available, and the data in the mounted volumes is deleted (Recycle reclaim policy)
    232. [root@node22 volumes]# kubectl delete -f pvc.yaml
    233. persistentvolumeclaim "pvc1" deleted
    234. persistentvolumeclaim "pvc2" deleted
    235. persistentvolumeclaim "pvc3" deleted
    236. [root@node22 volumes]# kubectl get pv
    237. NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
    238. pv0003 5Gi RWO Recycle Available nfs 31m
    239. pv2 10Gi RWX Recycle Available nfs 16m
    240. pv3 20Gi ROX Recycle Available nfs 16m
    241. [root@node22 volumes]# kubectl get pod
    242. NAME READY STATUS RESTARTS AGE
    243. recycler-for-pv0003 0/1 Completed 0 89s
    244. [root@node22 volumes]# kubectl delete pod recycler-for-pv0003
    245. pod "recycler-for-pv0003" deleted
    246. [root@node22 volumes]# kubectl get pv
    247. NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
    248. pv0003 5Gi RWO Recycle Available nfs 33m
    249. pv2 10Gi RWX Recycle Available nfs 18m
    250. pv3 20Gi ROX Recycle Available nfs 18m
    251. [root@node22 volumes]# kubectl delete -f pv.yaml
    252. persistentvolume "pv0003" deleted
    253. persistentvolume "pv2" deleted
    254. persistentvolume "pv3" deleted
    255. [root@node22 volumes]# kubectl get pv
    256. No resources found

    5) A StorageClass provides a way to describe a class of storage. Different classes may map to different quality-of-service levels, backup policies, or other policies. Every StorageClass contains the fields provisioner, parameters, and reclaimPolicy, which are used when the StorageClass needs to dynamically provision a PersistentVolume.

    StorageClass attributes:

    Provisioner: determines which volume plugin is used to provision PVs. This field is required; it may name an internal provisioner or an external one. External provisioners live at kubernetes-incubator/external-storage and include NFS, Ceph, and others.

    Reclaim Policy: the reclaimPolicy field sets the reclaim policy of the PersistentVolumes it creates, either Delete or Retain; if not specified, the default is Delete.

    More attributes: https://kubernetes.io/zh/docs/concepts/storage/storage-classes/

    NFS Client Provisioner is an automatic provisioner that uses NFS as storage and automatically creates PVs for PVCs. It does not provide NFS storage itself; an existing external NFS service is required.

    PVs are provisioned (on the NFS server) with the naming scheme ${namespace}-${pvcName}-${pvName}; when a PV is reclaimed, it is renamed (on the NFS server) to archived-${namespace}-${pvcName}-${pvName}. nfs-client-provisioner source: external-storage/nfs-client at master · kubernetes-retired/external-storage · GitHub
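    For reference, a hedged sketch of a StorageClass that sets these fields explicitly; the class name and the archiveOnDelete value here are illustrative (the walkthrough below uses its own class.yaml with archiveOnDelete: "false"):

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: example-nfs                # illustrative name
    provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
    reclaimPolicy: Delete              # Delete is also the default when omitted
    parameters:
      archiveOnDelete: "true"          # archive data as archived-<ns>-<pvc>-<pv> instead of deleting it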

    1. ## Configure authorization: write the YAML files, apply them, and create the namespace referenced in the manifests
    2. [root@node22 ~]# mkdir nfs
    3. [root@node22 ~]# cd nfs
    4. [root@node22 nfs]# vim delpoyment.yaml
    5. [root@node22 nfs]# kubectl create namespace nfs-client-provisioner
    6. namespace/nfs-client-provisioner created
    7. [root@node22 nfs]# vim delpoyment.yaml
    8. apiVersion: apps/v1
    9. kind: Deployment
    10. metadata:
    11. name: nfs-client-provisioner
    12. labels:
    13. app: nfs-client-provisioner
    14. namespace: nfs-client-provisioner
    15. spec:
    16. replicas: 1
    17. strategy:
    18. type: Recreate
    19. selector:
    20. matchLabels:
    21. app: nfs-client-provisioner
    22. template:
    23. metadata:
    24. labels:
    25. app: nfs-client-provisioner
    26. spec:
    27. serviceAccountName: nfs-client-provisioner
    28. containers:
    29. - name: nfs-client-provisioner
    30. image: sig-storage/nfs-subdir-external-provisioner:v4.0.2
    31. volumeMounts:
    32. - name: nfs-client-root
    33. mountPath: /persistentvolumes
    34. env:
    35. - name: PROVISIONER_NAME
    36. value: k8s-sigs.io/nfs-subdir-external-provisioner
    37. - name: NFS_SERVER
    38. value: 192.168.0.11
    39. - name: NFS_PATH
    40. value: /nfsdata
    41. volumes:
    42. - name: nfs-client-root
    43. nfs:
    44. server: 192.168.0.11
    45. path: /nfsdata
    46. [root@node22 nfs]# vim rbac.yaml
    47. apiVersion: v1
    48. kind: ServiceAccount
    49. metadata:
    50. name: nfs-client-provisioner
    51. # replace with namespace where provisioner is deployed
    52. namespace: nfs-client-provisioner
    53. ---
    54. kind: ClusterRole
    55. apiVersion: rbac.authorization.k8s.io/v1
    56. metadata:
    57. name: nfs-client-provisioner-runner
    58. rules:
    59. - apiGroups: [""]
    60. resources: ["nodes"]
    61. verbs: ["get", "list", "watch"]
    62. - apiGroups: [""]
    63. resources: ["persistentvolumes"]
    64. verbs: ["get", "list", "watch", "create", "delete"]
    65. - apiGroups: [""]
    66. resources: ["persistentvolumeclaims"]
    67. verbs: ["get", "list", "watch", "update"]
    68. - apiGroups: ["storage.k8s.io"]
    69. resources: ["storageclasses"]
    70. verbs: ["get", "list", "watch"]
    71. - apiGroups: [""]
    72. resources: ["events"]
    73. verbs: ["create", "update", "patch"]
    74. ---
    75. kind: ClusterRoleBinding
    76. apiVersion: rbac.authorization.k8s.io/v1
    77. metadata:
    78. name: run-nfs-client-provisioner
    79. subjects:
    80. - kind: ServiceAccount
    81. name: nfs-client-provisioner
    82. # replace with namespace where provisioner is deployed
    83. namespace: nfs-client-provisioner
    84. roleRef:
    85. kind: ClusterRole
    86. name: nfs-client-provisioner-runner
    87. apiGroup: rbac.authorization.k8s.io
    88. ---
    89. kind: Role
    90. apiVersion: rbac.authorization.k8s.io/v1
    91. metadata:
    92. name: leader-locking-nfs-client-provisioner
    93. # replace with namespace where provisioner is deployed
    94. namespace: nfs-client-provisioner
    95. rules:
    96. - apiGroups: [""]
    97. resources: ["endpoints"]
    98. verbs: ["get", "list", "watch", "create", "update", "patch"]
    99. ---
    100. kind: RoleBinding
    101. apiVersion: rbac.authorization.k8s.io/v1
    102. metadata:
    103. name: leader-locking-nfs-client-provisioner
    104. # replace with namespace where provisioner is deployed
    105. namespace: nfs-client-provisioner
    106. subjects:
    107. - kind: ServiceAccount
    108. name: nfs-client-provisioner
    109. # replace with namespace where provisioner is deployed
    110. namespace: nfs-client-provisioner
    111. roleRef:
    112. kind: Role
    113. name: leader-locking-nfs-client-provisioner
    114. apiGroup: rbac.authorization.k8s.io
    115. [root@node22 nfs]# kubectl apply -f rbac.yaml
    116. serviceaccount/nfs-client-provisioner created
    117. clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
    118. clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
    119. role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
    120. rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
    121. [root@node22 nfs]# kubectl apply -f delpoyment.yaml
    122. deployment.apps/nfs-client-provisioner created
    123. [root@node22 nfs]# kubectl -n nfs-client-provisioner get all
    124. NAME READY STATUS RESTARTS AGE
    125. pod/nfs-client-provisioner-784f85c9-mz5nx 1/1 Running 0 23s
    126. NAME READY UP-TO-DATE AVAILABLE AGE
    127. deployment.apps/nfs-client-provisioner 1/1 1 1 23s
    128. NAME DESIRED CURRENT READY AGE
    129. replicaset.apps/nfs-client-provisioner-784f85c9 1 1 1 23s
    130. ## Create the NFS StorageClass
    131. [root@node22 nfs]# vim class.yaml
    132. apiVersion: storage.k8s.io/v1
    133. kind: StorageClass
    134. metadata:
    135. name: managed-nfs-storage
    136. provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
    137. parameters:
    138. archiveOnDelete: "false"
    139. [root@node22 nfs]# kubectl apply -f class.yaml
    140. storageclass.storage.k8s.io/managed-nfs-storage created
    141. [root@node22 nfs]# kubectl get storageclasses.storage.k8s.io
    142. NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
    143. managed-nfs-storage k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate false 21s
    144. ## Create a PVC that uses the NFS StorageClass
    145. [root@node22 nfs]# vim pvc.yaml
    146. kind: PersistentVolumeClaim
    147. apiVersion: v1
    148. metadata:
    149. name: test-claim
    150. spec:
    151. storageClassName: managed-nfs-storage
    152. accessModes:
    153. - ReadWriteMany
    154. resources:
    155. requests:
    156. storage: 1Gi
    157. [root@node22 nfs]# kubectl apply -f pvc.yaml
    158. persistentvolumeclaim/test-claim created
    159. [root@node22 nfs]# kubectl get pvc
    160. NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
    161. test-claim Bound pvc-9506279b-4c3c-43eb-bebe-7cd89e6062d3 1Gi RWX managed-nfs-storage 3s
    162. [root@node22 nfs]# kubectl get pv
    163. NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
    164. pvc-9506279b-4c3c-43eb-bebe-7cd89e6062d3 1Gi RWX Delete Bound default/test-claim managed-nfs-storage 42s
    165. [root@node22 nfs]# vim pod.yaml
    166. apiVersion: v1
    167. kind: Pod
    168. metadata:
    169. name: test-pd
    170. spec:
    171. containers:
    172. - image: nginx
    173. name: nginx
    174. volumeMounts:
    175. - mountPath: /usr/share/nginx/html
    176. name: nfs-pvc
    177. volumes:
    178. - name: nfs-pvc
    179. persistentVolumeClaim:
    180. claimName: test-claim
    181. [root@node22 nfs]# kubectl apply -f pod.yaml
    182. pod/test-pd created
    183. [root@node22 nfs]# kubectl get pod
    184. NAME READY STATUS RESTARTS AGE
    185. test-pd 1/1 Running 0 8s
    186. [root@node22 nfs]# kubectl get pod -o wide
    187. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    188. test-pd 1/1 Running 0 30s 10.244.144.99 node33 <none> <none>
    189. [root@node22 nfs]# curl 10.244.144.99
    190. <html>
    191. <head><title>403 Forbidden</title></head>
    192. <body>
    193. <center><h1>403 Forbidden</h1></center>
    194. <hr><center>nginx/1.21.5</center>
    195. </body>
    196. </html>
    197. [root@node11 nfsdata]# cd default-test-claim-pvc-9506279b-4c3c-43eb-bebe-7cd89e6062d3
    198. [root@node11 default-test-claim-pvc-9506279b-4c3c-43eb-bebe-7cd89e6062d3]# ls
    199. [root@node11 default-test-claim-pvc-9506279b-4c3c-43eb-bebe-7cd89e6062d3]# echo test-claim > index.html
    200. [root@node22 nfs]# curl 10.244.144.99
    201. test-claim
    202. [root@node22 nfs]# kubectl delete -f pod.yaml
    203. pod "test-pd" deleted
    204. [root@node22 nfs]# kubectl delete -f pvc.yaml
    205. persistentvolumeclaim "test-claim" deleted
    206. [root@node22 nfs]# kubectl get pv
    207. No resources found
    208. ## When the PVC above is deleted, the automatically created PV directory in the shared export is deleted as well (because class.yaml sets archiveOnDelete=false)

    6) The StatefulSet controller

    StatefulSet is the workload API object used to manage stateful applications.

    A StatefulSet manages the deployment and scaling of a set of Pods, and provides ordering and uniqueness guarantees for those Pods.

    Like a Deployment, a StatefulSet manages a set of Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of its Pods: the Pods are created from the same spec, but they are not interchangeable, and each one keeps a permanent identifier no matter how it is scheduled.

    A StatefulSet follows the same pattern as other controllers: you define the desired state in a StatefulSet object, and the StatefulSet controller performs whatever updates are needed to reach that state.

    StatefulSets are valuable for applications that require one or more of the following:

    Stable, unique network identifiers

    Stable, persistent storage

    Ordered, graceful deployment and scaling

    Ordered, automated rolling updates

    Here, stable means persistence across Pod (re)scheduling. If an application does not need stable identifiers or ordered deployment, deletion, and scaling, it should be deployed using a workload that provides a set of stateless replicas; a Deployment or ReplicaSet may be better suited to such stateless needs.

    1. How a StatefulSet maintains Pod topology state through a headless Service:
    2. Create the headless Service:
    3. [root@node22 statefulset]# vim svc.yaml
    4. apiVersion: v1
    5. kind: Service
    6. metadata:
    7. name: nginx-svc
    8. labels:
    9. app: nginx
    10. spec:
    11. ports:
    12. - port: 80
    13. name: web
    14. clusterIP: None
    15. selector:
    16. app: nginx
    17. [root@node22 statefulset]# kubectl apply -f svc.yaml        ## apply the manifest to create the svc
    18. service/nginx-svc created
    19. [root@node22 statefulset]# kubectl get svc
    20. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    21. kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d19h
    22. my-svc ClusterIP 10.108.185.37 <none> 80/TCP 45h
    23. nginx-svc ClusterIP None <none> 80/TCP 5s
    24. web-service ClusterIP 10.109.238.119 <none> 80/TCP 3d2h
    25. [root@node22 statefulset]# kubectl delete svc my-svc        ## delete the unused svc
    26. service "my-svc" deleted
    27. [root@node22 statefulset]# kubectl delete svc web-service
    28. service "web-service" deleted
    29. [root@node22 statefulset]# kubectl get svc
    30. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    31. kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d19h
    32. nginx-svc ClusterIP None <none> 80/TCP 33s
    33. Create the StatefulSet controller:
    34. [root@node22 statefulset]# vim statefulset.yaml
    35. apiVersion: apps/v1
    36. kind: StatefulSet
    37. metadata:
    38. name: web
    39. spec:
    40. serviceName: "nginx-svc"
    41. replicas: 2
    42. selector:
    43. matchLabels:
    44. app: nginx
    45. template:
    46. metadata:
    47. labels:
    48. app: nginx
    49. spec:
    50. containers:
    51. - name: nginx
    52. image: nginx
    53. [root@node22 statefulset]# kubectl apply -f statefulset.yaml
    54. statefulset.apps/web created
    55. [root@node22 statefulset]# kubectl get pod
    56. NAME READY STATUS RESTARTS AGE
    57. web-0 1/1 Running 0 2m20s
    58. web-1 1/1 Running 0 49s
    59. [root@node22 statefulset]# kubectl get pod -o wide
    60. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    61. web-0 1/1 Running 0 2m49s 10.244.144.100 node33 <none> <none>
    62. web-1 1/1 Running 0 78s 10.244.214.132 node44 <none> <none>

    ## Ordered creation and scale-down of Pods under a StatefulSet controller:

    A StatefulSet abstracts application state into two kinds:

    Topology state: application instances must start in a certain order, and a recreated Pod must have the same network identity as the original.

    Storage state: multiple instances of the application are each bound to different storage data.

    A StatefulSet numbers all of its Pods; the naming rule is $(statefulset name)-$(ordinal), starting from 0.

    When a Pod is deleted and rebuilt, its network identity does not change. The Pod's topology state is pinned down by "name + ordinal", and each Pod gets a fixed, unique access point: its own DNS record.

    Creation (Pods are created starting from the first one; if the first cannot start, the rest are not created either):

    1. [root@node22 statefulset]# dig -t A nginx-svc.default.svc.cluster.local. @10.96.0.10
    2. ; <<>> DiG 9.9.4-RedHat-9.9.4-72.el7 <<>> -t A nginx-svc.default.svc.cluster.local. @10.96.0.10
    3. ;; global options: +cmd
    4. ;; Got answer:
    5. ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 29984
    6. ;; flags: qr aa rd; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
    7. ;; WARNING: recursion requested but not available
    8. ;; OPT PSEUDOSECTION:
    9. ; EDNS: version: 0, flags:; udp: 4096
    10. ;; QUESTION SECTION:
    11. ;nginx-svc.default.svc.cluster.local. IN A
    12. ;; ANSWER SECTION:
    13. nginx-svc.default.svc.cluster.local. 30 IN A 10.244.144.100
    14. nginx-svc.default.svc.cluster.local. 30 IN A 10.244.214.132
    15. ;; Query time: 362 msec
    16. ;; SERVER: 10.96.0.10#53(10.96.0.10)
    17. ;; WHEN: Mon Aug 29 16:49:23 CST 2022
    18. ;; MSG SIZE rcvd: 166
    19. [root@node22 statefulset]# dig -t A web-0.nginx-svc.default.svc.cluster.local. @10.96.0.10
    20. ;; ANSWER SECTION:
    21. web-0.nginx-svc.default.svc.cluster.local. 30 IN A 10.244.144.100
    22. [root@node22 statefulset]# dig -t A web-1.nginx-svc.default.svc.cluster.local. @10.96.0.10
    23. ;; ANSWER SECTION:
    24. web-1.nginx-svc.default.svc.cluster.local. 30 IN A 10.244.214.132
    25. Scale-down (Pods are removed starting from the last one):
    26. [root@node22 statefulset]# vim statefulset.yaml
    27. apiVersion: apps/v1
    28. kind: StatefulSet
    29. metadata:
    30. name: web
    31. spec:
    32. serviceName: "nginx-svc"
    33. replicas: 0        ## setting this to 0 scales the Pods down; it is not a delete
    34. selector:
    35. matchLabels:
    36. app: nginx
    37. template:
    38. metadata:
    39. labels:
    40. app: nginx
    41. spec:
    42. containers:
    43. - name: nginx
    44. image: nginx
    45. [root@node22 statefulset]# kubectl apply -f statefulset.yaml
    46. statefulset.apps/web configured
    47. [root@node22 statefulset]# kubectl get pod
    48. No resources found in default namespace.
    49. [root@node22 statefulset]# kubectl delete -f statefulset.yaml
    50. statefulset.apps "web" deleted

    ## The PV and PVC design is what makes storage-state management by a StatefulSet possible

    1. [root@node22 statefulset]# vim statefulset.yaml
    2. apiVersion: apps/v1
    3. kind: StatefulSet
    4. metadata:
    5. name: web
    6. spec:
    7. serviceName: "nginx-svc"
    8. replicas: 3
    9. selector:
    10. matchLabels:
    11. app: nginx
    12. template:
    13. metadata:
    14. labels:
    15. app: nginx
    16. spec:
    17. containers:
    18. - name: nginx
    19. image: nginx
    20. volumeMounts:
    21. - name: www
    22. mountPath: /usr/share/nginx/html
    23. volumeClaimTemplates:
    24. - metadata:
    25. name: www
    26. spec:
    27. storageClassName: nfs-client
    28. accessModes:
    29. - ReadWriteOnce
    30. resources:
    31. requests:
    32. storage: 1Gi
    33. [root@node22 statefulset]# kubectl get pod
    34. NAME READY STATUS RESTARTS AGE
    35. web-0 0/1 Pending 0 16s
    36. (The Pending state here is because storageClassName: nfs-client does not match; change class.yaml under the nfs directory to the following:)
    37. apiVersion: storage.k8s.io/v1
    38. kind: StorageClass
    39. metadata:
    40. name: nfs-client
    41. provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
    42. parameters:
    43. archiveOnDelete: "false"
    44. [root@node22 statefulset]# kubectl get pod
    45. NAME READY STATUS RESTARTS AGE
    46. web-0 1/1 Running 0 3m10s
    47. web-1 1/1 Running 0 3m8s
    48. web-2 1/1 Running 0 3m3s
    49. [root@node22 statefulset]# kubectl get pvc
    50. NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
    51. www-web-0 Bound pvc-1cbb7bd9-1fe0-472d-8068-6a7a81fc3322 1Gi RWO nfs-client 9m30s
    52. www-web-1 Bound pvc-7be7b00f-8a40-447f-92f1-4064f8466629 1Gi RWO nfs-client 3m28s
    53. www-web-2 Bound pvc-3186a28f-c878-437b-8a03-1e4166ac0cbf 1Gi RWO nfs-client 3m23s
    54. [root@node22 statefulset]# kubectl get pv
    55. NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
    56. pvc-1cbb7bd9-1fe0-472d-8068-6a7a81fc3322 1Gi RWO Delete Bound default/www-web-0 nfs-client 3m44s
    57. pvc-3186a28f-c878-437b-8a03-1e4166ac0cbf 1Gi RWO Delete Bound default/www-web-2 nfs-client 3m29s
    58. pvc-7be7b00f-8a40-447f-92f1-4064f8466629 1Gi RWO Delete Bound default/www-web-1 nfs-client 3m34s

    Pods are created strictly in ordinal order; for example, until web-0 reaches the Running state and its Condition is Ready, web-1 remains Pending.

    The StatefulSet also allocates and creates a PVC with the same ordinal for each Pod. Kubernetes can then bind a PV to that PVC through the PersistentVolume mechanism, guaranteeing that every Pod owns an independent volume.

    Test:

    1. [root@node11 nfsdata]# ll
    2. total 4
    3. drwxrwxrwx 2 root root 6 Aug 29 17:09 default-www-web-0-pvc-1cbb7bd9-1fe0-472d-8068-6a7a81fc3322
    4. drwxrwxrwx 2 root root 6 Aug 29 17:09 default-www-web-1-pvc-7be7b00f-8a40-447f-92f1-4064f8466629
    5. drwxrwxrwx 2 root root 6 Aug 29 17:09 default-www-web-2-pvc-3186a28f-c878-437b-8a03-1e4166ac0cbf
    6. -rw-r--r-- 1 root root 15 Aug 28 20:48 index.html
    7. drwxr-xr-x 2 root root 6 Aug 28 21:50 pv1
    8. drwxr-xr-x 2 root root 6 Aug 28 21:50 pv2
    9. drwxr-xr-x 2 root root 6 Aug 28 21:18 pv3
    10. [root@node11 nfsdata]# cd default-www-web-0-pvc-1cbb7bd9-1fe0-472d-8068-6a7a81fc3322
    11. [root@node11 default-www-web-0-pvc-1cbb7bd9-1fe0-472d-8068-6a7a81fc3322]# echo web-0 > index.html
    12. [root@node11 default-www-web-0-pvc-1cbb7bd9-1fe0-472d-8068-6a7a81fc3322]# cd ..
    13. [root@node11 nfsdata]# cd default-www-web-1-pvc-7be7b00f-8a40-447f-92f1-4064f8466629
    14. [root@node11 default-www-web-1-pvc-7be7b00f-8a40-447f-92f1-4064f8466629]# echo web-1 > index.html
    15. [root@node11 default-www-web-1-pvc-7be7b00f-8a40-447f-92f1-4064f8466629]# cd ..
    16. [root@node11 nfsdata]# cd default-www-web-2-pvc-3186a28f-c878-437b-8a03-1e4166ac0cbf
    17. [root@node11 default-www-web-2-pvc-3186a28f-c878-437b-8a03-1e4166ac0cbf]# echo web-2 > index.html
    18. [root@node11 default-www-web-2-pvc-3186a28f-c878-437b-8a03-1e4166ac0cbf]# cd ..
    19. [root@node22 statefulset]# kubectl run demo --image=busyboxplus -it --rm        ## enter a container to test access
    20. If you don't see a command prompt, try pressing enter.
    21. / # curl web-0.nginx-svc
    22. web-0
    23. / # curl web-1.nginx-svc
    24. web-1
    25. / # curl web-2.nginx-svc
    26. web-2
    27. ## After scaling the pods down and recreating them, accessing each pod still returns the content of its original mounted volume
    28. [root@node22 statefulset]# kubectl delete -f statefulset.yaml
    29. statefulset.apps "web" deleted
    30. [root@node22 statefulset]# kubectl delete pvc --all
    31. persistentvolumeclaim "www-web-0" deleted
    32. persistentvolumeclaim "www-web-1" deleted
    33. persistentvolumeclaim "www-web-2" deleted
    34. [root@node22 statefulset]# kubectl get pv
    35. No resources found
    36. [root@node22 statefulset]# kubectl get pod
    37. No resources found in default namespace.

    7).使用 StatefulSet 部署 mysql 主从集群

    1. [root@node22 statefulset]# mkdir mysql
    2. [root@node22 statefulset]# cd mysql/
    3. [root@node22 mysql]# vim config.yaml
    4. apiVersion: v1
    5. kind: ConfigMap
    6. metadata:
    7. name: mysql
    8. labels:
    9. app: mysql
    10. app.kubernetes.io/name: mysql
    11. data:
    12. primary.cnf: |
    13. # 仅在主服务器上应用此配置
    14. [mysqld]
    15. log-bin
    16. replica.cnf: |
    17. # 仅在副本服务器上应用此配置
    18. [mysqld]
    19. super-read-only
    20. [root@node22 mysql]# kubectl apply -f config.yaml
    21. configmap/mysql created
    22. [root@node22 mysql]# kubectl get cm
    23. NAME DATA AGE
    24. cm1-config 2 38h
    25. kube-root-ca.crt 1 4d20h
    26. my-config 2 38h
    27. my-config-2 1 38h
    28. my-config-3 2 38h
    29. mysql 2 14s
    30. nginxconf 1 38h
    31. [root@node22 mysql]# kubectl delete cm my-config
    32. configmap "my-config" deleted
    33. [root@node22 mysql]# kubectl delete cm my-config-2
    34. configmap "my-config-2" deleted
    35. [root@node22 mysql]# kubectl delete cm my-config-3
    36. configmap "my-config-3" deleted
    37. [root@node22 mysql]# kubectl get cm
    38. NAME DATA AGE
    39. cm1-config 2 38h
    40. kube-root-ca.crt 1 4d20h
    41. mysql 2 41s
    42. nginxconf 1 38h
    43. 创建两个svc:
    44. [root@node22 mysql]# vim svc.yaml
    45. apiVersion: v1
    46. kind: Service
    47. metadata:
    48. name: mysql
    49. labels:
    50. app: mysql
    51. app.kubernetes.io/name: mysql
    52. spec:
    53. ports:
    54. - name: mysql
    55. port: 3306
    56. clusterIP: None
    57. selector:
    58. app: mysql
    59. ---
    60. # 用于连接到任一 MySQL 实例执行读操作的客户端服务
    61. # 对于写操作,你必须连接到主服务器:mysql-0.mysql
    62. apiVersion: v1
    63. kind: Service
    64. metadata:
    65. name: mysql-read
    66. labels:
    67. app: mysql
    68. app.kubernetes.io/name: mysql
    69. readonly: "true"
    70. spec:
    71. ports:
    72. - name: mysql
    73. port: 3306
    74. selector:
    75. app: mysql
    76. [root@node22 mysql]# kubectl apply -f svc.yaml
    77. service/mysql created
    78. service/mysql-read created
    79. [root@node22 mysql]# kubectl delete svc nginx-svc
    80. service "nginx-svc" deleted
    81. [root@node22 mysql]# kubectl get svc
    82. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    83. kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d20h
    84. mysql ClusterIP None <none> 3306/TCP 17s
    85. mysql-read ClusterIP 10.106.229.89 <none> 3306/TCP 17s
    86. 创建 StatefulSet 控制器:
    87. [root@node22 mysql]# vim mysql.yaml(清单较长,关键的 init-mysql 初始化容器片段见本段代码之后的示意)
    88. [root@node22 mysql]# kubectl apply -f mysql.yaml
    89. statefulset.apps/mysql created
    90. [root@node22 mysql]# kubectl get pod
    91. NAME READY STATUS RESTARTS AGE
    92. mysql-0 1/1 Running 1 (64s ago) 9m56s
    93. [root@node22 mysql]# kubectl logs mysql-0 init-mysql
    94. ++ hostname
    95. + [[ mysql-0 =~ -([0-9]+)$ ]]
    96. + ordinal=0
    97. + echo '[mysqld]'
    98. + echo server-id=100
    99. + [[ 0 -eq 0 ]]
    100. + cp /mnt/config-map/primary.cnf /mnt/conf.d/
    101. 上传镜像:
    102. [root@node11 ~]# docker load -i mysql-xtrabackup.tar
    103. [root@node11 ~]# docker push reg.westos.org/library/mysql:5.7
    104. [root@node11 ~]# docker push reg.westos.org/library/xtrabackup:1.0
    105. [root@node22 mysql]# kubectl run demo --image=mysql:5.7 -it bash
    106. If you don't see a command prompt, try pressing enter.
    107. root@demo:/# mysql -h mysql-0.mysql
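
    上面 mysql.yaml 清单未完整贴出。与 init-mysql 日志相对应的初始化容器大致如下(示意,思路来自官方 MySQL 主从部署示例,镜像与挂载名以实际清单为准):

    initContainers:
    - name: init-mysql
      image: mysql:5.7
      command:
      - bash
      - "-c"
      - |
        set -ex
        # 从 Pod 主机名末尾的序号生成唯一的 server-id
        [[ $(hostname) =~ -([0-9]+)$ ]] || exit 1
        ordinal=${BASH_REMATCH[1]}
        echo [mysqld] > /mnt/conf.d/server-id.cnf
        echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
        # 序号为 0 的是主库,其余为从库,分别拷贝对应的配置片段
        if [[ $ordinal -eq 0 ]]; then
          cp /mnt/config-map/primary.cnf /mnt/conf.d/
        else
          cp /mnt/config-map/replica.cnf /mnt/conf.d/
        fi
      volumeMounts:
      - name: conf
        mountPath: /mnt/conf.d
      - name: config-map
        mountPath: /mnt/config-map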

    四、kubernetes调度

    调度器通过 kubernetes watch 机制来发现集群中新创建且尚未被调度到 Node 上的 Pod。调度器会将发现的每一个未调度的 Pod 调度到一个合适的 Node 上来运行。

    • kube-scheduler 是 Kubernetes 集群的默认调度器,并且是集群控制面的一部分。如果你真的希望或者有这方面的需求,kube-scheduler 在设计上是允许你自己写一个调度组件并替换原有的 kube-scheduler 的。

    在做调度决定时需要考虑的因素包括:单独和整体的资源请求、硬件/软件/策略限制、亲和以及反亲和要求、数据局域性、负载间的干扰等等。

    默认策略可以参考:https://kubernetes.io/zh/docs/concepts/scheduling/kube-scheduler/

    调度框架:https://kubernetes.io/zh/docs/concepts/configuration/scheduling-framework/

    • nodeName 是节点选择约束的最简单方法,但一般不推荐。如果在 PodSpec 中指定了 nodeName,则它优先于其他的节点选择方法。

    使用 nodeName 来选择节点的一些限制:

    如果指定的节点不存在,Pod 将无法运行。

    如果指定的节点没有资源来容纳 pod,则 pod 调度失败。

    云环境中的节点名称并非总是可预测或稳定的。

    示例:

    1. [root@node22 ~]# cd yaml/
    2. [root@node22 yaml]# vim pod.yaml
    3. apiVersion: v1
    4. kind: Pod
    5. metadata:
    6. name: demo
    7. namespace: default
    8. labels:
    9. app: nginx
    10. spec:
    11. containers:
    12. - name: nginx
    13. image: nginx
    14. nodeName: node44
    15. [root@node22 yaml]# kubectl apply -f pod.yaml
    16. pod/demo created
    17. [root@node22 yaml]# kubectl get pod -o wide
    18. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    19. demo 0/1 ContainerCreating 0 15s <none> node44 <none> <none>
    20. 虽然此方法使用简单,但节点的不稳定性会影响其使用。
    21. [root@node22 yaml]# kubectl delete -f pod.yaml
    22. pod "demo" deleted
    23. • nodeSelector 是节点选择约束的最简单推荐形式。
    24. • 给选择的节点添加标签:
    25. • kubectl label nodes server2 disktype=ssd
    26. • 添加 nodeSelector 字段到 pod 配置中:
    27. [root@node22 yaml]# vim pod.yaml
    28. apiVersion: v1
    29. kind: Pod
    30. metadata:
    31. name: demo
    32. namespace: default
    33. labels:
    34. app: nginx
    35. spec:
    36. containers:
    37. - name: nginx
    38. image: nginx
    39. #nodeName: no
    40. nodeSelector:
    41. disktype: ssd
    42. [root@node22 yaml]# kubectl apply -f pod.yaml
    43. pod/demo created
    44. [root@node22 yaml]# kubectl get pod -o wide
    45. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    46. demo 1/1 Running 0 9s 10.244.144.110 node33 <none> <none>
    47. [root@node22 yaml]# kubectl delete -f pod.yaml
    48. pod "demo" deleted

    亲和与反亲和

    • nodeSelector 提供了一种非常简单的方法来将 pod 约束到具有特定标签的节点上。亲和/反亲和功能极大地扩展了你可以表达约束的类型。

    你可以发现规则是"软需求"/"偏好",而不是硬性要求,因此,如果调度器无法满足该要求,仍然会调度该 pod。

    你可以使用节点上的 pod 的标签来约束,而不是使用节点本身的标签,来允许哪些 pod 可以或者不可以被放置在一起。

    节点亲和

    • requiredDuringSchedulingIgnoredDuringExecution:必须满足

    • preferredDuringSchedulingIgnoredDuringExecution:倾向满足

    • IgnoredDuringExecution 表示如果在 Pod 运行期间 Node 的标签发生变化,导致亲和性策略不能满足,则继续运行当前的 Pod。

    参考:https://kubernetes.io/zh/docs/concepts/configuration/assign-pod-node/

    节点亲和性pod示例:

    1. [root@node22 ~]# mkdir node
    2. [root@node22 ~]# cd node/
    3. [root@node22 node]# vim pod.yaml
    4. apiVersion: v1
    5. kind: Pod
    6. metadata:
    7. name: node-affinity
    8. spec:
    9. containers:
    10. - name: nginx
    11. image: nginx
    12. affinity:
    13. nodeAffinity:
    14. requiredDuringSchedulingIgnoredDuringExecution:
    15. nodeSelectorTerms:
    16. - matchExpressions:
    17. - key: disktype
    18. operator: In
    19. values:
    20. - ssd
    21. - sata
    22. [root@node22 node]# kubectl apply -f pod.yaml
    23. pod/node-affinity created
    24. [root@node22 node]# kubectl get pod -o wide
    25. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    26. node-affinity 1/1 Running 0 14s 10.244.144.111 node33 <none> <none>
    27. [root@node22 node]# kubectl get node --show-labels
    28. NAME STATUS ROLES AGE VERSION LABELS
    29. node22 Ready control-plane,master 9d v1.23.10 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node22,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
    30. node33 Ready <none> 9d v1.23.10 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=node33,kubernetes.io/os=linux
    31. node44 Ready <none> 9d v1.23.10 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,ingress=nginx,kubernetes.io/arch=amd64,kubernetes.io/hostname=node44,kubernetes.io/os=linux
    32. [root@node22 node]# kubectl delete -f pod.yaml
    33. pod "node-affinity" deleted
    34. 加了倾向性(preferred)规则后,调度会优先选择满足倾向条件的节点;如果没有节点满足倾向条件,也不影响调度,只要满足必要(required)条件即可。
    35. [root@node22 node]# vim pod.yaml
    36. apiVersion: v1
    37. kind: Pod
    38. metadata:
    39. name: node-affinity
    40. spec:
    41. containers:
    42. - name: nginx
    43. image: nginx
    44. affinity:
    45. nodeAffinity:
    46. requiredDuringSchedulingIgnoredDuringExecution:
    47. nodeSelectorTerms:
    48. - matchExpressions:
    49. - key: disktype
    50. operator: In
    51. values:
    52. - ssd
    53. - sata
    54. - fc
    55. preferredDuringSchedulingIgnoredDuringExecution:
    56. - weight: 1
    57. preference:
    58. matchExpressions:
    59. - key: ingress
    60. operator: In
    61. values:
    62. - nginx
    63. [root@node22 node]# kubectl apply -f pod.yaml
    64. pod/node-affinity created
    65. [root@node22 node]# kubectl get pod -o wide 调度到了符合条件的node44
    66. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    67. node-affinity 1/1 Running 0 21s 10.244.214.140 node44 <none> <none>

    1).nodeAffinity 还支持多种规则匹配条件的配置(组合示例见列表之后):

    In:label的值在列表内

    NotIn:label的值不在列表内

    Gt:label的值大于设置的值,不支持Pod亲和性

    Lt:label的值小于设置的值,不支持pod亲和性

    Exists:设置的label存在

    DoesNotExist:设置的label不存在
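
    例如,下面的 matchExpressions 片段(示意)把 Exists 与 NotIn 组合使用:要求节点带有 disktype 标签,且主机名不是 node22(节点名沿用本文环境):

    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: disktype
              operator: Exists          # 只要求该标签存在,不限定取值
            - key: kubernetes.io/hostname
              operator: NotIn
              values:
              - node22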

    2).pod亲和性和反亲和性

    podAffinity主要解决POD可以和哪些POD部署在同一个拓扑域中的问题(拓扑域用主机标签实现,可以是单个主机,也可以是多个主机组成的cluster、zone等)

    podAntiAffinity主要解决POD不能和哪些POD部署在同一个拓扑域中的问题;它们处理的是Kubernetes集群内部POD和POD之间的关系

    Pod间亲和与反亲和在与更高级别的集合(例如ReplicaSets、StatefulSets、Deployments等)一起使用时,它们可能更加有用,可以轻松配置一组应位于相同定义拓扑(例如,节点)中的工作负载
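
    例如,下面的 Deployment 示意利用 podAntiAffinity 让副本之间互斥(以自身标签 app=web 为选择器),从而分散到不同节点上,仅供参考:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          affinity:
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchLabels:
                    app: web
                topologyKey: kubernetes.io/hostname   # 同一主机上最多运行一个副本
          containers:
          - name: nginx
            image: nginx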

    pod亲和性:

    1. [root@node22 node]# vim pod2.yaml
    2. apiVersion: v1
    3. kind: Pod
    4. metadata:
    5. name: myapp
    6. labels:
    7. app: myapp
    8. spec:
    9. containers:
    10. - name: myapp
    11. image: myapp:v1
    12. hostNetwork: true
    13. nodeName: node33
    14. [root@node22 node]# kubectl apply -f pod2.yaml
    15. pod/myapp created
    16. [root@node22 node]# kubectl get pod
    17. NAME READY STATUS RESTARTS AGE
    18. myapp 0/1 Error 1 (8s ago) 11s
    19. nginx 1/1 Running 0 3m25s
    20. [root@node22 node]# kubectl get pod -o wide
    21. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    22. myapp 1/1 Running 3 (31s ago) 53s 192.168.0.33 node33 <none> <none>
    23. nginx 1/1 Running 0 4m7s 192.168.0.33 node33 <none> <none>
    24. [root@node22 node]# vim pod4.yaml
    25. apiVersion: v1
    26. kind: Pod
    27. metadata:
    28. name: mysql
    29. labels:
    30. app: mysql
    31. spec:
    32. containers:
    33. - name: mysql
    34. image: mysql:5.7
    35. env:
    36. - name: "MYSQL_ROOT_PASSWORD"
    37. value: "westos"
    38. affinity:
    39. podAffinity:
    40. requiredDuringSchedulingIgnoredDuringExecution:
    41. - labelSelector:
    42. matchExpressions:
    43. - key: app
    44. operator: In
    45. values:
    46. - nginx
    47. topologyKey: kubernetes.io/hostname
    48. [root@node22 node]# kubectl apply -f pod4.yaml
    49. pod/mysql created
    50. [root@node22 node]# kubectl get pod -o wide
    51. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    52. myapp 1/1 Running 3 (31s ago) 53s 192.168.0.33 node33 <none> <none>
    53. nginx 1/1 Running 0 4m7s 192.168.0.33 node33 <none> <none>

    pod反亲和性:

    1. [root@node22 node]# vim pod3.yaml
    2. apiVersion: v1
    3. kind: Pod
    4. metadata:
    5. name: myapp
    6. labels:
    7. app: myapp
    8. spec:
    9. containers:
    10. - name: myapp
    11. image: myapp:v1
    12. hostNetwork: true
    13. affinity:
    14. podAntiAffinity:
    15. requiredDuringSchedulingIgnoredDuringExecution:
    16. - labelSelector:
    17. matchExpressions:
    18. - key: app
    19. operator: In
    20. values:
    21. - nginx
    22. topologyKey: "kubernetes.io/hostname"
    23. [root@node22 node]# kubectl delete -f pod3.yaml
    24. pod "myapp" deleted
    25. [root@node22 node]# kubectl apply -f pod3.yaml
    26. pod/myapp created
    27. [root@node22 node]# kubectl get pod -o wide
    28. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    29. myapp 1/1 Running 0 12s 192.168.0.44 node44 <none> <none>
    30. nginx 1/1 Running 0 7m57s 192.168.0.33 node33 <none> <none>

    3).Taints污点

    https://kubernetes.io/zh/docs/concepts/scheduling-eviction/taint-and-toleration/

    NodeAffinity节点亲和性,是Pod上定义的一种属性,使Pod能够按我们的要求调度到某个Node上,而Taints则恰恰相反,它可以让Node拒绝运行Pod,甚至驱逐Pod

    Taints(污点)是Node的一个属性,设置了Taints后,Kubernetes是不会将Pod调度到这个Node上的,于是Kubernetes就给Pod设置了个属性Tolerations(容忍),只要Pod能够容忍Node上的污点,那么Kubernetes就会忽略Node上的污点,就能够(不是必须)把Pod调度过去

    可以使用命令 kubectl taint 给节点增加一个 taint

    kubectl taint nodes node1 key=value:NoSchedule //创建

    kubectl describe nodes server1 |grep Taints //查询

    kubectl taint nodes node1 key:NoSchedule- //删除

    其中[effect] 可取值: [ NoSchedule | PreferNoSchedule | NoExecute ]

    • NoSchedule:POD 不会被调度到标记了 taints 的节点。

    • PreferNoSchedule:NoSchedule 的软策略版本。

    • NoExecute:该选项意味着一旦 Taint 生效,如该节点内正在运行的 POD 没有对应的 Tolerate 设置,会直接被逐出。

    ##集群节点的master不参与调度是因为其上有Taints,而worker节点没有此污点

    1. [root@node22 node]# kubectl describe nodes node22 | grep Tain
    2. Taints: node-role.kubernetes.io/master:NoSchedule
    3. [root@node22 node]# kubectl describe nodes node33 | grep Tain
    4. Taints: <none>
    5. [root@node22 node]# kubectl describe nodes node44 | grep Tain
    6. Taints: <none>
    7. [root@node22 node]# kubectl create deployment demo --image=myapp:v1 --replicas=3
    8. [root@node22 node]# kubectl get pod -o wide
    9. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    10. demo-7c4d6f8c46-856fc 1/1 Running 0 19s 10.244.144.113 node33 <none> <none>
    11. demo-7c4d6f8c46-ss2nw 1/1 Running 0 19s 10.244.144.112 node33 <none> <none>
    12. demo-7c4d6f8c46-wljn8 1/1 Running 0 19s 10.244.214.143 node44 <none> <none>

    给 node33 打 NoSchedule 污点:

    ##为集群中的node33主机打上污点后,后续pod都会被调度至没有污点的node44主机上

    1. [root@node22 node]# kubectl delete pod --all 删除不用的pod
    2. pod "mysql" deleted
    3. pod "nginx" deleted
    4. [root@node22 node]# kubectl taint node node33 k1=v1:NoSchedule
    5. node/node33 tainted
    6. [root@node22 node]# kubectl describe nodes node33 | grep Tain
    7. Taints: k1=v1:NoSchedule
    8. [root@node22 node]# kubectl scale deployment demo --replicas=6 把副本数拉升为6
    9. deployment.apps/demo scaled
    10. [root@node22 node]# kubectl get pod -o wide
    11. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    12. demo-7c4d6f8c46-856fc 1/1 Running 0 4m50s 10.244.144.113 node33 <none> <none>
    13. demo-7c4d6f8c46-8qsng 1/1 Running 0 44s 10.244.214.145 node44 <none> <none>
    14. demo-7c4d6f8c46-9jmkc 1/1 Running 0 44s 10.244.214.144 node44 <none> <none>
    15. demo-7c4d6f8c46-ss2nw 1/1 Running 0 4m50s 10.244.144.112 node33 <none> <none>
    16. demo-7c4d6f8c46-vlfws 1/1 Running 0 44s 10.244.214.146 node44 <none> <none>
    17. demo-7c4d6f8c46-wljn8 1/1 Running 0 4m50s 10.244.214.143 node44 <none> <none>

    给 node44 打 NoExecute 污点:

    ##node44节点主机也打上污点后,原来运行在其上面的pod会被驱离

    1. [root@node22 node]# kubectl taint node node44 k1=v1:NoExecute
    2. node/node44 tainted
    3. [root@node22 node]# kubectl describe nodes node44 | grep Tain
    4. Taints: k1=v1:NoExecute
    5. [root@node22 node]# kubectl get pod -o wide
    6. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    7. demo-7c4d6f8c46-5gw59 0/1 Pending 0 22s <none> <none> <none> <none>
    8. demo-7c4d6f8c46-5ww9h 0/1 Pending 0 20s <none> <none> <none> <none>
    9. demo-7c4d6f8c46-7xmsw 0/1 Pending 0 20s <none> <none> <none> <none>
    10. demo-7c4d6f8c46-856fc 1/1 Running 0 7m28s 10.244.144.113 node33 <none> <none>
    11. demo-7c4d6f8c46-rthws 0/1 Pending 0 20s <none> <none> <none> <none>
    12. demo-7c4d6f8c46-ss2nw 1/1 Running 0 7m28s 10.244.144.112 node33 <none> <none>

    ##在 pod 清单文件中配置污点容忍后,node44 主机上又可以重新运行 pod

    [root@node22 node]# vim myapp.yaml  容忍NoSchedule污点:

    [root@node22 node]# kubectl apply -f myapp.yaml

    Warning: resource deployments/demo is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.

    deployment.apps/demo configured

    [root@node22 node]# kubectl get pod -o wide

    NAME                    READY   STATUS    RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES

    demo-7df9764968-2hw59   1/1     Running   0          7s    10.244.144.118   node33   <none>           <none>

    demo-7df9764968-8cfdf   1/1     Running   0          23s   10.244.144.116   node33   <none>           <none>

    demo-7df9764968-8rbg8   1/1     Running   0          7s    10.244.144.117   node33   <none>           <none>

    demo-7df9764968-hlxcj   1/1     Running   0          23s   10.244.144.114   node33   <none>           <none>

    demo-7df9764968-jthwv   1/1     Running   0          6s    10.244.144.119   node33   <none>           <none>

    demo-7df9764968-mzqn9   1/1     Running   0          23s   10.244.144.115   node33   <none>           <none>

    [root@node22 node]# vim myapp.yaml 容忍所有污点

    [root@node22 node]# kubectl apply -f myapp.yaml

    deployment.apps/demo created

    [root@node22 node]# kubectl get pod -o wide

    NAME                    READY   STATUS              RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES

    demo-5f7ffd8d99-hsvt5   1/1     Running             0          7s    10.244.214.147   node44   <none>           <none>

    demo-5f7ffd8d99-jbnpt   1/1     Running             0          7s    10.244.214.148   node44   <none>           <none>

    demo-5f7ffd8d99-mzbhh   1/1     Running             0          7s    10.244.144.121   node33   <none>           <none>

    demo-5f7ffd8d99-nv746   0/1     ContainerCreating   0          7s    <none>           node22   <none>           <none>

    demo-5f7ffd8d99-wq2pb   0/1     ContainerCreating   0          7s    <none>           node22   <none>           <none>

    demo-5f7ffd8d99-zhggq   1/1     Running             0          7s    10.244.144.120   node33   <none>           <none>

    tolerations 中定义的 key、value、effect,要与 node 上设置的 taint 保持一致:

    • 如果 operator 是 Exists,value 可以省略。

    • 如果 operator 是 Equal,则 value 必须与 taint 的 value 相等。

    • 如果不指定operator属性,则默认值为Equal。

    还有两个特殊值:

    • 当不指定key,再配合Exists 就能匹配所有的key与value ,可以容忍所有污点。

    • 当不指定effect ,则匹配所有的effect。
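
    前面 myapp.yaml 中省略的容忍度配置,两种写法大致如下(示意,键值与前文打的污点 k1=v1 对应,具体以实际清单为准):

    # 只容忍 NoSchedule 污点
    tolerations:
    - key: "k1"
      operator: "Equal"
      value: "v1"
      effect: "NoSchedule"

    # 容忍所有污点:不指定 key 与 effect,operator 用 Exists
    tolerations:
    - operator: "Exists"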

    1. [root@node22 node]# kubectl delete -f myapp.yaml
    2. deployment.apps "demo" deleted
    3. [root@node22 node]# kubectl taint node node33 k1-
    4. node/node33 untainted
    5. [root@node22 node]# kubectl taint node node44 k1-
    6. node/node44 untainted
    7. [root@node22 node]# kubectl describe nodes | grep Tain
    8. Taints: node-role.kubernetes.io/master:NoSchedule
    9. Taints: <none>
    10. Taints: <none>

    4).影响 Pod 调度的指令还有:cordon、drain、delete,执行后新创建的 pod 都不会被调度到该节点上,但操作的暴力程度不一样。

    cordon 停止调度:

    影响最小,只会将 node 调为 SchedulingDisabled,新创建的 pod 不会被调度到该节点,节点原有 pod 不受影响,仍正常对外提供服务。

    1. [root@node22 node]# kubectl get node
    2. NAME STATUS ROLES AGE VERSION
    3. node22 Ready control-plane,master 9d v1.23.10
    4. node33 Ready <none> 9d v1.23.10
    5. node44 Ready <none> 9d v1.23.10
    6. [root@node22 node]# kubectl cordon node33
    7. node/node33 cordoned
    8. [root@node22 node]# kubectl get node
    9. NAME STATUS ROLES AGE VERSION
    10. node22 Ready control-plane,master 9d v1.23.10
    11. node33 Ready,SchedulingDisabled <none> 9d v1.23.10
    12. node44 Ready <none> 9d v1.23.10
    13. [root@node22 node]# kubectl uncordon node33 取消禁用
    14. node/node33 uncordoned

     5).drain 驱逐节点: 

    首先驱逐node上的pod,在其他节点重新创建,然后将节点调为SchedulingDisabled

    1. [root@node22 node]# kubectl create deployment demo --image=nginx --replicas=3
    2. deployment.apps/demo created
    3. [root@node22 node]# kubectl get pod -o wide
    4. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    5. demo-6c54f77c95-78mhw 1/1 Running 0 7s 10.244.214.151 node44 <none> <none>
    6. demo-6c54f77c95-c9jp7 1/1 Running 0 7s 10.244.144.123 node33 <none> <none>
    7. demo-6c54f77c95-j8rtj 1/1 Running 0 7s 10.244.144.122 node33 <none> <none>
    8. root@node22 node]# kubectl drain node44 --ignore-daemonsets
    9. node/node44 already cordoned
    10. WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-2tgjc, kube-system/kube-proxy-zh89l, metallb-system/speaker-4hb2q
    11. evicting pod metallb-system/controller-5c97f5f498-zkkww
    12. evicting pod default/demo-6c54f77c95-78mhw
    13. evicting pod ingress-nginx/ingress-nginx-controller-5bbfbbb9c7-d82hw
    14. pod/controller-5c97f5f498-zkkww evicted
    15. pod/demo-6c54f77c95-78mhw evicted
    16. pod/ingress-nginx-controller-5bbfbbb9c7-d82hw evicted
    17. node/node44 drained
    18. [root@node22 node]# kubectl get pod -o wide
    19. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    20. demo-6c54f77c95-bl47m 1/1 Running 0 17s 10.244.144.124 node33 <none> <none>
    21. demo-6c54f77c95-c9jp7 1/1 Running 0 79s 10.244.144.123 node33 <none> <none>
    22. demo-6c54f77c95-j8rtj 1/1 Running 0 79s 10.244.144.122 node33 <none> <none>
    23. [root@node22 node]# kubectl get node
    24. NAME STATUS ROLES AGE VERSION
    25. node22 Ready control-plane,master 9d v1.23.10
    26. node33 Ready <none> 9d v1.23.10
    27. node44 Ready,SchedulingDisabled <none> 9d v1.23.10

     6).delete 删除节点

    最暴力的一个,首先驱逐 node 上的 pod,在其他节点重新创建,然后从 master 节点删除该 node,master 失去对其控制;如要恢复调度,需进入 node 节点重启 kubelet 服务,基于 node 的自注册功能恢复使用。

    1. [root@node22 node]# kubectl delete nodes node44
    2. node "node44" deleted
    3. [root@node22 node]# kubectl get node
    4. NAME STATUS ROLES AGE VERSION
    5. node22 Ready control-plane,master 9d v1.23.10
    6. node33 Ready <none> 9d v1.23.10
    7. [root@node44 ~]# systemctl restart kubelet 重启节点上的 kubelet,基于 node 的自注册功能恢复使用
    8. [root@node22 node]# kubectl get node
    9. NAME STATUS ROLES AGE VERSION
    10. node22 Ready control-plane,master 9d v1.23.10
    11. node33 Ready <none> 9d v1.23.10
    12. node44 Ready <none> 31s v1.23.10

    五、kubernetes访问控制

    1.kubernetes API 访问控制

    Authentication(认证)

    认证方式现共有8种,可以启用一种或多种认证方式,只要有一种认证方式通过,就不再进行其它方式的认证。通常启用 X509 Client Certs 和 Service Account Tokens 两种认证方式。

    • Kubernetes集群有两类用户:由Kubernetes管理的 Service Accounts(服务账户)和 User Accounts(普通账户)。k8s中账号的概念不是我们理解的账号,它并不真的存在,它只是形式上存在。

    Authorization(授权)

    必须经过认证阶段,才到授权请求,根据所有授权策略匹配请求资源属性,决定允许或拒绝请求。授权方式现共有6种:AlwaysDeny、AlwaysAllow、ABAC、RBAC、Webhook、Node。默认集群强制开启 RBAC。

    Admission Control(准入控制)

    用于拦截请求的一种方式,运行在认证、授权之后,是权限认证链上的最后一环,对请求API资源对象进行修改和校验。

    访问 k8s 的 API Server 的客户端主要分为两类:

    • kubectl:用户家目录中的 .kube/config 里面保存了客户端访问API Server的密钥相关信息,这样当用kubectl访问k8s时,它就会自动读取该配置文件,向API Server发起认证,然后完成操作请求。

    • pod:Pod 中的进程需要访问 API Server,如果是人去访问或编写的脚本去访问,这类访问使用的账号为 UserAccount;而 Pod 自身去连接 API Server 时,使用的账号是 ServiceAccount,生产中后者使用居多。

    kubectl 向 apiserver 发起的命令,采用的是 http 方式,其实就是对 URL 发起增删改查的操作。

    • kubectl proxy --port=8888 &

    • curl http://localhost:8888/api/v1/namespaces/default

    • curl http://localhost:8888/apis/apps/v1/namespaces/default/deployments

    以上两种api的区别是:

    • api:它是一个特殊链接,只有在核心 v1 群组中的对象才能使用。

    • apis:它是一般 API 访问的入口固定格式名。

    UserAccount 与 ServiceAccount

    用户账户是针对人而言的。 服务账户是针对运行在 pod 中的进程而言的。

    用户账户是全局性的,其名称在集群各 namespace 中都是全局唯一的,未来的用户资源不会做 namespace 隔离;服务账户是 namespace 隔离的。

    通常情况下,集群的用户账户可能会从企业数据库进行同步,其创建需要特殊权限,并且涉及到复杂的业务流程。服务账户创建的目的是为了更轻量,允许集群用户为了具体的任务创建服务账户(即权限最小化原则)。

    创建服务账号(serviceaccount)

    1. [root@node22 ~]# kubectl delete pvc --all 删除多余pvc
    2. persistentvolumeclaim "data-mysql-0" deleted
    3. [root@node22 ~]# kubectl create sa admin 创建admin账号(服务)
    4. serviceaccount/admin created
    5. [root@node22 ~]# kubectl get sa 查看sa账号
    6. NAME SECRETS AGE
    7. admin 1 52s
    8. default 1 9d
    9. [root@node22 ~]# kubectl describe sa admin此时k8s为用户自动生成认证信息,但没有授权
    10. Name: admin 查看sa信息
    11. Namespace: default
    12. Labels: <none>
    13. Annotations: <none>
    14. Image pull secrets: <none>
    15. Mountable secrets: admin-token-jbwqn
    16. Tokens: admin-token-jbwqn
    17. Events: <none>
    18. [root@node22 ~]# kubectl run demo --image=nginx 创建容器
    19. pod/demo created
    20. [root@node22 ~]# kubectl get pod demo -o yaml | grep default 默认使用default账号
    21. namespace: default
    22. schedulerName: default-scheduler
    23. serviceAccount: default
    24. serviceAccountName: default
    25. defaultMode: 420
    26. [root@node22 ~]# kubectl delete pod demo
    27. pod "demo" deleted

    添加 secrets 到 serviceaccount 中:

    1. [root@node22 sa]# kubectl patch serviceaccount admin -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}'
    2. serviceaccount/admin patched
    3. [root@node22 sa]# kubectl describe sa admin
    4. Name: admin
    5. Namespace: default
    6. Labels: <none>
    7. Annotations: <none>
    8. Image pull secrets: myregistrykey
    9. Mountable secrets: admin-token-jbwqn
    10. Tokens: admin-token-jbwqn
    11. Events: <none>
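
    这里引用的 myregistrykey 是此前为私有仓库创建的 docker-registry 类型 secret;若尚未创建,可用类似下面的命令生成(示意,仓库地址沿用本文的 reg.westos.org,用户名和密码为占位值):

    kubectl create secret docker-registry myregistrykey \
      --docker-server=reg.westos.org \
      --docker-username=admin \
      --docker-password=westos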

    将 serviceaccount 和 pod 绑定起来:

    1. [root@node22 ~]# mkdir sa
    2. [root@node22 ~]# cd sa
    3. [root@node22 sa]# vim pod.yaml 定义一个admin用户
    4. apiVersion: v1
    5. kind: Pod
    6. metadata:
    7. name: myapp
    8. labels:
    9. app: myapp
    10. spec:
    11. containers:
    12. - name: myapp
    13. image: reg.westos.org/westos/game2048
    14. ports:
    15. - name: http
    16. containerPort: 80
    17. serviceAccountName: admin
    18. [root@node22 sa]# kubectl apply -f pod.yaml
    19. pod/myapp created
    20. [root@node22 sa]# kubectl get pod 拉取私有仓库里的镜像需要认证
    21. NAME READY STATUS RESTARTS AGE
    22. demo-6c54f77c95-bl47m 1/1 Running 0 101m
    23. demo-6c54f77c95-c9jp7 1/1 Running 0 102m
    24. demo-6c54f77c95-j8rtj 1/1 Running 0 102m
    25. myapp 0/1 ImagePullBackOff 0 18s
    26. [root@node22 sa]# kubectl delete deployments.apps demo
    27. deployment.apps "demo" deleted
    28. [root@node22 sa]# kubectl delete pod myapp
    29. pod "myapp" deleted

    做完上面"添加 secrets 到 serviceaccount 中"的操作后就可以拉取了

    1. [root@node22 sa]# kubectl apply -f pod.yaml
    2. pod/myapp created
    3. [root@node22 sa]# kubectl get pod
    4. NAME READY STATUS RESTARTS AGE
    5. myapp 1/1 Running 0 18s
    6. [root@node22 sa]# kubectl delete pod myapp
    7. pod "myapp" deleted

    创建用户账号(UserAccount)

    1. [root@node22 sa]# cd /etc/kubernetes/pki/
    2. [root@node22 pki]# openssl genrsa -out test.key 2048 生成一个测试key
    3. [root@node22 pki]# openssl req -new -key test.key -out test.csr -subj "/CN=test"通过私钥生成证书请求文件
    4. [root@node22 pki]# openssl x509 -req -in test.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out test.crt -days 365 通过证书请求文件生成证书
    5. Signature ok
    6. subject=/CN=test
    7. Getting CA Private Key
    8. [root@node22 pki]# kubectl config set-credentials test --client-certificate=/etc/kubernetes/pki/test.crt --client-key=/etc/kubernetes/pki/test.key --embed-certs=true
    9. User "test" set.
    10. [root@node22 pki]# kubectl config set-context test@kubernetes --cluster=kubernetes --user=test 创建用户上下文
    11. Context "test@kubernetes" created.
    12. [root@node22 pki]# kubectl config use-context test@kubernetes 切换到test账号
    13. Switched to context "test@kubernetes".
    14. [root@node22 pki]# kubectl config view 查看
    15. apiVersion: v1
    16. clusters:
    17. - cluster:
    18. certificate-authority-data: DATA+OMITTED
    19. server: https://192.168.0.22:6443
    20. name: kubernetes
    21. contexts:
    22. - context:
    23. cluster: kubernetes
    24. user: kubernetes-admin
    25. name: kubernetes-admin@kubernetes
    26. - context:
    27. cluster: kubernetes
    28. user: test
    29. name: test@kubernetes
    30. current-context: test@kubernetes
    31. kind: Config
    32. preferences: {}
    33. users:
    34. - name: kubernetes-admin
    35. user:
    36. client-certificate-data: REDACTED
    37. client-key-data: REDACTED
    38. - name: test
    39. user:
    40. client-certificate-data: REDACTED
    41. client-key-data: REDACTED
    42. [root@node22 pki]# kubectl get pod权限不够
    43. Error from server (Forbidden): pods is forbidden: User "test" cannot list resource "pods" in API group "" in the namespace "default"
    44. 此时用户通过认证,但还没有权限操作集群资源,需要继续添加授权。

    RBAC(Role Based Access Control):基于角色访问控制授权

    允许管理员通过Kubernetes API动态配置授权策略,RBAC就是用户通过角色与权限进行关联;RBAC只有授权,没有拒绝授权,所以只需要定义允许该用户做什么即可;RBAC包括四种类型:Role、ClusterRole、RoleBinding、ClusterRoleBinding

    RBAC的三个基本概念

    Subject:被作用者,它表示k8s中的三类主体---user、group、serviceAccount

    Role:角色,它其实是一组规则,定义了一组对Kubernetes API对象的操作权限

    RoleBinding:定义了"被作用者"和"角色"的绑定关系

    Role 和 ClusterRole

    • Role 是一系列权限的集合,Role 只能授予单个 namespace 中资源的访问权限。

    • ClusterRole 和 Role 类似,但是可以在集群中全局使用。

    创建局部角色:

    1. [root@node22 sa]# vim role.yaml
    2. kind: Role
    3. apiVersion: rbac.authorization.k8s.io/v1
    4. metadata:
    5. namespace: default
    6. name: myrole
    7. rules:
    8. - apiGroups: [""]
    9. resources: ["pods"]
    10. verbs: ["get", "watch", "list", "create", "update", "patch", "delete"]
    11. [root@node22 sa]# kubectl config use-context kubernetes-admin@kubernetes 切换到admin
    12. Switched to context "kubernetes-admin@kubernetes".
    13. [root@node22 sa]# kubectl apply -f role.yaml 创建角色
    14. role.rbac.authorization.k8s.io/myrole created
    15. [root@node22 sa]# kubectl get role 查看角色
    16. NAME CREATED AT
    17. myrole 2022-09-03T14:04:53Z

    RoleBinding 和 ClusterRoleBinding

    • RoleBinding 是将 Role 中定义的权限授予给用户或用户组。它包含一个 subjects 列表(users、groups 或 service accounts),并引用该 Role。

    • RoleBinding 是对某个 namespace 内授权,ClusterRoleBinding 适用在集群范围内使用。

    绑定角色;

    1. [root@node22 sa]# vim bind.yaml
    2. kind: RoleBinding
    3. apiVersion: rbac.authorization.k8s.io/v1
    4. metadata:
    5. name: test-read-pods
    6. namespace: default
    7. subjects:
    8. - kind: User
    9. name: test
    10. apiGroup: rbac.authorization.k8s.io
    11. roleRef:
    12. kind: Role
    13. name: myrole
    14. apiGroup: rbac.authorization.k8s.io
    15. [root@node22 sa]# kubectl apply -f bind.yaml
    16. rolebinding.rbac.authorization.k8s.io/test-read-pods created
    17. [root@node22 sa]# kubectl get rolebindings.rbac.authorization.k8s.io
    18. NAME ROLE AGE
    19. test-read-pods Role/myrole 14s
    20. [root@node22 sa]# kubectl config use-context test@kubernetes 切换到test用户
    21. Switched to context "test@kubernetes".
    22. [root@node22 sa]# kubectl get pod
    23. No resources found in default namespace.
    24. [root@node22 sa]# kubectl run demo --image=nginx 可以操控pod
    25. pod/demo created

    创建集群角色:

    1. [root@node22 sa]# vim role.yaml
    2. kind: Role
    3. apiVersion: rbac.authorization.k8s.io/v1
    4. metadata:
    5. namespace: default
    6. name: myrole
    7. rules:
    8. - apiGroups: [""]
    9. resources: ["pods"]
    10. verbs: ["get", "watch", "list", "create", "update", "patch", "delete"]
    11. ---
    12. kind: ClusterRole
    13. apiVersion: rbac.authorization.k8s.io/v1
    14. metadata:
    15. name: myclusterrole
    16. rules:
    17. - apiGroups: [""]
    18. resources: ["pods"]
    19. verbs: ["get", "watch", "list", "delete", "create", "update"]
    20. - apiGroups: ["extensions", "apps"]
    21. resources: ["deployments"]
    22. verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
    23. [root@node22 sa]# kubectl apply -f role.yaml
    24. role.rbac.authorization.k8s.io/myrole unchanged
    25. clusterrole.rbac.authorization.k8s.io/myclusterrole created

     创建集群角色绑定:

    1. [root@node22 sa]# vim bind.yaml
    2. kind: RoleBinding
    3. apiVersion: rbac.authorization.k8s.io/v1
    4. metadata:
    5. name: test-read-pods
    6. namespace: default
    7. subjects:
    8. - kind: User
    9. name: test
    10. apiGroup: rbac.authorization.k8s.io
    11. roleRef:
    12. kind: Role
    13. name: myrole
    14. apiGroup: rbac.authorization.k8s.io
    15. ---
    16. apiVersion: rbac.authorization.k8s.io/v1
    17. kind: RoleBinding
    18. metadata:
    19. name: rolebind-myclusterrole
    20. namespace: default
    21. roleRef:
    22. apiGroup: rbac.authorization.k8s.io
    23. kind: ClusterRole
    24. name: myclusterrole
    25. subjects:
    26. - apiGroup: rbac.authorization.k8s.io
    27. kind: User
    28. name: test
    29. [root@node22 sa]# kubectl apply -f bind.yaml
    30. rolebinding.rbac.authorization.k8s.io/test-read-pods unchanged
    31. rolebinding.rbac.authorization.k8s.io/rolebind-myclusterrole created
    32. [root@node22 sa]# kubectl config use-context test@kubernetes 切换到test用户
    33. Switched to context "test@kubernetes".
    34. [root@node22 sa]# kubectl get pod
    35. No resources found in default namespace.
    36. [root@node22 sa]# kubectl get deployments.apps
    37. No resources found in default namespace.
    38. [root@node22 sa]# kubectl create deployments demo --images=nginx
    39. error: unknown flag: --images
    40. See 'kubectl create --help' for usage.
    41. [root@node22 sa]# kubectl create deployments demo --image=nginx
    42. error: unknown flag: --image
    43. See 'kubectl create --help' for usage.
    44. [root@node22 sa]# kubectl create deployment demo --image=nginx
    45. deployment.apps/demo created
    46. [root@node22 sa]# kubectl delete deployments.apps demo
    47. deployment.apps "demo" deleted
    48. [root@node22 sa]# kubectl get deployments.apps -n kube-system 不能访问其他namespace中的资源
    49. Error from server (Forbidden): deployments.apps is forbidden: User "test" cannot list resource "deployments" in API group "apps" in the namespace "kube-system"
    50. [root@node22 sa]# kubectl config use-context kubernetes-admin@kubernetes 切换到admin
    51. Switched to context "kubernetes-admin@kubernetes".

     创建clusterrolebinding

    1. [root@node22 sa]# vim bind.yaml
    2. kind: RoleBinding
    3. apiVersion: rbac.authorization.k8s.io/v1
    4. metadata:
    5. name: test-read-pods
    6. namespace: default
    7. subjects:
    8. - kind: User
    9. name: test
    10. apiGroup: rbac.authorization.k8s.io
    11. roleRef:
    12. kind: Role
    13. name: myrole
    14. apiGroup: rbac.authorization.k8s.io
    15. ---
    16. apiVersion: rbac.authorization.k8s.io/v1
    17. kind: RoleBinding
    18. metadata:
    19. name: rolebind-myclusterrole
    20. namespace: default
    21. roleRef:
    22. apiGroup: rbac.authorization.k8s.io
    23. kind: ClusterRole
    24. name: myclusterrole
    25. subjects:
    26. - apiGroup: rbac.authorization.k8s.io
    27. kind: User
    28. name: test
    29. ---
    30. apiVersion: rbac.authorization.k8s.io/v1
    31. kind: ClusterRoleBinding
    32. metadata:
    33. name: clusterrolebinding-myclusterrole
    34. roleRef:
    35. apiGroup: rbac.authorization.k8s.io
    36. kind: ClusterRole
    37. name: myclusterrole
    38. subjects:
    39. - apiGroup: rbac.authorization.k8s.io
    40. kind: User
    41. name: test
    42. [root@node22 sa]# kubectl apply -f bind.yaml 可以操作整个集群的namespace
    43. rolebinding.rbac.authorization.k8s.io/test-read-pods unchanged
    44. rolebinding.rbac.authorization.k8s.io/rolebind-myclusterrole unchanged
    45. clusterrolebinding.rbac.authorization.k8s.io/clusterrolebinding-myclusterrole created
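
    绑定 ClusterRoleBinding 后,可以在管理员上下文里用 kubectl auth can-i 配合用户伪装(--as)快速验证授权效果(示意):

    kubectl auth can-i list pods --as=test -n kube-system      # 应返回 yes(ClusterRoleBinding 已生效)
    kubectl auth can-i list nodes --as=test                    # 应返回 no(未授予 nodes 权限)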

    服务账户的自动化

    服务账户准入控制器(Service account admission controller):

    如果该 pod 没有 ServiceAccount 设置,将其 ServiceAccount 设为 default

    保证 pod 所关联的 ServiceAccount 存在,否则拒绝该 pod

    如果 pod 不包含 ImagePullSecrets 设置,那么将 ServiceAccount 中的 ImagePullSecrets 信息添加到 pod 中。

    将一个包含用于 API 访问的 token volume 添加到 pod 中。

    将挂载于 /var/run/secrets/kubernetes.io/serviceaccount 的 volumeSource 添加到 pod 下的每个容器中。
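
    可以进入任意运行中的 Pod 查看自动挂载的内容来验证这一点(示意,此处以前文创建过的 myapp Pod 为例):

    kubectl exec myapp -- ls /var/run/secrets/kubernetes.io/serviceaccount
    # 预期输出:ca.crt  namespace  token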

    Token 控制器(Token controller)

    检测服务账户的创建,并且创建相应的 Secret 以支持 API 访问。

    检测服务账户的删除,并且删除所有相应的服务账户 Token Secret

    检测 Secret 的增加,保证相应的服务账户存在,如有需要,为 Secret 增加 token

    检测 Secret 的删除,如有需要,从相应的服务账户中移除引用。

    服务账户控制器(Service account controller)

    服务账户管理器管理各命名空间下的服务账户,并且保证每个活跃的命名空间下存在一个名为 "default" 的服务账户。

    Kubernetes 还拥有用户组(Group)的概念:

    • ServiceAccount 对应的内置"用户"名字是:

    • system:serviceaccount:<Namespace名字>:<ServiceAccount名字>

    而用户组所对应的内置名字是:

    • system:serviceaccounts:<Namespace名字>

    示例1:表示mynamespace中的所有ServiceAccount

    subjects:

    - kind: Group

    name: system:serviceaccounts:mynamespace

    apiGroup: rbac.authorization.k8s.io

    示例2:表示整个系统中的所有ServiceAccount

    subjects:

    - kind: Group

    name: system:serviceaccounts

    apiGroup: rbac.authorization.k8s.io

    Kubernetes 还提供了四个预先定义好的 ClusterRole 来供用户直接使用:

    • cluster-admin:管理整个集群

    • admin:管理所在命名空间内的资源

    • edit:可写

    • view:只读

    示例:(最佳实践)

    kind: RoleBinding

    apiVersion: rbac.authorization.k8s.io/v1

    metadata:

    name: readonly-default

    subjects:

    - kind: ServiceAccount

    name: default

    namespace: default

    roleRef:

    kind: ClusterRole

    name: view

    apiGroup: rbac.authorization.k8s.io
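
    绑定后同样可以用 kubectl auth can-i 验证 default 服务账户的只读权限(示意):

    kubectl auth can-i list pods --as=system:serviceaccount:default:default     # 应返回 yes
    kubectl auth can-i delete pods --as=system:serviceaccount:default:default   # 应返回 no(view 为只读)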

  • 原文地址:https://blog.csdn.net/z17609273238/article/details/126936547