K8S (5): HPA



    The complete metrics-server YAML manifest is included below.

    1. HPA Overview

    • HPA stands for Horizontal Pod Autoscaler. It automatically increases or decreases the number of Pods based on observed CPU or memory utilization, or on custom metrics. HPA does not apply to objects that cannot be scaled, such as DaemonSets; it is typically used with Deployments
    • The HPA controller periodically adjusts the replica count of an RC or Deployment so that it matches the user-defined target
    • Since scaling is driven by metrics such as CPU and memory, HPA needs a component that can monitor these resources. There are several options, such as metrics-server and Heapster; here we use metrics-server

    metrics-server collects CPU and memory usage metrics from each node's kubelet and exposes them through the API server (via the aggregation layer)
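The HPA controller's scaling decision follows a documented rule: desiredReplicas = ceil(currentReplicas × currentMetricValue / desiredMetricValue), clamped to the configured min/max. A minimal Python sketch (the function name and default bounds are ours, for illustration):

```python
import math

def desired_replicas(current_replicas: int, current_metric: float,
                     target_metric: float, min_replicas: int = 1,
                     max_replicas: int = 10) -> int:
    """Sketch of the HPA scaling rule:
    ceil(currentReplicas * currentMetric / targetMetric),
    clamped to [minReplicas, maxReplicas]."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# One replica at 27% CPU against a 20% target scales to 2 replicas.
print(desired_replicas(1, 27, 20))  # 2
```

This is why, in the demo later in this post, replicas keep climbing until the per-pod average drops back under the target.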

    2. HPA Versions

    • List all available HPA API versions
    [root@master C]# kubectl api-versions |grep autoscaling
    autoscaling/v1			# only supports CPU utilization as the scaling metric
    autoscaling/v2beta1		# supports CPU, memory, connection counts, or custom rules as scaling criteria
    autoscaling/v2beta2		# much the same as v2beta1
    
    • Check the current (default) version
    [root@master C]# kubectl explain hpa
    KIND:     HorizontalPodAutoscaler
    VERSION:  autoscaling/v1  # the default version in use is v1
    
    DESCRIPTION:
         configuration of a horizontal pod autoscaler.
    
    FIELDS:
       apiVersion   <string>
         APIVersion defines the versioned schema of this representation of an
         object. Servers should convert recognized schemas to the latest internal
         value, and may reject unrecognized values. More info:
         https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
    
       kind <string>
         Kind is a string value representing the REST resource this object
         represents. Servers may infer this from the endpoint the client submits
         requests to. Cannot be updated. In CamelCase. More info:
         https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
    
       metadata     <Object>
         Standard object metadata. More info:
         https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
    
       spec <Object>
         behaviour of autoscaler. More info:
         https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status.
    
       status       <Object>
         current information about the autoscaler.
    
    
    • Specify a version explicitly; this does not modify anything, it just tells this one command which version to describe
    [root@master C]# kubectl explain hpa --api-version=autoscaling/v2beta1
    KIND:     HorizontalPodAutoscaler
    VERSION:  autoscaling/v2beta1
    
    DESCRIPTION:
         HorizontalPodAutoscaler is the configuration for a horizontal pod
         autoscaler, which automatically manages the replica count of any resource
         implementing the scale subresource based on the metrics specified.
    
    FIELDS:
       apiVersion   <string>
         APIVersion defines the versioned schema of this representation of an
         object. Servers should convert recognized schemas to the latest internal
         value, and may reject unrecognized values. More info:
         https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
    
       kind <string>
         Kind is a string value representing the REST resource this object
         represents. Servers may infer this from the endpoint the client submits
         requests to. Cannot be updated. In CamelCase. More info:
         https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
    
       metadata     <Object>
         metadata is the standard object metadata. More info:
         https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
    
       spec <Object>
         spec is the specification for the behaviour of the autoscaler. More info:
         https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status.
    
       status       <Object>
         status is the current information about the autoscaler.
    
    

    3. Deploying HPA

    (1) Deploy metrics-server

    [root@master kube-system]# kubectl top nodes  # check node status; this errors because metrics-server is not installed yet
    Error from server (NotFound): the server could not find the requested resource (get services http:heapster:)
    
    • Write the YAML manifest; pay attention to the port and the image
    [root@master kube-system]# vim components-v0.5.0.yaml
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      labels:
        k8s-app: metrics-server
      name: metrics-server
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      labels:
        k8s-app: metrics-server
        rbac.authorization.k8s.io/aggregate-to-admin: "true"
        rbac.authorization.k8s.io/aggregate-to-edit: "true"
        rbac.authorization.k8s.io/aggregate-to-view: "true"
      name: system:aggregated-metrics-reader
    rules:
    - apiGroups:
      - metrics.k8s.io
      resources:
      - pods
      - nodes
      verbs:
      - get
      - list
      - watch
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      labels:
        k8s-app: metrics-server
      name: system:metrics-server
    rules:
    - apiGroups:
      - ""
      resources:
      - pods
      - nodes
      - nodes/stats
      - namespaces
      - configmaps
      verbs:
      - get
      - list
      - watch
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      labels:
        k8s-app: metrics-server
      name: metrics-server-auth-reader
      namespace: kube-system
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: extension-apiserver-authentication-reader
    subjects:
    - kind: ServiceAccount
      name: metrics-server
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      labels:
        k8s-app: metrics-server
      name: metrics-server:system:auth-delegator
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:auth-delegator
    subjects:
    - kind: ServiceAccount
      name: metrics-server
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      labels:
        k8s-app: metrics-server
      name: system:metrics-server
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:metrics-server
    subjects:
    - kind: ServiceAccount
      name: metrics-server
      namespace: kube-system
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        k8s-app: metrics-server
      name: metrics-server
      namespace: kube-system
    spec:
      ports:
      - name: https
        port: 443
        protocol: TCP
        targetPort: https
      selector:
        k8s-app: metrics-server
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        k8s-app: metrics-server
      name: metrics-server
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          k8s-app: metrics-server
      strategy:
        rollingUpdate:
          maxUnavailable: 0
      template:
        metadata:
          labels:
            k8s-app: metrics-server
        spec:
          containers:
          - args:
            - --cert-dir=/tmp
            - --secure-port=4443
            - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
            - --kubelet-use-node-status-port
            - --metric-resolution=15s
            - --kubelet-insecure-tls
            image: registry.cn-shenzhen.aliyuncs.com/zengfengjin/metrics-server:v0.5.0
            imagePullPolicy: IfNotPresent
            livenessProbe:
              failureThreshold: 3
              httpGet:
                path: /livez
                port: https
                scheme: HTTPS
              periodSeconds: 10
            name: metrics-server
            ports:
            - containerPort: 4443
              name: https
              protocol: TCP
            readinessProbe:
              failureThreshold: 3
              httpGet:
                path: /readyz
                port: https
                scheme: HTTPS
              initialDelaySeconds: 20
              periodSeconds: 10
            resources:
              requests:
                cpu: 100m
                memory: 200Mi
            securityContext:
              readOnlyRootFilesystem: true
              runAsNonRoot: true
              runAsUser: 1000
            volumeMounts:
            - mountPath: /tmp
              name: tmp-dir
          nodeSelector:
            kubernetes.io/os: linux
          priorityClassName: system-cluster-critical
          serviceAccountName: metrics-server
          volumes:
          - emptyDir: {}
            name: tmp-dir
    ---
    apiVersion: apiregistration.k8s.io/v1
    kind: APIService
    metadata:
      labels:
        k8s-app: metrics-server
      name: v1beta1.metrics.k8s.io
    spec:
      group: metrics.k8s.io
      groupPriorityMinimum: 100
      insecureSkipTLSVerify: true
      service:
        name: metrics-server
        namespace: kube-system
      version: v1beta1
      versionPriority: 100
    
    • Deploy it
    [root@master kube-system]# kubectl apply -f components-v0.5.0.yaml
    serviceaccount/metrics-server created
    clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
    clusterrole.rbac.authorization.k8s.io/system:metrics-server created
    rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
    clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
    clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
    service/metrics-server created
    deployment.apps/metrics-server created
    apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
    
    # check the created pod
    [root@master kube-system]# kubectl get pods -n kube-system| egrep 'NAME|metrics-server'
    NAME                              READY   STATUS              RESTARTS   AGE
    metrics-server-5944675dfb-q6cdd   0/1     ContainerCreating   0          6s
    
    # check the logs
    [root@master kube-system]# kubectl logs metrics-server-5944675dfb-q6cdd  -n kube-system 
    I0718 03:06:39.064633       1 serving.go:341] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
    I0718 03:06:39.870097       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
    I0718 03:06:39.870122       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
    I0718 03:06:39.870159       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
    I0718 03:06:39.870160       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
    I0718 03:06:39.870105       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
    I0718 03:06:39.871166       1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
    I0718 03:06:39.872804       1 dynamic_serving_content.go:130] Starting serving-cert::/tmp/apiserver.crt::/tmp/apiserver.key
    I0718 03:06:39.875741       1 secure_serving.go:197] Serving securely on [::]:4443
    I0718 03:06:39.876050       1 tlsconfig.go:240] Starting DynamicServingCertificateController
    I0718 03:06:39.970469       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
    I0718 03:06:39.970575       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
    I0718 03:06:39.971610       1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController
    
    # if you get errors here, you can edit the kube-apiserver manifest (Kubernetes' own static Pod YAML)
    [root@master kube-system]# vim /etc/kubernetes/manifests/kube-apiserver.yaml
    
        - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
        - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
        - --enable-aggregator-routing=true     # add this line
        image: registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.0
        imagePullPolicy: IfNotPresent
    
    # save and exit
    [root@master kube-system]# systemctl restart kubelet  # restart kubelet after the change
    
    # check the node metrics again
    
    [root@master kube-system]# kubectl top node
    NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
    master   327m         4%     3909Mi          23%
    node     148m         1%     1327Mi          8%
    

    (2) Create a Deployment

    • Create an nginx Deployment
    [root@master test]# cat nginx.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
    spec:
      selector:
        matchLabels:
          run: nginx
      replicas: 1
      template:
        metadata:
          labels:
            run: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:1.15.2
            ports:
            - containerPort: 80
            resources:
              limits:
                cpu: 500m
              requests:  # a resources.requests declaration is required for HPA to work
                cpu: 200m
    
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx
      labels:
        run: nginx
    spec:
      ports:
      - port: 80
      selector:
        run: nginx
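In the v1 API, CPU "utilization" is measured relative to resources.requests, which is why the requests declaration above is mandatory. With a 200m request, a 20% utilization target (used in the next section) corresponds to roughly 40m of actual CPU per pod. A quick sketch of that arithmetic (the function name is ours):

```python
def cpu_trigger_millicores(request_m: int, target_percent: int) -> float:
    """Absolute per-pod CPU usage at which the HPA target is reached:
    utilization = usage / requests, so the trigger point is requests * target%."""
    return request_m * target_percent / 100

# A 200m request with a 20% target means scaling starts around 40m of usage.
print(cpu_trigger_millicores(200, 20))  # 40.0
```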
    
    • Test access
    [root@master test]# kubectl get pods -o wide
    NAME                    READY   STATUS    RESTARTS   AGE   IP            NODE   NOMINATED NODE   READINESS GATES
    nginx-9cb8d65b5-tq9v4   1/1     Running   0          14m   10.244.1.22   node   <none>           <none>
    [root@master test]# kubectl get svc nginx
    NAME    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
    nginx   ClusterIP   172.16.169.27   <none>        80/TCP    15m
    [root@master test]# kubectl describe svc nginx
    Name:              nginx
    Namespace:         default
    Labels:            run=nginx
    Annotations:       Selector:  run=nginx
    Type:              ClusterIP
    IP:                172.16.169.27
    Port:              <unset>  80/TCP
    TargetPort:        80/TCP
    Endpoints:         10.244.1.22:80
    Session Affinity:  None
    Events:            <none>
    [root@node test]# curl 172.16.169.27  # access succeeds
    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx!</title>
    <style>
        body {
            width: 35em;
            margin: 0 auto;
            font-family: Tahoma, Verdana, Arial, sans-serif;
        }
    </style>
    </head>
    <body>
    <h1>Welcome to nginx!</h1>
    <p>If you see this page, the nginx web server is successfully installed and
    working. Further configuration is required.</p>
    
    <p>For online documentation and support please refer to
    <a href="http://nginx.org/">nginx.org</a>.<br/>
    Commercial support is available at
    <a href="http://nginx.com/">nginx.com</a>.</p>
    
    <p><em>Thank you for using nginx.</em></p>
    </body>
    </html>
    

    (3) Create an HPA based on CPU

    # create an HPA that targets 20% CPU utilization, with at most 10 pods and at least 1; no version is specified here, so the default v1 is used, and v1 only supports CPU as the metric
    [root@master test]# kubectl autoscale deployment nginx --cpu-percent=20 --min=1 --max=10
    horizontalpodautoscaler.autoscaling/nginx autoscaled
    
    # the TARGETS column shows current/target utilization
    [root@master test]# kubectl get hpa
    NAME    REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
    nginx   Deployment/nginx   0%/20%    1         10        1          86s
    
    # create a test pod to generate load; point the requests at the nginx pod address shown above
    [root@master ~]# kubectl run busybox -it --image=busybox -- /bin/sh -c 'while true; do wget -q -O- http://10.244.1.22; done'
    
    
    # about a minute later, check the HPA utilization; REPLICAS is the current pod count
    [root@master test]# kubectl get hpa
    NAME    REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
    nginx   Deployment/nginx   27%/20%   1         10        5          54m
    
    [root@master test]# kubectl get pods   # check the pods again; the count has already grown to 5
    NAME                    READY   STATUS    RESTARTS   AGE
    busybox                 1/1     Running   0          119s
    nginx-9cb8d65b5-24dg2   1/1     Running   0          57s
    nginx-9cb8d65b5-c6n98   1/1     Running   0          87s
    nginx-9cb8d65b5-ksjzv   1/1     Running   0          57s
    nginx-9cb8d65b5-n77fm   1/1     Running   0          87s
    nginx-9cb8d65b5-tq9v4   1/1     Running   0          84m
    [root@master test]# kubectl get deployments.apps
    NAME    READY   UP-TO-DATE   AVAILABLE   AGE
    nginx   5/5     5            5           84m
    
    
    # now stop the load test, wait several minutes, then check the pod count and utilization again
    [root@master test]# kubectl delete pod busybox  # after stopping the loop, delete the test pod
    [root@master test]# kubectl get hpa  # utilization is back down to 0%, but REPLICAS is still 5; scale-down happens after a short stabilization period
    NAME    REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
    nginx   Deployment/nginx   0%/20%    1         10        5          58m
    
    # a few minutes later, the pod count has returned to 1
    [root@master test]# kubectl get hpa
    NAME    REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
    nginx   Deployment/nginx   0%/20%    1         10        1          64m  
    [root@master test]# kubectl get pods
    NAME                    READY   STATUS    RESTARTS   AGE
    nginx-9cb8d65b5-tq9v4   1/1     Running   0          95m
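The imperative `kubectl autoscale` command used above can also be written declaratively. A sketch of the equivalent autoscaling/v1 manifest, matching the values passed on the command line:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 1                       # --min=1
  maxReplicas: 10                      # --max=10
  targetCPUUtilizationPercentage: 20   # --cpu-percent=20
```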
    

    (4) Create an HPA based on memory

    # first delete the resources created above
    [root@master test]# kubectl delete horizontalpodautoscalers.autoscaling  nginx
    horizontalpodautoscaler.autoscaling "nginx" deleted
    [root@master test]# kubectl delete -f nginx.yaml
    deployment.apps "nginx" deleted
    service "nginx" deleted
    
    • Rewrite the YAML manifest
    [root@master test]# cat nginx.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
    spec:
      selector:
        matchLabels:
          run: nginx
      replicas: 1
      template:
        metadata:
          labels:
            run: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:1.15.2
            ports:
            - containerPort: 80
            resources:
              limits:
                cpu: 500m
                memory: 60Mi
              requests:
                cpu: 200m
                memory: 25Mi
    
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx
      labels:
        run: nginx
    spec:
      ports:
      - port: 80
      selector:
        run: nginx
    
    [root@master test]# kubectl apply -f nginx.yaml
    deployment.apps/nginx created
    service/nginx created
    
    • Create the HPA
    [root@master test]# vim hpa-nginx.yaml
    apiVersion: autoscaling/v2beta1  # as noted in the versions section above, a memory-based HPA needs a version other than v1
    kind: HorizontalPodAutoscaler
    metadata:
      name: nginx-hpa
    spec:
      maxReplicas: 10  # pod count bounded between 1 and 10
      minReplicas: 1
      scaleTargetRef:               # the resource this HPA scales; apiVersion, kind, and name must match the Deployment created above
        apiVersion: apps/v1
        kind: Deployment
        name: nginx
      metrics:
      - type: Resource
        resource:
          name: memory
          targetAverageUtilization: 50   # target 50% average memory utilization
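Note that autoscaling/v2beta1 was deprecated and removed in later Kubernetes releases; on current clusters the same HPA is written against autoscaling/v2, where the target value moves under a `target` block. A sketch of the equivalent v2 manifest:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  minReplicas: 1
  maxReplicas: 10
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization        # percentage of requests, as in v2beta1
        averageUtilization: 50
```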
    
    [root@master test]# kubectl apply -f hpa-nginx.yaml
    horizontalpodautoscaler.autoscaling/nginx-hpa created
    [root@master test]# kubectl get hpa
    NAME        REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
    nginx-hpa   Deployment/nginx   7%/50%    1         10        1          59s
    
    • Switch to another terminal for the test
    # run a command inside the pod to drive up memory usage
    [root@master ~]# kubectl exec -it nginx-78f4944bb8-2rz7j -- /bin/sh -c 'dd if=/dev/zero of=/tmp/file1'
    
    • Wait for the load to rise, then check the pod count and memory utilization
    [root@master test]# kubectl get hpa
    NAME        REFERENCE          TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
    nginx-hpa   Deployment/nginx   137%/50%   1         10        1          12m
    [root@master test]# kubectl get hpa
    NAME        REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
    nginx-hpa   Deployment/nginx   14%/50%   1         10        3          12m
    [root@master test]# kubectl get pods
    NAME                     READY   STATUS    RESTARTS   AGE
    nginx-78f4944bb8-2rz7j   1/1     Running   0          21m
    nginx-78f4944bb8-bxh78   1/1     Running   0          34s
    nginx-78f4944bb8-g8w2h   1/1     Running   0          34s
    # as with CPU, pods are created automatically when memory usage rises
    
  • Original article: https://blog.csdn.net/rzy1248873545/article/details/125971278