

    Service Explained

    I. Introduction to Service

    • In Kubernetes, a Pod is the carrier of an application, and the application can be reached through the Pod's IP. However, Pod IPs are not fixed, which makes it inconvenient to access a service directly by Pod IP.
    • To solve this, Kubernetes provides the Service resource. A Service aggregates the Pods that provide the same service and exposes a single, unified entry address.
    • By accessing the Service's entry address, you reach the Pods behind it.


    • A Deployment creates labeled Pods and manages them through a label selector; a Service likewise uses a label selector to pick the Pods that carry the matching labels.
    • In many cases a Service is only a concept; what actually does the work is the kube-proxy process, one of which runs on every Node.
    • When a Service is created, its definition is written to etcd through the api-server. kube-proxy watches for such Service changes and converts the latest Service information into the corresponding forwarding rules.


    • In short: a Service is created via a request to the apiserver, the apiserver writes the information to etcd, kube-proxy notices the change, and kube-proxy then writes the forwarding rules on each node.
    ## 10.97.97.97:80 is the access entry provided by the Service
    ## When this entry is accessed, there are three Pods behind it waiting to serve the request;
    ## kube-proxy distributes each request to one of them using the rr (round-robin) policy
    ## The same rules are generated on all nodes in the cluster, so the entry can be accessed from any node
    
    [root@node1 ~]# ipvsadm -Ln
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  10.97.97.97:80 rr
      -> 10.244.1.39:80               Masq    1      0          0
      -> 10.244.1.40:80               Masq    1      0          0
      -> 10.244.2.33:80               Masq    1      0          0
    
    1. kube-proxy currently supports three working modes
    1.1 userspace mode
    • In userspace mode, kube-proxy opens a listening port for every Service. Requests sent to the Cluster IP are redirected by iptables rules to kube-proxy's listening port; kube-proxy then picks a backend Pod according to its LB algorithm, establishes a connection to it, and forwards the request to that Pod.
    • In this mode, kube-proxy acts as a layer-4 load balancer.
    • Because kube-proxy runs in user space, forwarding adds extra data copies between kernel space and user space; the mode is fairly stable but relatively inefficient.


    • The apiserver tells kube-proxy that a Service has been created; kube-proxy adds rules for the cluster IP, accepts the traffic itself, and forwards it on to a Pod.
    1.2 iptables mode
    • In iptables mode, kube-proxy creates iptables rules for every backend Pod of the Service, redirecting requests sent to the Cluster IP directly to a Pod IP.

    • In this mode kube-proxy does not play the role of a layer-4 load balancer; it is only responsible for creating the iptables rules.

    • The advantage of this mode is higher efficiency than userspace mode; the drawbacks are that it cannot offer flexible LB policies and cannot retry when a backend Pod becomes unavailable (a quick way to inspect the generated rules is sketched below).

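    • On a cluster whose kube-proxy runs in iptables mode, the generated NAT rules can be peeked at directly on a node. A hedged sketch (10.97.97.97 is just the example cluster IP used earlier; kube-proxy's chains follow the KUBE-SERVICES / KUBE-SVC-* / KUBE-SEP-* naming convention):

    # list the entry chain kube-proxy maintains in the nat table
    iptables -t nat -L KUBE-SERVICES -n | head
    # or dump the nat table and filter for one Service's cluster IP
    iptables-save -t nat | grep 10.97.97.97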

    1.3 ipvs mode
    • The ipvs mode is similar to iptables mode: kube-proxy watches Pod changes and creates the corresponding ipvs rules.
    • Compared with iptables, ipvs forwards traffic more efficiently and supports more LB algorithms.


    # This mode requires the ipvs kernel modules to be installed; otherwise kube-proxy falls back to iptables
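    # (Hedged addition, not in the original) One common way to load the ipvs kernel modules
    # on each node before switching the mode; module names can vary slightly across kernel versions.
    for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do modprobe $mod; done
    lsmod | grep ip_vs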
    
    [root@k8s-master ~]# kubectl api-resources|grep cm
    configmaps                        cm           v1                                     true         ConfigMap
    
    cm is the short name for ConfigMap (configuration map)
    
    # Enable ipvs: edit the kube-proxy ConfigMap and change mode to "ipvs"
    [root@k8s-master ~]# kubectl edit configmaps kube-proxy -n kube-system
    configmap/kube-proxy edited
    [root@k8s-master ~]# 
    mode: "ipvs"
    
    # Find the kube-proxy pods by their label k8s-app=kube-proxy
    [root@k8s-master ~]# kubectl get pods -n kube-system --show-labels|grep k8s-app=kube-proxy
    kube-proxy-65lcn                     1/1     Running   12 (64m ago)   9d    controller-revision-hash=dd4c999cf,k8s-app=kube-proxy,pod-template-generation=1
    kube-proxy-lw4z2                     1/1     Running   12 (63m ago)   8d    controller-revision-hash=dd4c999cf,k8s-app=kube-proxy,pod-template-generation=1
    kube-proxy-zskvf                     1/1     Running   13 (63m ago)   8d    controller-revision-hash=dd4c999cf,k8s-app=kube-proxy,pod-template-generation=1
    [root@k8s-master ~]# 
    
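    # Deleting the kube-proxy pods lets their DaemonSet recreate them, so they restart with the new ipvs mode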
    [root@k8s-master ~]# kubectl delete pod -l k8s-app=kube-proxy -n kube-system
    pod "kube-proxy-65lcn" deleted
    pod "kube-proxy-lw4z2" deleted
    pod "kube-proxy-zskvf" deleted
    
    
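    # (Hedged addition) Optionally confirm the active mode from a node via kube-proxy's metrics
    # endpoint; with the default metricsBindAddress this should print "ipvs":
    curl 127.0.0.1:10249/proxyMode
    
    # Then check that ipvs rules now exist for the cluster Services: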
    [root@k8s-node1 ~]# ipvsadm -Ln
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  172.17.0.1:30514 rr
    TCP  172.17.0.1:31876 rr
    TCP  192.168.232.132:30514 rr
    TCP  192.168.232.132:31876 rr
    TCP  10.96.0.1:443 rr
      -> 192.168.232.128:6443         Masq    1      0          0         
    TCP  10.96.0.10:53 rr
      -> 10.244.0.26:53               Masq    1      0          0         
      -> 10.244.0.27:53               Masq    1      0          0         
    TCP  10.96.0.10:9153 rr
      -> 10.244.0.26:9153             Masq    1      0          0         
      -> 10.244.0.27:9153             Masq    1      0          0         
    TCP  10.99.32.163:80 rr
    TCP  10.111.65.78:80 rr
    TCP  10.244.1.0:30514 rr
    TCP  10.244.1.0:31876 rr
    UDP  10.96.0.10:53 rr
      -> 10.244.0.26:53               Masq    1      0          0         
      -> 10.244.0.27:53               Masq    1      0          0         
    [root@k8s-node1 ~]# 
    

    II. Service Types

    1. The Service resource manifest
    kind: Service  # resource type
    apiVersion: v1  # resource version
    metadata: # metadata
      name: service # resource name
      namespace: dev # namespace
    spec: # description
      selector: # label selector: determines which Pods this Service proxies
        app: nginx
      type: # Service type: how the Service is accessed
      clusterIP:  # virtual service IP address
      sessionAffinity: # session affinity, supports the ClientIP and None options
      ports: # port information
        - protocol: TCP 
          port: 3017  # Service port
          targetPort: 5003 # Pod port
          nodePort: 31122 # node (host) port
    
    • ClusterIP: the default. A virtual IP automatically assigned by Kubernetes, reachable only from inside the cluster.
    • NodePort: exposes the Service on a given port of each Node; with this method the service can be accessed from outside the cluster.
    • LoadBalancer: uses an external load balancer to distribute traffic to the service; note that this mode requires support from an external cloud environment.
    • Alternatively, you can set up an LB yourself outside the cluster.
    • ExternalName: brings a service from outside the cluster into it so it can be used directly.

    III. Using Service

    1. Preparing the lab environment
    • Before using a Service, first create 3 Pods with a Deployment; note that the Pods must carry the label app=nginx-pod.

    • Create the file

    [root@k8s-master manifest]# cat deployment.yaml 
    apiVersion: apps/v1
    kind: Deployment      
    metadata:
      name: pc-deployment
      namespace: dev
    spec: 
      replicas: 3
      selector:
        matchLabels:
          app: nginx-pod
      template:
        metadata:
          labels:
            app: nginx-pod
        spec:
          containers:
          - name: nginx
            image: nginx:1.17.1
            ports:
            - containerPort: 80
    [root@k8s-master manifest]# 
    
    • Run it
    [root@k8s-master manifest]# kubectl apply -f deployment.yaml 
    deployment.apps/pc-deployment created
    [root@k8s-master manifest]#
    
    # Check the Deployment and Pod details
    [root@k8s-master manifest]# kubectl get -f deployment.yaml 
    NAME            READY   UP-TO-DATE   AVAILABLE   AGE
    pc-deployment   3/3     3            3           17s
    [root@k8s-master manifest]# kubectl get -f deployment.yaml -o wide
    NAME            READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         SELECTOR
    pc-deployment   3/3     3            3           36s   nginx        nginx:1.17.1   app=nginx-pod
    [root@k8s-master manifest]# 
    [root@k8s-master manifest]# kubectl get pods -n dev -o wide
    NAME                             READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
    pc-deployment-66d5c85c96-6qj82   1/1     Running   0          61s   10.244.2.114   k8s-node2   <none>           <none>
    pc-deployment-66d5c85c96-dc7gr   1/1     Running   0          61s   10.244.1.100   k8s-node1   <none>           <none>
    pc-deployment-66d5c85c96-v8829   1/1     Running   0          61s   10.244.2.113   k8s-node2   <none>           <none>
    [root@k8s-master manifest]# 
    
    
    # To make later tests easier to tell apart, change the default page served by each of the three nginx Pods
    [root@k8s-master ~]# kubectl exec -itn dev pc-deployment-66d5c85c96-6qj82 -- /bin/sh
    # echo "page 111" > /usr/share/nginx/html/index.html
    # exit
    [root@k8s-master ~]# kubectl exec -itn dev pc-deployment-66d5c85c96-dc7gr -- /bin/sh
    # echo "page 222" > /usr/share/nginx/html/index.html
    # exit
    [root@k8s-master ~]# kubectl exec -itn dev pc-deployment-66d5c85c96-v8829 -- /bin/sh
    # echo "page 333" > /usr/share/nginx/html/index.html
    # exit
    [root@k8s-master ~]# 
    
    
    # After the changes, test access by Pod IP
    [root@k8s-master ~]# curl 10.244.2.114
    page 111
    [root@k8s-master ~]# curl 10.244.1.100
    page 222
    [root@k8s-master ~]# curl 10.244.2.113
    page 333
    [root@k8s-master ~]# 
    
    2. ClusterIP Services
    • Create the Service file
    [root@k8s-master manifest]# cat service-clusterip.yaml 
    apiVersion: v1
    kind: Service
    metadata:
      name: service-clusterip
      namespace: dev
    spec:
      selector:
        app: nginx-pod
      type: ClusterIP
      ports:
      - port: 80  # Service port
        targetPort: 80 # Pod port
    [root@k8s-master manifest]# 
    
    • Run the Service
    # Create the Service
    [root@k8s-master manifest]# kubectl apply -f service-clusterip.yaml 
    service/service-clusterip created
    [root@k8s-master manifest]# 
    
    
    
    # Check the Service
    [root@k8s-master manifest]# kubectl get -f service-clusterip.yaml 
    NAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
    service-clusterip   ClusterIP   10.98.207.217   <none>        80/TCP    20s
    [root@k8s-master manifest]# kubectl get svc -n dev -o wide
    NAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE   SELECTOR
    service-clusterip   ClusterIP   10.98.207.217   <none>        80/TCP    40s   app=nginx-pod
    [root@k8s-master manifest]# 
    
    
    # Describe the Service for details
    # The Endpoints list here contains the backend endpoints this Service can load-balance to
    [root@k8s-master manifest]# kubectl describe svc service-clusterip -n dev
    Name:              service-clusterip
    Namespace:         dev
    Labels:            <none>
    Annotations:       <none>
    Selector:          app=nginx-pod
    Type:              ClusterIP
    IP Family Policy:  SingleStack
    IP Families:       IPv4
    IP:                10.98.207.217
    IPs:               10.98.207.217
    Port:              <unset>  80/TCP
    TargetPort:        80/TCP
    Endpoints:         10.244.1.100:80,10.244.2.113:80,10.244.2.114:80   # the IPs of the backend Pods; these change whenever the Pods are rebuilt
    Session Affinity:  None
    Events:            <none>
    [root@k8s-master manifest]# 
    
    
    # Check the ipvs mapping rules
    [root@k8s-master ~]# ipvsadm -Ln
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn    
    TCP  10.98.207.217:80 rr
      -> 10.244.1.100:80              Masq    1      0          0         
      -> 10.244.2.113:80              Masq    1      0          0         
      -> 10.244.2.114:80              Masq    1      0          0         
    
    # Access 10.98.207.217:80 and observe the round-robin effect
    [root@k8s-master manifest]# kubectl get svc -n dev -o wide
    NAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE   SELECTOR
    service-clusterip   ClusterIP   10.98.207.217   <none>        80/TCP    95s   app=nginx-pod
    [root@k8s-master manifest]# curl 10.98.207.217
    page 111
    [root@k8s-master manifest]# curl 10.98.207.217
    page 333
    [root@k8s-master manifest]# curl 10.98.207.217
    page 222
    [root@k8s-master manifest]# curl 10.98.207.217
    page 111
    [root@k8s-master manifest]# curl 10.98.207.217
    page 333
    [root@k8s-master manifest]# curl 10.98.207.217
    page 222
    [root@k8s-master manifest]# 
    
    3. Endpoints
    • Endpoints is a Kubernetes resource object stored in etcd. It records the access addresses of all Pods backing a Service and is generated from the selector in the Service definition.

    • A Service is backed by a group of Pods, and those Pods are exposed through Endpoints; Endpoints is the set of endpoints that actually implement the service.

    • The link between a Service and its Pods is realized through Endpoints.

    • Like a list of backend real servers, Endpoints changes as the Pods change, and the Service reads the Endpoints information.


    [root@k8s-master ~]# kubectl get endpoints -n dev -o wide
    NAME                ENDPOINTS                                         AGE
    service-clusterip   10.244.1.100:80,10.244.2.113:80,10.244.2.114:80   17m
    [root@k8s-master ~]# 
    
    4. Load distribution policy
    • Traffic to a Service is distributed to the backend Pods. Kubernetes currently provides two distribution policies:

      • If nothing is configured, kube-proxy's default policy is used, e.g. random or round-robin.
      • Client-address-based session affinity: all requests from the same client are forwarded to one fixed Pod.
    • This mode is enabled by adding sessionAffinity: ClientIP to the spec.

    • sessionAffinity means session affinity, i.e. session persistence.

    # Check the ipvs mapping rules (rr = round-robin)
    # grep -A 3 also prints the 3 lines following the match
    [root@k8s-master manifest]# ipvsadm -Ln|grep -A 3 '10.98.207.217'
    TCP  10.98.207.217:80 rr
      -> 10.244.1.100:80              Masq    1      0          0         
      -> 10.244.2.113:80              Masq    1      0          0         
      -> 10.244.2.114:80              Masq    1      0          0         
    [root@k8s-master manifest]# 
           
    
    # Access in a loop to test
    [root@k8s-master ~]# while true;do curl 10.98.207.217:80; sleep 2; done;
    page 111
    page 333
    page 222
    page 111
    page 333
    page 222
    page 111
    ^C
    [root@k8s-master ~]# 
    
    
    # Change the distribution policy: sessionAffinity: ClientIP
    [root@k8s-master manifest]# cat service-clusterip.yaml 
    apiVersion: v1
    kind: Service
    metadata:
      name: service-clusterip
      namespace: dev
    spec:
      sessionAffinity: ClientIP
      selector:
        app: nginx-pod
      type: ClusterIP
      ports:
      - port: 80  # Service端口       
        targetPort: 80 # pod端口
    [root@k8s-master manifest]# 
    [root@k8s-master manifest]# kubectl delete -f service-clusterip.yaml 
    service "service-clusterip" deleted
    [root@k8s-master manifest]# kubectl apply -f service-clusterip.yaml 
    service/service-clusterip created
    [root@k8s-master manifest]# 
    
        
    # Check the ipvs rules (persistent indicates session persistence)
    [root@k8s-master manifest]# kubectl get svc -n dev
    NAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
    service-clusterip   ClusterIP   10.111.98.171   <none>        80/TCP    55s
    [root@k8s-master manifest]# ipvsadm -Ln|grep -A 3 '10.111.98.171'
    TCP  10.111.98.171:80 rr persistent 10800
      -> 10.244.1.100:80              Masq    1      0          0         
      -> 10.244.2.113:80              Masq    1      0          0         
      -> 10.244.2.114:80              Masq    1      0          0         
    [root@k8s-master manifest]# 
    
    
    
    
    # Access in a loop to test
    [root@k8s-master ~]# while true;do curl 10.111.98.171; sleep 2; done;
    page 111
    page 111
    page 111
    page 111
    ^C
    [root@k8s-master ~]# 
    [root@k8s-master manifest]# ipvsadm -Ln|grep -A 3 '10.111.98.171'
    TCP  10.111.98.171:80 rr persistent 10800
      -> 10.244.1.100:80              Masq    1      0          0         
      -> 10.244.2.113:80              Masq    1      0          0         
      -> 10.244.2.114:80              Masq    1      0          11        
    [root@k8s-master manifest]# 
    
    
    # Delete the Service
    [root@k8s-master manifest]# kubectl delete -f service-clusterip.yaml 
    service "service-clusterip" deleted
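
    • The persistent 10800 shown in the ipvs rule above is the session-affinity timeout (10800 seconds, i.e. 3 hours, the Kubernetes default). A minimal sketch, not from the original post, of how this window could be shortened through sessionAffinityConfig:

    apiVersion: v1
    kind: Service
    metadata:
      name: service-clusterip
      namespace: dev
    spec:
      sessionAffinity: ClientIP
      sessionAffinityConfig:      # optional: tune the affinity window
        clientIP:
          timeoutSeconds: 600     # hypothetical value: 10 minutes instead of the 3-hour default
      selector:
        app: nginx-pod
      type: ClusterIP
      ports:
      - port: 80
        targetPort: 80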
    
    5. Headless Services
    • In some scenarios, developers may not want the load balancing a Service provides and would rather control the load-balancing policy themselves.
    • For this case Kubernetes provides the headless Service. No Cluster IP is allocated to such a Service, and it can only be reached by querying the Service's DNS name.
    5.1 Create service-headliness.yaml
    [root@k8s-master manifest]# cat service-headliness.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: service-headliness
      namespace: dev
    spec:
      selector:
        app: nginx-pod
      clusterIP: None # setting clusterIP to None creates a headless Service (no IP is assigned)
      type: ClusterIP
      ports:
      - port: 80    
        targetPort: 80
    [root@k8s-master manifest]# 
    
    • Run it
    # Create the Service
    [root@k8s-master manifest]# kubectl apply -f service-headliness.yaml
    service/service-headliness created
    [root@k8s-master manifest]# 
    
    
    # Get the Service; note that no CLUSTER-IP has been assigned
    [root@k8s-master manifest]# kubectl get svc service-headliness -n dev -o wide
    NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE   SELECTOR
    service-headliness   ClusterIP   None         <none>        80/TCP    28s   app=nginx-pod
    [root@k8s-master manifest]# kubectl get -f service-headliness.yaml
    NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
    service-headliness   ClusterIP   None         <none>        80/TCP    35s
    [root@k8s-master manifest]# 
    
    
    # Describe the Service
    [root@k8s-master manifest]# kubectl describe svc service-headliness -n dev
    Name:              service-headliness
    Namespace:         dev
    Labels:            <none>
    Annotations:       <none>
    Selector:          app=nginx-pod
    Type:              ClusterIP
    IP Family Policy:  SingleStack
    IP Families:       IPv4
    IP:                None
    IPs:               None
    Port:              <unset>  80/TCP
    TargetPort:        80/TCP
    Endpoints:         10.244.1.100:80,10.244.2.113:80,10.244.2.114:80
    Session Affinity:  None
    Events:            <none>
    [root@k8s-master manifest]# 
     
    
    # Check how the Service name is resolved
    [root@k8s-master ~]# kubectl get pods -n dev
    NAME                             READY   STATUS    RESTARTS   AGE
    pc-deployment-66d5c85c96-6qj82   1/1     Running   0          59m
    pc-deployment-66d5c85c96-dc7gr   1/1     Running   0          59m
    pc-deployment-66d5c85c96-v8829   1/1     Running   0          59m
    [root@k8s-master ~]# kubectl exec -itn dev pc-deployment-66d5c85c96-6qj82 -- /bin/sh
    # cat /etc/resolv.conf
    search dev.svc.cluster.local svc.cluster.local cluster.local
    nameserver 10.96.0.10
    options ndots:5
    # 
    # The nginx image has no dig command
    
    • Since the image has no dig command, recreate the Pods with an extra busybox container
    [root@k8s-master manifest]# cat deployment.yaml 
    apiVersion: apps/v1
    kind: Deployment      
    metadata:
      name: pc-deployment
      namespace: dev
    spec: 
      replicas: 3
      selector:
        matchLabels:
          app: nginx-pod
      template:
        metadata:
          labels:
            app: nginx-pod
        spec:
          containers:
          - name: b1
            image: busybox:latest
            command: ["/bin/sleep","6000"]
          - name: nginx
            image: nginx:1.17.1
            ports:
            - containerPort: 80
    [root@k8s-master manifest]# 
    [root@k8s-master manifest]# kubectl apply -f deployment.yaml 
    deployment.apps/pc-deployment created
    [root@k8s-master manifest]# kubectl get pods -n dev
    NAME                             READY   STATUS    RESTARTS   AGE
    pc-deployment-854fc846fb-2h9h9   2/2     Running   0          18s
    pc-deployment-854fc846fb-4skbc   2/2     Running   0          18s
    pc-deployment-854fc846fb-6r8rl   2/2     Running   0          18s
    [root@k8s-master manifest]# kubectl get -f deployment.yaml 
    NAME            READY   UP-TO-DATE   AVAILABLE   AGE
    pc-deployment   3/3     3            3           21s
    [root@k8s-master manifest]# 
    
        
    [root@k8s-master manifest]# cat service-headliness.yaml 
    apiVersion: v1
    kind: Service
    metadata:
      name: service-headliness
      namespace: dev
    spec:
      selector:
        app: nginx-pod
      clusterIP: None # setting clusterIP to None creates a headless Service
      type: ClusterIP
      ports:
      - port: 80    
        targetPort: 80
    [root@k8s-master manifest]# kubectl apply -f service-headliness.yaml
    service/service-headliness created
    [root@k8s-master manifest]# 
    
    [root@k8s-master manifest]# kubectl exec -itn dev pc-deployment-854fc846fb-2h9h9 -c b1  -- /bin/sh
    / # cat /etc/resolv.conf
    search dev.svc.cluster.local svc.cluster.local cluster.local
    nameserver 10.96.0.10
    options ndots:5
    / # nslookup service-headliness.dev.svc.cluster.local
    Server:         10.96.0.10
    Address:        10.96.0.10:53
    
    
    *** Can't find service-headliness.dev.svc.cluster.local: No answer
    
    / # 
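
    # (Hedged addition, not from the original) The "No answer" above appears to be a quirk of the
    # nslookup applet in newer busybox releases; a headless Service is still expected to resolve
    # to the individual Pod IPs. An alternative check, assuming the busybox:1.28 image (whose
    # nslookup tends to behave better) can be pulled:
    kubectl run dns-test -n dev --rm -it --restart=Never --image=busybox:1.28 \
      -- nslookup service-headliness.dev.svc.cluster.local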
        
    
    6. NodePort Services
    • In the previous examples, the Service IPs we created can only be accessed from inside the cluster.
    • To expose a Service for use outside the cluster, another Service type is needed: NodePort.
    • NodePort works by mapping the Service's port onto a port of each Node, so the Service can then be accessed via NodeIP:NodePort.


    6.1 Create service-nodeport.yaml
    [root@k8s-master manifest]# cat service-nodeport.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: service-nodeport
      namespace: dev
    spec:
      selector:
        app: nginx-pod
      type: NodePort # Service type
      ports:
      - port: 80
        nodePort: 30002 # the node port to bind (default range: 30000-32767); if omitted, one is allocated automatically
        targetPort: 80
    
    • Run the Service
    # Create the Service
    [root@k8s-master manifest]# kubectl apply -f service-nodeport.yaml
    service/service-nodeport created
    
    
    # Check the Service
    [root@k8s-master manifest]# kubectl get -f service-nodeport.yaml
    NAME               TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
    service-nodeport   NodePort   10.110.181.129   <none>        80:30002/TCP   9s
    [root@k8s-master manifest]# kubectl get -f service-nodeport.yaml -o wide
    NAME               TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE   SELECTOR
    service-nodeport   NodePort   10.110.181.129   <none>        80:30002/TCP   52s   app=nginx-pod
    [root@k8s-master manifest]# 
    
    
    # You can now reach the Pods from a browser on your host machine via port 30002 on any node IP
    
    • Access http://192.168.232.128:30002/

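    • Alternatively, check it from the command line (a hedged example using the same node IP as above); repeated requests are load-balanced across the backend nginx Pods:

    curl http://192.168.232.128:30002/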

    7. LoadBalancer Services
    • LoadBalancer is very similar to NodePort; both aim to expose a port to the outside world.
    • The difference is that LoadBalancer places an additional load-balancing device outside the cluster, which requires support from the external environment; requests sent to this device are balanced by it and then forwarded into the cluster.
    • Typical LBs: lvs, nginx, haproxy.


    • Users reach the load-balancing device through a VIP, the LB device forwards to a node port, and the Service then forwards on to the target Pods (a minimal manifest sketch follows).
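    • This section has no runnable example; a minimal sketch, not from the original post, of what such a manifest could look like, assuming a cloud provider (or an add-on such as MetalLB on bare metal) that can provision the external address:

    apiVersion: v1
    kind: Service
    metadata:
      name: service-loadbalancer   # hypothetical name
      namespace: dev
    spec:
      selector:
        app: nginx-pod
      type: LoadBalancer
      ports:
      - port: 80
        targetPort: 80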
    8. ExternalName Services
    • An ExternalName Service is used to bring a service from outside the cluster into the cluster.
    • It specifies the address of an external service through the externalName attribute; accessing this Service from inside the cluster then reaches the external service.


    8.1 Create service-externalname.yaml
    [root@k8s-master manifest]# cat service-externalname.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: service-externalname
      namespace: dev
    spec:
      type: ExternalName # Service type
      externalName: www.baidu.com  # an IP address also works here
    [root@k8s-master manifest]# 
    
    • Run it
    # Create the Service
    [root@k8s-master manifest]# kubectl apply -f service-externalname.yaml
    service/service-externalname created
    [root@k8s-master manifest]# kubectl get -f service-externalname.yaml -o wide
    NAME                   TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)   AGE   SELECTOR
    service-externalname   ExternalName   <none>       www.baidu.com   <none>    31s   <none>
    
    
    # DNS resolution
    [root@k8s-master01 ~]# dig @10.96.0.10 service-externalname.dev.svc.cluster.local
    service-externalname.dev.svc.cluster.local. 30 IN CNAME www.baidu.com.
    www.baidu.com.          30      IN      CNAME   www.a.shifen.com.
    www.a.shifen.com.       30      IN      A       39.156.66.18
    www.a.shifen.com.       30      IN      A       39.156.66.14
    

    IV. Introduction to Ingress

    1. Service exposes services outside the cluster in two main ways
    • NodePort
    • LoadBalancer
    1.1 Drawbacks of these two approaches:
    • NodePort occupies a port on every machine in the cluster; as the number of services grows, so does the number of ports used, and this drawback becomes more and more obvious (ports eventually run out).
    • LB requires one LB per Service, which is wasteful and cumbersome, and it needs devices outside of Kubernetes.
    1.2 Given this, Kubernetes provides the Ingress resource object; an Ingress needs only one NodePort or one LB to expose many Services
    • Its rough working mechanism is described below:


    • Clients access the Ingress, and the Ingress forwards the requests to the appropriate Service.

    • In effect, an Ingress is a layer-7 load balancer and is Kubernetes' abstraction of a reverse proxy.

    • It works much like Nginx: think of the Ingress as holding many mapping rules; the Ingress Controller listens for these rules, converts them into Nginx reverse-proxy configuration, and then serves external traffic.

    • There are two core concepts here:

      • ingress: a Kubernetes object whose job is to define the rules for how requests are forwarded to Services
      • ingress controller: the program that actually implements the reverse proxying and load balancing; it parses the rules defined by Ingress objects and forwards requests according to them. There are many implementations, e.g. Nginx, Contour, Haproxy, and so on.
    • The working principle of Ingress (taking Nginx as an example) is as follows:

      • The user writes Ingress rules stating which domain name maps to which Service in the cluster
      • The Ingress controller dynamically detects changes to the Ingress rules and generates the corresponding Nginx reverse-proxy configuration
      • The Ingress controller writes the generated Nginx configuration into a running Nginx service and updates it dynamically
      • From that point on, what is really doing the work is just an Nginx instance, internally configured with the user-defined forwarding rules


    • In short: rules are written in the Ingress, the controller learns of them through the Kubernetes API and renders them into the Nginx proxy (a Pod doing the load balancing), and the Nginx proxy forwards through the Service to the backend Pods.
    2. Using Ingress
    2.1 Environment preparation
    2.1.1 Setting up the ingress environment
    # Create a directory
    [root@k8s-master ~]# mkdir ingress-controller
    [root@k8s-master ~]# cd ingress-controller
    
    # Get ingress-nginx; this example uses version v1.3.1
    # Change the image repositories in deploy.yaml:
    # replace quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
    # with dyrnq/ingress-nginx-controller:v1.3.1 
    [root@k8s-master ingress-controller]# grep image deploy.yaml 
            image: dyrnq/ingress-nginx-controller:v1.3.1 
            imagePullPolicy: IfNotPresent
            image: dyrnq/kube-webhook-certgen:v1.3.0 
            imagePullPolicy: IfNotPresent
            image: dyrnq/kube-webhook-certgen:v1.3.0 
            imagePullPolicy: IfNotPresent
    [root@k8s-master ingress-controller]# 
     
     
    
    # Create ingress-nginx
    [root@k8s-master ingress-controller]# kubectl apply -f deploy.yaml
    
    
    
    # Check the ingress-nginx pods
    [root@k8s-master ~]# kubectl get pods -n ingress-nginx
    NAME                                       READY   STATUS      RESTARTS   AGE
    ingress-nginx-admission-create-zrp92       0/1     Completed   0          49s
    ingress-nginx-admission-patch-s2fj5        0/1     Completed   0          49s
    ingress-nginx-controller-5dbc974cb-frg8k   1/1     Running     0          49s
    
    
    # Check the Services
    [root@k8s-master ingress-controller]# kubectl get svc -n ingress-nginx
    NAME                                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
    ingress-nginx-controller             NodePort    10.98.68.101   <none>        80:31313/TCP,443:32641/TCP   44s
    ingress-nginx-controller-admission   ClusterIP   10.99.52.229   <none>        443/TCP                      44s
    [root@k8s-master ingress-controller]# 
    
    • The deploy.yaml file is as follows
    [root@k8s-master ~]# cat ingress-controller/deploy.yaml 
    apiVersion: v1
    kind: Namespace
    metadata:
      labels:
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
      name: ingress-nginx
    ---
    apiVersion: v1
    automountServiceAccountToken: true
    kind: ServiceAccount
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.3.1
      name: ingress-nginx
      namespace: ingress-nginx
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.3.1
      name: ingress-nginx-admission
      namespace: ingress-nginx
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.3.1
      name: ingress-nginx
      namespace: ingress-nginx
    rules:
    - apiGroups:
      - ""
      resources:
      - namespaces
      verbs:
      - get
    - apiGroups:
      - ""
      resources:
      - configmaps
      - pods
      - secrets
      - endpoints
      verbs:
      - get
      - list
      - watch
    - apiGroups:
      - ""
      resources:
      - services
      verbs:
      - get
      - list
      - watch
    - apiGroups:
      - networking.k8s.io
      resources:
      - ingresses
      verbs:
      - get
      - list
      - watch
    - apiGroups:
      - networking.k8s.io
      resources:
      - ingresses/status
      verbs:
      - update
    - apiGroups:
      - networking.k8s.io
      resources:
      - ingressclasses
      verbs:
      - get
      - list
      - watch
    - apiGroups:
      - ""
      resourceNames:
      - ingress-controller-leader
      resources:
      - configmaps
      verbs:
      - get
      - update
    - apiGroups:
      - ""
      resources:
      - configmaps
      verbs:
      - create
    - apiGroups:
      - coordination.k8s.io
      resourceNames:
      - ingress-controller-leader
      resources:
      - leases
      verbs:
      - get
      - update
    - apiGroups:
      - coordination.k8s.io
      resources:
      - leases
      verbs:
      - create
    - apiGroups:
      - ""
      resources:
      - events
      verbs:
      - create
      - patch
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.3.1
      name: ingress-nginx-admission
      namespace: ingress-nginx
    rules:
    - apiGroups:
      - ""
      resources:
      - secrets
      verbs:
      - get
      - create
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      labels:
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.3.1
      name: ingress-nginx
    rules:
    - apiGroups:
      - ""
      resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
      - namespaces
      verbs:
      - list
      - watch
    - apiGroups:
      - coordination.k8s.io
      resources:
      - leases
      verbs:
      - list
      - watch
    - apiGroups:
      - ""
      resources:
      - nodes
      verbs:
      - get
    - apiGroups:
      - ""
      resources:
      - services
      verbs:
      - get
      - list
      - watch
    - apiGroups:
      - networking.k8s.io
      resources:
      - ingresses
      verbs:
      - get
      - list
      - watch
    - apiGroups:
      - ""
      resources:
      - events
      verbs:
      - create
      - patch
    - apiGroups:
      - networking.k8s.io
      resources:
      - ingresses/status
      verbs:
      - update
    - apiGroups:
      - networking.k8s.io
      resources:
      - ingressclasses
      verbs:
      - get
      - list
      - watch
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.3.1
      name: ingress-nginx-admission
    rules:
    - apiGroups:
      - admissionregistration.k8s.io
      resources:
      - validatingwebhookconfigurations
      verbs:
      - get
      - update
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.3.1
      name: ingress-nginx
      namespace: ingress-nginx
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: ingress-nginx
    subjects:
    - kind: ServiceAccount
      name: ingress-nginx
      namespace: ingress-nginx
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.3.1
      name: ingress-nginx-admission
      namespace: ingress-nginx
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: ingress-nginx-admission
    subjects:
    - kind: ServiceAccount
      name: ingress-nginx-admission
      namespace: ingress-nginx
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      labels:
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.3.1
      name: ingress-nginx
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: ingress-nginx
    subjects:
    - kind: ServiceAccount
      name: ingress-nginx
      namespace: ingress-nginx
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.3.1
      name: ingress-nginx-admission
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: ingress-nginx-admission
    subjects:
    - kind: ServiceAccount
      name: ingress-nginx-admission
      namespace: ingress-nginx
    ---
    apiVersion: v1
    data:
      allow-snippet-annotations: "true"
    kind: ConfigMap
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.3.1
      name: ingress-nginx-controller
      namespace: ingress-nginx
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.3.1
      name: ingress-nginx-controller
      namespace: ingress-nginx
    spec:
      externalTrafficPolicy: Local
      ipFamilies:
      - IPv4
      ipFamilyPolicy: SingleStack
      ports:
      - appProtocol: http
        name: http
        port: 80
        protocol: TCP
        targetPort: http
      - appProtocol: https
        name: https
        port: 443
        protocol: TCP
        targetPort: https
      selector:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
      type: NodePort 
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.3.1
      name: ingress-nginx-controller-admission
      namespace: ingress-nginx
    spec:
      ports:
      - appProtocol: https
        name: https-webhook
        port: 443
        targetPort: webhook
      selector:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
      type: ClusterIP
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.3.1
      name: ingress-nginx-controller
      namespace: ingress-nginx
    spec:
      minReadySeconds: 0
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          app.kubernetes.io/component: controller
          app.kubernetes.io/instance: ingress-nginx
          app.kubernetes.io/name: ingress-nginx
      template:
        metadata:
          labels:
            app.kubernetes.io/component: controller
            app.kubernetes.io/instance: ingress-nginx
            app.kubernetes.io/name: ingress-nginx
        spec:
          containers:
          - args:
            - /nginx-ingress-controller
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
            - --election-id=ingress-controller-leader
            - --controller-class=k8s.io/ingress-nginx
            - --ingress-class=nginx
            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
            image: dyrnq/ingress-nginx-controller:v1.3.1 
            imagePullPolicy: IfNotPresent
            lifecycle:
              preStop:
                exec:
                  command:
                  - /wait-shutdown
            livenessProbe:
              failureThreshold: 5
              httpGet:
                path: /healthz
                port: 10254
                scheme: HTTP
              initialDelaySeconds: 10
              periodSeconds: 10
              successThreshold: 1
              timeoutSeconds: 1
            name: controller
            ports:
            - containerPort: 80
              name: http
              protocol: TCP
            - containerPort: 443
              name: https
              protocol: TCP
            - containerPort: 8443
              name: webhook
              protocol: TCP
            readinessProbe:
              failureThreshold: 3
              httpGet:
                path: /healthz
                port: 10254
                scheme: HTTP
              initialDelaySeconds: 10
              periodSeconds: 10
              successThreshold: 1
              timeoutSeconds: 1
            resources:
              requests:
                cpu: 100m
                memory: 90Mi
            securityContext:
              allowPrivilegeEscalation: true
              capabilities:
                add:
                - NET_BIND_SERVICE
                drop:
                - ALL
              runAsUser: 101
            volumeMounts:
            - mountPath: /usr/local/certificates/
              name: webhook-cert
              readOnly: true
          dnsPolicy: ClusterFirst
          nodeSelector:
            kubernetes.io/os: linux
          serviceAccountName: ingress-nginx
          terminationGracePeriodSeconds: 300
          volumes:
          - name: webhook-cert
            secret:
              secretName: ingress-nginx-admission
    ---
    apiVersion: batch/v1
    kind: Job
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.3.1
      name: ingress-nginx-admission-create
      namespace: ingress-nginx
    spec:
      template:
        metadata:
          labels:
            app.kubernetes.io/component: admission-webhook
            app.kubernetes.io/instance: ingress-nginx
            app.kubernetes.io/name: ingress-nginx
            app.kubernetes.io/part-of: ingress-nginx
            app.kubernetes.io/version: 1.3.1
          name: ingress-nginx-admission-create
        spec:
          containers:
          - args:
            - create
            - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
            - --namespace=$(POD_NAMESPACE)
            - --secret-name=ingress-nginx-admission
            env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            image: dyrnq/kube-webhook-certgen:v1.3.0 
            imagePullPolicy: IfNotPresent
            name: create
            securityContext:
              allowPrivilegeEscalation: false
          nodeSelector:
            kubernetes.io/os: linux
          restartPolicy: OnFailure
          securityContext:
            fsGroup: 2000
            runAsNonRoot: true
            runAsUser: 2000
          serviceAccountName: ingress-nginx-admission
    ---
    apiVersion: batch/v1
    kind: Job
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.3.1
      name: ingress-nginx-admission-patch
      namespace: ingress-nginx
    spec:
      template:
        metadata:
          labels:
            app.kubernetes.io/component: admission-webhook
            app.kubernetes.io/instance: ingress-nginx
            app.kubernetes.io/name: ingress-nginx
            app.kubernetes.io/part-of: ingress-nginx
            app.kubernetes.io/version: 1.3.1
          name: ingress-nginx-admission-patch
        spec:
          containers:
          - args:
            - patch
            - --webhook-name=ingress-nginx-admission
            - --namespace=$(POD_NAMESPACE)
            - --patch-mutating=false
            - --secret-name=ingress-nginx-admission
            - --patch-failure-policy=Fail
            env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            image: dyrnq/kube-webhook-certgen:v1.3.0 
            imagePullPolicy: IfNotPresent
            name: patch
            securityContext:
              allowPrivilegeEscalation: false
          nodeSelector:
            kubernetes.io/os: linux
          restartPolicy: OnFailure
          securityContext:
            fsGroup: 2000
            runAsNonRoot: true
            runAsUser: 2000
          serviceAccountName: ingress-nginx-admission
    ---
    apiVersion: networking.k8s.io/v1
    kind: IngressClass
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.3.1
      name: nginx
    spec:
      controller: k8s.io/ingress-nginx
    ---
    apiVersion: admissionregistration.k8s.io/v1
    kind: ValidatingWebhookConfiguration
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.3.1
      name: ingress-nginx-admission
    webhooks:
    - admissionReviewVersions:
      - v1
      clientConfig:
        service:
          name: ingress-nginx-controller-admission
          namespace: ingress-nginx
          path: /networking/v1/ingresses
      failurePolicy: Fail
      matchPolicy: Equivalent
      name: validate.nginx.ingress.kubernetes.io
      rules:
      - apiGroups:
        - networking.k8s.io
        apiVersions:
        - v1
        operations:
        - CREATE
        - UPDATE
        resources:
        - ingresses
      sideEffects: None
    
    2.1.2 Preparing the Services and Pods


    • Create tomcat-nginx.yaml
    [root@k8s-master ingress-controller]# cat tomcat-nginx.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
      namespace: dev
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx-pod
      template:
        metadata:
          labels:
            app: nginx-pod
        spec:
          containers:
          - name: nginx
            image: nginx:1.17.1
            ports:
            - containerPort: 80
    
    ---
    
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: tomcat-deployment
      namespace: dev
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: tomcat-pod
      template:
        metadata:
          labels:
            app: tomcat-pod
        spec:
          containers:
          - name: tomcat
            image: tomcat:8.5-jre10-slim
            ports:
            - containerPort: 8080
    
    ---
    
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-service
      namespace: dev
    spec:
      selector:
        app: nginx-pod
      type: ClusterIP
      ports:
      - port: 80
        targetPort: 80
    
    ---
    
    apiVersion: v1
    kind: Service
    metadata:
      name: tomcat-service
      namespace: dev
    spec:
      selector:
        app: tomcat-pod
      type: ClusterIP
      ports:
      - port: 8080
        targetPort: 8080
    [root@k8s-master ingress-controller]# 
    
    • Run it
    # Create
    [root@k8s-master ingress-controller]# kubectl apply -f tomcat-nginx.yaml 
    deployment.apps/nginx-deployment created
    deployment.apps/tomcat-deployment created
    service/nginx-service created
    service/tomcat-service created
    [root@k8s-master ingress-controller]# 
    
    
    # Check
    [root@k8s-master ingress-controller]# kubectl get svc -n dev
    NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
    nginx-service    ClusterIP   10.100.3.46    <none>        80/TCP     74s
    tomcat-service   ClusterIP   10.100.176.2   <none>        8080/TCP   74s
    [root@k8s-master ingress-controller]# kubectl get -f tomcat-nginx.yaml
    NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/nginx-deployment    3/3     3            3           83s
    deployment.apps/tomcat-deployment   3/3     3            3           83s
    
    NAME                     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
    service/nginx-service    ClusterIP   10.100.3.46    <none>        80/TCP     83s
    service/tomcat-service   ClusterIP   10.100.176.2   <none>        8080/TCP   83s
    [root@k8s-master ingress-controller]# 
    
    3. HTTP proxying
    • Create ingress-http.yaml
    [root@k8s-master ingress-controller]# cat ingress-http.yaml 
    apiVersion: networking.k8s.io/v1 
    kind: Ingress
    metadata:
      name: ingress-http
      namespace: dev
    spec:
      rules:
      - host: nginx.mushuang.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service
                port: 
                  number: 80
      - host: tomcat.mushuang.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: tomcat-service 
                port:
                  number: 8080
    [root@k8s-master ingress-controller]#
    
    • Run it
    # Create
    [root@k8s-master ingress-controller]# kubectl apply -f ingress-http.yaml 
    ingress.networking.k8s.io/ingress-http created
    
    
    # Check
    [root@k8s-master ingress-controller]# kubectl get -f ingress-http.yaml  
    NAME           CLASS    HOSTS                                    ADDRESS   PORTS   AGE
    ingress-http   <none>   nginx.mushuang.com,tomcat.mushuang.com             80      7s
    [root@k8s-master ingress-controller]# kubectl get ing ingress-http -n dev
    NAME           CLASS    HOSTS                                    ADDRESS   PORTS   AGE
    ingress-http   <none>   nginx.mushuang.com,tomcat.mushuang.com             80      2m8s
    [root@k8s-master ingress-controller]# 
    
    
    
    # Describe for details
    [root@k8s-master ingress-controller]# kubectl describe ing ingress-http  -n dev
    Name:             ingress-http
    Labels:           <none>
    Namespace:        dev
    Address:          
    Ingress Class:    <none>
    Default backend:  <default>
    Rules:
      Host                 Path  Backends
      ----                 ----  --------
      nginx.mushuang.com   
                           /   nginx-service:80 (10.244.1.117:80,10.244.2.131:80,10.244.2.133:80)
      tomcat.mushuang.com  
                           /   tomcat-service:8080 (10.244.1.115:8080,10.244.1.116:8080,10.244.2.132:8080)
    Annotations:           <none>
    Events:                <none>
    [root@k8s-master ingress-controller]# 
    
    
    # Next, add entries to the hosts file on your local machine resolving the two domain names above to 192.168.232.128 (the master)
    # Then you can visit tomcat.mushuang.com:32240 and nginx.mushuang.com:32240 to see the effect
    


    4. HTTPS proxying
    • Create a certificate
    # Generate a self-signed certificate
    [root@k8s-master ~]# mkdir crt
    [root@k8s-master ~]# cd crt/
    [root@k8s-master crt]# openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/C=CN/ST=HB/L=WH/O=nginx/CN=mushuang.com"
    Generating a RSA private key
    ...................+++++
    ................................................+++++
    writing new private key to 'tls.key'
    -----
    
    
    # Create the TLS secret
    [root@k8s-master crt]# kubectl create secret tls tls-secret --key tls.key --cert tls.crt
    secret/tls-secret created
    [root@k8s-master crt]# 
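
    • The original post stops after creating the secret. A minimal sketch, not from the original, of a corresponding ingress-https.yaml, assuming the same hosts and Services as the HTTP example and the tls-secret created above (note that the secret must live in the same namespace as the Ingress, so it may need to be created with -n dev; depending on the controller setup, ingressClassName: nginx may also be required):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: ingress-https        # hypothetical name
      namespace: dev
    spec:
      tls:
      - hosts:
        - nginx.mushuang.com
        - tomcat.mushuang.com
        secretName: tls-secret   # the secret created above
      rules:
      - host: nginx.mushuang.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service
                port:
                  number: 80
      - host: tomcat.mushuang.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: tomcat-service
                port:
                  number: 8080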
    
  • Original article: https://blog.csdn.net/mushuangpanny/article/details/126881368