• kubernetes(3)


    Contents

    Service microservices

    ipvs mode

    clusterip

    headless

    nodeport

    loadbalancer

    metallb

    nodeport default port range

    externalname

    ingress-nginx

    Deployment

    Path-based access

    Domain-based access

    TLS encryption

    auth authentication

    rewrite redirection

    Canary release

    Header-based canary

    Weight-based canary

    Business domain splitting

    flannel network plugin

    calico network plugin

    Deployment

    Network policies

    Restricting pod traffic

    Restricting namespace traffic

    Restricting both namespace and pod

    Restricting traffic from outside the cluster


    Service microservices

    Create a test example

    [root@k8s2 service]# vim myapp.yml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: myapp
      name: myapp
    spec:
      replicas: 6
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
          - image: myapp:v1
            name: myapp
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app: myapp
      name: myapp
    spec:
      ports:
      - port: 80
        protocol: TCP
        targetPort: 80
      selector:
        app: myapp
      type: ClusterIP    # ClusterIP is one type of Kubernetes Service. It gives other Pods in the same cluster an IP address for reaching the Service. This IP and the Service are virtual; they are not exposed externally and can only be used inside the cluster.

    The ClusterIP Service type is scheduled with iptables by default. iptables maps the Service's ClusterIP to the IPs and ports of the backend Pods, providing load balancing and high availability for requests.
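    After applying the manifest you can check the Service and, while kube-proxy is still in the default iptables mode, the NAT rules it generates (a quick sketch; chains such as KUBE-SERVICES are created by kube-proxy):

    kubectl apply -f myapp.yml
    kubectl get svc myapp                              # note the CLUSTER-IP column
    kubectl get endpoints myapp                        # backend Pod IP:port pairs
    iptables -t nat -nL KUBE-SERVICES | grep myapp     # run on a node: the ClusterIP entry for default/myapp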

    ipvs mode

    Modify the kube-proxy configuration

    [root@k8s2 pod]# kubectl -n kube-system edit cm kube-proxy
    ...
    mode: "ipvs"    # IPVS is a high-performance load-balancing technology; compared with iptables it offers better performance and richer load-balancing algorithms

    Restart the kube-proxy pods

    [root@k8s2 pod]# kubectl -n kube-system get pod|grep kube-proxy | awk '{system("kubectl -n kube-system delete pod "$1"")}'
    

    After switching to ipvs mode, kube-proxy adds a virtual interface named kube-ipvs0 on each host and assigns the Service IPs to it
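    To confirm the switch (assuming ipvsadm is installed on the node):

    ip addr show kube-ipvs0    # the Service ClusterIPs are bound to this dummy interface
    ipvsadm -Ln                # lists each virtual server and its real-server (Pod) backends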

    clusterip

    ClusterIP mode is only reachable from inside the cluster

    [root@k8s2 service]# vim myapp.yml
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app: myapp
      name: myapp
    spec:
      ports:
      - port: 80
        protocol: TCP
        targetPort: 80
      selector:
        app: myapp
      type: ClusterIP

    Once the Service is created, the cluster DNS provides name resolution for it
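    A minimal check from inside the cluster (sketch; assumes a busybox image is available and the default cluster.local domain):

    kubectl run dns-test --rm -it --image=busybox -- nslookup myapp.default.svc.cluster.local
    # returns the Service's ClusterIP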

    headless

    [root@k8s2 service]# vim myapp.yml
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app: myapp
      name: myapp
    spec:
      ports:
      - port: 80
        protocol: TCP
        targetPort: 80
      selector:
        app: myapp
      type: ClusterIP
      clusterIP: None    # when clusterIP is set to None, Kubernetes does not allocate a ClusterIP for the Service, and DNS queries return the real Pod IPs instead of a virtual Service IP

    [root@k8s2 service]# kubectl delete svc myapp
    [root@k8s2 service]# kubectl apply -f myapp.yml

    Headless mode does not allocate a virtual IP

    A headless Service is accessed by its service name, which the in-cluster DNS resolves

    Inside the cluster, use the service name directly

    nodeport

    [root@k8s2 service]# vim myapp.yml
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app: myapp
      name: myapp
    spec:
      ports:
      - port: 80
        protocol: TCP
        targetPort: 80
      selector:
        app: myapp
      type: NodePort    # NodePort listens on a static port on every node and forwards requests to the backend Pods via the ClusterIP. The Service can be reached at <NodeIP>:<NodePort>, which lets external users access an in-cluster service through a node's IP and port.

    [root@k8s2 service]# kubectl apply -f myapp.yml
    [root@k8s2 service]# kubectl get svc

    NodePort binds a port on every cluster node; each port corresponds to one service
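    A quick test from outside the cluster (node IP and port are placeholders; read the actual NodePort from the kubectl get svc output, e.g. 80:3xxxx/TCP):

    curl http://<node-ip>:<node-port>/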

    loadbalancer

    [root@k8s2 service]# vim myapp.yml
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app: myapp
      name: myapp
    spec:
      ports:
      - port: 80
        protocol: TCP
        targetPort: 80
      selector:
        app: myapp
      type: LoadBalancer
    # A LoadBalancer Service exposes a load balancer outside the cluster so external users can easily reach the in-cluster service.
    # A LoadBalancer Service only works when the cluster runs on infrastructure from a supported cloud provider; otherwise it cannot obtain an external address.

    LoadBalancer mode is meant for cloud platforms; in a bare-metal environment you need to install MetalLB to provide the same support

    [root@k8s2 service]# kubectl edit configmap -n kube-system kube-proxy
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    mode: "ipvs"
    ipvs:
      strictARP: true    # strictARP stops nodes from answering ARP for Service IPs they do not own, which hardens the network and is required by MetalLB's layer-2 mode when kube-proxy runs in ipvs mode
    [root@k8s2 service]# kubectl -n kube-system get pod|grep kube-proxy | awk '{system("kubectl -n kube-system delete pod "$1"")}'

     

    Download the deployment manifest

    [root@k8s2 metallb]# wget https://raw.githubusercontent.com/metallb/metallb/v0.13.11/config/manifests/metallb-native.yaml
    

    Modify the image addresses in the file so they match the path in your Harbor registry

    Push the images to Harbor

    Deploy the service

    [root@k8s2 metallb]# kubectl apply -f metallb-native.yaml
    [root@k8s2 metallb]# kubectl -n metallb-system get pod

    Configure the address pool

    [root@k8s2 metallb]# vim config.yaml
    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: first-pool
      namespace: metallb-system
    spec:
      addresses:
      - 192.168.81.100-192.168.81.200    # change to an address range in your own local network
    ---
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: example
      namespace: metallb-system
    spec:
      ipAddressPools:
      - first-pool

    Access the service from outside the cluster through the allocated address
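    For example (the exact address depends on your pool):

    kubectl apply -f config.yaml
    kubectl get svc myapp          # EXTERNAL-IP now shows an address such as 192.168.81.100
    curl http://192.168.81.100/    # reachable from any host on the same layer-2 network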

    nodeport default port range

    [root@k8s2 service]# vim myapp.yml
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app: myapp
      name: myapp
    spec:
      ports:
      - port: 80
        protocol: TCP
        targetPort: 80
        nodePort: 33333
      selector:
        app: myapp
      type: NodePort

    The default NodePort range is 30000-32767; a port outside it is rejected with an error

    Add the following parameter to customize the port range

    [root@k8s2 service]# vim /etc/kubernetes/manifests/kube-apiserver.yaml
    - --service-node-port-range=30000-50000

    After the change, the kube-apiserver restarts automatically; wait until it is running again before operating on the cluster
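    Once the apiserver is back, the manifest with nodePort: 33333 (now inside the allowed range) should apply cleanly:

    kubectl -n kube-system get pod | grep kube-apiserver   # wait until Running
    kubectl apply -f myapp.yml
    kubectl get svc myapp                                   # PORT(S) shows 80:33333/TCP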

    externalname

    [root@k8s2 service]# vim externalname.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
    spec:
      type: ExternalName    # an ExternalName Service creates no ClusterIP, NodePort or LoadBalancer endpoints and runs no Pods; it simply maps the Service name to an external DNS name (or IP address)
      externalName: www.westos.org    # when an application inside the cluster looks up this Service, the cluster DNS returns the associated DNS name www.westos.org
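    A sketch of how to verify the mapping from inside the cluster (busybox image and default namespace assumed):

    kubectl apply -f externalname.yaml
    kubectl run dns-test --rm -it --image=busybox -- nslookup my-service.default.svc.cluster.local
    # the answer is a CNAME pointing to www.westos.org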

    ingress-nginx

    ingress-nginx is an Ingress controller for Kubernetes. It provides load balancing, SSL/TLS termination and related features, and exposes services inside the cluster to external clients.

    Deployment

    Official docs: https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal-clusters

    Download the deployment manifest and push the images to Harbor

    Modify the three image paths

    Change the controller Service to type LoadBalancer

    [root@k8s2 ingress]# kubectl -n ingress-nginx edit  svc ingress-nginx-controller
    

    Create an Ingress rule

    [root@k8s2 ingress]# vim ingress.yml
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: minimal-ingress
    spec:
      ingressClassName: nginx
      rules:
      - http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
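    Apply it and test through the controller's external address (the address is whatever MetalLB assigned to ingress-nginx-controller):

    kubectl apply -f ingress.yml
    kubectl -n ingress-nginx get svc ingress-nginx-controller   # note the EXTERNAL-IP
    curl http://<external-ip>/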

    Path-based access

    Create the Services

    [root@k8s2 ingress]# vim myapp-v1.yml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: myapp-v1
      name: myapp-v1
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: myapp-v1
      template:
        metadata:
          labels:
            app: myapp-v1
        spec:
          containers:
          - image: myapp:v1
            name: myapp-v1
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app: myapp-v1
      name: myapp-v1
    spec:
      ports:
      - port: 80
        protocol: TCP
        targetPort: 80
      selector:
        app: myapp-v1
      type: ClusterIP

    [root@k8s2 ingress]# vim myapp-v2.yml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: myapp-v2
      name: myapp-v2
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: myapp-v2
      template:
        metadata:
          labels:
            app: myapp-v2
        spec:
          containers:
          - image: myapp:v2
            name: myapp-v2
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app: myapp-v2
      name: myapp-v2
    spec:
      ports:
      - port: 80
        protocol: TCP
        targetPort: 80
      selector:
        app: myapp-v2
      type: ClusterIP

    Create the Ingress

    [root@k8s2 ingress]# vim ingress1.yml
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: minimal-ingress
      annotations:
        nginx.ingress.kubernetes.io/rewrite-target: /
    spec:
      ingressClassName: nginx
      rules:
      - host: myapp.westos.org
        http:
          paths:
          - path: /v1
            pathType: Prefix
            backend:
              service:
                name: myapp-v1
                port:
                  number: 80
          - path: /v2
            pathType: Prefix
            backend:
              service:
                name: myapp-v2
                port:
                  number: 80

    Test
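    A sketch, assuming myapp.westos.org resolves (e.g. via /etc/hosts) to the controller's external IP:

    curl http://myapp.westos.org/v1    # served by the myapp-v1 Pods
    curl http://myapp.westos.org/v2    # served by the myapp-v2 Pods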

    Domain-based access

    [root@k8s2 ingress]# vim ingress2.yml
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: minimal-ingress
    spec:
      ingressClassName: nginx
      rules:
      - host: myapp1.westos.org
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-v1
                port:
                  number: 80
      - host: myapp2.westos.org
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-v2
                port:
                  number: 80
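    Test (assuming both hostnames point to the controller's external IP):

    curl http://myapp1.westos.org/    # answered by myapp-v1
    curl http://myapp2.westos.org/    # answered by myapp-v2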

    TLS encryption

    Create a certificate

    [root@k8s2 ingress]# openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"
    [root@k8s2 ingress]# kubectl create secret tls tls-secret --key tls.key --cert tls.crt
    [root@k8s2 ingress]# vim ingress3.yml
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: ingress-tls
    spec:
      tls:
      - hosts:
        - myapp.westos.org
        secretName: tls-secret
      ingressClassName: nginx
      rules:
      - host: myapp.westos.org
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-v1
                port:
                  number: 80
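    After kubectl apply -f ingress3.yml, plain HTTP is redirected to HTTPS (ingress-nginx enables ssl-redirect by default once TLS is configured), and the self-signed certificate needs -k:

    curl -I http://myapp.westos.org/     # permanent redirect to https
    curl -k https://myapp.westos.org/    # served by myapp-v1 over TLS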

    auth authentication

    Create the auth file

    [root@k8s2 ingress]# yum install -y httpd-tools
    [root@k8s2 ingress]# htpasswd -c auth shx
    [root@k8s2 ingress]# kubectl create secret generic basic-auth --from-file=auth

    [root@k8s2 ingress]# vim ingress3.yml
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: ingress-tls
      annotations:
        nginx.ingress.kubernetes.io/auth-type: basic
        nginx.ingress.kubernetes.io/auth-secret: basic-auth
        nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - shx'
    spec:
      tls:
      - hosts:
        - myapp.westos.org
        secretName: tls-secret
      ingressClassName: nginx
      rules:
      - host: myapp.westos.org
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-v1
                port:
                  number: 80
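    Requests without credentials now get 401; passing the user created with htpasswd succeeds (replace <password> with whatever you set):

    curl -k https://myapp.westos.org/                      # 401 Authorization Required
    curl -k -u shx:<password> https://myapp.westos.org/    # 200, served by myapp-v1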

    rewrite redirection

    Example 1:

    [root@k8s2 ingress]# vim ingress3.yml
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: ingress-tls
      annotations:
        nginx.ingress.kubernetes.io/auth-type: basic
        nginx.ingress.kubernetes.io/auth-secret: basic-auth
        nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - shx'
        nginx.ingress.kubernetes.io/app-root: /hostname.html
    spec:
      tls:
      - hosts:
        - myapp.westos.org
        secretName: tls-secret
      ingressClassName: nginx
      rules:
      - host: myapp.westos.org
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-v1
                port:
                  number: 80
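    With app-root set, a request for / should be answered with a redirect to /hostname.html (sketch, same host and credentials as above):

    curl -k -u shx:<password> -I https://myapp.westos.org/
    # 302, Location: https://myapp.westos.org/hostname.html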

    Example 2:

    [root@k8s2 ingress]# vim ingress3.yml
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: ingress-tls
      annotations:
        nginx.ingress.kubernetes.io/auth-type: basic
        nginx.ingress.kubernetes.io/auth-secret: basic-auth
        nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - shx'
        #nginx.ingress.kubernetes.io/app-root: /hostname.html
        nginx.ingress.kubernetes.io/use-regex: "true"
        nginx.ingress.kubernetes.io/rewrite-target: /$2
    spec:
      tls:
      - hosts:
        - myapp.westos.org
        secretName: tls-secret
      ingressClassName: nginx
      rules:
      - host: myapp.westos.org
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-v1
                port:
                  number: 80
          - path: /westos(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: myapp-v1
                port:
                  number: 80
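    With the regex rule, anything under /westos/ is rewritten to the captured remainder before it reaches the backend, for example:

    curl -k -u shx:<password> https://myapp.westos.org/westos/hostname.html
    # the backend receives /hostname.html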

    Canary release

    Header-based canary
    [root@k8s2 ingress]# vim ingress4.yml
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: myapp-v1-ingress
    spec:
      ingressClassName: nginx
      rules:
      - host: myapp.westos.org
        http:
          paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: myapp-v1
                port:
                  number: 80

    [root@k8s2 ingress]# vim ingress5.yml
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      annotations:
        nginx.ingress.kubernetes.io/canary: "true"
        nginx.ingress.kubernetes.io/canary-by-header: stage
        nginx.ingress.kubernetes.io/canary-by-header-value: gray
      name: myapp-v2-ingress
    spec:
      ingressClassName: nginx
      rules:
      - host: myapp.westos.org
        http:
          paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: myapp-v2
                port:
                  number: 80

    Test
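    Without the header, traffic goes to myapp-v1; with stage: gray it is routed to the canary backend myapp-v2 (assuming myapp.westos.org resolves to the controller):

    curl http://myapp.westos.org/                     # v1
    curl -H "stage: gray" http://myapp.westos.org/    # v2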

    Weight-based canary
    [root@k8s2 ingress]# vim ingress5.yml
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      annotations:
        nginx.ingress.kubernetes.io/canary: "true"
        #nginx.ingress.kubernetes.io/canary-by-header: stage
        #nginx.ingress.kubernetes.io/canary-by-header-value: gray
        nginx.ingress.kubernetes.io/canary-weight: "50"
        nginx.ingress.kubernetes.io/canary-weight-total: "100"
      name: myapp-v2-ingress
    spec:
      ingressClassName: nginx
      rules:
      - host: myapp.westos.org
        http:
          paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: myapp-v2
                port:
                  number: 80

    Test

    [root@k8s1 ~]# vim ingress.sh
    #!/bin/bash
    v1=0
    v2=0
    for (( i=0; i<100; i++ ))
    do
        response=`curl -s myapp.westos.org | grep -c v1`
        v1=`expr $v1 + $response`
        v2=`expr $v2 + 1 - $response`
    done
    echo "v1:$v1, v2:$v2"
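    With canary-weight set to 50, running the script should print a roughly even split between v1 and v2:

    bash ingress.sh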

    Business domain splitting
    [root@k8s2 ingress]# vim ingress6.yml
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      annotations:
        nginx.ingress.kubernetes.io/rewrite-target: /$1
      name: rewrite-ingress
    spec:
      ingressClassName: nginx
      rules:
      - host: myapp.westos.org
        http:
          paths:
          - path: /user/(.*)
            pathType: Prefix
            backend:
              service:
                name: myapp-v1
                port:
                  number: 80
          - path: /order/(.*)
            pathType: Prefix
            backend:
              service:
                name: myapp-v2
                port:
                  number: 80

    Test
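    A sketch of the test (host resolution assumed as before):

    curl http://myapp.westos.org/user/hostname.html     # rewritten to /hostname.html, served by myapp-v1
    curl http://myapp.westos.org/order/hostname.html    # rewritten to /hostname.html, served by myapp-v2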

    flannel network plugin

    Use host-gw mode

    [root@k8s2 ~]# kubectl -n kube-flannel edit  cm kube-flannel-cfg
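    Inside the ConfigMap, the change is in the Backend section of net-conf.json (a sketch; the Network CIDR is whatever your cluster already uses):

    net-conf.json: |
      {
        "Network": "10.244.0.0/16",
        "Backend": {
          "Type": "host-gw"
        }
      }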
    

    Restart the pods for the change to take effect

    [root@k8s2 ~]# kubectl -n kube-flannel delete  pod --all
    

    calico network plugin

    Deployment

    Delete the flannel plugin and remove the flannel configuration file on all nodes to avoid conflicts

    [root@k8s2 ~]# kubectl delete -f kube-flannel.yml
    [root@k8s2 ~]# rm -f /etc/cni/net.d/10-flannel.conflist
    [root@k8s3 ~]# rm -f /etc/cni/net.d/10-flannel.conflist
    [root@k8s4 ~]# rm -f /etc/cni/net.d/10-flannel.conflist

    Download the deployment manifest, modify the image paths, and push the images

    [root@k8s2 calico]# kubectl apply -f calico.yaml
    

    Reboot all cluster nodes so that Pods are re-assigned IPs

    After the cluster comes back up, test the network
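    A minimal connectivity check (pod name and image are placeholders):

    kubectl -n kube-system get pod | grep calico    # calico-node and calico-kube-controllers should be Running
    kubectl get pod -o wide                         # Pod IPs now come from the calico IP pool
    kubectl run net-test --rm -it --image=busybox -- ping -c2 <another-pod-ip>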

    Network policies

    Restricting pod traffic
    [root@k8s2 calico]# vim networkpolicy.yaml
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: test-network-policy
      namespace: default
    spec:
      podSelector:
        matchLabels:
          app: myapp-v1
      policyTypes:
      - Ingress
      ingress:
      - from:
        - podSelector:
            matchLabels:
              role: test
        ports:
        - protocol: TCP
          port: 80

    The policy applies to Pods carrying the label app=myapp-v1

    At this point the svc cannot be reached

    After the test pod is given the required label, access works
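    For example (the demo pod and the busyboxplus image are assumptions; any client pod with curl works):

    kubectl run demo --image=busyboxplus -it -- /bin/sh
    / # curl -s --connect-timeout 3 myapp-v1      # blocked by the policy, times out
    kubectl label pod demo role=test              # run from another terminal
    / # curl -s myapp-v1                          # now allowed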

    Restricting namespace traffic
    [root@k8s2 calico]# vim networkpolicy.yaml
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: test-network-policy
      namespace: default
    spec:
      podSelector:
        matchLabels:
          app: myapp
      policyTypes:
      - Ingress
      ingress:
      - from:
        - namespaceSelector:
            matchLabels:
              project: test
        - podSelector:
            matchLabels:
              role: test
        ports:
        - protocol: TCP
          port: 80

    [root@k8s2 ~]# kubectl create namespace test
    

    Add the required label to the namespace

    [root@k8s2 calico]# kubectl label ns test project=test
    

    Restricting both namespace and pod
    [root@k8s2 calico]# vim networkpolicy.yaml
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: test-network-policy
      namespace: default
    spec:
      podSelector:
        matchLabels:
          app: myapp
      policyTypes:
      - Ingress
      ingress:
      - from:
        - namespaceSelector:
            matchLabels:
              project: test
          podSelector:
            matchLabels:
              role: test
        ports:
        - protocol: TCP
          port: 80

    Pods in the test namespace can only access the service after they are given the required label

    [root@k8s2 calico]# kubectl -n test label pod demo role=test
    

    Restricting traffic from outside the cluster
    [root@k8s2 calico]# vim networkpolicy.yaml
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: test-network-policy
      namespace: default
    spec:
      podSelector:
        matchLabels:
          app: myapp
      policyTypes:
      - Ingress
      ingress:
      - from:
        - ipBlock:
            cidr: 192.168.56.0/24
        - namespaceSelector:
            matchLabels:
              project: myproject
          podSelector:
            matchLabels:
              role: frontend
        ports:
        - protocol: TCP
          port: 80

  • Source: https://blog.csdn.net/m0_64028800/article/details/134059820