• HAProxy for Layer-7 Load Balancing


    Contents

    HAProxy Overview

    HAProxy Algorithms

    Implementing Layer-7 Load Balancing with HAProxy

    ① Deploy nginx-server test pages

    ② Deploy the load balancers (master/backup)

    ③ Deploy keepalived for high availability

    ④ Add an HAProxy health check

    ⑤ Testing


    HAProxy Overview

    HAProxy is primarily a layer-7 load balancer, but it can also balance at layer 4.
    Apache can also do layer-7 load balancing, but it is cumbersome and rarely used for this in practice.
    The "layer" refers to the OSI model:
    Layer-7 load balancing works on the HTTP protocol.
    Layer-4 load balancing works on TCP plus the port number.
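    In HAProxy the difference comes down to the mode keyword. As a minimal sketch (the listen names, ports, and the 192.168.134.170 address are made up for illustration), the two modes look like this:

    ```
    # Layer-4: HAProxy forwards raw TCP and never parses the payload
    listen mysql-l4
        mode tcp
        bind *:3306
        server db1 192.168.134.170:3306 check

    # Layer-7: HAProxy parses HTTP, so ACLs on URLs and headers work
    listen web-l7
        mode http
        bind *:8080
        server web1 192.168.134.163:80 check
    ```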

    HAProxy Algorithms


    1. roundrobin
    Weighted round-robin. When the servers' processing times stay evenly distributed, this is the most balanced and fairest algorithm. It is dynamic, meaning server weights can be adjusted at runtime; by design it is limited to 4095 active servers per backend.
    2. static-rr
    Weighted round-robin like roundrobin, but static: changing a server's weight at runtime has no effect. It has no design limit on the number of backend servers.
    3. leastconn
    New connection requests are dispatched to the backend server with the fewest active connections.
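    To make the roundrobin weighting concrete, here is a hypothetical stand-alone sketch (no HAProxy involved) of how a 2:1 weight split would distribute six requests across two servers named http1 and http2:

    ```shell
    # Simulate weighted round-robin with weights http1=2, http2=1:
    # out of every cycle of 3 requests, 2 go to http1 and 1 to http2.
    i=0
    while [ "$i" -lt 6 ]; do
      case $((i % 3)) in
        0|1) echo "request $i -> http1" ;;
        2)   echo "request $i -> http2" ;;
      esac
      i=$((i + 1))
    done
    ```

    Real HAProxy interleaves the sequence more smoothly rather than sending bursts, but the per-cycle ratio is the same.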

    Implementing Layer-7 Load Balancing with HAProxy

    keepalived + haproxy

    192.168.134.165  master
    192.168.134.166  slave
    192.168.134.163  nginx-server
    192.168.134.164  nginx-server
    192.168.134.160  VIP (virtual IP)

    ① Deploy nginx-server test pages

    Deploy nginx on both back-end servers, with distinct index pages so the balancing is easy to verify:

    [root@server03 ~]# yum -y install nginx
    [root@server03 ~]# systemctl start nginx
    [root@server03 ~]# echo "webserver01..." > /usr/share/nginx/html/index.html
    [root@server04 ~]# yum -y install nginx
    [root@server04 ~]# systemctl start nginx
    [root@server04 ~]# echo "webserver02..." > /usr/share/nginx/html/index.html
    ② Deploy the load balancers (master/backup)

    Do this on both the master and the slave; only the master is shown.

    [root@server01 ~]# yum -y install haproxy
    [root@server01 ~]# vim /etc/haproxy/haproxy.cfg
    global
        log 127.0.0.1 local2 info
        pidfile /var/run/haproxy.pid
        maxconn 4000
        user haproxy
        group haproxy
        daemon
        nbproc 1
    defaults
        mode http
        log global
        retries 3
        option redispatch
        maxconn 4000
        contimeout 5000        # on HAProxy >= 1.6 use: timeout connect 5000
        clitimeout 50000       # on HAProxy >= 1.6 use: timeout client 50000
        srvtimeout 50000       # on HAProxy >= 1.6 use: timeout server 50000
    listen stats
        bind *:81
        stats enable
        stats uri /haproxy
        stats auth aren:123
    frontend web
        mode http
        bind *:80
        option httplog
        acl html url_reg -i \.html$
        use_backend httpservers if html
        default_backend httpservers
    backend httpservers
        balance roundrobin
        server http1 192.168.134.163:80 maxconn 2000 weight 1 check inter 1s rise 2 fall 2
        server http2 192.168.134.164:80 maxconn 2000 weight 1 check inter 1s rise 2 fall 2
    [root@server01 ~]# systemctl start haproxy
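    The acl line above matches any URL ending in .html and routes it to the httpservers backend; since default_backend points at the same pool, the ACL is effectively illustrative here. As a hypothetical sketch, the same mechanism splits traffic by URL once a second pool exists (the staticservers name and 192.168.134.170 address below are made up):

    ```
    frontend web
        bind *:80
        acl html url_reg -i \.html$
        use_backend staticservers if html    # .html requests go here
        default_backend httpservers          # everything else

    backend staticservers
        server static1 192.168.134.170:80 check
    ```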

    View the HAProxy stats page in a browser

    master:

    slave:

    Key fields on the stats page:
    Queue
    Cur:   current queued requests
    Max:   maximum queued requests
    Limit: queue limit

    Errors
    Req:  request errors
    Conn: connection errors

    Server columns:
    Status:  up (backend server alive) or down (backend server failed)
    LastChk: result and time of the most recent health check on the backend server
    Wght:    (weight) server weight

     ③ Deploy keepalived for high availability

    Note: the master and slave use different priorities but the same virtual_router_id; the slave is additionally configured with nopreempt (it does not preempt resources).

    master:

    [root@server01 ~]# yum -y install keepalived
    [root@server01 ~]# vim /etc/keepalived/keepalived.conf
    ! Configuration File for keepalived
    global_defs {
        router_id director1
    }
    vrrp_instance VI_1 {
        state MASTER
        interface ens33
        virtual_router_id 80
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
            192.168.134.160/24
        }
    }
    [root@server01 ~]# systemctl start keepalived

    slave:

    [root@localhost ~]# yum -y install keepalived
    [root@localhost ~]# vim /etc/keepalived/keepalived.conf
    ! Configuration File for keepalived
    global_defs {
        router_id directory2
    }
    vrrp_instance VI_1 {
        state BACKUP
        interface ens33
        nopreempt
        virtual_router_id 80
        priority 50
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
            192.168.134.160/24
        }
    }
    [root@localhost ~]# systemctl start keepalived

    Check the IPs on each node with ip a; the VIP 192.168.134.160 should appear on the master's ens33 interface.

     ④ Add an HAProxy health check

    Do this on both machines. Keepalived runs an external script at a fixed interval; if HAProxy has failed, the script stops keepalived on the local machine so the VIP fails over.

    [root@server01 ~]# vim /etc/keepalived/check.sh
    #!/bin/bash
    # Probe HAProxy's HTTP frontend on this host
    /usr/bin/curl -I http://localhost &>/dev/null
    if [ $? -ne 0 ];then
        # /etc/init.d/keepalived stop   # SysV-init alternative
        systemctl stop keepalived
    fi
    [root@server01 ~]# chmod a+x /etc/keepalived/check.sh
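    The script hinges on curl's exit status: non-zero means the probe failed. A minimal stand-alone sketch of the same pattern, with a hypothetical probe function standing in for the curl call so the failure branch can be exercised anywhere:

    ```shell
    # `probe` stands in for: /usr/bin/curl -I http://localhost
    # (hypothetical; `false` forces the failure path for demonstration)
    probe() { false; }

    if ! probe; then
      # in check.sh this is where `systemctl stop keepalived` runs
      echo "probe failed: keepalived would be stopped"
    fi
    ```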

    Add a vrrp_script block named check_haproxy to the keepalived configuration and invoke it with track_script:

    ! Configuration File for keepalived
    global_defs {
        router_id director1
    }
    vrrp_script check_haproxy {
        script "/etc/keepalived/check.sh"
        interval 5
    }
    vrrp_instance VI_1 {
        state MASTER
        interface ens33
        virtual_router_id 80
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
            192.168.134.160/24
        }
        track_script {
            check_haproxy
        }
    }

    Restart keepalived:

    [root@server01 ~]# systemctl restart keepalived
    
    ⑤ Testing

    Stop HAProxy on the master: the health-check script then stops keepalived on the master as well, and the VIP moves to the slave.

    • Stop the services on the master and check its IPs.

    • Check the slave's IPs: the VIP has moved there.

    • In a browser, confirm the service still works through the VIP.

    First refresh

    Second refresh

  • Original article: https://blog.csdn.net/l1727377/article/details/134345122