• Big Data Installation and Deployment


    Contents

    I. Basic Environment Setup

    1. Install VMware

    2. Install CentOS

    3. Configure the network and disable the firewall

    4. Map hostnames to IP addresses

    5. Install and configure the JDK

    6. Clone the virtual machines

    7. Configure the network on the clones

    II. Install Hadoop

    1. Install Hadoop and configure environment variables

    2. Passwordless SSH login

    3. Write a cluster sync script

    4. Configure the HDFS cluster

    5. Configure the YARN cluster

    6. Start the cluster and test it

    7. Configure local hostname mapping

    8. Configure a local yum repository

    9. Configure Hadoop history logs

    9.1 Configure the history server

    9.2 Configure log aggregation

    III. Install Hive

    1. Install MySQL

    2. Install Hive

    3. Configure the MySQL metastore

    IV. Configure time synchronization

    1. Configure the NTP time server

    2. Start the service

    V. Install ZooKeeper

    1. Install

    2. Configuration files

    3. Sync the files to the other machines

    4. Start and stop the cluster

    5. Check cluster status

    VI. Install HBase

    1. Install

    2. Configure

    3. Sync to the other machines

    4. Start the HBase service

    5. Open the web UI on port 16010

    VII. Install Spark

    1. Install

    2. Configure

    3. Sync to the other machines

    4. Start

    5. Configure the JobHistoryServer

    VIII. Install Flume

    1. Install

    2. Configure


    I. Basic Environment Setup

    1. Install VMware

    This guide uses VMware 10, which can be downloaded from the official site.

    ISO image: CentOS-6.8-x86_64-bin-DVD1.iso, downloaded from the official site

    2. Install CentOS

    2.1 Create a new virtual machine

    2. Choose Custom, then Next

    3. Keep the defaults, click Next

    4. Choose "I will install the operating system later", Next

    5. Choose Linux, CentOS 64-bit, Next

    6. Name the virtual machine hadoop01 and put it on the D: drive; avoid C: because the VM will take up a lot of disk space later. Next

    7. Click Next

    8. 2 GB of memory is recommended, Next

    9. Choose host-only networking, Next

    10. Accept the recommended defaults, Next

    11. Choose "Create a new virtual disk", Next

    12. Set the size to 20 GB and split the virtual disk into multiple files, Next

    13. Keep the defaults, click Next

    14. Keep the defaults, click Finish

    15. Edit the virtual machine settings

    16. Select the ISO image file CentOS-6.8-x86_64-bin-DVD1.iso (downloaded from the official site)

    17. Choose "Install or upgrade an existing system", press Enter

    18. Choose Skip, press Enter

    19. Click Next; keep the default language, Next; keep the default keyboard, Next

    20. Choose Basic Storage Devices, Next

    21. Choose "Yes, discard any data"

    22. Keep the default hostname, Next

    23. Keep the default time zone, Next

    24. Set the root password, Next

    25. Choose Use Anyway

    26. Choose Use All Space, Next

    27. Choose "Write changes to disk"

    28. Keep the default Desktop install, Next

    29. Wait for the installation to finish (this can take a while)

    30. Choose Reboot

    31. Click Forward, then Forward again

    32. Choose Yes

    33. Choose Finish, then Yes

    3. Configure the network and disable the firewall

    1. Enter the username and password and choose Log In

    2. Open a terminal (Open in Terminal)

    3. Configure a static IP for the virtual machine

    [root@hadoop01 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
    DEVICE=eth0
    HWADDR=00:0C:29:1C:AE:A7
    TYPE=Ethernet
    UUID=a01f4ce4-e877-4696-aa24-caeef5395b9f
    ONBOOT=yes #change to yes
    NM_CONTROLLED=yes
    BOOTPROTO=static #change to static
    IPADDR=192.168.86.101 #IP on the host-only subnet; the .101 host part is your own choice
    NETMASK=255.255.255.0 #subnet mask
    GATEWAY=192.168.86.1 #gateway of the subnet

    4. Restart the network after the changes

    [root@hadoop01 ~]# service network restart

    5. Check the IP address

    [root@hadoop01 ~]# ifconfig
    eth0 Link encap:Ethernet HWaddr 00:0C:29:1C:AE:A7
    inet addr:192.168.86.101 Bcast:192.168.86.255 Mask:255.255.255.0
    inet6 addr: fe80::20c:29ff:fe1c:aea7/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:2093404 errors:0 dropped:0 overruns:0 frame:0
    TX packets:2229452 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:1148990236 (1.0 GiB) TX bytes:2157139065 (2.0 GiB)
    lo Link encap:Local Loopback
    inet addr:127.0.0.1 Mask:255.0.0.0
    inet6 addr: ::1/128 Scope:Host
    UP LOOPBACK RUNNING MTU:65536 Metric:1
    RX packets:660419 errors:0 dropped:0 overruns:0 frame:0
    TX packets:660419 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:92983459 (88.6 MiB) TX bytes:92983459 (88.6 MiB)

    6. Check the firewall status

    [root@hadoop01 ~]# service iptables status

    7. Stop the firewall (temporary; it will come back after a reboot)

    [root@hadoop01 ~]# service iptables stop

    8. Check the firewall status again

    [root@hadoop01 ~]# service iptables status
    iptables: Firewall is not running. #this output means the firewall is stopped

    9. Permanently disable the firewall

    [root@hadoop01 ~]# chkconfig --list iptables
    iptables 0:off 1:off 2:on 3:on 4:on 5:on 6:off
    [root@hadoop01 ~]# chkconfig iptables off #disable at boot
    [root@hadoop01 ~]# chkconfig --list iptables
    iptables 0:off 1:off 2:off 3:off 4:off 5:off 6:off #disabled in every runlevel

    4. Map hostnames to IP addresses

    1. Edit the hostname configuration file

    [root@hadoop01 ~]# vim /etc/sysconfig/network
    NETWORKING=yes
    HOSTNAME=hadoop01

    2. Add the IP-to-hostname mappings

    [root@hadoop01 ~]# vim /etc/hosts
    127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.86.101 hadoop01
    192.168.86.102 hadoop02
    192.168.86.103 hadoop03

    3. Reboot the virtual machine

    [root@hadoop01 software]# reboot

    4. Verify the hostname

    [root@hadoop01 software]# hostname
    hadoop01
    [root@hadoop01 software]# ping hadoop01
    PING hadoop01 (192.168.86.101) 56(84) bytes of data.
    64 bytes from hadoop01 (192.168.86.101): icmp_seq=1 ttl=64 time=0.057 ms
    64 bytes from hadoop01 (192.168.86.101): icmp_seq=2 ttl=64 time=0.047 ms
    64 bytes from hadoop01 (192.168.86.101): icmp_seq=3 ttl=64 time=0.049 ms
    ^C
    --- hadoop01 ping statistics ---
    3 packets transmitted, 3 received, 0% packet loss, time 2795ms
    rtt min/avg/max/mdev = 0.047/0.051/0.057/0.004 ms

    5. Install and configure the JDK

    This guide uses jdk-8u144-linux-x64.tar.gz, downloaded from the official site.

    1. Create a directory for installation packages

    [root@hadoop01 ~]# mkdir /opt/software/
    [root@hadoop01 ~]# cd /opt/software/
    [root@hadoop01 software]# ll
    total 0

    2. Upload the JDK to that directory

    You can use Xshell or any other file transfer tool to upload the file.

    [root@hadoop01 software]# ls
    jdk-8u144-linux-x64.tar.gz

    3. Create a directory for the extracted software

    [root@hadoop01 software]# mkdir /opt/module/
    [root@hadoop01 software]# cd /opt/module/
    [root@hadoop01 module]# ll
    total 0

    4. Extract the JDK

    [root@hadoop01 software]# tar -zxvf jdk-8u144-linux-x64.tar.gz -C /opt/module/
    [root@hadoop01 module]# java -version #check the JDK version after extraction
    java version "1.8.0_144"
    Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
    Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)

    5. If the version shown is wrong, remove the preinstalled OpenJDK

    [root@hadoop01 software]# rpm -qa | grep java
    tzdata-java-2016c-1.el6.noarch
    java-1.7.0-openjdk-1.7.0.99-2.6.5.1.el6.x86_64
    java-1.6.0-openjdk-1.6.0.38-1.13.10.4.el6.x86_64
    [root@hadoop01 software]# rpm -e --nodeps java-1.7.0-openjdk-1.7.0.99-2.6.5.1.el6.x86_64 #remove the package
    [root@hadoop01 software]# rpm -e --nodeps java-1.6.0-openjdk-1.6.0.38-1.13.10.4.el6.x86_64
    [root@hadoop01 software]# rpm -qa | grep java
    tzdata-java-2016c-1.el6.noarch

    6. Configure environment variables

    [root@hadoop01 software]# vim /etc/profile #append the JDK paths
    export JAVA_HOME=/opt/module/jdk1.8.0_144
    export PATH=$PATH:$JAVA_HOME/bin
    [root@hadoop01 software]# source /etc/profile

    7. Check the JDK version again

    [root@hadoop01 software]# vim /etc/profile
    [root@hadoop01 software]# java -version
    java version "1.8.0_144"
    Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
    Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)

    6. Clone the virtual machines

    1. Right-click the VM and open the Clone wizard

    2. Click Next

    3. Choose "Create a full clone"

    4. Set the clone's name (hadoop02) and location

    5. Continue and let the clone finish

    Repeat the same steps to clone one more virtual machine,

    named hadoop03.

    6. Start the cloned virtual machines

    7. Configure the network on the clones

    1. Check the network interface

    [root@hadoop02 ~]# ifconfig
    eth1 Link encap:Ethernet HWaddr 00:0C:29:96:83:5A
    inet addr:192.168.86.102 Bcast:192.168.86.255 Mask:255.255.255.0
    inet6 addr: fe80::20c:29ff:fe96:835a/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:1648800 errors:0 dropped:0 overruns:0 frame:0
    TX packets:1521767 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:1229868487 (1.1 GiB) TX bytes:187659267 (178.9 MiB)
    lo Link encap:Local Loopback
    inet addr:127.0.0.1 Mask:255.0.0.0
    inet6 addr: ::1/128 Scope:Host
    UP LOOPBACK RUNNING MTU:65536 Metric:1
    RX packets:72557 errors:0 dropped:0 overruns:0 frame:0
    TX packets:72557 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:5452856 (5.2 MiB) TX bytes:5452856 (5.2 MiB)

    2. Update the eth0 config with the clone's MAC address and IP (the clone's NIC shows up as eth1)

    [root@hadoop02 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
    DEVICE=eth0
    HWADDR=00:0C:29:96:83:5A #the MAC address shown on the eth1 line of ifconfig
    TYPE=Ethernet
    UUID=a01f4ce4-e877-4696-aa24-caeef5395b9f
    ONBOOT=yes
    NM_CONTROLLED=yes
    BOOTPROTO=static
    IPADDR=192.168.86.102 #hadoop02's IP address
    NETMASK=255.255.255.0
    GATEWAY=192.168.86.1
    #only the MAC address and the IP address need to change

    3. Edit the udev network rules so the clone's NIC is named eth0

    [root@hadoop02 ~]# vim /etc/udev/rules.d/70-persistent-net.rules
    # This file was automatically generated by the /lib/udev/write_net_rules
    # program, run by the persistent-net-generator.rules rules file.
    #
    # You can modify it, as long as you keep each rule on a single
    # line, and change only the value of the NAME= key.
    # PCI device 0x8086:0x100f (e1000)
    SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:96:83:5a", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

    4. Change the hostname

    [root@hadoop02 ~]# vim /etc/sysconfig/network
    NETWORKING=yes
    HOSTNAME=hadoop02

    5. Reboot so the new hostname takes effect

    [root@hadoop02 ~]# reboot

    6. Configure hadoop03's network the same way

    II. Install Hadoop

    This guide uses hadoop-2.7.3.tar.gz, downloaded from the official site.

    1. Install Hadoop and configure environment variables

    1. Upload hadoop-2.7.3.tar.gz to /opt/software

    [root@hadoop01 software]# ls
    hadoop-2.7.3.tar.gz

    2. Extract the archive

    [root@hadoop01 software]# tar -zxvf hadoop-2.7.3.tar.gz -C /opt/module/

    3. Configure hadoop-env.sh

    In vim, press Esc and type :set nu to show line numbers.

    Open hadoop-env.sh and set the JDK path (line 25):

    [root@hadoop01 ~]# cd /opt/module/hadoop-2.7.3/etc/hadoop
    [root@hadoop01 hadoop]# vim hadoop-env.sh
    25 export JAVA_HOME=/opt/module/jdk1.8.0_144

    4. Add Hadoop to the PATH

    [root@hadoop01 hadoop]# vim /etc/profile #append at the end of the file
    export HADOOP_HOME=/opt/module/hadoop-2.7.3
    export PATH=$PATH:$HADOOP_HOME/bin
    export PATH=$PATH:$HADOOP_HOME/sbin

    5. Reload the profile so the changes take effect (a quick check follows)

    [root@hadoop01 software]# source /etc/profile
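
    A simple way to confirm the new PATH works is to run hadoop version, which should report the installed release (a minimal check; output abridged):

    [root@hadoop01 software]# hadoop version
    Hadoop 2.7.3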

    6. Repeat steps 3-5 on hadoop02 and hadoop03

    2. Passwordless SSH login

    1. Generate a key pair on hadoop01 and copy the public key to every node (a verification step is shown below)

    [root@hadoop01 ~]# cd .ssh
    [root@hadoop01 .ssh]# pwd
    /root/.ssh
    [root@hadoop01 .ssh]# ssh-keygen -t rsa
    [root@hadoop01 .ssh]# ssh-copy-id hadoop01 #type yes, press Enter, then enter the password
    [root@hadoop01 .ssh]# ssh-copy-id hadoop02
    [root@hadoop01 .ssh]# ssh-copy-id hadoop03
    [root@hadoop01 .ssh]# ssh-copy-id localhost
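
    To confirm the passwordless setup works, log in to one of the other nodes; you should not be prompted for a password (a minimal check):

    [root@hadoop01 .ssh]# ssh hadoop02
    [root@hadoop02 ~]# exit
    [root@hadoop01 .ssh]#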

    2. Repeat the same steps on hadoop02 and hadoop03

    3. Write a cluster sync script

    1. Create a directory

    [root@hadoop01 ~]# mkdir bin

    2. Create the script file

    [root@hadoop01 bin]# touch xsync

    3. Write the sync script

    [root@hadoop01 bin]# vim xsync
    #!/bin/bash
    #1 get the number of arguments; exit if there are none
    pcount=$#
    if((pcount==0)); then
    echo no args;
    exit;
    fi
    #2 get the file name
    p1=$1
    fname=`basename $p1`
    echo fname=$fname
    #3 get the absolute path of the parent directory
    pdir=`cd -P $(dirname $p1); pwd`
    echo pdir=$pdir
    #4 get the current user name
    user=`whoami`
    #5 loop over the cluster nodes
    for((host=1; host<4; host++)); do
    #echo $pdir/$fname $user@hadoop$host:$pdir
    echo --------------- hadoop0$host ----------------
    rsync -rvl $pdir/$fname $user@hadoop0$host:$pdir
    done

    4. Make the script executable

    [root@hadoop01 bin]# chmod 777 xsync

    5. Sync the directory as a test

    [root@hadoop01 bin]# /root/bin/xsync /root/bin

    4. Configure the HDFS cluster

    1. Configure core-site.xml

    [root@hadoop01 hadoop]# vim core-site.xml
    <configuration>
    <!-- Address of the HDFS NameNode -->
    <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop01:9000</value>
    </property>

    <!-- Directory for files Hadoop generates at runtime -->
    <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/module/hadoop-2.7.3/data/tmp</value>
    </property>
    </configuration>

    2. Configure hdfs-site.xml

    [root@hadoop01 hadoop]# vim hdfs-site.xml
    <configuration>
    <property>
    <name>dfs.replication</name>
    <value>3</value>
    </property>
    <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop01:50090</value>
    </property>
    </configuration>

    3. Configure slaves

    [root@hadoop01 hadoop]# vim slaves
    hadoop01
    hadoop02
    hadoop03

    5. Configure the YARN cluster

    1. Configure yarn-env.sh

    [root@hadoop01 hadoop]# vim yarn-env.sh
    23 export JAVA_HOME=/opt/module/jdk1.8.0_144

    2. Configure yarn-site.xml

    [root@hadoop01 hadoop]# vim yarn-site.xml
    <configuration>
    <!-- How the reducer fetches data -->
    <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
    </property>
    <!-- Address of the YARN ResourceManager -->
    <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop01</value>
    </property>
    <!-- Enable log aggregation -->
    <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
    </property>
    <!-- Keep logs for 7 days -->
    <property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>604800</value>
    </property>
    </configuration>

    3. Configure mapred-env.sh

    [root@hadoop01 hadoop]# vim mapred-env.sh
    export JAVA_HOME=/opt/module/jdk1.8.0_144
    export HADOOP_JOB_HISTORYSERVER_HEAPSIZE=1000
    export HADOOP_MAPRED_ROOT_LOGGER=INFO,RFA

    4. Configure mapred-site.xml

    [root@hadoop01 hadoop]# mv mapred-site.xml.template mapred-site.xml #rename the template
    [root@hadoop01 hadoop]# vim mapred-site.xml
    <configuration>
    <!-- Run MapReduce on YARN -->
    <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
    </property>
    <property>
    <name>mapreduce.jobhistory.address</name>
    <value>hadoop01:10020</value>
    </property>
    <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>hadoop01:19888</value>
    </property>
    </configuration>

    5. Sync the configuration to hadoop02 and hadoop03

    [root@hadoop01 hadoop]# /root/bin/xsync /opt/module/hadoop-2.7.3/
    

    6. Start the cluster and test it

    1. Format the NameNode before the first start

    [root@hadoop01 hadoop-2.7.3]# bin/hdfs namenode -format

    2. Start HDFS

    [root@hadoop01 hadoop-2.7.3]# sbin/start-dfs.sh

    3. Check the processes

    [root@hadoop01 hadoop-2.7.3]# jps
    10496 Jps
    28469 SecondaryNameNode
    28189 NameNode
    28286 DataNode
    [root@hadoop02 ~]# jps
    27242 Jps
    3614 DataNode
    [root@hadoop03 ~]# jps
    27242 Jps
    3614 DataNode

    4. Open the HDFS web UI on port 50070 (a quick smoke test is shown below)
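
    As an optional smoke test (a minimal sketch; /user/root/input is just an example path, chosen here because it matches the input directory used by the wordcount job later in this guide), create a directory on HDFS, upload a few files, and list them:

    [root@hadoop01 hadoop-2.7.3]# bin/hdfs dfs -mkdir -p /user/root/input
    [root@hadoop01 hadoop-2.7.3]# bin/hdfs dfs -put etc/hadoop/*.xml /user/root/input
    [root@hadoop01 hadoop-2.7.3]# bin/hdfs dfs -ls /user/root/input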

    5. Start the YARN cluster

    [root@hadoop01 hadoop-2.7.3]# sbin/start-yarn.sh

    6. Check the YARN processes

    [root@hadoop01 hadoop-2.7.3]# jps
    49155 NodeManager
    28469 SecondaryNameNode
    48917 ResourceManager
    10600 Jps
    28189 NameNode
    28286 DataNode
    [root@hadoop02 ~]# jps
    3736 NodeManager
    27242 Jps
    3614 DataNode
    [root@hadoop03 ~]# jps
    3736 NodeManager
    27242 Jps
    3614 DataNode

    7. Open the YARN web UI on port 8088

    7. Configure local hostname mapping (on the Windows host)

    1. Locate the hosts file on Windows (C:\Windows\System32\drivers\etc\hosts)

    2. Add the cluster IPs and hostnames at the end of the file

    # Copyright (c) 1993-2009 Microsoft Corp.
    #
    # This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
    #
    # This file contains the mappings of IP addresses to host names. Each
    # entry should be kept on an individual line. The IP address should
    # be placed in the first column followed by the corresponding host name.
    # The IP address and the host name should be separated by at least one
    # space.
    #
    # Additionally, comments (such as these) may be inserted on individual
    # lines or following the machine name denoted by a '#' symbol.
    #
    # For example:
    #
    # 102.54.94.97 rhino.acme.com # source server
    # 38.25.63.10 x.acme.com # x client host
    # localhost name resolution is handled within DNS itself.
    # 127.0.0.1 localhost
    # ::1 localhost
    # appended at the end:
    192.168.86.101 hadoop01
    192.168.86.102 hadoop02
    192.168.86.103 hadoop03

    8. Configure a local yum repository

    1. Create a mount point

    [root@hadoop01 ~]# mkdir /mnt/cdrom
    [root@hadoop01 ~]# cd /mnt
    [root@hadoop01 mnt]# ll
    total 4
    dr-xr-xr-x. 7 root root 4096 May 23 2016 cdrom

    2. Mount the installation DVD and back up the existing repo files

    [root@hadoop01 mnt]# mount -t auto /dev/cdrom /mnt/cdrom
    [root@hadoop01 mnt]# cd /etc/yum.repos.d/
    [root@hadoop01 yum.repos.d]# mkdir bak
    [root@hadoop01 yum.repos.d]# mv CentOS-* bak

    3. Create CentOS-DVD.repo

    [root@hadoop01 yum.repos.d]# touch CentOS-DVD.repo
    [root@hadoop01 yum.repos.d]# vim CentOS-DVD.repo
    [centos6-dvd]
    name=Welcome to local source yum
    baseurl=file:///mnt/cdrom
    enabled=1
    gpgcheck=0

    4. Reload the yum repositories (an optional test install follows)

    [root@hadoop01 yum.repos.d]# yum clean all
    [root@hadoop01 yum.repos.d]# yum repolist all
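
    If everything is set up correctly, the centos6-dvd repository should now show a non-zero package count. As an optional test you could install a small package from the DVD (tree is only an example and is assumed to be present on the CentOS 6 DVD):

    [root@hadoop01 yum.repos.d]# yum install -y tree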

    9. Configure Hadoop history logs

    9.1 Configure the history server

    1. Configure mapred-site.xml

    [root@hadoop01 hadoop]# vim mapred-site.xml
    <configuration>
    <!-- Run MapReduce on YARN -->
    <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
    </property>
    <property>
    <name>mapreduce.jobhistory.address</name>
    <value>hadoop01:10020</value>
    </property>
    <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>hadoop01:19888</value>
    </property>
    </configuration>

    2. Find the history server start script

    [root@hadoop01 hadoop-2.7.3]# ls sbin/ | grep mr
    mr-jobhistory-daemon.sh

    3. Start the history server

    [root@hadoop01 hadoop-2.7.3]$ sbin/mr-jobhistory-daemon.sh start historyserver

    4. Check that the history server is running

    [root@hadoop01 hadoop-2.7.3]$ jps

    5. View the JobHistory web UI on port 19888

    http://hadoop01:19888/jobhistory

    9.2 Configure log aggregation

    1. Configure yarn-site.xml

    [root@hadoop01 hadoop]# vim yarn-site.xml
    <configuration>
    <!-- How the reducer fetches data -->
    <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
    </property>
    <!-- Address of the YARN ResourceManager -->
    <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop01</value>
    </property>
    <!-- Enable log aggregation -->
    <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
    </property>
    <!-- Keep logs for 7 days -->
    <property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>604800</value>
    </property>
    </configuration>

    2. Stop the NodeManager, ResourceManager, and history server

    [root@hadoop01 hadoop-2.7.3]$ sbin/yarn-daemon.sh stop resourcemanager
    [root@hadoop01 hadoop-2.7.3]$ sbin/yarn-daemon.sh stop nodemanager
    [root@hadoop01 hadoop-2.7.3]$ sbin/mr-jobhistory-daemon.sh stop historyserver

    3. Start the NodeManager, ResourceManager, and history server

    [root@hadoop01 hadoop-2.7.3]$ sbin/yarn-daemon.sh start resourcemanager
    [root@hadoop01 hadoop-2.7.3]$ sbin/yarn-daemon.sh start nodemanager
    [root@hadoop01 hadoop-2.7.3]$ sbin/mr-jobhistory-daemon.sh start historyserver

    4. Delete the output directory that already exists on HDFS (if any)

    [root@hadoop01 hadoop-2.7.3]$ bin/hdfs dfs -rm -R /user/root/output

    5. Run the wordcount example (a note on checking the result follows the command)

    [root@hadoop01 hadoop-2.7.3]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount /user/root/input /user/root/output
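
    Once the job finishes, you can list the result and print the word counts (a minimal check; part-r-00000 is the usual name of the single reduce output file):

    [root@hadoop01 hadoop-2.7.3]$ bin/hdfs dfs -ls /user/root/output
    [root@hadoop01 hadoop-2.7.3]$ bin/hdfs dfs -cat /user/root/output/part-r-00000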

    6. View the job logs in the JobHistory web UI on port 19888

    III. Install Hive

    1. Install MySQL

    1. Install the MySQL server

    [root@hadoop01 ~]# yum install mysql-server -y

    2. Start MySQL

    [root@hadoop01 ~]# service mysqld start

    3. Run the secure-installation script; press Enter when asked for the current root password (there is none yet) and set a new root password (123456 is used later in this guide)

    [root@hadoop01 ~]# /usr/bin/mysql_secure_installation

    4. Restart MySQL and log in

    [root@hadoop01 ~]# service mysqld restart
    [root@hadoop01 ~]# mysql -u root -p123456

    5. Allow remote connections to MySQL

    mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '123456' WITH GRANT OPTION;
    Query OK, 0 rows affected (0.01 sec)
    mysql> FLUSH PRIVILEGES;
    Query OK, 0 rows affected (0.00 sec)

    6. Exit MySQL

    mysql> exit
    Bye
    [root@hadoop01 ~]#

    2. Install Hive

    1. Upload the Hive package

    This guide uses apache-hive-2.1.1-bin.tar.gz.

    [root@hadoop01 software]# ls    #check that the package has been uploaded

    2. Extract the package

    [root@hadoop01 software]# tar -zxvf apache-hive-2.1.1-bin.tar.gz -C /opt/module/

    3. Rename the directory and configure hive-env.sh

    [root@hadoop01 module]# mv apache-hive-2.1.1-bin/ hive
    [root@hadoop01 module]# cd hive/conf/
    [root@hadoop01 conf]# mv hive-env.sh.template hive-env.sh
    [root@hadoop01 conf]# vim hive-env.sh
    47 # Set HADOOP_HOME to point to a specific hadoop install directory
    48 HADOOP_HOME=/opt/module/hadoop-2.7.3
    49
    50 # Hive Configuration Directory can be controlled by:
    51 export HIVE_CONF_DIR=/opt/module/hive/conf

    3. Configure the MySQL metastore

    1. Upload the MySQL JDBC connector

    mysql-connector-java-5.1.27.tar.gz, downloaded from the official site (extract it after uploading)

    2. Copy the MySQL driver jar into Hive's lib directory

    [root@hadoop01 mysql-connector-java-5.1.27]# cp mysql-connector-java-5.1.27-bin.jar /opt/module/hive/lib

    3. Point Hive's metastore at MySQL

    [root@hadoop01 mysql-connector-java-5.1.27]# cd /opt/module/hive/conf/
    [root@hadoop01 conf]# vim hive-site.xml
    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <configuration>
    <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://hadoop01:3306/hive?createDatabaseIfNotExist=true</value>
    <description>JDBC connect string for a JDBC metastore</description>
    </property>
    <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
    </property>
    <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
    <description>username to use against metastore database</description>
    </property>
    <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123456</value>
    <description>password to use against metastore database</description>
    </property>
    </configuration>

    4. Create /tmp and /user/hive/warehouse on HDFS and make them group-writable

    [root@hadoop01 hadoop-2.7.3]$ bin/hadoop fs -mkdir /tmp
    [root@hadoop01 hadoop-2.7.3]$ bin/hadoop fs -mkdir -p /user/hive/warehouse
    [root@hadoop01 hadoop-2.7.3]$ bin/hadoop fs -chmod g+w /tmp
    [root@hadoop01 hadoop-2.7.3]$ bin/hadoop fs -chmod g+w /user/hive/warehouse

    5. Initialize the metastore schema (from the Hive install directory)

    [root@hadoop01 hive]# bin/schematool -dbType mysql -initSchema
    

    6. Open the Hive shell to test (a small end-to-end example follows the session below)

    [root@hadoop01 hive]# bin/hive
    which: no hbase in (/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/opt/module/jdk1.8.0_144/bin:/opt/module/hadoop-2.7.3/bin:/opt/module/hadoop-2.7.3/sbin:/root/bin)
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:file:/opt/module/hive/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/opt/module/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
    Logging initialized using configuration in jar:file:/opt/module/hive/lib/hive-common-2.1.1.jar!/hive-log4j2.properties Async: true
    Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
    hive> exit;
    [root@hadoop01 hive]#
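
    Before exiting, you can also run a small end-to-end test (an illustrative example; the table name test is arbitrary, and the insert statement launches a MapReduce job, so it may take a moment):

    hive> create table test(id int, name string);
    hive> insert into table test values (1, 'tom');
    hive> select * from test;
    hive> show tables;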

    IV. Configure time synchronization

    1. Configure the NTP time server

    1. Set the same Shanghai time zone on all three machines

    [root@hadoop01 ~]# cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
    [root@hadoop02 ~]# cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
    [root@hadoop03 ~]# cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

    2. Configure hadoop01

    hadoop01 acts as the primary time server; the other machines synchronize their clocks against it.

    [root@hadoop01 ~]# vim /etc/ntp.conf
    10
    11 # Permit all access over the loopback interface. This could
    12 # be tightened as well, but to do so would effect some of
    13 # the administrative functions.
    14 restrict 192.168.86.101 nomodify notrap nopeer noquery #change to this machine's IP address
    15 restrict 127.0.0.1
    16 restrict -6 ::1
    17
    18 # Hosts on local network are less restricted.
    19 restrict 192.168.86.1 mask 255.255.255.0 nomodify notrap
    20
    21 # Use public servers from the pool.ntp.org project.
    22 # Please consider joining the pool (http://www.pool.ntp.org/join.html).
    23 #server 0.centos.pool.ntp.org iburst
    24 #server 1.centos.pool.ntp.org iburst
    25 #server 2.centos.pool.ntp.org iburst
    26 #server 3.centos.pool.ntp.org iburst
    27 server 127.127.1.0
    28 fudge 127.127.1.0 stratum 10

    3. Configure hadoop02 and hadoop03 (edit /etc/ntp.conf on each)

    6 # Permit time synchronization with our time source, but do not
    7 # permit the source to query or modify the service on this system.
    8 restrict default kod nomodify notrap nopeer noquery
    9 restrict -6 default kod nomodify notrap nopeer noquery
    10
    11 # Permit all access over the loopback interface. This could
    12 # be tightened as well, but to do so would effect some of
    13 # the administrative functions.
    14 restrict 192.168.86.101 nomodify notrap nopeer noquery
    15 restrict 127.0.0.1
    16 restrict -6 ::1
    17
    18 # Hosts on local network are less restricted.
    19 restrict 192.168.86.1 mask 255.255.255.0 nomodify notrap
    20
    21 # Use public servers from the pool.ntp.org project.
    22 # Please consider joining the pool (http://www.pool.ntp.org/join.html).
    23 #server 0.centos.pool.ntp.org iburst
    24 #server 1.centos.pool.ntp.org iburst
    25 #server 2.centos.pool.ntp.org iburst
    26 #server 3.centos.pool.ntp.org iburst
    27 server 192.168.86.101
    28 fudge 192.168.86.101 stratum 10

    2. Start the service

    1. Start ntpd on all three machines (an optional boot-time setting follows)

    [root@hadoop01 ~]# service ntpd start
    [root@hadoop02 ~]# service ntpd start
    [root@hadoop03 ~]# service ntpd start
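
    Optionally (not part of the original steps), you can make ntpd start automatically at boot so the synchronization survives a reboot:

    [root@hadoop01 ~]# chkconfig ntpd on
    [root@hadoop02 ~]# chkconfig ntpd on
    [root@hadoop03 ~]# chkconfig ntpd on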

    2. Change the clock on hadoop02 or hadoop03

    [root@hadoop02 ~]# date -s "2017-9-11 11:11:11"

    3. On that machine, request a sync from the time server hadoop01 (manual sync test)

    [root@hadoop02 ~]# ntpdate 192.168.86.101

    Check the result:

    [root@hadoop02 ~]# date

    4. Change the clock again on hadoop02 or hadoop03

    [root@hadoop02 ~]# date -s "2017-9-11 11:11:11"

    5. About ten minutes later, check whether the clock has caught up with the time server (automatic sync test)

    [root@hadoop02 ~]# date

    V. Install ZooKeeper

    1. Install

    1. Upload the package

    zookeeper-3.4.10.tar.gz

    2. Extract the package and rename the directory

    [root@hadoop01 software]# tar -zxvf zookeeper-3.4.10.tar.gz -C /opt/module/
    [root@hadoop01 software]# cd /opt/module/
    [root@hadoop01 module]# mv zookeeper-3.4.10/ zookeeper
    [root@hadoop01 conf]# cd /opt/module/zookeeper/conf

    2. Configuration files

    1. Rename zoo_sample.cfg

    [root@hadoop01 conf]# mv zoo_sample.cfg zoo.cfg

    2. Create a directory for ZooKeeper data:

    [root@hadoop01 zookeeper]# mkdir /opt/module/zookeeper/data

    3. Configure zoo.cfg

    [root@hadoop01 zookeeper]# vim conf/zoo.cfg
    # The number of milliseconds of each tick
    tickTime=2000
    # The number of ticks that the initial
    # synchronization phase can take
    initLimit=10
    # The number of ticks that can pass between
    # sending a request and getting an acknowledgement
    syncLimit=5
    # the directory where the snapshot is stored.
    # do not use /tmp for storage, /tmp here is just
    # example sakes.
    dataDir=/opt/module/zookeeper/data
    # the port at which the clients will connect
    clientPort=2181
    # the maximum number of client connections.
    # increase this if you need to handle more clients
    #maxClientCnxns=60
    #
    # Be sure to read the maintenance section of the
    # administrator guide before turning on autopurge.
    #
    # http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
    #
    # The number of snapshots to retain in dataDir
    #autopurge.snapRetainCount=3
    # Purge task interval in hours
    # Set to "0" to disable auto purge feature
    #autopurge.purgeInterval=1
    #cluster members
    server.1=hadoop01:2888:3888
    server.2=hadoop02:2888:3888
    server.3=hadoop03:2888:3888

    4. Create the myid file and check it

    [root@hadoop01 data]# cd /opt/module/zookeeper/data/
    [root@hadoop01 data]# touch myid
    [root@hadoop01 data]# echo 1 > myid
    [root@hadoop01 data]# cat myid
    1

    3. Sync the files to the other machines

    1. Sync the directory

    [root@hadoop01 zookeeper]# /root/bin/xsync /opt/module/zookeeper

    2. Set each machine's myid

    [root@hadoop01 zookeeper]# cd /opt/module/zookeeper/data/
    [root@hadoop01 data]# cat myid
    1
    [root@hadoop02 data]# echo 2 > myid
    [root@hadoop02 data]# cat myid
    2
    [root@hadoop03 module]# cd /opt/module/zookeeper/data/
    [root@hadoop03 data]# echo 3 > myid
    [root@hadoop03 data]# cat myid
    3

    4. Start and stop the cluster

    [root@hadoop01 data]# cd /opt/module/zookeeper/
    [root@hadoop02 data]# cd /opt/module/zookeeper/
    [root@hadoop03 data]# cd /opt/module/zookeeper/
    Run the following on all three machines:
    # bin/zkServer.sh start ---start
    # bin/zkServer.sh stop ---stop
    # bin/zkServer.sh status ---check the status

    5. Check cluster status (an extra client check follows)

    [root@hadoop01 zookeeper]# bin/zkServer.sh status
    ZooKeeper JMX enabled by default
    Using config: /opt/module/zookeeper/bin/../conf/zoo.cfg
    Mode: follower
    [root@hadoop02 zookeeper]# bin/zkServer.sh status
    ZooKeeper JMX enabled by default
    Using config: /opt/module/zookeeper/bin/../conf/zoo.cfg
    Mode: leader
    [root@hadoop03 zookeeper]# bin/zkServer.sh status
    ZooKeeper JMX enabled by default
    Using config: /opt/module/zookeeper/bin/../conf/zoo.cfg
    Mode: follower
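
    As an extra sanity check you can connect with the bundled CLI and list the root znode (a minimal example; the exact prompt text may differ):

    [root@hadoop01 zookeeper]# bin/zkCli.sh -server hadoop01:2181
    [zk: hadoop01:2181(CONNECTED) 0] ls /
    [zookeeper]
    [zk: hadoop01:2181(CONNECTED) 1] quit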

    VI. Install HBase

    1. Install

    1. Upload the package

    hbase-1.3.1-bin.tar.gz

    2. Extract it (the rest of this guide refers to the directory as /opt/module/hbase, so rename hbase-1.3.1 accordingly)

    [root@hadoop01 software]# tar -zxvf hbase-1.3.1-bin.tar.gz -C /opt/module/

    2. Configure

    1. Changes to hbase-env.sh:

    [root@hadoop01 conf]# pwd
    /opt/module/hbase/conf
    [root@hadoop01 conf]# vim hbase-env.sh
    27 export JAVA_HOME=/opt/module/jdk1.8.0_144
    129 export HBASE_MANAGES_ZK=false

    2. Changes to hbase-site.xml

    [root@hadoop01 conf]# vim hbase-site.xml
    <configuration>
    <property>
    <name>hbase.rootdir</name>
    <value>hdfs://hadoop01:9000/hbase</value>
    </property>
    <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
    </property>
    <!-- New since 0.98; earlier versions had no .port property and used port 60000 by default -->
    <property>
    <name>hbase.master.port</name>
    <value>16000</value>
    </property>
    <property>
    <name>hbase.zookeeper.quorum</name>
    <value>hadoop01:2181,hadoop02:2181,hadoop03:2181</value>
    </property>
    <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/opt/module/zookeeper/data</value>
    </property>
    </configuration>

    3. Configure regionservers

    [root@hadoop01 conf]# vim regionservers
    hadoop01
    hadoop02
    hadoop03

    4. Symlink the Hadoop config files into HBase

    [root@hadoop01 module]$ ln -s /opt/module/hadoop-2.7.3/etc/hadoop/core-site.xml /opt/module/hbase/conf/core-site.xml
    [root@hadoop01 module]$ ln -s /opt/module/hadoop-2.7.3/etc/hadoop/hdfs-site.xml /opt/module/hbase/conf/hdfs-site.xml

    3. Sync to the other machines

    [root@hadoop01 module]$ xsync hbase/

    4. Start the HBase service

    1. Start

    [root@hadoop02 hbase]# bin/start-hbase.sh
    starting master, logging to /opt/module/hbase/bin/../logs/hbase-root-master-hadoop02.out
    hadoop03: starting regionserver, logging to /opt/module/hbase/bin/../logs/hbase-root-regionserver-hadoop03.out
    hadoop01: starting regionserver, logging to /opt/module/hbase/bin/../logs/hbase-root-regionserver-hadoop01.out
    hadoop02: starting regionserver, logging to /opt/module/hbase/bin/../logs/hbase-root-regionserver-hadoop02.out

    2. Stop

    [root@hadoop02 hbase]$ bin/stop-hbase.sh

    5. Open the HBase web UI on port 16010 (a quick shell test is shown below)
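
    To verify the installation end to end, you can (for example) open the HBase shell, create a small table, write one cell, and scan it back; the table name test and column family cf below are only illustrative:

    [root@hadoop02 hbase]# bin/hbase shell
    hbase(main):001:0> create 'test', 'cf'
    hbase(main):002:0> put 'test', 'row1', 'cf:a', 'value1'
    hbase(main):003:0> scan 'test'
    hbase(main):004:0> exit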

    VII. Install Spark

    1. Install

    1. Upload the package

    spark-2.1.1-bin-hadoop2.7.tgz, downloaded from the official site

    2. Extract

    [root@hadoop01 software]# tar -zxvf spark-2.1.1-bin-hadoop2.7.tgz -C /opt/module/

    2. Configure

    1. Rename the directory and configure slaves

    [root@hadoop01 module]# mv spark-2.1.1-bin-hadoop2.7 spark
    [root@hadoop01 conf]# mv slaves.template slaves
    [root@hadoop01 conf]# vim slaves
    #
    # Licensed to the Apache Software Foundation (ASF) under one or more
    # contributor license agreements. See the NOTICE file distributed with
    # this work for additional information regarding copyright ownership.
    # The ASF licenses this file to You under the Apache License, Version 2.0
    # (the "License"); you may not use this file except in compliance with
    # the License. You may obtain a copy of the License at
    #
    # http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    #
    # A Spark Worker will be started on each of the machines listed below.
    hadoop01
    hadoop02
    hadoop03

    2. Configure spark-env.sh (rename spark-env.sh.template to spark-env.sh first)

    [root@hadoop01 conf]# vim spark-env.sh
    SPARK_MASTER_HOST=hadoop01
    SPARK_MASTER_PORT=7077

    3. Configure sbin/spark-config.sh

    [root@hadoop01 sbin]# vim spark-config.sh
    export JAVA_HOME=/opt/module/jdk1.8.0_144

    3. Sync to the other machines

    [root@hadoop01 module]$ xsync spark/

    4. Start the cluster (util.sh is a helper that runs jps on every node; a sketch of it follows the output)

    [root@hadoop01 spark]$ sbin/start-all.sh
    [root@hadoop01 spark]$ util.sh
    ================root@hadoop01================
    3330 Jps
    3238 Worker
    3163 Master
    ================root@hadoop02================
    2966 Jps
    2908 Worker
    ================root@hadoop03================
    2978 Worker
    3036 Jps
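
    util.sh is not defined anywhere in this guide; one possible sketch (assuming the passwordless SSH configured in Part II and that the script lives in /root/bin like xsync) is:

    [root@hadoop01 ~]# vim /root/bin/util.sh
    #!/bin/bash
    # Print the Java processes (jps) on every node of the cluster
    for host in hadoop01 hadoop02 hadoop03; do
        echo "================root@$host================"
        ssh $host "source /etc/profile; jps"
    done
    [root@hadoop01 ~]# chmod 777 /root/bin/util.sh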

    Then open the Spark master web UI on port 8080.

    5. Configure the JobHistoryServer

    1. Rename spark-defaults.conf.template

    [root@hadoop01 conf]$ mv spark-defaults.conf.template spark-defaults.conf

    2. Edit spark-defaults.conf to enable event logging:

    Note: the directory on HDFS must exist beforehand.

    Create it if it does not exist:

    [root@hadoop01 conf]# hdfs dfs -mkdir /directory
    
    [root@hadoop01 conf]# vim spark-defaults.conf
    #
    # Licensed to the Apache Software Foundation (ASF) under one or more
    # contributor license agreements. See the NOTICE file distributed with
    # this work for additional information regarding copyright ownership.
    # The ASF licenses this file to You under the Apache License, Version 2.0
    # (the "License"); you may not use this file except in compliance with
    # the License. You may obtain a copy of the License at
    #
    # http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    #
    # Default system properties included when running spark-submit.
    # This is useful for setting default environmental settings.
    # Example:
    # spark.master spark://master:7077
    # spark.eventLog.enabled true
    # spark.eventLog.dir hdfs://namenode:8021/directory
    # spark.serializer org.apache.spark.serializer.KryoSerializer
    # spark.driver.memory 5g
    # spark.executor.extraJavaOptions -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
    spark.eventLog.enabled true
    spark.eventLog.dir hdfs://hadoop01:9000/directory

    3. Add the following to spark-env.sh:

    [root@hadoop01 conf]# vim spark-env.sh
    export SPARK_HISTORY_OPTS="-Dspark.history.ui.port=18080
    -Dspark.history.retainedApplications=30
    -Dspark.history.fs.logDirectory=hdfs://hadoop01:9000/directory"

    4. Sync to the other machines

    [root@hadoop01 conf]# xsync /opt/module/spark/conf

    5. Start the history server

    [root@hadoop01 spark]# sbin/start-history-server.sh

    6. Run a job again

    [root@hadoop01 spark]$ bin/spark-submit \
    --class org.apache.spark.examples.SparkPi \
    --master spark://hadoop01:7077 \
    --executor-memory 1G \
    --total-executor-cores 2 \
    ./examples/jars/spark-examples_2.11-2.1.1.jar \
    100

    7. View the history server web UI on port 18080

    VIII. Install Flume

    1. Install

    1. Upload the package

    apache-flume-1.7.0-bin.tar.gz, downloaded from the official site

    2. Extract

    [root@hadoop01 software]# tar -zxvf apache-flume-1.7.0-bin.tar.gz -C /opt/module/

    3. Rename apache-flume-1.7.0-bin to flume

    [root@hadoop01 module]# mv apache-flume-1.7.0-bin flume

    2. Configure

    1. In flume/conf, rename flume-env.sh.template to flume-env.sh and set JAVA_HOME

    [root@hadoop01 conf]# mv flume-env.sh.template flume-env.sh
    [root@hadoop01 conf]# vim flume-env.sh
    export JAVA_HOME=/opt/module/jdk1.8.0_144

    3. Sync the Flume directory to the other machines

    [root@hadoop01 flume]# /root/bin/xsync flume/
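
    The original steps end with the sync. As an optional smoke test (everything below is an illustrative example, not part of the original guide: the file name netcat-logger.conf, the agent name a1, and port 44444 are all arbitrary), you could define a minimal netcat-to-logger agent and start it:

    [root@hadoop01 flume]# vim conf/netcat-logger.conf
    # a1: one netcat source -> one memory channel -> one logger sink
    a1.sources = r1
    a1.channels = c1
    a1.sinks = k1
    a1.sources.r1.type = netcat
    a1.sources.r1.bind = localhost
    a1.sources.r1.port = 44444
    a1.channels.c1.type = memory
    a1.sinks.k1.type = logger
    a1.sources.r1.channels = c1
    a1.sinks.k1.channel = c1
    [root@hadoop01 flume]# bin/flume-ng agent --conf conf/ --name a1 --conf-file conf/netcat-logger.conf -Dflume.root.logger=INFO,console

    From another terminal, any line sent to port 44444 (for example with telnet or nc) should then appear as an event in the agent's console output.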
    

  • Original article: https://blog.csdn.net/m0_55834564/article/details/126816641