• Hadoop 3 Tutorial (36): (Production Tuning) Overview of a Parameter-Tuning Case for an Enterprise Development Scenario


    (170) Enterprise development scenario case

    This chapter is for general interest only.

    Requirement: count the occurrences of each word in 1 GB of data, on a cluster of 3 servers, each with 4 GB of RAM and a 4-core, 4-thread CPU.

    Requirement analysis

    1 GB / 128 MB = 8 MapTasks; plus 1 ReduceTask and 1 MrAppMaster.

    On average, 10 tasks / 3 nodes ≈ 3–4 tasks per node (distributed as 4 / 3 / 3).
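Not part of the tutorial itself, but the arithmetic above can be sanity-checked with a few lines of shell (all numbers taken from the requirement text):

```shell
# Rough arithmetic behind the task-count estimate above
input_mb=1024                                # 1 GB of input
block_mb=128                                 # default HDFS block size
nodes=3
map_tasks=$((input_mb / block_mb))           # 1024 / 128 = 8 MapTasks
total=$((map_tasks + 1 + 1))                 # + 1 ReduceTask + 1 MrAppMaster = 10
per_node=$(( (total + nodes - 1) / nodes ))  # ceil(10 / 3) = 4 on the busiest node
echo "$map_tasks map tasks, $total containers total, up to $per_node per node"
```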

    Of course, this is only a demonstration. In production you would tune holistically, based on machine specs, workload, and so on, rather than configuring per individual job as in this example.

    Below are the actual tuning parameter settings given in the tutorial:

    HDFS parameter tuning

    (1) Edit hadoop-env.sh to cap the maximum heap of both the NameNode and the DataNode at 1 GB:

    export HDFS_NAMENODE_OPTS="-Dhadoop.security.logger=INFO,RFAS -Xmx1024m"
    export HDFS_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS -Xmx1024m"
    

    (2) Edit hdfs-site.xml to configure the NameNode handler (heartbeat concurrency) count:

    
    
    <property>
      <name>dfs.namenode.handler.count</name>
      <value>21</value>
    </property>
    
    
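As an aside (not stated in this chapter), the value 21 matches the commonly cited sizing rule of 20 × ln(cluster size), here with a 3-node cluster:

```shell
# Commonly cited rule of thumb: handler.count ≈ 20 * ln(cluster size)
datanodes=3
awk -v n="$datanodes" 'BEGIN { printf "dfs.namenode.handler.count ~ %d\n", 20 * log(n) }'
# 20 * ln(3) ≈ 21.97, truncated to 21
```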

    (3) Edit core-site.xml to enable the trash feature:

    
    
    <property>
      <name>fs.trash.interval</name>
      <value>60</value>
    </property>
    
    

    (4) Distribute the configuration:

    [atguigu@hadoop102 hadoop]$ xsync hadoop-env.sh hdfs-site.xml core-site.xml
    

    MapReduce parameter tuning

    (1) Edit mapred-site.xml:

    
    <!-- Map-side circular sort buffer size, default 100 MB -->
    <property>
      <name>mapreduce.task.io.sort.mb</name>
      <value>100</value>
    </property>

    <!-- Sort-buffer spill threshold, default 0.80 -->
    <property>
      <name>mapreduce.map.sort.spill.percent</name>
      <value>0.80</value>
    </property>

    <!-- Number of spill files merged at once, default 10 -->
    <property>
      <name>mapreduce.task.io.sort.factor</name>
      <value>10</value>
    </property>

    <!-- Memory per MapTask -->
    <property>
      <name>mapreduce.map.memory.mb</name>
      <value>-1</value>
      <description>The amount of memory to request from the scheduler for each
      map task. If this is not specified or is non-positive, it is inferred
      from mapreduce.map.java.opts and mapreduce.job.heap.memory-mb.ratio.
      If java-opts are also not specified, we set it to 1024.
      </description>
    </property>

    <!-- vcores per MapTask, default 1 -->
    <property>
      <name>mapreduce.map.cpu.vcores</name>
      <value>1</value>
    </property>

    <!-- Maximum retries for a failed MapTask, default 4 -->
    <property>
      <name>mapreduce.map.maxattempts</name>
      <value>4</value>
    </property>

    <!-- Parallel copies during the reduce-side fetch, default 5 -->
    <property>
      <name>mapreduce.reduce.shuffle.parallelcopies</name>
      <value>5</value>
    </property>

    <!-- Fraction of reduce memory used to buffer shuffled data, default 0.70 -->
    <property>
      <name>mapreduce.reduce.shuffle.input.buffer.percent</name>
      <value>0.70</value>
    </property>

    <!-- Buffer usage threshold that triggers a reduce-side merge, default 0.66 -->
    <property>
      <name>mapreduce.reduce.shuffle.merge.percent</name>
      <value>0.66</value>
    </property>

    <!-- Memory per ReduceTask -->
    <property>
      <name>mapreduce.reduce.memory.mb</name>
      <value>-1</value>
      <description>The amount of memory to request from the scheduler for each
      reduce task. If this is not specified or is non-positive, it is inferred
      from mapreduce.reduce.java.opts and mapreduce.job.heap.memory-mb.ratio.
      If java-opts are also not specified, we set it to 1024.
      </description>
    </property>

    <!-- vcores per ReduceTask -->
    <property>
      <name>mapreduce.reduce.cpu.vcores</name>
      <value>2</value>
    </property>

    <!-- Maximum retries for a failed ReduceTask, default 4 -->
    <property>
      <name>mapreduce.reduce.maxattempts</name>
      <value>4</value>
    </property>

    <!-- Fraction of MapTasks that must complete before reduces may start, default 0.05 -->
    <property>
      <name>mapreduce.job.reduce.slowstart.completedmaps</name>
      <value>0.05</value>
    </property>

    <!-- Task timeout in milliseconds, default 600000 (10 minutes) -->
    <property>
      <name>mapreduce.task.timeout</name>
      <value>600000</value>
    </property>
    
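As a quick illustration (not from the tutorial text), the first two map-side settings combine to determine when spilling begins: a 100 MB sort buffer with a 0.80 spill threshold starts spilling to disk at 80 MB.

```shell
# Spill point implied by mapreduce.task.io.sort.mb and
# mapreduce.map.sort.spill.percent as configured above
sort_mb=100
spill_pct=0.80
awk -v mb="$sort_mb" -v p="$spill_pct" 'BEGIN { printf "spill starts at %.0f MB\n", mb * p }'
```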

    (2) Distribute the configuration:

    [atguigu@hadoop102 hadoop]$ xsync mapred-site.xml
    

    YARN parameter tuning

    (1) Edit yarn-site.xml with the following parameters:

    
    <property>
    	<description>The class to use as the resource scheduler.</description>
    	<name>yarn.resourcemanager.scheduler.class</name>
    	<value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
    </property>

    <property>
    	<description>Number of threads to handle scheduler interface.</description>
    	<name>yarn.resourcemanager.scheduler.client.thread-count</name>
    	<value>8</value>
    </property>

    <property>
    	<description>Enable auto-detection of node capabilities such as
    	memory and CPU.</description>
    	<name>yarn.nodemanager.resource.detect-hardware-capabilities</name>
    	<value>false</value>
    </property>

    <property>
    	<description>Flag to determine if logical processors (such as
    	hyperthreads) should be counted as cores. Only applicable on Linux
    	when yarn.nodemanager.resource.cpu-vcores is set to -1 and
    	yarn.nodemanager.resource.detect-hardware-capabilities is true.</description>
    	<name>yarn.nodemanager.resource.count-logical-processors-as-cores</name>
    	<value>false</value>
    </property>

    <property>
    	<description>Multiplier to determine how to convert physical cores to
    	vcores. This value is used if yarn.nodemanager.resource.cpu-vcores
    	is set to -1 (which implies auto-calculate vcores) and
    	yarn.nodemanager.resource.detect-hardware-capabilities is set to true.
    	The number of vcores will be calculated as number of CPUs * multiplier.</description>
    	<name>yarn.nodemanager.resource.pcores-vcores-multiplier</name>
    	<value>1.0</value>
    </property>

    <property>
    	<description>Amount of physical memory, in MB, that can be allocated
    	for containers. If set to -1 and
    	yarn.nodemanager.resource.detect-hardware-capabilities is true, it is
    	automatically calculated (in case of Windows and Linux).
    	In other cases, the default is 8192 MB.</description>
    	<name>yarn.nodemanager.resource.memory-mb</name>
    	<value>4096</value>
    </property>

    <property>
    	<description>Number of vcores that can be allocated
    	for containers. This is used by the RM scheduler when allocating
    	resources for containers. This is not used to limit the number of
    	CPUs used by YARN containers. If it is set to -1 and
    	yarn.nodemanager.resource.detect-hardware-capabilities is true, it is
    	automatically determined from the hardware in case of Windows and Linux.
    	In other cases, the number of vcores is 8 by default.</description>
    	<name>yarn.nodemanager.resource.cpu-vcores</name>
    	<value>4</value>
    </property>

    <property>
    	<description>The minimum allocation for every container request at the RM
    	in MBs. Memory requests lower than this will be set to the value of this
    	property. Additionally, a node manager that is configured to have less
    	memory than this value will be shut down by the resource manager.</description>
    	<name>yarn.scheduler.minimum-allocation-mb</name>
    	<value>1024</value>
    </property>

    <property>
    	<description>The maximum allocation for every container request at the RM
    	in MBs. Memory requests higher than this will throw an
    	InvalidResourceRequestException.</description>
    	<name>yarn.scheduler.maximum-allocation-mb</name>
    	<value>2048</value>
    </property>

    <property>
    	<description>The minimum allocation for every container request at the RM
    	in terms of virtual CPU cores. Requests lower than this will be set to the
    	value of this property. Additionally, a node manager that is configured to
    	have fewer virtual cores than this value will be shut down by the resource
    	manager.</description>
    	<name>yarn.scheduler.minimum-allocation-vcores</name>
    	<value>1</value>
    </property>

    <property>
    	<description>The maximum allocation for every container request at the RM
    	in terms of virtual CPU cores. Requests higher than this will throw an
    	InvalidResourceRequestException.</description>
    	<name>yarn.scheduler.maximum-allocation-vcores</name>
    	<value>2</value>
    </property>

    <property>
    	<description>Whether virtual memory limits will be enforced for
    	containers.</description>
    	<name>yarn.nodemanager.vmem-check-enabled</name>
    	<value>false</value>
    </property>

    <property>
    	<description>Ratio between virtual memory to physical memory when
    	setting memory limits for containers. Container allocations are
    	expressed in terms of physical memory, and virtual memory usage
    	is allowed to exceed this allocation by this ratio.</description>
    	<name>yarn.nodemanager.vmem-pmem-ratio</name>
    	<value>2.1</value>
    </property>
    
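Not from the tutorial, but the three YARN memory settings above together bound how many containers fit on one 4 GB node:

```shell
# Per-node container budget implied by the YARN memory settings above
nm_mem=4096      # yarn.nodemanager.resource.memory-mb
min_alloc=1024   # yarn.scheduler.minimum-allocation-mb
max_alloc=2048   # yarn.scheduler.maximum-allocation-mb
echo "at most $((nm_mem / min_alloc)) minimum-size containers per node"
echo "at most $((nm_mem / max_alloc)) maximum-size containers per node"
```

So each node can host between 2 and 4 containers, which fits the 4 / 3 / 3 task distribution estimated in the analysis.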

    (2) Distribute the configuration:

    [atguigu@hadoop102 hadoop]$ xsync yarn-site.xml
    

    Run the job

    (1) Restart YARN:

    [atguigu@hadoop102 hadoop-3.1.3]$ sbin/stop-yarn.sh
    [atguigu@hadoop103 hadoop-3.1.3]$ sbin/start-yarn.sh
    

    (2) Run the WordCount program:

    [atguigu@hadoop102 hadoop-3.1.3]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount /input /output
    

    (3) Watch the job on the YARN web UI:

    http://hadoop103:8088/cluster/apps

    References

    1. 尚硅谷 Big Data Hadoop tutorial (Hadoop 3.x, from cluster setup to cluster tuning)
  • Original post: https://blog.csdn.net/wlh2220133699/article/details/133999937