• After setting up Hadoop 3.x, commands under bin/ resolve the NameNode nameservice ID as a DNS hostname during testing — any ideas how to fix this?


    I set up a Hadoop 3.3.6 fully distributed test cluster. During testing, the NameNode and YARN both start normally and their web UIs are reachable, but when I run commands under the bin directory, the client takes the NameNode nameservice ID from the configuration files and tries to resolve it as a DNS hostname. How can I fix this?

    [root@hdp4 wy]# hadoop fs -get /
    2024-06-16 17:49:31,873 WARN fs.FileSystem: Failed to initialize filesystem hdfs://hdp: java.lang.IllegalArgumentException: java.net.UnknownHostException: hdp
    -get: java.net.UnknownHostException: hdp
    Usage: hadoop fs [generic options]
            [-appendToFile [-n] <localsrc> ... <dst>]
            [-cat [-ignoreCrc] <src> ...]
            [-checksum [-v] <src> ...]
            [-chgrp [-R] GROUP PATH...]
            [-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
            [-chown [-R] [OWNER][:[GROUP]] PATH...]
            [-concat <target path> <src path> <src path> ...]
            [-copyFromLocal [-f] [-p] [-l] [-d] [-t <thread count>] [-q <thread pool queue size>] <localsrc> ... <dst>]
            [-copyToLocal [-f] [-p] [-crc] [-ignoreCrc] [-t <thread count>] [-q <thread pool queue size>] <src> ... <localdst>]
            [-count [-q] [-h] [-v] [-t [<storage type>]] [-u] [-x] [-e] [-s] <path> ...]
            [-cp [-f] [-p | -p[topax]] [-d] [-t <thread count>] [-q <thread pool queue size>] <src> ... <dst>]
            [-createSnapshot <snapshotDir> [<snapshotName>]]
            [-deleteSnapshot <snapshotDir> <snapshotName>]
            [-df [-h] [<path> ...]]
            [-du [-s] [-h] [-v] [-x] <path> ...]
            [-expunge [-immediate] [-fs <path>]]
            [-find <path> ... <expression> ...]
            [-get [-f] [-p] [-crc] [-ignoreCrc] [-t <thread count>] [-q <thread pool queue size>] <src> ... <localdst>]
            [-getfacl [-R] <path>]
            [-getfattr [-R] {-n name | -d} [-e en] <path>]
            [-getmerge [-nl] [-skip-empty-file] <src> <localdst>]
            [-head <file>]
            [-help [cmd ...]]
            [-ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [-e] [<path> ...]]
            [-mkdir [-p] <path> ...]
            [-moveFromLocal [-f] [-p] [-l] [-d] <localsrc> ... <dst>]
            [-moveToLocal <src> <localdst>]
            [-mv <src> ... <dst>]
            [-put [-f] [-p] [-l] [-d] [-t <thread count>] [-q <thread pool queue size>] <localsrc> ... <dst>]
            [-renameSnapshot <snapshotDir> <oldName> <newName>]
            [-rm [-f] [-r|-R] [-skipTrash] [-safely] <src> ...]
            [-rmdir [--ignore-fail-on-non-empty] <dir> ...]
            [-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
            [-setfattr {-n name [-v value] | -x name} <path>]
            [-setrep [-R] [-w] <rep> <path> ...]
            [-stat [format] <path> ...]
            [-tail [-f] [-s <sleep interval>] <file>]
            [-test -[defswrz] <path>]
            [-text [-ignoreCrc] <src> ...]
            [-touch [-a] [-m] [-t TIMESTAMP (yyyyMMdd:HHmmss) ] [-c] <path> ...]
            [-touchz <path> ...]
            [-truncate [-w] <length> <path> ...]
            [-usage [cmd ...]]
    
    Generic options supported are:
    -conf <configuration file>           specify an application configuration file
    -D <property=value>                  define a value for a given property
    -fs <file:///|hdfs://namenode:port>  specify default filesystem URL to use, overrides 'fs.defaultFS' property from configurations.
    -jt <local|resourcemanager:port>     specify a ResourceManager
    -files <file1,...>                   specify a comma-separated list of files to be copied to the map reduce cluster
    -libjars <jar1,...>                  specify a comma-separated list of jar files to be included in the classpath
    -archives <archive1,...>             specify a comma-separated list of archives to be unarchived on the compute machines
    
    The general command line syntax is:
    command [genericOptions] [commandOptions]
    
    Usage: hadoop fs [generic options] -get [-f] [-p] [-crc] [-ignoreCrc] [-t <thread count>] [-q <thread pool queue size>] <src> ... <localdst>
    

    The problem is this part: `2024-06-16 17:49:31,873 WARN fs.FileSystem: Failed to initialize filesystem hdfs://hdp: java.lang.IllegalArgumentException: java.net.UnknownHostException: hdp` followed by `-get: java.net.UnknownHostException: hdp`.

    `hdp` is the HA NameNode nameservice ID from my configuration files; my core-site.xml contains the following:

    <configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://hdp</value>
        </property>

        <property>
            <name>hadoop.tmp.dir</name>
            <value>/opt/hadoop-3.3.6/hdpData/tmp</value>
        </property>

        <property>
            <name>hadoop.http.staticuser.user</name>
            <value>root</value>
        </property>

        <property>
            <name>ha.zookeeper.quorum</name>
            <value>hdp4:2181,hdp5:2181,hdp6:2181</value>
        </property>

        <property>
            <name>ipc.client.connect.max.retries</name>
            <value>100</value>
        </property>
        <property>
            <name>ipc.client.connect.retry.interval</name>
            <value>10000</value>
        </property>
    </configuration>
    

    hdfs-site.xml also contains the following:

        <property>
            <name>dfs.nameservices</name>
            <value>hdp</value>
        </property>
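
    With an HA nameservice, the client only treats `hdfs://hdp` as a logical name when the remaining HA properties in hdfs-site.xml carry the exact suffix `hdp`; otherwise it falls back to DNS resolution and fails exactly as in the log above. A typical sketch follows — the NameNode IDs and host:port pairs are assumptions for illustration, not values from the asker's cluster:

```xml
<!-- Hypothetical sketch: NameNode IDs (nn1, nn2) and host:port values are assumed. -->
<property>
    <name>dfs.ha.namenodes.hdp</name>
    <value>nn1,nn2</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.hdp.nn1</name>
    <value>hdp4:8020</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.hdp.nn2</name>
    <value>hdp5:8020</value>
</property>
<!-- The suffix after "provider." must equal the nameservice ID character for character. -->
<property>
    <name>dfs.client.failover.proxy.provider.hdp</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```

    The `dfs.client.failover.proxy.provider.<nameservice>` key is what tells the client that the authority in `fs.defaultFS` is a nameservice rather than a hostname, so any mismatch in its suffix sends the client down the DNS path.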
    

    I tried explicitly setting the Hadoop configuration file path in my environment variables, but that had no effect on the problem.


    • 尘世壹俗人 2024-06-17 19:17

      Solved. I went through the configuration files again and found that while configuring the failover class in hdfs-site.xml my hand slipped and I typed an extra "1" in the value — embarrassing. After correcting it and restarting the cluster, everything works.
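
      This kind of stray-character typo is easy to catch with a grep over the config files. A self-contained miniature demo (the file path and the extra "1" below are hypothetical, mimicking the mistake described above):

```shell
#!/bin/sh
# Write a miniature hdfs-site.xml containing the kind of typo described above:
# the failover proxy provider key ends in "hdp1" while the nameservice is "hdp".
cat > /tmp/hdfs-site-demo.xml <<'EOF'
<property>
  <name>dfs.nameservices</name>
  <value>hdp</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.hdp1</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
EOF

# Print every occurrence of the nameservice ID; the stray "1" stands out,
# because the suffix after "provider." must equal dfs.nameservices exactly.
grep -n 'hdp' /tmp/hdfs-site-demo.xml
```

      On a real cluster, running the same grep over `$HADOOP_CONF_DIR/hdfs-site.xml` (and core-site.xml) quickly confirms whether every HA property suffix matches the nameservice ID.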

      This answer was accepted as the best answer by the asker.
  • Original question: https://ask.csdn.net/questions/8119400