109 comments
..., what is this problem?
Also, in the browser it is best to enter IP:Port in the URL (if you have not set up a hostname mapping on your local machine, entering the hostname directly will not work).
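A quick way to check whether a hostname actually resolves before typing it into the browser (a sketch; `bigdata` stands in for whichever hostname is used in this thread):

```shell
# Check whether a hostname resolves locally; if not, fall back to IP:Port.
# "bigdata" is just the example hostname from this thread.
host=bigdata
if getent hosts "$host" > /dev/null; then
  echo "$host resolves; http://$host:50070 should be reachable"
else
  echo "$host does not resolve; use http://<ip>:50070 or add it to /etc/hosts"
fi
```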
MarsJ replied to 我要学习Hadoop
jps
2692 SecondaryNameNode
2597 DataNode
2856 NodeManager
2266 ResourceManager
2524 NameNode
2893 Jps
All the processes seem to be up, but opening http://localhost:18088/ and http://localhost:50070/ in a browser gets no response at all. Any idea why?
PS: when I start it (start-all.sh), I am asked to authorize several times (for the namenode, datanode, secondarynamenode, and so on).
Thanks, Mars.
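The repeated password prompts mean passwordless SSH to the local machine is not set up yet: the start scripts open one SSH session per daemon. A typical single-node setup, as a sketch (assuming the root user, as elsewhere in this thread):

```shell
# Create a key pair (if none exists) and authorize it for logins to this
# same machine; start-dfs.sh/start-yarn.sh ssh in once per daemon.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa -q
grep -qxF "$(cat ~/.ssh/id_rsa.pub)" ~/.ssh/authorized_keys 2>/dev/null \
  || cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys   # sshd ignores the file if permissions are looser
```

After this, `ssh localhost` should log in without prompting; the `cat id_rsa.pub >> authorized_keys` shown later in this thread does the same thing.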
[root@bigdata hadoop]# jps
2184 ResourceManager
1897 DataNode
2603 Jps
1804 NameNode
2301 NodeManager
The web pages display fine.
<property>
<name>yarn.resource.manager.admin.address</name>
<value>bigdata:18141</value>
</property>
This `resource.manager` contains a dot; is that a typo?
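For reference, the YARN property names have no dot inside "resourcemanager"; a misspelled property name is silently ignored and YARN falls back to the default. The documented form is (a sketch, keeping the host:port from the quoted config):

```
<property>
  <name>yarn.resourcemanager.admin.address</name>
  <value>bigdata:18141</value>
</property>
```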
[root@wlh sbin]# bash stop-all.sh
This script is deprecated. Use stop-dfs.sh and stop-yarn.sh instead.
[root@wlh sbin]# bash start-all.sh
This script is deprecated. Use start-dfs.sh and start-yarn.sh instead.
Does this mean my public/private key configuration is wrong?
[root@wlh .ssh]# ll
total 16
-rw-r--r--. 1 root root 390 Nov 28 08:39 authorized_keys
-rw-------. 1 root root 1675 Nov 30 01:41 id_rsa
-rw-r--r--. 1 root root 390 Nov 30 01:41 id_rsa.pub
-rw-r--r--. 1 root root 792 Nov 30 01:31 known_hosts
[root@wlh .ssh]# cat id_rsa.pub >> authorized_keys
[root@wlh .ssh]# ll
total 16
-rw-r--r--. 1 root root 780 Nov 30 01:42 authorized_keys
-rw-------. 1 root root 1675 Nov 30 01:41 id_rsa
-rw-r--r--. 1 root root 390 Nov 30 01:41 id_rsa.pub
-rw-r--r--. 1 root root 792 Nov 30 01:31 known_hosts
[root@wlh .ssh]# ssh wlh
Last login: Wed Nov 30 01:21:34 2016 from 192.168.128.1
[root@wlh ~]# /opt/hadoop-3.0.0-alpha1/sbin/start-all.sh
This script is deprecated. Use start-dfs.sh and start-yarn.sh instead.
2865 SecondaryNameNode
3010 ResourceManager
2626 NameNode
3558 Jps
3101 NodeManager
Then, after opening http://192.168.0.103:50070/, I don't see a single node. Is something misconfigured somewhere?
2016-12-10 07:18:47,569 INFO org.apache.hadoop.http.HttpServer2: HttpServer.start() threw a non Bind IOException
java.net.BindException: Port in use: localhost:0
http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/ClusterSetup.html#Configuring_the_Hadoop_Daemons
This is the guide to configuring the Hadoop daemons; it is linked from http://hadoop.apache.org/docs/stable/.
http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/core-default.xml
This is the reference for core-site.xml settings (core-default.xml lists every key and its default); you can find Configuration at the bottom left of http://hadoop.apache.org/docs/stable/.
2090 ResourceManager
2458 Jps
My machine only has 4 GB of RAM. Is the problem that my machine is too underpowered, or is it something else?
/opt/hadoop-2.7.2/bin/hdfs: line 304: exec: /root/usr/java/default//bin/java: cannot execute: No such file or directory
I can run `java -version` and `echo $JAVA_HOME`:
[root@bigdata ~]# java -version
java version "1.8.0_111"
Java(TM) SE Runtime Environment (build 1.8.0_111-b14)
Java HotSpot(TM) Client VM (build 25.111-b14, mixed mode, sharing)
[root@bigdata ~]# echo $JAVA_HOME
/usr/java/default/
What could the cause be? Thanks. My machine is 32-bit and the JDK I downloaded is also 32-bit; everything else, and every step, matches the video.
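The `/root` prefix and the double slash in the error path are a hint: `echo $JAVA_HOME` shows `/usr/java/default/` in the login shell, but Hadoop reads JAVA_HOME from `etc/hadoop/hadoop-env.sh`, and a value there that is missing the leading `/` gets resolved relative to the current directory (root's home). A small demonstration of the effect (the paths are illustrative; this is a guess at the typo, not a confirmed diagnosis):

```shell
# If hadoop-env.sh contains e.g. JAVA_HOME=usr/java/default/ (no leading "/"),
# the shell resolves it against $PWD, reproducing the broken path in the error.
JAVA_HOME="usr/java/default/"          # the suspected typo: a relative path
cd /tmp                                # stand-in for /root, where hdfs was run
echo "$PWD/$JAVA_HOME/bin/java"        # note the double slash, as in the log
```

The fix would be to make the value in hadoop-env.sh absolute: `export JAVA_HOME=/usr/java/default`.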
[root@bigdata ~]# /opt/hadoop-2.7.2/sbin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
17/02/03 04:18:30 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [bigdata]
bigdata: starting namenode, logging to /opt/hadoop-2.7.2/logs/hadoop-root-namenode-bigdata.out
bigdata: starting datanode, logging to /opt/hadoop-2.7.2/logs/hadoop-root-datanode-bigdata.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: secondarynamenode running as process 2199. Stop it first.
17/02/03 04:18:51 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop-2.7.2/logs/yarn-root-resourcemanager-bigdata.out
bigdata: starting nodemanager, logging to /opt/hadoop-2.7.2/logs/yarn-root-nodemanager-bigdata.out
[root@bigdata ~]# jps
2851 Jps
2006 NameNode
2134 DataNode
2423 ResourceManager
2526 NodeManager
[root@bigdata ~]#
Teacher, does this count as Hadoop running successfully? jps shows no SecondaryNameNode, though, and neither HDFS at http://bigdata:50070 nor YARN at http://bigdata:18088 will open. How do I fix this?
[root@bigdata hadoop]# hdfs namenode -format
Error: Could not find or load main class org.apache.hadoop.hdfs.server.namenode.NameNode
Could you please take a look, teacher?
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
17/03/03 16:38:31 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [bigdata]
bigdata: ssh: connect to host bigdata port 22: Connection refused
bigdata: ssh: connect to host bigdata port 22: Connection refused
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /opt/hadoop-2.7.2/logs/hadoop-root-secondarynamenode-bigdata.out
17/03/03 16:39:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop-2.7.2/logs/yarn-root-resourcemanager-bigdata.out
bigdata: ssh: connect to host bigdata port 22: Connection refused
MarsJ replied to 1395354946
Check whether your SSH setup is in place.
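"Connection refused" on port 22 means nothing is listening there at all, i.e. sshd is not running; a key problem would instead produce a password prompt. A quick probe without extra tools, using bash's built-in `/dev/tcp` (a sketch):

```shell
# Probe TCP port 22 via bash's /dev/tcp pseudo-device.
if (exec 3<>/dev/tcp/localhost/22) 2>/dev/null; then
  echo "sshd is listening on port 22"
else
  echo "port 22 refused - start sshd (e.g. systemctl start sshd)"
fi
```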
I open http://localhost:50070 in the browser to view the HDFS admin page, but every value under Summary is 0. Why?
Your walkthrough is very detailed; I have a few questions for you.
1. You seem to be teaching pseudo-distributed Hadoop, i.e. everything is configured on the master. If I want a truly distributed setup, and I have slave1 and slave2, how should those two machines be configured?
You mentioned in the lecture that the following details need attention when configuring hdfs-site.xml:
<property>
<name>dfs.datanode.data.dir</name>
<value>/opt/hadoop-2.7.2/current/data</value> <!-- DataNode storage location (only needs to be configured on DataNodes) -->
</property>
<property>
<name>dfs.replication</name>
<value>1</value> <!-- number of HDFS replicas: 1, matching the node count -->
</property>
If I run a truly distributed cluster, does that mean the master does not need the datanode setting, and the replica count below should change with the number of slaves?
2. I set up pseudo-distributed mode as you described, but when I try to reach YARN on the web at IP:18088, it does not load.
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>master:18088</value>
</property>
PS: I am on a Tencent Cloud host, which has both a public and a private IP. In /etc/hosts I mapped the public IP to the hostname,
but ifconfig shows the private IP. I wonder whether that is why YARN is unreachable. Port 50070 works fine, though.
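On cloud hosts the network interface only carries the private IP, so a daemon told to bind to `master` binds to whatever /etc/hosts resolves it to; with the public IP there, the bind can fail or go to an address the machine does not own. A common arrangement is (a sketch; the private IP below is hypothetical, use whatever `ifconfig` reports):

```
# /etc/hosts on the cloud host itself: map the hostname to the PRIVATE IP
# so the daemons can bind to it; use the public IP only in the browser.
10.0.0.5  master
```

Alternatively, setting `yarn.resourcemanager.webapp.address` to `0.0.0.0:18088` makes the web UI listen on all interfaces.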
Thanks for your answer, teacher!
###### The part of the log showing the error ######
...
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r b165c4fe8a74265c792ce23f546c64604acf0e41; compiled by 'jenkins' on 2016-01-26T00:08Z
STARTUP_MSG: java = 1.8.0_121
************************************************************/
17/03/16 04:01:40 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
17/03/16 04:01:40 INFO namenode.NameNode: createNameNode [-format]
17/03/16 04:01:42 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[Fatal Error] mapred-site.xml:24:2: The markup in the document following the root element must be well-formed.
17/03/16 04:01:42 FATAL conf.Configuration: error parsing conf mapred-site.xml
org.xml.sax.SAXParseException; systemId: file:/hadoop-2.7.2/etc/hadoop/mapred-site.xml; lineNumber: 24; columnNumber: 2; The markup in the document following the root element must be well-formed.
at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
...
mapred-site.xml was configured as in the video; line 24 referenced in the log corresponds to the value line of this property:
<property>
<name>mapreduce.jobhistory.done-dir</name>
<value>/jobhistory/done</value>
</property>
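The Xerces message "The markup in the document following the root element must be well-formed" almost always means there is stray text or a duplicate tag after the closing `</configuration>`; an XML file may contain exactly one root element. The expected overall shape (a sketch):

```
<?xml version="1.0"?>
<configuration>
  <property>
    <name>mapreduce.jobhistory.done-dir</name>
    <value>/jobhistory/done</value>
  </property>
  <!-- more property blocks here; nothing may follow the closing tag below -->
</configuration>
```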
zzwzzwcool replied to MarsJ
2. Because I had no distributed environment when recording, I could only walk everyone through pseudo-distributed mode, but as I mentioned, a truly distributed setup differs very little from the pseudo-distributed one. One difference is exactly the replica count you brought up: pseudo-distributed has a single node, so the replica count is 1. If your cluster has more than one node, set the replica count as needed; the default is 3, and with only 2 nodes, setting it to 3 accomplishes nothing.
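For example, on a hypothetical 2-node cluster the value would be capped at 2:

```
<property>
  <name>dfs.replication</name>
  <value>2</value>  <!-- never more than the number of DataNodes -->
</property>
```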
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
115.159.159.230 master
115.159.37.18 slave1
115.159.51.136 slave2
I commented out that top line, and after shutting down and restarting I found the ResourceManager was gone while everything else was there. I think I had hit this before and un-commented it. So apparently that line must not be commented out.
bigdata: ssh: connect to host bigdata port 22: Connection timed out
bigdata: ssh: connect to host bigdata port 22: Connection timed out
What is going on?
Try `hadoop namenode -format`.
2116 DataNode
2311 SecondaryNameNode
2456 ResourceManager
2027 NameNode
4379 Jps
Why is there no NodeManager?
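When one daemon is missing from `jps`, its own log file usually states the reason (a port clash or a config typo, typically). A quick look, with the log directory assumed from the paths quoted earlier in this thread:

```shell
# List the NodeManager log, if any; each daemon writes its own .log/.out files.
LOG_DIR=/opt/hadoop-2.7.2/logs
ls "$LOG_DIR" 2>/dev/null | grep -i nodemanager \
  || echo "no nodemanager log found in $LOG_DIR"
```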
15018396355 replied to BOTAK