hadoop - Issue in Connecting to HDFS Namenode
After a new Hadoop single-node installation, I got the following error in hadoop-root-datanode-localhost.localdomain.log:
2014-06-18 23:43:23,594 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:root cause:java.net.ConnectException: Call to localhost/127.0.0.1:54310 failed on connection exception: java.net.ConnectException: Connection refused
2014-06-18 23:43:23,595 INFO org.apache.hadoop.mapred.JobTracker: problem connecting to HDFS Namenode... re-trying
java.net.ConnectException: Call to localhost/127.0.0.1:54310 failed on connection exception: java.net.ConnectException: Connection refused
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:1142)
Any idea?
jps is not giving any output.
core-site.xml is updated:
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/surya/hadoop-1.2.1/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
    <description>The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. The URI's scheme determines the config property (fs.SCHEME.impl) naming the FileSystem implementation class. The URI's authority is used to determine the host, port, etc. for a FileSystem.</description>
  </property>
</configuration>
Also, on formatting using hadoop namenode -format, I got the "Format aborted" error below:
Re-format filesystem in /tmp/hadoop-root/dfs/name ? (Y or N) y
Format aborted in /tmp/hadoop-root/dfs/name
You need to run hadoop namenode -format as the HDFS superuser, i.e. the "hdfs" user itself.
The hint can be seen here:
UserGroupInformation: PriviledgedActionException as:root cause:java
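For example, a minimal sketch, assuming an "hdfs" user exists on the box (on a plain single-node setup where everything runs as one user, format as that same user instead):

# assumes an "hdfs" user owns the HDFS directories; adjust to your setup
sudo -u hdfs hadoop namenode -format

Also note that in Hadoop 1.x the re-format prompt only accepts an uppercase Y; answering a lowercase y aborts the format, which matches the "Format aborted" output above.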
Another thing to consider: you will want to move the HDFS root somewhere other than /tmp. You risk losing the HDFS contents when /tmp is cleaned (which can happen at any time).
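One way to do that (a sketch reusing the OP's /opt/surya/hadoop-1.2.1 base; any persistent, writable path works) is to pin the NameNode and DataNode directories explicitly in hdfs-site.xml, since in Hadoop 1.x they default to ${hadoop.tmp.dir}/dfs/name and ${hadoop.tmp.dir}/dfs/data:

<property>
  <name>dfs.name.dir</name>
  <value>/opt/surya/hadoop-1.2.1/dfs/name</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/opt/surya/hadoop-1.2.1/dfs/data</value>
</property>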
Update based on OP comments:
Re: JobTracker unable to contact NameNode: please do not skip steps.
First, make sure you format the NameNode, then start the NameNode and DataNodes. Run basic HDFS commands such as
hdfs dfs -put
and
hdfs dfs -get
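For example, a quick round trip to confirm HDFS is actually up (the file names here are arbitrary):

# copy a local file into HDFS, read it back, and list the root
hdfs dfs -put /etc/hosts /hosts-copy
hdfs dfs -get /hosts-copy /tmp/hosts-copy
hdfs dfs -ls /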
Then you can start the JobTracker and TaskTracker. Then (and not earlier) you can try to run a MapReduce job (which uses HDFS).
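A sketch of that full sequence on Hadoop 1.x, assuming the OP's install at /opt/surya/hadoop-1.2.1 (the examples jar name matches the 1.2.1 release):

cd /opt/surya/hadoop-1.2.1
bin/hadoop namenode -format                        # answer Y (uppercase) at the prompt
bin/start-dfs.sh                                   # starts NameNode, DataNode, SecondaryNameNode
bin/start-mapred.sh                                # starts JobTracker and TaskTracker
jps                                                # all five daemons should now be listed
bin/hadoop jar hadoop-examples-1.2.1.jar pi 2 10   # sample MapReduce job using HDFS

If jps still shows nothing after start-dfs.sh, check the NameNode log before going any further.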