3
votes

Hi, I am using Hadoop and HBase. Hadoop starts fine, but when I try to start HBase it throws an exception in the log files: Hadoop is refusing the connection on port 54310 of localhost. The logs are given below:

Mon Apr  9 12:28:15 PKT 2012 Starting master on hbase
ulimit -n 1024
2012-04-09 12:28:17,685 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics: Initializing RPC Metrics with hostName=HMaster, port=60000
2012-04-09 12:28:18,180 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server Responder: starting
2012-04-09 12:28:18,190 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server listener on 60000: starting
2012-04-09 12:28:18,197 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60000: starting
2012-04-09 12:28:18,200 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60000: starting
2012-04-09 12:28:18,202 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 60000: starting
2012-04-09 12:28:18,206 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60000: starting
2012-04-09 12:28:18,210 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 4 on 60000: starting
2012-04-09 12:28:18,278 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 9 on 60000: starting
2012-04-09 12:28:18,279 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 5 on 60000: starting
2012-04-09 12:28:18,284 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 8 on 60000: starting
2012-04-09 12:28:18,285 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 60000: starting
2012-04-09 12:28:18,285 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 60000: starting
2012-04-09 12:28:18,369 INFO org.apache.zookeeper.ZooKeeper: Client environment:zookeeper.version=3.3.2-1031432, built on 11/05/2010 05:32 GMT
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:host.name=hbase.com.com
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.version=1.6.0_20
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.home=/usr/lib/jvm/java-6-openjdk/jre
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.class.path=/opt/com/hbase-0.90.4/bin/../conf:/usr/lib/jvm/java-6-openjdk/lib/tools.jar:/opt/com/hbase-0.90.4/bin/..:/opt/com/hbase-0.90.4/bin/../hbase-0.90.4.jar:/opt/com/hbase-0.90.4/bin/../hbase-0.90.4-tests.jar:/opt/com/hbase-0.90.4/bin/../lib/activation-1.1.jar:/opt/com/hbase-0.90.4/bin/../lib/asm-3.1.jar:/opt/com/hbase-0.90.4/bin/../lib/avro-1.3.3.jar:/opt/com/hbase-0.90.4/bin/../lib/commons-cli-1.2.jar:/opt/com/hbase-0.90.4/bin/../lib/commons-codec-1.4.jar:/opt/com/hbase-0.90.4/bin/../lib/commons-configuration-1.6.jar:/opt/com/hbase-0.90.4/bin/../lib/commons-el-1.0.jar:/opt/com/hbase-0.90.4/bin/../lib/commons-httpclient-3.1.jar:/opt/com/hbase-0.90.4/bin/../lib/commons-lang-2.5.jar:/opt/com/hbase-0.90.4/bin/../lib/commons-logging-1.1.1.jar:/opt/com/hbase-0.90.4/bin/../lib/commons-net-1.4.1.jar:/opt/com/hbase-0.90.4/bin/../lib/core-3.1.1.jar:/opt/com/hbase-0.90.4/bin/../lib/guava-r06.jar:/opt/com/hbase-0.90.4/bin/../lib/hadoop-core-0.20.205.0.jar:/opt/com/hbase-0.90.4/bin/../lib/hadoop-gpl-compression-0.2.0-dev.jar:/opt/com/hbase-0.90.4/bin/../lib/jackson-core-asl-1.5.5.jar:/opt/com/hbase-0.90.4/bin/../lib/jackson-jaxrs-1.5.5.jar:/opt/com/hbase-0.90.4/bin/../lib/jackson-mapper-asl-1.4.2.jar:/opt/com/hbase-0.90.4/bin/../lib/jackson-xc-1.5.5.jar:/opt/com/hbase-0.90.4/bin/../lib/jasper-compiler-5.5.23.jar:/opt/com/hbase-0.90.4/bin/../lib/jasper-runtime-5.5.23.jar:/opt/com/hbase-0.90.4/bin/../lib/jaxb-api-2.1.jar:/opt/com/hbase-0.90.4/bin/../lib/jaxb-impl-2.1.12.jar:/opt/com/hbase-0.90.4/bin/../lib/jersey-core-1.4.jar:/opt/com/hbase-0.90.4/bin/../lib/jersey-json-1.4.jar:/opt/com/hbase-0.90.4/bin/../lib/jersey-server-1.4.jar:/opt/com/hbase-0.90.4/bin/../lib/jettison-1.1.jar:/opt/com/hbase-0.90.4/bin/../lib/jetty-6.1.26.jar:/opt/com/hbase-0.90.4/bin/../lib/jetty-util-6.1.26.jar:/opt/com/hbase-0.90.4/bin/../lib/jruby-complete-1.6.0.jar:/opt/com/hbase-0.90.4/bin/../lib/jsp-2.1-6.1.14.jar:/opt/com/hbase-0.90.4/bin/../lib/jsp-api-2.1-6.1.14.jar:/opt/com/hbase-0.90.4/bin/../lib/jsr311-api-1.1.1.jar:/opt/com/hbase-0.90.4/bin/../lib/log4j-1.2.16.jar:/opt/com/hbase-0.90.4/bin/../lib/protobuf-java-2.3.0.jar:/opt/com/hbase-0.90.4/bin/../lib/servlet-api-2.5-6.1.14.jar:/opt/com/hbase-0.90.4/bin/../lib/slf4j-api-1.5.8.jar:/opt/com/hbase-0.90.4/bin/../lib/slf4j-log4j12-1.5.8.jar:/opt/com/hbase-0.90.4/bin/../lib/stax-api-1.0.1.jar:/opt/com/hbase-0.90.4/bin/../lib/thrift-0.2.0.jar:/opt/com/hbase-0.90.4/bin/../lib/xmlenc-0.52.jar:/opt/com/hbase-0.90.4/bin/../lib/zookeeper-3.3.2.jar
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.library.path=/usr/lib/jvm/java-6-openjdk/jre/lib/i386/client:/usr/lib/jvm/java-6-openjdk/jre/lib/i386:/usr/lib/jvm/java-6-openjdk/jre/../lib/i386:/usr/java/packages/lib/i386:/usr/lib/jni:/lib:/usr/lib
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.name=Linux
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.arch=i386
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.version=2.6.32-40-generic
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.name=com
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.home=/home/com
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.dir=/opt/com/hbase-0.90.4/bin
2012-04-09 12:28:18,372 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=180000 watcher=master:60000
2012-04-09 12:28:18,436 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181
2012-04-09 12:28:18,484 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:2181, initiating session
2012-04-09 12:28:18,676 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x1369600cac10000, negotiated timeout = 180000
2012-04-09 12:28:18,740 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=Master, sessionId=hbase.com.com:60000
2012-04-09 12:28:18,803 INFO org.apache.hadoop.hbase.metrics: MetricsString added: revision
2012-04-09 12:28:18,808 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsUser
2012-04-09 12:28:18,808 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsDate
2012-04-09 12:28:18,808 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsUrl
2012-04-09 12:28:18,808 INFO org.apache.hadoop.hbase.metrics: MetricsString added: date
2012-04-09 12:28:18,808 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsRevision
2012-04-09 12:28:18,808 INFO org.apache.hadoop.hbase.metrics: MetricsString added: user
2012-04-09 12:28:18,808 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsVersion
2012-04-09 12:28:18,808 INFO org.apache.hadoop.hbase.metrics: MetricsString added: url
2012-04-09 12:28:18,808 INFO org.apache.hadoop.hbase.metrics: MetricsString added: version
2012-04-09 12:28:18,808 INFO org.apache.hadoop.hbase.metrics: new MBeanInfo
2012-04-09 12:28:18,810 INFO org.apache.hadoop.hbase.metrics: new MBeanInfo
2012-04-09 12:28:18,810 INFO org.apache.hadoop.hbase.master.metrics.MasterMetrics: Initialized
2012-04-09 12:28:18,940 INFO org.apache.hadoop.hbase.master.ActiveMasterManager: Master=hbase.com.com:60000
2012-04-09 12:28:21,342 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hbase/192.168.15.20:54310. Already tried 0 time(s).
2012-04-09 12:28:22,343 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hbase/192.168.15.20:54310. Already tried 1 time(s).
2012-04-09 12:28:23,344 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hbase/192.168.15.20:54310. Already tried 2 time(s).
2012-04-09 12:28:24,345 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hbase/192.168.15.20:54310. Already tried 3 time(s).
2012-04-09 12:28:25,346 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hbase/192.168.15.20:54310. Already tried 4 time(s).
2012-04-09 12:28:26,347 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hbase/192.168.15.20:54310. Already tried 5 time(s).
2012-04-09 12:28:27,348 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hbase/192.168.15.20:54310. Already tried 6 time(s).
2012-04-09 12:28:28,349 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hbase/192.168.15.20:54310. Already tried 7 time(s).
2012-04-09 12:28:29,350 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hbase/192.168.15.20:54310. Already tried 8 time(s).
2012-04-09 12:28:30,351 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hbase/192.168.15.20:54310. Already tried 9 time(s).
2012-04-09 12:28:30,356 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
java.net.ConnectException: Call to hbase/192.168.15.20:54310 failed on connection exception: java.net.ConnectException: Connection refused
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:1095)
    at org.apache.hadoop.ipc.Client.call(Client.java:1071)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
    at $Proxy6.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
    at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:118)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:222)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:187)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1328)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:65)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1346)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:244)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
    at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:364)
    at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:81)
    at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:346)
    at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:282)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:592)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:604)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:434)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:560)
    at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:184)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1202)
    at org.apache.hadoop.ipc.Client.call(Client.java:1046)
    ... 17 more
2012-04-09 12:28:30,361 INFO org.apache.hadoop.hbase.master.HMaster: Aborting
2012-04-09 12:28:30,361 DEBUG org.apache.hadoop.hbase.master.HMaster: Stopping service threads
2012-04-09 12:28:30,361 INFO org.apache.hadoop.ipc.HBaseServer: Stopping server on 60000
2012-04-09 12:28:30,362 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60000: exiting
2012-04-09 12:28:30,362 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60000: exiting
2012-04-09 12:28:30,362 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 60000: exiting
2012-04-09 12:28:30,362 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60000: exiting
2012-04-09 12:28:30,363 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 4 on 60000: exiting
2012-04-09 12:28:30,363 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 5 on 60000: exiting
2012-04-09 12:28:30,363 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 60000: exiting
2012-04-09 12:28:30,363 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 60000: exiting
2012-04-09 12:28:30,364 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 8 on 60000: exiting
2012-04-09 12:28:30,364 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 9 on 60000: exiting
2012-04-09 12:28:30,364 INFO org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server listener on 60000
2012-04-09 12:28:30,369 INFO org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server Responder
2012-04-09 12:28:30,450 INFO org.apache.zookeeper.ClientCnxn: EventThread shut down
2012-04-09 12:28:30,450 INFO org.apache.zookeeper.ZooKeeper: Session: 0x1369600cac10000 closed
2012-04-09 12:28:30,450 INFO org.apache.hadoop.hbase.master.HMaster: HMaster main thread exiting
Mon Apr  9 12:28:40 PKT 2012 Stopping hbase (via master)

(hadoop conf) core-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/hadoop/tmp</value>
    </property>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:54310</value>
    </property>
</configuration>

hdfs-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
</configuration>

mapred-site.xml

<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>localhost:54311</value>
    </property>
</configuration>

(hbase conf) hbase-site.xml

<configuration>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://localhost:54310/hbase</value>
    </property>
    <!--added-->
    <property>
        <name>hbase.master</name>
        <value>127.0.0.1:60000</value>
        <description>The host and port that the HBase master runs at.</description>
    </property>
</configuration>
Check if a firewall is blocking the port??? – Abhishek bhutra
What is the configured value of "fs.default.name" in HDFS? – hari_sree
Yes, I have already checked my iptables, firestarter, etc. I think it is not a port issue; it may be a configuration mistake. – khan
Can you post your configuration files? I think HBase is somehow not able to connect to HDFS; maybe the namenode is not running. Looking into the configuration files will help. And check the namenode logs yourself. – Abhishek bhutra
"fs.default.name" in HDFS is set to the default value, hdfs://localhost:54310 – khan

5 Answers

3
votes

Try this

Comment out 127.0.1.1 in the /etc/hosts file using #, then put your IP and computer name on a new line. If you want to use localhost, make sure that 127.0.0.1 localhost is there in your hosts file, then replace every occurrence of the IP in the configuration files with localhost.

If you want to use the IP instead of localhost, then make sure that the IP and the equivalent domain name are there in your hosts file, and replace every occurrence of localhost with your IP.

Problems related to the namenode generally occur due to a misconfiguration of the hostname or IP.
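
A minimal sketch of what /etc/hosts could look like after that change, assuming you keep the IP (the hostname hbase.com.com and the address 192.168.15.20 are taken from the logs above; substitute your own values):

    127.0.0.1       localhost
    # 127.0.1.1     hbase.com.com    <- commented out as suggested above
    192.168.15.20   hbase.com.com hbase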

1
votes

Try looking in the /etc/hosts file and/or assigning localhost to 127.0.0.1. In your log it connects to 192.168.15.20:54310, not 127.0.0.1:54310.
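
A quick way to check which address each name actually resolves to (assuming the master hostname is hbase, as shown in the log; adjust if yours differs):

    # show what localhost and the master hostname resolve to
    getent hosts localhost hbase
    ping -c 1 hbase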

0
votes

First, check that in hbase-site.xml the property hbase.rootdir points to the same host and port as fs.default.name in Hadoop's core-site.xml.

Is hbase.rootdir set to some /tmp/hadoop location? (That is the default.) Change it to point to where your HDFS resides.

And first of all, try http://localhost:50070 and check for something like Namenode: --IP--:--port--. Give me that port.
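
If that page does not load, you can also check from the shell whether anything is listening on the NameNode port at all. A rough sketch (the exact netstat flags can differ between distributions, and jps needs a JDK):

    # is anything listening on the NameNode port?
    sudo netstat -tlnp | grep 54310
    # is a NameNode java process running at all?
    jps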

0
votes

Take a look at java.io.FileNotFoundException: /hadoop/tmp/dfs/name/current/VERSION (Permission denied)

So, first of all, please check what you actually have set as hbase.rootdir - whether it is pointing to HDFS or to the local filesystem. My example (with localhost for pseudo-distributed mode):

    <configuration>
        <property>
            <name>hbase.rootdir</name>
            <value>hdfs://localhost:54310/hbase</value>
        </property>
        <property>
            <name>hbase.master</name>
            <value>127.0.0.1:60000</value>
        </property>
    </configuration>

Next, looking at your log, it seems most likely that you're running on the local filesystem and that you don't have read/write access to the directory where HBase stores its data - check it with

mcbatyuk:/ bam$ ls -l / |grep hadoop
drwxr-xr-x   3 bam   wheel       102 Feb 29 21:34 hadoop

If your hbase.rootdir is on HDFS, you seem to have broken permissions, so you will need to change them with

# hadoop fs -chmod -R MODE /hadoop/

or change the property dfs.permissions to false in your $HADOOP_HOME/conf/hdfs-site.xml
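
For example, with MODE replaced by a concrete value (755 here is only an illustration; choose whatever permissions fit your setup):

    # recursively open up the /hadoop/ path
    hadoop fs -chmod -R 755 /hadoop/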

0
votes

Instead of using the temp dir, configure "dfs.name.dir" in hdfs-site.xml to a directory where you have permission to read and write. Then format the namenode (the command is "hadoop namenode -format") and start it. Once this is done, try starting HBase.
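
A minimal sketch of that property (the path /home/com/hadoop/name is only an illustration; use any directory your user can write to):

    <!-- in hdfs-site.xml -->
    <property>
        <name>dfs.name.dir</name>
        <value>/home/com/hadoop/name</value>
    </property>

Then format and restart (the script names assume Hadoop's and HBase's bin directories are on your PATH):

    hadoop namenode -format
    start-dfs.sh
    start-hbase.sh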