
I am trying to set up a multinode Hadoop cluster, but I am getting 0 active DataNodes, and HDFS shows an allocation of 0 bytes.

However, the NodeManager daemons are running on the DataNodes.

Master: masterhost1 172.31.100.3 # namenode (also acting as secondary namenode)

Slave: datahost1 172.31.100.4 # datanode
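For reference, the DataNode log below shows it trying to reach `masterhost1/172.31.100.3:9000`, which suggests a `core-site.xml` along these lines on both nodes (a sketch, assuming the default RPC port 9000; the actual file is not shown in the question):

```xml
<!-- core-site.xml (hypothetical reconstruction on master and slave) -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <!-- must resolve to the NameNode's LAN address, not 127.0.0.1 -->
    <value>hdfs://masterhost1:9000</value>
  </property>
</configuration>
```

For this to work, `masterhost1` must resolve to 172.31.100.3 in `/etc/hosts` on every node, and must not also be mapped to a loopback address such as `127.0.1.1`.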

The DataNode log is below:

```
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r cc865b490b9a6260e9611a5b8633cab885b3d247; compiled by 'jenkins' on 2015-12-18T01:19Z
STARTUP_MSG: java = 1.8.0_71
************************************************************/
2016-01-24 03:53:28,368 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
2016-01-24 03:53:28,862 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop_tmp/hdfs/datanode should be specified as a URI in configuration files. Please update hdfs configuration.
2016-01-24 03:53:36,454 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2016-01-24 03:53:37,127 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2016-01-24 03:53:37,127 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2016-01-24 03:53:37,132 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is datahost1
2016-01-24 03:53:37,142 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode with maxLockedMemory = 0
2016-01-24 03:53:37,195 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:50010
2016-01-24 03:53:37,197 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
2016-01-24 03:53:37,197 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Number threads for balancing is 5
2016-01-24 03:53:47,331 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2016-01-24 03:53:47,375 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.datanode is not defined
2016-01-24 03:53:47,395 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2016-01-24 03:53:47,400 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2016-01-24 03:53:47,404 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2016-01-24 03:53:47,405 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2016-01-24 03:53:47,559 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.datanode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2016-01-24 03:53:47,566 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50075
2016-01-24 03:53:47,566 INFO org.mortbay.log: jetty-6.1.26
2016-01-24 03:53:48,565 INFO org.mortbay.log: Started [email protected]:50075
2016-01-24 03:53:49,200 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnUserName = hadoop
2016-01-24 03:53:49,201 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: supergroup = sudo
2016-01-24 03:53:59,319 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2016-01-24 03:53:59,354 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 50020
2016-01-24 03:53:59,401 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:50020
2016-01-24 03:53:59,450 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: null
2016-01-24 03:53:59,485 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices:
2016-01-24 03:53:59,491 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop_tmp/hdfs/datanode should be specified as a URI in configuration files. Please update hdfs configuration.
2016-01-24 03:53:59,499 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool (Datanode Uuid unassigned) service to masterhost1/172.31.100.3:9000 starting to offer service
2016-01-24 03:53:59,503 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2016-01-24 03:53:59,504 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2016-01-24 03:54:00,805 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: masterhost1/172.31.100.3:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-01-24 03:54:01,808 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: masterhost1/172.31.100.3:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-01-24 03:54:02,811 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: masterhost1/172.31.100.3:9000. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-01-24 03:54:03,826 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: masterhost1/172.31.100.3:9000. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-01-24 03:54:04,831 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: masterhost1/172.31.100.3:9000. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
```
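The repeating `Retrying connect to server: masterhost1/172.31.100.3:9000` lines show the DataNode cannot open a TCP connection to the NameNode's RPC port. A quick way to narrow this down (a diagnostic sketch; hostnames and ports taken from the log above) is to check what the NameNode is actually listening on, and whether the DataNode can reach it:

```shell
# On the master: is anything listening on port 9000, and on which address?
# "172.31.100.3:9000" or "0.0.0.0:9000" is good; "127.0.0.1:9000" or an
# IPv6-only ":::9000" binding means remote DataNodes cannot register.
netstat -tlnp | grep 9000

# On the DataNode: can we reach the NameNode RPC port at all?
nc -zv masterhost1 9000
```

If `netstat` shows the port bound to a loopback or IPv6-only address, fix the hostname mapping in `/etc/hosts` (or disable IPv6, as the answer below suggests) and restart the NameNode.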

Comment: I think I have the same question as you. I have 3 slave machines, and when I do a `put`, it reports that there are no data nodes running. – Louis Kuang

1 Answer


The problem is with incoming connections: the NameNode is not receiving registration information from the DataNode. In my case this was caused by IPv6 issues. Just disable IPv6 on the master node, verify the listening ports using netstat, and the above problem should be solved.
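As a sketch of what "disable IPv6" typically means on a Linux master node (the sysctl keys and the `java.net.preferIPv4Stack` JVM property are standard; the exact file paths may differ by distribution and Hadoop install):

```shell
# Append to /etc/sysctl.conf, then apply with `sudo sysctl -p`:
#   net.ipv6.conf.all.disable_ipv6 = 1
#   net.ipv6.conf.default.disable_ipv6 = 1
#   net.ipv6.conf.lo.disable_ipv6 = 1
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1
sudo sysctl -w net.ipv6.conf.lo.disable_ipv6=1

# Alternatively (or additionally), force the Hadoop JVMs to use IPv4
# by adding this line to $HADOOP_HOME/etc/hadoop/hadoop-env.sh:
#   export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"
```

After applying either change, restart the Hadoop daemons and confirm with `netstat -tlnp` that port 9000 is bound to an IPv4 address reachable from the DataNodes.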