
Hi, I am new to Hadoop and trying to get it working on my local machine. Every time I run hdfs namenode -format and then start-dfs.sh and start-yarn.sh (or start-all.sh), it produces the log files below and I cannot figure out where it is going wrong.

Please see the log files below:

        2016-11-27 21:02:37,468 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = 
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.7.3
STARTUP_MSG:   classpath = /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/usr/local/hadoop/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/local/hadoop/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/usr/local/hadoop/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/usr/local/hadoop/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-collections-3.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-client-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-framework-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/gson-2.2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-annotations-2.7.3.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.7.3.jar:/usr/local/hadoop/share/hadoop/common/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/usr/local/hadoop/share/hadoop/common/lib/httpclient-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/httpcore-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/usr/local/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/common/lib/junit-4.11.jar:/usr/local/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/h
adoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/usr/local/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.7.3-tests.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.7.3.jar:/usr/local/hadoop/share/hadoop/common/hadoop-nfs-2.7.3.jar:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.7.3-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.7.3.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.3.jar:/usr/local/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/usr
/local/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.7.3.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.3.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.3.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.7.3.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.7.3.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-registry-2.7.3.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.3.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.7.3.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.3.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.3.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.3.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.3.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.7.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.11.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/paranamer-2
.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3-tests.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff; compiled by 'root' on 2016-08-18T01:41Z
STARTUP_MSG:   java = 1.8.0_111
************************************************************/
2016-11-27 21:02:37,479 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2016-11-27 21:02:37,483 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2016-11-27 21:02:37,755 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2016-11-27 21:02:37,841 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2016-11-27 21:02:37,841 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2016-11-27 21:02:37,843 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://localhost:9000
2016-11-27 21:02:37,844 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use localhost:9000 to access this namenode/service.
2016-11-27 21:02:37,910 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2016-11-27 21:02:38,035 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
2016-11-27 21:02:38,090 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2016-11-27 21:02:38,118 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2016-11-27 21:02:38,124 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2016-11-27 21:02:38,136 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2016-11-27 21:02:38,138 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2016-11-27 21:02:38,138 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2016-11-27 21:02:38,138 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2016-11-27 21:02:38,247 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2016-11-27 21:02:38,249 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2016-11-27 21:02:38,271 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50070
2016-11-27 21:02:38,271 INFO org.mortbay.log: jetty-6.1.26
2016-11-27 21:02:38,412 INFO org.mortbay.log: Started [email protected]:50070
2016-11-27 21:02:38,439 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2016-11-27 21:02:38,439 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
2016-11-27 21:02:38,470 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No KeyProvider found.
2016-11-27 21:02:38,470 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
2016-11-27 21:02:38,507 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2016-11-27 21:02:38,507 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2016-11-27 21:02:38,508 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2016-11-27 21:02:38,508 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2016 Nov 27 21:02:38
2016-11-27 21:02:38,509 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2016-11-27 21:02:38,509 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2016-11-27 21:02:38,510 INFO org.apache.hadoop.util.GSet: 2.0% max memory 889 MB = 17.8 MB
2016-11-27 21:02:38,511 INFO org.apache.hadoop.util.GSet: capacity      = 2^21 = 2097152 entries
2016-11-27 21:02:38,523 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2016-11-27 21:02:38,523 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication         = 1
2016-11-27 21:02:38,523 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication             = 512
2016-11-27 21:02:38,523 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication             = 1
2016-11-27 21:02:38,523 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams      = 2
2016-11-27 21:02:38,523 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2016-11-27 21:02:38,523 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer        = false
2016-11-27 21:02:38,523 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
2016-11-27 21:02:38,530 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner             = hadoop (auth:SIMPLE)
2016-11-27 21:02:38,530 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup          = supergroup
2016-11-27 21:02:38,530 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2016-11-27 21:02:38,530 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2016-11-27 21:02:38,531 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2016-11-27 21:02:38,684 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2016-11-27 21:02:38,684 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2016-11-27 21:02:38,684 INFO org.apache.hadoop.util.GSet: 1.0% max memory 889 MB = 8.9 MB
2016-11-27 21:02:38,684 INFO org.apache.hadoop.util.GSet: capacity      = 2^20 = 1048576 entries
2016-11-27 21:02:38,685 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: ACLs enabled? false
2016-11-27 21:02:38,685 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: XAttrs enabled? true
2016-11-27 21:02:38,685 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Maximum size of an xattr: 16384
2016-11-27 21:02:38,685 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2016-11-27 21:02:38,691 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
2016-11-27 21:02:38,691 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2016-11-27 21:02:38,691 INFO org.apache.hadoop.util.GSet: 0.25% max memory 889 MB = 2.2 MB
2016-11-27 21:02:38,691 INFO org.apache.hadoop.util.GSet: capacity      = 2^18 = 262144 entries
2016-11-27 21:02:38,693 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2016-11-27 21:02:38,693 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2016-11-27 21:02:38,693 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
2016-11-27 21:02:38,696 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2016-11-27 21:02:38,696 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2016-11-27 21:02:38,696 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2016-11-27 21:02:38,697 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2016-11-27 21:02:38,697 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2016-11-27 21:02:38,699 INFO org.apache.hadoop.util.GSet: Computing capacity for map NameNodeRetryCache
2016-11-27 21:02:38,699 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2016-11-27 21:02:38,699 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
2016-11-27 21:02:38,699 INFO org.apache.hadoop.util.GSet: capacity      = 2^15 = 32768 entries
2016-11-27 21:02:38,723 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /usr/local/hadoop/hadoopinfra/hdfs/namenode/in_use.lock acquired by nodename [email protected]
2016-11-27 21:02:38,777 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Recovering unfinalized segments in /usr/local/hadoop/hadoopinfra/hdfs/namenode/current
2016-11-27 21:02:38,777 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: No edit log streams selected.
2016-11-27 21:02:38,777 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Planning to load image: FSImageFile(file=/usr/local/hadoop/hadoopinfra/hdfs/namenode/current/fsimage_0000000000000000000, cpktTxId=0000000000000000000)
2016-11-27 21:02:38,875 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode: Loading 1 INodes.
2016-11-27 21:02:38,898 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf: Loaded FSImage in 0 seconds.
2016-11-27 21:02:38,898 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loaded image for txid 0 from /usr/local/hadoop/hadoopinfra/hdfs/namenode/current/fsimage_0000000000000000000
2016-11-27 21:02:38,907 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
2016-11-27 21:02:38,908 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 1
2016-11-27 21:02:39,041 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2016-11-27 21:02:39,041 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 339 msecs
2016-11-27 21:02:39,192 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: RPC server is binding to localhost:9000
2016-11-27 21:02:39,199 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2016-11-27 21:02:39,212 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 9000
2016-11-27 21:02:39,237 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemState MBean
2016-11-27 21:02:39,249 INFO org.apache.hadoop.hdfs.server.namenode.LeaseManager: Number of blocks under construction: 0
2016-11-27 21:02:39,250 INFO org.apache.hadoop.hdfs.server.namenode.LeaseManager: Number of blocks under construction: 0
2016-11-27 21:02:39,250 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: initializing replication queues
2016-11-27 21:02:39,250 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 0 secs
2016-11-27 21:02:39,250 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2016-11-27 21:02:39,250 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2016-11-27 21:02:39,255 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2016-11-27 21:02:39,260 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Total number of blocks            = 0
2016-11-27 21:02:39,260 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of invalid blocks          = 0
2016-11-27 21:02:39,260 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of under-replicated blocks = 0
2016-11-27 21:02:39,260 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of  over-replicated blocks = 0
2016-11-27 21:02:39,260 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of blocks being written    = 0
2016-11-27 21:02:39,261 INFO org.apache.hadoop.hdfs.StateChange: STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 10 msec
2016-11-27 21:02:39,281 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2016-11-27 21:02:39,282 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9000: starting
2016-11-27 21:02:39,284 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: NameNode RPC up at: localhost/127.0.0.1:9000
2016-11-27 21:02:39,284 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for active state
2016-11-27 21:02:39,289 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Starting CacheReplicationMonitor with interval 30000 milliseconds
2016-11-27 21:04:07,310 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 127.0.0.1
2016-11-27 21:04:07,310 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2016-11-27 21:04:07,310 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 1
2016-11-27 21:04:07,310 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 10 
2016-11-27 21:04:07,319 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 18 
2016-11-27 21:04:07,322 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /usr/local/hadoop/hadoopinfra/hdfs/namenode/current/edits_inprogress_0000000000000000001 -> /usr/local/hadoop/hadoopinfra/hdfs/namenode/current/edits_0000000000000000001-0000000000000000002
2016-11-27 21:04:07,323 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 3
2016-11-27 21:05:07,419 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 127.0.0.1
2016-11-27 21:05:07,420 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2016-11-27 21:05:07,420 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 3
2016-11-27 21:05:07,420 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 2 
2016-11-27 21:05:07,427 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 10 
2016-11-27 21:05:07,428 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /usr/local/hadoop/hadoopinfra/hdfs/namenode/current/edits_inprogress_0000000000000000003 -> /usr/local/hadoop/hadoopinfra/hdfs/namenode/current/edits_0000000000000000003-0000000000000000004
2016-11-27 21:05:07,428 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 5
2016-11-27 21:06:07,445 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 127.0.0.1
2016-11-27 21:06:07,445 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2016-11-27 21:06:07,445 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 5
2016-11-27 21:06:07,446 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 3 
2016-11-27 21:06:07,454 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 11 
2016-11-27 21:06:07,455 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /usr/local/hadoop/hadoopinfra/hdfs/namenode/current/edits_inprogress_0000000000000000005 -> /usr/local/hadoop/hadoopinfra/hdfs/namenode/current/edits_0000000000000000005-0000000000000000006
2016-11-27 21:06:07,455 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 7

The same goes for the other logs, like the datanode, secondary namenode, and YARN. Can anybody point me in the right direction please? Thanks.


1 Answer


Which messages concern you in this log?

It looks like a typical namenode startup log; I don't see anything fatal.

There are a few messages about under-replicated blocks, but those are just warnings. It looks like you have the default block replication set to 1, where the usual default is 3. Are you able to run any hadoop commands at all? For example:

hadoop fs -ls /

If so, you can check the configured block size in your Hadoop 2.7.3 install with the command hdfs getconf -confKey dfs.blocksize.
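On a mostly default 2.7.x install this would typically print the stock 128 MB block size; your value may differ if dfs.blocksize was changed:

hdfs getconf -confKey dfs.blocksize
# typically prints 134217728 (128 MB) on a default install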

If it says command not found, it is possible the executable's location wasn't added to your user's $PATH environment variable. The executables themselves should exist in /usr/local/hadoop/bin (e.g. /usr/local/hadoop/bin/hadoop and /usr/local/hadoop/bin/hdfs) or close by.

I am not sure which distribution of Hadoop you are running, but you can update the hdfs-site.xml your cluster uses, which is usually in your Hadoop home directory or symbolically linked to /etc/hadoop/conf/hdfs-site.xml. Find the dfs.replication property in hdfs-site.xml; you will notice its value is 1. You can change this to the value of your choice, and 3 is usually a good replication factor. If you do this, bring down the environment, make the change, then start it back up.
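As a sketch, the property in hdfs-site.xml looks like this (the surrounding <configuration> element already exists in the file; the value 3 is just the suggested replication factor):

<property>
  <name>dfs.replication</name>
  <!-- change from 1 to your desired replication factor -->
  <value>3</value>
</property>

With the stock scripts, bringing the environment down and back up would be stop-dfs.sh followed by start-dfs.sh.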

It is also likely, since you just formatted your namenode and then started it, that blocks have not replicated yet. The primary purpose of the namenode is to track your blocks via its fsimage and edit files. So if you just formatted and started it up, it will take a few minutes for the datanodes to start, send their heartbeats, report their blocks, and then replicate them.
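A quick way to check whether any datanodes have registered and reported their blocks is the standard admin report (run as your hadoop user):

hdfs dfsadmin -report
# shows configured capacity, under-replicated blocks, and the list of live/dead datanodes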

You can also run the following on the Linux command line to check which ports are listening:

sudo lsof -i tcp | grep -i LISTEN

You should see port 9000 listening, and probably 8020 as well if the namenode is up. Note that these are not the same ports the datanode will use. If you check your *-site.xml files they will tell you the properties of your cluster, including hostnames, ports, and running services, along with other cluster information.
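To narrow that down to just the namenode ports shown in your log (9000 for RPC, 50070 for the web UI; adjust if your configuration differs), something like this works; the -P and -n flags keep lsof from translating port numbers and hostnames into names:

sudo lsof -iTCP -sTCP:LISTEN -P -n | grep -E ':(9000|50070)'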

You should also have a web UI: going to http://hostname.example.com:50070/ (or http://localhost:50070/ for a local install) should give you basic status.
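If you only have a terminal, a quick way to confirm the UI is answering (assuming the default port 50070 from your log):

curl -s -o /dev/null -w '%{http_code}\n' http://localhost:50070/
# 200 means the namenode web UI is up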

The important Hadoop configuration files to note are:

hadoop-env.sh (sets your HADOOP_HOME variable along with JAVA_HOME). It should be in or around the /usr/local/hadoop/ folder, depending on your distribution.

hdfs-site.xml, core-site.xml, yarn-site.xml, mapred-site.xml, and other *-site.xml files for other services such as Hive, Spark, etc.

Usually a directory named /etc/hadoop/conf is symbolically linked to the actual location of these files on each node. The properties in these files determine most of your cluster settings.
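You can check whether that symlink exists and where it points; on a plain tarball install like yours it may not exist at all, in which case the configs live under /usr/local/hadoop/etc/hadoop (as the classpath in your log suggests):

ls -ld /etc/hadoop/conf
ls /usr/local/hadoop/etc/hadoop/*-site.xml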

Keep in mind, too, that HDFS, the Hadoop file system, is NOT a path on your operating system's file system. A path in HDFS such as /user/hive/warehouse exists only in HDFS, not on the server's file system, so you can't cd /user/hive/warehouse. You will need to use either client software or the hadoop fs commands to interact with HDFS.
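For example (the /user/hive/warehouse path is just illustrative and may not exist on your cluster):

cd /user/hive/warehouse               # fails: this path is not on the local file system
hadoop fs -ls /user/hive/warehouse    # lists the HDFS path, if it exists
hadoop fs -mkdir -p /user/hadoop      # creates a directory in HDFS, not on local disk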


If your cluster is running, you can run the following on the Linux terminal (you might have to go to the /usr/local/hadoop/bin folder to find the executable if the install didn't update your $PATH to include it):

hdfs getconf -namenodes
hdfs getconf -secondaryNameNodes
hdfs getconf -backupNodes
hdfs getconf -includeFile
hdfs getconf -excludeFile
hdfs getconf -nnRpcAddresses
hdfs getconf -confKey [key]

hdfs version
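For example, since your log shows fs.defaultFS is hdfs://localhost:9000, hdfs getconf -namenodes should print something like:

hdfs getconf -namenodes
# localhost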

You can also easily enable debug output for HDFS commands:

HADOOP_ROOT_LOGGER=DEBUG,console hdfs dfs -ls /

Logs should be stored in /var/log (or, for a plain tarball install like yours, typically under /usr/local/hadoop/logs) with a variety of names depending on which services are running on your cluster.

Also make sure you have disabled any system firewall, including iptables/firewalld depending on the operating system, on all nodes. Hadoop uses a variety of non-standard ports to communicate between nodes, so if iptables/firewalld is turned on, connections will get refused, making nodes appear offline even when the services are up.
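Depending on the operating system that is something along these lines, run on every node (treat this as a sketch, since the exact service names vary by distribution):

# systemd-based systems (CentOS/RHEL 7+, Fedora)
sudo systemctl stop firewalld && sudo systemctl disable firewalld
# older init-based systems using iptables
sudo service iptables stop && sudo chkconfig iptables off
# Ubuntu/Debian with ufw
sudo ufw disable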