0 votes

I am trying to change the HTTP port used by the NameNode web UI (default 50070).

The site: https://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-common/SingleCluster.html

seems to indicate that the configuration file to edit is etc/hadoop/core-site.xml.

I have placed the following in core-site.xml

<property>
    <name>dfs.http.address</name>
    <value>80</value>
</property>

but it does not change the port used by the NameNode.

I have already tried placing core-site.xml in a couple of places under my Hadoop folder, including conf/ and etc/hadoop, but it still does not change the NameNode's port.

I have used the site: http://tecadmin.net/setup-hadoop-2-4-single-node-cluster-on-linux/

as a guide for setting up Hadoop 2.6.0 as a single-node cluster.

I would appreciate some advice.

Update:

I have made the following entry in conf/hdfs-site.xml

<property>
    <name>dfs.namenode.http-address</name>
    <value>http://localhost:80</value>
</property>

This results in the following NameNode log error:

************************************************************/
2016-02-05 07:24:27,752 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2016-02-05 07:24:27,755 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2016-02-05 07:24:28,064 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2016-02-05 07:24:28,139 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2016-02-05 07:24:28,140 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2016-02-05 07:24:28,141 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://localhost:9000
2016-02-05 07:24:28,142 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use localhost:9000 to access this namenode/service.
2016-02-05 07:24:28,265 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://localhost:80
2016-02-05 07:24:28,297 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2016-02-05 07:24:28,300 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2016-02-05 07:24:28,308 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2016-02-05 07:24:28,328 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2016-02-05 07:24:28,329 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2016-02-05 07:24:28,361 INFO org.apache.hadoop.http.HttpServer2: HttpServer.start() threw a non Bind IOException
java.net.SocketException: Permission denied
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.Net.bind(Net.java:425)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:886)
        at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:827)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:703)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:590)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
2016-02-05 07:24:28,363 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2016-02-05 07:24:28,363 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2016-02-05 07:24:28,364 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2016-02-05 07:24:28,364 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.net.SocketException: Permission denied
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.Net.bind(Net.java:425)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:886)
        at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:827)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:703)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:590)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
2016-02-05 07:24:28,365 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2016-02-05 07:24:28,366 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at lx5557/196.1.241.3
************************************************************/

I have tried several variations of the value, including just the port, localhost:port, http://localhost:port, 0.0.0.0:port, and the hostname with the port, and none of them have worked so far.

Did you reboot the cluster after changing the config? - OneCricketeer
yup. stop/start-dfs/yarn.sh - paolov
"Short answer: you can't (use port 80). Ports below 1024 can be opened only by root" -- Source - OneCricketeer

2 Answers

1 vote

In hdfs-site.xml, this is the default value.

Change the port number to what you want it to be:

<property>
    <name>dfs.namenode.http-address</name>
    <value>0.0.0.0:50070</value>
</property>
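For example, to move the web UI to an unprivileged port (8070 here is an arbitrary choice; any port above 1024 avoids the Permission denied error in your log, since ports below 1024 require root):

<property>
    <name>dfs.namenode.http-address</name>
    <value>0.0.0.0:8070</value>
</property>

Restart HDFS afterwards so the NameNode picks up the change.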

I will also explain the confusion in your reading of those instructions, and why the setting does not belong in core-site.xml.

The fs.defaultFS value that those instructions have you change is described as...

The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. The uri's scheme determines the config property (fs.SCHEME.impl) naming the FileSystem implementation class. The uri's authority is used to determine the host, port, etc. for a filesystem.

Its default value is file:///, meaning the local filesystem of the node in the cluster.

The instructions tell you to change fs.defaultFS to hdfs://localhost:9000 because you want to change the filesystem to HDFS, not the local filesystem.
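For reference, a minimal core-site.xml matching those instructions carries only the filesystem URI (the hdfs://localhost:9000 visible in your log), which is why it has no bearing on the web UI port:

<property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
</property>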

tl;dr

dfs.http.address is not even a core-site.xml setting (in Hadoop 2.x it is the deprecated name for dfs.namenode.http-address, which lives in hdfs-site.xml), and the page you linked mentions nothing about it, so no, it does not "seem to indicate" that you should change that file.

2 votes

You have changed the wrong setting. Change

dfs.namenode.http-address

instead; you will find that setting in the hdfs-site.xml configuration file.
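As a quick check after restarting HDFS, a minimal Python sketch that confirms the web UI answers on the new port (assuming 8070; substitute your own value):

from urllib.request import urlopen

# Fetch the NameNode web UI front page on the relocated port.
with urlopen("http://localhost:8070/", timeout=5) as resp:
    print(resp.status)  # 200 means the UI is listening on the new port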