I want to access HDFS with fully qualified names, such as:
hadoop fs -ls hdfs://machine-name:8020/user
I could also simply access HDFS with
hadoop fs -ls /user
However, I am writing test cases that should work on different distributions (HDP, Cloudera, MapR, etc.), which involves accessing HDFS files with fully qualified names.
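For illustration, the qualified form differs per distribution (the hostnames and ports here are only examples):
hadoop fs -ls hdfs://namenode-host:8020/user
hadoop fs -ls maprfs:///user
where the first is what I'd use on HDP or Cloudera and the second on MapR.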
I understand that hdfs://machine-name:8020 is defined in core-site.xml as fs.default.name. But this seems to differ between distributions. For example, hdfs is maprfs on MapR, and IBM BigInsights doesn't even have a core-site.xml in $HADOOP_HOME/conf.
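For reference, the entry I mean typically looks like this in core-site.xml (the value here is illustrative):
<property>
  <name>fs.default.name</name>
  <value>hdfs://machine-name:8020</value>
</property>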
There doesn't seem to be a way for hadoop to tell me what's defined in fs.default.name through its command-line options.
How can I reliably get the value of fs.default.name from the command line?
The test will always be running on the namenode, so the machine name is easy. But getting the port number (8020) is harder. I tried lsof and netstat, but still couldn't find a reliable way.
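Roughly what I tried, as a sketch (using jps to pick out the NameNode PID is just my way of narrowing things down):
NN_PID=$(jps | awk '$2 == "NameNode" {print $1}')
netstat -tlnp 2>/dev/null | grep "$NN_PID"
lsof -iTCP -sTCP:LISTEN -a -p "$NN_PID"
The process listens on several ports, and I couldn't tell reliably which one corresponds to fs.default.name.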