I am on a network with a somewhat weird (as far as I understand) DNS server, which causes Hadoop and HBase to malfunction.
It resolves my hostname to an address my machine doesn't actually have (i.e. there is no interface with that address).
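To illustrate, here is a minimal diagnostic sketch (plain JDK calls, roughly what Hadoop/HBase end up doing when they look up their own address): it prints what the local hostname resolves to and then the addresses of the machine's real interfaces, and on this network the two do not match.

    import java.net.InetAddress;
    import java.net.NetworkInterface;
    import java.util.Collections;

    public class ResolveCheck {
        public static void main(String[] args) throws Exception {
            // Address the local hostname resolves to (what Hadoop/HBase roughly rely on).
            InetAddress local = InetAddress.getLocalHost();
            System.out.println("hostname " + local.getHostName()
                    + " resolves to " + local.getHostAddress());
            // Addresses actually configured on this machine's interfaces, for comparison.
            for (NetworkInterface nic : Collections.list(NetworkInterface.getNetworkInterfaces())) {
                for (InetAddress addr : Collections.list(nic.getInetAddresses())) {
                    System.out.println(nic.getName() + ": " + addr.getHostAddress());
                }
            }
        }
    }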
Hadoop does work if I have the following entries in /etc/hosts:
127.0.0.1 localhost
127.0.1.1 myhostname
If the entry "127.0.1.1 myhostname" is not present, uploading a file to HDFS fails, complaining that it could replicate the file only to 0 datanodes instead of 1.
But with that entry in place HBase does not work: creating a table from the HBase shell causes a NotAllMetaRegionsOnlineException (actually caused by the HMaster trying to bind to the wrong address, the one the DNS server returns for myhostname).
On another network, I am using the following /etc/hosts:
127.0.0.1 localhost
192.168.1.1 myhostname
And there both Hadoop and HBase work. The problem is that on the second network the address is dynamic, so I can't put it into /etc/hosts to override the result returned by the weird DNS server.
Hadoop runs in pseudo-distributed mode; HBase also runs on a single node.
Changing the behavior of the DNS server is not an option. Changing "localhost" to 127.0.0.1 in hbase/conf/regionservers doesn't change anything.
Can somebody suggest a way to override its behavior while keeping the internet connection working (I actually work on the client's machine through TeamViewer)? Or some way to configure HBase (or the ZooKeeper it manages) not to use the hostname to determine the address to bind to?
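To make the second part concrete, this is the kind of hbase-site.xml override I imagine, assuming the hbase.*.dns.interface properties exist in my HBase version and that "lo" is the right interface name for a single-node setup; I don't know whether this is actually the right approach or whether it covers ZooKeeper's bind address:

    <!-- Sketch only: point the master, regionserver and managed ZooKeeper at the
         loopback interface when they work out their own host/address, instead of
         asking the weird DNS server about myhostname. -->
    <property>
      <name>hbase.master.dns.interface</name>
      <value>lo</value>
    </property>
    <property>
      <name>hbase.regionserver.dns.interface</name>
      <value>lo</value>
    </property>
    <property>
      <name>hbase.zookeeper.dns.interface</name>
      <value>lo</value>
    </property>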