33 votes

I have a Hadoop cluster set up and working under a common default username "user1". I want to put files into Hadoop from a remote machine which is not part of the Hadoop cluster. I configured the Hadoop files on the remote machine in such a way that when

hadoop dfs -put file1 ...

is called from the remote machine, it puts file1 on the Hadoop cluster.

The only problem is that I am logged in as "user2" on the remote machine, and that doesn't give me the result I expect. In fact, the above command can only be executed on the remote machine as:

hadoop dfs -put file1 /user/user2/testFolder

However, what I really want is to be able to store the file as:

hadoop dfs -put file1 /user/user1/testFolder

If I try to run the last command, Hadoop throws an error because of access permissions. Is there any way that I can specify the username within the hadoop dfs command?

I am looking for something like:

hadoop dfs -username user1 file1 /user/user1/testFolder
5
I think you need to change the accepted answer to the HADOOP_USER_NAME variant with the most upvotes. The whoami hack is not the right thing to do when you can set an env variable. - Mihail Krivushin

5 Answers

90 votes

If you use the HADOOP_USER_NAME environment variable, you can tell HDFS which user name to operate as. Note that this only works if your cluster isn't using security features (e.g. Kerberos). For example:

HADOOP_USER_NAME=hdfs hadoop dfs -put ...
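
Applied to the scenario in the question, that would be something like:

HADOOP_USER_NAME=user1 hadoop dfs -put file1 /user/user1/testFolder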
18 votes

This may not matter to anybody, but I am using a small hack for this.

I export HADOOP_USER_NAME in my .bash_profile, so that every time I log in, the user is set.

Just add the following line of code to .bash_profile:

export HADOOP_USER_NAME=<your hdfs user>
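
To pick the change up in the current shell without logging out and back in, you can reload the profile and check the variable:

source ~/.bash_profile
echo $HADOOP_USER_NAME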
13 votes

By default, authentication and authorization are turned off in Hadoop. According to Hadoop: The Definitive Guide (a nice book, by the way; I would recommend buying it):

The user identity that Hadoop uses for permissions in HDFS is determined by running the whoami command on the client system. Similarly, the group names are derived from the output of running groups.

So, you can create a new whoami command which returns the required username and put it in the PATH in such a way that it is found before the actual whoami that comes with Linux. You can play the same trick with the groups command.

This is a hack and won't work once authentication and authorization have been turned on.
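
For illustration, here is a rough sketch of that hack, assuming the replacement commands live in a directory such as ~/fakebin (the directory name is just an example; user1 and the target path are taken from the question):

# create replacement whoami and groups commands that report the desired identity
mkdir -p ~/fakebin
printf '#!/bin/sh\necho user1\n' > ~/fakebin/whoami
printf '#!/bin/sh\necho user1\n' > ~/fakebin/groups
chmod +x ~/fakebin/whoami ~/fakebin/groups

# make sure the replacements are found before the real commands
export PATH=~/fakebin:$PATH

# the client now reports user1 to HDFS
hadoop dfs -put file1 /user/user1/testFolder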

1 vote

Shell/Command way:

Set the HADOOP_USER_NAME variable and execute the hdfs commands:

  export HADOOP_USER_NAME=manjunath
  hdfs dfs -put <source> <destination>

Pythonic way:

  import os
  # set this before any HDFS command or client is launched from this process
  os.environ["HADOOP_USER_NAME"] = "manjunath"
0 votes

There's another post with something similar to this that could provide a workaround for you, using streaming via ssh:

cat file.txt | ssh user1@clusternode "hadoop fs -put - /path/in/hdfs/file.txt"
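
Applied to the question, that could look something like this (clusternode stands in for one of the cluster hosts, as in the example above, and the file name inside testFolder is just an illustration):

cat file1 | ssh user1@clusternode "hadoop fs -put - /user/user1/testFolder/file1"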

See putting a remote file into hadoop without copying it to local disk for more information.