I have a Hadoop cluster set up and working under a common default username, "user1". I want to put files into Hadoop from a remote machine that is not part of the cluster. I configured the Hadoop files on the remote machine so that when
hadoop dfs -put file1 ...
is called from the remote machine, it puts file1 on the Hadoop cluster.
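In case it helps, the client-side configuration boils down to pointing the remote machine's core-site.xml at the cluster's namenode. A rough sketch of what I did, assuming the Hadoop 1.x conf layout, with namenode.example.com:9000 as a placeholder for my real namenode address:

# run on the remote machine; overwrites the client's core-site.xml
cat > "$HADOOP_HOME/conf/core-site.xml" <<'EOF'
<configuration>
  <property>
    <!-- every hadoop dfs command on this machine now talks to the cluster -->
    <name>fs.default.name</name>
    <value>hdfs://namenode.example.com:9000</value>
  </property>
</configuration>
EOF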
The only problem is that I am logged in as "user2" on the remote machine, and that doesn't give me the result I expect. In fact, the above command can only be executed on the remote machine as:
hadoop dfs -put file1 /user/user2/testFolder
However, what I really want is to be able to store the file as:
hadoop dfs -put file1 /user/user1/testFolder
If I try to run the last command, Hadoop throws an error because of access permissions.
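For reference, the ownership and permission bits that block the write can be inspected with a plain listing (a standard read-only command, so it is safe to run from the remote machine):

# listing /user shows the owner and permissions of /user/user1 itself
hadoop dfs -ls /user

Is there any way that I can specify the username within the hadoop dfs command?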
I am looking for something like:
hadoop dfs -username user1 -put file1 /user/user1/testFolder
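The closest thing I have found so far is overriding the client-side identity with the HADOOP_USER_NAME environment variable. I have not verified which Hadoop versions honor it, and I believe it only works because my cluster runs with simple (non-Kerberos) authentication:

# with simple auth, the namenode trusts whatever username the client reports,
# so this should write as user1 without actually logging in as user1
HADOOP_USER_NAME=user1 hadoop dfs -put file1 /user/user1/testFolder

Since the name is taken at face value on a non-secured cluster, I am not sure this is something to rely on.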