0
votes

I was trying to run a WordCount Java program on a Hadoop multi-node cluster using Eclipse (it worked fine with a single-node cluster but is not working on the multi-node one). I am getting back the following info:

INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
16/04/24 21:30:46 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
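
The localhost/127.0.0.1:54310 in that message is whatever NameNode address the client-side configuration resolves to, so it can help to print that value from the same classpath Eclipse uses. A minimal sketch (which of the two keys is populated depends on the Hadoop version; the class name is just for illustration):

import org.apache.hadoop.conf.Configuration;

public class PrintNameNodeUri {
    public static void main(String[] args) {
        // Reads core-site.xml / hdfs-site.xml from the classpath, if present
        Configuration conf = new Configuration();
        System.out.println("fs.defaultFS    = " + conf.get("fs.defaultFS"));    // Hadoop 2.x+ key
        System.out.println("fs.default.name = " + conf.get("fs.default.name")); // older Hadoop 1.x key
    }
}

If both print null, the job is falling back to defaults rather than the multi-node cluster's configuration.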

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class WordCount {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "wordcount");

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // Map and Reduce are the mapper/reducer classes (not shown here)
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);

        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        // NameNode address and paths are hardcoded to localhost:54310
        FileInputFormat.addInputPath(job, new Path("hdfs://localhost:54310/user/hduser/sam/"));
        FileOutputFormat.setOutputPath(job, new Path("hdfs://localhost:54310/user/hduser/wc-output"));

        job.waitForCompletion(true);
    }
}

I think there is something wrong with the paths. I am running this code on the master node.
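
If the paths are the problem, one thing worth checking is that nothing in the driver forces the job onto the local machine: with the multi-node cluster's core-site.xml on the classpath (or the jar submitted with hadoop jar on the master), the filesystem URI can come from the cluster configuration instead of being hardcoded. A rough sketch only, not a confirmed fix; the class name is hypothetical and the mapper/reducer/format setup from the question is elided:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        // fs.defaultFS comes from the cluster's core-site.xml on the classpath,
        // so the paths below are resolved against the cluster's NameNode,
        // not against localhost.
        Configuration conf = new Configuration();
        Job job = new Job(conf, "wordcount");

        // ... same setOutputKeyClass/setMapperClass/etc. calls as in the question ...

        FileInputFormat.addInputPath(job, new Path("/user/hduser/sam/"));
        FileOutputFormat.setOutputPath(job, new Path("/user/hduser/wc-output"));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}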


1 Answer

0
votes

Does the command

hdfs dfs -ls hdfs://localhost:54310/user/hduser/sam/

work?
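
If a shell on the master is not handy, roughly the same check can be run from Eclipse with the FileSystem API against the exact URI the job uses. This is only a sketch of the equivalent call (the class name is just for illustration):

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListInputDir {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Connects to the same address the job keeps retrying against;
        // this fails quickly if no NameNode is listening on localhost:54310
        FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:54310"), conf);
        for (FileStatus status : fs.listStatus(new Path("/user/hduser/sam/"))) {
            System.out.println(status.getPath());
        }
    }
}

If this also fails with a connection error, the problem is the NameNode address rather than the input path itself.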