
I'm trying to run a sample program (a jar file) on a single-node Hadoop cluster, but it fails with an exception.

I configured core-site.xml with localhost:9000. I put my text files into HDFS properly, and they can be viewed by executing the hadoop dfs -ls /tmp command.
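For reference, the NameNode address mentioned above would typically be set like this in core-site.xml on a single-node Hadoop 1.x setup (the value is assumed from the question; only the port and host are stated there):

```xml
<configuration>
  <!-- NameNode address; hdfs://localhost:9000 matches the setup described in the question -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```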

Thanks.

13/11/27 05:47:52 INFO mapred.LocalJobRunner: Map task executor complete.
13/11/27 05:47:52 WARN mapred.LocalJobRunner: job_local617545423_0001
java.lang.Exception: java.io.FileNotFoundException: /tmp/Jetty_0_0_0_0_50090_secondary__y6aanv (Is a directory)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:354)
Caused by: java.io.FileNotFoundException: /tmp/Jetty_0_0_0_0_50090_secondary__y6aanv (Is a directory)
    at java.io.FileInputStream.open(Native Method)
    at java.io.FileInputStream.<init>(FileInputStream.java:138)
    at org.apache.hadoop.fs.RawLocalFileSystem$TrackingFileInputStream.<init>(RawLocalFileSystem.java:71)
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileInputStream.<init>(RawLocalFileSystem.java:107)
    at org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:182)
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:126)
    at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:283)
    at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:427)
    at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.initialize(LineRecordReader.java:75)
    at org.apache.hadoop.mapreduce.lib.input.KeyValueLineRecordReader.initialize(KeyValueLineRecordReader.java:65)
    at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:521)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:763)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:364)
    at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:223)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
    at java.util.concurrent.FutureTask.run(FutureTask.java:166)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:724)
13/11/27 05:47:52 INFO mapred.JobClient: map 35% reduce 0%

Could you please show me your code? – Tariq
Here is exactly the code I'm trying: java.dzone.com/articles/hadoop-basics-creating – Janith
Can you try the same code by putting your file in the / or /user directory in HDFS? – Mukesh S
Thanks, I figured it out now; see the answer. – Janith

1 Answer


Directories cannot appear inside a job's input path; this applies even on the local Linux file system, not just HDFS.

The relevant part of the trace above:

java.io.FileNotFoundException: /tmp/Jetty_0_0_0_0_50090_secondary__y6aanv (Is a directory)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:354)
Caused by: java.io.FileNotFoundException: /tmp/Jetty_0_0_0_0_50090_secondary__y6aanv

Jetty_0_0_0_0_50090_secondary__y6aanv is a directory (a Jetty working directory created under /tmp by the SecondaryNameNode's web server on port 50090) that sits inside the input path, so the record reader fails when it tries to open it as a file.

I changed the input path to a directory containing only plain files, and the job works now.
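One way to guard against this before submitting a job is to list only the regular files under the intended input directory and skip any subdirectories. The sketch below uses plain java.io rather than Hadoop's FileSystem API, and the class name InputPathCheck is made up for illustration; it only demonstrates the filtering idea, not the actual job setup.

```java
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class InputPathCheck {

    // Returns the names of the plain files directly under dir; subdirectories
    // (like the Jetty_* working directories Hadoop daemons drop into /tmp)
    // are skipped, since a MapReduce input path must not contain them.
    static List<String> plainFiles(File dir) {
        List<String> names = new ArrayList<>();
        File[] entries = dir.listFiles();
        if (entries != null) {
            for (File e : entries) {
                if (e.isFile()) {
                    names.add(e.getName());
                }
            }
        }
        return names;
    }

    public static void main(String[] args) throws IOException {
        // Simulate an input directory polluted with a daemon's subdirectory.
        File dir = new File(System.getProperty("java.io.tmpdir"), "input-demo");
        dir.mkdirs();
        new File(dir, "part-00000.txt").createNewFile();
        new File(dir, "Jetty_0_0_0_0_50090_secondary__demo").mkdirs();

        // Only the real data file survives the filter.
        System.out.println(plainFiles(dir));
    }
}
```

The same idea is why pointing the job at a dedicated directory (for example /user/<name>/input) instead of a shared location like /tmp avoids the problem: nothing else writes working directories there.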