11 votes

We are running a workflow in Oozie. It contains two actions: the first is a MapReduce job that generates files in HDFS, and the second is a job that copies the data from those files into a database.

Both actions complete successfully, but Oozie throws an exception at the end that marks the workflow as failed.

This is the exception:

2014-05-20 17:29:32,242 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:lpinsight (auth:SIMPLE) cause:java.io.IOException: Filesystem closed
2014-05-20 17:29:32,243 WARN org.apache.hadoop.mapred.Child: Error running child
java.io.IOException: Filesystem closed
    at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:565)
    at org.apache.hadoop.hdfs.DFSInputStream.close(DFSInputStream.java:589)
    at java.io.FilterInputStream.close(FilterInputStream.java:155)
    at org.apache.hadoop.util.LineReader.close(LineReader.java:149)
    at org.apache.hadoop.mapred.LineRecordReader.close(LineRecordReader.java:243)
    at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.close(MapTask.java:222)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:421)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at org.apache.hadoop.mapred.Child.main(Child.java:262)

2014-05-20 17:29:32,256 INFO org.apache.hadoop.mapred.Task: Runnning cleanup for the task

Any idea?


2 Answers

10 votes

Use the configuration below when accessing the file system. Setting fs.hdfs.impl.disable.cache to true makes FileSystem.get() return a new, uncached instance, so another task closing the shared cached instance cannot close yours:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

Configuration conf = new Configuration();
conf.setBoolean("fs.hdfs.impl.disable.cache", true); // get an uncached instance
FileSystem fileSystem = FileSystem.get(conf);
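
Note that with the cache disabled, every FileSystem.get() call creates a fresh instance, so you become responsible for closing each instance yourself when you are done with it.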
4 votes

I encountered a similar issue that produced java.io.IOException: Filesystem closed. Eventually I found that I was closing the file system somewhere else: the Hadoop FileSystem API caches instances and returns the same object for a given URI and user, so if I closed one FileSystem, every holder of that instance was closed too. I got the solution from this answer.
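
A minimal sketch of that caching behavior (assuming fs.defaultFS points at an HDFS cluster; the class name FsCacheDemo is just for illustration):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FsCacheDemo {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();

        // FileSystem.get() caches instances per scheme/authority/user,
        // so both calls return the very same object.
        FileSystem fs1 = FileSystem.get(conf);
        FileSystem fs2 = FileSystem.get(conf);
        System.out.println(fs1 == fs2); // prints: true

        fs1.close(); // closes the shared instance for every holder

        // Any further use of fs2 now fails with
        // java.io.IOException: Filesystem closed
        fs2.exists(new Path("/tmp"));
    }
}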