I am developing a Hadoop application to process DICOM files that are distributed on HDFS.
I am using a custom RecordReader that reads a whole file and emits its content as a key-value pair. The record reader itself works properly.
All of the file's data ends up in the BytesWritable value; I have already verified that it is byte-for-byte identical to the original file, so inputContent holds the same data as the original file.
However, after converting the BytesWritable to a byte array, I am unable to wrap it in a stream and build a new DICOM image from it using the ImageJ API.
How can I do this with that API?
Here is an example of the code:
public void inputTester()
{
    DICOM image;
    Configuration testConf = new Configuration(false);

    /* Read from the local file system */
    testConf.set("fs.default.name", "file:///");

    File testFile = new File("path/to/dicom/file");
    Path path = new Path(testFile.getAbsoluteFile().toURI());
    FileSplit split = new FileSplit(path, 0, testFile.length(), null);

    InputFormat inputFormat = ReflectionUtils.newInstance(WholeFileInputFormat.class, testConf);
    TaskAttemptContext context = new TaskAttemptContextImpl(testConf, new TaskAttemptID());

    try
    {
        RecordReader reader = inputFormat.createRecordReader(split, context);
        while (reader.nextKeyValue())
        {
            /* get the byte array */
            BytesWritable inputBytesWritable = (BytesWritable) reader.getCurrentValue();
            byte[] inputContent = inputBytesWritable.getBytes();
            InputStream is = new ByteArrayInputStream(inputContent);
            image = new DICOM(is);
        }
    }
    catch (Exception e)
    {
        e.printStackTrace();
    }
}
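One thing I have considered: `BytesWritable.getBytes()` returns the padded backing buffer, so the array may be longer than the actual file and end in trailing zeros, which the DICOM decoder might choke on. Would trimming to `getLength()` first be the right approach? A minimal, self-contained sketch of the trimming pattern (plain Java; the padded buffer is simulated by hand here, and the ImageJ `DICOM` call is omitted):

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.util.Arrays;

public class TrimmedStreamDemo {
    public static void main(String[] args) {
        /* Simulate a BytesWritable: the backing buffer is larger than the
         * logical length and padded with trailing zeros. getBytes() would
         * return the whole buffer; getLength() would report 4. */
        byte[] backing = new byte[]{0x44, 0x49, 0x43, 0x4D, 0, 0, 0, 0}; // "DICM" + padding
        int logicalLength = 4;

        /* Option 1: copy only the valid bytes before wrapping in a stream. */
        byte[] trimmed = Arrays.copyOf(backing, logicalLength);
        InputStream is1 = new ByteArrayInputStream(trimmed);

        /* Option 2: bound the stream without copying. */
        InputStream is2 = new ByteArrayInputStream(backing, 0, logicalLength);

        System.out.println(trimmed.length); // prints 4
    }
}
```

In the real code this would mean something like `new ByteArrayInputStream(inputBytesWritable.getBytes(), 0, inputBytesWritable.getLength())` before passing the stream to the ImageJ DICOM constructor, but I am not sure whether the padding is actually what breaks the decoding.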