66
votes

A fast method for inspecting files on HDFS is to use tail:

~$ hadoop fs -tail /path/to/file

This displays the last kilobyte of data in the file, which is extremely helpful. However, the complementary command head does not appear to be part of the shell command collection, which I find very surprising.

My hypothesis is that since HDFS is built for very fast streaming reads on very large files, there is some access-oriented reason that head was left out, which makes me hesitant to read the head of a file myself. Does anyone have an answer?

5
Lack of community interest in implementing such a feature? https://issues.apache.org/jira/browse/HDFS-206. - cabad

5 Answers

144
votes

I would say it's more to do with efficiency - a head can easily be replicated by piping the output of hadoop fs -cat through the Linux head command.

hadoop fs -cat /path/to/file | head

This is efficient, as head will close the underlying stream after the desired number of lines has been output.

Using tail in this manner would be considerably less efficient, as you'd have to stream over the entire file (all HDFS blocks) to find the final x lines.

hadoop fs -cat /path/to/file | tail

The hadoop fs -tail command, as you note, works on the last kilobyte - Hadoop can efficiently find the last block, skip to the position of the final kilobyte, and then stream the output. Piping through tail can't easily do this.
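The asymmetry can be illustrated locally with plain Linux pipes (no HDFS involved): when head exits, the producer receives SIGPIPE and stops early, whereas tail must consume the whole stream before it can print anything.

```shell
# head closes its end of the pipe after 3 lines; seq receives
# SIGPIPE and terminates early, so only a prefix is ever produced.
seq 1000000 | head -n 3

# tail must read all 1,000,000 lines before it knows which three
# are last - the entire stream is consumed.
seq 1000000 | tail -n 3
```

The same early-termination behavior is what makes `hadoop fs -cat ... | head` cheap: the cat process is killed after the first few lines, so only the first block (or less) is ever read from HDFS.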

7
votes

Starting with version 3.1.0, we now have it:

Usage: hadoop fs -head URI

Displays the first kilobyte of the file to stdout.

See here.

3
votes
hdfs dfs -cat /path | head

is a good way to solve the problem.

2
votes

You can try the following command:

hadoop fs -cat /path | head -n <N>

where <N> is the number of lines to view.

2
votes

In Hadoop v2:

hdfs dfs -cat /file/path | head

In Hadoop v1 and v3:

hadoop fs -cat /file/path | head