I am a newbie to the Hadoop environment. I have already set up a 2-node Hadoop cluster and run a sample MapReduce application (WordCount, actually). I got output like this:
File System Counters
    FILE: Number of bytes read=492
    FILE: Number of bytes written=6463014
    FILE: Number of read operations=0
    FILE: Number of large read operations=0
    FILE: Number of write operations=0
    HDFS: Number of bytes read=71012
    HDFS: Number of bytes written=195
    HDFS: Number of read operations=404
    HDFS: Number of large read operations=0
    HDFS: Number of write operations=2
Job Counters
    Launched map tasks=80
    Launched reduce tasks=1
    Data-local map tasks=80
    Total time spent by all maps in occupied slots (ms)=429151
    Total time spent by all reduces in occupied slots (ms)=72374
Map-Reduce Framework
    Map input records=80
    Map output records=8
    Map output bytes=470
    Map output materialized bytes=966
    Input split bytes=11040
    Combine input records=0
    Combine output records=0
    Reduce input groups=1
    Reduce shuffle bytes=966
    Reduce input records=8
    Reduce output records=5
    Spilled Records=16
    Shuffled Maps =80
    Failed Shuffles=0
    Merged Map outputs=80
    GC time elapsed (ms)=5033
    CPU time spent (ms)=59310
    Physical memory (bytes) snapshot=18515763200
    Virtual memory (bytes) snapshot=169808543744
    Total committed heap usage (bytes)=14363394048
Shuffle Errors
    BAD_ID=0
    CONNECTION=0
    IO_ERROR=0
    WRONG_LENGTH=0
    WRONG_MAP=0
    WRONG_REDUCE=0
File Input Format Counters
    Bytes Read=29603
File Output Format Counters
    Bytes Written=195
Is there an explanation of what each of these counters means? In particular:
- Total time spent by all maps in occupied slots (ms)
- Total time spent by all reduces in occupied slots (ms)
- CPU time spent (ms)
- Physical memory (bytes) snapshot
- Virtual memory (bytes) snapshot
- Total committed heap usage (bytes)
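For context, I understand these are the same values that can be read programmatically once the job finishes. Below is a minimal sketch of how I believe they are exposed through the standard org.apache.hadoop.mapreduce Counters API; the class name and the driver wiring around it are my own assumptions, not code from the WordCount example itself:

import org.apache.hadoop.mapreduce.Counter;
import org.apache.hadoop.mapreduce.CounterGroup;
import org.apache.hadoop.mapreduce.Counters;
import org.apache.hadoop.mapreduce.Job;

public class DumpCounters {

    // 'job' is assumed to be a Job that has already completed
    // (e.g. after job.waitForCompletion(true) in the driver).
    public static void printCounters(Job job) throws Exception {
        Counters counters = job.getCounters();
        // Each group corresponds to a heading in the console output,
        // e.g. "Job Counters" or "Map-Reduce Framework".
        for (CounterGroup group : counters) {
            System.out.println(group.getDisplayName());
            // Each counter corresponds to one "name=value" line,
            // e.g. "Launched map tasks=80".
            for (Counter counter : group) {
                System.out.println("\t" + counter.getDisplayName()
                        + "=" + counter.getValue());
            }
        }
    }
}

So what I am really asking is what the values themselves mean (especially the slot time, CPU time, and memory snapshot counters), not how to print them.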