I understand that the HDFS block size is 64 MB, while the Linux filesystem block size is 4 KB. From what I have read, HDFS works on top of the Linux filesystem itself.
How does HDFS actually work with the 4 KB Linux block size? For example, during a write operation, does a 64 MB HDFS block get broken down into 4 KB blocks when it is saved to disk?
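For context, here is how I compared the two block sizes. The first command checks the local filesystem; the second (commented out, since it requires a Hadoop installation) reads the configured HDFS block size. The exact values are assumptions based on typical ext4 and older Hadoop defaults:

```shell
# Local filesystem block size (typically 4096 bytes on ext4)
stat -f -c %s /

# On a node with Hadoop installed, the configured HDFS block size
# (dfs.blocksize; 64 MB = 67108864 bytes in older releases) can be read with:
# hdfs getconf -confKey dfs.blocksize
```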