This is essentially because Parquet is a columnar storage format. Say you have stored a 3 GB file with a block size of 1 GB. Because each column's values are stored contiguously rather than row by row, the values that make up a single record will probably not all sit in the same block. To read a whole record, one machine has to reassemble it, which requires transferring data from the other nodes holding the remaining blocks.
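To make this concrete, here is a tiny toy sketch (not real Parquet, and the sizes are made up for illustration): the same records are laid out row-wise and column-wise, the byte stream is cut into fixed-size "blocks", and we count how many blocks one full record touches in each layout.

```python
BLOCK_SIZE = 6  # hypothetical, tiny block size for the demo

# Three records, each with three fixed-width 2-byte fields.
records = [(b"a0", b"b0", b"c0"),
           (b"a1", b"b1", b"c1"),
           (b"a2", b"b2", b"c2")]

def blocks_touched(offsets):
    """Return the set of block indices covering the given byte offsets."""
    return {off // BLOCK_SIZE for off in offsets}

# Row layout: record 0's fields are contiguous, at bytes 0..5.
row_offsets = range(0, 6)

# Column layout: each column is one contiguous run of
# len(records) * 2 = 6 bytes, so record 0's fields sit at the
# start of the runs beginning at bytes 0, 6 and 12.
run = len(records) * 2
col_offsets = [col * run + b for col in range(3) for b in range(2)]

print(len(blocks_touched(row_offsets)))  # row storage: 1 block
print(len(blocks_touched(col_offsets)))  # column storage: 3 blocks
```

With row storage the whole record fits in one block; with column storage the same record is spread across three blocks, which on a distributed file system can mean pulling data from three different nodes.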
EDIT:
In the following image, which compares row storage against column storage, imagine that the `cost` column does not fit in your block: that column spills over and starts a new block. If you then want to read one whole specific row, the data for the `cost` column has to be sent from one node to another, which is inefficient. I hope that makes sense.