I read a Parquet file with:
df = spark.read.parquet(file_name)
and get the columns with:
df.columns
which returns a list of column names: ['col1', 'col2', 'col3'].
I have read that the Parquet format is able to store some metadata in the file.
Is there a way to store and read extra metadata, for example, to attach a human-readable description of what each column contains?
Thanks.