
How can I read Parquet files from a remote HDFS cluster (set up on a Linux server) using Dask or pyarrow in Python?

Also, please suggest better ways to do this than the two options above, if any exist.

I tried the following code:

from dask import dataframe as dd

df = dd.read_parquet('webhdfs://10.xxx.xx.xxx:xxxx/home/user/dir/sample.parquet',
    engine='pyarrow',
    storage_options={'host': '10.xxx.xx.xxx', 'port': xxxx, 'user': 'xxxxx'})
print(df)

The error is:

KeyError: "Collision between inferred and specified storage options:\n- 'host'\n- 'port'"


2 Answers

1 vote

Looking at this issue: https://github.com/dask/dask/issues/2757

Have you tried using three slashes?

df = dd.read_parquet('webhdfs:///10.xxx.xx.xxx:xxxx/home/user/dir/sample.parquet',
    engine='pyarrow',
    storage_options={'host': '10.xxx.xx.xxx', 'port': xxxx, 'user': 'xxxxx'})
0 votes

You need to provide the host/port either in the URL or in storage_options, not both. Either of the following should work:

df = dd.read_parquet('webhdfs://10.xxx.xx.xxx:xxxx/home/user/dir/sample.parquet',
    engine='pyarrow', storage_options={'user': 'xxxxx'})

df = dd.read_parquet('webhdfs:///home/user/dir/sample.parquet',
    engine='pyarrow', storage_options={'host': '10.xxx.xx.xxx', 'port': xxxx, 'user': 'xxxxx'})
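
Since the question also mentions pyarrow: if you want to skip Dask entirely, a minimal sketch using fsspec's WebHDFS filesystem (which is what Dask uses under the hood for webhdfs:// URLs) plus pyarrow could look like the following. The host, port, and user values are the same placeholders as above, and this assumes WebHDFS is enabled on the cluster:

import fsspec
import pyarrow.parquet as pq

# Connect to the cluster over WebHDFS (placeholders for host/port/user).
fs = fsspec.filesystem('webhdfs', host='10.xxx.xx.xxx', port=xxxx, user='xxxxx')

# Open the remote file and hand the file-like object to pyarrow.
with fs.open('/home/user/dir/sample.parquet', 'rb') as f:
    table = pq.read_table(f)

df = table.to_pandas()

This gives you a plain pandas DataFrame, which is usually enough for a single file; Dask is the better choice if you need to read many Parquet files or datasets larger than memory.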