I have a dataset stored in S3 in Parquet format, and I would like to know whether I can load this data into Redshift using the COPY command. I have read that I can use Redshift Spectrum, where I can point to the schema stored in a Hive metastore and query the data from Redshift.
What would be useful for me is either querying this Parquet data in S3 directly from Redshift, or loading it directly into Redshift using the COPY command.
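For context, this is roughly the COPY statement I am hoping is possible. The table name, bucket path, and IAM role below are placeholders, and I am not sure whether a Parquet format option is actually supported:

```sql
-- Hypothetical sketch: load Parquet files from S3 into an existing Redshift table.
-- 'my_table', the bucket path, and the IAM role ARN are all placeholders.
COPY my_table
FROM 's3://my-bucket/path/to/parquet/'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
FORMAT AS PARQUET;
```

If something like this works, it would avoid the JDBC route entirely.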
There are options where I can spin up a cluster, read the Parquet data from S3, and write it into Redshift using JDBC, but the problem is that JDBC is too slow compared to the COPY command.