I have a VARCHAR column with a maximum size of 20,000 in my Redshift table. About 60% of the rows will have this column null or empty. What is the performance impact in such cases? From the documentation I read:
> Because Amazon Redshift compresses column data very effectively, creating columns much larger than necessary has minimal impact on the size of data tables. During processing for complex queries, however, intermediate query results might need to be stored in temporary tables. Because temporary tables are not compressed, unnecessarily large columns consume excessive memory and temporary disk space, which can affect query performance.
So this suggests query performance might suffer in this case. Are there any other disadvantages apart from this?
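For context, here is a minimal sketch of my setup (the table and column names are made up), along with the query I would use to check how long the stored values actually are, to see how oversized the VARCHAR(20000) declaration is in practice:

```sql
-- Hypothetical table: the wide column in question
CREATE TABLE my_table (
    id      BIGINT,
    payload VARCHAR(20000)  -- ~60% of rows are NULL or empty
);

-- Measure actual usage of the column:
-- max bytes stored, total rows, and how many rows are NULL or empty
SELECT MAX(OCTET_LENGTH(payload)) AS max_bytes_used,
       COUNT(*) AS total_rows,
       SUM(CASE WHEN payload IS NULL OR payload = '' THEN 1 ELSE 0 END) AS null_or_empty_rows
FROM my_table;
```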