I've checked the file manually to make sure no value exceeds its column's length. Everything looked fine, but I doubled the length of every VARCHAR column anyway.
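One thing worth ruling out: Redshift VARCHAR(n) lengths count bytes, not characters, so a value that looks fine by character count can still overflow the column if it contains multi-byte UTF-8 characters. A minimal sketch of a per-column byte-length check (the CSV content here is made up; point it at your real file instead):

```python
import csv
import io

def max_column_byte_lengths(csv_text, has_header=True):
    """Return the max UTF-8 byte length seen in each column of a CSV string."""
    reader = csv.reader(io.StringIO(csv_text))
    if has_header:
        next(reader)  # skip the header row, matching IGNOREHEADER 1
    maxima = {}
    for row in reader:
        for i, field in enumerate(row):
            # Redshift VARCHAR lengths are in bytes, not characters,
            # so measure the UTF-8 encoded length of each field.
            nbytes = len(field.encode("utf-8"))
            maxima[i] = max(maxima.get(i, 0), nbytes)
    return maxima

# 'é' is 1 character but 2 bytes in UTF-8, so "Chloé" measures 6 bytes
sample = "name,city\nZoe,Paris\nChloé,Besançon\n"
print(max_column_byte_lengths(sample))  # → {0: 6, 1: 9}
```

If any column's max byte length exceeds its DDL width, that row would trip the error even though a character count looks safe.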
I added the TRUNCATECOLUMNS parameter:
TRUNCATECOLUMNS — Truncates data in columns to the appropriate number of characters so that it fits the column specification. Applies only to columns with a VARCHAR or CHAR data type, and rows 4 MB or less in size.
I'm still getting this error when copying from S3 to Redshift: String length exceeds DDL length
COPY [table name]
FROM [s3 path]
IAM_ROLE [iam role]
FORMAT CSV
IGNOREHEADER 1
REGION 'us-west-2'
BLANKSASNULL
TRIMBLANKS
TRUNCATECOLUMNS;
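When a COPY fails, Redshift logs the offending row in the STL_LOAD_ERRORS system table, which can pinpoint exactly which column, line, and raw value tripped the length check. A diagnostic query along these lines (run against the cluster right after the failed load):

```sql
-- Most recent load errors: which file, line, column, and value failed
SELECT starttime, filename, line_number, colname, col_length,
       err_reason, raw_field_value
FROM stl_load_errors
ORDER BY starttime DESC
LIMIT 5;
```

The col_length and raw_field_value columns together usually make it obvious whether the problem is a genuinely oversized value, a multi-byte encoding issue, or a delimiter/quoting problem shifting data into the wrong column.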
TEXT instead of a specific VARCHAR[n] length? Redshift (being Postgres) can handle TEXT nicely. - John Rotenstein