8 votes

I've been trying to unload some data from Redshift to an S3 bucket, but I've been getting the following error:

Amazon Invalid operation: cannot drop active portal; [SQL State=XX000, DB Errorcode=500310]

Note that the Redshift cluster and the S3 bucket are in two different regions, so I specified the region within the UNLOAD.
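For reference, the command I'm running has roughly the following shape (the table name, bucket, and role ARN are placeholders, not my real values):

UNLOAD ('SELECT * FROM my_table')
TO 's3://my-bucket/my_prefix_'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
-- REGION is needed because the bucket is in a different region than the cluster
REGION 'us-east-1'
DELIMITER ','
GZIP;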

I've tried looking the error up; what I found suggests it occurs when a table can't be dropped.

So I created a "test" table that I am 100% certain is not referenced by any view or other dependency, but I still get the same error.

I also tried troubleshooting with AWSQuickSolutions: Redshift Table Can’t Be Dropped or Drop Table Hangs, but I don't have access to svv_table_info.

Is there any way to do the unload?

I'm confused. You talk about wanting to UNLOAD table (which means exporting the data to S3), but then you talk about dropping the table. Which operation do you wish to perform? - John Rotenstein
I want to make a copy of the data I have in Redshift to S3 so that I can then download the CSV. I can't do it in SQL Workbench since my machine's memory won't allow it. I searched online and found that UNLOAD would do that, except I get the error I posted. When I search for the error, it talks about table drops, which confuses me. - ivb1
Are you using the Redshift UNLOAD command? (Can you show us your command?) Does it work if you unload to a bucket in the same region? Unloading has nothing to do with dropping tables. - John Rotenstein
This is happening for me, too. I worked around it by running the UNLOAD from a different app, in my case the Query Editor in the Redshift console. - nofinator
I was successful by restarting the session - darekarsam

5 Answers

3 votes

Reposting @nofinator's comment, since it worked for me and may be helpful for someone else too:

Running the UNLOAD from a different app, in my case the Query Editor in the Redshift console, resolved the exception Amazon Invalid operation: cannot drop active portal.

1 vote

OK, so I ran into the same problem. None of the known solutions applied to my issue because:

1) The cluster was in the same region as the S3 bucket I created.

2) I tried running the UNLOAD command via Python, the CLI, and the Redshift console, with the same results.

3) I tried adding a bucket policy for the Redshift role.

4) I tried running the UNLOAD command using two chained ARNs (the Redshift role and the S3 role).

Finally, I got it to work. What changed? I was now directly on my company's network instead of on the VPN, which is where I had been when I experienced this issue.

What finally worked for me:

UNLOAD ('SELECT * FROM table WHERE EXTRACT(YEAR FROM order_datetime_utc) = 2019')
TO 's3://bucket/'
REGION 'us-east-1'
-- Two chained roles are passed in a single IAM_ROLE string, separated by a comma
IAM_ROLE 'arn:aws:iam::0000000000:role/RedshiftCopyUnload,arn:aws:iam::0000000000:role/S3Access'
-- PARALLEL OFF writes a single output file per slice instead of many parts
PARALLEL OFF
DELIMITER AS '\t'
GZIP;
0 votes

I had the same issue yesterday. At least in my incident, there was a sev-2 in the data warehouse. I use Amazon DWP, which is used by Amazon retail, so I'm not sure if this is the same service used by AWS customers.

tl;dr: it's possible this is a server-side issue with the Redshift provider.

0 votes

I had this issue after I modified IAM permissions on the role. Unfortunately I had to reboot the cluster and then my UNLOAD command worked.

0 votes

I don't know exactly what caused this error, but I was getting it after a failed unload.

I disconnected and then reconnected my SQL client, and that got it working. Not super convenient, but better than restarting the cluster.