I won't get into why our warehouse is set up this particular way, but in short this operation is required across all of our tables because of some changes we are making to data ingestion.
USE DATABASE WAREHOUSE;
UPDATE WAREHOUSE.EVENTSOURCES_TEST
SET etl_uuid = a.etl_uuid
FROM (
    SELECT LTRIM(SPLIT_PART(_FILE, '/', 6), 's') AS etl_uuid
    FROM WAREHOUSE.EVENTSOURCES_TEST
) a
WHERE _FILE IS NOT NULL AND etl_uuid IS NULL;
Even on a Medium or Large warehouse, this update, which needs to run against every one of our tables to support the change, fails with a memory error:
SQL execution internal error: Processing aborted due to error 300005:946296401
Snowflake's recommendation is to increase the warehouse size for the duration of the update. That is fine, but this is already happening on one of our smaller tables, which has 158M rows; many of our tables hold ten times that. Ideally I would set up some kind of batch process that updates roughly 100k rows at a time and repeats until the entire table has been updated (a rough sketch of what I mean is below). But this does not seem to be supported natively within Snowflake itself, so I am looking for any additional insight on how to handle a situation like this.
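To make the batching idea concrete, here is a rough sketch of the kind of loop I have in mind, written as a Snowflake Scripting block. It assumes a hypothetical unique key column EVENT_ID that I could use to carve out batches, and it computes the UUID inline instead of going through the self-join above, so treat it as an illustration of the approach rather than something I have working.

EXECUTE IMMEDIATE $$
DECLARE
    rows_updated INTEGER DEFAULT 1;
BEGIN
    -- Keep updating batches of roughly 100k rows until no rows are left to update.
    WHILE (rows_updated > 0) DO
        UPDATE WAREHOUSE.EVENTSOURCES_TEST
        SET etl_uuid = LTRIM(SPLIT_PART(_FILE, '/', 6), 's')
        WHERE event_id IN (  -- event_id is a hypothetical unique key on this table
            SELECT event_id
            FROM WAREHOUSE.EVENTSOURCES_TEST
            WHERE _FILE IS NOT NULL AND etl_uuid IS NULL
            LIMIT 100000
        );
        rows_updated := SQLROWCOUNT;  -- rows affected by the last UPDATE
    END WHILE;
END;
$$;

My concern with something like this is that every iteration re-scans the table to find the next batch, and I do not know whether that ends up better or worse than just bumping the warehouse size for one big UPDATE, which is part of what I am hoping to get insight on.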