In the past we have found that each XSMALL can load about 40 MB/s from S3, and a SMALL can load roughly 2x that. So that's our baseline expectation of load speeds.
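As a rough sanity check you can turn that baseline into an expected load time. A quick sketch (only the XSMALL and SMALL figures are what we measured; the larger sizes assume throughput keeps doubling, which is my assumption for illustration):

```python
# Back-of-envelope load-time estimate from the baseline above.
# Only the XSMALL (~40 MB/s) and SMALL (2x) figures are measured;
# the larger sizes assume throughput keeps doubling (an assumption).
BASE_MBPS = 40  # observed per-XSMALL load rate from S3

SIZE_FACTOR = {"xsmall": 1, "small": 2, "medium": 4, "large": 8}

def expected_load_seconds(total_mb: float, warehouse_size: str) -> float:
    """Rough lower bound on COPY wall-clock time for a given data volume."""
    return total_mb / (BASE_MBPS * SIZE_FACTOR[warehouse_size])

# e.g. 10 GB on a SMALL: 10_000 MB / 80 MB/s ≈ 125 s
print(f"{expected_load_seconds(10_000, 'small'):.0f} s")
```

If your actual copy is taking far longer than that kind of estimate, something other than raw throughput is the problem.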
What can legitimately slow down a copy is if you are copying from the root of the bucket (s3://buck_name/) but there are millions of files in that directory and only one new 100-byte file to pick up. But I suspect that is also not the case here; you can rule it out with a quick listing check, as sketched below.
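A quick way to check is to count what is actually sitting under the prefix your stage or COPY points at. A sketch using boto3 (the bucket name is from the example above, the prefix is a placeholder):

```python
# Count objects under the prefix your COPY/stage points at. If the stage
# points at the bucket root and this runs into the millions, the file
# listing alone can dominate the copy time. Prefix is a placeholder.
import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

count = 0
for page in paginator.paginate(Bucket="buck_name", Prefix="path/to/new/files/"):
    count += page.get("KeyCount", 0)

print(f"objects under prefix: {count:,}")
```

If that count is huge relative to the one file you actually want loaded, point the stage (or the COPY path) at a narrower prefix.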
The next thing it might be is a failure to run the query itself, which in the profile shows up as multiple profile stage tabs along the lines of 1 \ 1001 \ 2002, where the jumps in the thousands indicate the query failed to execute and was re-run. This can sometimes be due to the warehouse getting corrupted, and sometimes due to the new runtime of the current release failing, in which case the retries may be run on older releases to see if those succeed. There are often clues to this in the profile; seeing time spent "spilling to internal/external storage" is something we have seen when such bugs occur.
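If you would rather check for the spilling symptom programmatically than hunt through the profile UI, something like this can work, assuming the Python connector and that the spill columns are exposed in ACCOUNT_USAGE.QUERY_HISTORY on your account (connection parameters and the query id are placeholders):

```python
# Pull execution status and spill figures for the suspect query.
# Assumes ACCOUNT_USAGE.QUERY_HISTORY access; account/user/warehouse
# values and the query id are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...", warehouse="my_wh"
)

SQL = """
select query_id,
       execution_status,
       bytes_spilled_to_local_storage,
       bytes_spilled_to_remote_storage,
       total_elapsed_time / 1000 as elapsed_s
from snowflake.account_usage.query_history
where query_id = %s
"""

with conn.cursor() as cur:
    cur.execute(SQL, ("<query_id>",))
    print(cur.fetchall())
```

Non-zero spill bytes on a copy that should be trivial is the kind of clue worth putting in front of support.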
But in reality, if things seem really strange, I would open a support ticket and ask for an explanation of what is happening, with the usual: this is what I am seeing, and this is why I think it's strange.