You probably have a good reason for that, but make sure you are aware that logging full request/response bodies can have a lot of undesired implications.
For example, if your service requires GDPR compliance, this is a huge issue. It can also hurt performance significantly and push you into quota limits. In general, it is usually not a good idea to do that.
Storing these logs in CloudWatch would be the easiest option for requests where that 1K limit is not an issue. If only a handful of requests exceed that limit, you could consider treating them as an exception.
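As a rough illustration of the "treat oversized bodies as an exception" idea, here is a minimal sketch, assuming a Python service that logs to CloudWatch through the standard logging module (the `MAX_INLINE_BODY_BYTES` threshold and `log_request` helper are hypothetical names, not anything AWS provides):

```python
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

# Hypothetical threshold mirroring the 1K limit mentioned above;
# CloudWatch itself accepts much larger log events.
MAX_INLINE_BODY_BYTES = 1024

def log_request(request_id, body):
    """Log the body inline when it is small; otherwise take the exception path."""
    raw = json.dumps(body)
    if len(raw.encode("utf-8")) <= MAX_INLINE_BODY_BYTES:
        logger.info("request %s body=%s", request_id, raw)
    else:
        # Oversized payloads are the exception path: log only a prefix and a
        # marker so they can be handled separately (e.g. offloaded to S3).
        logger.info("request %s body_truncated=%s", request_id, raw[:MAX_INLINE_BODY_BYTES])
```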
You could use S3 / DynamoDB / ElasticSearch, depending on what you want to do with the logs, and each comes with its own tradeoffs.
S3 - This would allow you to store very large requests/responses, but it can create a lot of fragmentation. You may end up with a lot of small files, and you also need some sort of index (probably storing the S3 key in the CloudWatch logs; see the sketch after this list). Searching can be somewhat painful in this case (although you may be able to use Athena, depending on how you store the objects).
DynamoDB - Easy to store, but you can run into a lot of quota limits if your API traffic is heavy, and you may need to raise your provisioned capacity (and costs) considerably to avoid them. Also, each item has a 400 KB size limit. I personally don't recommend this approach.
ElasticSearch - The default document size limit is 100 MB, but it can be increased. It would make it easy to query this data later on.
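For the S3 option, a minimal sketch of the "store the body, log only the key" pattern mentioned above, assuming boto3 is available (the bucket name and the `offload_body_to_s3` helper are hypothetical):

```python
import json
import logging
import uuid

import boto3

logger = logging.getLogger()
logger.setLevel(logging.INFO)
s3 = boto3.client("s3")

# Hypothetical bucket name for illustration only.
LOG_BUCKET = "my-api-request-logs"

def offload_body_to_s3(request_id, body):
    """Store the full body in S3 and log only the S3 key to CloudWatch."""
    key = f"requests/{request_id}/{uuid.uuid4()}.json"
    s3.put_object(Bucket=LOG_BUCKET, Key=key, Body=json.dumps(body).encode("utf-8"))
    # The CloudWatch entry stays small; the key acts as the index back to the full payload.
    logger.info("request %s full body stored at s3://%s/%s", request_id, LOG_BUCKET, key)
    return key
```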
I'd say ElasticSearch is the most appropriate option for this case, given the amount of information involved. Also, depending on the volume, some of these solutions would eventually require a publish-subscribe mechanism (e.g. Kinesis) in between to handle burst limits and message grouping (with S3, for example, you may want to group multiple entries into a single file).
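If you go that route, a Kinesis Data Firehose delivery stream is one way to get that buffering and grouping without building it yourself, since Firehose batches records and flushes them to the configured destination (S3, ElasticSearch, etc.). A minimal sketch, assuming boto3 and a hypothetical stream name:

```python
import json

import boto3

firehose = boto3.client("firehose")

# Hypothetical delivery stream configured to buffer and deliver to S3 or ElasticSearch.
DELIVERY_STREAM = "api-request-logs"

def publish_log_entry(entry):
    """Send one request/response record to the buffered delivery stream."""
    firehose.put_record(
        DeliveryStreamName=DELIVERY_STREAM,
        # Newline-delimited JSON so batched records stay easy to split on the S3 side.
        Record={"Data": (json.dumps(entry) + "\n").encode("utf-8")},
    )
```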