
I have an application that uses JMS to send files that are a few megabytes in size. Would it be possible to use Amazon SQS as the JMS provider for this application, as described here?

The problem is that the maximum size of an SQS message is 256 KB. One way around this is to break each file up into multiple messages of up to 256 KB. But if I do that, would having multiple producers send files at the same time break the architecture, as messages from different producers become interleaved?
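For what it's worth, interleaving on its own need not break reassembly if every chunk carries a file ID and a sequence number. A minimal sketch (field names, chunk size, and base64 framing are all illustrative, not part of any SQS API):

```python
import base64
import json
import uuid

# Stay safely under the 256 KB SQS limit after JSON + base64 overhead.
CHUNK_SIZE = 200 * 1024

def split_into_messages(file_bytes, file_id=None):
    """Split a file into SQS-sized message bodies tagged with a file ID and sequence number."""
    file_id = file_id or str(uuid.uuid4())  # unique per file, so producers never collide
    chunks = [file_bytes[i:i + CHUNK_SIZE] for i in range(0, len(file_bytes), CHUNK_SIZE)]
    return [
        json.dumps({
            "file_id": file_id,
            "seq": seq,
            "total": len(chunks),
            "data": base64.b64encode(chunk).decode("ascii"),
        })
        for seq, chunk in enumerate(chunks)
    ]

def reassemble(message_bodies):
    """Group received bodies by file_id and rebuild each file whose chunks have all arrived."""
    buckets = {}
    for raw in message_bodies:
        msg = json.loads(raw)
        buckets.setdefault(msg["file_id"], {})[msg["seq"]] = msg
    files = {}
    for file_id, parts in buckets.items():
        total = next(iter(parts.values()))["total"]
        if len(parts) == total:  # complete set for this file
            files[file_id] = b"".join(
                base64.b64decode(parts[i]["data"]) for i in range(total)
            )
    return files
```

Note that standard SQS queues do not guarantee ordering or exactly-once delivery, so a consumer would still need to buffer out-of-order chunks and deduplicate, which is much of why the S3-pointer approach below tends to be simpler.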

I'm not familiar with JMS, but would uploading the files to S3 and having the SQS messages contain their paths work? Before posting to the queue you'd have to confirm the upload, and after removing from the queue you would need to remove the file (presumably). - dubeegee
The message transfer is from ec2 nodes to other ec2 nodes in the same region. Using S3 as an intermediate would incur data transfer costs, and probably slow things down. - leontp587
There is no data transfer charge for EC2 <=> S3 data transfer within the same region and using S3 is unlikely to cause an undesirably large amount of additional latency. - Michael - sqlbot

2 Answers


In this scenario you cannot send the original message through SQS directly; you will have to send a new message containing a reference to the original payload. The reference can point to an S3 object or to a custom location on-premises or within AWS. The S3 option probably involves the least work and has the best cost efficiency (both to build and to run).
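This is the classic claim-check pattern: the queue carries only a small pointer, the bulk payload lives in S3. A sketch of the pointer message itself (the JSON field names are illustrative, not an AWS convention):

```python
import json

def make_pointer_message(bucket, key, size=None):
    """Build the small SQS message body that points at the real payload in S3."""
    body = {"s3_bucket": bucket, "s3_key": key}
    if size is not None:
        body["size_bytes"] = size
    return json.dumps(body)

def parse_pointer_message(body):
    """Consumer side: recover the S3 location from the queue message body."""
    msg = json.loads(body)
    return msg["s3_bucket"], msg["s3_key"]

# In a real producer this would be paired with boto3 calls, roughly:
#   boto3.client("s3").upload_file(local_path, bucket, key)
#   boto3.client("sqs").send_message(QueueUrl=queue_url,
#                                    MessageBody=make_pointer_message(bucket, key))
```

If the application must stay on JMS, note that the Amazon SQS Extended Client Library implements exactly this S3-offloading transparently for Java clients.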

If you go with the S3 option, an AWS Lambda function can be used to drop the pointer message into SQS.
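A sketch of such a Lambda handler, triggered by an S3 event notification (the queue URL is a placeholder, and the `sqs` client is injected as a parameter here purely so the flow can be exercised without AWS; in Lambda it would be `boto3.client("sqs")`):

```python
import json
import urllib.parse

def handler(event, context=None, sqs=None, queue_url="QUEUE_URL_PLACEHOLDER"):
    """S3-triggered Lambda: forward each new object's location to SQS as a pointer message."""
    sent = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # Keys in S3 event notifications are URL-encoded (spaces arrive as '+').
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        body = json.dumps({"s3_bucket": bucket, "s3_key": key})
        if sqs is not None:
            sqs.send_message(QueueUrl=queue_url, MessageBody=body)
        sent.append(body)
    return sent
```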

On a side note, the original message considered here seems to be self-contained. It may be a good idea to revisit the contents of the message; you may find ways to trim it and send only references around instead, which will result in a smaller payload.


If everything is in the same region, the latency and data transfer cost are minimal. Putting the item in S3 and sending its location through SQS lets your solution handle data of any size, and takes the burden of scaling item count and item size off your hands.
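The consumer side of that flow might look like the sketch below. The `sqs` and `s3` arguments would be boto3 clients in production; they are plain parameters here (an assumption for testability), and the message is deleted only after the download succeeds so that a failure causes SQS to redeliver it:

```python
import json

def process_one(sqs, s3, queue_url):
    """Receive one pointer message, fetch the referenced object from S3,
    then delete the message from the queue."""
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
    messages = resp.get("Messages", [])
    if not messages:
        return None
    msg = messages[0]
    pointer = json.loads(msg["Body"])
    obj = s3.get_object(Bucket=pointer["s3_bucket"], Key=pointer["s3_key"])
    data = obj["Body"].read()
    # Delete only after a successful download, so failures lead to redelivery.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
    return data
```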

While the data transfer costs are minimal, you will still incur data storage costs in S3; you can use S3 lifecycle rules to delete the objects once they are no longer needed.
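For instance, a lifecycle rule can expire transferred payloads automatically after a day (the rule ID, prefix, and bucket name below are illustrative):

```python
# Expire objects under the "transfers/" prefix one day after creation.
lifecycle_config = {
    "Rules": [
        {
            "ID": "expire-transferred-payloads",
            "Filter": {"Prefix": "transfers/"},
            "Status": "Enabled",
            "Expiration": {"Days": 1},
        }
    ]
}

# Applied with boto3, roughly:
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="my-transfer-bucket", LifecycleConfiguration=lifecycle_config)
```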

@D.Luffy mentioned an important and exciting option with Lambda: keep adding the items to S3, enable S3 event notifications into the queue, and process each queue item (transfer it to another EC2 instance, etc.), making the whole solution fire-and-forget.

Please do not hesitate to leverage S3 alongside SQS.