I'm a little lost here and looking for some guidance. I'm using the Ruby aws-sdk gem (1.29.1) to set up jobs in Elastic Transcoder. In my staging and production environments, every call to create_job() fails with 'Timeout::Error: execution expired'; it works fine in development. Each environment has its own transcoder pipeline, S3 bucket for input/output, SNS topics, and groups with their own policies. I'm calling transcoder.create_job() via the delayed_job gem.
I've looked everywhere I can think to look in AWS and in my logs and cannot figure out what would be causing this. I don't even know where to look next.
The code that triggers the timeout is transcoder.create_job().
def self.setup_aws_transcoder(id, s3_key)
  unless s3_key.blank?
    # Elastic Transcoder client pointed at this environment's region
    transcoder = AWS::ElasticTranscoder::Client.new(
      region: ENV['AET_REGION']
    )

    transcoder.create_job(
      pipeline_id: ENV['AET_PIPELINE_ID'],
      input: {
        key:          s3_key,
        frame_rate:   'auto',
        resolution:   'auto',
        aspect_ratio: 'auto',
        interlaced:   'auto',
        container:    'auto'
      },
      output: {
        key:       "#{s3_key}/#{id}.mp4",
        preset_id: '1351620000001-100070', # System preset: Web
        composition: [
          {
            time_span: {
              duration: '00:10:00.000' # only keep the first 10 minutes
            }
          }
        ]
      }
    )
  end
end

# Queue file processing in the background via delayed_job
def queue_processing
  Video.delay.setup_aws_transcoder(id, s3_key)
end
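In case it helps anyone point me somewhere: a minimal sketch of what I could drop into an initializer to raise the v1 SDK's HTTP timeouts and log the raw HTTP traffic from the delayed_job worker (assuming the standard v1 config options :http_open_timeout, :http_read_timeout, and :http_wire_trace, plus Rails.logger), just to see which phase of create_job is actually hanging:

# Sketch only: global aws-sdk v1 config so the worker logs wire traffic
# and waits longer before raising Timeout::Error.
AWS.config(
  region:            ENV['AET_REGION'],
  http_open_timeout: 30,           # more time to open the connection
  http_read_timeout: 120,          # more time waiting on the response
  http_wire_trace:   true,         # dump request/response to the logger
  logger:            Rails.logger
)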
Also just updated the aws-sdk gem to 1.44 on the off chance it was a gem bug, and it didn't help.
I'm at a loss. I can provide whatever code or AWS settings are necessary to figure this out.
Update: I can hard-code all of the values (staging transcoder, bucket, user, etc.) in my dev environment, and it will create the appropriate job, transcode the video, and put it in the right bucket. But when I deploy that same hard-coded code to staging and run it from there, it times out again. The code is identical; the only differences are application.yml and database.yml. I shouldn't even be touching application.yml, though, since I'm hard-coding the access_key_id and secret_access_key values.
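For reference, the hard-coded client in that test looks roughly like this (real values redacted, region shown as a placeholder; :access_key_id and :secret_access_key are the standard per-client credential options in the v1 SDK):

# Rough sketch of the hard-coded test client (values redacted/placeholder).
transcoder = AWS::ElasticTranscoder::Client.new(
  region:            'us-east-1',       # placeholder for the staging region
  access_key_id:     'AKIA...REDACTED',
  secret_access_key: 'REDACTED'
)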