Currently, I am using an S3 trigger to invoke a Lambda function whenever a file is placed in S3.
My intent is to get the size of the uploaded object from the S3 event and pass that value into the Size parameter of an automated function that creates an EBS volume.
I then want to create an EBS volume sized to that file, copy the file onto the volume, and attach the volume to an EC2 instance that will process the file. I am using EBS volumes because the files uploaded to the main S3 bucket are compressed and need to be uncompressed and processed.
Is there a way to set the EBS volume Size based on the original file that was uploaded?
response = client.create_volume(
    AvailabilityZone='string',
    Encrypted=True|False,
    Iops=123,
    KmsKeyId='string',
    OutpostArn='string',
    Size=123,
    SnapshotId='string',
    VolumeType='standard'|'io1'|'gp2'|'sc1'|'st1',
    DryRun=True|False,
    TagSpecifications=[
        {
            'ResourceType': 'client-vpn-endpoint'|'customer-gateway'|'dedicated-host'|'dhcp-options'|'elastic-ip'|'fleet'|'fpga-image'|'host-reservation'|'image'|'instance'|'internet-gateway'|'key-pair'|'launch-template'|'natgateway'|'network-acl'|'network-interface'|'placement-group'|'reserved-instances'|'route-table'|'security-group'|'snapshot'|'spot-fleet-request'|'spot-instances-request'|'subnet'|'traffic-mirror-filter'|'traffic-mirror-session'|'traffic-mirror-target'|'transit-gateway'|'transit-gateway-attachment'|'transit-gateway-multicast-domain'|'transit-gateway-route-table'|'volume'|'vpc'|'vpc-peering-connection'|'vpn-connection'|'vpn-gateway'|'vpc-flow-log',
            'Tags': [
                {
                    'Key': 'string',
                    'Value': 'string'
                },
            ]
        },
    ],
    MultiAttachEnabled=True|False
)