I need to provide some context before asking the question:
CONTEXT
I have a Phoenix application that is deployed to Heroku. By default, Brunch is used to compile static assets such as .js, .css, and images.
- Those assets are stored in `./assets` (as of Phoenix 1.3).
- Those assets are compiled to `./priv/static/`.
The compilation process generates a `cache_manifest.json` after the assets are digested using MD5 fingerprinting.
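For reference, the generated `cache_manifest.json` maps each logical path to its digested filename, roughly in this shape (the digest values below are illustrative, not real output, and some per-file metadata is omitted):

```json
{
  "version": 1,
  "latest": {
    "js/app.js": "js/app-6a9b1e3c....js"
  },
  "digests": {
    "js/app-6a9b1e3c....js": {
      "logical_path": "js/app.js",
      "digest": "6a9b1e3c..."
    }
  }
}
```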
It may be important to note that I'm using CloudFlare's free tier as a CDN.
I'm not concerned about user-uploaded assets here, only the app's own assets.
Relevant part of the app's `config/prod.exs`:
```elixir
config :bespoke_work, BespokeWork.Web.Endpoint,
  on_init: {BespokeWork.Web.Endpoint, :load_from_system_env, []},
  http: [port: {:system, "PORT"}],
  url: [scheme: "https", host: System.get_env("HEROKU_HOST"), port: System.get_env("HEROKU_PORT")],
  static_url: [scheme: "https", host: System.get_env("STATIC_ASSETS"), port: 443],
  force_ssl: [rewrite_on: [:x_forwarded_proto]],
  cache_static_manifest: "priv/static/cache_manifest.json",
  secret_key_base: System.get_env("SECRET_KEY_BASE")
```
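For context, once `static_url` is set and the manifest is loaded, the generated route helpers resolve static paths against that host (a sketch; the module names follow the app name in this question, and the digest shown is made up):

```elixir
# Sketch: resolving a digested asset URL against the static_url host.
url = BespokeWork.Web.Router.Helpers.static_url(BespokeWork.Web.Endpoint, "/js/app.js")
# e.g. "https://<STATIC_ASSETS host>/js/app-6a9b1e3c....js"
```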
QUESTION
How can I prevent Heroku from building the assets and instead, during deploy, automatically upload the digested assets to an Amazon S3 bucket?
Will that make Heroku's slug smaller?
POSSIBLE SOLUTION
Reducing Heroku's Slug Size:
• In the `Procfile`, redirect `mix phx.digest` to output digested items to `/dev/null`, or
• Redefine `mix deps.compile` for prod so that it does not generate the assets.
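One hedged way to skip the build on Heroku, assuming the commonly used `heroku-buildpack-phoenix-static` buildpack: that buildpack lets a custom `compile` file at the project root replace its default asset build step, so the hook can simply do nothing (the file name and behavior are that buildpack's convention, not something from this question):

```shell
# ./compile — custom build hook picked up by heroku-buildpack-phoenix-static
# (assumption: you use that buildpack). Doing nothing here means no assets
# are built or digested on Heroku, which keeps them out of the slug.
echo "Skipping asset build: digested assets are served from S3"
```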
- Generate the assets locally.
- Either upload them manually or use a shell script to upload them to S3 (in my case, probably using Codeship's Amazon S3 deployment).
- Use `static_url` to generate paths "pointing" to the S3 bucket.
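The steps above can be sketched as a small script. Everything here is an assumption: the bucket name, the `--acl` choice, and that Brunch and a configured `aws` CLI are installed. With `DRY_RUN=1` (the default below) each step is only printed, not executed:

```shell
#!/bin/sh
# Sketch: build assets locally, digest them, and sync priv/static to S3.
set -e
BUCKET="${ASSETS_BUCKET:-my-app-assets}"   # hypothetical bucket name

# With DRY_RUN=1 (default here) each step is echoed instead of executed.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "$*"; else "$@"; fi; }

run brunch build --production              # run from ./assets in a real deploy
run mix phx.digest                         # writes digested files + manifest to priv/static
run aws s3 sync priv/static "s3://$BUCKET" --acl public-read
```

Run with `DRY_RUN=0` to actually execute; Codeship's S3 deployment step could replace the final `aws s3 sync`.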
Is there any simpler way to accomplish this?