If you are running your production system on AWS, you should first and foremost leverage CloudWatch. In your Elixir code, configure your logger like this:
```elixir
config :logger,
  handle_otp_reports: true,
  handle_sasl_reports: true,
  metadata: [:application, :module, :function, :file, :line]

config :logger,
  backends: [
    {LoggerFileBackend, :shared_error}
  ]

config :logger, :shared_error,
  path: "#{logging_dir}/verbose-error.log",
  level: :error
```
Inside your Dockerfile, set an environment variable that controls where `erl_crash.dump` gets written, for example:

```dockerfile
ENV ERL_CRASH_DUMP=/opt/log/erl_crash.dump
```
Then configure `awslogs` inside a `.config` file under `.ebextensions` as follows:

```yaml
files:
  "/etc/awslogs/config/stdout.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      [erl_crash.dump]
      log_group_name=/aws/elasticbeanstalk/your_app/erl_crash.dump
      log_stream_name={instance_id}
      file=/var/log/erl_crash.dump

      [verbose-error.log]
      log_group_name=/aws/elasticbeanstalk/your_app/verbose-error.log
      log_stream_name={instance_id}
      file=/var/log/verbose-error.log
```
And make sure you map a volume for your container in `Dockerrun.aws.json`:

```json
"Logging": "/var/log",
"Volumes": [
  {
    "HostDirectory": "/var/log",
    "ContainerDirectory": "/opt/log"
  }
],
```
After that, you can inspect your error messages in CloudWatch.
Now, if you are using Elastic Beanstalk with a Docker deployment (which my example above implies), as opposed to AWS ECS, then the container's stdout/stderr output is redirected by default to `/var/log/eb-docker/containers/eb-current-app/stdouterr.log` and shipped to CloudWatch.
The main purpose of `erl_crash.dump` is to at least know that your application crashed and took the container down with it. AWS EB will normally restart the container, which would otherwise keep you ignorant of the restart. You can also obtain this information from other Docker-related logs, and you can configure alarms to listen for them so you are notified whenever your container had to restart. But another advantage of shipping `erl_crash.dump` to CloudWatch is that, if need be, you can always export it to S3 later, download the file, and load it into the Crashdump Viewer (part of the `:observer` application) to analyze what went wrong.
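Concretely, once you have downloaded the dump to your local machine, the analysis step is a one-liner (this assumes a local Erlang/OTP install with the `observer` and `wx` applications available):

```elixir
# In a local iex session:
:crashdump_viewer.start()
# A GUI file dialog opens; point it at the downloaded erl_crash.dump
# to browse processes, message queues, and memory at crash time.
```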
If, after consulting the logs, you still require more intimate interaction with your production application, then you need to leverage a `remsh` connection to your node. If you use distillery, you would configure the cookie and the node name of your production release like this:
Inside `rel/config.exs`, set the cookie:

```elixir
environment :prod do
  set include_erts: false
  set include_src: false
  set cookie: :"my_cookie"
end
```
and under `rel/templates/vm.args.eex` set the variables:

```
-name <%= node_name %>
-setcookie <%= release.profile.cookie %>
```
and inside `rel/config.exs`, define the release like this:

```elixir
release :my_app do
  set version: "0.1.0"
  set overlays: [
    {:template, "rel/templates/vm.args.eex", "releases/<%= release_version %>/vm.args"}
  ]
  set overlay_vars: [
    node_name: "my_app@127.0.0.1"
  ]
end
```
Then you can connect directly to your production node running inside Docker: first SSH into the EC2 instance that houses the container, then run:

```shell
CONTAINER_ID=$(sudo docker ps --format '{{.ID}}')
sudo docker exec -it $CONTAINER_ID \
  bash -c "iex --name debug@127.0.0.1 --cookie my_cookie --remsh my_app@127.0.0.1"
```

Note the `--remsh` flag and the distinct node name for the debug shell: starting a second node with the same name as the running one would fail, so you start a fresh node and open a remote shell on the production node instead.
Once inside, you can poke around or, if need be and at your own peril, dynamically inject modified code for the module you would like to inspect. An easy way to do that is to create a file inside the container and invoke `Node.spawn_link(target_node, fn -> Code.eval_file(file_name, path) end)`.
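As a sketch (node name, module, and paths are illustrative), assuming you saved a patched copy of some module as `/tmp/patch.ex` inside the container:

```elixir
# From the attached debug shell; the production node's name is assumed here.
target_node = :"my_app@127.0.0.1"

# Spawn a process on the production node that compiles and loads the
# patched source. Code.eval_file/2 takes a file name and its base directory.
Node.spawn_link(target_node, fn ->
  Code.eval_file("patch.ex", "/tmp")
end)
```

Any module defined in the evaluated file replaces the currently loaded version on that node, so keep a pristine copy of the original source around.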
If your production node is already running and you do not know the cookie, you can go inside the running container, run `ps aux`, and inspect the `beam.smp` command line to figure out which (possibly randomly generated) cookie was applied, then use it accordingly.
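A sketch of extracting the cookie from the process list; the `echo` below stands in for `ps aux | grep beam.smp` on a real host, and the command line shown is illustrative:

```shell
# Pull the -setcookie argument out of the running BEAM's command line.
echo "beam.smp -- -root /usr/lib/erlang -setcookie SECRETCOOKIE -name my_app@127.0.0.1" \
  | grep -o -e '-setcookie [^ ]*' | awk '{print $2}'
# prints SECRETCOOKIE
```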
Docker is an impediment to the way `epmd` communicates with other nodes. The best option, therefore, would be to create your own AWS AMI image using Packer and do bare-metal deployments instead.
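Alternatively, a common workaround (a sketch; the port numbers are arbitrary) is to pin the Erlang distribution port range via kernel parameters in `vm.args` and publish those ports, plus epmd's, through Docker:

```
## vm.args -- constrain distribution to a fixed, publishable port range
-kernel inet_dist_listen_min 9100
-kernel inet_dist_listen_max 9105
```

You would then run the container with `-p 4369:4369 -p 9100-9105:9100-9105` so that epmd (port 4369) and the distribution ports are reachable from outside the container.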
Amazon has recently released a new feature for AWS ECS, the `awsvpc` networking mode, which may facilitate inter-container `epmd` communication and thus allow connecting to your node directly. I have not tried it out yet, so I may be wrong.
If you are running on a provider other than AWS, then figuring out how to get easy access to your remote logs, via an SSM-agent-like service or something similar, is a must.