Putting configuration in the environment is a widely acknowledged best practice now, and that configuration often includes secrets.
But environment variables baked into container images (like those set with the Dockerfile `ENV` instruction) are not secure. They are built into the image, after all, so anyone with access to the image can read the secrets.
We also know that secrets shouldn't live alongside the application code itself. Keeping them in configuration management files is not terrible, but that's still next to code, although there are ways to keep secrets safe in the context of configuration management.
What's the best way forward? A Docker entrypoint script that runs as soon as the container starts, combined with some sort of secrets backend like Vault, Keywhiz, Credstash, or AWS Parameter Store.
Which secrets backend depends on the requirements of the business and the app. Need auditability? Credstash doesn't really offer that, so choose something else. Don't want to self-host? Use a managed service like Parameter Store.
Example
For this example, we'll imagine we have a script at `/app/bin/creds`. It retrieves a secret from the secrets backend and prints it to stdout.
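The article doesn't show `/app/bin/creds` itself. As a minimal sketch of the interface, here is a hypothetical version where a flat `name=value` file stands in for the real backend (the `SECRETS_FILE` path and file format are assumptions, not part of the original; a real version would call Vault, Credstash, Parameter Store, etc.):

```sh
#!/bin/sh
# Hypothetical sketch of /app/bin/creds: look up one secret by name
# and print its value to stdout. A flat name=value file stands in
# for a real secrets backend here.
set -eu

SECRETS_FILE="${SECRETS_FILE:-/run/secrets/app.env}"

# Print the value for the requested secret name.
grep "^$1=" "$SECRETS_FILE" | head -n 1 | cut -d= -f2-
```

The only contract that matters to the entrypoint is "take a name as an argument, print the value to stdout", so the backend can be swapped without touching the entrypoint.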
```sh
#!/bin/sh

# Fetch secrets from the backend
export SOME_SECRET="$(/app/bin/creds some_secret)"
export OTHER_CONFIG="$(/app/bin/creds other_config)"

# Start the app, replacing the shell with `exec`
exec "$@"
```
As far as the app is concerned, the configuration is still in the environment, but in production the actual secrets and config are retrieved from a storage backend. They are not part of the image itself; when the image runs in a context where it can reach the secrets backend, the secrets can be retrieved.
Using the example above in an image:
```dockerfile
FROM alpine

# ...

# the script from above
ADD app-entrypoint.sh /app/entrypoint.sh
ENTRYPOINT ["/app/entrypoint.sh"]

# CMD here if you like, or when running the container
CMD ["/app/app"]
```
Warnings
Environment variable secrets still have caveats. Apps that spawn subprocesses need to sanitize the environment so secrets don't leak to children, for instance, and logs can't blindly dump the entire environment for debugging purposes.
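For example, a secret can be stripped from a child process's environment with `env -u` (the variable name `SOME_SECRET` is the illustrative one from the entrypoint example):

```sh
#!/bin/sh
# SOME_SECRET is an illustrative name; the value is fake.
export SOME_SECRET="hunter2"

# By default every child process inherits the secret:
sh -c 'echo "child: ${SOME_SECRET:-unset}"'
# → child: hunter2

# env -u drops it from that child's environment only:
env -u SOME_SECRET sh -c 'echo "sanitized: ${SOME_SECRET:-unset}"'
# → sanitized: unset
```

`env -u` only affects the one child being launched; the parent process keeps the variable.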
My goal here is to show a practical example of what "store your secrets and config in the environment" really looks like in a dockerized application: an entrypoint script that fetches secrets from a backend into environment variables before `exec`ing the app itself.
Some AWS Parameter Store Specifics
I've been using AWS Parameter Store with ECS in a way very similar to the above. There, the containers themselves can access the secrets via the ECS task's IAM role.
Parameter Store supports hierarchies by giving secrets `/names/like/this`, and their resource ARNs can be scoped to a specific application. So a convention like `/{appName}/{environment}/{secretName}` (for example `/acmeapp/prod/database_url`) means IAM permissions can be crafted to restrict secrets to specific applications and environments.
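As a sketch, an IAM policy restricting an app to its own subtree might look like this (the region, account ID, and path are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssm:GetParameter",
        "ssm:GetParameters",
        "ssm:GetParametersByPath"
      ],
      "Resource": "arn:aws:ssm:us-east-1:123456789012:parameter/acmeapp/prod/*"
    }
  ]
}
```

Note that `SecureString` parameters additionally require `kms:Decrypt` on the KMS key used to encrypt them.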
Parameter Store API calls are also logged via CloudTrail, which provides some auditability.
Other Solutions
The first and most obvious alternative is for the application itself to talk to the secrets backend directly. That's easy enough if the backend speaks a common protocol (like HTTP), but it makes a development environment harder to manage.
Keywhiz (mentioned above) has a few libraries that provide FUSE or tmpfs filesystems from which secrets can be read like regular files (including fun things like file permissions).
Docker swarm has a similar system in which secrets are mounted into the running containers as files, and so does Kubernetes.
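The entrypoint pattern adapts to file-mounted secrets too: instead of calling a backend, the script reads files. A sketch, assuming Docker swarm's `/run/secrets` mount convention (the `database_url` secret name is illustrative, and `SECRETS_DIR` is made overridable for local development):

```sh
#!/bin/sh
# Load file-mounted secrets into the environment, then exec the app.
# /run/secrets is where Docker swarm mounts secrets by default;
# SECRETS_DIR can be overridden for local development.
set -eu

SECRETS_DIR="${SECRETS_DIR:-/run/secrets}"

export DATABASE_URL="$(cat "$SECRETS_DIR/database_url")"

exec "$@"
```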