Docker, Supervisord and logging – how to consolidate logs in docker logs?


So, experimenting with Docker + Supervisord + Django app via uWSGI. I have the whole stack working fine, but need to tidy up the logging.

If I launch supervisor in non-daemon mode,

/usr/bin/supervisord -n

Then I get the logging output for supervisor played into the docker logs stdout. However, if supervisord is in daemon mode, its own logs get stashed away in the container filesystem, and the logs of its applications do too – in their own app__stderr/stdout files.

What I want is to log both supervisor, and application stdout to the docker log.

Is starting supervisord in non-daemon mode a sensible idea for this, or does it cause unintended consequences? Also, how do I get the application logs also played into the docker logs?

I accomplished this using supervisor-stdout.

Install supervisor-stdout in your Docker image:

RUN apt-get install -y python-pip && pip install supervisor-stdout

Supervisord Configuration

Edit your supervisord.conf to look like this:

[eventlistener:stdout] 
command = supervisor_stdout 
buffer_size = 100 
events = PROCESS_LOG 
result_handler = supervisor_stdout:event_handler

A Docker container is like a Kleenex: you use it, then you drop it. To stay “alive”, Docker needs something running in the foreground (whereas daemons run in the background); that’s why you are using Supervisord.

So you need to “redirect/add/merge” the process output (access and error) into the Supervisord output you see when running your container.

As Drew said, everyone is using supervisor-stdout to achieve this (to me, this should be added to the supervisord project!). Something Drew forgot to say: you may need to add

stdout_events_enabled = true
stderr_events_enabled = true

to the supervisord program configuration block.
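Putting the pieces together, a minimal supervisord.conf might look like the sketch below; the [program:uwsgi] block and its command path are illustrative placeholders for your own app, while the event listener section is taken from the supervisor-stdout setup above:

```ini
[supervisord]
nodaemon = true

; Your application process; stdout/stderr events feed the listener below.
[program:uwsgi]
command = /usr/local/bin/uwsgi --ini /etc/uwsgi/uwsgi.ini
stdout_events_enabled = true
stderr_events_enabled = true

; supervisor-stdout forwards PROCESS_LOG events to supervisord's stdout,
; which Docker captures in `docker logs`.
[eventlistener:stdout]
command = supervisor_stdout
buffer_size = 100
events = PROCESS_LOG
result_handler = supervisor_stdout:event_handler
```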

Also very useful: imagine your process logs to a file instead of stdout. You can ask supervisord to watch it:

command=tail -f /var/log/php5-fpm.log

This will redirect the php5-fpm.log content to stdout, and from there to supervisord’s stdout via supervisor-stdout.

supervisor-stdout requires installing python-pip, which downloads ~150 MB; for a container, I think that is a lot just to install another tool.

Redirecting the logfile to /dev/stdout works for me:

[supervisord]
nodaemon = true
logfile = /dev/stdout
logfile_maxbytes = 0
I agree, not using the daemon mode sounds like the best solution, but I would probably employ the same strategy you would use when you had actual physical servers or some kind of VM setup: centralize logging.

You could use something self-hosted like logstash inside the container to collect logs and send them to a central server. Or use a commercial service like Loggly or Papertrail to do the same.

Today’s best practice is to have minimal Docker images. For me, the ideal container for a Python application contains just my code, supporting libraries, and something like uWSGI if necessary.

I published one solution: a simple Django application behind uWSGI, configured to display logs from uWSGI and the Django app on the container’s stdout without the need for supervisor.

Indeed, starting supervisord in non-daemon mode is the best solution.

You could also use volumes in order to mount the supervisord’s logs to a central place.

I had the same problem with my Python app (Flask). The solution that worked for me was to:

  • Start supervisord in nodaemon mode (supervisord -n)
  • Redirect log to /proc/1/fd/1 instead of /dev/stdout

  • Set these two environment variables in my docker image PYTHONUNBUFFERED=True and PYTHONIOENCODING=UTF-8
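The PYTHONUNBUFFERED setting matters because Python block-buffers stdout when it is not attached to a TTY (as under supervisord), so a crashing process can take unflushed log lines with it. A small sketch of the difference:

```python
import os
import subprocess
import sys

# A child process that prints a log line and then dies without flushing
# (os._exit skips interpreter cleanup, mimicking a hard crash).
code = 'import os; print("log line before hard exit"); os._exit(1)'

env = dict(os.environ)
env.pop("PYTHONUNBUFFERED", None)
buffered = subprocess.run([sys.executable, "-c", code],
                          capture_output=True, text=True, env=env)

env["PYTHONUNBUFFERED"] = "1"
unbuffered = subprocess.run([sys.executable, "-c", code],
                            capture_output=True, text=True, env=env)

print(repr(buffered.stdout))    # the line is stuck in the buffer and lost
print(repr(unbuffered.stdout))  # the line reaches the pipe immediately
```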

Just add the lines below to your respective supervisor .ini config file:

stdout_logfile = /proc/1/fd/1
stdout_logfile_maxbytes = 0
Export these variables to the application (Linux) environment:

export PYTHONUNBUFFERED=True
export PYTHONIOENCODING=UTF-8
The answers/resolutions are collected from Stack Overflow and are licensed under CC BY-SA 2.5, CC BY-SA 3.0 and CC BY-SA 4.0.