Non-blocking subprocess.call


I’m trying to make a non-blocking subprocess call to run a slave.py script from my main.py program. I need to pass arguments from main.py to slave.py once, when slave.py is first started via subprocess.call; after that, slave.py runs for a period of time and then exits.

main.py
for index, args in enumerate(arg_list, start=1):
    subprocess.call(["python", "slave.py", args], shell=True)


{loop through program and do more stuff..}

And my slave script

slave.py
print(sys.argv)
while True:
    {do stuff with args in loop till finished}
    time.sleep(30)

Currently, slave.py blocks main.py from running the rest of its tasks. I simply want slave.py to be independent of main.py once I’ve passed args to it; the two scripts no longer need to communicate.

I’ve found a few posts on the net about non-blocking subprocess.call, but most of them center on requiring communication with slave.py at some point, which I currently do not need. Would anyone know how to implement this in a simple fashion?

You should use subprocess.Popen instead of subprocess.call.

Something like:

subprocess.Popen(["python", "slave.py"] + sys.argv[1:])

From the docs on subprocess.call:

Run the command described by args. Wait for command to complete, then return the returncode attribute.

(Also, don’t use a list to pass in the arguments if you’re going to use shell=True.)
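A minimal sketch of handing the arguments over without shell=True (the argument values and the inline stand-in for slave.py are hypothetical; the point is that each list element becomes one argv entry in the child, and Popen returns immediately):

```python
import subprocess
import sys

# Hypothetical arguments to hand to the slave once at startup.
args = ["alpha", "beta"]

# A stand-in for slave.py: a one-liner that just echoes its argv.
slave = [sys.executable, "-c", "import sys; print(sys.argv[1:])"]

# Without shell=True, the list is passed directly to the child;
# Popen returns at once instead of waiting for the child to exit.
p = subprocess.Popen(slave + args)

# ... main.py is free to do other work here ...

p.wait()  # reap the child whenever it is convenient
```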


Here’s an MCVE1 that demonstrates a non-blocking subprocess call:

import subprocess
import time

p = subprocess.Popen(['sleep', '5'])

while p.poll() is None:
    print('Still sleeping')
    time.sleep(1)

print('Not sleeping any longer.  Exited with returncode %d' % p.returncode)

An alternative approach, relying on more recent additions to the Python language for coroutine-based concurrency, is:

# python3.5 required but could be modified to work with python3.4.
import asyncio

async def do_subprocess():
    print('Subprocess sleeping')
    proc = await asyncio.create_subprocess_exec('sleep', '5')
    returncode = await proc.wait()
    print('Subprocess done sleeping.  Return code = %d' % returncode)

async def sleep_report(number):
    for i in range(number + 1):
        print('Slept for %d seconds' % i)
        await asyncio.sleep(1)

loop = asyncio.get_event_loop()

tasks = [
    asyncio.ensure_future(do_subprocess()),
    asyncio.ensure_future(sleep_report(5)),
]

loop.run_until_complete(asyncio.gather(*tasks))
loop.close()

1 Tested on OS X using Python 2.7 and Python 3.6.
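On Python 3.7+ the explicit event-loop setup above can be replaced with asyncio.run. A sketch of the same idea (this assumes, like the original, that a POSIX sleep command is available):

```python
import asyncio

async def do_subprocess():
    # Spawn the child without blocking the event loop.
    proc = await asyncio.create_subprocess_exec('sleep', '2')
    returncode = await proc.wait()
    print('Subprocess done sleeping.  Return code = %d' % returncode)

async def sleep_report(number):
    for i in range(number + 1):
        print('Slept for %d seconds' % i)
        await asyncio.sleep(1)

async def main():
    # Both coroutines run concurrently on one event loop.
    await asyncio.gather(do_subprocess(), sleep_report(2))

asyncio.run(main())
```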

There are three levels of thoroughness here.

1. As mgilson says, if you just swap out subprocess.call for subprocess.Popen, keeping everything else the same, then main.py will not wait for slave.py to finish before it continues. That may be enough by itself.
2. If you care about zombie processes hanging around, you should save the object returned from subprocess.Popen and at some later point call its wait method. (The zombies automatically go away when main.py exits, so this is only a serious problem if main.py runs for a very long time and/or might create many subprocesses.)
3. Finally, if you don’t want a zombie but you also don’t want to decide where to do the waiting (this might be appropriate if both processes run for a long and unpredictable time afterward), use the python-daemon library to have the slave disassociate itself from the master — in that case you can continue using subprocess.call in the master.
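The middle level above can be sketched like this (the child command is a placeholder standing in for slave.py; the point is keeping the Popen handle around so the exited child can be reaped later):

```python
import subprocess
import sys

# Launch the slave; Popen returns immediately without blocking.
p = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(1)"])

# ... main.py loops through its other tasks here ...

# Later, reap the child so it does not linger as a zombie.
p.wait()
print("slave exited with returncode %d" % p.returncode)
```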

For Python 3.8.x

import shlex
import subprocess

cmd = "<full filepath plus arguments of child process>"
cmds = shlex.split(cmd)
p = subprocess.Popen(cmds, start_new_session=True)

This will allow the parent process to exit while the child process continues to run. (On the zombie question: if the parent exits first, the child is re-parented and reaped by init when it finishes, so no zombie remains; a zombie only lingers while a still-running parent has not called wait on an exited child.)

Tested on Python 3.8.1 on macOS 10.15.5


The answers/resolutions are collected from Stack Overflow and are licensed under CC BY-SA 2.5, CC BY-SA 3.0, and CC BY-SA 4.0.