I create POST requests like this, specifying a timeout threshold:

response = requests.post(url, data=post_fields, timeout=timeout)

However, to determine a “good” threshold value, I would like to benchmark the server response time in advance.

How do I compute the minimum and maximum response times for the server?

The Response object returned by requests.post() (and requests.get(), etc.) has an attribute called elapsed, which holds the time delta between the request being sent and the response arriving. To get the delta in seconds, use its total_seconds() method:

response = requests.post(url, data=post_fields, timeout=timeout)
print(response.elapsed.total_seconds())

Note that requests.post() is a synchronous operation, which means that it blocks until the Response is received.

It depends on whether you can hit the server with a lot of test requests, or whether you need to wait for real requests to occur.
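For the test-request case, a minimal sketch of a benchmark loop that collects response.elapsed over several requests and reports the minimum and maximum (the throwaway local HTTPServer is only there so the example runs on its own; point url at your real endpoint, and the post_fields payload is a placeholder):

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

import requests


class EchoHandler(BaseHTTPRequestHandler):
    """Stand-in endpoint: consume the POST body and reply 200."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        self.rfile.read(length)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request logging
        pass


# Local stand-in server; in practice, use your real server's URL instead.
server = HTTPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:{}/".format(server.server_port)

post_fields = {"key": "value"}  # hypothetical payload

# Send a batch of test requests and record each response time in seconds.
timings = []
for _ in range(10):
    response = requests.post(url, data=post_fields, timeout=5)
    timings.append(response.elapsed.total_seconds())

print("min: {:.3f}s  max: {:.3f}s".format(min(timings), max(timings)))
server.shutdown()
```

Note that elapsed measures the time until the response headers arrive, not until the full body has been downloaded, which is usually what you want for choosing a connect/read timeout.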

If you need real request data, then you'd need to wrap the call to time each request yourself:

import time

start = time.perf_counter()
response = requests.post(url, data=post_fields, timeout=timeout)
request_time = time.perf_counter() - start
self.logger.info("Request completed in {0:.0f}ms".format(request_time * 1000))
# Store request_time in a persistent data store.

You'd need somewhere to store the result of each request over a period of time (a file, a database, etc.). Then you can compute statistics over the recorded response times.
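Once you have the recorded times, computing the stats is straightforward with the standard-library statistics module. A sketch, assuming the timings below stand in for values loaded from your persistent store, and with a simple (hypothetical) rule of doubling the worst observed time to pick a timeout:

```python
import statistics

# Hypothetical response times in seconds, loaded from your data store.
request_times = [0.113, 0.098, 0.142, 0.201, 0.087, 0.119]

print("min:    {:.3f}s".format(min(request_times)))
print("max:    {:.3f}s".format(max(request_times)))
print("mean:   {:.3f}s".format(statistics.mean(request_times)))
print("median: {:.3f}s".format(statistics.median(request_times)))

# One possible heuristic: allow twice the worst observed response time.
timeout = 2 * max(request_times)
```

How much headroom to add above the maximum depends on how much your server's load varies; the multiplier here is only an example.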

If you have a test server available, you could benchmark the responses without Python using a tool like ApacheBench, sending representative test data with each request:

https://gist.github.com/kelvinn/6a1c51b8976acf25bd78