Is there a more concise, efficient or simply pythonic way to do the following?
```python
def product(lst):
    p = 1
    for i in lst:
        p *= i
    return p
```
I actually find that this is marginally faster than using operator.mul:
```python
from operator import mul
# from functools import reduce  # python3 compatibility

def with_lambda(lst):
    reduce(lambda x, y: x * y, lst)

def without_lambda(lst):
    reduce(mul, lst)

def forloop(lst):
    r = 1
    for x in lst:
        r *= x
    return r

import timeit

a = range(50)
b = range(1, 50)  # no zero

t = timeit.Timer("with_lambda(a)", "from __main__ import with_lambda, a")
print("with lambda:", t.timeit())
t = timeit.Timer("without_lambda(a)", "from __main__ import without_lambda, a")
print("without lambda:", t.timeit())
t = timeit.Timer("forloop(a)", "from __main__ import forloop, a")
print("for loop:", t.timeit())

t = timeit.Timer("with_lambda(b)", "from __main__ import with_lambda, b")
print("with lambda (no 0):", t.timeit())
t = timeit.Timer("without_lambda(b)", "from __main__ import without_lambda, b")
print("without lambda (no 0):", t.timeit())
t = timeit.Timer("forloop(b)", "from __main__ import forloop, b")
print("for loop (no 0):", t.timeit())
```
```
('with lambda:', 17.755449056625366)
('without lambda:', 8.2084708213806152)
('for loop:', 7.4836349487304688)
('with lambda (no 0):', 22.570688009262085)
('without lambda (no 0):', 12.472226858139038)
('for loop (no 0):', 11.04065990447998)
```
Without using lambda:
```python
from operator import mul
# from functools import reduce  # python3 compatibility
reduce(mul, list, 1)
```
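The third argument here is an initial value for the reduction. A minimal sketch (assuming Python 3, where `reduce` lives in `functools`) of why it matters:

```python
from functools import reduce
from operator import mul

# With an initial value of 1, an empty sequence yields 1 instead of an error.
print(reduce(mul, [2, 3, 4], 1))  # 24
print(reduce(mul, [], 1))         # 1

# Without it, reduce raises TypeError on an empty sequence.
try:
    reduce(mul, [])
except TypeError as exc:
    print("empty sequence:", exc)
```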
It is better and faster.

With Python 2.7.5:
```python
from operator import mul
import numpy as np
import numexpr as ne
# from functools import reduce  # python3 compatibility

a = range(1, 101)
%timeit reduce(lambda x, y: x * y, a)  # (1)
%timeit reduce(mul, a)                 # (2)
%timeit np.prod(a)                     # (3)
%timeit ne.evaluate("prod(a)")         # (4)
```
In the following configuration:
```python
a = range(1, 101)                   # A
a = np.array(a)                     # B
a = np.arange(1, 1e4, dtype=int)    # C
a = np.arange(1, 1e5, dtype=float)  # D
```
Results with Python 2.7.5:

```
       |     1     |     2     |     3     |     4     |
-------+-----------+-----------+-----------+-----------+
  A       20.8 µs     13.3 µs     22.6 µs     39.6 µs
  B        106 µs     95.3 µs     5.92 µs     26.1 µs
  C       4.34 ms     3.51 ms     16.7 µs     38.9 µs
  D       46.6 ms     38.5 ms      180 µs      216 µs
```
np.prod is the fastest one if you use np.array as the data structure (18x for the small array, 250x for the large one).
With Python 3.3.2:

```
       |     1     |     2     |     3     |     4     |
-------+-----------+-----------+-----------+-----------+
  A       23.6 µs     12.3 µs     68.6 µs     84.9 µs
  B        133 µs      107 µs     7.42 µs     27.5 µs
  C       4.79 ms     3.74 ms     18.6 µs     40.9 µs
  D       48.4 ms     36.8 ms      187 µs      214 µs
```
Is Python 3 slower?
```python
from functools import reduce
a = [1, 2, 3]
reduce(lambda x, y: x * y, a, 1)
```
If you just have numbers in your list:

```python
from numpy import prod
prod(list)
```
EDIT: As pointed out by @off99555, this does not work for large integer results, in which case it returns a result of type numpy.int64, while Ian Clelland's solution based on reduce works for large integer results because Python's own integers are arbitrary-precision.
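A sketch of that overflow point (assuming NumPy is installed; the explicit `int64` dtype is used here so the behavior doesn't depend on the platform default):

```python
import math
from functools import reduce
from operator import mul

import numpy as np

exact = math.factorial(21)  # 21! exceeds the int64 range

# np.prod on a fixed-width integer array silently wraps around.
wrapped = int(np.prod(np.arange(1, 22, dtype=np.int64)))

# reduce uses Python's arbitrary-precision ints, so it stays exact.
assert reduce(mul, range(1, 22)) == exact
assert wrapped != exact
```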
Starting with Python 3.8, a prod function has been included in the math module of the standard library:
```
math.prod(iterable, *, start=1)
```
which returns the product of a start value (default: 1) times an iterable of numbers:
```python
import math
math.prod([2, 3, 4])  # 24
```
Note that if the iterable is empty, this will produce 1 (or the start value, if provided).
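A quick check of that behavior (assuming Python 3.8+):

```python
import math

# An empty iterable yields the start value (1 by default).
assert math.prod([]) == 1
assert math.prod([], start=7) == 7

# start multiplies the product of the iterable.
assert math.prod([2, 3, 4], start=10) == 240
```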
Well if you really wanted to make it one line without importing anything you could do:
```python
eval('*'.join(str(item) for item in list))
```
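For what it's worth, here is roughly what that one-liner does, plus one edge case to watch (a sketch; note that `eval` on untrusted input is unsafe):

```python
# The trick builds the expression "2*3*4" as a string and evaluates it.
values = [2, 3, 4]
expr = '*'.join(str(item) for item in values)
assert expr == '2*3*4'
assert eval(expr) == 24

# It breaks on an empty list: eval('') raises SyntaxError.
try:
    eval('*'.join(str(item) for item in []))
except SyntaxError:
    pass
```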
```python
import operator
# from functools import reduce  # Python 3
reduce(operator.mul, list, 1)
```
I remember some long discussions on comp.lang.python (sorry, too lazy to produce pointers now) which concluded that your original
product() definition is the most Pythonic.
Note that the proposal is not to write a for loop every time you want to do it, but to write a function once (per type of reduction) and call it as needed! Calling reduction functions is very Pythonic – it works sweetly with generator expressions, and since the successful introduction of sum(), Python keeps growing more and more builtin reduction functions – any() and all() are the latest additions…
This conclusion is kinda official – reduce() was removed from builtins in Python 3.0, with the release notes saying: "Use functools.reduce() if you really need it; however, 99 percent of the time an explicit for loop is more readable."
See also The fate of reduce() in Python 3000 for a supporting quote from Guido (and some less supporting comments by Lispers that read that blog).
P.S. if by chance you need product() for combinatorics, see math.factorial() (new in 2.6).
I’ve tested various solutions with perfplot (a small project of mine) and found that
is by far the fastest solution (if the list isn’t very short).
Code to reproduce the plot:
```python
import perfplot
import numpy
import math
from operator import mul
from functools import reduce
from itertools import accumulate


def reduce_lambda(lst):
    return reduce(lambda x, y: x * y, lst)


def reduce_mul(lst):
    return reduce(mul, lst)


def forloop(lst):
    r = 1
    for x in lst:
        r *= x
    return r


def numpy_prod(lst):
    return numpy.prod(lst)


def math_prod(lst):
    return math.prod(lst)


def itertools_accumulate(lst):
    for value in accumulate(lst, mul):
        pass
    return value


b = perfplot.bench(
    setup=numpy.random.rand,
    kernels=[
        reduce_lambda,
        reduce_mul,
        forloop,
        numpy_prod,
        itertools_accumulate,
        math_prod,
    ],
    n_range=[2 ** k for k in range(20)],
    xlabel="len(a)",
)
b.save("out.png")
b.show()
```
The intent of this answer is to provide a calculation that is useful in certain circumstances — namely when a) there are a large number of values being multiplied such that the final product may be extremely large or extremely small, and b) you don’t really care about the exact answer, but instead have a number of sequences, and want to be able to order them based on each one’s product.
If you want to multiply the elements of a list, where l is the list, you can do:
```python
import math
math.exp(sum(map(math.log, l)))
```
Now, that approach is not as readable as
```python
from operator import mul
reduce(mul, list)
```
If you’re a mathematician who isn’t familiar with reduce() the opposite might be true, but I wouldn’t advise using it under normal circumstances. It’s also less readable than the product() function mentioned in the question (at least to non-mathematicians).
However, if you’re ever in a situation where you risk underflow or overflow, such as in
```python
>>> reduce(mul, [10.]*309)
inf
```
and your purpose is to compare the products of different sequences rather than to know what the products are, then
```python
>>> sum(map(math.log, [10.]*309))
711.49879373515785
```
is the way to go because it’s virtually impossible to have a real-world problem in which you would overflow or underflow with this approach. (The larger the result of that calculation is, the larger the product would be if you could calculate it.)
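A concrete illustration of that comparison use case (a sketch, assuming Python 3's `functools.reduce`):

```python
import math
from functools import reduce
from operator import mul

seq_a = [10.0] * 309
seq_b = [10.0] * 310

# Direct products both overflow to infinity, so they can't be compared.
assert reduce(mul, seq_a) == math.inf
assert reduce(mul, seq_b) == math.inf

# Their log-sums stay finite and preserve the ordering of the products.
assert sum(map(math.log, seq_a)) < sum(map(math.log, seq_b))
```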
I am surprised no-one has suggested using itertools.accumulate with operator.mul. This avoids using reduce, which is different between Python 2 and 3 (due to the functools import required in Python 3), and which moreover is considered un-pythonic by Guido van Rossum himself:
```python
from itertools import accumulate
from operator import mul

def prod(lst):
    for value in accumulate(lst, mul):
        pass
    return value
```
```python
prod([1, 5, 4, 3, 5, 6])  # 1800
```
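One caveat worth noting: with an empty list the loop body never runs, so `value` is never assigned and the function raises UnboundLocalError. On Python 3.8+ (an assumption here), accumulate's `initial` parameter covers that case:

```python
from itertools import accumulate
from operator import mul

def prod(lst):
    value = 1  # also serves as the result for an empty list
    for value in accumulate(lst, mul, initial=1):
        pass
    return value

assert prod([1, 5, 4, 3, 5, 6]) == 1800
assert prod([]) == 1
```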
```python
from numba import njit

@njit
def njit_product(lst):
    p = lst[0]  # first element
    for i in lst[1:]:  # loop over the remaining elements
        p *= i
    return p
```
For timing purposes, you need to run the function once first so numba can compile it. In general, the function is compiled the first time it is called, and subsequent calls use the cached compiled version (faster).
```python
njit_product([1, 2])  # execute once to compile
```
Now when you execute your code, it will run with the compiled version of the function. I timed them using a Jupyter notebook and the
%timeit magic function:
```python
%timeit product(b)  # yours
# 32.7 µs ± 510 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

%timeit njit_product(b)
# 92.9 µs ± 392 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
```
Note that on my machine, running Python 3.5, the native Python
for loop was actually the fastest. There may be a trick here when it comes to measuring numba-decorated performance with Jupyter notebooks and the
%timeit magic function. I am not sure that the timings above are correct, so I recommend trying it out on your system and seeing if numba gives you a performance boost.
The fastest way I found was using a while loop:
```python
import timeit

mysetup = '''
import numpy as np
'''

# code snippet whose execution time is to be measured
mycode = '''
x = [4, 5, 6, 7, 8, 9, 10]
prod = 1
i = 0
while True:
    prod = prod * x[i]
    i = i + 1
    if i == len(x):
        break
'''

# timeit statement for while:
print("using while : ", timeit.timeit(setup=mysetup, stmt=mycode))

# timeit statement for mul:
print("using mul : ", timeit.timeit(
    'from functools import reduce; from operator import mul; '
    'c = reduce(mul, [4, 5, 6, 7, 8, 9, 10])'))

# timeit statement for lambda:
print("using lambda : ", timeit.timeit(
    'from functools import reduce; from operator import mul; '
    'c = reduce(lambda x, y: x * y, [4, 5, 6, 7, 8, 9, 10])'))
```
and the timings are:
```
using while :  0.8887967770060641
using mul :    2.0838719510065857
using lambda : 2.4227715369997895
```
Python 3 result for the OP’s tests: (best of 3 for each)
```
with lambda: 18.978000981995137
without lambda: 8.110567473006085
for loop: 10.795806062000338
with lambda (no 0): 26.612515013999655
without lambda (no 0): 14.704098362999503
for loop (no 0): 14.93075215499266
```
I'm not sure about the fastest way, but here is short code to get the product of any collection without importing any library or module.
Here is the code:
```python
# Set product to 1: multiplying into 0 would always give 0.
product = 1
# input() returns a string, so split it on ", " to make a list of items.
my_list = input("Type in a list: ").split(", ")
for i in range(0, len(my_list)):
    product *= int(my_list[i])
print("The product of all elements in your list is: ", product)
```
This also works, though it's cheating:

```python
def factorial(n):
    x = []
    p = 1
    if n <= 1:
        return 1
    else:
        for i in range(1, n + 1):
            p *= i
            x.append(p)
        print(x[n - 1])
```