I have a large (about 12M rows) DataFrame df with, say:

df.columns = ['word','documents','frequency']

So the following ran in a timely fashion:

word_grouping = df[['word','frequency']].groupby('word')
MaxFrequency_perWord = word_grouping[['frequency']].max().reset_index()
MaxFrequency_perWord.columns = ['word','MaxFrequency']

However, this is taking an unexpectedly long time to run:

Occurrences_of_Words = word_grouping[['word']].count().reset_index()

What am I doing wrong here? Is there a better way to count occurrences in a large DataFrame?

df.word.describe()

ran pretty well, so I really did not expect this Occurrences_of_Words DataFrame to take very long to build.

P.S.: If the answer is obvious and you feel the need to penalize me for asking this question, please include the answer as well.

I think df['word'].value_counts() should serve. By skipping the groupby machinery, you'll save some time. I'm not sure why count should be much slower than max; both take some time to avoid missing values (compare with size, which counts rows without checking for NaN).

In any case, value_counts has been specifically optimized to handle object type, like your words, so I doubt you’ll do much better than that.
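
As a minimal sketch of how this could replace the slow line (toy data made up here; column names follow the question):

import pandas as pd

# Toy stand-in for the 12M-row DataFrame from the question
df = pd.DataFrame({'word': ['a', 'b', 'a', 'c', 'a'],
                   'documents': [1, 2, 3, 4, 5],
                   'frequency': [10, 2, 5, 7, 3]})

# Count occurrences of each word without the groupby machinery
Occurrences_of_Words = df['word'].value_counts().reset_index()
Occurrences_of_Words.columns = ['word', 'Occurrences']

# For comparison, groupby(...).size() returns the same counts,
# ordered by word rather than by count
counts_via_size = df.groupby('word').size()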

When you want to count the frequency of categorical data in a column of a pandas DataFrame, use: df['Column_Name'].value_counts()
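
For instance, on a made-up Series of grades:

import pandas as pd

s = pd.Series(['A', 'B', 'A', 'C', 'A', 'B'])
print(s.value_counts())
# A    3
# B    2
# C    1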


Just an addition to the previous answers. Let's not forget that when dealing with real data there might be null values, so it's useful to include those in the count as well by using the option dropna=False (the default is True).

An example:

>>> df['Embarked'].value_counts(dropna=False)
S      644
C      168
Q       77
NaN      2

Other possible approaches to counting occurrences are to use (i) Counter from the collections module, (ii) unique from the numpy library and (iii) groupby + size in pandas.

To use collections.Counter:

from collections import Counter
import pandas as pd

# Counter tallies the values; wrapping it in a Series keeps a pandas interface
out = pd.Series(Counter(df['word']))

To use numpy.unique:

import numpy as np
import pandas as pd

# unique returns the distinct values and, with return_counts=True, their counts
i, c = np.unique(df['word'], return_counts=True)
out = pd.Series(c, index=i)

To use groupby + size:

# Group on the words themselves and count the size of each group
out = pd.Series(df.index, index=df['word']).groupby(level=0).size()
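
As a quick sanity check on made-up toy data, all three agree with value_counts once everything is aligned on the index:

import numpy as np
import pandas as pd
from collections import Counter

s = pd.Series(['a', 'b', 'a', 'c', 'a', 'b'])  # toy data

vc = s.value_counts()                       # ordered by count
cnt = pd.Series(Counter(s))                 # insertion order
i, c = np.unique(s, return_counts=True)
uq = pd.Series(c, index=i)                  # ordered by value

assert vc.sort_index().equals(cnt.sort_index())
assert vc.sort_index().equals(uq.sort_index())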

One very nice feature of value_counts that's missing in the above methods is that it sorts the counts. If having the counts sorted is absolutely necessary, then value_counts is the best method given its simplicity and performance (even though it still gets marginally outperformed by other methods, especially for very large Series).
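
That said, if one of the alternatives is used and sorted counts are still wanted, a sort can simply be appended at some extra cost, e.g. with the Counter approach from above:

out = pd.Series(Counter(df['word'])).sort_values(ascending=False)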

Benchmarks (if having the counts sorted is not important)

Looking at the runtimes, the fastest method depends on the dtype of the data stored in the DataFrame column/Series.

If the Series is dtype object, then the fastest method for very large Series is collections.Counter, but in general value_counts is very competitive.

[Benchmark plot: runtime vs. Series length for dtype object]

However, if it is dtype int, then the fastest method is numpy.unique:

[Benchmark plot: runtime vs. Series length for dtype int]

Code used to produce the plots:

import perfplot
import numpy as np
import pandas as pd
from collections import Counter

def creator(n, dt="obj"):
    # n random ints in [0, 2n), cast to strings for the object-dtype case
    s = pd.Series(np.random.randint(2 * n, size=n))
    return s.astype(str) if dt == "obj" else s

def plot_perfplot(datatype):
    perfplot.show(
        setup=lambda n: creator(n, datatype),
        kernels=[
            lambda s: s.value_counts(),
            lambda s: pd.Series(Counter(s)),
            lambda s: pd.Series((ic := np.unique(s, return_counts=True))[1], index=ic[0]),
            lambda s: pd.Series(s.index, index=s).groupby(level=0).size(),
        ],
        labels=['value_counts', 'Counter', 'np_unique', 'groupby_size'],
        n_range=[2 ** k for k in range(5, 25)],
        # the outputs are ordered differently, so align on the index before comparing
        equality_check=lambda *x: (d := pd.concat(x, axis=1)).eq(d[0], axis=0).all().all(),
        xlabel="~len(s)",
        title=f'dtype {datatype}',
    )

plot_perfplot('obj')
plot_perfplot('int')