I’m trying to evaluate whether comparing two strings gets slower as their length increases. My calculations suggest that comparing strings should take amortized constant time, but my Python experiments yield strange results:

Here is a plot of string length (1 to 400) versus time in milliseconds. Automatic garbage collection is disabled, and gc.collect() is run between iterations.

Time vs string length

I’m comparing 1 million random strings each time, counting matches as follows. The process is repeated 50 times before taking the minimum of all measured times.

for index in range(COUNT):
    if v1[index] == v2[index]:
        matches += 1
    else:
        non_matches += 1

What might account for the sudden increase around length 64?

Note: The following snippet can be used to try to reproduce the problem, assuming v1 and v2 are two lists of random strings of length n and COUNT is their length.

timeit.timeit("for i in range(COUNT): v1[i] == v2[i]",
  "from __main__ import COUNT, v1, v2", number=50)
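For completeness, a fully self-contained (and scaled-down) version of that benchmark might look like the following; `make_strings`, the smaller COUNT, and the test lengths are my own choices, not from the original setup:

```python
import gc
import random
import string
import timeit

COUNT = 10_000   # scaled down from the original 1 million pairs
REPEATS = 5      # the original repeats 50 times and takes the min

def make_strings(n, count):
    """Build `count` random lowercase strings of length `n`."""
    return ["".join(random.choices(string.ascii_lowercase, k=n))
            for _ in range(count)]

def bench(n):
    """Minimum time to compare COUNT pairs of length-n strings."""
    v1 = make_strings(n, COUNT)
    v2 = make_strings(n, COUNT)
    gc.disable()
    try:
        times = timeit.repeat(
            "for i in range(COUNT): v1[i] == v2[i]",
            globals={"COUNT": COUNT, "v1": v1, "v2": v2},
            number=1, repeat=REPEATS)
    finally:
        gc.enable()
    return min(times)

for n in (16, 32, 64, 128):
    print(n, bench(n))
```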

Further note: I’ve made two extra tests. Comparing strings with is instead of == suppresses the problem completely, with performance of about 210 ms per 1M comparisons.
Since interning has been mentioned, I made sure to add a whitespace after each string, which should prevent interning; that doesn’t change anything. Is it something other than interning, then?
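One way to double-check that interning (or any other object reuse) plays no role in data like this is to count identity matches directly. A sketch with illustrative sizes of my own choosing:

```python
import random
import string

COUNT = 10_000
v1 = ["".join(random.choices(string.ascii_lowercase, k=8)) for _ in range(COUNT)]
v2 = ["".join(random.choices(string.ascii_lowercase, k=8)) for _ in range(COUNT)]

identical = sum(a is b for a, b in zip(v1, v2))   # same object
equal = sum(a == b for a, b in zip(v1, v2))       # same contents
print(identical, equal)
```

If `identical` is 0 and `is` is still fast, that speed simply reflects that an identity test never looks at the characters at all — not that the strings were interned.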

Python can ‘intern’ short strings; it stores them in a special cache and re-uses string objects from that cache.

When comparing strings, it first tests whether the two are the same pointer (e.g. an interned string):

if (a == b) {
    switch (op) {
    case Py_EQ:case Py_LE:case Py_GE:
        result = Py_True;
        goto out;
// ...

Only if that pointer comparison fails does it use a size check and memcmp to compare the strings.
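The effect of that fast path is easy to see with a rough timing sketch (lengths and iteration counts below are arbitrary choices of mine): comparing a string with itself stays cheap no matter how long it is, while comparing an equal but distinct object pays for the full content check.

```python
import timeit

a = "x" * 10_000
b = a                      # same object: the pointer test short-circuits
c = ("x" * 9_999) + "x"    # equal contents, but a distinct object

t_same = timeit.timeit("a == b", globals={"a": a, "b": b}, number=100_000)
t_diff = timeit.timeit("a == c", globals={"a": a, "c": c}, number=100_000)
print(t_same, t_diff)
# t_same stays tiny regardless of length; t_diff pays for a full memcmp
```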

Interning normally only takes place for identifiers (function names, arguments, attributes, etc.) however, not for string values created at runtime.
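This is easy to observe with sys.intern; note that the exact identity results here are CPython implementation details, not language guarantees:

```python
import sys

a = "hello"
b = "hello"                       # identifier-like literal: interned at compile time
runtime = "".join(["hel", "lo"])  # built at runtime: equal, but not interned

print(a is b)                      # True in CPython
print(runtime == a, runtime is a)  # equal contents, different object
print(sys.intern(runtime) is a)    # sys.intern returns the cached object
```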

Another possible culprit is string constants; string literals used in code are stored as constants at compile time and reused throughout. Again, only one object is created, and identity tests on those are faster.

For string objects that are not the same, Python tests for equal length, equal first characters then uses the memcmp() function on the internal C strings. If your strings are not interned or otherwise are reusing the same objects, all other speed characteristics come down to the memcmp() function.
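The short-circuiting behaviour described above can be probed by timing pairs that differ early versus pairs that differ late; the string length and iteration count below are arbitrary choices of mine:

```python
import timeit

n = 10_000
pair_first = ("a" + "x" * n, "b" + "x" * n)   # differ at the first character
pair_last  = ("x" * n + "a", "x" * n + "b")   # differ only at the last character

t_first = timeit.timeit("a == b", number=100_000,
                        globals={"a": pair_first[0], "b": pair_first[1]})
t_last = timeit.timeit("a == b", number=100_000,
                       globals={"a": pair_last[0], "b": pair_last[1]})
print(t_first, t_last)
# t_last is much larger: memcmp has to scan the entire common prefix
```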

I am just making wild guesses, but you asked “what might” rather than “what does”, so here are some possibilities:

  • The CPU cache line size is 64 bytes, and longer strings cause a cache miss.
  • Python might store strings of up to 64 bytes in one kind of structure and longer strings in a more complicated one.
  • Related to the previous guess: it might zero-pad strings into a 64-byte array and be able to use very fast SSE2 vector instructions to match two strings.
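These guesses could be probed empirically. A minimal sketch (lengths and iteration count are arbitrary choices of mine) that times equal-content comparisons at lengths straddling 64, looking for a jump:

```python
import timeit

# time equal-content comparisons at lengths around the 64-byte boundary
for n in (48, 56, 64, 72, 96, 128):
    a = "x" * n
    b = ("x" * (n - 1)) + "x"   # equal to a, but a distinct object
    t = timeit.timeit("a == b", globals={"a": a, "b": b}, number=1_000_000)
    print(n, round(t, 4))
```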