The background: I’m building a trie to represent a dictionary, using a minimal construction algorithm. The input list is 4.3M UTF-8 strings, sorted lexicographically. The resulting graph is acyclic, with a maximum depth of 638 nodes. The first line of my script sets the recursion limit to 1100 via sys.setrecursionlimit().
The problem: I’d like to be able to serialize my trie to disk, so I can load it into memory without having to rebuild it from scratch (roughly 22 minutes). I have tried cPickle.dump() with both the text and binary protocols. Each time, I get a stack trace that looks like the following:
  File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/pickle.py", line 649, in save_dict
    self._batch_setitems(obj.iteritems())
  File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/pickle.py", line 663, in _batch_setitems
    save(v)
  File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/pickle.py", line 725, in save_inst
    save(stuff)
  File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/pickle.py", line 648, in save_dict
    self.memoize(obj)
RuntimeError: maximum recursion depth exceeded
My data structures are relatively simple:
trie contains a reference to a start state, and defines some methods.
dfa_state contains a boolean field, a string field, and a dictionary mapping from label to state.
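The question doesn’t show the classes themselves, but a minimal sketch of the shape being described might look like this (class and attribute names are my guesses, written in Python 3 syntax, with a naive insert in place of the minimal construction algorithm):

```python
class DfaState:
    """One state: a boolean field, a string field, and a dict of transitions."""
    def __init__(self, is_final=False, label=""):
        self.is_final = is_final   # the boolean field
        self.label = label         # the string field
        self.transitions = {}      # maps edge label -> DfaState

class Trie:
    """Holds a reference to a start state and defines some methods."""
    def __init__(self):
        self.start = DfaState()

    def insert(self, word):
        # Naive (non-minimal) insertion, just to illustrate the structure.
        state = self.start
        for ch in word:
            state = state.transitions.setdefault(ch, DfaState())
        state.is_final = True

    def contains(self, word):
        state = self.start
        for ch in word:
            if ch not in state.transitions:
                return False
            state = state.transitions[ch]
        return state.is_final

t = Trie()
for w in ("cat", "car"):
    t.insert(w)
```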
I’m not very familiar with the inner workings of
pickle – does my max recursion depth need to be greater than or equal to n times the depth of the trie, for some n? Or could this be caused by something else I’m unaware of?
Update: Setting the recursion depth to 3000 didn’t help, so this avenue doesn’t look promising.
Update 2: You guys were right; I was being short-sighted in assuming that pickle would use a small nesting depth due to default recursion limitations. 10,000 did the trick.
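To see why the limit must be several multiples of the structure’s depth, here is a small sketch (Python 3 syntax; pickle._Pickler is the pure-Python pickler, used here because its recursion is governed by sys.setrecursionlimit, whereas the C accelerator has its own internal limits):

```python
import io
import pickle
import sys

# Build a list nested 1200 levels deep, standing in for a deep trie.
# The pickler burns several stack frames per level, so pickling this
# at the default limit of 1000 fails even though 1200 barely exceeds it.
obj = None
for _ in range(1200):
    obj = [obj]

# Raise the limit well above the nesting depth.
sys.setrecursionlimit(20000)

buf = io.BytesIO()
pickle._Pickler(buf, protocol=2).dump(obj)

# Unpickling is iterative (a stack machine), so it needs no special limit.
restored = pickle.loads(buf.getvalue())
```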
From the docs:
Trying to pickle a highly recursive data structure may exceed the maximum recursion depth; a RuntimeError will be raised in this case. You can carefully raise this limit with sys.setrecursionlimit().
Although your trie implementation may be simple, it uses recursion, which can lead to issues when converting it to a persistent data structure.
My recommendation would be to continue raising the recursion limit to see if there is an upper bound for the data you are working with and the trie implementation you are using.
Other than that, you can try changing your trie implementation to be “less recursive”, if possible, or write an additional implementation that has data persistence built in (using pickles and shelves in your implementation). Hope that helps.
Pickle does need to recursively walk your trie. If pickle uses just 5 levels of function calls per node, your trie of depth 638 will need the limit set to more than 3000.
Try a much bigger number; the recursion limit is really just there to protect users from having to wait too long if the recursion falls into an infinite hole.
Pickle handles cycles OK, so it wouldn’t matter even if your trie had a cycle in there.
Stack size must also be increased with resource.setrlimit to prevent a segfault.
If you use just sys.setrecursionlimit, you can still segfault if you reach the maximum stack size allowed by the Linux kernel.
This value can be increased with resource.setrlimit, as mentioned at: Setting stacksize in a python script
import pickle
import resource
import sys

print resource.getrlimit(resource.RLIMIT_STACK)
print sys.getrecursionlimit()

max_rec = 0x100000

# May segfault without this line. 0x100 is a guess at the size of each stack frame.
resource.setrlimit(resource.RLIMIT_STACK, [0x100 * max_rec, resource.RLIM_INFINITY])
sys.setrecursionlimit(max_rec)

a = []
# 0x10 is to account for subfunctions called inside `pickle`.
for i in xrange(max_rec / 0x10):
    a = [a]
print pickle.dumps(a, -1)
The default maximum value for me is 8Mb.
Tested on Ubuntu 16.10, Python 2.7.12.
Double-check that your structure is indeed acyclic.
You could try bumping up the limit even further. There’s a hard maximum that’s platform dependent, but trying 50000 would be reasonable.
Also try pickling a trivially small version of your trie. If pickle dies even though it’s only storing a couple of three-letter words, then you know there’s some fundamental problem with your trie and not with pickle. But if it only happens when you try storing 10k words, then it might be the fault of a platform limitation in pickle.
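A quick way to run that sanity check, using a dict-based stand-in for the trie (the "$" end-of-word marker is my convention, not something from the question):

```python
import pickle

# A tiny nested-dict trie holding a few three-letter words.
tiny_trie = {}
for word in ("cat", "car", "dog"):
    node = tiny_trie
    for ch in word:
        node = node.setdefault(ch, {})
    node["$"] = True  # arbitrary end-of-word marker

# If even this fails to pickle, the problem is in the structure itself
# (e.g. a hidden cycle or an unpicklable attribute), not the recursion limit.
restored = pickle.loads(pickle.dumps(tiny_trie))
```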
My needs were somewhat immediate, so I solved this problem by saving my dictionary in .txt format. The only catch is that when you load the file again, you have to transform it back into a dictionary.
import json

# Saving the dictionary
with open('filename.txt', 'w') as file_handle:
    file_handle.write(str(dictionary))

# Importing the .txt file
with open('filename.txt', 'r') as file_handle:
    f = '"' + file_handle.read() + '"'

# From .txt file to dictionary
dictionary = eval(json.loads(f))
If this does not work, you can try exporting the dictionary in JSON format instead.
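The JSON route also avoids the str()/eval() round trip entirely; a sketch (file name is arbitrary, and note that JSON only supports string keys and JSON-serializable values):

```python
import json
import os
import tempfile

dictionary = {"cat": 1, "dog": 2}

path = os.path.join(tempfile.gettempdir(), "dictionary.json")

# Saving the dictionary as JSON
with open(path, "w") as fh:
    json.dump(dictionary, fh)

# Loading it back -- json.load returns a dict directly, no eval needed
with open(path) as fh:
    restored = json.load(fh)
```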
For me, removing all uses of
importlib.reload solved the issue.
I did not even need to increase the limit with sys.setrecursionlimit.
If you want to know how I found it, continue reading.
Before I found the solution, I noticed that I could actually save the model if I moved it to the CPU first, but then I got an error during evaluation (XXX is the class name; which class it is doesn’t matter):
PicklingError: Can't pickle <class 'XXX'>: it's not the same object as XXX
Then I found this answer:
But after removing all uses of
importlib.reload, I was able to save the model without moving it to the CPU first.