I recently came across the pandas library for Python, which according to this benchmark performs very fast in-memory merges. It’s even faster than the data.table package in R (my language of choice for analysis).
Why is pandas so much faster than data.table? Is it because of an inherent speed advantage Python has over R, or is there some tradeoff I’m not aware of? Is there a way to perform inner and outer joins in data.table without resorting to merge(X, Y, all=FALSE) and merge(X, Y, all=TRUE)?
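For reference, data.table’s bracket syntax can express these joins directly; a minimal sketch using its standard API (the tables X and Y here are invented for illustration):

    library(data.table)
    X <- data.table(id = 1:4, x = letters[1:4])
    Y <- data.table(id = 3:6, y = LETTERS[3:6])
    setkey(X, id)
    setkey(Y, id)

    X[Y, nomatch = 0L]       # inner join: only ids present in both tables
    X[Y]                     # right outer join: all rows of Y, NA where X lacks a match
    merge(X, Y, all = TRUE)  # a full outer join still goes through merge()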
The reason pandas is faster is that I came up with a better algorithm, which is implemented very carefully using a fast hash table implementation (klib) and in C/Cython to avoid the Python interpreter overhead for the non-vectorizable parts. The algorithm is described in some detail in my presentation: A look inside pandas design and development.
The comparison with data.table is actually a bit interesting because the whole point of R’s data.table is that it contains pre-computed indexes for various columns to accelerate operations like data selection and merges. In this case (database joins) pandas’ DataFrame contains no pre-computed information that is used for the merge, so it’s a “cold” merge, so to speak. If I had stored the factorized versions of the join keys, the join would be significantly faster, as factorizing is the biggest bottleneck for this algorithm.
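To make “factorizing” concrete: it means mapping each join key to a dense integer code so the join can operate on integers rather than strings. A conceptual sketch in R (not pandas’ actual implementation):

    keys  <- c("b", "a", "b", "c", "a")
    uniq  <- unique(keys)        # "b" "a" "c"
    codes <- match(keys, uniq)   # 1 2 1 3 2: dense integer codes for the keys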
I should also add that the internal design of pandas’ DataFrame is much more amenable to these kinds of operations than R’s data.frame (which is just a list of arrays internally).
It looks like Wes may have discovered a known issue in data.table when the number of unique strings (levels) is large: 10,000. Does Rprof() reveal most of the time spent in the call sortedmatch(levels(i[[lc]]), levels(x[[rc]]))? This isn’t really the join itself (the algorithm), but a preliminary step.
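What that preliminary step does, roughly, is align the factor levels of the i column with those of the x column so that their integer codes become comparable. A small illustration:

    x <- factor(c("a", "b", "c"))   # levels: a b c
    i <- factor(c("b", "c", "d"))   # levels: b c d
    match(levels(i), levels(x))     # 2 3 NA: where each i level sits among x's levels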
Recent efforts have gone into allowing character columns in keys, which should resolve that issue by integrating more closely with R’s own global string hash table. Some benchmark results are already reported by test.data.table(), but that code isn’t hooked up yet to replace the levels-to-levels match.
Are pandas merges faster than data.table for regular integer columns? That would be a way to isolate the algorithm itself from factor issues.
data.table has time series merge in mind. Two aspects to that: i) multi-column ordered keys such as (id, datetime); ii) fast prevailing join (roll=TRUE), a.k.a. last observation carried forward.
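A minimal sketch of such a prevailing join; the quotes/trades tables are invented for illustration, but roll = TRUE is data.table’s documented syntax:

    library(data.table)
    quotes <- data.table(id = "A",
                         datetime = as.POSIXct("2012-01-01") + c(0, 60, 120),
                         bid = c(10, 11, 12))
    trades <- data.table(id = "A",
                         datetime = as.POSIXct("2012-01-01") + c(30, 90))
    setkey(quotes, id, datetime)
    setkey(trades, id, datetime)

    quotes[trades, roll = TRUE]  # carries the last quote forward to each trade time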
I’ll need some time to confirm, as it’s the first I’ve seen of the comparison to data.table as presented.
UPDATE from data.table v1.8.0, released July 2012:
- Internal function sortedmatch() removed and replaced with chmatch()
when matching i levels to x levels for columns of type ‘factor’. This
preliminary step was causing a (known) significant slowdown when the number
of levels of a factor column was large (e.g. >10,000). Exacerbated in
tests of joining four such columns, as demonstrated by Wes McKinney
(author of Python package Pandas). Matching 1 million strings of which
600,000 are unique is now reduced from 16s to 0.5s, for example.
Also in that release:
character columns are now allowed in keys and are preferred to
factor. data.table() and setkey() no longer coerce character to
factor. Factors are still supported. Implements FR#1493, FR#1224
and (partially) FR#951.
New functions chmatch() and %chin%, faster versions of match()
and %in% for character vectors. R’s internal string cache is
utilised (no hash table is built). They are about 4 times faster
than match() on the example in ?chmatch.
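A quick illustration of those two functions (standard data.table API):

    library(data.table)
    x   <- c("apple", "banana", "banana", "kiwi")
    tbl <- c("banana", "kiwi", "mango")
    chmatch(x, tbl)   # NA 1 1 2: like match(), but uses R's internal string cache
    x %chin% tbl      # FALSE TRUE TRUE TRUE: faster %in% for character vectors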
As of Sep 2013 data.table is v1.8.10 on CRAN and we’re working on v1.9.0. NEWS is updated live.
But as I wrote originally, above:

data.table has time series merge in mind. Two aspects to that: i) multi-column ordered keys such as (id, datetime); ii) fast prevailing join (roll=TRUE), a.k.a. last observation carried forward.
So the Pandas equi join of two character columns is probably still faster than data.table’s, since it sounds like it hashes the combined two columns. data.table doesn’t hash the key because it has prevailing ordered joins in mind. A “key” in data.table is literally just the sort order (similar to a clustered index in SQL; i.e., that’s how the data is ordered in RAM). Adding secondary keys, for example, is on the list.
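To see that a key is physical sort order rather than a hash table (standard setkey() behaviour):

    library(data.table)
    DT <- data.table(id = c(3L, 1L, 2L), v = c("c", "a", "b"))
    setkey(DT, id)   # physically reorders the rows in RAM, like a clustered index
    DT               # rows now stored in id order: 1, 2, 3
    key(DT)          # "id": the key is just the recorded sort order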
In summary, the glaring speed difference highlighted by this particular two-character-column test with over 10,000 unique strings shouldn’t be as bad now, since the known problem has been fixed.
This topic is two years old but seems like a likely place for people to land when they search for comparisons of Pandas and data.table.
Since both of these have evolved over time, I want to post a relatively newer comparison (from 2014) here for the interested users: https://github.com/Rdatatable/data.table/wiki/Benchmarks-:-Grouping
It would be interesting to know if Wes and/or Matt (who, by the way, are creators of Pandas and data.table respectively and have both commented above) have any news to add here as well.
— UPDATE —
A comment posted below by jangorecki contains a link that I think is very useful: https://github.com/szilard/benchm-databases
The graph there depicts the average times of aggregation and join operations for different technologies (lower = faster; comparison last updated in September 2016). It was really educational for me.
Going back to the question, R DT key and R DT refer to the keyed and unkeyed flavors of R’s data.table, and both happen to be faster in this benchmark than Python’s Pandas.
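For reference, the two flavors correspond to joins along these lines (a minimal sketch with made-up tables):

    library(data.table)
    X <- data.table(id = 1:5, x = letters[1:5])
    Y <- data.table(id = c(2L, 4L), y = c("p", "q"))

    setkey(X, id)     # "R DT key": the join uses the pre-set key
    X[Y]

    X2 <- data.table(id = 1:5, x = letters[1:5])
    X2[Y, on = "id"]  # "R DT" (unkeyed): ad hoc join, columns given via on=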
There are already great answers here, notably from the authors of both tools the question asks about. Matt’s answer explains the case reported in the question: it was caused by a bug, not by the merge algorithm. The bug was fixed the next day, already more than 7 years ago.
In my answer I will provide some up-to-date timings of the merge operation for data.table and pandas. Note that plyr and base R merge are not included.
The timings I am presenting come from the db-benchmark project, a continuously run, reproducible benchmark. It upgrades tools to their recent versions and re-runs the benchmark scripts. It covers many other software solutions as well; if you are interested in Spark, Dask, and a few others, be sure to check the link.
As of now… (still to be implemented: one more data size and 5 more questions)
We test 2 different data sizes of the LHS table.
For each of those data sizes we run 5 different merge questions.
q1: LHS inner join RHS-small on integer
q2: LHS inner join RHS-medium on integer
q3: LHS outer join RHS-medium on integer
q4: LHS inner join RHS-medium on factor (categorical)
q5: LHS inner join RHS-big on integer
The RHS table comes in 3 different sizes:
- small translates to the size of LHS/1e6
- medium translates to the size of LHS/1e3
- big translates to the size of LHS
In all cases around 90% of rows match between LHS and RHS, and there are no duplicates in the RHS join column (so no cartesian product).
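A minimal sketch of the kind of query being timed (not the actual db-benchmark code; sizes are shrunk here so it runs quickly):

    library(data.table)
    N <- 1e6                                            # toy size; the benchmark LHS is 0.5 GB / 5 GB
    K <- N / 1e3                                        # "medium" RHS, as in q2
    LHS <- data.table(id = sample(N, N, replace = TRUE), v1 = runif(N))
    RHS <- data.table(id = sample(N, K), v2 = runif(K))
    system.time(ans <- LHS[RHS, on = "id", nomatch = 0L])  # inner join on integer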
As of now (run on 2nd Nov 2019)
pandas 0.25.3 released on 1st Nov 2019
data.table 1.12.7 (92abb70) released on 2nd Nov 2019
The timings below are in seconds, for the two different LHS data sizes. The pd2dt column stores the ratio of the pandas timing to the data.table timing, i.e. how many times slower pandas is.
- 0.5 GB LHS data
+----------+------------+--------+-------+
| question | data.table | pandas | pd2dt |
+----------+------------+--------+-------+
| q1       |       0.51 |   3.60 |     7 |
| q2       |       0.50 |   7.37 |    14 |
| q3       |       0.90 |   4.82 |     5 |
| q4       |       0.47 |   5.86 |    12 |
| q5       |       2.55 |  54.10 |    21 |
+----------+------------+--------+-------+
- 5 GB LHS data
+----------+------------+--------+-------+
| question | data.table | pandas | pd2dt |
+----------+------------+--------+-------+
| q1       |       6.32 |   89.0 |    14 |
| q2       |       5.72 |  108.0 |    18 |
| q3       |      11.00 |   56.9 |     5 |
| q4       |       5.57 |   90.1 |    16 |
| q5       |      30.70 |  731.0 |    23 |
+----------+------------+--------+-------+