MongoDB is a very popular NoSQL database. It popularized the concept of the “document”: a hierarchical set of key-value pairs with a non-fixed structure (schema). Flexible schemas and the 1:1 mapping between business logic and database objects are, arguably, key to MongoDB’s popularity, and among the reasons why MongoDB is used as the source of truth for many OLTP applications.
Surprising as it may be to some, a dynamic schema neither removes the need for data design nor shields you from the performance implications of that design:
When designing data models, always consider the application usage of the data (i.e. queries, updates, and processing of the data) as well as the inherent structure of the data itself.
But what if the queries cannot be known ahead of time? What if different sets of queries have almost orthogonal design needs? After all, relational theory and data normalization are aimed precisely at solving this problem: a data representation that is completely independent of the queries.
Queries that retrieve a document, or part of one, typically via an index, perform very well. But what are the consequences of running queries on MongoDB that the data was not modeled for? How do aggregate queries, which represent a completely different pattern from the typical OLTP gimme-this-single-document query, perform? How does MongoDB perform with Business Intelligence (BI) applications, where the queries are unknown and left to the end users?
Let’s reason intuitively first. What does NoSQL’s “schema-less” concept mean? It means that any document, with any given structure (there’s always a structure), is stored in the system as-is, one after the other. So when documents mix information, with data belonging to different “parts” of the business logic and some documents having properties that others lack, what does the collection look like? Chaos!
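As a tiny, hypothetical illustration (the collection name and document contents are made up, just to show the point), the same collection happily accepts completely different document shapes:

// Both inserts succeed: MongoDB imposes no structure on the collection,
// so documents from unrelated "parts" of the business logic can coexist.
db.events.insertOne({ type: "PushEvent", actor: { login: "alice" }, commits: 3 })
db.events.insertOne({ invoice: 42, customer: "ACME", total: 99.90 })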
Aggregate queries typically require scanning many documents, looking for specific properties of the data, often without the help of an index. Since NoSQL stores documents with any structure and data, how does the database know which documents match the query criteria? It doesn’t! It has to scan the whole dataset (normally, the whole collection) to answer the query.
This scan is not cheap. This is how it works in MongoDB: first, the data may need to be decompressed; then every BSON document needs to be traversed and evaluated against the query criteria. In the worst case, every single key-value pair of every document has to be evaluated.
Let’s use the Github Archive dataset as an example, and illustrate a query to return the top 10 most active actors. This is how MongoDB would internally process all the data in the collection to find the query results:
And the MongoDB aggregate query:
db.githubarchive.aggregate([
    { $group: { _id: '$actor.login', events: { $sum: 1 } } },
    { $sort: { events: -1 } },
    { $limit: 10 }
])
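To verify that this pipeline really scans the whole collection, the execution plan can be requested with the aggregate explain option (a quick sketch; the exact shape of the output depends on the MongoDB version):

// Ask for the query plan instead of the results. For this pipeline the
// plan should show a COLLSCAN stage: no index can serve the $group on
// actor.login, so every document in the collection is read and parsed.
db.githubarchive.aggregate([
    { $group: { _id: '$actor.login', events: { $sum: 1 } } },
    { $sort: { events: -1 } },
    { $limit: 10 }
], { explain: true })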
In comparison, an RDBMS with a normalized data design will typically target a subset of the tables, and only the columns involved will be evaluated. Much less I/O, which is normally the bottleneck of a database, will be required to answer the same query. In this case, only the github_actor table is needed. The query on a PostgreSQL relational database would be:
SELECT count(*), login FROM github_actor GROUP BY login ORDER BY 1 DESC FETCH FIRST 10 ROWS ONLY;
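For reference, a minimal sketch of what the normalized github_actor table could look like (the column names and types here are illustrative assumptions, not the exact schema used for the benchmark):

-- Hypothetical, simplified definition of the normalized actor table.
-- The aggregate above only ever touches the login column, so far fewer
-- pages need to be read than when scanning whole JSON/BSON documents.
CREATE TABLE github_actor (
    id         bigint PRIMARY KEY,
    login      text NOT NULL,
    url        text,
    avatar_url text
);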
The I/O required by both queries can be measured using iotop -o -a (show only processes that are actually doing I/O, and report accumulated totals):
Using a 1.4 GB sample of the Github Archive dataset, the query in MongoDB requires exactly the same I/O as the collection’s reported storage size (536.37 MB). This was expected, since the whole collection is scanned. PostgreSQL reads just one sixth of that data to answer the query. Taking this further, if we use Citus Data’s cstore (a columnar storage extension for PostgreSQL), the results are even more striking: answering the query requires just 1/100th of the I/O that MongoDB requires:
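For completeness, this is roughly how such a columnar table can be set up with cstore_fdw, the extension in question (a sketch under illustrative assumptions: the column list is abbreviated and matches the hypothetical github_actor table above):

-- Load the columnar foreign data wrapper and define a server for it.
CREATE EXTENSION cstore_fdw;
CREATE SERVER cstore_server FOREIGN DATA WRAPPER cstore_fdw;

-- Columnar copy of the actor table. Only the blocks of the columns the
-- query touches (login) are read, which explains the far lower I/O.
CREATE FOREIGN TABLE github_actor_columnar (
    id    bigint,
    login text
) SERVER cstore_server
  OPTIONS (compression 'pglz');

-- Bulk-load from the row-store table, then point the same query at it.
INSERT INTO github_actor_columnar SELECT id, login FROM github_actor;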
So, intuitively, the performance of aggregate queries on NoSQL, and more precisely on MongoDB, should not be good. What is the reality? We measured the execution time of the above queries on MongoDB and PostgreSQL over 1 GB, 10 GB and 100 GB samples of the Github Archive dataset (measuring query execution time; less is better):
Without zooming in, it is hard to appreciate PostgreSQL’s results:
Similar aggregate queries show the same pattern. At 8Kdata, while developing ToroDB, we have run dozens of similar queries. The results are consistently clear: a relational design almost always outperforms MongoDB aggregate queries by a significant factor, often by orders of magnitude. Although we have seen even more pathological cases (i.e., even bigger differences), it can safely be said that a relational design is, roughly, “100x faster”.
So one solution for BI/aggregate queries on MongoDB is to bring the data to SQL! Indeed, Stripe did exactly that, and there are ETL tools too. However, all of these solutions require complex processes and designing the SQL schema of the destination data on your RDBMS. There’s no easy way to bring MongoDB to the relational world.
Hope is not lost, though. In a few days, 8Kdata will be publicly announcing a solution to this problem. If you are interested, keep watching this blog, subscribe to our newsletter and follow us on Twitter. Stay tuned!