
You’ve felt that chill down your spine when a page is slow to load, or a critical report takes an eternity to generate. The truth is, in the world of data, sluggishness isn’t a “small problem”; it’s a real threat to your company’s reputation and, more importantly, its revenue. Most of the time, the root of the issue is hidden where you’d least expect it: in your queries. And if you work with MongoDB, you know that the flexibility of NoSQL can, ironically, be the gateway to disastrous performance. The MongoDB Profiler doesn’t lie. It’s your most honest ally, exposing the queries that are draining your cluster’s performance.
In this article, HTI Tecnologia, a specialist in consulting and 24/7 database support, will guide you down a path that many IT managers and DBAs prefer to ignore. We’ll delve into the dark world of inefficient queries and show you how to identify, analyze, and finally, optimize your database to ensure the high performance, availability, and security that your company needs. After all, a fast and responsive application is the backbone of a successful business.
The Most Underestimated MongoDB Tool: The Profiler
When we talk about observability in NoSQL databases, most teams focus on high-level metrics like CPU usage, disk I/O, and RAM. These metrics are important, without a doubt, but they’re like seeing the smoke from a fire without knowing the source. The MongoDB Profiler, on the other hand, goes directly to the cause. It’s a built-in performance analysis tool designed to capture and log query operations (reads, writes, updates, and deletes) that exceed a predefined time limit.
What does the Profiler actually do? It operates as a detailed log, documenting every slow operation and providing crucial information such as the exact query that was executed, the execution time (in milliseconds), the type of operation, and the number of documents inspected. This wealth of detail is what allows a specialized DBA to diagnose problems with surgical precision, instead of making assumptions. Most of the slowness issues in applications that use MongoDB are not in the infrastructure, but rather in how the data is requested.
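To make this concrete, below is an illustrative (trimmed) example of what one of those logged entries looks like. The values are made up, but the field names (op, ns, millis, docsExamined, nreturned, planSummary) are the standard ones MongoDB records for each captured operation, and they are exactly what the analysis in the rest of this article relies on.
// Illustrative example of a document the Profiler writes to system.profile
// (values are made up; field names are the standard ones MongoDB records)
/*
{
  "op" : "query",
  "ns" : "mydatabase.mycollection",
  "command" : { "find" : "mycollection", "filter" : { "status" : "pending" } },
  "keysExamined" : 0,
  "docsExamined" : 1250000,
  "nreturned" : 42,
  "planSummary" : "COLLSCAN",
  "millis" : 860,
  "ts" : ISODate("2024-01-15T10:23:45Z")
}
*/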
How to activate the Profiler (and why you should be cautious)? Activating the Profiler is a simple process, but it requires attention, especially in production environments. The command db.setProfilingLevel(level) is the starting point, where the level can be:
- 0 (Off): The Profiler is disabled. This is the default setting.
- 1 (On – Slow Operations Only): MongoDB logs all operations that exceed the slowOpThresholdMs time limit (100 ms by default). This is the recommended level for production, as it generates minimal impact.
- 2 (On – All Operations): MongoDB logs all operations, regardless of execution time. This level is extremely useful for intensive debugging but is not recommended for production environments due to its high performance cost. The volume of data generated can overload the disk and CPU, causing the very problem you are trying to solve.
// Activating the Profiler at level 1 (slow operations)
// This is the most recommended for production environments.
db.setProfilingLevel(1);
// Optional: To set a custom time limit for slow operations (in milliseconds)
// For example, to log operations that take longer than 50ms.
// Note: the shell option is named slowms (the config-file equivalent is operationProfiling.slowOpThresholdMs)
db.setProfilingLevel(1, { slowms: 50 });
// To activate the Profiler at level 2 (all operations) - use with caution!
// Generally for debugging in development/staging environments.
// db.setProfilingLevel(2);
// To deactivate the Profiler
// db.setProfilingLevel(0);
// Checking the current Profiler status
db.getProfilingStatus();
/*
Example output:
{ "was" : 1, "slowOpThresholdMs" : 100, "sampleRate" : 1 }
'was': 0 (off), 1 (on for slow ops), 2 (on for all ops)
'slowOpThresholdMs': time limit in ms for slow operations
*/
// To see the 10 most recent operations logged by the Profiler
db.system.profile.find().sort({ ts: -1 }).limit(10).pretty();
// To see operations that took longer than 500ms
db.system.profile.find({ millis: { $gt: 500 } }).pretty();
// To see all 'find' operations in a specific collection that were slow
db.system.profile.find({ op: "query", ns: "mydatabase.mycollection" }).pretty();
// To see operations that took longer than 100ms in a specific collection
db.system.profile.find({ ns: "mydatabase.mycollection", millis: { $gt: 100 } }).pretty();
// Example of a slow query you can run for the profiler to capture it
// (Make sure you have enough data for it to be slow)
// db.mycollection.find({ $where: "this.value > 1000000" }).toArray();
Once activated, the Profiler stores data in an internal collection called system.profile. From there, data analysis becomes an art. You need to know which queries to look for, which metrics to ignore, and how to translate the raw data into concrete optimization actions.
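Before diving into individual queries, a quick triage often helps. The sketch below is a minimal example rather than a prescribed workflow: it groups the Profiler’s entries by collection and operation type to show where the accumulated time is going. The 100 ms threshold is illustrative.
// Minimal triage sketch: which collections and operation types accumulate the most time?
db.system.profile.aggregate([
  { $match: { millis: { $gt: 100 } } },      // illustrative threshold: only operations slower than 100 ms
  { $group: {
      _id: { ns: "$ns", op: "$op" },         // group by namespace and operation type
      count: { $sum: 1 },                    // number of slow operations
      totalMillis: { $sum: "$millis" },      // accumulated time
      avgMillis: { $avg: "$millis" }         // average time per operation
  } },
  { $sort: { totalMillis: -1 } },            // worst offenders first
  { $limit: 10 }
]);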

3 Signs the Profiler Reveals About Your Database’s Health
The true power of the Profiler lies in its ability to expose your database’s performance sins. A specialized DBA knows exactly what to look for. And the three most common signs that your MongoDB is suffering are:
Queries with Full Collection Scan: The worst performance offense. A “Full Collection Scan” happens when MongoDB needs to scan all documents in a collection to find those that match your query. This is the digital equivalent of looking for a needle in a haystack without a magnet: slow, costly, and completely unnecessary. The Profiler signals this inefficiency with the "planSummary": "COLLSCAN" field. The presence of this term is a giant red flag, indicating the absence of a proper index.
How the Profiler helps: It not only tells you that a COLLSCAN occurred but also shows you the exact query that caused it. With this information, an HTI Tecnologia specialist can quickly identify the need to create a specific index, such as in a query that searches for the ‘pending’ status in a collection of millions of documents. Creating an index on { status: 1 } would turn an operation of seconds (or minutes) into one of milliseconds.
// Example of a query that can cause a COLLSCAN if there is no index on 'status'
// Assume 'orders' is a large collection.
db.orders.find({ status: "pending" }).pretty();
// Checking the execution plan (before the index)
db.orders.find({ status: "pending" }).explain("executionStats");
/*
If the winning plan's stage is "COLLSCAN" (and the corresponding system.profile
entry shows "planSummary": "COLLSCAN"), this confirms the problem.
*/
// Creating the index to optimize the query above
db.orders.createIndex({ status: 1 });
// After creating the index, check the execution plan again
db.orders.find({ status: "pending" }).explain("executionStats");
/*
Now the plan should show "IXSCAN" (Index Scan), indicating that the index is being
used, and "totalDocsExamined" will be close to "nReturned" instead of the total
number of documents in the collection.
*/
Queries with Incorrect or Non-Existent Indexes: Most developers know they need indexes, but few understand the importance of having the right ones. An index can be useless if the query doesn’t filter on the index’s leading field, or if the order of the fields in a compound index doesn’t match the query. The Profiler reveals these queries that, despite the existence of indexes, remain inefficient. It shows the planSummary and the executionStats, revealing how many documents were scanned versus how many were returned.
Optimizing indexes: The Profiler analysis allows an HTI Tecnologia DBA to not only create indexes but also optimize them. This includes creating compound indexes (for queries with multiple filters), text indexes (for text searches), and even removing unnecessary indexes that can harm the performance of write operations. HTI Tecnologia masters the art of balancing read and write performance, a vital skill in mission-critical environments.
// Example of a query that would benefit from a compound index
// Assume a 'products' collection with millions of documents
db.products.find({ category: "electronics", inStock: true }).pretty();
// If there is only an index on 'category: 1' or 'inStock: 1',
// the query can still be inefficient.
// Check the execution plan:
db.products.find({ category: "electronics", inStock: true }).explain("executionStats");
/*
If the "planSummary" shows "IXSCAN" but "totalDocsExamined" is high, it may indicate that only part of the index is being used or that another filter still requires a scan.
*/
// Creating a proper compound index for the query
db.products.createIndex({ category: 1, inStock: 1 });
// Check the execution plan again after the compound index
db.products.find({ category: "electronics", inStock: true }).explain("executionStats");
/*
The "planSummary" should now indicate a more efficient "IXSCAN", with a significantly smaller "totalDocsExamined".
*/
// Example of removing an unused index (identified by the Profiler)
// First, list the existing indexes in the collection
db.products.getIndexes();
// Then, remove a specific index by its name (e.g., "fieldName_1")
// db.products.dropIndex("fieldName_1");
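The Profiler shows which queries are slow, but it does not show which indexes are never used. A complementary check, sketched below under the assumption that you are on MongoDB 3.2 or later, is the $indexStats aggregation stage, which reports how often each index has been accessed since the last restart; an index with essentially zero accesses over a representative period is a candidate for removal.
// Complementary sketch: list indexes by how rarely they are used before deciding what to drop
db.products.aggregate([
  { $indexStats: {} },                                               // one document per index, with usage counters
  { $project: { name: 1, "accesses.ops": 1, "accesses.since": 1 } },
  { $sort: { "accesses.ops": 1 } }                                   // least-used indexes first
]);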
Queries with Excessive $lookup and $unwind: The flexibility of MongoDB often leads to complex data architectures. Operations like $lookup (which simulates a relational join) and $unwind (which deconstructs arrays) are powerful but have a significant performance cost. The improper or excessive use of these operations, especially with large volumes of data, can lock up your database and dramatically drain resources. The Profiler captures the execution time of these aggregation operations, exposing bottlenecks that would not be visible otherwise.
The solution goes beyond the query: In these cases, the solution is not just to optimize the query but to rethink the data modeling. An HTI Tecnologia specialist could, for example, suggest denormalizing data to avoid the $lookup (a sketch of this idea appears after the pipeline examples below), or restructuring an aggregation so that $unwind is more efficient. Expertise in MongoDB Data Modeling is one of HTI’s consulting services that saves companies from escalating performance problems.

// Example of an aggregation pipeline with $lookup and $unwind that can be costly
// Assume 'orders' and 'customers' collections
// We want to find orders with customer details
db.orders.aggregate([
  {
    $lookup: {
      from: "customers",          // The collection to join with
      localField: "customerId",   // Field from the input documents (orders)
      foreignField: "_id",        // Field from the documents of the "from" collection (customers)
      as: "customerInfo"          // The new array field to add to the input documents
    }
  },
  {
    $unwind: "$customerInfo"      // Deconstructs the array; can be costly with many matches
  },
  {
    $match: {
      "customerInfo.country": "Brazil"
    }
  }
]).pretty();
// To analyze the performance of an aggregation pipeline
db.orders.aggregate([
  {
    $lookup: {
      from: "customers",
      localField: "customerId",
      foreignField: "_id",
      as: "customerInfo"
    }
  },
  {
    $unwind: "$customerInfo"
  },
  {
    $match: {
      "customerInfo.country": "Brazil"
    }
  }
]).explain("executionStats");
/*
Analyzing the "executionStats" output will reveal the slowest stages.
In particular, look for "totalDocsExamined" and "nReturned" in each stage,
as well as the total execution time.
*/
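To illustrate the denormalization mentioned above, the sketch below embeds the customer fields that the report actually needs directly in each order document. This is a hypothetical model, assuming the application can tolerate duplicated customer data that must be kept in sync on writes; it is not a universal recommendation, but it turns the $lookup + $unwind pipeline into a simple indexed find.
// Hypothetical denormalized order: the customer fields the report needs are embedded in the document
db.orders.insertOne({
  customerId: ObjectId(),                               // reference kept for operations that need the full customer
  customer: { name: "Ana Souza", country: "Brazil" },   // duplicated fields, kept in sync on writes
  total: 199.90,
  status: "pending"
});
// With a supporting index, the same report becomes a plain indexed find, with no join and no array deconstruction
db.orders.createIndex({ "customer.country": 1 });
db.orders.find({ "customer.country": "Brazil" });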
From Analysis to Action: How HTI Tecnologia Transforms the Scenario
Identifying problems is only half the battle. The other half, the most difficult part, is solving them. The complexity of analyzing the Profiler, creating and optimizing indexes, and restructuring queries requires a level of knowledge and experience that many internal IT teams simply don’t have, as they are overwhelmed with day-to-day tasks. This is where outsourcing your DBA becomes the smartest strategic decision.
By opting for HTI Tecnologia’s 24/7 support and maintenance, your company doesn’t just hire one DBA, but a whole team of MongoDB experts, with years of experience in high-criticality environments. Our team doesn’t just put out fires; we prevent them. The preventative action, based on the continuous analysis of performance metrics and Profiler data, ensures that your database is always operating at its maximum capacity.
The result? Technical focus, as your team can dedicate themselves to the core business while we take care of your data’s health; risk reduction, because human errors are minimized by experience and automation; and operational continuity, with support available at all times, ensuring your business never stops.
One of our recent success stories involved a large e-commerce company that was struggling with sales reports that took more than 10 minutes to generate. Our HTI team used the Profiler, identified aggregation queries that were doing full scans on huge collections, and implemented a set of compound indexes and an optimization in the aggregation pipeline. The result was a 95% reduction in execution time, with the report generating in less than 30 seconds. Read more about how HTI Tecnologia can solve complex database performance problems.
Don’t Let Slow Queries Undermine Your Business
The MongoDB Profiler is a powerful tool, but its true value is only realized in the hands of a specialist. It is the diagnosis, and HTI Tecnologia is the solution. Query optimization is not a task to be done in “spare time”; it is a continuous job, fundamental to the performance, availability, and security of any application that depends on data.
The time has come to stop putting out fires and start building a future with performance and security. Schedule a meeting with an HTI Tecnologia specialist and discover how we can use our MongoDB expertise to transform your database’s performance.
Visit our Blog
Learn more about databases
Learn about monitoring with advanced tools

Have questions about our services? Visit our FAQ
Want to see how we’ve helped other companies? Check out what our clients say in these testimonials!
Discover the History of HTI Tecnologia