
Applying a uniform monitoring strategy to different Database Management Systems (DBMS) is an engineering mistake that introduces significant operational risks. Observability tools and APM (Application Performance Management) platforms that treat PostgreSQL, Oracle, and SQL Server as generic endpoints capture only superficial metrics, such as CPU utilization and I/O latency, failing to diagnose the root cause of performance anomalies. The reason for this failure is not in the tools, but in the flawed premise that these systems behave similarly under load.
The truth is that the fundamental architecture of each DBMS (its process and memory model, its concurrency control mechanism, and its physical storage structure) differs drastically from the others. These design differences dictate how resources are consumed, how bottlenecks manifest, and, consequently, which metrics and wait events should be prioritized. Ignoring this specificity is the equivalent of using a stethoscope to diagnose a software failure: the tool is inadequate for the nature of the problem.
HTI Tecnologia, whose expertise spans 24/7 support and consulting for heterogeneous data environments, operates on the principle that effective monitoring is platform-specific. The ability to go beyond generic metrics and interpret the internal signals of an Oracle, SQL Server, or PostgreSQL instance is what distinguishes reactive data management from proactive reliability engineering.
This article details three critical architectural differences between these DBMSs and the direct implications they have on any serious performance monitoring strategy.
1. Process Model and Memory Architecture
The way a DBMS manages client connections and allocates memory is perhaps its most defining architectural characteristic. It directly impacts scalability, resource isolation, and the methodology for diagnosing CPU and RAM consumption.
The Distinct Architecture of Each DBMS
- Oracle: Dedicated Processes and Shared Memory (SGA/PGA)
Oracle traditionally uses a multi-process architecture. For each client connection, a dedicated server process (or a shared one, in Shared Server configurations) is created in the operating system. Memory is divided into two main areas: the SGA (System Global Area), a large region shared by all processes for the data cache, execution plans, and other structures, and the PGA (Program Global Area), a private memory area for each server process, used for operations such as sorting and hash joins.
- SQL Server: Single Process and Worker Threads
SQL Server adopts a single-process model (sqlservr.exe). All connections are managed as worker threads within this single process. Memory is managed as a unified buffer pool, which dynamically allocates and deallocates space for the data cache, the plan cache, and other needs. This design was built around the Windows threading model, with SQL Server's own scheduling layer (SQLOS) mapping worker threads onto CPU schedulers.
- PostgreSQL: Process per Connection
PostgreSQL uses a process-per-connection model. A master process (postgres) accepts connections and, for each new connection, creates (forks) a new backend process in the operating system. Each backend process exclusively handles the queries for that connection. The main cache, the Shared Buffers, is shared among all processes, but each backend has its own local working memory (work_mem); the short sketch below shows how to inspect this split.
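As a quick illustration of the shared versus per-backend memory split, both settings can be inspected from any PostgreSQL session. This is a minimal sketch; the values returned naturally depend on your configuration.

```sql
-- PostgreSQL: shared_buffers is a single cache shared by every backend process;
-- work_mem is allocated privately, per sort/hash operation, inside each backend.
SHOW shared_buffers;
SHOW work_mem;
```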
Critical Implications for Monitoring
Treating these models as the same leads to completely wrong diagnoses.
Resource Consumption Diagnosis:
- Scenario: A single query is consuming excessive CPU.
- Oracle: You isolate the PID of the dedicated server process in the operating system and correlate it with the session (V$SESSION, V$PROCESS) to identify the user and the query. The memory consumption of that specific operation will be reflected in the use of the PGA.
- SQL Server: You will not find a separate process. The diagnosis occurs within SQL Server, by analyzing the worker threads through Dynamic Management Views (DMVs) like sys.dm_os_workers and sys.dm_exec_requests. CPU consumption is attributed to a session_id and a scheduler_id.
- PostgreSQL: Similar to Oracle, you can identify the PID of the backend process in the operating system and correlate it with the pg_stat_activity view. The memory consumption for sorts and joins is controlled by that process’s work_mem parameter. The query sketches after this list illustrate all three paths.
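The sketches below are minimal examples, assuming a user with read access to the relevant catalog views and DMVs; the PID 12345 is a hypothetical value taken from an OS tool such as top or Task Manager.

```sql
-- Oracle: map an OS process (SPID) back to its session, current SQL, and PGA usage
SELECT s.sid, s.serial#, s.username, s.sql_id, p.spid, p.pga_used_mem
FROM   v$session s
JOIN   v$process p ON p.addr = s.paddr
WHERE  p.spid = '12345';          -- hypothetical OS PID

-- SQL Server: there is no separate OS process; CPU is attributed to session_id and scheduler_id
SELECT r.session_id, r.scheduler_id, r.cpu_time, r.status, t.text AS current_sql
FROM   sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
ORDER BY r.cpu_time DESC;

-- PostgreSQL: correlate a backend's OS PID with its session and running query
SELECT pid, usename, state, wait_event_type, wait_event, query
FROM   pg_stat_activity
WHERE  pid = 12345;               -- hypothetical OS PID
```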
Connection Management and Scalability:
PostgreSQL’s architecture makes connection management a critical performance point. Creating a new process for each connection carries a significant cost in RAM and CPU. Therefore, an external connection pooler such as PgBouncer is not a recommendation but a requirement for high-concurrency applications. In SQL Server, a new connection maps to a worker thread inside the existing process and is considerably cheaper; in Oracle, dedicated server connections also spawn OS processes, which is one reason Shared Server and application-side pooling exist. A generic monitoring tool that only counts “active connections” fails to capture the real cost of a new connection in PostgreSQL; the query below breaks the backends down by state to make that cost visible.
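A minimal sketch; it assumes only the default pg_stat_activity view and a role allowed to read it.

```sql
-- PostgreSQL: each row in pg_stat_activity is a dedicated backend process,
-- so idle connections still carry real memory and scheduling cost.
SELECT state, count(*) AS backends
FROM   pg_stat_activity
GROUP BY state
ORDER BY backends DESC;
```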
2. Concurrency Control and Transaction Management
The way a database manages simultaneous access to the same data (concurrency) is a pillar of its transactional performance. The differences here define the types of blocking and contention bottlenecks a DBA needs to monitor.
The Distinct Architecture of Each DBMS
- Oracle and PostgreSQL: Multi-Version Concurrency Control (MVCC)
Both use MVCC. In this model, readers do not block writers, and writers do not block readers. When a row is modified, the DBMS creates a new version of the row, keeping the old version visible to transactions that started before the modification.
- Oracle manages the old versions of the data in dedicated undo segments within the undo tablespace.
- PostgreSQL stores the old versions (dead tuples) directly in the table itself. These dead tuples need to be cleaned up later by a process called VACUUM.
- SQL Server: Traditional Locking and Row Versioning (Optional MVCC)
SQL Server’s default model is based on pessimistic locking. Readers can block writers and vice versa, depending on the transaction’s isolation level. The DBMS uses a sophisticated lock manager to control access to resources (rows, pages, tables). Since SQL Server 2005, versioning-based isolation levels (ALLOW_SNAPSHOT_ISOLATION) have been available, creating behavior similar to MVCC by storing the row versions in tempdb; the commands below show how this is enabled.
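For reference, opting a SQL Server database into versioning-based isolation is an explicit choice. A minimal sketch, where SalesDB is a hypothetical database name:

```sql
-- T-SQL: enable snapshot isolation; row versions will be kept in tempdb
ALTER DATABASE SalesDB SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE SalesDB SET READ_COMMITTED_SNAPSHOT ON;  -- optional: READ COMMITTED also reads row versions

-- A session can then request the snapshot level explicitly
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
```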
Critical Implications for Monitoring
Monitoring transaction contention in these systems requires completely different approaches.
Diagnosis of Blocking and Slowness:
- Scenario: An application is slow, and users report that the system “froze.”
- SQL Server: The first suspicion is blocking. The DBA needs to investigate the blocking chain using DMVs like sys.dm_tran_locks and sys.dm_os_waiting_tasks to find the session_id at the head of the blocking chain and the transaction that is holding the resources.
- Oracle: Traditional blocking is less common due to MVCC. However, other types of contention can occur, such as waiting for latches (low-level locks that protect memory structures) or enqueues. The analysis focuses on views like V$LOCK and the wait events in V$SESSION_WAIT.
- PostgreSQL: The “freezing” problem is rarely caused by readers blocking writers. However, a problem unique to PostgreSQL is table bloat. If the VACUUM process cannot keep up with the rate of data modification, tables and indexes fill with dead tuples, making sequential scans (Seq Scan) extremely slow and consuming a lot of disk space. Monitoring PostgreSQL must include tracking the health of VACUUM and the dead-tuple statistics (pg_stat_user_tables). The query sketches after this list show a starting point for each platform.
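Minimal sketches, assuming read access to the catalog views and DMVs mentioned above:

```sql
-- SQL Server: who is waiting on whom (follow blocking_session_id to the head of the chain)
SELECT session_id, blocking_session_id, wait_type, wait_duration_ms, resource_description
FROM   sys.dm_os_waiting_tasks
WHERE  blocking_session_id IS NOT NULL;

-- Oracle: sessions currently blocked by another session, and the wait event they are on
SELECT sid, serial#, blocking_session, event, seconds_in_wait
FROM   v$session
WHERE  blocking_session IS NOT NULL;

-- PostgreSQL: tables accumulating dead tuples (bloat candidates VACUUM has not yet reclaimed)
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM   pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```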

Transaction History Management:
- In Oracle, the DBA must monitor the Undo Tablespace to ensure it is large enough to support long transactions, avoiding the infamous “ORA-01555: Snapshot too old” error.
- In SQL Server (with snapshot isolation), the DBA must monitor the usage of tempdb, as it is used to store the row versions and can become a bottleneck.
- In PostgreSQL, monitoring autovacuum is a high-priority task. It is necessary to check whether it is running frequently enough and whether it is being held back by long-running transactions. The queries below sketch one check per platform.
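Minimal sketches, assuming default monitoring privileges; the SQL Server query reads tempdb's file space usage, which is where the version store lives.

```sql
-- Oracle: undo activity per 10-minute slot, including "snapshot too old" occurrences (SSOLDERRCNT)
SELECT begin_time, undoblks, maxquerylen, ssolderrcnt
FROM   v$undostat
ORDER BY begin_time DESC;

-- SQL Server: space the version store is holding inside tempdb (8 KB pages converted to MB)
SELECT SUM(version_store_reserved_page_count) * 8 / 1024 AS version_store_mb
FROM   tempdb.sys.dm_db_file_space_usage;

-- PostgreSQL: long-running transactions that prevent VACUUM from removing dead tuples
SELECT pid, usename, state, now() - xact_start AS xact_age
FROM   pg_stat_activity
WHERE  xact_start IS NOT NULL
ORDER BY xact_start;
```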
3. Physical Storage Structure and I/O
The way data is organized on disk directly affects I/O monitoring, space management, and backup and recovery operations.
The Distinct Architecture of Each DBMS
- Oracle: Logical Abstraction over Physical (Tablespaces)
Oracle uses a strong abstraction with Tablespaces. A tablespace is a logical unit of storage that is mapped to one or more physical data files. This allows granular control over where database objects are stored, facilitating I/O and backup policies.
- SQL Server: Data and Log Files (MDF/LDF)
SQL Server stores data in primary (.mdf), secondary (.ndf), and transaction log (.ldf) files. These files are grouped into Filegroups, which offer functionality similar to, but less flexible than, Oracle’s tablespaces. The transaction log is a critical component, and its management is one of the main tasks of a SQL Server DBA.
- PostgreSQL: One File per Object
PostgreSQL, in its default configuration, maps each table and index to its own file in the server’s file system. This simplifies some operations but can also lead to a large number of files for databases with many objects. Like Oracle, it uses the concept of Tablespaces to allow objects to be stored on different disk devices. Its transaction log is called the Write-Ahead Log (WAL). The queries below show how each platform exposes this physical layout.
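A few minimal catalog queries; the PostgreSQL table name is hypothetical.

```sql
-- Oracle: which physical data files back each tablespace
SELECT tablespace_name, file_name
FROM   dba_data_files
ORDER BY tablespace_name;

-- SQL Server: data (.mdf/.ndf) and log (.ldf) files of the current database
SELECT name, type_desc, physical_name
FROM   sys.database_files;

-- PostgreSQL: the file (relative to the data directory) that stores a given table
SELECT pg_relation_filepath('my_table');   -- hypothetical table name
```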
Critical Implications for Monitoring
I/O Performance Analysis:
- Oracle: Using views like V$IOSTAT_FILE, a DBA can identify exactly which tablespaces and datafiles are under the heaviest I/O load, allowing them to move high-activity objects to faster disks.
- SQL Server: I/O analysis is done at the file level using sys.dm_io_virtual_file_stats (the successor to the legacy fn_virtualfilestats function). A common bottleneck is contention on the transaction log file, which can be diagnosed through the WRITELOG wait type.
- PostgreSQL: I/O monitoring is often done in conjunction with operating system tools, but views such as pg_statio_user_tables (and, from PostgreSQL 16, the cluster-wide pg_stat_io) provide insight into I/O patterns (buffer-cache hit vs. disk read) per table. A critical monitoring point is WAL activity, as a high WAL generation rate can saturate the disk subsystem. The sketches after this list show one I/O query per platform.
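Minimal sketches; column sets vary slightly between versions, so treat these as starting points rather than finished monitoring queries.

```sql
-- Oracle: I/O requests per database file type (available from 11g onward)
SELECT file_no, filetype_name, small_read_reqs, large_read_reqs, small_write_reqs
FROM   v$iostat_file;

-- SQL Server: cumulative reads, writes, and I/O stalls per database file
SELECT DB_NAME(database_id) AS database_name, file_id,
       num_of_reads, num_of_writes, io_stall_read_ms, io_stall_write_ms
FROM   sys.dm_io_virtual_file_stats(NULL, NULL)
ORDER BY io_stall_read_ms + io_stall_write_ms DESC;

-- PostgreSQL: buffer-cache hits vs. blocks read from disk, per table
SELECT relname, heap_blks_read, heap_blks_hit
FROM   pg_statio_user_tables
ORDER BY heap_blks_read DESC
LIMIT 10;
```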
Space Management:
- In Oracle, space monitoring is centralized on tablespaces.
- In SQL Server, it is necessary to monitor the growth of the .mdf, .ndf, and, crucially, the .ldf files, as a full transaction log can paralyze the entire database.
- In PostgreSQL, in addition to overall disk space, it is necessary to monitor the bloat of tables and indexes, which is the space occupied by dead tuples that have not yet been cleaned up by VACUUM. The queries below give a representative space check for each platform.
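Minimal sketches, assuming standard monitoring privileges; the SQL Server view requires SQL Server 2012 or later.

```sql
-- Oracle: percentage used per tablespace
SELECT tablespace_name, used_percent
FROM   dba_tablespace_usage_metrics;

-- SQL Server: how full the transaction log of the current database is
SELECT used_log_space_in_percent
FROM   sys.dm_db_log_space_usage;

-- PostgreSQL: total on-disk size (table + indexes + TOAST) of the largest relations
SELECT relname, pg_size_pretty(pg_total_relation_size(relid)) AS total_size
FROM   pg_stat_user_tables
ORDER BY pg_total_relation_size(relid) DESC
LIMIT 10;
```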
The Expertise That Fills the Monitoring Gaps
The architectural differences detailed above demonstrate that a generic monitoring tool is, at best, incomplete. It can alert on high disk latency, but it cannot discern whether the cause is a poorly managed transaction log in SQL Server, intense VACUUM activity in PostgreSQL, or contention in the Undo tablespace in Oracle.
This diagnostic gap is where the expertise of a specialized DBA team becomes a strategic asset.
- Specific Multi-Platform Knowledge: Understanding the nuances of each DBMS is not something learned in a quick course. It is the result of years of experience resolving complex incidents on each platform. The team at HTI Tecnologia is composed of specialists in each of these systems, ensuring that the diagnosis is accurate and the solution effective.
- Operational Risk Reduction: Having an in-house team that deeply masters all these systems is unfeasible for most companies. Partnering with a managed services provider mitigates the risk of having a “single point of failure” in a DBA who masters only one technology. Our Database Consulting provides access to this diverse pool of knowledge.
- 24/7 Business Continuity: A performance problem doesn’t choose a time to happen. With our 24/7 Support and Sustaining service, your company is guaranteed that a specialist, with the specific knowledge needed for your database platform, will be available to act immediately.
Effective Monitoring is Architecture-Based
Treating database monitoring as a commodity is a strategic failure that leads to inaccurate diagnoses, longer resolution times, and, ultimately, avoidable service interruptions. The performance, stability, and scalability of your database depend on a monitoring strategy that respects its fundamental architecture.
PostgreSQL, Oracle, and SQL Server are distinct tools, designed with different philosophies to solve similar problems. Demanding that they be monitored in the same way is not just ineffective; it is dangerous. True observability comes from the combination of the right tools with the deep, platform-specific knowledge to interpret the data they provide.
Does your current monitoring strategy have the same “blind spots” we discussed? Schedule a conversation with one of our specialists and discover how platform-specific monitoring can protect your operation.
Recommended Reading
- DBA Consulting: the strategic role that prevents failures and ensures operational continuity: This article details how consulting goes beyond the reactive DBA, focusing on architecture and prevention. The reading is essential to understand how the expertise discussed in the monitoring of PostgreSQL, Oracle, and SQL Server translates into a strategic service that ensures business resilience.
- Server monitoring: what automated tools don’t show (and only experts can identify): Explore the performance anomalies that generic tools do not capture, a direct complement to the central theme of this article. The text reinforces that, regardless of the DBMS, true observability depends on the expert’s deep knowledge to interpret the signals that the tools cannot contextualize.
- How to detect performance bottlenecks before the user complains: This guide focuses on proactive methodologies to find and solve performance problems before they impact the operation. The reading connects the technical differences in monitoring between DBMSs to the ultimate goal: to use this platform-specific knowledge to maintain the performance and availability of the systems preventively.