Monitoring and Security: The Forgotten Link That Exposes Your Company to Risks

Data security governance and infrastructure performance monitoring are, in most organizations, two disciplines that operate in silos. The Information Security (InfoSec) team uses firewalls, WAFs, and SIEM (Security Information and Event Management) systems to analyze authentication logs and network patterns. The Site Reliability Engineering (SRE) or Operations (Ops) team uses APM and observability platforms to analyze query latency, CPU consumption, and I/O utilization. The tools are different, the dashboards are separate, and, crucially, the objectives are perceived as distinct.

This separation is one of the most dangerous and underestimated vulnerabilities in modern IT architecture. It creates a fundamental “blind spot” because it assumes that a security problem and a performance problem are mutually exclusive events. The reality is that many of the most damaging malicious activities, such as unauthorized access, data exfiltration, and privilege abuse, do not generate traditional security alerts but first manifest as performance anomalies in the database.

At HTI Tecnologia, our approach to 24/7 support for critical data environments is based on the premise that security and performance are inseparable. The performance telemetry of a database is one of the richest security data streams available, if we know how to interpret it. Ignoring this link leaves a door open to risks that conventional security tools were not designed to see.

This technical article details three types of security breaches that traditional performance monitoring ignores and how an integrated analysis, focused on database telemetry, can detect and prevent them.

Why Security and Performance Tools Fail in Isolation

To understand the vulnerability, one must first recognize the limitations of each discipline when they operate in isolation.

The View of Traditional Security (SIEM): A SIEM is excellent for correlating log events from multiple sources. It can detect a brute-force attempt on the database login or access from an unauthorized IP address. However, once an attacker obtains valid credentials (e.g., through a leaked API key from an application), to the SIEM, their actions within the database appear legitimate. The SIEM sees the login, but not the subsequent query behavior.

The View of Performance Monitoring (APM): An APM tool is excellent for identifying slow queries. It can alert that SELECT * FROM transactions is taking 30 seconds to execute. However, the APM lacks the context to know if this query is a normal part of a month-end report or if it’s a developer inappropriately accessing data in production. The APM sees the slowness, but not the intent or the legitimacy of the access.

The security breach lies in the space between the login and the slowness. It is in this space that the analysis of an expert DBA, who understands both security architecture and performance optimization, becomes the most effective detection mechanism.

3 Security Breaches Detectable Through Performance Telemetry

Below, we present realistic attack scenarios that would be difficult to detect with conventional security tools but that leave clear traces in the database’s performance telemetry.

1. Unauthorized Access and Data Discovery

In this scenario, an attacker or a malicious insider has obtained low-privilege access credentials. Their initial goal is not to steal data, but to map the database structure to find tables with valuable information (PII, financial data, etc.).

  • What traditional security sees: A series of successful logins from an authorized user or service account. The executed queries (SELECT COUNT(*) FROM…, SELECT * FROM … LIMIT 10) are syntactically valid and do not trigger SQL Injection alerts. To the SIEM, the activity appears normal.
  • What performance monitoring reveals: The activity of an attacker exploring a database is radically different from the workload pattern of an application.
    • Anomalous I/O Pattern: A typical application executes a repetitive set of optimized queries that, ideally, use indexes and read few data pages from the disk. The attacker, on the other hand, executes exploratory and ad-hoc queries. This manifests as a sudden increase in logical reads and physical reads. In an Oracle AWR or PostgreSQL’s pg_stat_statements, we would see the emergence of queries with a high BLOCK_GETS or shared_blks_read that have never appeared before.
    • Execution Plan Degradation: The attacker’s queries will almost certainly not be optimized. They will result in Full Table Scans on tables that are normally accessed via an index. An increase in the number of Seq Scans in PostgreSQL or Table Scans in SQL Server, correlated with a specific user account, is a strong indicator of anomalous activity.
    • Cache Saturation: The execution of these table scans “pollutes” the database’s buffer cache, evicting the data blocks that are useful for the application. This manifests as a drop in the Buffer Cache Hit Ratio metric. For the application, the symptom is slowness, as it now needs to read more data from the disk. For the analyst, it is a sign that something is reading an unusual volume of data.
  • Our team does not treat a Full Table Scan merely as a performance problem. We contextualize it: Who is executing it? From where (IP, application)? At what time? Is this query part of the known application workload? Correlating performance telemetry (the what) with session metadata (the who and the where) transforms a “slow query problem” into a “potential security incident.”
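As a minimal sketch of this correlation, the logic can be reduced to flagging queries that are both new to the known workload and read-heavy. The data shapes below are hypothetical; in practice the snapshot rows would come from a source such as PostgreSQL’s pg_stat_statements, and the baseline would be built from the application’s historical query hashes:

```python
# Sketch: flag queries whose read volume is anomalous relative to a
# known-workload baseline. Field names mirror pg_stat_statements
# conventions, but the input here is illustrative sample data.

def flag_anomalous_queries(baseline, snapshot, read_threshold=10_000):
    """baseline: set of query hashes seen in the normal application workload.
    snapshot: list of dicts with 'queryid', 'user', 'shared_blks_read'.
    Returns rows that are both new to the workload and read-heavy."""
    return [
        row for row in snapshot
        if row["queryid"] not in baseline
        and row["shared_blks_read"] >= read_threshold
    ]

baseline = {"q1", "q2", "q3"}  # hashes of the known application queries
snapshot = [
    {"queryid": "q1", "user": "app_svc", "shared_blks_read": 120},
    {"queryid": "q9", "user": "jdoe", "shared_blks_read": 850_000},  # ad-hoc scan
]

suspects = flag_anomalous_queries(baseline, snapshot)
for s in suspects:
    print(f"ALERT: new read-heavy query {s['queryid']} by user {s['user']}")
```

The point of the sketch is the join between the what (block reads) and the who (the session user): neither signal alone would have been enough to raise an alert.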
2. Slow and Continuous Data Exfiltration (“Low and Slow”)

This is a more sophisticated attack. The attacker has already identified the valuable data and now wants to extract it without being detected. Instead of executing a massive SELECT * that would trigger all the alarms, they extract the data in small chunks, over days or weeks.

  • What traditional security sees: Normal TLS/SSL network traffic leaving the database server. The individual queries are small, fast, and do not consume many resources, remaining below the alert thresholds of any APM tool. To all conventional tools, nothing abnormal is happening.
  • What performance monitoring reveals: The key to detecting this attack is not in peak metrics, but in trend analysis and baseline deviations.
    • Subtle Increase in Network Egress: Although each transfer is small, the cumulative volume of data transferred out of the database server over time will show an increase relative to its historical baseline. Monitoring the derivative (rate of change) of the database server’s network interface outbound traffic can reveal this trend.
    • Historical Workload Analysis: Tools like SQL Server’s Query Store or pg_stat_statements in PostgreSQL allow us to analyze the workload historically. We can identify the emergence of a new query pattern that, although individually fast, executes with an abnormally high frequency. For example, a query that fetches a single customer by ID, which normally executes 1000 times per hour, starts executing 50,000 times per hour, always from the same source.
    • Cumulative Resource Consumption: Even fast queries consume resources. By aggregating the CPU and I/O consumption per user or per query hash over 24 hours, the “low and slow” attack becomes visible. The compromised account will stand out as one of the largest cumulative resource consumers, even if none of its individual queries appeared on the “top 10 slow queries” list.
  • Our support approach is based on the creation and continuous monitoring of performance baselines. We don’t just ask “is the system fast now?”, but rather “is the system’s behavior today statistically consistent with its behavior over the last 90 days?”. The detection of pattern deviations, even if subtle, is what allows us to identify activities like slow data exfiltration.
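The baseline-deviation idea can be sketched with a simple z-score over a rolling window of daily egress volume. The numbers and the three-sigma threshold below are illustrative assumptions; a production system would track this per account and per query hash:

```python
# Sketch: detect a sustained drift in daily network egress against a
# historical baseline. Data and threshold are illustrative only.
import statistics

def egress_deviation(history, today):
    """history: daily egress volumes over the baseline window.
    Returns the z-score of today's value against that baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return (today - mean) / stdev

history = [98, 101, 99, 103, 97, 100, 102]  # GB/day, baseline window
today = 115  # each transfer is small, but the cumulative drift is visible

z = egress_deviation(history, today)
if z > 3:  # three-sigma rule as an illustrative alert threshold
    print(f"Possible low-and-slow exfiltration: z-score {z:.1f}")
```

No single day would have tripped a peak-based alert here; only the comparison against the statistical baseline makes the drift stand out, which is exactly the “is today consistent with the last 90 days?” question described above.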

3. Privilege Abuse and Lateral Movement

In this scenario, a service account of an application, which has high privileges, is compromised. The attacker uses this account not to steal data directly, but to create new user accounts for themselves, grant privileges, or disable security mechanisms like auditing.

  • What traditional security sees: Actions being executed by a “trusted” service account. Since the account has the necessary permissions to execute these commands, the access control tools see no violation.
  • What performance (and audit) monitoring reveals: This scenario blurs the line between performance and auditing, but detection still depends on database telemetry.
    • Monitoring of DDL (Data Definition Language) Commands: A well-behaved application account executes almost exclusively DML commands (SELECT, INSERT, UPDATE, DELETE). The execution of DDL commands (CREATE USER, GRANT, ALTER TABLE, DROP TRIGGER) by an application account is an extremely suspicious event. Setting up specific alerts for the execution of DDL by service accounts is a high-fidelity detection mechanism.
    • Analysis of Blocking Wait Events: Creating a user or altering an object may require acquiring exclusive locks on the DBMS’s data dictionary. This can cause brief but visible blocking events for other sessions. A spike in wait events related to data dictionary locks (library cache lock in Oracle, SCH-M locks in SQL Server), correlated with a service account, is a strong sign of anomalous activity.
    • Monitoring of Audit Integrity: The first step of a sophisticated attacker is often to disable the audit trail. The execution of commands like NOAUDIT (Oracle) or the attempt to stop a SQL Server Audit or disable pgaudit (PostgreSQL) are critical events. The monitoring telemetry must include the continuous verification of the state and integrity of the audit configuration itself.
  • Our database hardening methodology is not limited to configuring security; it also configures monitoring that watches the security controls themselves. We implement alerts that trigger not only on login failures but also on any change to the security configuration or on the execution of DDL commands by unauthorized accounts.
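Because a well-behaved application account should never issue DDL, the alert logic itself is simple. The sketch below assumes hypothetical account names and a simplified (user, statement) log format; real input would come from an audit source such as pgaudit, SQL Server Audit, or Oracle’s unified audit trail:

```python
# Sketch: high-fidelity alert for DDL executed by application service
# accounts. Account names and log format are hypothetical.
import re

DDL_PATTERN = re.compile(
    r"^\s*(CREATE|ALTER|DROP|GRANT|REVOKE|TRUNCATE)\b", re.IGNORECASE
)
SERVICE_ACCOUNTS = {"app_svc", "etl_svc"}  # assumed application accounts

def ddl_alerts(log_entries):
    """log_entries: (user, sql_text) tuples.
    Returns entries where a service account ran a DDL statement."""
    return [
        (user, sql) for user, sql in log_entries
        if user in SERVICE_ACCOUNTS and DDL_PATTERN.match(sql)
    ]

log = [
    ("app_svc", "SELECT * FROM orders WHERE id = 42"),      # normal DML
    ("app_svc", "CREATE USER backdoor WITH PASSWORD 'x'"),  # suspicious DDL
    ("dba_admin", "ALTER TABLE orders ADD COLUMN note text"),  # DDL, but by a DBA
]

for user, sql in ddl_alerts(log):
    print(f"ALERT: DDL by service account {user}: {sql}")
```

Note that the DBA’s ALTER TABLE is not flagged: the fidelity of this detector comes from combining who ran the statement with what kind of statement it was, not from the statement type alone.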

The Role of Expertise

The detection of these breaches cannot be fully automated by a single tool. It requires the integration of data from multiple sources and, more importantly, human expertise to interpret the context.

This is where outsourcing database management with HTI Tecnologia becomes a strategic security decision.

  • Integrated Expertise: Our DBAs are trained to think of security and performance as a single discipline. They are able to look at an AWR performance report and identify security indicators, a skill that transcends the traditional boundaries between InfoSec and Ops teams.
  • 24/7 Incident Response: A potential security alert, whether it comes from a SIEM or a performance anomaly, requires immediate action. Our 24/7 Support and Sustaining service ensures that an expert will be available to analyze, validate, and respond to an incident in minutes, at any time of day or night.
  • Risk Reduction: By centralizing the responsibility for integrated performance and security monitoring in a specialized partner, you eliminate the blind spots created by organizational and technological silos, reducing the overall risk of an undetected data breach.

Performance Telemetry is Your Last Line of Defense

Relying exclusively on traditional security tools to protect your data is like installing a camera on the front door but ignoring the motion sensors inside the house. Once an adversary is inside, you need a way to detect their anomalous behavior. Your database’s performance telemetry – the I/O patterns, the execution plans, the wait events – is that motion sensor.

The link between monitoring and security can no longer be forgotten. Integrating these two disciplines is not just a good engineering practice; it is a fundamental requirement to protect a company’s data assets against real-world threats.

Does your operation have the same “blind spots” between security and performance? Schedule a conversation with one of our specialists and discover how our integrated approach can protect your company.

Schedule a meeting here

Visit our Blog

Learn more about databases

Learn about monitoring with advanced tools

Have questions about our services? Visit our FAQ

Want to see how we’ve helped other companies? Check out what our clients say in these testimonials!

Discover the History of HTI Tecnologia

Recommended Reading

  • Performance Tuning: how to increase speed without spending more on hardware: This article details the methodology for resolving the root cause of slowness, one of the main symptoms that can mask anomalous security activities, such as “low and slow” data exfiltration. The reading is fundamental to understanding the optimization techniques that, by establishing a high-performance baseline, make any deviation, whether operational or malicious, immediately more visible.
  • Technical Assessment: the Missing Step in Your Database Monitoring: Continuous monitoring is reactive by nature; it shows what is happening now. This article argues for the importance of periodic assessment as a deep analysis tool to discover vulnerabilities and bottlenecks before they are exploited. It is the strategic complement that connects monitoring data to a security and performance risk analysis.
  • The Health Check that Reveals Hidden Bottlenecks in Your Environment: Similar to an assessment, the health check is a focused examination that can reveal subtle anomalies. This article explains how this in-depth analysis exposes problems that are not evident in day-to-day dashboards. For security, this is crucial, as a “hidden bottleneck” could be the manifestation of an account with excessive privileges or an unauthorized data scan.
