
Building visibility and resilience across Kubernetes


Why Kubernetes Security and Monitoring Matter


Kubernetes has transformed how modern applications are deployed and scaled. Its flexibility and automation power innovation but also expand the attack surface. From control plane access to runtime drift, Kubernetes introduces layers of complexity that can obscure visibility if not properly monitored.

For security leaders, Kubernetes is both an opportunity and a risk. While it enables agility, it also decentralizes security responsibility across teams, tools, and cloud layers. Strong preventive controls like RBAC, Pod Security Standards, and NetworkPolicies reduce risk, but they’re only half the story. The real differentiator is continuous monitoring and observability: seeing what’s actually happening in real time.

In this blog, we’ll cover the shared responsibility model, how to secure self-managed and managed Kubernetes, the critical telemetry to monitor, example threat scenarios, and how to centralize observability.


Understanding Shared Responsibility


Just like database or cloud infrastructure monitoring, Kubernetes security begins with understanding who owns what. In a self-managed deployment, you’re responsible for the entire stack – from the control plane to worker nodes. In managed Kubernetes services (such as Amazon EKS, Azure AKS, or Google GKE), the cloud provider manages the control plane, but you remain accountable for workloads, nodes, networking, and observability.

| Layer | Self-Managed Kubernetes | Managed Kubernetes |
|---|---|---|
| Control Plane (API, etcd, scheduler) | You manage and secure | Cloud provider manages |
| Worker Nodes | You manage | You manage |
| Network & Access | You enforce policies and segmentation | You enforce policies and segmentation |
| Logging & Observability | You implement collectors | You enable and forward logs |
| Runtime & Workload Security | You protect workloads | You protect workloads |


Key takeaway: Cloud providers reduce operational overhead but do not eliminate your security responsibility. Visibility gaps often emerge when teams assume that managed services automatically log or secure everything – they don’t.


Securing Self-Managed Kubernetes


When you run your own Kubernetes clusters, you have full control – and full accountability. This freedom brings flexibility but also the burden of securing every layer of the stack.


Best Practices


Because all components are under your control, log collection must be deliberate. Aggregate logs from:


A centralized observability solution allows correlation across these layers – essential for detecting malicious API calls or policy drift before it becomes an incident.


Securing Managed Kubernetes


Managed Kubernetes services simplify cluster management by outsourcing the control plane, but your workloads remain your responsibility. Cloud providers secure the API server, etcd, and certificates – yet customers must secure workloads, IAM/identity roles, node configurations, and observability.


Best Practices


Critical Telemetry: What to Monitor


Borrowing from database monitoring principles, Kubernetes monitoring also depends on ingesting the right signals.


Below are the foundational log sources that together provide a full picture of your cluster’s health and security posture.

| Log Source | Purpose | Key Insight |
|---|---|---|
| Cloud Audit Logs | Records cloud platform API calls | Detect IAM misuse or privilege escalation |
| kube-apiserver | Captures all Kubernetes API requests | Identify unauthorized or high-risk actions |
| Authentication Logs | Tracks successful and failed access attempts | Spot brute-force or credential replay |
| Audit Logs | Chronicles cluster changes | Trace who modified what, when |
| Controller Manager | Manages state reconciliations | Detect unauthorized scaling or misconfigurations |
| Scheduler Logs | Shows pod placement logic | Identify anomalies or resource drift |


Example Threat Scenarios


These are the same detection concepts database security teams apply to DML anomalies or privilege grants – but in Kubernetes, they span multiple telemetry sources.


Correlating Data: Real-World Use Cases


Like database monitoring, value emerges when you correlate events across systems.


Use Case 1: Identity Role Change + Kubernetes RoleBinding


A DevOps user assumes a new cloud identity role (cloud audit log) and immediately modifies an RBAC RoleBinding (Kubernetes audit log). This combination often signals privilege escalation.
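As a rough illustration of how this correlation could be expressed in code (a minimal sketch, assuming both log streams have already been parsed into Python dicts; the field names are hypothetical, not a fixed schema):

from datetime import timedelta

# Hypothetical, pre-parsed events; field names are illustrative.
# cloud_events: cloud audit log entries (e.g., role assumptions)
# k8s_events:   Kubernetes audit log entries (e.g., RBAC changes)
WINDOW = timedelta(minutes=10)

def correlate_privilege_escalation(cloud_events, k8s_events):
    """Flag principals that assume a new cloud role and then modify a RoleBinding shortly after."""
    alerts = []
    role_assumptions = [e for e in cloud_events if e["action"] == "AssumeRole"]
    rbac_changes = [e for e in k8s_events
                    if e["resource"] == "rolebindings" and e["verb"] in ("create", "update", "patch")]
    for assume in role_assumptions:
        for change in rbac_changes:
            same_principal = assume["principal"] == change["user"]
            within_window = timedelta(0) <= change["timestamp"] - assume["timestamp"] <= WINDOW
            if same_principal and within_window:
                alerts.append({
                    "principal": assume["principal"],
                    "rolebinding": change["name"],
                    "reason": "cloud role assumption followed by RBAC modification",
                })
    return alerts

In practice, a platform like Coralogix expresses this as a correlation alert rather than custom code, but the logic is the same: join the two event streams on the principal and a short time window.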


Use Case 2: Suspicious Pod Execution


A container starts executing binaries not typically present in the image (runtime log), while the scheduler places it on an unusual node. This can indicate lateral movement or a compromised workload.


Use Case 3: Config Drift + Unauthorized Access


The controller manager logs show a new Deployment, while the API server logs multiple failed authentications from the same IP. Together, this may indicate automated exploitation or misused credentials.


Centralizing Observability


Fragmented logs are the biggest obstacle to Kubernetes security. Control plane, identity, network, and runtime data often live in silos – making correlation difficult and delaying response.


Best Practices for Unified Visibility


Executive value: Centralized observability enables governance – turning logs into measurable assurance for compliance, resilience, and audit readiness.

From Zero to Full Visibility: Coralogix Kubernetes Extension


The Coralogix Kubernetes extension delivers instant security monitoring with pre-built alerts for common threat scenarios and comprehensive dashboards that visualize your cluster’s security posture. No complex configuration required – start detecting anomalies and tracking compliance within minutes.


Conclusion


Kubernetes security isn’t just a technical checklist – it’s a visibility and governance challenge.

Self-managed clusters give you full control but also full accountability; managed Kubernetes services offload control plane operations but not responsibility. By aligning preventive controls, continuous monitoring, and centralized observability, organizations gain true operational resilience. For CISOs, that means shifting from reactive to proactive – transforming Kubernetes from a potential blind spot into a measurable pillar of enterprise security posture.

Learn more about how Coralogix powers observability for modern, secure Kubernetes environments across all major cloud platforms.

Mastering Web Application Security: Enterprise-Grade OWASP Detection Rules for AWS WAF, Akamai, F5 and Cloudflare

Application Security, WAF, and OWASP form an interconnected defense strategy for web applications. OWASP (Open Web Application Security Project) provides the framework for identifying critical vulnerabilities through resources like the OWASP Top 10, while WAFs act as the protective layer that detects and blocks attacks targeting these vulnerabilities in real-time.

Web Application Firewalls (WAFs) are the first line of defense against sophisticated web-based attacks. By implementing comprehensive OWASP-based detection rules across your WAF infrastructure and sending logs to Coralogix, organizations can unlock deeper security insights through integrated log analytics, real-time alerting, and threat intelligence capabilities. Coralogix enables faster threat detection through advanced parsing and indexing, customizable dashboards, and automated alerts that provide security teams with early warning across critical web application attack vectors.


Understanding the Modern WAF Security Landscape

As web applications become increasingly complex and critical to business operations, they also become prime targets for attackers. The OWASP Core Rule Set provides a standardized framework for detecting and preventing common web application attacks. Whether you’re running AWS WAF, Akamai WAF, F5 WAF, or Cloudflare WAF, implementing consistent detection rules across your infrastructure is essential for comprehensive security coverage.

Modern WAFs generate vast amounts of security telemetry, capturing every request, response, and potential threat. However, raw WAF logs alone are insufficient. Organizations need sophisticated detection rules that can identify attack patterns, distinguish false positives from genuine threats, and provide actionable intelligence to security teams.

The detection rules outlined in this article focus on the most critical OWASP-based attack vectors:


Key Detection Rules for Enterprise WAF Deployments


Cross-Site Scripting (XSS) Payload Detection


Overview

Cross-site scripting remains one of the most prevalent web application vulnerabilities. This detection rule identifies XSS attempts by searching for HTML/script markers and malicious payloads within query parameters and URIs. It focuses specifically on events that were allowed, accepted, or logged, highlighting inputs that reached the application layer and represent potentially successful injection attempts.


Detection Logic

The rule monitors for common XSS indicators including:

These patterns are examined in both URL-encoded and standard formats across query strings and URI paths.
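As a rough illustration of what checking for such indicators in both raw and URL-decoded form might look like (a hedged sketch with representative patterns, not the vendor rule itself):

import re
from urllib.parse import unquote

# Illustrative indicators only; production WAF rules are broader and tuned per environment.
XSS_PATTERNS = [
    r"<script\b",          # opening script tag
    r"javascript:",        # javascript: URI scheme
    r"on\w+\s*=",          # inline event handlers such as onerror= or onload=
    r"<iframe\b",          # injected frames
    r"document\.cookie",   # cookie theft attempts
]
XSS_REGEX = re.compile("|".join(XSS_PATTERNS), re.IGNORECASE)

def looks_like_xss(value: str) -> bool:
    # Check both the raw and URL-decoded value, since payloads are often encoded.
    return bool(XSS_REGEX.search(value) or XSS_REGEX.search(unquote(value)))

# Example: looks_like_xss("q=%3Cscript%3Ealert(1)%3C%2Fscript%3E") -> True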


Security Impact

Successful XSS exploitation enables attackers to:


Mitigation Strategy

Security teams should immediately block or rate-limit offending source IPs at the WAF or reverse proxy layer. Review application access logs and user sessions for signs of successful exploitation. For stored XSS, identify and remove malicious content from the database. Update WAF rules to block the specific payload patterns observed, and implement content security policies (CSP) to prevent script execution from unauthorized sources.

MITRE ATT&CK Mapping: TA0003 (Persistence), T1059 (Command and Scripting Interpreter)


SQL Injection Attack Detection


Overview

SQL injection attacks attempt to manipulate backend databases by injecting malicious SQL code into application inputs. This comprehensive detection rule identifies suspicious SQL keywords, function calls, and common payload patterns across multiple database engines including MySQL, PostgreSQL, MSSQL, and Oracle.


Detection Logic

The rule detects a wide range of SQLi techniques:
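The specific indicators vary by database engine, but as a rough, hedged sketch (representative patterns only, not the actual detection rule), such checks might look like:

import re
from urllib.parse import unquote

# A small, illustrative subset of SQLi indicators; real rules cover many more engine-specific functions.
SQLI_PATTERNS = [
    r"\bunion\b[\s\S]{0,20}\bselect\b",   # UNION ... SELECT data extraction
    r"\bor\b\s+1\s*=\s*1",                # classic tautology
    r"\bsleep\s*\(",                      # time-based blind injection (MySQL)
    r"\bwaitfor\s+delay\b",               # time-based blind injection (MSSQL)
    r"\binformation_schema\b",            # schema enumeration
    r"--|/\*",                            # comment markers used to truncate queries
]
SQLI_REGEX = re.compile("|".join(SQLI_PATTERNS), re.IGNORECASE)

def looks_like_sqli(value: str) -> bool:
    # Decode first, since payloads typically arrive URL-encoded in query strings.
    return bool(SQLI_REGEX.search(unquote(value)))

# Example: looks_like_sqli("id=1%20UNION%20SELECT%20password%20FROM%20users") -> True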


Security Impact

Successful SQL injection can result in catastrophic security breaches:


Mitigation Strategy

Immediately investigate and block offending IP addresses at the WAF. Review correlated logs for signs of successful exploitation and contain any compromised systems. Long-term mitigation requires implementing parameterized queries and prepared statements throughout the application codebase. Enforce strict input validation on the server side, apply least-privilege principles to database service accounts, and conduct regular security audits and penetration testing to identify injection vectors before attackers do.

MITRE ATT&CK Mapping: TA0002 (Execution), T1059 (Command and Scripting Interpreter)


Remote Code Execution (RCE) Detection


Overview

This high-priority detection rule identifies requests containing patterns that suggest remote command execution or attempts to invoke system shells and processes. These patterns are commonly observed in web shell uploads, template exploitation (OGNL, Struts), and command injection attacks.


Detection Logic

The rule flags requests containing:


Security Impact

Successful RCE represents one of the most severe security breaches. Attackers who achieve code execution can:


Mitigation Strategy

Configure the WAF to require high-confidence matches by combining multiple indicators rather than triggering on single patterns. Maintain aggressive patch management for web frameworks and dependencies, since many RCE exploits target known vulnerabilities. Enforce strong input validation and output encoding across all application layers. Run web services with least-privilege accounts that cannot execute system commands. Implement application-layer sandboxing where possible, and maintain robust logging to detect successful exploitation attempts.

MITRE ATT&CK Mapping: TA0002 (Execution), T1190 (Exploit Public-Facing Application)


PHP Code Injection Detection


Overview

PHP injection attacks attempt to execute malicious PHP code through vulnerable application endpoints. This detection rule identifies both server-side and client-side code execution attempts, including encoded payloads that bypass basic filtering.


Detection Logic

The rule searches for:


Security Impact

PHP injection can lead to complete server compromise. Successful attacks enable:


Mitigation Strategy

Application hardening is critical: never use eval() or assert() with user-controlled data, validate and canonicalize all inputs, and apply context-appropriate output encoding. Configure PHP with disable_functions to prevent execution of dangerous functions. Enforce least privilege for web processes and isolate secrets using environment variables or secure vaults rather than configuration files. Tune WAF rules to block both raw and URL-encoded payloads while maintaining separate allow-lists for legitimate static assets and query patterns.

MITRE ATT&CK Mapping: TA0002 (Execution), T1059 (Command and Scripting Interpreter)


Server-Side Template Injection (SSTI) Detection


Overview

Server-side template injection exploits vulnerabilities in template engines such as Jinja2, Twig, Velocity, Handlebars, and EJS. Attackers inject malicious expressions that break out of the template context and execute arbitrary code on the server.


Detection Logic

The rule identifies common SSTI patterns:

These patterns appear in query parameters or URIs when attackers attempt to manipulate template rendering.


Security Impact

SSTI allows attackers to escape the intended template scope and access the underlying execution environment. This leads to:


Mitigation Strategy

Implement strict input validation before passing data to template engines. Use allow-lists for acceptable variables and avoid rendering untrusted input directly. Configure template engines in restricted or sandboxed mode when available. Implement proper separation between template code and user data. Consider using logic-less template engines that limit code execution capabilities. Regularly audit template usage across the application and conduct security reviews of template-rendering code paths.

MITRE ATT&CK Mapping: TA0002 (Execution), T1190 (Exploit Public-Facing Application)


SSRF Metadata Service Access Detection


Overview

Server-Side Request Forgery (SSRF) attacks exploit applications that fetch remote resources based on user input. This detection specifically targets attempts to access cloud metadata services, a critical attack vector in cloud environments that can expose sensitive credentials and configuration.


Detection Logic

The rule monitors for requests targeting:


Security Impact

Successful SSRF exploitation against metadata services represents a severe cloud security breach:


Mitigation Strategy

Implement defense-in-depth for SSRF prevention. At the cloud level, enforce IMDSv2 (AWS) or equivalent secure metadata service access that requires session tokens and blocks unauthenticated queries. At the application level, rigorously sanitize and validate all user-supplied input used in server-side requests. Maintain explicit deny-lists for internal IP ranges (169.254.0.0/16, 127.0.0.0/8, ::1/128) and known metadata service domains. Use network segmentation to prevent application servers from reaching metadata services. Implement egress filtering to restrict outbound connections from application servers.
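As a sketch of the application-level check described above (assuming Python and the standard ipaddress module; the blocked ranges mirror the deny-list in the text and should be extended per environment):

import ipaddress
import socket
from urllib.parse import urlparse

# Ranges that server-side fetches should never reach; extend with your own internal ranges.
BLOCKED_NETWORKS = [
    ipaddress.ip_network("169.254.0.0/16"),   # link-local, includes cloud metadata services
    ipaddress.ip_network("127.0.0.0/8"),      # loopback
    ipaddress.ip_network("::1/128"),          # IPv6 loopback
    ipaddress.ip_network("10.0.0.0/8"),       # RFC 1918 private ranges
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_url_allowed(url: str) -> bool:
    """Resolve the host and refuse URLs whose address lands in a blocked range."""
    host = urlparse(url).hostname
    if not host:
        return False
    try:
        resolved = ipaddress.ip_address(socket.gethostbyname(host))
    except (socket.gaierror, ValueError):
        return False
    return not any(resolved in network for network in BLOCKED_NETWORKS)

# Example: is_url_allowed("http://169.254.169.254/latest/meta-data/") -> False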

MITRE ATT&CK Mapping: TA0001 (Initial Access), T1190 (Exploit Public-Facing Application)


Enhanced Visibility and Analytics with Coralogix

Deploying these detection rules across AWS WAF, Akamai WAF, F5 WAF, and Cloudflare WAF provides comprehensive coverage, but the true value emerges when WAF logs are centralized and analyzed through a platform like Coralogix. By streaming WAF logs to Coralogix, organizations gain:


Advanced Log Analytics


Real-Time Threat Detection


Security Operations Efficiency


Implementation Best Practices


Multi-WAF Standardization

When deploying these detection rules across different WAF platforms, maintain consistent naming conventions, severity classifications, and alert configurations. This standardization enables security teams to develop unified playbooks and response procedures regardless of which WAF platform triggers the alert.


Tuning and Optimization

Initial deployment will generate false positives. Establish a tuning period where alerts are logged but not immediately blocked. Analyze patterns, identify legitimate traffic that triggers rules (such as security scanners, monitoring tools, or specific business workflows), and implement precise exclusions. The queries provided include common exclusion patterns for advertising trackers, transaction IDs, and authentication tokens; expand these based on your environment.


Performance Considerations

Complex regex patterns in detection rules can impact WAF performance. Monitor rule evaluation latency and optimize patterns when necessary. Consider implementing sampling for high-volume endpoints while maintaining full coverage for critical authentication and data entry points.


Continuous Improvement

Threat landscapes evolve constantly. Schedule regular reviews of detection rules, incorporating new attack patterns, emerging vulnerabilities, and lessons learned from incidents. Subscribe to OWASP updates and security advisories relevant to your WAF platforms.


Optimization Based on Insights

By leveraging comprehensive WAF log analytics through Coralogix, security teams can:


Conclusion

Web application security requires a multi-layered defense strategy, and WAFs serve as a critical control point for detecting and preventing attacks before they reach application logic. The six enterprise-grade OWASP detection rules outlined in this article provide comprehensive coverage against the most dangerous web application attack vectors: XSS, SQL injection, RCE, PHP injection, SSTI, and SSRF.

By implementing these rules consistently across AWS WAF, Akamai WAF, F5 WAF, and Cloudflare WAF, and centralizing log analysis through Coralogix, organizations achieve unified visibility, accelerated threat detection, and enhanced security operations efficiency. The combination of robust detection rules and advanced analytics capabilities empowers security teams to stay ahead of sophisticated attackers and protect critical web applications against evolving threats.

Remember that detection is only the first step; effective web application security requires continuous monitoring, regular tuning, incident response readiness, and ongoing collaboration between security, development, and operations teams to address vulnerabilities at their source.

Threat Intel Update: November 2025

What’s included:

What this means for you:


Threat insights


Major cloud attack vectors


Ransomware activity


Critical CVEs


Key trends in supply chain, cloud, & AI


Supply chain: Third-party integration compromise High-profile breaches at Qantas and MANGO demonstrated a shift to targeting third-party platforms integrated with core business systems like Salesforce. Attackers compromised marketing vendors and integrated apps to gain trusted access to customer data, bypassing traditional perimeter defenses.  

Supply chain: “Living-off-the-Trusted-Stack” (LOTTS) Chinese-linked actors were observed weaponizing the legitimate, open-source Nezha monitoring tool. By abusing this trusted software, they successfully delivered the Gh0st RAT malware, inheriting privileged access and remaining undetected by traditional endpoint security solutions.  

Cloud abuse: AWS-native attack playbook The Crimson Collective group executed a full-lifecycle attack on AWS environments. After compromising exposed IAM credentials, they escalated privileges to ‘AdministratorAccess’ and used the victim’s own AWS Simple Email Service (SES) to send extortion notes, guaranteeing deliverability and amplifying pressure.  

AI abuse (offensive): AI-generated malware in the wild The state-aligned actor UTA0388 was confirmed to be using OpenAI’s ChatGPT as a force multiplier. The group used the public AI to develop a custom malware family (GOVERSHELL) and to craft highly convincing, context-aware phishing emails at scale.  

AI abuse (internal): “Shadow AI” data exposure A global analysis revealed that 1 in 54 employee prompts into public GenAI tools contains high-risk sensitive data. This “Shadow AI” phenomenon has become a massive, unmonitored data loss vector, with 91% of organizations found to be regularly leaking proprietary information. 


Most exploited CVEs


CVE-2025-61882 — Oracle E-Business Suite BI Publisher (10.0)


CVE-2025-59287 — Microsoft Windows Server Update Service (9.8)


CVE-2025-10035 — Fortra GoAnywhere MFT (10.0)


CVE-2025-49844 — Redis Lua Engine (Critical)


Indicators covered

Over the past month, we have expanded our threat intelligence coverage by integrating new Indicators of Compromise (IOCs) associated with the following ransomware groups, malware families, and threat actors:


Ransomware & threat actor groups:


Malware & botnets

And many more…


Monthly IOC added


Ransomware activity

Raising the Bar in Observability and Security: Coralogix Extensions at Scale


In today’s high-velocity digital ecosystem, visibility isn’t enough. SREs and engineering leaders need real-time insights, actionable signals, and automated workflows to operate at scale. As systems grow more distributed and cloud-native, the demand for intelligent observability and security has never been higher.

Extensions provide instant observability through prepackaged parsing rules, alerts, dashboards, and more. At Coralogix, we don’t just offer extensions – we deliver operational value out of the box. Our curated ecosystem of extensions helps teams detect issues faster, understand them more deeply, and resolve them with confidence.



Observability that goes beyond the surface


Observability should empower, not overwhelm. While many platforms showcase a long list of extensions, Coralogix focuses on what really matters to SREs and administrators: depth, usability, and time to value.

With more than 3,800 deployable artifacts – including dashboards, alerts, and parsing rules – Coralogix helps SREs and platform teams achieve instant visibility into their environments.

| Platform | Artifacts |
|---|---|
| AWS | 650 |
| Azure | 160 |
| Google Cloud Platform | 168 |
| AI Monitoring | 7 |
| Other Platforms | 2991 |


Instead of manually creating custom dashboards and alerts, teams can rely on real-world, time-tested content tailored to common cloud services, infrastructure components, and application tiers, delivering use cases quickly and efficiently.



Security that’s proactive, not reactive


Security isn’t just a function – it’s a shared responsibility. SREs are increasingly on the front lines, since security incidents often overlap with reliability concerns and impact application availability.

Coralogix provides over 2,400 preconfigured security alerts to help you catch threats early – from IAM misconfigurations to anomalous login behavior. These aren’t generic templates – they’re curated rules built on best practices and real-world patterns.

| Category | Security Alerts |
|---|---|
| Core Security | 2443 |
| AI/Other | 1133 |


With full-stack visibility across logs, metrics, and traces, your team gets a unified view of reliability and risk – all in a single platform.



Content first approach


Coralogix takes a content-first approach: delivering enriched, deployable resources, not just raw connectors. This helps engineering and SRE teams spend less time configuring and more time optimizing.



The Coralogix advantage


Whether you’re optimizing SLOs, tightening your security posture, or reducing mean time to resolution (MTTR), Coralogix delivers the tools and context your team needs – immediately.



See it for yourself


Curious what observability and security can look like when it’s built to scale with you? Visit the Coralogix Integrations Hub and explore our library of ready-to-deploy extensions.

Coralogix Expands Unified Threat Intelligence Coverage

Coralogix is excited to announce a major enhancement to our Unified Threat Intelligence (UTI) capabilities – now with expanded IOC matching beyond IPs. While our earlier focus was primarily on detecting malicious IP addresses, threats have evolved. Attackers now hide behind encrypted traffic, disposable domains, and polymorphic files.

To stay ahead, we’ve normalized new critical fields – JA3, JA4, domain, URL, and file hash – and integrated them into our UTI engine. These are now fully supported in the Snowbit Utilities Extension, bringing faster detection, richer context, and broader coverage.


What’s New?


Until now, our IOC enrichment focused on:


We’ve now added IOC enrichment support for:

This extended support means more comprehensive detection, enabling threat hunters and analysts to surface stealthier adversarial activity across multiple attack surfaces.
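For context on how one of these new fields works, here is a simplified sketch of the published JA3 method (assuming the ClientHello fields have already been extracted from the TLS handshake; the real implementation also excludes GREASE values):

import hashlib

def ja3_fingerprint(tls_version, ciphers, extensions, curves, point_formats):
    """JA3 = MD5 of 'SSLVersion,Ciphers,Extensions,EllipticCurves,EllipticCurvePointFormats',
    with the decimal values in each field joined by dashes."""
    fields = [
        str(tls_version),
        "-".join(map(str, ciphers)),
        "-".join(map(str, extensions)),
        "-".join(map(str, curves)),
        "-".join(map(str, point_formats)),
    ]
    return hashlib.md5(",".join(fields).encode()).hexdigest()

# Matching on this fingerprint works because malware families tend to reuse the same TLS client
# stack, so the hash reappears across campaigns even as IPs and domains rotate.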


Available Alerts


These enrichments power new dedicated alerts – now live via the Snowbit Utilities Extension:


These alerts work alongside the existing IP-based detection to give you full-spectrum IOC monitoring.


Sample Log – Matched IOC (Domain)


Here’s how a matched malicious domain appears in logs under the cx_security namespace:

Each match is enriched with contextual intel such as feed source, confidence level, threat notes, and malware tags – enabling rapid triage and response.


Delivered Through Snowbit Utilities Extension


These detections and enrichments are available immediately for customers using the Snowbit Utilities Extension, offering:


Whether you’re handling proxy logs, NetFlow, DNS queries, or file activity – these new IOCs are automatically correlated with threat feeds and surfaced in real time.


Why This Matters


Modern attacks rarely rely on static IPs alone. Adversaries:


With JA3/JA4 fingerprinting, file hash correlation, and domain/URL intelligence — you’re equipped to catch:

VPC Flow Logs – Everything you need to know

Why VPC Flow Log Configuration Matters


In our investigative experience, we have observed that the default fields in VPC Flow Logs often provide limited visibility into network connections. This constraint hampers deep-dive investigations during incident response or threat hunting. 

That’s why understanding and optimizing your VPC Flow Log configuration isn’t just a nice-to-have; it’s foundational to effective threat detection, troubleshooting, and cost control.

Therefore, we strongly recommend implementing proper logging configurations in VPC Flow Logs tailored to your environment and use cases.

In this document, we analyze the fields available in AWS VPC Flow Logs to evaluate their role in security, observability, and operational use. Each parameter was reviewed to understand what insight it provides, how it supports detection and troubleshooting, and whether it is essential in most environments. 

By clearly identifying which fields deliver the most value, we help you build a smarter, more efficient logging strategy. We categorized fields as essential, recommended, or optional, helping guide decisions on what to retain and what can be safely excluded to reduce logging volume and cost.

We previously published a blog covering common use cases of VPC Flow Logs and out-of-the-box detections available in Coralogix. 

The blog can be accessed here.


Common Gaps in VPC Flow Log Configurations:

These gaps show why default setups are rarely enough. Tailored configurations are the most effective way to ensure both visibility and efficiency.


The Role of VPC Flow Log Fields


VPC Flow Logs offer dozens of fields that can be included in each log record, but not all are created equal. Some fields are crucial for identifying suspicious behavior, tracking traffic patterns, or troubleshooting issues, while others are more situational or redundant. 

To help you strike the right balance between visibility and efficiency, we’ve grouped fields based on their value across security, observability, and cost.


High-Value Fields to Retain


These fields are considered essential for any VPC Flow Log configuration and should be consistently enabled. They provide visibility into network activity and are critical for security detection, forensics, and traffic analysis.

| Field | Description | Security Use Case |
|---|---|---|
| version | Log format version used | Cannot be disabled |
| account-id | AWS account owning the interface | Attribute traffic to specific accounts in multi-account environments; support tenant-aware detection. |
| interface-id | Network interface ID | Helps trace traffic to ENIs (e.g., NAT gateways, load balancers); useful in identifying misconfigured routes or abused endpoints. |
| instance-id | Linked instance ID | Drop if instance attribution isn’t needed |
| srcaddr | Source IP of traffic | Pinpoints traffic origin; enables geo-IP lookups, anomaly detection (e.g., access from unusual countries), and IP-based threat intel matching. |
| dstaddr | Destination IP of traffic | Identifies which systems or services were targeted; critical in tracking lateral movement or identifying protected asset exposure. |
| srcport | Source port used | Detects unusual port usage or port scans from external sources; highlights ephemeral port behaviors often seen in malware C2. |
| dstport | Destination port used | Identifies attempts to reach sensitive services (e.g., SSH, RDP, databases); supports port-based threat models. |
| protocol | Protocol number (e.g., TCP, UDP) | Flags suspicious or non-standard protocol usage (e.g., ICMP abuse, stealthy exfiltration via UDP). |
| region | AWS Region of resource | Drop in single-region setups |
| tcp-flags | TCP handshake/termination flags | Drop unless deep behavioral detection needed |
| pkt-srcaddr | Original (pre-NAT) source IP | Drop if not using NAT/EKS |
| pkt-dstaddr | Original (pre-NAT) destination IP | Same as above |
| action | ACCEPT or REJECT decision | Identifies blocked attacks vs. successful connections; useful for tuning firewall rules and alert triage. |
| flow-direction | Ingress or Egress indicator | Helps distinguish between inbound scans and outbound C2/beaconing; crucial for DLP and egress filtering. |
| traffic-path | Egress path from the VPC | Reveals whether traffic exited via internet, VPN, or peering; useful for identifying unapproved exposure routes. |
| ecs-cluster-arn | ARN of ECS cluster | Maps traffic to specific container clusters; supports container-aware security monitoring. |
| ecs-cluster-name | Name of ECS cluster | Human-readable cluster name for correlating flows in dashboards and alerts. |
| reject-reason | Why traffic was rejected (e.g., BPA) | Explains blocked attempts; particularly useful in enforcing AWS BPA (Block Public Access) policies. |

Lightweight Fields for Selective Use


These fields are not strictly required for baseline visibility and detection, and can be trimmed to reduce ingestion volume and cost. However, they offer additional insights for specific use cases such as network performance tuning, environment tagging, or deep forensic analysis. If you’re focused primarily on cost optimization, these are good candidates to drop – but each comes with a tradeoff depending on your monitoring goals.     

| Field | Description | Tradeoff |
|---|---|---|
| packets | Packets transferred | Drop if performance tracking is unnecessary |
| start | Start time of the flow | Drop if timing isn’t critical |
| end | End time of the flow | Same as above |
| log-status | Log collection health indicator | Drop if not tracking ingestion gaps |
| vpc-id | ID of the associated VPC | Drop in single-VPC setups |
| subnet-id | ID of the associated subnet | Drop if not subnet-aware |
| type | Traffic type (IPv4/IPv6/EFA) | Drop if IP type can be inferred from addresses |
| az-id | Availability Zone ID | Drop if zone-level tracking not needed |
| sublocation-type | Edge location type (Outpost, etc.) | Drop if not using Wavelength/Local Zones |
| sublocation-id | ID of edge infrastructure zone | Same as above |
| pkt-src-aws-service | AWS service associated with source IP | Drop unless detailed AWS service tracking is needed |
| pkt-dst-aws-service | AWS service associated with destination IP | Same as above |
| ecs-container-instance-arn | ECS container instance (ARN) | Drop if Fargate-only or not using ECS |
| ecs-container-instance-id | ECS container instance ID | Same as above |
| ecs-container-id | Docker ID of first container | Drop unless deep ECS visibility needed |
| ecs-second-container-id | Docker ID of second container | Same as above |
| ecs-service-name | ECS service name | Drop if service-level mapping isn’t needed |
| ecs-task-definition-arn | ECS task definition ARN | Drop if not needed |
| ecs-task-arn | ARN of running ECS task | Drop unless container visibility required |
| ecs-task-id | ECS task ID | Drop in Fargate/basic ECS setups |


Customizing VPC Flow Log Record Format


There are two ways to define the log record format for AWS VPC Flow Logs:

  1. Default Format
    This includes a fixed set of fields defined by AWS. It is easy to enable and provides baseline network visibility but offers no control over which fields are included. The version for default format is always set to 2.
  2. Custom Format
    This allows you to explicitly specify which fields to include in your flow logs. Custom format gives you fine-grained control over the log content, making it ideal for cost optimization, security-specific logging, or adapting logs to match your SIEM or analytics pipeline.

 Step by Step Customization:

  1. Head over to the VPC Dashboard in the AWS Console.
  2. Select the VPC for which you want to enable flow logs.
  3. Click on the Flow Logs tab and choose Create flow log (or Edit if modifying an existing one).
  4. Configure all the required fields as per your requirement.
  5. Under Log record format, choose Custom from the dropdown.
  6. In the text box that appears, enter your desired field list (e.g., srcaddr dstaddr srcport dstport protocol bytes action).
  7. Click Create flow log to save and activate the configuration.
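For teams automating this instead of using the console, a minimal sketch with boto3 could look like the following (the VPC ID and bucket ARN are placeholders; adjust the field list to the high-value fields identified above):

import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Custom record format mirroring the console example above.
log_format = "${srcaddr} ${dstaddr} ${srcport} ${dstport} ${protocol} ${bytes} ${action}"

response = ec2.create_flow_logs(
    ResourceType="VPC",
    ResourceIds=["vpc-0123456789abcdef0"],                   # placeholder VPC ID
    TrafficType="ALL",
    LogDestinationType="s3",
    LogDestination="arn:aws:s3:::example-flow-log-bucket",   # placeholder bucket ARN
    LogFormat=log_format,
    MaxAggregationInterval=60,
)
print(response["FlowLogIds"])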


Conclusion 


By customizing VPC Flow Logs, organizations can significantly improve network visibility, enhance security posture, and optimize logging costs. Moving beyond default configurations allows for precise control over data collection, ensuring that critical information for security detection, incident response, and operational analysis is retained, while extraneous data is excluded. This tailored approach is crucial for building a robust and efficient monitoring strategy within AWS environments.

Securing NHIs with Coralogix

Non-Human Identities (NHIs) refer to digital identities assigned to machines, applications, services, APIs, containers, bots, and other automated or programmatic entities within an IT or cloud environment. 

Unlike user accounts that are tied to real people, NHIs enable systems to communicate and perform actions on each other’s behalf, such as a microservice querying a database or a CI/CD pipeline deploying code. These identities are typically associated with credentials like API keys, tokens, certificates, or SSH keys, which grant them access to systems, resources, and data.

NHIs often hold elevated permissions and can access sensitive data or critical infrastructure components. If left unmanaged or unsecured, they increase the attack surface exponentially. Unlike human users, NHIs often don’t follow predictable work hours or behaviors, which makes them harder to baseline and analyze without proper controls.

Securing and monitoring NHIs ensures that only authorized systems can interact, access, or make changes within the environment — a crucial aspect of maintaining security, compliance, and system authenticity and integrity.

The number of NHIs has exploded with the rise of cloud-native architectures, DevOps automation, and AI workloads. In many organizations, NHIs outnumber human users tens or hundreds of times over. Unfortunately, traditional identity and access management (IAM) systems were not designed to handle this scale or complexity. Meanwhile, attackers are increasingly targeting these identities because they are often poorly monitored, hard-coded into scripts, or left with excessive privileges. The urgency to act now is driven by the increasing complexity of modern infrastructure and the growing volume of automated, machine-based communication.

Failure to secure and monitor NHIs can lead to a wide range of security incidents, including:

NHIs come in various forms, depending on the systems, environments, and tasks they are associated with. Common types include service accounts, which are used by applications or scripts to perform automated tasks; API keys and tokens, which grant access to cloud services or APIs; robots and bots, such as chatbots or automation bots used in IT workflows; IoT devices, which connect to networks and systems often with their own identity and authentication needs; and machine credentials, including SSH keys and certificates used for secure communication between servers or services. In cloud environments, NHIs also include IAM roles and managed identities, which allow cloud-native services like virtual machines or containers to interact securely with other components. As digital infrastructure evolves, the number and variety of NHIs continue to grow, making their visibility and management a top security priority.

Coralogix is getting ahead of the game by concentrating on NHI factors such as:

As the digital ecosystem continues to expand, the presence of non-human identities—ranging from bots and APIs to autonomous systems—has become foundational to modern infrastructure. Ensuring their security and effective monitoring is not optional; it’s critical. Without proper safeguards, these identities can be exploited, leading to breaches, service disruptions, and loss of trust. Just as human identity management evolved to meet growing digital demands, securing non-human identities must become a top priority to protect data integrity, ensure compliance, and enable seamless, secure automation across industries.

Honey Tokens: Turning Attackers’ Curiosity into a Security Advantage Using Coralogix

Honey Tokens are a security deception mechanism designed to detect unauthorized access or malicious activity by creating fake credentials, API keys, or cloud resources that should never be accessed under normal conditions. If an attacker interacts with a Honey Token, it triggers a pre-defined alert, helping security teams identify breaches early.

In a cloud environment, one should always assume compromise due to the cloud infrastructure’s complex and dynamic nature, where multiple risk factors contribute to potential security breaches.

The human factor plays a significant role, as misconfigurations—such as overly permissive IAM policies, exposed storage buckets, or unencrypted databases—are among the leading causes of cloud security incidents.

Supply chain risks, including compromised third-party dependencies and insecure CI/CD pipelines, further increase the attack surface.

Additionally, credential leakage, whether through exposed API keys in public repositories or phishing attacks against cloud administrators, can provide attackers with unauthorized access.

Given these risks, among other things, organizations must adopt continuous monitoring with automated threat detection to identify anomalies before they escalate. Assuming compromise ensures that security teams operate with a proactive mindset, focusing on rapid detection, containment, and response rather than relying solely on perimeter defenses.

Using Honey Tokens enhances security by detecting unauthorized access early and exposing attackers before they cause harm. These deceptive tokens act as tripwires, triggering alerts when accessed. Since legitimate users should never interact with them, any activity is a red flag, helping identify attack vectors like compromised credentials or insider threats. Unlike traditional security controls, Honey Tokens provide behavior-based detection, making them effective against zero-day exploits. Lightweight and cost-effective, they improve threat visibility and incident response when integrated with Coralogix.

Types of Honey Tokens

Honey tokens come in different forms, each designed to lure attackers and provide valuable insights into their activities. Organizations can enhance their security monitoring and threat detection across multiple attack vectors by deploying various types of honey tokens. Here are the most common types:

By strategically placing honey tokens, organizations can improve their ability to detect and respond to security threats before real damage occurs.

Common Places AWS Access Keys Are Misplaced

AWS access keys are often misplaced in various locations, whether through human error, insecure development practices, or overlooked configurations. A very common practice by attackers is to scan for exposed credentials in these areas, making them ideal places to plant honey tokens—decoy credentials designed to trigger alerts when used.

Here are some of the most common locations where AWS keys are inadvertently leaked, to name a few, and how you can leverage them for security monitoring.

  1. Code Repositories – The Goldmine for Attackers Developers frequently make the critical mistake of hardcoding AWS access keys during the development process into source code and pushing them to version control systems like GitHub, GitLab, Bitbucket, or AWS CodeCommit. Even private repositories are not immune—compromised developer accounts or accidental public exposure can leak credentials. Attackers routinely scan public repositories for credentials using tools like Gitrob and TruffleHog. To counter this risk, honey tokens can be embedded in configuration files (config.json, .env files) or within scripts. Any attempt to use these credentials can then serve as an immediate indicator of unauthorized access.
  2. Cloud Storage – The Hidden Credential Dump AWS S3 buckets, EFS, FSx, and even EBS snapshots are often used to store backup files, logs, or configuration data. Unfortunately, access keys sometimes end up in these storage solutions, either due to improper security controls or poor file organization. Attackers frequently probe for misconfigured S3 buckets and open shares, making them an excellent place for honey tokens. By placing decoy credentials inside a log file or old backup, you can detect unauthorized scanning and credential theft attempts in your cloud storage environment.
  3. CI/CD Pipelines & Automation Tools – A Growing Risk Continuous integration and deployment (CI/CD) pipelines automate software delivery but often involve credentials stored as environment variables or embedded in scripts. Jenkins, GitHub Actions, GitLab CI/CD, and Bitbucket Pipelines are notorious for accidental key exposure, especially when debug logs are left enabled. Infrastructure-as-code tools like Terraform, CloudFormation, and Ansible scripts can also be sources of credential leakage. Security teams can insert honey tokens into pipeline configurations or automation scripts to detect misuse and enhance visibility into unauthorized actions.
  4. Email & Collaboration Tools – The Unintentional Leak In fast-paced development teams, credentials often get shared over communication tools like Slack, Microsoft Teams, Google Drive file shares, or even a simple email. Attackers gaining access to these platforms, either through phishing or compromised accounts, can search for AWS access keys (and more) in old messages or shared documentation (e.g., Google Docs, Notion, Confluence). It is worth noting that on these platforms, access key leaks might be the least of the organization’s concerns; client records or other personally identifiable information (PII) are likely even more critical assets to jeopardize. By strategically placing honey tokens in documentation or chat messages, security teams can track whether attackers attempt to use credentials from leaked conversations.

Coralogix In Action

By ingesting logs from both CloudTrail and Google Workspace, we can monitor all changes happening within both platforms and monitor the activity of both tokens in a single console.

We will now simulate two scenarios

Planting the decoys

For AWS, we will first create the user and its access key with no permissions.

To ease the process, we can use Terraform:

provider "aws" {
  region  = "eu-west-1"
}
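# Decoy IAM user with no policies attached – any API call made with its key is a red flag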
resource "aws_iam_user" "this" {
  name = "foo"
}
resource "aws_iam_access_key" "this" {
  user  = aws_iam_user.this.name
}
locals {
  credentials = {
    access_key = aws_iam_access_key.this.id
    secret_key = aws_iam_access_key.this.secret
  }
}
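# Write the decoy key pair to credentials.json so it can be planted as bait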
resource "local_file" "this" {
  filename = "credentials.json"
  content  = jsonencode(local.credentials)
}

Note: For a real Honey Token used in a production environment, use a better name than “foo”; attackers might recognize this obvious test name and refrain from using it.

After running the Terraform above, we get an additional file named credentials.json with content similar to the following:

{
  "access_key": "AKIA3LVX...",
  "secret_key": "OjycnxVKdyRv..."
}

Now that we have the user with access keys ready, let’s plant them in some demo code:
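A minimal illustration of what such bait might look like (a hypothetical snippet; the key values are the decoy ones generated above, not real credentials):

# deploy_helper.py – intentionally "forgotten" in the repository as bait.
# These are the decoy credentials from credentials.json; they carry no permissions.
import boto3

AWS_ACCESS_KEY_ID = "AKIA3LVX..."          # decoy access key
AWS_SECRET_ACCESS_KEY = "OjycnxVKdyRv..."  # decoy secret key

s3 = boto3.client(
    "s3",
    aws_access_key_id=AWS_ACCESS_KEY_ID,
    aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
)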

For Google Drive, we will create a file called user_data.xlsx, which we will make public to anyone with the link, in the main Drive directory. It’s important to note that for a real scenario it is recommended to place the file in a path that appears real enough to arouse the curiosity of the unwanted entity.

Set up the alerts in Coralogix

For AWS

With the query:

userIdentity.type:"IAMUser" AND userIdentity.accessKeyId:"AKIA3LVX..."


For Google Drive

With the query:

id.applicationName:"drive" AND event.parameters.doc_title:"user_data.xlsx"

Investigating AWS Credentials

When the attacker finds the credentials and configures them in their AWS CLI, they will most likely first try to enumerate their permissions.

The first and most commonly used API call is usually get-caller-identity against the STS service, which reveals the underlying user’s name. The response will look something like this:
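An illustrative response shape (the UserId, account ID, and ARN shown here are placeholders, not output from this environment):

{
    "UserId": "AIDAEXAMPLEUSERID",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/foo"
}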

Once the user’s name is known, the attacker will try to understand what they can do in the environment.

All of those actions should be met with an AccessDenied response, as this user should not have any permissions attached.

When an alert triggers, we can investigate what the attacker tried to do and, of course, verify that they weren’t able to perform any actions in the environment.

We can use the query (the same as the alert):

userIdentity.type:"IAMUser" AND userIdentity.accessKeyId:"AKIA3LVX..."

and then use the “Show Graph For Key” action in the Explore screen of Coralogix.

We can then verify that those actions were not successful by refining the query:

userIdentity.type:"IAMUser" AND userIdentity.accessKeyId:"AKIA3LVX..." AND NOT errorCode:"AccessDenied"

The only outcome should be the log for the get-caller-identity action that was initially performed.

In conclusion

Honey Tokens serve as an effective security deception mechanism, helping organizations detect unauthorized access and malicious activity by deploying fake credentials, API keys, decoy database records, and other lures that should never be accessed under normal conditions. Any interaction with these tokens should be monitored and have pre-defined alerts set up, allowing security teams to identify attackers early and analyze their tactics.

In cloud environments, security risks stem from misconfigurations, supply chain vulnerabilities, credential leaks, and zero-day exploits. Attackers frequently scan for exposed credentials in repositories, cloud storage, CI/CD pipelines, and collaboration tools, making these locations ideal for planting honey tokens. By strategically placing decoy AWS keys and monitoring unauthorized use, organizations can gain valuable intelligence on attack vectors, enhance threat visibility, and strengthen their incident response. Assuming compromise as a security mindset ensures that teams focus on proactive threat detection and rapid mitigation rather than relying solely on perimeter defenses.

Eliminating Blind Spots: The Case for Integrated Security and Observability

As organizations increasingly adopt security observability practices via SIEM, they often fall into an obvious trap: over-relying on a single, siloed signal, focusing only on logs, and not adding metrics as an additional source of truth. Each signal is important for understanding system behavior and load, but in isolation each tells only half the story. For example, a high CPU usage metric might look alarming, but without context from logs it’s difficult to diagnose what is causing the load on the instance. When metrics and logs are managed separately, by systems that are not correlated with each other, or in some cases not managed at all, visibility is limited. That makes environmental and network issues harder to discover, leaves root cause analysis short on data, and slows incident resolution. The best way to keep your environment secure and visible is a security-observability aggregation strategy.

The security world is witnessing a shift toward full-stack security observability, where every layer, from the application to the infrastructure, is monitored and analyzed in an integrated fashion. This approach delivers deeper, more comprehensive insights while helping to eliminate the blind spots that fragmented monitoring leaves behind. Investing in a platform that provides both observability and security will ensure that you stay ahead of the changes in the industry.

The AWS platform provides a masterclass in executing security-observability aggregation. AWS is the world’s most comprehensive and broadly adopted cloud, with over 200 fully featured services delivered from data centers globally. AWS offers a significant number of services and features – from infrastructure technologies like compute, storage, and databases to emerging technologies such as ML and AI, data lakes, and analytics. This mixture of infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), and packaged software-as-a-service (SaaS) solutions exists in the same ecosystem, making security engineers’ lives easier and providing more effective observability for cloud-based applications.

You’re probably familiar with Coralogix as the leading observability platform, but we combine modern full-stack observability with a cutting-edge SIEM. Our SIEM platform centralizes and analyzes logs and metrics in real time without relying on indexing, and it handles data at any scale – including the small project used in this walkthrough.

Let’s review two dependent pieces of information coming from different sources, and see how we can correlate them:

The configuration includes the basic web app Juice Shop, which serves a variety of static and dynamic content. We won’t get into the bits and bytes of each environmental component, but for the purposes of this project I configured CloudFront for caching and a WAF to protect the public interface, following the diagram below.

I used the following three components to ship logs to Coralogix:

Once everything was configured, within several minutes I started to see the first bytes of information in my Coralogix account:

Next, we need to configure a custom dashboard to get an overview of the information. First, I will configure separate widgets to present the information from the two different sources: caching stats and CPU utilization.

Next, I aggregate those two under one widget to present the data together.

The beauty of this graph is the data correlation between the CPU and Caching offload:

Query 1: CPU Utilization

Query 2: Metric of caching ratio

Query 3: Logs with different Edge response statuses

This data can point to specific issues or anomalies related to EC2, abnormal offload rates, as well as network-related activity not correlated with the CDN service.

The image above represents expected behaviour: CPU goes up while the cache starts to populate. We can clearly see the CloudFront edge start to kick in and respond with edge hits, meaning objects are served from cache and not from the origin server. CPU utilization then starts to go down, which makes sense. So far, this is the expected behaviour. The next event creates another wave of cache misses that ramps up CPU, which eventually stabilizes at around 3%-5% utilization.

Here’s one last example in the dashboard below, which represents the anomaly we are looking for. Following the CDN hits, there is a CPU anomaly that keeps the machine busy at around 50% average utilization: we see both edge hits and misses, yet the CPU remains high the entire time.

This new approach to monitoring data is truly a game changer, revolutionizing how organizations safeguard their assets in a rapidly evolving environmental landscape. By leveraging advanced correlated analytics, real-time monitoring, and intelligent automation, this solution transforms traditional observability practices, enabling quicker detection and proactive responses to potential anomalies. This shift in technology not only enhances operational efficiency but also empowers organizations to stay one step ahead of potential failure, redefining the standards for robust and adaptive monitoring.

To learn more, talk to one of our security experts.