In today’s high-velocity digital ecosystem, visibility isn’t enough. SREs and engineering leaders need real-time insights, actionable signals, and automated workflows to operate at scale. As systems grow more distributed and cloud-native, the demand for intelligent observability and security has never been higher.
Extensions deliver instant observability through prepackaged parsing rules, alerts, dashboards, and more. At Coralogix, we don't just offer extensions; we deliver operational value out of the box. Our curated ecosystem of extensions helps teams detect issues faster, understand them more deeply, and resolve them with confidence.
Observability that goes beyond the surface
Observability should empower, not overwhelm. While many platforms showcase a long list of extensions, Coralogix focuses on what really matters to SREs and administrators: depth, usability, and time to value.
With more than 3,800 deployable artifacts, including dashboards, alerts, and parsing rules, Coralogix helps SREs and platform teams achieve instant visibility into their environments.
| Platform | Deployable Artifacts |
| --- | --- |
| AWS | 650 |
| Azure | 160 |
| Google Cloud Platform | 168 |
| AI Monitoring | 7 |
| Other Platforms | 2991 |
Instead of manually building custom dashboards and alerts, teams can rely on real-world, time-tested content tailored for common cloud services, infrastructure components, and application tiers, delivering use cases quickly and efficiently.
Security that’s proactive, not reactive
Security isn't just a function; it's a shared responsibility. And SREs are increasingly on the front lines, since security incidents often overlap with operations and affect application availability.
Coralogix provides over 2,400 preconfigured security alerts to help you catch threats early, from IAM misconfigurations to anomalous login behavior. These aren't generic templates; they're curated rules built on best practices and real-world patterns.
| Category | Security Alerts |
| --- | --- |
| Core Security | 2443 |
| AI/Other | 1133 |
With full-stack visibility across logs, metrics, and traces, your team gets a unified view of reliability and risk, all in a single platform.
Content first approach
Coralogix takes a content-first approach: delivering enriched, deployable resources, not just raw connectors. This helps engineering and SRE teams spend less time configuring and more time optimizing.
The Coralogix advantage
- 3,800+ deployable assets for observability and security
- One-click extensions for AWS, Azure, GCP, Kubernetes, and more
- Out-of-the-box dashboards, alerts, and parsing rules
- Real-time streaming with no indexing delay or storage lock-in
- Proactive security content aligned with modern compliance needs
Whether you're optimizing SLOs, tightening your security posture, or reducing mean time to resolution (MTTR), Coralogix delivers the tools and context your team needs, immediately.
See it for yourself
Curious what observability and security can look like when it’s built to scale with you? Visit the Coralogix Integrations Hub and explore our library of ready-to-deploy extensions.
Coralogix is excited to announce a major enhancement to our Unified Threat Intelligence (UTI) capabilities – now with expanded IOC matching beyond IPs. While our earlier focus was primarily on detecting malicious IP addresses, threats have evolved. Attackers now hide behind encrypted traffic, disposable domains, and polymorphic files.
To stay ahead, we've normalized new critical fields – JA3, JA4, domain, URL, and file hash – and integrated them into our UTI engine. These are now fully supported in the Snowbit Utilities Extension, bringing faster detection, richer context, and broader coverage.
What’s New?
Until now, our IOC enrichment focused on:
- Malicious IP Address Detection
We’ve now added IOC enrichment support for:
- JA3 Fingerprint
- JA4 Fingerprint
- Malicious Domain
- Malicious URL
- Malicious File Hash
This extended support means more comprehensive detection, enabling threat hunters and analysts to surface stealthier adversarial activity across multiple attack surfaces.
Available Alerts
These enrichments power new dedicated alerts – now live via the Snowbit Utilities Extension:
- Unified Threat Intel – Malicious URL Detected
- Unified Threat Intel – Malicious JA4 Fingerprint Detected
- Unified Threat Intel – Malicious JA3 Fingerprint Detected
- Unified Threat Intel – Malicious Domain Detected
- Unified Threat Intel – Malicious Hash Detected
These alerts work alongside the existing IP-based detection to give you full-spectrum IOC monitoring.
Sample Log – Matched IOC (Domain)
Here’s how a matched malicious domain appears in logs under the cx_security namespace:
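As an illustrative sketch (the exact field names under the cx_security namespace may differ in your account and extension version), an enriched match might look like this:

{
  "cx_security": {
    "ioc_type": "domain",
    "matched_value": "bad-domain.example",
    "feed_source": "example-intel-feed",
    "confidence": "high",
    "notes": "Domain linked to known C2 infrastructure",
    "malware_tags": ["c2", "phishing"]
  },
  "query_name": "bad-domain.example"
}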
Each match is enriched with contextual intel such as feed source, confidence level, threat notes, and malware tags, enabling rapid triage and response.
Delivered Through Snowbit Utilities Extension
These detections and enrichments are available immediately for customers using the Snowbit Utilities Extension, offering:
- Plug-and-play integration
- Prebuilt alert rules
- Seamless enrichment at ingest
- Unified logging under cx_security
Whether you’re handling proxy logs, NetFlow, DNS queries, or file activity – these new IOCs are automatically correlated with threat feeds and surfaced in real time.
Why This Matters
Modern attacks rarely rely on static IPs alone. Adversaries:
- Use encrypted channels that evade DPI
- Register throwaway domains and malicious URLs
- Deploy hashed payloads and beaconing clients
With JA3/JA4 fingerprinting, file hash correlation, and domain/URL intelligence, you're equipped to catch:
- TLS-based malware (e.g., Cobalt Strike, VenomRAT)
- Phishing infrastructure
- Malicious file downloads and lateral movement
Why VPC Flow Log Configuration Matters
In our investigative experience, we have observed that the default fields in VPC Flow Logs often provide limited visibility into network connections. This constraint hampers deep-dive investigations during incident response or threat hunting.
That’s why understanding and optimizing your VPC Flow Log configuration isn’t just a nice-to-have; it’s foundational to effective threat detection, troubleshooting, and cost control.
Therefore, we strongly recommend implementing proper logging configurations in VPC Flow Logs tailored to your environment and use cases.
In this document, we analyze the fields available in AWS VPC Flow Logs to evaluate their role in security, observability, and operational use. Each parameter was reviewed to understand what insight it provides, how it supports detection and troubleshooting, and whether it is essential in most environments.
By clearly identifying which fields deliver the most value, we help you build a smarter, more efficient logging strategy. We categorized fields as essential, recommended, or optional, helping guide decisions on what to retain and what can be safely excluded to reduce logging volume and cost.
We previously published a blog covering common use cases of VPC Flow Logs and out-of-the-box detections available in Coralogix.
The blog can be accessed here.
Common Gaps in VPC Flow Log Configurations:
- Overlogging: Collecting all fields by default leads to high cost with limited added value.
- Missing NAT Context: Omitting pkt-srcaddr / pkt-dstaddr in NAT or EKS environments hides true endpoints.
- Dropped Visibility Fields: Fields like action, flow-direction, and traffic-path are often skipped, reducing clarity on traffic intent and exposure.
- Container Blindness: Important ECS-specific fields are left out, making it hard to trace service-level activity.
These gaps show why default setups are rarely enough. Tailored configurations are the most effective way to ensure both visibility and efficiency.
The Role of VPC Flow Log Fields
VPC Flow Logs offer dozens of fields that can be included in each log record, but not all are created equal. Some fields are crucial for identifying suspicious behavior, tracking traffic patterns, or troubleshooting issues, while others are more situational or redundant.
To help you strike the right balance between visibility and efficiency, we’ve grouped fields based on their value across security, observability, and cost.
High-Value Fields to Retain
These fields are considered essential for any VPC Flow Log configuration and should be consistently enabled. They provide visibility into network activity and are critical for security detection, forensics, and traffic analysis.
| Field | Description | Security Use Case / Notes |
| --- | --- | --- |
| version | Log format version used | Cannot be disabled |
| account-id | AWS account owning the interface | Attribute traffic to specific accounts in multi-account environments; support tenant-aware detection. |
| interface-id | Network interface ID | Helps trace traffic to ENIs (e.g., NAT gateways, load balancers); useful in identifying misconfigured routes or abused endpoints. |
| instance-id | Linked instance ID | Drop if instance attribution isn't needed |
| srcaddr | Source IP of traffic | Pinpoints traffic origin; enables geo-IP lookups, anomaly detection (e.g., access from unusual countries), and IP-based threat intel matching. |
| dstaddr | Destination IP of traffic | Identifies which systems or services were targeted; critical in tracking lateral movement or identifying protected asset exposure. |
| srcport | Source port used | Detects unusual port usage or port scans from external sources; highlights ephemeral port behaviors often seen in malware C2. |
| dstport | Destination port used | Identifies attempts to reach sensitive services (e.g., SSH, RDP, databases); supports port-based threat models. |
| protocol | Protocol number (e.g., TCP, UDP) | Flags suspicious or non-standard protocol usage (e.g., ICMP abuse, stealthy exfiltration via UDP). |
| region | AWS Region of resource | Drop in single-region setups |
| tcp-flags | TCP handshake/termination flags | Drop unless deep behavioral detection needed |
| pkt-srcaddr | Original (pre-NAT) source IP | Drop if not using NAT/EKS |
| pkt-dstaddr | Original (pre-NAT) destination IP | Same as above |
| action | ACCEPT or REJECT decision | Identifies blocked attacks vs. successful connections; useful for tuning firewall rules and alert triage. |
| flow-direction | Ingress or egress indicator | Helps distinguish between inbound scans and outbound C2/beaconing; crucial for DLP and egress filtering. |
| traffic-path | Egress path from the VPC | Reveals whether traffic exited via internet, VPN, or peering; useful for identifying unapproved exposure routes. |
| ecs-cluster-arn | ARN of ECS cluster | Maps traffic to specific container clusters; supports container-aware security monitoring. |
| ecs-cluster-name | Name of ECS cluster | Human-readable cluster name for correlating flows in dashboards and alerts. |
| reject-reason | Why traffic was rejected (e.g., BPA) | Explains blocked attempts; particularly useful in enforcing AWS BPA (Block Public Access) policies. |
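To make this concrete, a custom log record format assembled from these high-value fields might look like the sketch below. Include the pkt-* fields only if NAT or EKS is in play, and the ecs-* fields only if you run ECS:

${version} ${account-id} ${interface-id} ${instance-id} ${srcaddr} ${dstaddr} ${srcport} ${dstport} ${protocol} ${region} ${tcp-flags} ${pkt-srcaddr} ${pkt-dstaddr} ${action} ${flow-direction} ${traffic-path} ${reject-reason}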
Lightweight Fields for Selective Use
These fields are not strictly required for baseline visibility and detection, and can be trimmed to reduce ingestion volume and cost. However, they offer additional insights for specific use cases such as network performance tuning, environment tagging, or deep forensic analysis. If you’re focused primarily on cost optimization, these are good candidates to drop – but each comes with a tradeoff depending on your monitoring goals.
| Field | Description | Tradeoff |
| --- | --- | --- |
| packets | Packets transferred | Drop if performance tracking is unnecessary |
| start | Start time of the flow | Drop if timing isn't critical |
| end | End time of the flow | Same as above |
| log-status | Log collection health indicator | Drop if not tracking ingestion gaps |
| vpc-id | ID of the associated VPC | Drop in single-VPC setups |
| subnet-id | ID of the associated subnet | Drop if not subnet-aware |
| type | Traffic type (IPv4/IPv6/EFA) | Drop if IP type can be inferred from addresses |
| az-id | Availability Zone ID | Drop if zone-level tracking is not needed |
| sublocation-type | Edge location type (Outposts, etc.) | Drop if not using Wavelength/Local Zones |
| sublocation-id | ID of edge infrastructure zone | Same as above |
| pkt-src-aws-service | AWS service associated with source IP | Drop unless detailed AWS service tracking is needed |
| pkt-dst-aws-service | AWS service associated with destination IP | Same as above |
| ecs-container-instance-arn | ECS container instance (ARN) | Drop if Fargate-only or not using ECS |
| ecs-container-instance-id | ECS container instance ID | Same as above |
| ecs-container-id | Docker ID of first container | Drop unless deep ECS visibility is needed |
| ecs-second-container-id | Docker ID of second container | Same as above |
| ecs-service-name | ECS service name | Drop if service-level mapping isn't needed |
| ecs-task-definition-arn | ECS task definition ARN | Drop if not needed |
| ecs-task-arn | ARN of running ECS task | Drop unless container visibility is required |
| ecs-task-id | ECS task ID | Drop in Fargate/basic ECS setups |
Customizing VPC Flow Log Record Format
There are two ways to define the log record format for AWS VPC Flow Logs:
- Default Format
A fixed set of fields defined by AWS. It is easy to enable and provides baseline network visibility but offers no control over which fields are included. The version for the default format is always set to 2.
- Custom Format
Lets you explicitly specify which fields to include in your flow logs. Custom format gives you fine-grained control over the log content, making it ideal for cost optimization, security-specific logging, or adapting logs to match your SIEM or analytics pipeline.
Step-by-Step Customization:
- Head over to the VPC Dashboard in the AWS Console.
- Select the VPC for which you want to enable flow logs.
- Click on the Flow Logs tab and choose Create flow log (or Edit if modifying an existing one).
- Configure all the required fields as per your requirement.
- Under Log record format, choose Custom from the dropdown.
- In the text box that appears, enter your desired field list (e.g., srcaddr dstaddr srcport dstport protocol bytes action).
- Click Create flow log to save and activate the configuration.
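If you prefer automation over the console, the same configuration can be created via the AWS CLI. A minimal sketch; the VPC ID and S3 bucket ARN below are placeholders:

# Create a flow log with a custom record format, delivered to S3
aws ec2 create-flow-logs \
  --resource-type VPC \
  --resource-ids vpc-0abc1234def567890 \
  --traffic-type ALL \
  --log-destination-type s3 \
  --log-destination arn:aws:s3:::example-flow-logs-bucket \
  --log-format '${srcaddr} ${dstaddr} ${srcport} ${dstport} ${protocol} ${bytes} ${action}'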
Conclusion
By customizing VPC Flow Logs, organizations can significantly improve network visibility, enhance security posture, and optimize logging costs. Moving beyond default configurations allows for precise control over data collection, ensuring that critical information for security detection, incident response, and operational analysis is retained, while extraneous data is excluded. This tailored approach is crucial for building a robust and efficient monitoring strategy within AWS environments.
Non-Human Identities (NHIs) refer to digital identities assigned to machines, applications, services, APIs, containers, bots, and other automated or programmatic entities within an IT or cloud environment.
Unlike user accounts that are tied to real people, NHIs enable systems to communicate and perform actions on each other's behalf, such as a microservice querying a database or a CI/CD pipeline deploying code. These identities are typically associated with credentials like API keys, tokens, certificates, or SSH keys, which grant them access to systems, resources, and data.
NHIs often hold elevated permissions and can access sensitive data or critical infrastructure components. If left unmanaged or unsecured, they increase the attack surface exponentially. Unlike human users, NHIs often don't follow predictable work hours or behaviors, which makes them harder to baseline and analyze without proper controls.
Securing and monitoring NHIs ensures that only authorized systems can interact, access, or make changes within the environment — a crucial aspect of maintaining security, compliance, and system authenticity and integrity.
The number of NHIs has exploded with the rise of cloud-native architectures, DevOps automation, and AI workloads. In many organizations, NHIs outnumber human users tens or hundreds of times. Unfortunately, traditional identity and access management (IAM) systems were not designed to handle this scale or complexity. Meanwhile, attackers are increasingly targeting these identities because they are often poorly monitored, hard-coded into scripts, or left with excessive privileges. The urgency to act now is driven by the increasing complexity of modern infrastructure and the growing volume of automated, machine-based communication.
Failure to secure and monitor NHIs can lead to a wide range of security incidents, including:
- Credential Leakage: Hard-coded credentials or exposed tokens in public repositories can be exploited by attackers.
- Lateral Movement: Once a malicious actor compromises one NHI, they can potentially pivot across systems using that identity’s access.
- Privilege Escalation: Overprivileged NHIs can be abused to execute unauthorized actions or gain access to sensitive systems.
- Data Exfiltration: Compromised NHIs can be used to silently extract large volumes of data without triggering traditional user-based alerts.
- Supply Chain Attacks: NHIs involved in a build pipeline process or software distribution can be hijacked to inject malicious code or backdoors.
NHIs come in various forms, depending on the systems, environments, and tasks they are associated with. Common types include:
- Service accounts, used by applications or scripts to perform automated tasks
- API keys and tokens, which grant access to cloud services or APIs
- Robots and bots, such as chatbots or automation bots used in IT workflows
- IoT devices, which connect to networks and systems, often with their own identity and authentication needs
- Machine credentials, including SSH keys and certificates used for secure communication between servers or services
In cloud environments, NHIs also include IAM roles and managed identities, which allow cloud-native services like virtual machines or containers to interact securely with other components. As digital infrastructure evolves, the number and variety of NHIs continue to grow, making their visibility and management a top security priority.
Coralogix is getting ahead of the game by concentrating on key NHI factors, such as:
- Inventory Management – Discover and inventory NHIs, with visibility into usage and access privileges.
- Context – Context about each identity: ownership, usage, resource access, privileged status, and stale accounts.
- Proactive Security – Continuously analyze and improve the security of non-human identities, trigger alerts, and visualize activity through custom dashboards.
- Integration – Integrations with AWS, Azure, GCP, and Okta.
As the digital ecosystem continues to expand, the presence of non-human identities—ranging from bots and APIs to autonomous systems—has become foundational to modern infrastructure. Ensuring their security and effective monitoring is not optional; it’s critical. Without proper safeguards, these identities can be exploited, leading to breaches, service disruptions, and loss of trust. Just as human identity management evolved to meet growing digital demands, securing non-human identities must become a top priority to protect data integrity, ensure compliance, and enable seamless, secure automation across industries.
Honey Tokens: Turning Attackers' Curiosity into a Security Advantage Using Coralogix
Honey Tokens are a security deception mechanism designed to detect unauthorized access or malicious activity by creating fake credentials, API keys, or cloud resources that should never be accessed under normal conditions. If an attacker interacts with a Honey Token, it triggers a pre-defined alert, helping security teams identify breaches early.
In a cloud environment, one should always assume compromise due to the cloud infrastructure’s complex and dynamic nature, where multiple risk factors contribute to potential security breaches.
The human factor plays a significant role, as misconfigurations—such as overly permissive IAM policies, exposed storage buckets, or unencrypted databases—are among the leading causes of cloud security incidents.
Supply chain risks, including compromised third-party dependencies and insecure CI/CD pipelines, further increase the attack surface.
Additionally, credential leakage, whether through exposed API keys in public repositories or phishing attacks against cloud administrators, can provide attackers with unauthorized access.
Given these risks, among other things, organizations must adopt continuous monitoring with automated threat detection to identify anomalies before they escalate. Assuming compromise ensures that security teams operate with a proactive mindset, focusing on rapid detection, containment, and response rather than relying solely on perimeter defenses.
Using Honey Tokens enhances security by detecting unauthorized access early and exposing attackers before they cause harm. These deceptive tokens act as tripwires, triggering alerts when accessed. Since legitimate users should never interact with them, any activity is a red flag, helping identify attack vectors like compromised credentials or insider threats. Unlike traditional security controls, Honey Tokens provide behavior-based detection, making them effective against zero-day exploits. Lightweight and cost-effective, they improve threat visibility and incident response when integrated with Coralogix.
Types of Honey Tokens
Honey tokens come in different forms, each designed to lure attackers and provide valuable insights into their activities. Organizations can enhance their security monitoring and threat detection across multiple attack vectors by deploying various types of honey tokens. Here are the most common types:
- Decoy Files – These files are labeled with sensitive names, such as “Financial Records” or “Employee Data,” to attract unauthorized access.
- Fake Credentials – These are non-functional login credentials planted in expected locations. Any attempt to use them signals potential credential theft and helps trace the attacker’s origin.
- Decoy Database Records – These false database entries resemble real sensitive data, allowing security teams to detect unauthorized access attempts and study attackers’ objectives.
- Canary Tokens – These are small triggers embedded in applications, servers, or URLs that notify security teams when accessed. They can be disguised as API keys, browser cookies, or unique URLs to track malicious activity.
- Email-Based Honey Tokens – These involve fake email addresses that, when targeted in phishing campaigns or hacker communications, provide insights into attackers’ tactics and sources.
By strategically placing honey tokens, organizations can improve their ability to detect and respond to security threats before real damage occurs.
Common Places AWS Access Keys Are Misplaced
AWS access keys are often misplaced in various locations, whether through human error, insecure development practices, or overlooked configurations. A common practice among attackers is to scan for exposed credentials in these areas, making them ideal places to plant honey tokens—decoy credentials designed to trigger alerts when used.
Here are some of the most common locations where AWS keys are inadvertently leaked, and how you can leverage each of them for security monitoring.
- Code Repositories – The Goldmine for Attackers: Developers frequently make the critical mistake of hardcoding AWS access keys into source code during development and pushing them to version control systems like GitHub, GitLab, Bitbucket, or AWS CodeCommit. Even private repositories are not immune—compromised developer accounts or accidental public exposure can leak credentials. Attackers routinely scan public repositories for credentials using tools like Gitrob and TruffleHog. To counter this risk, honey tokens can be embedded in configuration files (config.json, .env files) or within scripts. Any attempt to use these credentials can then serve as an immediate indicator of unauthorized access.
- Cloud Storage – The Hidden Credential Dump: AWS S3 buckets, EFS, FSx, and even EBS snapshots are often used to store backup files, logs, or configuration data. Unfortunately, access keys sometimes end up in these storage solutions, either due to improper security controls or poor file organization. Attackers frequently probe for misconfigured S3 buckets and open shares, making them an excellent place for honey tokens. By placing decoy credentials inside a log file or an old backup, you can detect unauthorized scanning and credential theft attempts in your cloud storage environment.
- CI/CD Pipelines & Automation Tools – A Growing Risk: Continuous integration and deployment (CI/CD) pipelines automate software delivery but often involve credentials stored as environment variables or embedded in scripts. Jenkins, GitHub Actions, GitLab CI/CD, and Bitbucket Pipelines are notorious for accidental key exposure, especially when debug logs are left enabled. Infrastructure-as-code tools like Terraform, CloudFormation, and Ansible scripts can also be sources of credential leakage. Security teams can insert honey tokens into pipeline configurations or automation scripts to detect misuse and enhance visibility into unauthorized actions.
- Email & Collaboration Tools – The Unintentional Leak: In fast-paced development teams, credentials often get shared over communication tools like Slack, Microsoft Teams, Google Drive file shares, or even simple email. Attackers gaining access to these platforms, either through phishing or compromised accounts, can search for AWS access keys (and not only those) in old messages or shared documentation (e.g., Google Docs, Notion, Confluence). It is worth noting that on these platforms, access key leaks might be the least of the organization's concerns; information such as client records or other personally identifiable information (PII) is likely a far more critical asset to jeopardize. By strategically placing honey tokens in documentation or chat messages, security teams can track whether attackers attempt to use credentials from leaked conversations.
Coralogix In Action
By ingesting logs from both CloudTrail and Google Workspace, we can monitor all changes happening within both platforms and monitor the activity of both tokens in a single console.
We will now simulate two scenarios:
- Leaked access keys found in a public GitHub repository
- Decoy sensitive file found in Google Drive
Planting the decoys
For AWS, we will first create the user and its access key with no permissions. To ease the process, we can use Terraform:
provider "aws" {
region = "eu-west-1"
}
resource "aws_iam_user" "this" {
name = "foo"
}
resource "aws_iam_access_key" "this" {
user = aws_iam_user.this.name
}
locals {
credentials = {
access_key = aws_iam_access_key.this.id
secret_key = aws_iam_access_key.this.secret
}
}
resource "local_file" "this" {
filename = "credentials.json"
content = jsonencode(local.credentials)
}
Note: For a real Honey Token used in a production environment, use a better name than "foo". Attackers might recognize this testing name and refrain from using the keys.
After running the above Terraform, we get an additional file named credentials.json with content similar to the following:
{
"access_key": "AKIA3LVX...",
"secret_key": "OjycnxVKdyRv..."
}
Now that we have the user with access keys ready, let's plant them in some demo code:
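As an example, the decoy keys could be committed to an innocuous-looking environment file in the repository (key values truncated as above; the file layout here is hypothetical):

# .env file committed to the demo repository
AWS_ACCESS_KEY_ID=AKIA3LVX...
AWS_SECRET_ACCESS_KEY=OjycnxVKdyRv...
AWS_DEFAULT_REGION=eu-west-1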
For Google Drive, we will create a file called user_data.xlsx and make it public to anyone with the link in the main Drive directory. It's important to note that for a real scenario, it is recommended to place the file in a path that appears real enough to arouse the curiosity of the unwanted entity.
Set up the alerts in Coralogix
For AWS, create an alert with the following query:
userIdentity.type:"IAMUser" AND userIdentity.accessKeyId:"AKIA3LVX..."
For Google Drive, create an alert with the following query:
id.applicationName:"drive" AND event.parameters.doc_title:"user_data.xlsx"
Investigating AWS Credentials
When the attacker finds the credentials and configures them in their AWS CLI, they will most likely first try to enumerate their permissions.
The first and most used API call is usually get-caller-identity for the STS service, which reveals the underlying user's name. The response will look something like this:
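For the decoy user created above, the output follows the standard get-caller-identity shape (the account ID and ARN here are illustrative):

{
    "UserId": "AIDAEXAMPLEID12345",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/foo"
}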
After learning the user's name, the attacker will try to understand what they can do in the environment:
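Typical enumeration attempts might look like the following AWS CLI calls; this is an illustrative sample rather than an exhaustive list:

aws iam list-attached-user-policies --user-name foo
aws iam list-user-policies --user-name foo
aws iam list-groups-for-user --user-name foo
aws s3 ls
aws ec2 describe-instances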
All of those actions should be met with an AccessDenied response, as this user should not have any permissions attached.
When the alert triggers, we can investigate what the attacker tried to do, and of course verify that they weren't able to perform any actions in the environment.
We can use the query (the same as the alert):
userIdentity.type:"IAMUser" AND userIdentity.accessKeyId:"AKIA3LVX..."
and then use the "Show Graph For Key" action in the Explore screen of Coralogix.
We can then verify that those actions were not successful by enhancing the query:
userIdentity.type:"IAMUser" AND userIdentity.accessKeyId:"AKIA3LVX..." AND NOT errorCode:"AccessDenied"
The only outcome should be the log for the get-caller-identity action that was initially performed.
In conclusion
Honey Tokens serve as an effective security deception mechanism, helping organizations detect unauthorized access and malicious activity by deploying fake credentials, API keys, decoy database records, and other lures that should never be accessed under normal conditions. Any interaction with these tokens should be monitored and have pre-defined alerts set up, allowing security teams to identify attackers early and analyze their tactics.
In cloud environments, security risks stem from misconfigurations, supply chain vulnerabilities, credential leaks, and zero-day exploits. Attackers frequently scan for exposed credentials in repositories, cloud storage, CI/CD pipelines, and collaboration tools, making these locations ideal for planting honey tokens. By strategically placing decoy AWS keys and monitoring unauthorized use, organizations can gain valuable intelligence on attack vectors, enhance threat visibility, and strengthen their incident response. Assuming compromise as a security mindset ensures that teams focus on proactive threat detection and rapid mitigation rather than relying solely on perimeter defenses.
Eliminating Blind Spots: The Case for Integrated Security and Observability
As organizations increasingly adopt security observability practices via SIEM, they often fall into an obvious trap: over-reliance on a siloed signal, focusing only on logs and not adding metrics as an additional source of truth. Each signal is important for understanding system behavior and load, but each is only half the picture when used in isolation. For example, a high CPU usage metric might look alarming, but without context from logs, it's difficult to diagnose the issue causing such load on the instance. When metrics and logs are managed separately, handled by separate systems without correlated integration, or in some cases not managed at all, visibility is limited. That makes environmental and network issues harder to discover, leaves gaps in the data needed for root cause analysis (RCA), and complicates incident resolution. The best way to keep your environment secure and visible is a security-observability aggregation strategy.
The security world is witnessing a shift towards full-blown security observability, where all the layers of the stack, from the application to the infrastructure, are monitored and analyzed in a combined, integrated fashion. This approach allows for deeper and more comprehensive insights while helping to eliminate the blind spots that fragmented monitoring leaves behind. Investing in a platform that provides both observability and security will ensure that you stay ahead of the changes in the industry.
The AWS platform provides a masterclass in security-observability aggregation. AWS is the world's most comprehensive and broadly adopted cloud, with over 200 fully featured services delivered from data centers globally. AWS offers a significant number of services and features, from infrastructure technologies like compute, storage, and databases to emerging technologies such as ML and AI, data lakes, and analytics. This mixture of infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), and packaged software-as-a-service (SaaS) solutions exists in the same ecosystem, making security engineers' lives easier and providing more effective observability for cloud-based applications.
You probably know Coralogix as a leading observability platform, but we combine modern full-stack observability with a cutting-edge SIEM. Our SIEM platform centralizes and analyzes logs and metrics in real time, without reliance on indexing, and handles data at any scale, including the small project we'll monitor here.
Let’s review two dependent pieces of information coming from different sources, and see how we can correlate them:
- AWS Performance Metrics: detailed monitoring of different resources, such as Amazon EC2 instances, Amazon EBS volumes, and Amazon RDS DB instances. AWS CloudWatch can import/export all the metrics from your account (both AWS resource metrics and application metrics that you provide).
- AWS CloudFront: a CDN service built on a network of edge locations seamlessly connected to different regions, improving origin fetches and dynamic content acceleration.
The configuration includes the basic web app Juice Shop, which serves a variety of static and dynamic content. We are not going to get into the bits and bytes of every environmental component, but for the purposes of this project I configured CloudFront for caching and AWS WAF to protect the public interface.
I used the following three components to ship logs to Coralogix:
- S3 to store CloudFront logs, with the Coralogix Lambda Shipper forwarding them directly to my Coralogix account
- Kinesis Data Firehose to stream WAF logs to Coralogix
- CloudWatch Metrics to ship infrastructure stats from the EC2 instance
Once everything was configured, within several minutes I started to see the first bytes of information arriving in my Coralogix account.
Next, we need to configure a Custom Dashboard to get an overview of the information. First, I configure separate widgets to present the information from the two sources: Caching Stats and CPU Utilization.
Next, I aggregate those two under one widget and represent the data.
The beauty of this graph is the data correlation between the CPU and Caching offload:
Query 1 : CPU Utilization
Query 2 : Metric of caching ratio
Query 3 : Logs with different Edge response statuses
This data can point to specific issues or anomalies related to EC2, abnormal offload rates, or network activity not correlated with the CDN service.
The expected behavior looks like this: CPU goes up while the cache starts to accumulate. We can clearly see the CloudFront edge start to kick in and respond with edge hits, meaning objects are served from the cache and not from the origin server. CPU utilization then starts to go down, which makes absolute sense. So far, this is the expected behavior. The next event creates another wave of cache misses that ramps up CPU, which eventually stabilizes at around 3%-5% utilization.
One last example represents the anomaly we are looking for: following CDN hits, there is a CPU anomaly that keeps the machine busy at around 50% average utilization; we see both edge hits and misses, but the CPU remains high the whole time.
This approach to monitoring data is a genuine game changer, revolutionizing how organizations safeguard their assets in a rapidly evolving landscape. By leveraging correlated analytics, real-time monitoring, and intelligent automation, it transforms traditional observability practices, enabling quicker detection of and proactive responses to potential anomalies. This shift not only enhances operational efficiency but also empowers organizations to stay one step ahead of potential failures, redefining the standards for robust and adaptive monitoring.
To learn more, talk to one of our security experts.
Threat Hunting with AWS CloudTrail: Detecting Anomalies for a Secure Cloud Environment
In the ever-evolving landscape of cloud security, AWS CloudTrail has emerged as an essential tool for monitoring and understanding activity across your AWS environment. By logging user actions and resource behavior, CloudTrail provides invaluable insights for strengthening security, ensuring compliance, and creating a robust audit trail.
However, while CloudTrail captures a wealth of event data, the real challenge lies in identifying anomalies that could indicate potential threats. This blog explores how CloudTrail can be leveraged for threat hunting and anomaly detection, offering practical guidance and alert strategies to detect suspicious activities early.
What is AWS CloudTrail?
AWS CloudTrail records detailed logs of actions across AWS services, including:
- Who performed an action
- When it occurred
- Where it originated
These logs form a comprehensive audit trail, aiding:
- Security analysis by identifying unauthorized activities
- Compliance auditing to meet regulatory requirements
- Resource tracking to monitor and troubleshoot changes
Despite its comprehensive coverage, organizations often face challenges in identifying meaningful patterns amid the data, especially during an active attack where the attacker’s sequence of actions must be pieced together.
The Threat Landscape: Why Anomaly Detection is Crucial
Threat actors often begin with reconnaissance to find vulnerabilities, subsequently escalating privileges and exploiting resources. Detecting this early activity can significantly reduce potential damage. Yet, AWS doesn’t provide a built-in guide on which CloudTrail events to monitor or how to prioritize them.
To address this gap, Coralogix has developed a threat-hunting framework using CloudTrail, focusing on over 150 critical events. This includes a correlation alert, built from foundational building blocks, that links together multiple anomalous events typically seen during reconnaissance, plus an anomaly dashboard to surface suspicious activity effectively.
Building Blocks for Anomaly Detection
1. Multiple Events Detected (By User)
Alert Trigger:
This alert fires when more than 15 unique CloudTrail events, drawn from the 150 critical events mentioned above, are detected from a single user within a 20-minute interval.
Rationale: External threat actors usually use automated tools to perform recon activities. When they run such tools, a high number of CloudTrail events is logged within a short interval of time. Many of these events are "Get", "List", and "Describe" actions.
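As a sketch of the underlying pattern, a Lucene-style query over CloudTrail logs that surfaces this read-heavy recon behavior could look like the following; the packaged alert uses its own curated list of critical events rather than simple wildcards:

eventName:(Get* OR List* OR Describe*) AND userIdentity.type:"IAMUser"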
Challenges & Fine-Tuning Recommendations:
- False Positives: High activity by legitimate users can trigger this alert.
- Fine-Tuning Recommendations:
- Adjust threshold values based on usage patterns.
- Whitelist specific users as needed.
This alert provides a high-fidelity signal for detecting unusual access patterns when fine-tuned to your organization’s needs.
2. Unique Error Types
Alert Trigger:
This alert fires when more than one unique error code is detected within a 20-minute interval for CloudTrail events from a single user.
Rationale:
Threat actors operating in unfamiliar environments often cause multiple unique errors due to insufficient knowledge of permissions and privileges.
Fine-Tuning Recommendations:
- Understand the reasons behind these errors (e.g., misconfigurations or malicious attempts).
- Whitelist legitimate users or error codes where appropriate.
3. More than Usual Errors
Alert Trigger:
This alert fires when an unusually high number of errors are detected in CloudTrail events for a single user within a 20-minute window.
Rationale:
Similar to the previous alert, attackers’ lack of familiarity often leads to failed actions, making this metric a strong indicator of suspicious activity.
Correlation Alert: Correlating Anomalies for Better Detection
To enhance detection fidelity, a correlation alert combines the above building blocks. This alert triggers when either of the following combinations occurs within 20 minutes:
- Combination 1:
- Multiple Events Detected (By User)
- Unique Error Types
- Combination 2:
- Multiple Events Detected (By User)
- More than Usual Errors
Purpose of the Correlation Alert
This alert flags multiple anomalous activities by a user in a short time frame, especially those resulting in errors or unusual patterns. Such behavior is rarely legitimate and warrants further investigation. If the activity is benign, adjust the relevant building block alerts by:
- Modifying thresholds
- Whitelisting specific users or error codes
Dashboards: Visualizing the Anomalies
In addition to the alerts, Coralogix has also created a dashboard that provides:
- A clear view of event trends and anomalies
- Insights into top users triggering errors or anomalies
- Breakdown of error codes by user, helping to pinpoint the root cause
By combining alerts with actionable insights, organizations can quickly identify and respond to potential threats.
Conclusion
AWS CloudTrail is a powerful tool for cloud monitoring, but its true value lies in its ability to enable proactive threat hunting. By implementing the above strategies, organizations can:
- Detect and investigate suspicious activities early
- Minimize false positives with tailored fine-tuning
- Strengthen overall security posture
With the right tools, dashboards, and alert configurations, CloudTrail becomes a cornerstone of any robust cloud security strategy. Start your anomaly detection journey today to stay ahead of potential threats and protect your AWS environment.
How Tetragon Redefines Security and Observability
Kubernetes continues to be the favorite orchestration tool for most companies. This consistent demand means that the need for advanced security and observability solutions has never been greater. Tetragon, powered by the innovative eBPF technology, provides a groundbreaking approach to runtime security and system observability. By operating directly within the Linux kernel, Tetragon enables teams to monitor, detect, and enforce policies with unparalleled efficiency and precision.
This article explores how Tetragon leverages eBPF to deliver real-time insights into Kubernetes environments, enhance threat detection, and streamline policy enforcement. We’ll also look at how Coralogix amplifies these capabilities, offering powerful analytics, alerting, and visualization tools to turn Tetragon’s data into actionable intelligence for modern cloud-native ecosystems.
What is Tetragon?
Tetragon, developed by Cilium, is an open-source runtime security observability and enforcement platform. It leverages eBPF technology to provide real-time monitoring, detection, and enforcement of security policies directly within the Linux kernel. Tetragon enables fine-grained visibility into system activities without requiring intrusive instrumentation or modifying application code.
What is eBPF anyway…?
Extended Berkeley Packet Filter (eBPF) is a revolutionary technology in the Linux kernel that allows programs to run safely and efficiently within kernel space without modifying kernel source code or adding modules.
eBPF provides a programmable interface to dynamically attach small programs to kernel-level events, such as system calls, network packets, and file operations, enabling deep observability and control over system behavior.
Unlike traditional observability solutions, eBPF operates at the kernel level, providing unparalleled visibility into application and system events with minimal overhead.
Traditional methods often rely on user-space instrumentation or kernel modules, which can be intrusive, require context switching, and may introduce performance penalties. In contrast, eBPF programs execute directly in the kernel’s execution context, offering high-performance data collection and real-time insights without the need for costly polling or extensive logging.
This unique capability makes eBPF an ideal foundation for modern observability, security, and performance monitoring tools.
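To make this concrete, here is a minimal example using bpftrace, a separate eBPF front end used here purely for illustration. It attaches a small eBPF program to the openat syscall tracepoint and prints which process opens which file:

# Trace file opens system-wide with a single eBPF program
bpftrace -e 'tracepoint:syscalls:sys_enter_openat { printf("%s -> %s\n", comm, str(args->filename)); }'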
Application and Usage In Kubernetes
By combining eBPF’s kernel-level observability with Tetragon’s Kubernetes-native capabilities, teams can achieve a high-performance, secure, scalable observability and security solution tailored for modern cloud-native environments.
Real-Time Observability
Extends eBPF’s capabilities to provide real-time monitoring of process execution, file access, and network activity for Kubernetes pods and nodes.
Tracks workload behaviors at a granular level to aid in debugging, compliance, and performance optimization.
Runtime Threat Detection
Can detect anomalous behaviors and security threats within Kubernetes clusters, such as unauthorized process execution, suspicious file writes, or network connections.
Useful for identifying and responding to active threats without waiting for postmortem analysis.
Native Kubernetes Integration
Integrates with Kubernetes metadata (e.g., pods, namespaces, labels), mapping observability data to specific workloads and clusters, and provides a Kubernetes-aware view of security events, making it easier to manage and enforce security at the application level.
Policy Enforcement
Enforces granular security policies directly in the kernel, tailored to specific workloads or namespaces in the Kubernetes cluster.
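For a flavor of what enforcement looks like in practice, below is a sketch modeled on the upstream TracingPolicy examples; exact field names can vary across Tetragon versions. It watches kernel-level file permission checks and kills any process that touches /etc/shadow:

apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: deny-etc-shadow-access
spec:
  kprobes:
  - call: "security_file_permission"
    syscall: false
    args:
    - index: 0
      type: "file"  # the file being accessed
    - index: 1
      type: "int"   # requested access mask
    selectors:
    - matchArgs:
      - index: 0
        operator: "Equal"
        values:
        - "/etc/shadow"
      matchActions:
      - action: Sigkill  # enforce in-kernel: kill the offending process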
Enhanced Forensics and Audit Capability
Captures detailed logs of system events, making it easier to perform root-cause analysis or compliance audits, and extremely useful for tracing the origin of security incidents or debugging application issues.
Scalability for Large Clusters
Designed to operate efficiently at scale, leveraging eBPF's low overhead to monitor large Kubernetes deployments without impacting performance; suitable for environments with dynamic workloads where traditional tools struggle to keep up or to scale.
Lightweight Agent Deployment
Tetragon agents run as lightweight daemons on Kubernetes nodes, avoiding the need for complex instrumentation or sidecar containers.
How it all comes together
Coralogix enhances the observability and operational insights of Tetragon by leveraging advanced log correlation, powerful alerting mechanisms, and intuitive visualizations. Coralogix allows users to unify Tetragon's detailed runtime logs with data from other sources, such as cloud platforms like AWS, application logs, and network traffic.
This centralization enables users to correlate eBPF-driven insights, such as process execution or anomalous system calls, with other contextual information, like application behavior or network activity. Such correlations provide a holistic view of the cluster, making it easier to troubleshoot issues, detect threats, and optimize system performance.
The alerting capabilities enable real-time notifications based on predefined or custom thresholds, patterns, or anomalies in the logs. For example, users can configure alerts for specific eBPF-detected events like privilege escalations or unauthorized file access. These alerts are enriched with Kubernetes metadata (e.g., pod name, namespace) and provide actionable context for rapid incident response.
Customized dashboards are available to display actions and trends derived from Tetragon events, such as process activity, and system call distributions. By combining these insights with Kubernetes performance data, users gain a comprehensive view of their environment in an intuitive format. This facilitates proactive performance tuning, compliance auditing, and security posture evaluation.
In short, Coralogix amplifies Tetragon’s capabilities by turning its granular kernel-level observability into actionable intelligence, seamlessly integrated into the broader ecosystem of a Kubernetes monitoring strategy.
Learn more about Coralogix security here.
Offensive Security Assessment: Detect, Defend, and Deter
In today's fast-evolving cybersecurity landscape, organizations face an increasing number of threats targeting their digital assets. Offensive Security Assessment plays a critical role in safeguarding these assets by proactively identifying and addressing vulnerabilities before attackers can exploit them. This method simulates real-world attack scenarios to test and enhance an organization's security defenses.
What is an Offensive Security Assessment?
Offensive Security Assessment is a hands-on approach to evaluating an organization’s security posture by mimicking the behavior of malicious attackers. By simulating multi-stage attacks, this technique identifies potential vulnerabilities and explores how an attacker might exploit them. It assumes an attacker has already gained initial access to the system and examines how they could escalate privileges, move laterally within the Cloud, exfiltrate sensitive data, or disrupt operations.
Offensive, Defensive, and Purple Teaming Insights
| Aspect | Offensive Security | Defensive Security | Purple Teaming |
| --- | --- | --- | --- |
| Definition | Proactively identifies vulnerabilities by simulating attacks. | Protects infrastructure by implementing and maintaining security measures. | Combines offensive and defensive approaches for collaborative security enhancement. |
| Core Principle | "Think like an attacker" to find exploitable weaknesses. | "Think like a defender" to prevent, detect, and respond to threats. | "Collaborate and adapt" to integrate offensive insights with defensive strategies. |
| Key Activities | Penetration testing, red teaming, adversary emulation. | Deploying SIEM, intrusion detection systems (IDS), and threat hunting. | Joint exercises, feedback loops, real-world attack emulation, improving defenses. |
| Mindset | Focuses on breaking in to expose vulnerabilities. | Focuses on safeguarding assets from potential attacks. | Focuses on teamwork and knowledge sharing between offensive and defensive teams. |
| Goal | Strengthen systems by uncovering and remediating flaws. | Safeguard systems through prevention, detection, and rapid response. | Improve overall security posture by aligning offensive findings with defensive improvements. |
| Focus Area | Proactively testing resilience against simulated attacks. | Ensuring infrastructure integrity through monitoring, threat detection, and incident management. | Enhancing both offensive and defensive capabilities through seamless coordination and shared objectives. |
| Tools and Techniques | Exploit frameworks, attack simulations, vulnerability scanners, social engineering. | Firewalls, SIEM tools, EDR, threat intelligence platforms, incident response plans. | Integration of offensive and defensive tools; joint analysis of simulated attacks and incident handling. |
Elevate Your Offensive Security Assessment: Proactive Strategies for Modern Threats
In today’s complex Infrastructure, understanding and addressing vulnerabilities is critical to safeguarding your assets. This guide walks you through key strategies to strengthen your cloud security posture by combining offensive, defensive, and collaborative approaches.
Detect Critical Gaps in Your Infrastructure
Assess the infrastructure to determine the blast radius and evaluate the potential impact of security misconfigurations. By prioritizing mitigation strategies based on the highest risks, you can proactively strengthen your defenses and focus on the most critical areas for improvement.
Identify Rogue Access in Your Cloud Environment
Detect accounts, users, and groups with unnecessary or elevated privileges to sensitive information. By analyzing cloud permissions, you can minimize the attack surface and enforce least privilege principles.
Elevate Your Security with Collaborative Purple Teaming
Leverage the power of Purple Teaming to enhance your defenses:
- Collaborative Assessments: Work alongside experts to simulate real-world attack scenarios based on findings from offensive security assessments.
- Enhanced Visibility: Integrate missing log sources into Coralogix for comprehensive monitoring and detection.
- Custom Recommendations: Build tailored strategies to detect and respond to threats, enhancing your overall alerting capabilities.
Full Flexibility for Custom Attack Scenarios
Test your cloud infrastructure under tailored conditions, focusing on your specific threat landscape. Whether targeting insider threats, unauthorized access, or lateral movement, the flexibility ensures the assessment aligns with your business objectives.
Simulating Real-World Attack Scenarios with Operational Safeguards
Demonstrate how skilled adversaries could exploit vulnerabilities in your cloud environment, all while maintaining operational integrity.
Our safeguards include:
- No Service Disruption: Ensuring uninterrupted operations throughout the assessment.
- Data Integrity: No deletion or modification of existing data.
- Configuration Preservation: Retaining current system configurations during testing.
These safeguards allow for a realistic yet safe assessment of your defenses, preparing your team to detect and respond to advanced threats without risk to your business continuity.
Why Snowbit for Offensive Security?
At Snowbit, we go beyond traditional security assessments. Our Offensive Security Assessment helps customers identify custom attack paths unique to their infrastructure—paths that could be exploited by adversaries. By leveraging cutting-edge techniques, our managed security services team simulates real-world attack scenarios to uncover vulnerabilities and hidden risks.
Turn Insights Into Actionable Alerts
Following each assessment, our research team develops tailored alerts for every identified attack path, ensuring continuous monitoring and proactive defense. These alerts are integrated directly into the Coralogix SIEM platform, giving customers unparalleled visibility and actionable intelligence to safeguard their cloud environments.
Case Studies:
GCP Offensive Security Assessment
AWS Offensive Security Assessment
Learn more about Coralogix security offerings today