Validation & Monitoring

Purpose

This document defines how security controls in the Scheol Security Lab are:

  • monitored during operation
  • validated through verification activities
  • reviewed over time

The objective is to ensure that controls are not only implemented, but also:

  • remain effective
  • produce observable signals
  • can be actively verified

Core Principle

A control that is not monitored or validated cannot be trusted over time.


Validation Philosophy

Validation follows a simple rule:

A control is not considered effective unless it is observed, tested, or verified.

This implies that:

  • implementation alone is not sufficient
  • monitoring must support validation
  • detection gaps must be explicitly identified

Monitoring vs Validation

Monitoring

Monitoring is continuous or periodic observation of system behavior.

It answers:

  • Is the control active?
  • Is it producing expected signals?

Examples:

  • logs generated by systems
  • alerts from CrowdSec
  • service availability checks
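
A service availability check of the kind listed above can be sketched as a plain TCP probe. The host and port below are placeholders, not real lab endpoints:

```python
import socket

def service_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: probe a (placeholder) SSH endpoint
# service_reachable("vps.example.internal", 22)
```

A scheduler (cron, systemd timer) running this kind of probe and logging the result is enough to turn availability into an observable signal.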

Validation

Validation is intentional verification of a control.

It answers:

  • Does the control actually work under stress or misuse?
  • Can it detect or prevent expected threats?

Examples:

  • triggering failed logins
  • simulating malicious requests
  • testing backup restoration

Monitoring Strategy

Monitoring is structured around key domains aligned with system exposure and risk.


1. Access & Identity Monitoring

Scope:

  • SSH access
  • administrative entry points
  • authentication systems

Signals:

  • login attempts (success / failure)
  • unusual access patterns
  • brute-force attempts

Systems:

  • VPS logs
  • CrowdSec
  • future Wazuh integration
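
The brute-force signal above can be extracted from sshd logs with a small parser. This is a minimal sketch with sample log lines and a toy threshold, not CrowdSec's actual detection logic:

```python
import re
from collections import Counter

# Matches standard OpenSSH "Failed password" log lines
FAILED_LOGIN = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

def failed_logins_by_ip(lines):
    """Count failed SSH login attempts per source IP from sshd log lines."""
    counts = Counter()
    for line in lines:
        m = FAILED_LOGIN.search(line)
        if m:
            counts[m.group(2)] += 1
    return counts

sample = [
    "sshd[101]: Failed password for root from 203.0.113.7 port 4444 ssh2",
    "sshd[102]: Failed password for invalid user admin from 203.0.113.7 port 4445 ssh2",
    "sshd[103]: Accepted publickey for ops from 198.51.100.3 port 5022 ssh2",
]
counts = failed_logins_by_ip(sample)
flagged = [ip for ip, n in counts.items() if n >= 2]  # toy threshold
```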

2. Network & Exposure Monitoring

Scope:

  • exposed services
  • inbound traffic
  • abnormal connection patterns

Signals:

  • scanning activity
  • repeated access to sensitive endpoints
  • unexpected connections
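
Scanning activity can be approximated with a crude heuristic: a single source touching many distinct ports. The connection tuples below are illustrative:

```python
from collections import defaultdict

def likely_scanners(connections, port_threshold=10):
    """Flag source IPs that touched many distinct ports (crude scan heuristic)."""
    ports_by_src = defaultdict(set)
    for src_ip, dst_port in connections:
        ports_by_src[src_ip].add(dst_port)
    return {ip for ip, ports in ports_by_src.items() if len(ports) >= port_threshold}

# One source sweeping ports 1-14, one legitimate HTTPS client
events = [("203.0.113.9", p) for p in range(1, 15)] + [("198.51.100.3", 443)]
scanners = likely_scanners(events)
```

Real tooling adds time windows and whitelisting, but the underlying signal is this same "distinct ports per source" aggregation.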

3. Application Monitoring

Scope:

  • application-level behavior
  • authentication flows
  • error conditions

Signals:

  • repeated failed logins
  • abnormal usage patterns
  • application errors

4. Infrastructure & Health Monitoring

Scope:

  • system availability
  • service health
  • resource utilization

Signals:

  • service downtime
  • abnormal CPU / memory usage
  • failed scheduled tasks

Systems:

  • VPS provider metrics
  • Proxmox monitoring
  • Prometheus / Grafana (planned)
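
The resource-utilization signals reduce to comparing observed metrics against limits. A minimal evaluation sketch, with illustrative metric names and thresholds:

```python
def health_alerts(metrics, limits):
    """Compare observed metrics against limits and return the breached ones."""
    return sorted(name for name, value in metrics.items()
                  if name in limits and value > limits[name])

# Illustrative thresholds; real values belong in monitoring configuration
limits = {"cpu_percent": 90.0, "mem_percent": 85.0, "disk_percent": 90.0}
observed = {"cpu_percent": 97.5, "mem_percent": 62.0, "disk_percent": 91.2}
breached = health_alerts(observed, limits)
```

In the target state, Prometheus alerting rules express exactly this comparison declaratively; the sketch just makes the logic explicit.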

5. Security Monitoring (SOC-Oriented)

Scope:

  • log aggregation
  • event correlation
  • detection logic

Signals:

  • blocked IPs
  • alert generation
  • correlated suspicious activity

Systems:

  • CrowdSec (current)
  • Wazuh (planned centralization)
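
Event correlation, the step that is still limited today, can be sketched as: flag an IP that appears in several distinct log sources within a short time window. Timestamps and source names below are invented for illustration:

```python
from datetime import datetime, timedelta

def correlate_by_ip(events, window=timedelta(minutes=5), min_sources=2):
    """Flag IPs seen in >= min_sources distinct log sources within the window.

    events: iterable of (timestamp, source_name, ip) tuples.
    """
    by_ip = {}
    for ts, source, ip in events:
        by_ip.setdefault(ip, []).append((ts, source))
    flagged = set()
    for ip, seen in by_ip.items():
        seen.sort()
        for i, (ts, _) in enumerate(seen):
            sources = {s for t, s in seen[i:] if t - ts <= window}
            if len(sources) >= min_sources:
                flagged.add(ip)
                break
    return flagged

t0 = datetime(2025, 1, 1, 12, 0)
events = [
    (t0, "sshd", "203.0.113.7"),
    (t0 + timedelta(minutes=2), "crowdsec", "203.0.113.7"),
    (t0, "nginx", "198.51.100.3"),
]
suspicious = correlate_by_ip(events)
```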

Detection vs Visibility

Not all monitoring leads to detection.

The current approach distinguishes between:

  • Visibility → logs are generated but not actively analyzed
  • Detection → events trigger alerts or investigation

Improving detection coverage is a key objective of the lab.


Data Flow & Monitoring Architecture (High-Level)

Monitoring follows a progressive centralization model:

Current State

  • logs are partially local
  • some systems rely on manual inspection
  • limited correlation between sources

Target State

  • all critical systems forward logs
  • centralized analysis platform (Wazuh)
  • improved cross-system visibility (Heaven ↔ Hell)

Validation Strategy

Validation is performed through targeted verification scenarios.


1. Configuration Validation

Verify that controls are correctly configured.

Examples:

  • SSH root login disabled
  • firewall rules correctly applied
  • services not unnecessarily exposed
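
The first check above can be automated by parsing the effective sshd configuration (e.g. the output of `sshd -T`, which prints lowercase `key value` lines). The sample text here is illustrative, not captured from a lab host:

```python
def sshd_effective_settings(sshd_t_output: str) -> dict:
    """Parse `sshd -T`-style output (lowercase 'key value' lines) into a dict."""
    settings = {}
    for line in sshd_t_output.splitlines():
        parts = line.strip().split(None, 1)
        if len(parts) == 2:
            settings[parts[0]] = parts[1]
    return settings

# Illustrative sample; in practice, feed in the real `sshd -T` output
sample = """\
permitrootlogin no
passwordauthentication no
port 22
"""
cfg = sshd_effective_settings(sample)
root_login_disabled = cfg.get("permitrootlogin") == "no"
```

Parsing the effective configuration rather than the config file catches cases where a directive is overridden or left at a default.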

2. Behavioral Validation

Verify system response to expected events.

Examples:

  • failed login attempts generate logs
  • CrowdSec bans IP after repeated attempts
  • logs are properly written and accessible
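
The "CrowdSec bans IP after repeated attempts" check can be rehearsed against a toy model of the ban decision before running it live. CrowdSec's real scenarios are richer (leaky buckets, time windows); this only illustrates the expected behavior:

```python
def bans_after_attempts(attempts, threshold=5):
    """Toy ban model: an IP is banned once its failed-attempt count
    reaches the threshold (not CrowdSec's actual scenario logic)."""
    counts = {}
    banned = []
    for ip in attempts:
        counts[ip] = counts.get(ip, 0) + 1
        if counts[ip] == threshold:
            banned.append(ip)
    return banned

# Simulated attempt stream: one noisy IP, one below threshold
simulated = ["203.0.113.7"] * 6 + ["198.51.100.3"] * 2
banned = bans_after_attempts(simulated)
```

The live equivalent is to generate the failed logins deliberately and then confirm the decision appears (e.g. via `cscli decisions list`).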

3. Security Validation

Simulate threat scenarios.

Examples:

  • brute-force attempt simulation
  • HTTP probing on exposed services
  • unauthorized access attempts

4. Recovery Validation

Verify resilience mechanisms.

Examples:

  • backup restoration test
  • service recovery after failure
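
A backup restoration test follows a fixed pattern: back up, destroy, restore, verify integrity. A self-contained drill of that pattern using local temp files (real backups would involve the actual backup tooling and storage):

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def restore_drill() -> bool:
    """Create a file, back it up, delete the original, restore it,
    and verify integrity by SHA-256 checksum."""
    with tempfile.TemporaryDirectory() as tmp:
        original = Path(tmp) / "data.txt"
        backup = Path(tmp) / "data.txt.bak"
        original.write_text("critical lab data\n")
        before = hashlib.sha256(original.read_bytes()).hexdigest()
        shutil.copy2(original, backup)   # "backup"
        original.unlink()                # simulated loss
        shutil.copy2(backup, original)   # "restore"
        after = hashlib.sha256(original.read_bytes()).hexdigest()
        return before == after
```

The checksum comparison is the important part: a restore that completes but produces different bytes is a failed validation, not a passed one.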

Validation Frequency

Validation is currently:

  • partially manual
  • event-driven (after changes)
  • periodic for critical controls

Automation is a future objective.


Relationship with Evidence Model

Validation produces Validation Evidence.

Each validation activity should:

  • generate evidence (E-XXX)
  • be linked to:
    • control (C-XXX)
    • risk (R-XXX)
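
The evidence linkage above can be captured as a small record type. The field names and the sample IDs (E-012, C-004, R-007) are hypothetical placeholders following the E-/C-/R-XXX formats:

```python
from dataclasses import dataclass

@dataclass
class ValidationEvidence:
    """A validation evidence record linked to a control and a risk,
    following the lab's E-/C-/R-XXX ID conventions."""
    evidence_id: str   # E-XXX
    control_id: str    # C-XXX
    risk_id: str       # R-XXX
    activity: str      # what was done
    result: str        # e.g. "pass" / "fail"

# Hypothetical record for a brute-force simulation
ev = ValidationEvidence("E-012", "C-004", "R-007",
                        "brute-force simulation", "pass")
```

Keeping these records structured (rather than free-form notes) is what makes the control → risk → evidence chain queryable later.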

Known Detection Gaps

At the current stage:

  • incomplete log centralization
  • limited correlation between data sources
  • absence of mature detection rules
  • inconsistent monitoring coverage across systems
  • limited automated validation scenarios

These gaps are tracked and progressively reduced.


Current Maturity

At the current stage, validation and monitoring are considered early operational.

Established

  • clear distinction between monitoring and validation
  • defined monitoring domains aligned with risks
  • initial validation practices (manual testing)
  • integration with evidence model

In Progress

  • centralization of logs (Wazuh deployment)
  • improvement of detection capabilities
  • alignment between validation scenarios and risks
  • increased monitoring coverage across systems

Planned / Next Phase

  • automated validation scenarios
  • centralized monitoring and correlation
  • alert-driven validation workflows
  • stronger integration with audit and evidence processes

This section is expected to evolve significantly as detection and validation capabilities mature.