
Evidence Model

Purpose

This document defines how security-related evidence is structured, collected, and linked to risks and controls within the Scheol Security Lab.

The objective is to ensure that:

  • controls are not only defined, but demonstrably implemented
  • security posture can be reviewed and validated
  • documentation supports audit-oriented reasoning

Core Principle

A control is only meaningful if its implementation can be verified through evidence.


Evidence Categories

Evidence is grouped into four categories:


1. Configuration Evidence

Proof that a control is implemented at the configuration level.

Examples:

  • firewall rules (OPNsense, nftables)
  • SSH configuration (no root login, key-based auth)
  • reverse proxy routing rules
  • VLAN segmentation

Typical Sources:

  • config files
  • screenshots
  • command outputs

2. Operational Evidence

Proof that a control is active and functioning during normal operations.

Examples:

  • running services (Wazuh agent, CrowdSec)
  • active firewall filtering
  • successful CI/CD deployments
  • backup execution logs

3. Monitoring & Detection Evidence

Proof that detection mechanisms are in place and producing signals.

Examples:

  • security alerts (CrowdSec, Wazuh)
  • log entries (auth logs, HTTP logs)
  • anomaly detection events

4. Validation Evidence

Proof that controls have been actively tested or verified.

Examples:

  • failed login attempts triggering alerts
  • blocked IP after brute-force attempt
  • simulated attack detection
  • backup restore test

Evidence Structure

Each piece of evidence should be documented using the following structure:


Evidence Entry Template

  • Evidence ID: Unique identifier (E-00X)
  • Control ID: Related control (C-00X)
  • Risk ID: Related risk (R-00X)
  • System: System where the evidence is observed
  • Evidence Type: Configuration / Operational / Monitoring / Validation
  • Description: What is being demonstrated
  • Source: File, log, command output, screenshot
  • Collection Method: How the evidence is obtained
  • Frequency: One-time / periodic / continuous
  • Last Verified: Date of last verification
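One way to represent an entry of this template in code. This is a minimal sketch, not part of the lab's current tooling; the field names simply mirror the template above, and the sample values are taken from E-001 below.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class EvidenceEntry:
    """One row of the evidence register; fields mirror the entry template."""
    evidence_id: str        # e.g. "E-001"
    control_id: str         # e.g. "C-001"
    risk_id: str            # e.g. "R-003"
    system: str
    evidence_type: str      # Configuration / Operational / Monitoring / Validation
    description: str
    source: str
    collection_method: str
    frequency: str          # One-time / periodic / continuous
    last_verified: Optional[date] = None

entry = EvidenceEntry(
    evidence_id="E-001",
    control_id="C-001",
    risk_id="R-003",
    system="VPS-01",
    evidence_type="Configuration",
    description="SSH restricted to key-based auth, root login disabled",
    source="/etc/ssh/sshd_config",
    collection_method="Manual config review",
    frequency="Periodic",
)
```

Keeping entries in a structured form like this makes later steps (staleness checks, control-to-evidence coverage reports) straightforward.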

Example Entries


E-001 - SSH Hardening

  • Control: C-001 - Identity & Access Control
  • Risk: R-003 - Compromise of administrative access path
  • System: VPS-01
  • Type: Configuration

Description:
SSH access restricted to key-based authentication, root login disabled.

Source:
/etc/ssh/sshd_config

Collection Method:
Manual verification via SSH and config review

Frequency:
Periodic
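The manual config review described above could be partially automated. A hedged sketch: the two directive names are standard OpenSSH options, but the parsing logic and sample config here are illustrative, not the lab's actual check.

```python
def check_ssh_hardening(config_text: str) -> dict:
    """Check an sshd_config body for the two hardening directives."""
    settings = {}
    for line in config_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        parts = line.split(None, 1)
        if len(parts) == 2:
            settings[parts[0].lower()] = parts[1].strip().lower()
    return {
        "root_login_disabled": settings.get("permitrootlogin") == "no",
        "password_auth_disabled": settings.get("passwordauthentication") == "no",
    }

# illustrative config body, not a real host's file
sample = """
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
"""
print(check_ssh_hardening(sample))
```

A script like this turns a one-time manual review into repeatable, periodic evidence collection.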


E-002 - CrowdSec Blocking

  • Control: C-003 - Logging & Detection
  • Risk: R-008 - Detection blind spots
  • System: VPS-01
  • Type: Monitoring

Description:
CrowdSec detects and blocks a malicious IP after repeated requests.

Source:
CrowdSec logs / decisions list

Collection Method:
Log review

Frequency:
Continuous
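Reviewing the decisions list could be sketched in code as follows. This assumes a JSON dump of CrowdSec decisions; the field names ("value", "type") are assumptions about that JSON shape, and the sample dump is invented for illustration.

```python
import json

def active_bans(decisions_json: str) -> list:
    """Extract banned IPs from a JSON dump of CrowdSec decisions.
    The "value"/"type" keys are assumed, not verified against a live instance."""
    decisions = json.loads(decisions_json)
    return [d["value"] for d in decisions if d.get("type") == "ban"]

# illustrative dump, not real CrowdSec output
sample = '[{"value": "203.0.113.7", "type": "ban", "scenario": "crowdsecurity/ssh-bf"}]'
print(active_bans(sample))
```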


E-003 - Backup Execution

  • Control: C-005 - Backup & Recovery
  • Risk: R-004 - Backup failure
  • System: Proxmox / VPS
  • Type: Operational

Description:
Regular snapshot or backup jobs execute successfully.

Source:
Backup logs / provider interface
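A backup-log review like this can be reduced to a freshness check. A sketch under stated assumptions: the log line format (`YYYY-MM-DD HH:MM OK`) is invented for illustration, not the actual Proxmox or provider format.

```python
from datetime import datetime, timedelta

def last_backup_fresh(log_lines: list, max_age: timedelta, now: datetime) -> bool:
    """Scan backup log lines (assumed format 'YYYY-MM-DD HH:MM STATUS')
    and report whether the newest successful run is within max_age."""
    successes = []
    for line in log_lines:
        stamp, status = line.rsplit(" ", 1)
        if status == "OK":
            successes.append(datetime.strptime(stamp, "%Y-%m-%d %H:%M"))
    return bool(successes) and now - max(successes) <= max_age

# illustrative log lines
log = ["2025-01-10 02:00 OK", "2025-01-11 02:00 FAIL", "2025-01-12 02:00 OK"]
print(last_backup_fresh(log, timedelta(days=2), datetime(2025, 1, 13, 12, 0)))
```

The check deliberately ignores failed runs: only the most recent success counts toward the freshness window.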


E-004 - Failed Login Detection

  • Control: C-003 - Logging & Detection
  • Risk: R-003 - Admin access compromise
  • System: VPS / SSH
  • Type: Validation

Description:
Failed SSH login attempts are logged and visible.
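Verifying this evidence could look like the sketch below: counting failed SSH logins per source IP from auth-log lines. The "Failed password for ... from <ip>" pattern follows the standard sshd log format, but the sample lines are fabricated for illustration.

```python
import re
from collections import Counter

FAILED_RE = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

def failed_login_counts(log_lines: list) -> Counter:
    """Count failed SSH login attempts per source IP in auth-log lines."""
    hits = Counter()
    for line in log_lines:
        m = FAILED_RE.search(line)
        if m:
            hits[m.group(1)] += 1
    return hits

# fabricated sample lines in standard sshd log format
sample = [
    "Jan 12 03:14:07 vps01 sshd[812]: Failed password for root from 203.0.113.7 port 51234 ssh2",
    "Jan 12 03:14:11 vps01 sshd[812]: Failed password for invalid user admin from 203.0.113.7 port 51240 ssh2",
]
print(failed_login_counts(sample))
```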


Evidence Storage Strategy

Evidence is not centralized in a single tool at this stage.

It is distributed across:

  • system configurations
  • logs (local / future SIEM)
  • infrastructure interfaces (VPS provider, Proxmox)
  • documentation snapshots

Known Limitations

At the current stage:

  • evidence collection is partially manual
  • no centralized evidence repository
  • limited automation of validation scenarios
  • incomplete linkage between all controls and evidence

Current Maturity

The evidence model is currently considered early operational.

Established

  • clear evidence categories
  • structured evidence model
  • initial linkage between controls and risks
  • real examples aligned with lab systems

In Progress

  • expansion of evidence coverage across all controls
  • better integration with logging and monitoring systems
  • improvement of traceability between systems and evidence

Planned / Next Phase

  • centralized evidence collection (via SIEM or structured storage)
  • automated validation scenarios
  • tighter linkage with audit and validation processes
  • periodic verification workflows
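A periodic verification workflow could start from something as simple as a staleness check over the evidence register. A sketch under stated assumptions: the dict keys mirror the entry template earlier in this document, and the 90-day default is an arbitrary illustrative threshold.

```python
from datetime import date

def stale_evidence(register: list, today: date, max_age_days: int = 90) -> list:
    """Return IDs of periodic evidence entries whose last verification is
    missing or older than max_age_days (90-day default is an assumption)."""
    stale = []
    for e in register:
        if e["frequency"].lower() != "periodic":
            continue  # continuous/one-time evidence is handled elsewhere
        last = e.get("last_verified")
        if last is None or (today - last).days > max_age_days:
            stale.append(e["evidence_id"])
    return stale

# illustrative register; dates are invented
register = [
    {"evidence_id": "E-001", "frequency": "Periodic", "last_verified": date(2025, 1, 2)},
    {"evidence_id": "E-002", "frequency": "Continuous", "last_verified": None},
    {"evidence_id": "E-003", "frequency": "Periodic", "last_verified": None},
]
print(stale_evidence(register, today=date(2025, 6, 1)))
```

Run on a schedule, a check like this converts the "Last Verified" field from documentation into an actionable re-verification queue.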