CyberLabs 101

Table of Contents

  1. What this lab is and isn’t
  2. Architecture
  3. Hardware
  4. Phase 1 — Parallels application setup
  5. Phase 2 — Gateway VM build
    1. SSH key auth from the Mac
    2. Network plumbing — netplan
    3. IP forwarding
    4. dnsmasq — DHCP and DNS for the lab
    5. nftables — firewall and NAT
    6. Suricata — IDS across all three lab segments
    7. fail2ban — brute-force protection on SSH
  6. Phase 3 — Lab guest VMs
  7. Verification
  8. What this lab supports
  9. Decisions and rationale
  10. What’s next
   ███████╗██╗   ██╗ ██████╗  ██╗██╗      █████╗ ██████╗
   ██╔════╝██║   ██║██╔═████╗███║██║     ██╔══██╗██╔══██╗
   █████╗  ██║   ██║██║██╔██║╚██║██║     ███████║██████╔╝
   ██╔══╝  ╚██╗ ██╔╝████╔╝██║ ██║██║     ██╔══██║██╔══██╗
   ███████╗ ╚████╔╝ ╚██████╔╝ ██║███████╗██║  ██║██████╔╝
   ╚══════╝  ╚═══╝   ╚═════╝  ╚═╝╚══════╝╚═╝  ╚═╝╚═════╝
   ──────────────────────────────────────────────────────
            a segmented offensive lab build
                 by darwin microsystems
   ──────────────────────────────────────────────────────

EV01LAB — Building a Segmented, Observable Offensive Security Lab

Chapter 1 — Darwin Microsystems Field Notes


I built my EV01LAB because I needed a lab that wasn’t a toy. Most home setups I’ve seen were either flat — every VM on the same bridge, no segmentation, no observability — or they were so abstracted behind a vendor appliance that you couldn’t tell what was actually happening at the packet level. Neither is useful for the work I want to do: offensive security research, eventual bug bounty engagements, and forensic-style writeups that demonstrate both attack and response across the full kill chain.

This post documents the build end-to-end. If you have an Apple Silicon Mac with Parallels, an external SSD, and a few hours, you can reproduce this lab from the snippets below. The architecture also translates to other hypervisors (Proxmox, ESXi, VMware Fusion) with minor adjustments — the principles are what matter.

The lab name is EV01LAB — first numbered environment in what I’m calling the EVOL series. Pronounced “evol lab.” Visually parses as “evil lab,” which is a deliberate nod to the offensive focus.


What this lab is and isn’t

    [+] hardened, segmented gateway — all lab traffic transits
    [+] suricata IDS watching every lab segment
    [+] three target environments (kali, windows, macOS)
    [+] reproducible, document-driven, no snapshots
    ─────────────────────────────────────────────────
    [-] not a production environment
    [-] not internet-facing
    [-] not a honeypot — no inbound external traffic expected
    [-] not a general-purpose homelab

The “no snapshots, no backups” decision is deliberate. If integrity is ever in question, the residual silent-compromise risk of patching in place is unacceptable for security infrastructure. Rebuilds are fast — about 90 minutes from prepared ISOs — and the document below is the recovery procedure.


Architecture

 ┌──────────────────────────────────────────────────────────────────────────┐
 │                    Mac host (macOS, Apple Silicon)                       │
 │                                                                          │
 │  ┌────────────────────────────────────────────────────────────────────┐  │
 │  │                  Parallels Desktop hypervisor                      │  │
 │  │                                                                    │  │
 │  │  ┌──── Shared (NAT, 10.211.55.0/24) ──────────────┐                │  │
 │  │  │    Mac (10.211.55.x)        gateway.eth0       │                │  │
 │  │  │         │                   10.211.55.13       │                │  │
 │  │  └─────────┼─────────────────────────┬────────────┘                │  │
 │  │            │ SSH (mgmt only)         │                             │  │
 │  │            ▼                         ▼                             │  │
 │  │  ┌──────────────────────────────────────────────────┐              │  │
 │  │  │     gateway VM (Ubuntu Server 26.04 ARM64)       │              │  │
 │  │  │                                                  │              │  │
 │  │  │   nftables  │ dnsmasq  │ Suricata │ fail2ban     │              │  │
 │  │  │                                                  │              │  │
 │  │  │   enp0s5 (WAN, Shared)   = 10.211.55.13          │              │  │
 │  │  │   enp0s6 (kali)          = 10.37.129.1           │              │  │
 │  │  │   enp0s7 (windows)       = 10.37.132.1           │              │  │
 │  │  │   enp0s8 (macintosh)     = 10.37.133.1           │              │  │
 │  │  └─────┬───────────────┬──────────────┬─────────────┘              │  │
 │  │        │               │              │                            │  │
 │  │  ┌─────┴────┐    ┌─────┴────┐   ┌─────┴───────┐                    │  │
 │  │  │   kali   │    │ windows  │   │ macintosh   │                    │  │
 │  │  │ host-only│    │ host-only│   │ host-only   │                    │  │
 │  │  └─────┬────┘    └─────┬────┘   └─────┬───────┘                    │  │
 │  │        │               │              │                            │  │
 │  │     kali VM        windows VM     macintosh VM                     │  │
 │  │  10.37.129.x      10.37.132.x    10.37.133.x                       │  │
 │  └────────────────────────────────────────────────────────────────────┘  │
 └──────────────────────────────────────────────────────────────────────────┘

Each lab guest lives on its own host-only network. The Mac is not a member of any lab network — it cannot reach lab guests directly, and lab guests cannot reach the Mac. All inter-guest traffic transits the gateway, where it is firewalled, NAT’d, and observed by the IDS. Management of the gateway is via SSH from the Mac on the WAN-side Shared network only.

The subnets (10.37.129.0/24, 10.37.132.0/24, 10.37.133.0/24) are RFC 1918 ranges that Parallels assigns by default — there’s no reason to renumber them, and they don’t conflict with anything on a typical home network.


Hardware

Component         Spec
Host              MacBook, M4 Apple Silicon, 16 GB RAM
External storage  4 TB USB-C SSD, APFS, encrypted
Hypervisor        Parallels Desktop (Pro or Business — required for custom host-only networks)

VMs live entirely on the external drive. The Mac’s internal SSD stays clean. This frees host capacity, makes the lab portable, and gives you failure isolation between lab and host.

The 16 GB RAM ceiling is real and worth respecting. The gateway plus Kali plus one target is the comfortable concurrent maximum. All four guests at once will swap. Plan accordingly.

                              ╔═════════════╗
                              ║   PHASE 1   ║
                              ╚═════════════╝
                       parallels application setup

Phase 1 — Parallels application setup

These are settings on Parallels itself, not on any individual VM.

Default VM location:

Parallels Desktop → Settings → General → Virtual machines folder
/Volumes/ev01lab/machines

Three custom host-only networks:

Name       IPv4 subnet      DHCP  Connect Mac
kali       10.37.129.0/24   off   off
windows    10.37.132.0/24   off   off
macintosh  10.37.133.0/24   off   off

Two settings matter most:

  • DHCP off. The gateway VM’s dnsmasq will be the DHCP authority. Two DHCP servers on one segment cause lease races.
  • Connect Mac to this network: off. This removes the Mac’s interface on each lab subnet, enforcing host-VM isolation at the Parallels layer in addition to nftables.
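
If the Parallels CLI tools are on your PATH (they ship with the Pro and Business editions this build already requires), the host-only networks can be sanity-checked from the Mac terminal instead of clicking through the UI. A sketch; the network names assume the table above:

```shell
# Sanity-check the custom host-only networks from the Mac.
# Assumes the Parallels CLI (prlsrvctl) is installed and on the PATH.
prlsrvctl net-list            # expect kali, windows, macintosh listed as host-only
prlsrvctl net-info kali       # confirm DHCP is disabled on the segment
```
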
                              ╔═════════════╗
                              ║   PHASE 2   ║
                              ╚═════════════╝
                            gateway VM build

Phase 2 — Gateway VM build

Create the VM in Parallels with four NICs, in this order: Shared, kali, windows, macintosh. RAM 2 GB (3 GB if Suricata gets squeezed), 2 vCPU, 20 GB disk. Boot the Ubuntu Server 26.04 ARM64 ISO.

In the VM’s Configure pane: enable Isolate Linux from Mac (kills shared clipboard, drag-drop, shared folders, SmartMount), and uncheck Sync time from Mac under Options → Time. The guest will sync via chrony, keeping log timestamps decoupled from host clock for forensic cleanliness.

Critical install-time choice: Ubuntu Server, not Desktop. Desktop installs GNOME and adds substantial attack surface that has no business on a chokepoint VM. Install the OpenSSH server during setup, and decline all featured server snaps.

After first boot, log in at the console:

ip -br addr
sudo systemctl enable --now ssh

Expected: enp0s5 UP with a 10.211.55.x lease from Parallels. Other interfaces UP at link layer with only IPv6 link-local addresses.

SSH key auth from the Mac

From your Mac terminal (not the VM):

ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N "" -C "darwin-microsystems-gateway"
ssh-copy-id gateway@10.211.55.13
ssh gateway@10.211.55.13 "echo connected"

Optionally add to ~/.ssh/config:

Host gateway
    HostName 10.211.55.13
    User gateway
    IdentityFile ~/.ssh/id_ed25519
    IdentitiesOnly yes

Now ssh gateway from the Mac logs you in without a password. Do all subsequent gateway work over SSH — a much better paste/edit experience than the Parallels console.
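
With key auth proven, you can optionally shut off password login on the gateway entirely. A minimal sketch, assuming Ubuntu's stock sshd_config.d drop-in layout (the drop-in filename is arbitrary):

```shell
# On the gateway: disable password and root login once key auth works.
sudo tee /etc/ssh/sshd_config.d/99-lab-hardening.conf > /dev/null <<'EOF'
PasswordAuthentication no
KbdInteractiveAuthentication no
PermitRootLogin no
EOF
sudo sshd -t                      # validate config before restarting
sudo systemctl restart ssh
```

Keep an existing SSH session open while you test a fresh one, so a config mistake can't lock you out.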

Network plumbing — netplan

Replace /etc/netplan/00-installer-config.yaml:

network:
  version: 2
  renderer: networkd
  ethernets:
    enp0s5:
      dhcp4: true
      dhcp6: true
    enp0s6:
      dhcp4: false
      dhcp6: false
      accept-ra: false
      addresses:
        - 10.37.129.1/24
        - "fdb2:2c26:f4e4:1::1/64"
      optional: true
    enp0s7:
      dhcp4: false
      dhcp6: false
      accept-ra: false
      addresses:
        - 10.37.132.1/24
        - "fdb2:2c26:f4e4:2::1/64"
      optional: true
    enp0s8:
      dhcp4: false
      dhcp6: false
      accept-ra: false
      addresses:
        - 10.37.133.1/24
        - "fdb2:2c26:f4e4:3::1/64"
      optional: true

optional: true keeps systemd-networkd from blocking on these at boot. accept-ra: false because the gateway is the router — it doesn’t accept RAs from anywhere.

sudo netplan generate
sudo netplan try
ip -br addr

All four interfaces should show their assigned IPv4 and IPv6 addresses.

IP forwarding

sudo tee /etc/sysctl.d/99-gateway-forwarding.conf > /dev/null <<EOF
net.ipv4.ip_forward=1
net.ipv6.conf.all.forwarding=1
EOF

sudo sysctl --system
sysctl net.ipv4.ip_forward net.ipv6.conf.all.forwarding

Both should return = 1.

dnsmasq — DHCP and DNS for the lab

sudo apt update && sudo apt install -y dnsmasq

Drop config at /etc/dnsmasq.d/lab.conf:

bind-interfaces
except-interface=lo
except-interface=enp0s5

interface=enp0s6
interface=enp0s7
interface=enp0s8

no-resolv
server=1.1.1.1
server=9.9.9.9

dhcp-range=10.37.129.50,10.37.129.99,255.255.255.0,12h
dhcp-range=10.37.132.50,10.37.132.99,255.255.255.0,12h
dhcp-range=10.37.133.50,10.37.133.99,255.255.255.0,12h

log-queries
log-dhcp
log-facility=/var/log/dnsmasq.log

Validate and restart:

sudo dnsmasq --test
sudo systemctl restart dnsmasq
sudo ss -tlnup | grep dnsmasq

The except-interface=enp0s5 is what keeps DNS off the WAN. The Mac side stays clean.
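
A quick functional check of that split from the gateway itself, assuming dig is available (it comes in the dnsutils package):

```shell
sudo apt install -y dnsutils
dig +short @10.37.129.1 example.com              # lab-side address: should resolve
dig +time=2 +tries=1 @10.211.55.13 example.com   # WAN-side address: should time out
```

The second query timing out is the bind-interfaces / except-interface pair doing its job.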

nftables — firewall and NAT

This is where the segmentation actually gets enforced. There are two paths a packet from Kali can take, depending on where it’s going:

   ALLOWED: kali → internet            BLOCKED: kali → mac host
   ──────────────────────────          ──────────────────────────

   kali (10.37.129.62)                 kali (10.37.129.62)
        │                                   │
        │  [enp0s6]                         │  [enp0s6]
        ▼                                   ▼
   gateway forward chain                gateway forward chain
   match: lab → WAN                    match: daddr 10.211.55.0/24
   action: ACCEPT                       rule fires FIRST
        │                                   │
        ▼                                   ▼
   NAT postrouting                      DROP + log
   masquerade out enp0s5                [lab→host blocked]
   suricata logs the flow               (no ICMP unreachable)
        │                                   │
        ▼                                   ▼
   internet (via mac → router)           ✗  packet discarded
        │
        ▼
   ┌─────────────────┐                 ┌─────────────────┐
   │ packet delivered│                 │ packet dropped  │
   └─────────────────┘                 └─────────────────┘

The order matters. Replace /etc/nftables.conf:

#!/usr/sbin/nft -f
flush ruleset

table inet filter {
    chain input {
        type filter hook input priority filter; policy drop;

        iif lo accept
        ct state established,related accept

        meta l4proto icmp limit rate 10/second accept
        meta l4proto icmpv6 limit rate 10/second accept

        iifname { "enp0s6", "enp0s7", "enp0s8" } udp dport 67 accept
        iifname { "enp0s6", "enp0s7", "enp0s8" } udp dport 53 accept
        iifname { "enp0s6", "enp0s7", "enp0s8" } tcp dport 53 accept

        iifname "enp0s5" tcp dport 22 accept

        log prefix "[nft input drop] " counter drop
    }

    chain forward {
        type filter hook forward priority filter; policy drop;
        ct state established,related accept

        # Block lab → Mac subnet — MUST come before lab → enp0s5 accept
        iifname { "enp0s6", "enp0s7", "enp0s8" } ip daddr 10.211.55.0/24 \
            log prefix "[lab->host blocked] " counter drop

        # Lab egress to internet
        iifname { "enp0s6", "enp0s7", "enp0s8" } oifname "enp0s5" counter accept

        # Kali → targets (engagement traffic, logged)
        iifname "enp0s6" oifname "enp0s7" log prefix "[eng] " counter accept
        iifname "enp0s6" oifname "enp0s8" log prefix "[eng] " counter accept

        log prefix "[nft fwd drop] " counter drop
    }

    chain output {
        type filter hook output priority filter; policy accept;
    }
}

table inet nat {
    chain postrouting {
        type nat hook postrouting priority srcnat;
        oifname "enp0s5" masquerade
    }
}

Rule ordering matters here. The lab→host block rule comes before the lab→WAN accept. If you reverse them, Kali can reach the Mac via routing. nftables matches the first rule that fits, so order is policy.

sudo nft -c -f /etc/nftables.conf      # validate syntax
sudo nft -f /etc/nftables.conf         # apply
sudo systemctl enable --now nftables

Existing SSH sessions survive the reload because of ct state established,related accept.
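
The counter keyword on each rule pays off at verification time: nft reports per-rule packet and byte counts, so you can watch the block rule fire without reaching for tcpdump.

```shell
# On the gateway, after generating some traffic from Kali:
sudo nft list chain inet filter forward
# Each rule with "counter" prints "counter packets N bytes M".
# Ping 10.211.55.1 from Kali and the [lab->host blocked] drop
# rule's packet count should climb.
```
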

Suricata — IDS across all three lab segments

              ┌─────────────────────────────┐
              │    ▼       ▼       ▼        │
              │    │       │       │        │   suricata watches
              │ enp0s6   enp0s7  enp0s8     │   every lab segment
              │  kali   windows  macintosh  │
              └──────────────┬──────────────┘
                             │
                             ▼
                    /var/log/suricata/
                    ├─ eve.json   (structured events)
                    ├─ fast.log   (alerts only)
                    └─ stats.log

Install:

sudo apt install -y suricata

Out of the box, Suricata’s config has one interface entry, usually eth0. We need it to monitor all three lab segments. Edit /etc/suricata/suricata.yaml and find the af-packet: block. The default has one - interface: eth0 entry. Replace it with three entries — one per lab interface, each with a unique cluster-id:

af-packet:
  - interface: enp0s6
    cluster-id: 99
    cluster-type: cluster_flow
    defrag: yes
    use-mmap: yes
    tpacket-v3: yes

  - interface: enp0s7
    cluster-id: 97
    cluster-type: cluster_flow
    defrag: yes
    use-mmap: yes
    tpacket-v3: yes

  - interface: enp0s8
    cluster-id: 98
    cluster-type: cluster_flow
    defrag: yes
    use-mmap: yes
    tpacket-v3: yes

  - interface: default
    # ... default block from upstream config, leave intact

cluster-id must be unique per interface. The values are arbitrary as long as they don’t collide.

Pull the Emerging Threats Open ruleset and validate:

sudo suricata-update
sudo suricata -T -c /etc/suricata/suricata.yaml -v

Look for Configuration provided was successfully loaded. Exiting.

Start it:

sudo systemctl enable --now suricata
sudo grep "creating 2 threads" /var/log/suricata/suricata.log

You should see three lines, one for each lab interface:

Info: runmodes: enp0s6: creating 2 threads
Info: runmodes: enp0s7: creating 2 threads
Info: runmodes: enp0s8: creating 2 threads

Memory note: Suricata with the full ET Open ruleset uses ~1.1 GB RAM. On a 2 GB gateway VM that’s half the memory. Bump to 3 GB if it ever swaps.

fail2ban — brute-force protection on SSH

sudo apt install -y fail2ban
sudo systemctl status fail2ban
sudo fail2ban-client status sshd

The default jail bans an IP for 10 minutes after 5 failed SSH attempts in a 10-minute window, watching auth events via the systemd journal. That's sufficient for a lab gateway that only runs during active sessions.
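
If you want to tighten the jail beyond the defaults, overrides go in /etc/fail2ban/jail.local rather than edits to jail.conf. A sketch with stricter, purely illustrative values:

```shell
sudo tee /etc/fail2ban/jail.local > /dev/null <<'EOF'
[sshd]
enabled  = true
backend  = systemd
maxretry = 3
findtime = 10m
bantime  = 1h
EOF
sudo systemctl restart fail2ban
sudo fail2ban-client status sshd
```
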

Gateway is now hardened.

                              ╔═════════════╗
                              ║   PHASE 3   ║
                              ╚═════════════╝
                            lab guest VMs

Phase 3 — Lab guest VMs

Each guest has exactly one NIC, sourced to its own host-only network — never Shared. If you leave a guest’s NIC on Shared, it bypasses the gateway entirely: gets internet directly via Parallels NAT, no segmentation, no IDS visibility, no firewall enforcement. Always confirm the NIC source after VM creation.

VM         RAM   vCPU  Disk   NIC source  Notes
kali       6 GB  4     40 GB  kali        Attacker workstation
windows    4 GB  2     60 GB  windows     Target — ARM64 build
macintosh  4 GB  2     60 GB  macintosh   Target — install via recovery partition

Enable “Isolate Linux from Mac” (or platform equivalent) on each. macOS guests install via Parallels’ “Install macOS using the recovery partition” path — no ISO needed on Apple Silicon hosts.
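
The NIC-source check can also be scripted from the Mac: prlctl prints a VM's device list, and the network adapter line names its source. The VM name kali here stands in for whatever you called yours:

```shell
# From the Mac: confirm the guest's NIC is on its host-only network.
prlctl list -i kali | grep -i net0
# The net0 line should name the host-only network, never "Shared".
```
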


Verification

   ┌────────────────────────────────────────────────────────────┐
   │                                                            │
   │      [*] testing the chain end-to-end from kali...         │
   │                                                            │
   │      $ ip -br addr           # got dhcp lease?         OK  │
   │      $ ping 1.1.1.1          # NAT egress works?       OK  │
   │      $ apt update            # DNS chain works?        OK  │
   │      $ ping 10.211.55.1      # mac UNREACHABLE?        OK  │
   │      $ curl testmyids.com    # IDS fires alert?        OK  │
   │                                                            │
   │      [+] all green. lab is operational.                    │
   │                                                            │
   └────────────────────────────────────────────────────────────┘

End-to-end tests, run from the Kali guest:

ip -br addr            # expect: 10.37.129.X (50–99)
ping -c 2 1.1.1.1      # NAT egress works
sudo apt update        # DNS via gateway → upstream chain works
ping -c 3 10.211.55.1  # 100% loss; lab→host block enforced
curl http://testmyids.com   # IDS test

On the gateway:

sudo tail -f /var/log/suricata/fast.log

The curl http://testmyids.com from Kali should produce an alert:

[**] [1:2100498:7] GPL ATTACK_RESPONSE id check returned root [**]
[Classification: Potentially Bad Traffic] [Priority: 2]
{TCP} 217.160.0.187:80 -> 10.37.129.62:44268

Run the same test from the Windows and macOS guests and you should see equivalent alerts with source IPs in 10.37.132.x and 10.37.133.x respectively. That’s your proof that Suricata is reading packets across all three segments.
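
fast.log is grep-friendly, but eve.json holds the structured record. It's newline-delimited JSON, so jq can slice it per event type, for instance summarizing every alert seen so far:

```shell
# On the gateway: one line per alert, source -> dest plus signature.
sudo jq -r 'select(.event_type == "alert")
            | "\(.src_ip) -> \(.dest_ip): \(.alert.signature)"' \
    /var/log/suricata/eve.json
```
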

When all five tests pass, the lab is operational.


What this lab supports

Offensive security research. Kali is the attack workstation. Windows and macOS targets exist for documented scenarios — vulnerable services, weak credentials, demonstration payloads. All traffic is logged at the gateway with timestamps, all alerts are in eve.json and fast.log, every engagement is reproducible.

Cradle-to-grave incident writeups. Run an attack from Kali against a target, capture the gateway and IDS logs as third-party verification that events occurred as documented, then pivot to the defender’s perspective and investigate forensically. Single-author writeups that show both sides of the kill chain are rare and disproportionately valuable as portfolio pieces.

MSP demonstration vehicle. The architecture maps cleanly to small-business client environments — segmented networks, observed traffic, default-deny firewalling, IDS coverage. Showing a prospect “here’s the kind of monitoring I can deploy on your network” with a working demo lab carries more weight than slideware.

Forensic methodology practice. The IDS produces structured event logs that feed the same workflows real incident responders use. Practicing analysis against your own staged scenarios on your own logs builds the muscle that transfers to client engagements.

What this lab does not support: anything internet-facing, anything you intend to leave running unattended, or anything where state needs to persist reliably across compromise. The “rebuild on suspicion” model is incompatible with always-on services.


Decisions and rationale

A few choices that look strange without explanation:

External drive instead of internal storage. Frees host capacity, gives the lab portability, isolates failure between lab and host. The USB-C bottleneck is real (random I/O caps around 200-400 MB/s versus 3000+ for internal NVMe) but acceptable for the scale of this lab.

No snapshots, no backups. Build-it-right or rebuild philosophy. If integrity is in question, residual silent-compromise risk on infrastructure that watches my offensive traffic is unacceptable. The blueprint is the recovery procedure.

Ubuntu Server, not Desktop, for the gateway. Smaller attack surface, no GNOME/GDM, no graphical stack races at boot. The gateway is a chokepoint VM; a chokepoint VM has no business running a desktop environment.

Parallels-assigned subnets (10.37.129/132/133). No reason to renumber — these are the ranges Parallels uses by convention, and they don’t conflict with anything on a typical home network.

Lab→Mac block before lab→WAN accept. nftables matches the first rule that fits. Without the explicit drop ordered first, the broad lab→WAN accept rule would let Kali reach the Mac via routing.

Suricata across all three lab segments, not just Kali’s. Original plan was Phase 1 = Kali only (highest-value visibility, attack source) with multi-interface deferred to Phase 3. Pulled forward because writeups that include defender-side perspective need IDS visibility on the target’s traffic too, not just the attacker’s.


What’s next

This is Chapter 1. Subsequent chapters in this series:

  • Phase 2 — Clean egress. Self-hosted WireGuard endpoint on a VPS, gateway routes lab traffic out a dedicated operations IP instead of residential ISP. Relevant when attribution matters for engagements.
  • Documented attack scenarios. Each scenario gets its own writeup — initial access vector, kill chain execution, gateway/IDS logs as evidence, forensic investigation from the defender’s perspective, IOCs, detection logic, remediation.
  • Suricata in IPS mode. Graduating from observe-only to enforce-and-block. After rule tuning is mature.
  • Centralized log aggregation. Wazuh or Loki + Grafana on a separate VM. Real SIEM with correlation, dashboards, alerting on top of the gateway’s eve.json.

The lab as it stands is the foundation. Everything else stacks on top of it.

              ─────────────────────────────────────
                       end of chapter 01
                  darwin microsystems · 2026
              ─────────────────────────────────────
                            ░░▒▒▓▓██

Build complete: 2026-04-29 → 2026-04-30. Maintainer: Zachary D. Rife — Darwin Dynamics LLC (dba Darwin Microsystems).
