Pro vs Joes CTF at BSides NYC 2025
This is my account of the Pro vs Joes run at BSidesNYC 2025 and a chance to reflect on what I learned. It documents pregame, arrival and connectivity, the initial foothold and Phase 1 context, the timeline through Phase 2 and Scorched Earth, the parsing and triage one‑liners I wish I’d rehearsed, a reworked approach to remote execution and multi‑host remediation, a short note on using Ansible, and practical advice for anyone looking to participate in the future.
Overview and summary
- Opening: a focused, tactical recap that provides actionable lessons.
- TL;DR:
- Quick wins: initial foothold via default credentials, rapid mapping and hardening by senior teammates, effective Remote Code Execution (RCE) use (maint.php) during Phase 2/Scorched Earth.
- Major misses: parsing under pressure, unclear ownership (Windows), many Linux accounts with UID 0, and lack of recent practice with parallel remote tools.
- Three immediate takeaways: pack network hardware (switch + long cables), rehearse parsing one‑liners and parallel remediation in a lab, and assign platform owners within the first five minutes.
Arrival and setup
- Pre-event: a few short syncs to align roles, share tooling, and point to resources. Those touchpoints were brief but useful.
- Connectivity reality: event Wi‑Fi was unreliable and congested. Phone hotspots worked for some, but a local dumb switch and very long Ethernet cables would have kept the whole team connected and saved repeated re-checks.
- Initial access: organizers provided VPN access, hypervisor credentials, and asset assignments. We gained an early foothold using default credentials in Phase 1 — valuable for reconnaissance and staging but only a preparatory advantage until Phase 2 opened offensive options.
- Early operational checklist to pack and verify:
- Dumb switch, long Ethernet cables, spare NICs, a small jumpbox image, USB-to-Ethernet adapters.
- Prebuilt credential vault or one-liner rotation script and clearly documented initial accounts.
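As a sketch of what the rotation one-liner can look like (hosts.txt, the blueteam account name, and the vault path are illustrative; it assumes key-based SSH and passwordless sudo on the targets):
# Rotate a known default account's password on every listed host; never hardcode the new password
NEWPASS="$(cat /path/to/vault/blueteam.pass)"
while read -r host; do
  ssh "$host" "echo 'blueteam:$NEWPASS' | sudo chpasswd" && echo "rotated on $host"
done < hosts.txt
The password still transits the remote command line briefly, so treat this as a chaos-phase stopgap and rotate again once hosts are stable.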
Team operations and timeline
- Roles and ownership model:
- Senior vs junior split: seasoned members focused on mapping, triage, and high-risk remediation, while newcomers and less experienced members handled observation, lower-risk remediation, and validation of steps on single hosts.
- Recommended owners: Linux group, Windows group, network group, and a communications lead who publishes status and ownership updates in chat.
- Chronological flow:
- Onboarding and VPN checkout.
- Mapping and initial reconnaissance.
- Hardening (reset default creds, firewall tweaks within rules, SSH key management).
- Phase 2 (Purple / offensive phase) and Scorched Earth (final phase where broader offensive options are allowed).
- Phase 2 and Scorched Earth: Phase 2 authorizes Blue Teams to perform offensive actions against other Blue Teams; Scorched Earth broadens the tolerance for aggressive actions, with some restrictions.
- Points of friction observed: unclear early ownership (particularly Windows), slow delegation of repeatable tasks to juniors, and a communications bottleneck when multiple remediation steps were proposed simultaneously.
Technical play and command breakdowns
Summary of observed persistence and exploitation vectors
- Persistence vectors we saw: scheduled tasks on Windows, shell startup files (.bashrc, .profile, .login), and crontab entries on Linux.
- Exploits used: maint.php RCE leveraged during Phase 2/Scorched Earth to impact other teams and execute purple-team objectives.
- Operational security action the team performed: cleaned unused SSH keys, added our public key to authorized_keys on assets, disabled password authentication and root SSH where policy allowed.
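A hedged sketch of those hardening steps on a single Linux host (the key string is a placeholder; the sshd edits need root, and confirm key-based login works before reloading sshd):
# Inspect and prune existing trusted keys, then add the team's public key
cat ~/.ssh/authorized_keys
echo 'ssh-ed25519 AAAA...placeholder blueteam' >> ~/.ssh/authorized_keys
# Turn off password auth and root password login, then reload the SSH daemon
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin prohibit-password/' /etc/ssh/sshd_config
systemctl reload sshd 2>/dev/null || systemctl reload ssh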
Community tooling
- The team leveraged community scripts and knowledge from the PvJ repo (https://github.com/t3cht0n1c/PvJ-all-the-things) for common tasks and templates; copy, vet, and minimally test community scripts before mass use.
Key commands I struggled to parse — expanded and explained
- Context: I found that parsing nmap and host outputs under time pressure was a bottleneck. These are the expanded one‑liners I wish I’d rehearsed; the basics matter—rehearse them in a lab (e.g., overthewire.org or similar) until they’re reflexive.
Discover hosts and save raw output while extracting IPs
# Discover hosts; append raw output to raw_purple_nmap.txt and extract IPs to purple_IP.txt
nmap -sn 100.80.8.0/24 | tee -a raw_purple_nmap.txt | grep "Nmap scan report" | awk '{print $NF}' | tr -d '()' > purple_IP.txt
- What it does: host discovery across the /24; tee preserves the raw output for later parsing while the discovered IPs are written to purple_IP.txt.
Simpler immediate extraction
# Discover hosts and extract IPs directly to purple_IP.txt
nmap -sn 100.80.8.0/24 | grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}' >> purple_IP.txt
- What it does: a one‑liner that prints responding host IPs directly.
Port scan prioritized services
# Scan top 200 ports for services and versions using the gathered IP list
nmap -sV -T4 --top-ports 200 -iL purple_IP.txt >> purple_PORTS.txt
- What it does: surfaces likely services for follow-up; optimized to be fast while still informative.
Heuristic parsing for open services
# Attempt to extract IP:port pairs for quick follow-up targeting
awk '/Nmap scan report/ {ip = $NF; gsub(/[()]/, "", ip)} ($2 == "open" && $1 ~ /^[0-9]+\//) {split($1, p, "/"); print ip ":" p[1]}' purple_PORTS.txt
- What it does: tracks the current host from each 'Nmap scan report' line and prints IP:port pairs for every open port beneath it. Nmap output varies—test and refine locally.
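If you can pick the output format up front, nmap's grepable output is easier to parse reliably than the normal output above; a sketch, assuming the same IP list and illustrative file names:
# Re-run the service scan with grepable output, then pull IP:port pairs for open ports
nmap -sV -T4 --top-ports 200 -iL purple_IP.txt -oG purple_PORTS.gnmap
awk '/Ports:/ {for (i = 1; i <= NF; i++) if ($i ~ /\/open\//) {split($i, p, "/"); print $2 ":" p[1]}}' purple_PORTS.gnmap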
Privilege audit — quick root check
# Find accounts with UID 0 (root-equivalent)
awk -F: '($3 == 0) {print $1}' /etc/passwd
- What it does: lists users with UID 0; we missed this early and it later proved significant.
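If extra UID 0 accounts do turn up, a hedged follow-up sketch is to lock everything except root (validate on one host first, and remember a locked password does not revoke SSH keys):
# Lock all UID 0 accounts other than root; review each hit before running this
awk -F: '($3 == 0 && $1 != "root") {print $1}' /etc/passwd | xargs -r -n1 passwd -l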
Why this matters
- I struggled to parse and act on the data fast enough. Preparing and testing parsing one‑liners and small aggregation scripts before the event would have saved valuable minutes and reduced context switching.
Quick triage checklists — run immediately and centralize
Linux (run right after shell access; save outputs to timestamped files)
ss -tunlp
ip addr show
ip route show
arp -a
ps aux
netstat -tulpen
crontab -l
awk -F: '($3 == 0) {print $1}' /etc/passwd
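A minimal wrapper sketch for the checklist above that drops every output into one timestamped directory per host (paths are illustrative):
# Run the Linux triage checklist and save each output into a timestamped directory
OUT=/tmp/triage_$(hostname -s)_$(date +%s); mkdir -p "$OUT"
ss -tunlp        > "$OUT/sockets.txt"
ip addr show     > "$OUT/addresses.txt"
ip route show    > "$OUT/routes.txt"
arp -a           > "$OUT/arp.txt"
ps aux           > "$OUT/processes.txt"
crontab -l       > "$OUT/crontab.txt" 2>&1
awk -F: '($3 == 0) {print $1}' /etc/passwd > "$OUT/uid0_accounts.txt"
Pull the directory back over scp so triage owners get a single bundle per host.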
Windows (elevated PowerShell)
netstat -ano
Get-NetIPAddress
route print
arp -a
Get-Process
schtasks /query /v /fo LIST
Get-Service
wevtutil qe System /c:50 /rd:true /f:text
Notes for triage
- Timestamp and centralize every output so triage owners can make decisions without re-scanning.
- Validate parsing scripts locally against known formats and tune for your nmap version and flags.
Remote execution and multi‑host remediation — reworked, practical
Context: I had not recently practiced these tools and that gap showed. The approach below is lean, safety‑first, and rehearsable.
Tool snapshot and tradeoffs
| Tool | Minimal setup | Strength | Best phase |
|---|---|---|---|
| pssh / parallel-ssh | SSH keys/creds | Very fast parallel read-only queries | Chaos / Phase 1 discovery |
| PowerShell Remoting / PSSession | WinRM + creds | Native Windows remote scripting | Windows remediation |
| scp / rsync / pscp | SSH | Fast, reliable file push | Distribute scripts or artifacts |
Practical phased workflow
- Phase 0 — Parallel read-only sweep:
- Use pssh or Invoke-Command to collect sockets, processes, crontabs/scheduled tasks across hosts; centralize outputs with timestamps and a simple filename convention. Prefer read-only commands first.
- Phase 1 — Validate:
- Triage owners analyze outputs and define exact remediation steps. Validate those actions on a single host before scaling.
- Phase 2 — Controlled parallel remediation:
- Execute validated, surgical commands in constrained batches (10–30 hosts), log all actions, and monitor impact. Rotate or revoke any credentials used for mass actions immediately after.
- Phase 3 — Verification and artifact collection:
- Collect verification artifacts and preserve logs for the postmortem and Scorebot verification.
Safe execution examples
# Parallel read-only: collect socket info and timestamp output
parallel-ssh -h hosts.txt -p 20 -i "ss -tunlp" > /srv/analyst/pssh_ss_outputs_$(date +%s).txt
# Push a tested script and execute in small batches
parallel-scp -h hosts.txt rotate_creds.sh /tmp/ && parallel-ssh -h hosts.txt -p 10 -i "bash /tmp/rotate_creds.sh"
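The push-and-run line above still touches every host in hosts.txt in one go; a sketch of the constrained 10–30 host batches from the workflow (batch size, paths, and rotate_creds.sh are illustrative):
# Split the host list into batches of 20 and remediate one batch at a time, logging each run
split -l 20 hosts.txt batch_
for b in batch_*; do
  parallel-ssh -h "$b" -p 10 -i "bash /tmp/rotate_creds.sh" | tee -a /srv/analyst/remediation_$(date +%s).log
  read -p "Batch $b finished. Check impact and Scorebot, then press Enter to continue. "
done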
# PowerShell Remoting: collect scheduled tasks from Windows hosts (assumes WinRM is reachable)
$hosts = Get-Content .\win_hosts.txt
$creds = Get-Credential   # prompt once for the account used for remoting
Invoke-Command -ComputerName $hosts -ScriptBlock { schtasks /query /v /fo LIST } -Credential $creds -ErrorAction Stop
Operational rules and guardrails
- Prefer read-only queries at first; validate changes on one host.
- Keep concurrency conservative (10–30), tuned to network capacity.
- Always timestamp outputs and store centrally.
- Prepare rollback steps for every mass action.
- Rotate or revoke credentials used for mass changes immediately.
- Practice these workflows in a lab to know failure modes.
What I would prioritize now
- Rehearse pssh, scp, and PSSession workflows in a lab so you know the mechanics and common errors.
- Maintain a tiny library of safe, tested remediation scripts (with rollback) that are signed or checksum-verified; a verification sketch follows this list.
- Centralize outputs to a known location so triage owners don’t wait on re-scans.
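For the signed/verified part, a minimal checksum-based sketch (GPG signatures are the stronger option; file names are illustrative):
# Build a checksum manifest for the script library once, on a trusted machine
sha256sum remediation_scripts/*.sh > manifest.sha256
# Verify before pushing to any host; a failed check means do not push
sha256sum -c manifest.sha256 || echo "checksum mismatch: do not push"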
Ansible — a short operational note
- Observation: Ansible is powerful for consistent, auditable remediation but it depends on target capabilities (SSH and typically Python on Linux, WinRM on Windows).
- Our caveat: we were not confident Python would be present on many Linux targets, which reduces Ansible’s viability in the chaos phase.
- Recommended posture: use Ansible as a post‑stabilization enforcement and verification layer. Keep a minimal set of conservative, idempotent playbooks, include dry‑run checks (a dry‑run invocation sketch follows this list), and avoid Ansible as the initial pivoting tool in a highly dynamic environment.
- Rely on tools such as pssh, scp, PSSession, and WinRM for tactical administration of assets.
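For the dry-run posture, a sketch of what a conservative run would look like (inventory and playbook names are illustrative; --check with --diff reports what would change without changing anything):
# Dry run first: report the changes the hardening playbook would make
ansible-playbook -i hosts.ini harden.yml --check --diff
# Apply only after the diff looks right, and limit scope to one group at a time
ansible-playbook -i hosts.ini harden.yml --limit linux_targets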
What went well and what I’ll change
What went well
- Fast initial foothold via default credentials provided valuable reconnaissance time in Phase 1.
- We effectively leveraged RCE in maint.php during Phase 2/Scorched Earth for purple-team objectives.
- Experienced teammates quickly mapped the network and applied hardening steps once privileges were established.
- Team operational security actions: cleaned unused SSH keys, added our public key to authorized_keys, disabled password auth and root SSH where allowed.
Where we fell short
- Parsing and aggregating data under pressure slowed action.
- Ownership and delegation gaps, especially for Windows hosts, created bottlenecks.
- Multiple Linux accounts with UID 0 were missed until later.
- Lack of recent practice with pssh, PSSession, and parallel remediation tools reduced our speed during critical windows.
Concrete checklist for the next event
- Hardware: dumb switch, very long Ethernet cables, small jumpbox, USB-to-Ethernet adapters.
- People: assign platform owners and a communications lead within the first five minutes.
- Playbooks and scripts: 15‑minute inventory playbook, parsing cheat sheet, and a tiny set of safe remediation scripts with rollback.
- Practice: rehearse parsing one‑liners, pssh, PSSession, scp, and controlled parallel remediation in a lab. Revisit hands‑on basics on sites like overthewire.org.
- Post-game: centralize artifacts immediately, run a focused hotwash on delegation and parsing shortfalls, fold tested one‑liners into the team’s Ops Card.
Closing
This run exposed the difference between knowing a tool exists and being ready to use it under pressure. The tactical changes above — hardware to pack, roles to assign, and lab rehearsals to run — will close that gap. The guiding priorities remain the same: learn, have fun, and win if you can.
[[Security]] [[2025]] [[CTF]] [[Hacked]] [[Training]] [[S.O.C]]