Selected Work / Project

Evidence-Based Audit Engine for IoT Security & Reliability

Author: Thomas Bonderup
Focus area: IoT Security and Reliability

Built a repeatable audit engine for IoT systems that turns evidence into deltas, regression alerts, and an executable backlog that can be re-run in CI.

Why this project matters

Context over abstraction

These projects show the technical environment, delivery pressure, and system constraints surrounding the work.

Decisions should be traceable

The useful part is understanding which architecture or implementation choices mattered and why they were taken.

Proof should be reusable

A strong project write-up leaves behind code, patterns, and lessons that another engineer can evaluate quickly.


Project snapshot

Challenge

Manual IoT audits produced one-off snapshots with weak repeatability, little run-to-run comparability, and no direct path from findings to a re-runnable engineering backlog.

Constraints

  • Evidence had to be collected across edge, gateway, and cloud boundaries.
  • Findings had to be traceable and reproducible for follow-up audit cycles.
  • The solution needed to run in CLI/CI workflows, not as a slide deck exercise.

Intervention

  • Implemented canonical evidence collection with Rust probes and structured JSON output.
  • Built a Scala/ZIO orchestration service to execute probes, persist evidence, and evaluate versioned rules.
  • Added run-to-run delta tracking and regression signaling for posture drift.
  • Generated report artifacts and issue/backlog outputs from evidence.

Outcomes

  • Audit workflows became repeatable and scriptable across environments.
  • Evidence and findings became comparable between runs instead of isolated snapshots.
  • Security and reliability drift could be detected as part of an operational cadence.

Context

This was built for edge-to-cloud IoT systems where TLS, MQTT, gateway behavior, and unstable networks create drift over time.
The objective was to move audits from one-time assessments to a repeatable control loop.

For deeper technical design context, see the blog post section on the problem and system boundaries.

Audit run center showing scoped probe selection and run mode

Intervention

I designed and implemented an evidence-first audit engine with a deterministic flow:

  1. Collect probe evidence from target assets.
  2. Normalize and evaluate evidence using versioned rules.
  3. Produce issue drafts and report artifacts.
  4. Compare runs to surface deltas and regressions.
  5. Feed outcomes into a re-runnable engineering backlog.
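
Steps 2 and 3 of this flow can be sketched as a pure evaluation pass over normalized evidence. The types, field names, and threshold-style rule shape below are illustrative assumptions for the sketch, not the engine's actual contract:

```rust
// Sketch only: threshold-style rule evaluation over normalized evidence.
// All names here (check_id, max, severity, version) are assumptions.
#[derive(Debug, Clone)]
struct Evidence {
    check_id: String, // e.g. "mqtt-reconnects"
    value: i64,       // normalized numeric observation
}

struct Rule {
    check_id: &'static str,
    max: i64,              // a finding is raised when value exceeds this
    severity: &'static str,
    version: &'static str, // rules are versioned so reruns stay comparable
}

#[derive(Debug, PartialEq)]
struct Finding {
    check_id: String,
    severity: String,
    rule_version: String,
}

fn evaluate(evidence: &[Evidence], rules: &[Rule]) -> Vec<Finding> {
    let mut findings = Vec::new();
    for rule in rules {
        for e in evidence.iter().filter(|e| e.check_id == rule.check_id) {
            if e.value > rule.max {
                findings.push(Finding {
                    check_id: e.check_id.clone(),
                    severity: rule.severity.to_string(),
                    rule_version: rule.version.to_string(),
                });
            }
        }
    }
    findings
}

fn main() {
    let evidence = vec![Evidence { check_id: "mqtt-reconnects".into(), value: 45 }];
    let rules = vec![Rule { check_id: "mqtt-reconnects", max: 30, severity: "warn", version: "v3" }];
    let findings = evaluate(&evidence, &rules);
    assert_eq!(findings.len(), 1);
    println!("{:?}", findings);
}
```

Because evaluation is a pure function of evidence and a versioned rule set, re-running the same rules against stored evidence reproduces the same findings, which is what makes the deltas in step 4 meaningful.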

Technical implementation details are documented in the full deep dive.

Canonical report view with executive summary, findings, and remediation tabs

Evidence

The platform generates auditable artifacts, not just recommendations:

  • Structured raw evidence from each probe execution (JSON contract per check).
  • Rule-evaluated findings with severity and issue keys.
  • Run records that allow run-to-run comparisons and drift detection.
  • Backlog-ready issue outputs that can be scheduled and re-verified.
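
As a rough illustration of the per-check JSON contract, a probe might emit one canonical record per check execution. The field names below are assumptions for the sketch, not the real schema, and a production probe would use a JSON library such as serde_json rather than hand-formatted strings:

```rust
// Sketch: render one canonical evidence record as a single JSON line
// using only the standard library. Field names are illustrative, and
// no string escaping is done (a real probe would use serde_json).
fn evidence_line(check_id: &str, target: &str, status: &str, observed: &str) -> String {
    format!(
        "{{\"check_id\":\"{}\",\"target\":\"{}\",\"status\":\"{}\",\"observed\":\"{}\"}}",
        check_id, target, status, observed
    )
}

fn main() {
    // One self-describing line per check keeps evidence easy to persist and diff.
    println!("{}", evidence_line("tls-cert-expiry", "gateway-01", "warn", "days_left=12"));
}
```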

This enables a “show, do not tell” audit style where each finding has traceable evidence and a deterministic rerun path.

Run-to-run delta report showing compared controls and evidence drift

Outcome

Engineering outcomes from this implementation:

  • Audits moved from manual snapshots to repeatable, evidence-backed runs.
  • Findings became measurable across cycles through deltas and regression checks.
  • Reliability, security, and observability could be assessed in one operating model.
  • Follow-up work became operational through generated, prioritized backlog items.
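
In its simplest form, the delta and regression idea above reduces to a set comparison of finding keys between two runs. This is a sketch under that simplification; real drift detection would also compare severities and evidence values:

```rust
use std::collections::HashSet;

// Sketch: classify findings between two runs by issue key.
// Keys new in the current run are candidate regressions;
// keys that disappeared are candidate fixes.
fn run_delta(previous: &[&str], current: &[&str]) -> (Vec<String>, Vec<String>) {
    let prev: HashSet<&str> = previous.iter().copied().collect();
    let curr: HashSet<&str> = current.iter().copied().collect();
    let mut regressed: Vec<String> = curr.difference(&prev).map(|k| k.to_string()).collect();
    let mut resolved: Vec<String> = prev.difference(&curr).map(|k| k.to_string()).collect();
    regressed.sort(); // deterministic ordering for stable reports
    resolved.sort();
    (regressed, resolved)
}

fn main() {
    let (regressed, resolved) = run_delta(
        &["weak-tls-cipher", "open-telnet"],
        &["weak-tls-cipher", "expiring-cert"],
    );
    assert_eq!(regressed, vec!["expiring-cert"]);
    assert_eq!(resolved, vec!["open-telnet"]);
    println!("regressed: {:?}, resolved: {:?}", regressed, resolved);
}
```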

Alerts summary showing critical regressions and expiring certificate signals

Next Step

This engine is a reusable proof asset for how I approach evidence modeling, deterministic audit execution, and backlog-oriented hardening in IoT systems:

  • Review the deep dive for the architecture and implementation details behind the run model.
  • Browse the wider portfolio for related gateway, telemetry, and reliability work.

Read the full deep dive or view more portfolio work.

Tech Notes

Small probes and deterministic orchestration make this practical for real projects where teams need repeatability, traceability, and an upgrade path rather than one-off reports.

Stack: Rust probes + Scala/ZIO service + Postgres + report/backlog generation + CI-friendly execution.

Thomas Bonderup

Senior Software Engineer

Specializes in IoT architecture, distributed systems, reliability and observability, and edge-to-cloud delivery.

Builder notes and project references

If this portfolio entry feels close to something you're building, let's talk through the implementation details and tradeoffs.

These portfolio entries come from a mix of real delivery work, deep technical explorations, and earlier builder projects. If the constraints, tradeoffs, or implementation choices feel familiar, I can help you weigh the next practical move.

Technical scope: IoT architecture, distributed systems, reliability and observability, and edge-to-cloud delivery.

Explore related work or start a technical conversation

If this project overlaps with the systems you care about, continue into related work, the CV, or the contact page for hiring and technical follow-up.

Related content

Get in touch about this work

Use this for hiring conversations, collaboration, or technical follow-up.

Prefer direct contact? Call +45 22 39 34 91 or email tb@tbcoding.dk.


Typical response time: same business day.