L1
Artifact Model
First, because the runtime has no object to execute without an addressable artifact. Digest addressing and signature verification must be established before runtime interfaces are written — retroactive changes to the artifact contract propagate across all layers.
put_artifact(digest, manifest, signature) works locally; digest is verified deterministically.
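The digest check can be sketched as follows. This is a minimal Python illustration, not the real implementation: `put_artifact` and the digest/manifest/signature triple come from the text above, while `ArtifactStore`, the in-memory dict, and the `sha256:` prefix convention are assumptions; signature verification against a trusted key is elided.

```python
import hashlib


class DigestMismatch(Exception):
    pass


class ArtifactStore:
    """Content-addressed local store: the digest is the address."""

    def __init__(self):
        self._blobs = {}  # digest -> (manifest, signature, payload)

    def put_artifact(self, digest, manifest, signature, payload):
        # Recompute the digest from the payload bytes and reject on
        # mismatch, so the address is always verified, never trusted.
        actual = "sha256:" + hashlib.sha256(payload).hexdigest()
        if actual != digest:
            raise DigestMismatch(f"claimed {digest}, computed {actual}")
        # Signature verification would happen here; elided in this sketch.
        self._blobs[digest] = (manifest, signature, payload)
        return digest

    def get_artifact(self, digest):
        return self._blobs[digest]
```

Because verification is deterministic, the same payload yields the same address on every node, which is what later layers rely on.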
L2
Single-node Runtime
Accepts artifact by digest, executes Wasm/WASI, returns status. No placement, no multi-node, no policy at this stage. The only question: does the artifact execute correctly and terminate predictably.
submit_task and cancel_task operate on a single node.
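A single-node runtime with this contract can be sketched in a few lines. `submit_task` and `cancel_task` are the names from the text; the `Status` enum, the pending queue, and the `resolve`/`execute` callables (standing in for the Layer 1 store and a Wasm/WASI invocation) are assumptions of this sketch.

```python
import enum
import itertools


class Status(enum.Enum):
    PENDING = "pending"
    COMPLETED = "completed"
    FAILED = "failed"
    CANCELLED = "cancelled"


class SingleNodeRuntime:
    """One node, no placement, no policy: accept by digest, run, report."""

    def __init__(self, resolve, execute):
        self._resolve = resolve   # digest -> artifact bytes (Layer 1 store)
        self._execute = execute   # stand-in for Wasm/WASI execution
        self._ids = itertools.count(1)
        self._tasks = {}          # task_id -> (Status, digest)

    def submit_task(self, digest):
        task_id = f"task-{next(self._ids)}"
        self._tasks[task_id] = (Status.PENDING, digest)
        return task_id

    def cancel_task(self, task_id):
        status, digest = self._tasks[task_id]
        if status is Status.PENDING:
            self._tasks[task_id] = (Status.CANCELLED, digest)

    def run_pending(self):
        # Every task terminates in exactly one terminal status.
        for task_id, (status, digest) in list(self._tasks.items()):
            if status is not Status.PENDING:
                continue
            try:
                self._execute(self._resolve(digest))
                self._tasks[task_id] = (Status.COMPLETED, digest)
            except Exception:
                self._tasks[task_id] = (Status.FAILED, digest)

    def status(self, task_id):
        return self._tasks[task_id][0]
```

The point of the sketch is the state machine, not the executor: predictable termination means every submitted task ends in exactly one of completed, failed, or cancelled.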
L3
Ephemeral State + OTel Instrumentation (parallel)
Neither component depends on the other, but both depend on a working runtime. OTel instrumentation points must be embedded during runtime development, not added after: retrofitting instrumentation once interfaces are stable consistently forces a refactor. Even if data flows nowhere at Layer 2, the instrumentation hooks must be present.
create_state / bind_state operate correctly. Traces and metrics flow via OTLP on submit → execute → complete path.
L4
Control API v0 — freeze point
Formalisation of the interface over working components. This is the freeze point — changes after this layer are expensive. The API must explicitly omit apply_policy, simulate_action, and explain_decision. Their absence is a design decision, not a gap.
API v0 specification is fixed; endpoint is operational.
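One way to pin the frozen surface down in code is a typed interface that an implementation must satisfy. The five operations are the ones named in earlier layers; expressing them as a Python `Protocol`, and the exact parameter types, are assumptions of this sketch. Note what is not on it: apply_policy, simulate_action, explain_decision.

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class ControlAPIv0(Protocol):
    """Frozen v0 surface. apply_policy, simulate_action and
    explain_decision are deliberately absent, not missing."""

    def put_artifact(self, digest: str, manifest: dict, signature: bytes) -> str: ...
    def submit_task(self, digest: str) -> str: ...
    def cancel_task(self, task_id: str) -> None: ...
    def create_state(self, name: str) -> str: ...
    def bind_state(self, task_id: str, state_id: str) -> None: ...
```

A structural check (`isinstance` against the runtime-checkable protocol) then doubles as a cheap conformance test for any backend.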
L5
CLI / SDK alpha
Thin wrapper over Control API v0. Purpose: remove the barrier for the first external engineer — run an artifact, inspect traces, manage state without knowledge of internal structure. Not production-grade; sufficient for reference workloads.
An engineer without source access can complete a full submit → observe → cancel cycle via CLI.
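The command surface for that cycle can be sketched with argparse. The `vyr` program name and the `submit`/`trace`/`cancel` subcommand layout are hypothetical; what the sketch shows is the thinness of the wrapper — each subcommand maps one-to-one onto a Control API v0 call.

```python
import argparse


def build_parser():
    # Hypothetical command layout mirroring the submit -> observe -> cancel cycle.
    parser = argparse.ArgumentParser(prog="vyr")
    sub = parser.add_subparsers(dest="command", required=True)

    submit = sub.add_parser("submit", help="run an artifact by digest")
    submit.add_argument("digest")

    trace = sub.add_parser("trace", help="inspect traces for a task")
    trace.add_argument("task_id")

    cancel = sub.add_parser("cancel", help="cancel a running task")
    cancel.add_argument("task_id")

    return parser
```

Keeping every subcommand a direct API call means the CLI adds no semantics of its own, so it cannot drift from the frozen v0 surface.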
L6
Reference Workloads × 3 Environments
Two workloads chosen from the benchmark corpus: event transform / stream enrichment — minimal dependencies, best demonstrates portability; disconnected collector with delayed sync — demonstrates ephemeral state and offline tolerance.
Three environments: Laptop (local), Simulated edge (VM with constrained network), Cloud VM. No code changes to the workload between environments — this is the direct test of H1.
Both workloads pass all three environments; results are documented with raw numbers.
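What "no code changes between environments" means for the first workload can be illustrated with a sketch. The function body, field names, and lookup-table shape are invented for illustration; the point is structural — a pure function of its inputs, with no filesystem paths, network calls, or environment detection, is the same artifact on laptop, simulated edge, and cloud VM.

```python
def enrich(events, lookup):
    """Event transform / stream enrichment as a pure function.

    No paths, no sockets, no env vars: portability falls out of the
    workload touching nothing outside its arguments.
    """
    for event in events:
        enriched = dict(event)
        enriched["region"] = lookup.get(event.get("sensor_id"), "unknown")
        yield enriched
```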
L7
Comparative Benchmark
Same two workloads on baseline stack. Baseline: Kubernetes + simple job runner (Argo Workflows or bare K8s Jobs).
Metric                                  Target (pre-prod)
Wasm cold start P50                     < 100 ms
Wasm cold start P99                     < 400 ms
Bootstrap time (single node)            < 30 min
Mandatory components to run workload    vs baseline count
OCI fallback success rate               > 90%
Results table with numbers, not assertions.
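Producing those numbers rather than assertions needs a measurement harness; a minimal one might look like this. `measure_cold_start`, the run count, and the `start_instance` callable (standing in for instantiating the Wasm module and running to first yield) are assumptions; the nearest-rank percentile is one standard choice among several.

```python
import time


def percentile(samples, p):
    # Nearest-rank percentile over the sorted samples.
    data = sorted(samples)
    k = max(0, min(len(data) - 1, round(p / 100 * (len(data) - 1))))
    return data[k]


def measure_cold_start(start_instance, runs=100):
    """Time each cold start in ms and report the table's two quantiles."""
    samples_ms = []
    for _ in range(runs):
        t0 = time.perf_counter()
        start_instance()  # stand-in: instantiate module, run to first yield
        samples_ms.append((time.perf_counter() - t0) * 1000)
    return {"p50": percentile(samples_ms, 50), "p99": percentile(samples_ms, 99)}
```

Reporting P50 and P99 from the same raw sample set, with the run count stated, is what lets the results table carry numbers instead of claims.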
L8
Whitepaper v0.1
Last, because it documents what was built and measured — not what is planned. Written after benchmark results exist. The explicit non-goals section is mandatory: it separates a research document from product marketing and establishes trust with technical readers.
1. Problem statement     why execution fabric, not orchestrator
2. Architecture          execution model, artifact model, consistency classes
3. API v0                minimal interface and its boundaries
4. Results               benchmark results for H1 and H2
5. Explicit non-goals    what MVO does not do and why
6. Roadmap signal        what the next phase addresses and when
Document is internally reviewable by a technical reader with no prior context on VYR.