Performance & Capacity
Capacity planning is telemetry-first, not marketing-claim-first.
NOXA publishes observability metrics and cross-project trust-chain tests today. Public benchmark figures are not yet published, so this page describes what is measured and how capacity is validated before production commitments.
Status
Implemented, partially in place, and planned
Runtime telemetry and scenario scaffolds
Implemented: Runtime metrics endpoints and baseline performance scenario scaffolds are available.
Partially in place: Scenario chaining and repeatable seeded datasets are not fully industrialized.
Planned: Broader scenario packs with stable seeded datasets and recurring baseline executions.
Source: ../Noxa/docs/performance-test-plan.md
Benchmark publication readiness
Implemented: Methodology, controls, and measurement targets are documented.
Partially in place: No public benchmark report with guaranteed throughput/latency figures is released yet.
Planned: Publish validated benchmark packs with profile/scenario context and regression thresholds.
Source: ../Noxa-Packager/docs/PERFORMANCE_TEST_PLAN.md
Measured Today
What is already measured in the repositories
HTTP request volume, latency histograms, status codes, and in-flight requests via /metrics.
Automation worker cycle health, errors, scanned tickets, and applied actions.
License and product conformity indicators including support eligibility status.
Operational health endpoints used in deployment, upgrade, and incident runbooks.
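The latency histograms exposed at /metrics can feed percentile estimates directly. As a minimal sketch, the helper below derives a conservative p95 from Prometheus-style cumulative histogram buckets; the bucket boundaries and counts shown are illustrative, not actual NOXA figures.

```python
# Sketch: estimating p95 latency from a Prometheus-style latency histogram
# scraped at /metrics. Bucket values below are illustrative placeholders.

def histogram_p95(buckets):
    """buckets: sorted list of (upper_bound_seconds, cumulative_count).
    Returns the upper bound of the first bucket covering the 95th
    percentile (a conservative estimate, no interpolation)."""
    total = buckets[-1][1]          # cumulative count in the +Inf bucket
    threshold = 0.95 * total
    for upper, cumulative in buckets:
        if cumulative >= threshold:
            return upper
    return buckets[-1][0]

# Example cumulative buckets as a latency histogram would expose them:
sample = [(0.05, 700), (0.1, 900), (0.25, 970), (0.5, 995), (float("inf"), 1000)]
print(histogram_p95(sample))  # 0.25
```

Because the estimate snaps to a bucket boundary, it never understates latency, which suits go/no-go checks better than an interpolated value.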
Load Profiles
Profiles used for validation
Profile A: team-level production with moderate concurrency and no local AI.
Profile B: multi-team production with sustained ticket and automation activity.
Profile C: AI-enabled production where local model footprint is isolated and monitored.
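So that every scenario run can be tagged with the profile it validated, the profiles can be encoded as data. The concurrency figures in this sketch are illustrative placeholders, not published NOXA sizing numbers.

```python
# Sketch: the three validation profiles as structured data, so benchmark
# results can always be tied back to a documented profile. Concurrency
# numbers are assumptions for illustration only.
from dataclasses import dataclass

@dataclass(frozen=True)
class LoadProfile:
    name: str
    concurrent_users: int
    local_ai: bool

PROFILES = {
    "A": LoadProfile("team-level production", concurrent_users=25, local_ai=False),
    "B": LoadProfile("multi-team production", concurrent_users=100, local_ai=False),
    "C": LoadProfile("AI-enabled production", concurrent_users=100, local_ai=True),
}

print(PROFILES["C"].local_ai)  # True
```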
Validation Scenarios
Planned benchmark and qualification scenarios
Read/write API mix under controlled concurrent user load.
Worker-intensive scenario with automation rules enabled.
License and conformity guard behavior under strict production mode.
Upgrade and rollback rehearsal with post-check latency and error budget review.
Optional local AI scenario separating NOXA core metrics from model inference latency.
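The first scenario above, a read/write API mix under controlled concurrency, can be sketched as a small seeded driver. The request functions here are stubs standing in for real API calls, and the 80/20 read ratio is an assumed default, not a documented NOXA parameter.

```python
# Sketch: seeded read/write mix under bounded concurrency. read_request and
# write_request are stubs; a real run would call the NOXA API instead.
import random
import time
from concurrent.futures import ThreadPoolExecutor

def read_request():   # stub for a GET-style call
    time.sleep(0.001)

def write_request():  # stub for a POST-style call
    time.sleep(0.002)

def timed(op):
    """Run one operation and return its latency in seconds."""
    start = time.perf_counter()
    op()
    return time.perf_counter() - start

def run_mix(total_requests, read_ratio=0.8, concurrency=8, seed=42):
    """Issue a repeatable read/write mix and collect per-request latencies."""
    rng = random.Random(seed)  # seeded so the mix is reproducible run-to-run
    ops = [read_request if rng.random() < read_ratio else write_request
           for _ in range(total_requests)]
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(timed, ops))

latencies = run_mix(50)
print(len(latencies))  # 50
```

Seeding the operation mix is what makes baseline executions comparable across runs, which is the point of the "repeatable seeded datasets" work noted as partially in place above.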
Validation Goals
Expected outcomes before production commitment
Confirm stability under expected daily and peak usage windows.
Detect bottlenecks in database, worker loops, and ingress/reverse-proxy layers.
Validate that strict trust checks do not break operational SLAs.
Provide evidence-based sizing recommendations per deployment profile.
Capacity Philosophy
How NOXA sizing decisions are framed
Capacity planning starts from measured telemetry, not marketing throughput claims.
Sizing always includes backup windows, upgrade windows, and rollback safety margins.
Integrator and client teams validate assumptions in pre-production before go-live.
Any published numeric target must be tied to a documented scenario and profile.