Deployment

Dedicated on-prem deployment with clear network and trust boundaries.

NOXA deployment is designed for customer-controlled infrastructure. DNS/FQDN, static IP, reverse proxy, and TLS choices are integrated into deployment planning without changing the runtime's verify-only trust model.

Deployment Reality

What is implemented today and what stays environment-specific

Dedicated server deployment
Implemented

NOXA runtime is delivered for customer-owned on-prem infrastructure with Compose, Kubernetes, or Helm deployment paths.

A single root .env file is the runtime configuration source.
Preflight, upgrade, and rollback commands are implemented in scripts/install.py.
Runtime services and operational controls stay on customer infrastructure.
Source: ../Noxa/README.md
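As an illustration of the single-file configuration model, a root .env could look like the sketch below. Every key name here is hypothetical and only stands in for the template delivered with the runtime; do not treat these as the product's actual variables.

```
# Hypothetical root .env sketch -- actual keys come from the delivered template
APP_FQDN=app.client.example          # client-owned FQDN (integrator-defined)
TLS_CERT_PATH=/etc/noxa/tls/app.crt  # placeholder path
TLS_KEY_PATH=/etc/noxa/tls/app.key   # placeholder path
LICENSE_PATH=license/license.json    # signed license artifact location
```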
DNS, FQDN, and static IP strategy
Partially in place

The product documents DNS/FQDN and certificate expectations, while the final mapping is defined per client by integrators.

Integrator checklist explicitly includes DNS/FQDN and TLS strategy.
Certificate rotation guidance requires SAN/FQDN/IP validation.
No single hardcoded public DNS model is imposed by the runtime.
Source: ../Noxa/docs/production-deployment-dns-ip.md
Reverse proxy and TLS boundary
Partially in place

Helm ingress is implemented, and TLS controls are enforced. External reverse proxy patterns are deployment architecture decisions handled with the integrator.

Helm mode exposes HTTPS ingress and can include OIDC paths.
Transport security guardrails fail fast when HTTPS and TLS requirements are not met.
For Compose and raw Kubernetes, reverse proxy topology remains environment-specific.
Source: ../Noxa/docs/production-deployment-dns-ip.md
Runtime and signed bundle integration
Implemented

Operational deployment relies on signed artifacts delivered by Factory and Packager, then verified locally by the runtime.

Runtime is verify-only and does not sign artifacts.
Packager delivers signed bundle-manifest and product-manifest artifacts.
Support eligibility depends on official signed artifacts and conformity checks.
Source: ../Noxa/docs/product-trust-chain.md
DNS / IP / FQDN

Client network planning baseline

Reserve a client-owned FQDN per exposed service (for example app, API, optional SSO).
Associate FQDNs with static IP or internal load-balancer addresses managed by client operations.
Issue certificates with SAN entries matching expected FQDN/IP paths before production go-live.
Document DNS ownership and certificate renewal responsibilities in the integrator handover dossier.
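The SAN requirement above can be rehearsed before go-live with a throwaway certificate. The hostname and IP below are placeholders for the client's planned values, not product defaults:

```shell
# Generate a throwaway self-signed cert whose SANs cover the planned FQDN and IP
# (app.client.example / 10.0.0.10 are placeholders for client-owned values)
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -keyout app.key -out app.crt \
  -subj "/CN=app.client.example" \
  -addext "subjectAltName=DNS:app.client.example,IP:10.0.0.10"

# Inspect the SAN entries exactly as a TLS verifier would see them
openssl x509 -in app.crt -noout -ext subjectAltName
```

Running the same inspection against the real production certificate confirms the FQDN/IP mapping before cutover.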
Reverse Proxy / TLS

Boundary and implementation notes

Helm mode includes ingress-based HTTPS exposure.
Compose and raw Kubernetes can be fronted by client reverse proxies without changing trust-chain logic.
TLS termination model (edge termination or end-to-end TLS) is selected per security policy.
Regardless of proxy topology, runtime strict checks still require valid signed artifacts.
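For the edge-termination model, a client-side reverse proxy could be sketched along these lines. nginx is shown only as an example; hostnames, certificate paths, and the server name are placeholders, and the upstream port assumes the documented Compose frontend baseline (3443):

```nginx
# Hypothetical edge-termination sketch -- adapt names, paths, and ports
# to the client's security policy; not a product-mandated topology.
server {
    listen 443 ssl;
    server_name app.client.example;            # client-owned FQDN

    ssl_certificate     /etc/nginx/tls/app.crt;
    ssl_certificate_key /etc/nginx/tls/app.key;

    location / {
        # Forward to the NOXA frontend; 3443 matches the documented baseline
        proxy_pass https://127.0.0.1:3443;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

An end-to-end TLS variant would keep `proxy_pass` on HTTPS and add upstream certificate verification instead of terminating trust at the edge.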
Deployment Variants

Single-server, internal infrastructure, and air-gapped variants

Single Server
Implemented
Usage: One dedicated host running Compose services with local TLS.
Network: Restricted LAN exposure, optional VPN entry point.
DNS/TLS: FQDN and certificate mapping managed by integrator.
Fastest path for first production deployment.
Uses documented backup, upgrade, and rollback scripts.
Internal Infrastructure
Implemented
Usage: Clustered deployment with Kubernetes or Helm inside customer internal network.
Network: Ingress and service boundaries controlled by internal networking policy.
DNS/TLS: Ingress TLS and hostnames configured by platform team/integrator.
Best fit for organizations with existing cluster governance.
Supports optional Keycloak/OIDC path in Helm mode.
Air-Gapped
Implemented
Usage: Disconnected environment using offline licensing and signed artifact transfer.
Network: No mandatory external license server dependency at runtime.
DNS/TLS: Internal CA and internal DNS are prepared by client/integrator.
Trust model and licensing flow are offline-ready.
Operational packaging/hardening kit is delivered case-by-case with integrators.
Implemented Paths

Deployment commands

Docker Compose

Fast on-prem bootstrap and repeatable local operation.

python ./scripts/install.py compose-up --init-env --generate-certs
Kubernetes manifests

Raw manifest deployment for controlled cluster environments.

python ./scripts/install.py k8s-up --init-env --generate-certs
Helm

Chart-based rollout with ingress-oriented deployment paths.

python ./scripts/install.py helm-up --init-env --generate-certs
Checks

Preflight and readiness validation

python ./scripts/preflight.py --mode compose
python ./scripts/preflight.py --mode k8s
python ./scripts/preflight.py --mode helm
cd backend && make prod-readiness-check
cd backend && make license-check
Base Prerequisites
Runtime prerequisites
Python 3
Docker
kubectl (for Kubernetes and Helm modes)
helm (for Helm mode)
Root .env file as the single configuration source
Runtime

Baseline endpoints after deployment

Frontend: https://localhost:3443/
Backend health: https://localhost:8443/health
Backend docs: https://localhost:8443/docs
Metrics: /metrics
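A minimal readiness loop against the baseline health endpoint might look like the sketch below; `-k` accepts self-signed bootstrap certificates and should be dropped once production TLS is in place:

```shell
# Poll the backend health endpoint until it answers, up to a retry budget.
wait_for_health() {
  local url="${1:-https://localhost:8443/health}"
  local tries="${2:-30}"
  local i
  for i in $(seq "$tries"); do
    # -f: fail on HTTP errors, -s: silent, -k: accept self-signed certs
    curl -fsk "$url" >/dev/null 2>&1 && return 0
    sleep 2
  done
  return 1
}
```

Calling `wait_for_health` with no arguments targets the documented baseline endpoint; pass a URL and retry count to adapt it to an ingress-fronted deployment.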
Bundle Content

Expected integrator handover artifacts

manifests/product-manifest.json + manifests/product-manifest.sig
manifests/bundle-manifest.json + manifests/bundle-manifest.sig
license/license.json + license/license.sig (canonical handover for production target)
install/docker-compose.yml and install/client-values.yaml
install/deployment-context.json (installation/support artifact, not primary runtime trust manifest)
CHECKSUMS.txt and signed bundle archive (.tar.gz + .sig)
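The handover artifacts above can be sanity-checked with standard tooling. The function below is a sketch run from the unpacked bundle root; it assumes the Packager public key was delivered out of band under the hypothetical filename `packager-pub.pem`:

```shell
# Sketch: integrator-side integrity check of a delivered bundle.
# packager-pub.pem is a placeholder name for the out-of-band Packager key.
verify_bundle() {
  # 1. Every listed file must match its recorded checksum.
  sha256sum -c CHECKSUMS.txt || return 1
  # 2. The bundle-manifest signature must verify against the Packager key.
  openssl dgst -sha256 -verify packager-pub.pem \
    -signature manifests/bundle-manifest.sig manifests/bundle-manifest.json
}
# Usage (from the unpacked bundle root):
#   verify_bundle && echo "bundle OK"
```

The same pattern applies to product-manifest.json and the license pair; the runtime performs its own strict verification regardless, so this check only front-loads failures before handover.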
Integrator Install Scope

What is installed and validated before handover

Install runtime deployment assets (Compose, Kubernetes, or Helm path).
Place official signed trust artifacts on runtime paths: license + manifests + signatures.
Keep install/deployment-context.json for installation/support evidence (not primary trust source).
Run diagnostics and production-guard checks before customer handover.
Support Eligibility

What deployment must preserve for support continuity

Valid signed license artifacts (license.json + license.sig).
Valid signed product-manifest and bundle-manifest artifacts.
Edition/module/version/artifact coherence between runtime and signed manifests.
No unsupported rebuild or tampering of official runtime artifacts.