Invest in the Future of
Edge AI Inference
THOX.ai is building the purpose-built edge AI platform — Nova hardware, MagStack™ clustering, MeshStack™ private AI fabric, and ThoxQuantum on-device quantum emulation — for every industry that demands private, local, and scalable AI.
$30B+ Edge AI Market
The edge AI hardware market is growing rapidly as inference costs eclipse training and privacy constraints tighten across all industries.
$2.5B+ Edge Quantum TAM
THOX.ai is first to combine edge AI with on-device quantum emulation, unlocking an entirely new category: privacy-first quantum computing at the edge.
Purpose-Built Hardware
THOX Nova ships first as the personal / single-tenant edge device. The THOX Edge Series™ (Pro · Pro Max · Pro Ultra) extends the line into workgroup, department, and enterprise-rack tiers — same ThoxOS, same MagStack clustering, scaled compute. R&D is wrapping up; the Edge Series launches right after the Nova flagship ships.
MagStack™ Clustering
Proprietary magnetic stacking enables 7B to 200B+ model deployment and 29–32 qubit quantum emulation across 1–8 device clusters.
MeshStack™ — Private AI Fabric
Cross-platform app (iOS, Android, macOS, Windows) that turns any phone, tablet, or laptop into a private mesh node — layering recurring SaaS revenue on top of hardware.
ThoxMigrate™ — Automated Cloud-to-Edge Migration Platform
A THOX.ai-built platform that lets AI software companies move from cloud APIs (OpenAI, Anthropic, Bedrock, Azure OpenAI, Cohere) to the THOX Edge Series™ and THOX Nova Series™ with one API_BASE_URL change. Three parallel AI teams over MeshCognition produce the architecture, codebase, and debrief. Access is gated; pricing is set on admission — high-margin platform-access revenue compounding on top of hardware and SaaS.
Platform Partners
Intel Partner Alliance member and NVIDIA Inception startup — third-party validation of technical execution and ecosystem access.
Investment Thesis
THOX.ai is building a purpose-built edge AI platform that enables professionals and enterprises across all industries to run modern LLM inference locally with predictable cost, low latency, and strong privacy guarantees—without relying on cloud GPUs or oversized general-purpose hardware.
The platform now spans five reinforcing layers: THOX hardware (THOX Nova Series™ shipping first; THOX Edge Series™ Pro · Pro Max · Pro Ultra coming next), MagStack™ magnetic clustering, MeshStack™ (a cross-platform private AI fabric across phones, tablets, and laptops), ThoxQuantum on-device quantum emulation, and ThoxMigrate™ (a THOX.ai-built automated cloud-to-edge migration platform). Each layer is a distinct revenue line, and each one strengthens the others.
As inference costs eclipse training costs and privacy constraints tighten across healthcare, legal, finance, and enterprise, compute is shifting from centralized cloud to right-sized edge infrastructure. THOX aims to own this transition for every industry that demands data sovereignty.
The World's First Privacy-First Edge Quantum Platform
ThoxQuantum: GPU-accelerated quantum circuit emulation with quantum simulation capabilities on every Nova device
THOX.ai has integrated quantum simulation capabilities directly into the ThoxOS stack, turning every THOX device into an on-device quantum emulator: up to 31 qubits of exact state-vector simulation at FP64 (32 qubits at FP32) across MagStack clusters, with tensor-network simulation extending practical capacity to 50–100+ qubits for structured circuits.
This is not vaporware. Quantum simulation runs natively on the GPU-accelerated runtime baked into ThoxOS.
Qubit Capacity by Hardware
| Device | SV FP64 | SV FP32 | TensorNet | Density |
|---|---|---|---|---|
| Nova (single) | 29q | 30q | 50–100q+ | 14q |
| MagStack Quad | 30q | 31q | 80–120q+ | 15q |
| MagStack Octuple (8×) | 31q | 32q | 100q+ | 15q |
SV = State Vector · TensorNet = Tensor Network · Density = Density Matrix
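These capacity figures follow directly from state-vector memory requirements: an n-qubit exact simulation stores 2^n complex amplitudes, at 16 bytes each for FP64 complex and 8 bytes for FP32 complex. A quick sanity check (the helper below is illustrative arithmetic, not a ThoxOS API):

```python
def sv_memory_gib(n_qubits: int, bytes_per_amplitude: int) -> float:
    """Memory for an exact state-vector simulation: 2**n complex amplitudes."""
    return (2 ** n_qubits) * bytes_per_amplitude / 2 ** 30

# FP64 complex = 16 bytes; FP32 complex = 8 bytes.
print(sv_memory_gib(29, 16))  # 8.0 GiB -> fits a single Nova
print(sv_memory_gib(30, 8))   # 8.0 GiB -> the FP32 figure for the same device
print(sv_memory_gib(31, 16))  # 32.0 GiB -> needs MagStack-pooled memory
```

Each added qubit doubles the memory footprint, which is why clustering buys only a few extra qubits of exact simulation while tensor networks stretch much further on structured circuits.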
Competitive Moat vs Cloud Quantum
| Feature | THOX | IBM | AWS | Other cloud |
|---|---|---|---|---|
| On-device (no cloud) | ✅ | ❌ | ❌ | ❌ |
| Air-gapped / offline | ✅ | ❌ | ❌ | ❌ |
| HIPAA compliant | ✅ | ❌ | ❌ | ❌ |
| $0 per-shot cost (owned HW) | ✅ | ❌ | ❌ | ❌ |
| LLM + Quantum on same device | ✅ | ❌ | ❌ | ❌ |
| MagStack cluster scaling | ✅ | ❌ | ❌ | ❌ |
Revenue-Generating Use Cases
MeshStack™ — One Private AI Fabric Across Your Devices
A cross-platform app that pairs phones, tablets, and laptops into a private mesh — adding a recurring SaaS layer on top of every Nova sale.
MeshStack is the software fabric that lets any combination of personal devices run AI together: no coordinator ever sees plaintext, no cloud holds your keys, and after first pairing the mesh runs offline on your LAN. Available on iOS, Android, macOS, and Windows.
Strategically, MeshStack converts THOX from a one-time hardware sale into a multi-tier subscription business. Every paired Nova grants Pro-equivalent capability inside the mesh, anchoring hardware buyers into the recurring revenue layer.
Four Privacy Invariants — public on /meshstack
Coordinator never sees plaintext
Weights, prompts, activations, KV cache, agent state, and files never leave the mesh in plaintext.
Keys live on your devices
WireGuard® private keys are device-local. No cloud key custody, backup, or THOX-mediated recovery.
Off-LAN traffic stays ciphertext
Off-LAN relay carries encrypted traffic only. Relay and cellular behavior is disclosed before opt-in.
Works offline on LAN
After pairing, your mesh runs offline on your LAN. Coordinator is used only for first-time pairing and registry.
Subscription Tiers (Recurring Revenue)
| Tier | Price | Devices | Model Class |
|---|---|---|---|
| Free | $0 | 2 | 7B class |
| Pro (default) | $19 / month | 5 | 13B class |
| Family | $39 / month | 8 | 13B class |
| Team | $99 / seat / month | 25 / seat pool | 13B class |
| Enterprise | Custom | Custom | Custom |
THOX Nova grants Pro-equivalent capability while a registered Nova remains active in your mesh.
Founder Annual Pre-Orders — Lifetime Lock
MeshStack Pro · Founder Annual
MeshStack Family · Founder Annual
MeshStack Team · Founder Annual
Founder annual pre-orders lock subscription pricing for the life of the account — a durable customer-acquisition wedge that compounds with every Nova reservation.
ThoxMigrate™ — Automated Cloud-to-Edge Migration Platform
A THOX.ai-built platform that moves cloud-AI workloads to the THOX Edge Series™ and THOX Nova Series™ with one API_BASE_URL change. Powered by three parallel AI teams sharing intelligence over MeshCognition.
ThoxMigrate is THOX.ai’s own automated migration platform — not a partner-delivered service. Three parallel AI teams (Architecture & Strategy, Technical Implementation, Documentation) coordinate over MeshCognition and converge on a collective migration solution: the architecture spec, the running proxy/scanner/matcher codebase, and the human debrief deck.
Strategically, ThoxMigrate is the missing piece that makes THOX.ai a genuine cloud-AI infrastructure replacement, not just a hardware product. AI software companies running OpenAI / Anthropic / Bedrock / Azure OpenAI / Cohere today can migrate to the edge without touching frontend code; THOX.ai earns recurring platform-access revenue layered on top of hardware and SaaS.
Built by THOX.ai
A proprietary platform owned and operated by THOX.ai. Three AI teams over MeshCognition produce the migration plan, code, and debrief without back-and-forth user instructions.
Access by application
No free or self-serve tier. Sign-up + access request → platform review → admission into Pilot, Production, or Enterprise. Admission is the moment of monetization.
One env var change
Customer changes API_BASE_URL from api.openai.com to thox-proxy.local. Same request schema, same response schema, automatic rollback if anything degrades.
Recurring access revenue
Three access tiers priced on admission. Every admission compounds platform-access revenue on top of hardware and SaaS.
Three access tiers — granted on admission to the ThoxMigrate platform
Pricing is set by THOX.ai on admission to the ThoxMigrate platform — public pricing is intentionally not disclosed.
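The mechanics of the single-variable switch described above can be sketched as follows. Everything except the two endpoint names stated on this page (api.openai.com and thox-proxy.local) is an illustrative placeholder, not published THOX configuration:

```python
import os

# Before: requests go to the cloud API. After: the same requests go to the
# local ThoxMigrate proxy. Only the base URL changes; the payload does not.
os.environ["API_BASE_URL"] = "http://thox-proxy.local/v1"  # was "https://api.openai.com/v1"

def chat_request(base_url: str, model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completion request against any base URL."""
    return {
        "url": f"{base_url}/chat/completions",
        "json": {"model": model, "messages": [{"role": "user", "content": prompt}]},
    }

cloud = chat_request("https://api.openai.com/v1", "gpt-4o", "Summarize this contract.")
edge = chat_request(os.environ["API_BASE_URL"], "local-model", "Summarize this contract.")

# Identical request and response schema either way -- that is the whole
# migration surface the frontend code ever sees.
assert cloud["json"].keys() == edge["json"].keys()
```

Because the request shape is unchanged, rollback is equally one variable: point API_BASE_URL back at the cloud endpoint.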
ThoxMicroV™ — Managed Microvisor Control Plane
Developer platform to virtualize ThoxOS on Intel and NVIDIA Jetson with gated registration, session governance, and terminal access for development/testing.
Account-gated access model: users register, request access, and are granted role-based permissions before terminal actions are available.
Unified control plane for Intel and Jetson variants with operator-visible running sessions and health telemetry surfaces.
High-margin recurring platform layer: managed access, enterprise governance, and support SLAs on top of hardware and MeshStack subscriptions.
THOX Edge Series™ — Workgroup, Department, Enterprise Edge AI
Three SKUs (Pro · Pro Max · Pro Ultra) extending the THOX hardware line beyond personal Nova into workgroup, department, and enterprise-rack tiers. Same ThoxOS, same MagStack clustering, scaled compute. R&D is wrapping up; launching right after the Nova flagship ships. Not part of the Founders campaign.
The THOX Edge Series adds three production-grade SKUs on top of the personal THOX Nova line: Pro (32GB · 64 TOPS) for workgroups, Pro Max (64GB · 128 TOPS) for departments, and Pro Ultra (128GB · 256 TOPS) for enterprise-rack deployments. All three run ThoxOS, ship in three chassis colors (Matte Black · Space Gray · Arctic White), and are clusterable through MagStack.
The team is currently wrapping up research and development on the Edge Series, with launch sequenced to follow Nova: Nova ships first to Founders, then the Edge Series flips live. Strategically, the Edge Series moves THOX up-market from the personal-device price point into team, department, and enterprise hardware — without rebuilding the software stack. ThoxOS, MagStack, MeshStack, and ThoxMigrate were all designed to scale from one Nova up to a 256-TOPS Pro Ultra rack; the Edge Series turns that scalability into ICP-aligned hardware revenue. Live Stripe products are already created (9 SKUs); pricing flips on at launch.

Three SKUs · three colors
Pro / Pro Max / Pro Ultra in Matte Black, Space Gray, or Arctic White — 9 live Stripe products today, no active prices until launch.
Up-market from Nova
Pro (workgroup), Pro Max (department), Pro Ultra (enterprise rack). Larger memory, more TOPS, same THOX experience.
Same software stack
ThoxOS, MagStack clustering, MeshStack subscriptions, and ThoxMigrate access tiers all unmodified — every Edge Series unit drops into the existing platform.
R&D wrapping up · launches after Nova
The Edge Series ships right after the Nova flagship ships to Founders. Pre-announcement only on the public site for now.
Three SKUs — same software stack, scaled compute



Available in Matte Black, Space Gray, and Arctic White — same chassis colors as Nova. Pricing intentionally not disclosed; the Edge Series is not part of the Founders campaign.


Founders Reserve — Pre-Revenue Demand Validation
Refundable reservation deposits and tiered MagStack bundles that fund working capital and validate ICPs before manufacturing scales.
Reservations are placed with a fully refundable $99.99 deposit per Nova unit. Tiered pricing ranges from $629 (Super Early Bird, 30% off MSRP) through $899 MSRP, with MagStack Duo, Quad, and Octuple bundles available at corresponding savings.
Each reservation also bundles 12 months of MeshStack Pro and locks Founder annual pricing for the life of the subscription — converting a hardware deposit into a long-tail recurring revenue commitment.
Target shipping window is December 2026, in line with hardware EVT and manufacturing-partner timelines published on the Founders Campaign page.
The Problem
There is no simple, modular, inference-first platform designed for on-prem LLM workloads.
Our Solution
Run 7B-32B LLMs locally, scale via clustering, maintain full data sovereignty.
MagStack™ Clustering Technology
Revolutionary magnetic stacking technology that enables multiple THOX.ai devices to combine RAM and compute power for running larger AI models.
| Configuration | Memory | Throughput | Max Model |
|---|---|---|---|
| Nova (single) | 16GB | 20-72 tok/s | 32B |
| MagStack Duo (2×) | 32GB | 25-45 tok/s | 70B |
| MagStack Quad (4×) | 64GB | 15-30 tok/s | 100B+ |
| MagStack Octuple (8×) | 128GB | 10-20 tok/s | 200B+ |
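The max-model figures line up with quantized weights filling pooled cluster memory. A rough sizing rule, assuming ~4-bit quantization and ~80% of memory usable for weights after KV cache, activations, and OS overhead (both ratios are our illustrative assumptions, not THOX specifications):

```python
def max_model_params_b(pooled_memory_gb: float, bits_per_weight: float = 4.0,
                       usable_fraction: float = 0.8) -> float:
    """Largest model (billions of parameters) that fits in pooled cluster
    memory, reserving headroom for KV cache, activations, and the OS."""
    usable_bytes = pooled_memory_gb * 1e9 * usable_fraction
    return usable_bytes / (bits_per_weight / 8) / 1e9

for name, gb in [("Nova", 16), ("Duo", 32), ("Quad", 64), ("Octuple", 128)]:
    print(f"{name}: ~{max_model_params_b(gb):.0f}B params at 4-bit")
# Nova: ~26B, Duo: ~51B, Quad: ~102B, Octuple: ~205B -- in line with the
# 32B / 70B / 100B+ / 200B+ tiers quoted above.
```

Doubling the stack doubles pooled memory, which is why each MagStack tier roughly doubles the deployable model class.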
Magnetic Alignment
Powerful neodymium magnets with precision alignment pins ensure perfect stacking
Auto-Discovery
Devices automatically form clusters via mDNS when stacked together
Distributed Inference
Pipeline parallelism splits model layers across devices efficiently
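At its core, pipeline parallelism reduces to assigning each stacked device a contiguous block of model layers. A minimal sketch of that partitioning (the function name and even-split policy are our illustration, not the ThoxOS scheduler):

```python
def split_layers(n_layers: int, n_devices: int) -> list[range]:
    """Assign each stacked device a contiguous block of model layers,
    spreading any remainder across the first devices."""
    base, extra = divmod(n_layers, n_devices)
    ranges, start = [], 0
    for d in range(n_devices):
        size = base + (1 if d < extra else 0)
        ranges.append(range(start, start + size))
        start += size
    return ranges

# A 32-layer model over a MagStack Quad: each device runs 8 layers, and
# activations flow device-to-device through the pipeline.
print(split_layers(32, 4))  # [range(0, 8), range(8, 16), range(16, 24), range(24, 32)]
```

Contiguous blocks keep inter-device traffic to one activation hand-off per boundary, which is what makes stacking practical over a local interconnect.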
MagStack™ is a trademark of THOX.ai LLC. Proprietary Technology.
Proprietary Technology Stack
Full-stack platform with custom OS, optimized AI models, and developer tools
ThoxOS™
v1.1
Purpose-built edge AI operating system
- Hardware-accelerated AI inference
- Native edge AI compute optimization
- Smart model routing for optimal performance
- Blazing-fast inference on all model sizes
- OpenAI-compatible REST API
THOX.ai Coder Models
7B / 14B / 32B variants
Custom fine-tuned models for coding assistance
- Based on Qwen3-Coder architecture
- 50+ programming languages
- 45-72 tok/s on 7B models
- Hardware-accelerated inference optimized for 14B+
- On-device model compression support
Developer Platform
APIs, SDK & Integrations
Seamless integration with existing workflows
- OpenAI-compatible REST API
- VS Code extension
- Model Context Protocol (MCP)
- Web dashboard & monitoring
- CLI tools & Ollama compatibility
Cluster-Optimized Model Library
Pre-optimized models for MagStack distributed inference
Context 96K tokens · Memory 220GB · Min Devices 4x · Speed 15-30 tok/s
Expert-level model for enterprise, research, healthcare, and legal workloads.

Context 128K tokens · Memory 810GB · Min Devices 8x · Speed 10-20 tok/s
Frontier-class model matching cloud AI capabilities for any industry application.

Context 64K tokens · Memory 140GB · Min Devices 2x · Speed 25-45 tok/s
Enterprise-grade model for complex reasoning, analysis, and professional workflows.

Context 128K tokens · Memory 19GB · Min Devices 4x · Speed 100-150 tok/s
Elite software engineering model with GPT-4o competitive performance. Supports 92 programming languages with repository-level analysis, code generation, debugging, and collaborative code review.

Context 128K tokens · Memory 243GB · Min Devices 12x · Speed 120-180 tok/s
Frontier reasoning model with state-of-the-art capabilities. Largest openly available model for research institutions, strategic consulting, financial modeling, legal research, and complex quantitative analysis.

Context 1M tokens · Memory 245GB · Min Devices 12x · Speed 30-50 tok/s
Enterprise flagship model with frontier multimodal intelligence. For Fortune 500, hospitals, universities, and government.

Context 1M tokens · Memory 24GB · Min Devices 2x · Speed 80-120 tok/s
Long-context model with 1 million token window for processing entire documents, datasets, and complex analyses. MoE architecture with 128 experts.

Context 10M tokens · Memory 67GB · Min Devices 4x · Speed 60-90 tok/s
Professional multimodal model with vision capabilities and industry-leading 10M token context. Native image understanding for healthcare, legal, and finance.

Context 128K tokens · Memory 47GB · Min Devices 6x · Speed 60-90 tok/s
Government/defense-grade model with maximum security. Supports UNCLASSIFIED through SECRET workloads with N+2 redundancy, air-gap deployment, ITAR compliance, and FedRAMP High authorization.

Context 32K tokens · Memory 6GB · Min Devices 2x · Speed 50+ tok/s
Speed-optimized model for high-volume, real-time applications. Handles 30-50+ concurrent users with <100ms latency. Ideal for customer support, call centers, and interactive applications.
Ideal Customer Profile
Who we are building for and why
Primary ICP
Healthcare Organizations
Hospitals, clinics, research institutions needing HIPAA-compliant AI for patient data analysis
Legal & Financial Services
Law firms, banks, compliance teams requiring confidential document processing
Enterprise & Technology
Companies deploying private AI for R&D, customer service, and internal operations
Secondary ICP (Expanding)
Research & Academia
Universities, research labs, and scientific institutions
Government & Defense
Agencies requiring air-gapped, classified AI environments
Creative & Media
Studios, agencies, and creators needing private content generation
Product Status
Target Shipping: December 2026
Completed
- Product design, architecture, and finalized hardware specifications
- Founders Reserve campaign live with refundable deposits and tiered MagStack bundles
- Stripe catalog wired for Nova reservations and MeshStack Founder annual pre-orders
- Device configurator and pricing (single Nova, Duo, Quad, Octuple)
- ThoxOS direction and hardware-accelerated inference architecture defined
- ThoxQuantum simulation runtime integrated into ThoxOS — live demos at /quantum-portfolio and /quantum-lab
- MeshStack public marketing surface, waitlist, and tier matrix (Free / Pro / Family / Team / Enterprise)
- Interactive cross-platform device demos for MeshStack (iOS, iPad, Android, macOS, Windows)
- MagStack™ SDK and developer tooling
- Intel Partner Alliance and NVIDIA Inception memberships
In Progress
- Hardware prototyping (EVT phase)
- Benchmarking vs comparable edge AI platforms and Apple Silicon
- Manufacturing partner selection
- MeshStack app distribution across iOS, Android, macOS, and Windows
Business Model
Phase 1: Hardware Revenue
- Nova (single): entry point for professionals and small teams
- MagStack Duo: clinics, law offices, small enterprises
- MagStack Quad / Octuple: hospitals and enterprises running 100B+ models
- THOX Edge Series™: Pro (32GB · 64 TOPS) · Pro Max (64GB · 128 TOPS) · Pro Ultra (128GB · 256 TOPS), workgroup to enterprise rack. Launches right after the Nova flagship ships; not part of Founders.
Phase 2: MeshStack™ Recurring Revenue
- Free / Pro ($19/mo) / Family ($39/mo) / Team ($99/seat/mo) tiers — public on /meshstack
- Founder annual pre-orders — Pro $99/yr, Family $249/yr, Team $599/seat/yr (locked for life)
- Every paired Nova grants Pro-equivalent capability inside the mesh, anchoring hardware buyers into the SaaS layer
- Enterprise tier — custom contracts, audit support, SSO roadmap
Phase 3: Software & Services for Scale
- Fleet and cluster management software (SaaS)
- Industry-specific enterprise support and SLAs
- HIPAA, GDPR, SOC2 compliance packages
- Custom model fine-tuning and deployment services
- ThoxMigrate™ — automated cloud-to-edge migration platform with gated access tiers
Long-term value accrues by owning the AI workflow across every privacy-sensitive industry — hardware seats the customer, MeshStack monetizes them, ThoxMigrate captures cloud-AI migration revenue at the platform layer, and services lock the enterprise.
Market Opportunity
Market Drivers
Why Not Alternatives?
| Alternative | Limitations | THOX Advantage |
|---|---|---|
| Cloud AI (OpenAI, Anthropic) | Variable cost, latency, privacy concerns, rate limits | Predictable $0/month after purchase, <50ms latency, 100% private |
| Apple Silicon Macs | Poor clustering, consumer-grade thermals, limited deployment control | MagStack clustering up to 8 devices, enterprise-grade, purpose-built |
| Raw edge AI compute modules | Requires integration work, no turnkey solution, limited software | Complete platform with OS, APIs, and developer tools |
| Server GPUs | Expensive ($10K+), high power (300W+), complex deployment | Edge-optimized, 25W typical, desk-friendly form factor |
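The "$0/month after purchase" advantage is at bottom a breakeven argument: the only recurring cost of owned hardware is electricity. With illustrative numbers (the $200/month cloud spend and $0.15/kWh rate below are our assumptions; the 25W draw and $899 price point come from this page):

```python
def breakeven_months(device_cost: float, monthly_cloud_spend: float,
                     watts: float = 25.0, usd_per_kwh: float = 0.15) -> float:
    """Months until owned edge hardware undercuts a recurring cloud bill."""
    monthly_power_cost = watts / 1000 * 24 * 30 * usd_per_kwh  # ~$2.70 at 25W
    return device_cost / (monthly_cloud_spend - monthly_power_cost)

# An $899 Nova vs. a hypothetical $200/month cloud inference bill:
print(round(breakeven_months(899, 200), 1))  # ~4.6 months
```

After breakeven the marginal cost of inference is a few dollars of power per month, which is the economic core of the edge-over-cloud thesis above.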
Defensibility
Current Moats
Hardware-Software Integration
ThoxOS + hardware-accelerated inference + custom models
MagStack Proprietary Technology
Unique modular scaling approach
First-Mover in Category
Purpose-built developer inference device
Building Toward
Developer Ecosystem
VS Code extension, CLI tools, API compatibility
Model Library
THOX.ai Coder and optimized model collection
Fleet Management
Enterprise software layer for lock-in
Defensibility increases significantly with software layer adoption.
Platform & Ecosystem Partners
Independent third-party validation and direct access to silicon-vendor ecosystems.
Intel Partner Alliance
Standard Partner
Access to Intel optimization tooling, partner ecosystem support, and joint go-to-market resources for edge AI workloads.
NVIDIA Inception
Member · Early-Stage Startup Program
Early-stage program providing technical resources, SDK access, and hardware/software co-optimization support relevant to inference performance.
Intel and the Intel logo are trademarks of Intel Corporation. NVIDIA and NVIDIA Inception are trademarks of NVIDIA Corporation. Use of these marks is governed by each program's brand guidelines.
Investor FAQ
Honest answers to the hard questions
Interested in Investing?
Fill out the form below and our investor relations team will be in touch.