Vision

We put the power of compute, storage, and AI
back in the hands of the people.

The same TsugiNode that drives your TV to its full potential is also a node in a residential delivery, storage, and compute grid. Nodes join as a side effect of wanting the picture. No crypto-speculation cold start. The architecture is the long bet.

The thesis

Residential edge wins
on the margin that matters.

The data center wins on raw density. It loses on power, cooling, latency to the user, and consumer agency. Most of the time those four are footnotes. For consumer-facing video and an entire class of B2B inference workloads, they are the whole game.

A residential node is paid for by the homeowner who wanted a better TV. Its power is billed at local utility rates, not at hyperscaler rates. Its cooling is the ambient room. Its latency to the viewer is the speed of light across one ZIP code, not across two coasts.

The data center will not go away. The argument is that the marginal next unit of consumer video and a meaningful slice of consumer-tier inference belong on the edge, not in another concrete shell with its own substation.

Power

Paid by homes, not by hyperscalers buying substations.

Cooling

Ambient. No chillers, no water tower, no scaling cliff.

Latency

Last-mile, not last-coast. Speed of light across one ZIP code.

Agency

The operator, not the platform, decides which jobs run.

How the grid composes

Every TsugiNode is
a delivery peer, a cache, and a compute node.

Figure: residential edge mesh. A central origin node connected to six residential edge nodes by gold seams, with dashed edge-to-edge peering lines.
Phase 1

Delivery peer

Edge CDN. Reduces origin egress and improves last-mile latency for the viewer next door.

Phase 1

Storage cache

Operator-controlled. Holds gifted unlocks, master-quality preloads, and cohort content close to where it will play.

Phase 2

Compute node

Spare cycles for AI inference and creative tooling. The operator picks which workloads run.

Phase 2

B2B compute

Research, studio, and scientific workloads. Active interest from the computational biology research community.

Operators earn credits weighted by hardware trust tier and content tier. The credit ledger settles in fiat. Nothing in this design requires a coin, and nothing about joining the grid asks the operator to speculate on one.
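The credit model above can be sketched in a few lines. Everything here is an illustrative assumption: the tier names, the multipliers, and the fiat rate are hypothetical placeholders, not published TsugiNode values. Only the shape matches the text: credits weighted by hardware trust tier and content tier, settled in fiat, no coin anywhere.

```python
# Hypothetical sketch of the credit model. Tier names and weights are
# illustrative assumptions, not TsugiNode's actual parameters.

HARDWARE_TRUST_WEIGHT = {"unverified": 0.5, "attested": 1.0, "sealed": 1.5}
CONTENT_TIER_WEIGHT = {"standard": 1.0, "master": 1.3}

def earned_credits(work_units: float, hw_tier: str, content_tier: str) -> float:
    """Credits for a unit of delivery, storage, or compute work,
    weighted by the node's trust tier and the content tier it served."""
    return work_units * HARDWARE_TRUST_WEIGHT[hw_tier] * CONTENT_TIER_WEIGHT[content_tier]

def settle_fiat(credits: float, fiat_rate: float) -> float:
    """The ledger settles in fiat at a posted rate; no token involved."""
    return round(credits * fiat_rate, 2)

credits = earned_credits(100, "attested", "master")   # 130.0 credits
payout = settle_fiat(credits, fiat_rate=0.01)         # 1.3 (fiat units)
```

The point the design makes is visible in the code: nothing in the settlement path requires a coin, only a weight table and a posted rate.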

The consumer's choice

Operators choose
who their compute serves.

The platform does not capture the operator's spare-compute decision. The operator chooses which artists, studios, and research customers receive their cycles. The platform's job is to keep the marketplace honest and to settle credits.

Two B2B verticals are nearest at hand. The first is computational biology, where there is active interest in non-trivial image-workload runs on the grid once it is alpha-ready. The second is consumer-tier AI inference at scale, where the latency and power profile of a residential node beat a hyperscaler hop on the workloads that fit.

The point is not that the grid replaces the data center. The point is that the operator, not the platform, gets to decide whose work runs on hardware they own.
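The routing rule above, operator decides, platform only settles, amounts to an opt-in filter on the node. A minimal sketch, assuming hypothetical job and policy fields (the customer names and vertical labels are invented for illustration):

```python
# Hypothetical sketch: the operator's node accepts a job only if the
# operator opted in to both the customer and the class of work.
# Field names and example customers are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Job:
    customer: str   # e.g. a studio or a research lab
    vertical: str   # e.g. "comp-bio", "consumer-inference"

@dataclass
class OperatorPolicy:
    allowed_customers: set
    allowed_verticals: set

    def accepts(self, job: Job) -> bool:
        # The platform routes; the operator's policy decides.
        return (job.customer in self.allowed_customers
                and job.vertical in self.allowed_verticals)

policy = OperatorPolicy({"lab-a"}, {"comp-bio"})
queue = [Job("lab-a", "comp-bio"), Job("adnet-x", "consumer-inference")]
accepted = [j for j in queue if policy.accepts(j)]  # only lab-a's job runs
```

The platform's marketplace sits outside this function: it can verify, match, and settle, but it cannot override the policy running on hardware the operator owns.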

The architectural rule

The operator owns the box. The operator owns the choice of what runs on it. The platform owns the rails, not the routing.

The architecture argument

Large private AI is consolidating
into a small number of data centers.

That trend is real, expensive, and not stopping. It is also not the only viable shape for the next decade of compute. There is an alternative that scales residentially, on hardware that pays for itself by doing something the household already wanted.

The same TsugiNode that resolves a 12-bit master to your panel can serve a neighbor's playback, hold a creator's catalog close to the viewers who paid for it, and run a tranche of inference on behalf of a researcher the operator chose to support. Each of those is a small unit. The grid is what makes them add up.

That is the long bet. Compute, storage, and AI back in the hands of the people, on hardware they own, paying them for the picture they wanted in the first place.

The substrate

Three filed US provisional patents.
One working pipeline.

Trinity V4.4

Foundational synchronization for the dual-decoder context. Filed 2026-02-19.

Dual-Layer Compression

4K 12-bit 4:4:4 in a 30 to 38 GB file, forty percent smaller than AV1 at the same perceptual quality. Filed 2026-05-01.

Infinity

Plesiochronous gradient consensus. The substrate for the residential compute grid. Filed 2026-05-01.

Specifications are filed at the USPTO. External patentability and value assessments have been completed for all three. The grid is not a sketch on a napkin.

Buy the box for the picture.
Join the grid just by using it.

Phase 0 is the catalog and the box. Phase 1 is the delivery and storage grid. Phase 2 is consumer-owned compute. The waitlist is the on-ramp for all three.