From the desk · Methodology

Why Pilot Wave Allocation Outperforms at Scale

Classical factor models describe a market in which the past predicts the future in a few orthogonal directions. Real markets have a geometry that factor models cannot see, and that geometry becomes increasingly expensive to ignore as portfolios scale.

What follows is a structural comparison, not a marketing case. Where the pilot-wave architecture and the factor tradition give the same answer, the advantage is academic. Where they give different answers — correlation spikes, regime transitions, the boundary between the allocation an allocator intended and the one a stressed market forces on them — the advantage compounds with AUM. This essay walks through where those divergences appear, and why they widen rather than narrow at scale.

The factor model's hidden assumption

A standard factor model — Fama-French, Barra, or any modern descendant — is a projection. It takes a return stream and decomposes it along a small number of pre-specified axes: market, size, value, momentum, quality. Each axis is treated as independent of the others. The assumption is that if you can attribute a return to those axes, you have understood it.
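The projection can be made concrete in a few lines. The sketch below regresses a synthetic return stream onto three synthetic factor series via ordinary least squares; the factor names, loadings, and data are illustrative stand-ins, not estimates from any real market.

```python
import numpy as np

# Toy sketch of the projection a factor model performs: regress one
# return stream onto a few pre-specified factor return series.
# All series here are synthetic; the "factors" are illustrative only.
rng = np.random.default_rng(0)
n_days = 500
factors = rng.normal(0, 0.01, size=(n_days, 3))   # e.g. market, size, value
true_betas = np.array([1.0, 0.3, -0.2])
returns = factors @ true_betas + rng.normal(0, 0.005, size=n_days)

# Least-squares projection: the betas attribute the return to each axis,
# and whatever the axes cannot express lands in the residual.
X = np.column_stack([np.ones(n_days), factors])    # intercept + factors
betas, *_ = np.linalg.lstsq(X, returns, rcond=None)
residual = returns - X @ betas

print("estimated betas:", np.round(betas[1:], 2))
print("R^2:", round(1 - residual.var() / returns.var(), 2))
```

Everything the essay goes on to criticize lives in that residual term: the projection is only as good as the axes, and the axes are fixed in advance.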

This assumption holds approximately, most of the time, in calm markets. It breaks at precisely the moments that determine long-run portfolio outcomes: regime transitions, correlation spikes, liquidity events. And when it breaks, the model does not warn anyone — it keeps producing output in the same format.

At those moments the axes rotate. What was a "value" return last week is a "defensive" return this week and a "liquidity risk" return next week. The projection continues to print numbers, but the numbers no longer refer to the same thing. An allocator running a pure factor book in March 2020 discovered this in real time: the risk-factor loadings on the screen stopped corresponding to the moves on the tape four to six business days before the model's own attribution caught up.

What pilot-wave allocation does differently

A pilot-wave model does not start from factors. It starts from a different mathematical object: a configuration space of all possible portfolio states, equipped with a wavefunction that measures which states are likely and which are unlikely given the current market.

The wavefunction is not a prediction. It is a map. It tells you where in configuration space the portfolio is currently sitting, which directions are smooth and which are cliff-faced, and where the nearby attractors are. The allocation decision is a trajectory through that map, guided by a deterministic equation that treats the wavefunction as a field.
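A minimal one-dimensional sketch of "trajectory guided by a field": treat |psi|^2 as a two-well probability map over a single configuration coordinate and follow the gradient of its logarithm. The density, the drift rule, and the step size below are toy choices for illustration; they are not the production guidance equation.

```python
import numpy as np

# Toy 1-D sketch: |psi|^2 over one configuration coordinate, shaped as
# two wells (regimes) at x = -1 and x = +1, the right one deeper.
# The trajectory follows the gradient of log|psi|^2 deterministically.
def log_density(x):
    return np.log(0.4 * np.exp(-(x + 1)**2 / 0.1)
                  + 0.6 * np.exp(-(x - 1)**2 / 0.1))

def grad(f, x, h=1e-5):
    # Central-difference gradient of the field at x.
    return (f(x + h) - f(x - h)) / (2 * h)

x, step = 0.2, 0.01          # start between the wells, slightly right
for _ in range(2000):
    x += step * grad(log_density, x)

print(round(x, 2))            # the trajectory settles into the well near +1
```

The point of the sketch is the division of labor: the field encodes where the likely states are, and the trajectory is a deterministic response to that field rather than a statistical estimate recomputed after each move.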

Three properties of this construction matter for institutional scale:

Property 1 — Non-locality

The pilot-wave architecture is explicitly non-local. The state of one asset influences the trajectory of every other asset in the portfolio through a shared wavefunction, even when the classical correlation is near zero. At small portfolio sizes this is a theoretical curiosity. At $500M+ of AUM across hundreds of positions, it is the difference between a portfolio that rebalances smoothly and one that produces surprise drawdowns during regime transitions.
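The mechanism can be shown with a toy joint density: one whose linear correlation is zero by symmetry, but whose shared structure still makes the drift on one coordinate depend on where the other coordinate sits. The density below is chosen only to make that dependence visible, not to model any real pair of assets.

```python
# Toy illustration of non-locality through a shared density: the drift
# on coordinate x1 changes with x2 even though the linear correlation
# of this density is exactly zero (it is even in x1 alone).
def log_density(x1, x2):
    return -(x1**2 + x2**2) / 2 - (x1 * x2)**2   # coupled, but corr = 0

def drift_x1(x1, x2, h=1e-5):
    # Central-difference gradient of the shared field along x1.
    return (log_density(x1 + h, x2) - log_density(x1 - h, x2)) / (2 * h)

# Same x1, different x2: the guidance on "asset 1" depends on "asset 2".
print(drift_x1(0.5, 0.0))   # about -0.5
print(drift_x1(0.5, 1.0))   # about -1.5
```

A pairwise correlation matrix would report these two coordinates as unrelated; the shared field does not.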

Property 2 — Regime-native geometry

The wavefunction has structure — wells, saddles, barriers — that directly correspond to market regimes. A regime transition in a factor model is a statistical event that the model must detect after the fact. A regime transition in a pilot-wave model is a topological feature of the configuration space that the allocation engine is navigating in real time. The detection is structural rather than statistical.

Property 3 — Tail-aware by construction

Classical factor models are Gaussian at heart. Their tails are thin by assumption, and correcting for fat tails requires grafting on an adjustment the base model does not produce on its own. A pilot-wave architecture carries fat tails natively: the mathematics yields a crash-likelihood term that is part of the model's own structure, not a tail percentile stitched on afterwards. The practical consequence is that the crash signal has the same provenance as the allocation — one internally-consistent object produces both — instead of coming from a separate module with its own assumptions an allocator has to reconcile.
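The thin-tail assumption is easy to quantify. The sketch below compares the frequency of 4-sigma moves under a normal model and under a fat-tailed alternative (a unit-variance Student-t with 5 degrees of freedom). The Student-t here stands in for any fat-tailed law; it is not the pilot-wave model's crash-likelihood term.

```python
import numpy as np

# Under a Gaussian model, 4-sigma days are vanishingly rare; under a
# fat-tailed alternative with the same variance, they are routine.
rng = np.random.default_rng(1)
n = 1_000_000
gauss = rng.standard_normal(n)
t_fat = rng.standard_t(df=5, size=n) / np.sqrt(5 / 3)   # scaled to unit variance

gauss_4sig = np.mean(np.abs(gauss) > 4)
fat_4sig = np.mean(np.abs(t_fat) > 4)
print(f"P(|r| > 4 sigma): normal ~{gauss_4sig:.1e}, fat-tailed ~{fat_4sig:.1e}")
```

The gap is more than an order of magnitude, which is exactly the gap a grafted-on tail adjustment is asked to close from outside the base model.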

The question is not whether factor models are wrong. Most of the time, in most markets, they are not. The question is whether they are a coarse enough approximation that their errors compound in the regimes that matter — and at scale, the answer is yes.

What modern factor models get right

It is fair to note that a competent modern factor book is not the 1993 Fama-French implementation. Allocators running Barra-style multi-factor frameworks with hierarchical risk parity overlays, dynamic correlation regimes, and explicit tail-risk budgets are not ignoring any of the problems above. They are patching them — one patch at a time, as each becomes salient.

That workflow has real virtues. It is interpretable. Each patch has a literature behind it, a sign-off process, and a specific operational failure mode it was introduced to address. An allocator who has staffed up a quant team over a decade has every reason to prefer this accretion over a rebuild.

The argument for a pilot-wave architecture is not that those patches are wrong. It is that the patches are expensive to maintain, each imposes its own data requirements, and the composite system's behavior at the boundary between patches is harder to reason about than either the individual patches or a single internally-consistent model. At a certain portfolio scale, the coordination cost of the patched stack exceeds the cost of adopting an alternative that handles the underlying problems structurally.

Where the scaling advantage appears

At $10M of AUM, a factor-model-driven allocation and a pilot-wave-driven allocation will often produce similar trajectories. The universe is small, rebalancing is cheap, and the approximation errors of the factor model are in the noise.

The divergence becomes material above three inflection points:

Above a certain scale, the choice of model is no longer a matter of style. It is a matter of which errors compound fastest.

The honest limits

The case above is the case a partisan would make. A useful partisan also names where their argument does not apply. A pilot-wave architecture is not a universal solvent.

If an allocator's mandate does not touch any of the scenarios in the earlier list, the right response to the pilot-wave argument is not to adopt it. It is to file it as a question to ask again in eighteen months, when the scale or mandate has changed.

What this looks like in practice

Take a scenario a multi-asset allocator is familiar with: a correlation regime shift, mid-week, where equity-bond correlation flips from the negative value that has held for eighteen months to a positive value that will hold for the next three months. The tape shows it on Tuesday. A factor-model attribution shows it on Friday. An allocator-facing dashboard shows it the following Monday morning when the weekly risk report circulates.

In the factor-model world, the Tuesday-through-Friday window is a period of systematic misattribution. Positions that were held for "diversification" according to Monday's correlation matrix are no longer diversifying; they are co-moving. The book's realized volatility overshoots the predicted volatility. The risk-parity overlay, seeing higher realized vol, de-risks — at the exact moment the correlation shift is stabilizing. The turnover cost is paid precisely when the benefit it was paying for (diversification) has already evaporated.
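The misattribution window can be reproduced synthetically: flip the true correlation of two simulated series mid-sample and watch a rolling-window estimate, the kind a periodic risk report is built from, cross zero only well after the flip. The series, the flip date, and the 60-day window are all illustrative choices; a shorter window shrinks the lag but raises the noise.

```python
import numpy as np

# Synthetic correlation regime shift: true "equity-bond" correlation
# flips from -0.5 to +0.5 at day 500, but a 60-day rolling estimate
# turns positive only after enough post-flip days enter the window.
rng = np.random.default_rng(2)
n, flip, window = 1000, 500, 60
rho = np.where(np.arange(n) < flip, -0.5, 0.5)
z1, z2 = rng.standard_normal(n), rng.standard_normal(n)
eq = z1
bd = rho * z1 + np.sqrt(1 - rho**2) * z2          # correlated draws

roll = np.array([np.corrcoef(eq[i - window:i], bd[i - window:i])[0, 1]
                 for i in range(window, n)])
days = np.arange(window, n)
first_positive = days[np.argmax((days > flip) & (roll > 0))]
print(f"true flip at day {flip}; rolling estimate turns positive "
      f"at day {first_positive}")
```

Every day inside that lag is a day the book is risk-managed against a correlation matrix that no longer describes the market.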

In a pilot-wave architecture, the same days look different. The configuration-space geometry is itself a function of the joint correlation structure; when that structure starts to shift, the geometry tilts. The allocation trajectory smoothly redirects along the tilted gradient. The book is not rebalancing against yesterday's correlation matrix; it is navigating today's geometry. The turnover happens earlier, is smaller per unit of rebalance, and — critically — is not triggered by realized-volatility thresholds that only fire after the damage is in the return stream.

Whether this produces a better outcome in any specific instance is an empirical question about that instance. What is not specific to any instance is the structural difference: the factor stack reacts to statistics computed on returns that have already happened; the pilot-wave stack navigates the geometry those returns are being produced by. Averaged over many regime transitions, the two workflows do not converge.

The statistical shadow of the difference

Across a sufficiently long comparison window, the two workflows leave different statistical fingerprints on the same nominal strategy:

These are signatures, not proofs. An allocator evaluating them should treat them the way they treat any other claimed quant-strategy pattern: as a hypothesis that should hold up in their own data, on their own universe, under their own out-of-sample discipline. The argument of this essay is that the mechanism producing these signatures is well-defined and structural, not that any specific magnitude will survive replication without work.

What this means for allocators

The pragmatic question is not whether a pilot-wave architecture is "better" in the abstract. The pragmatic question is whether it produces materially different decisions than a factor architecture in the scenarios where allocators most need a materially different answer.

The scenarios where it does:

The scenarios where it does not materially differ:

For an allocator whose mandate is dominated by the first list rather than the second, the architecture choice is a material one, not a stylistic one. For an allocator whose mandate lives in the second list, the incremental value is mostly theoretical.

A question to ask before any rebuild

An allocator evaluating whether any of this matters for their book does not need to read a paper to find out. They need to answer one question about their own history:

In the last three regime transitions in your portfolio's history — the three episodes where you wish, looking back, that the book had been positioned differently — did your model produce the trade you actually wanted, or did it produce the trade that was mechanically next?

If the answer is "the trade I actually wanted, most of the time," then the current stack is solving the relevant problem and the structural upgrade argument in this essay is not for you. If the answer is closer to "the trade that was mechanically next," and that pattern is consistent across transitions rather than a single unlucky draw, then the case for moving to a model where the geometry is an input rather than a residual is no longer theoretical. It is an operational question about how many more of those episodes the mandate can absorb.

Nothing in this argument requires an allocator to accept a particular theoretical construction. It requires only that an allocator evaluate, honestly, whether their current tooling handles the regimes that historically dominate their long-run returns, or whether it handles the regimes in between those transitions — which are where the returns are reported, but not where they are earned or lost.

The live model portfolio runs on this architecture

Our live ensemble model portfolio, weekly desk notes, and rebalancing triggers all emerge from the pilot-wave stack described above.
