In physics, half-life is the time it takes for half of a radioactive sample to decay. In portfolio management, it is the time it takes for half of a signal's predictive value to disappear into the tape. The distance between those two definitions is where most alpha is lost.
Consider a mid-sized family office that runs a diversified multi-asset portfolio. A research note arrives on a Tuesday morning. It flags deteriorating breadth in small caps, a tightening of cross-asset correlations, and a pickup in realized volatility. By any reasonable standard, the signal is actionable: it is quantitative, specific, and unambiguous about direction.
What happens next is the problem. An analyst reads the note and summarizes it for the investment committee. The committee, which meets twice a week, convenes on Thursday afternoon. A decision is taken on Friday to reduce small-cap exposure by three percent. The trade is routed through the custodian's rebalancing window and lands in the market the following Tuesday.
Seven calendar days. The signal that arrived on Tuesday morning has been sitting in the building for seven days before any capital has moved. In that window the market has repriced three times over. The signal's information content — its ability to predict a path the market has not yet taken — has decayed by an order of magnitude.
Half-life is the binding constraint, not signal quality
Most conversations about portfolio intelligence focus on the signal itself. Is the model well-specified? Does the backtest hold up on out-of-sample data? Is the Sharpe ratio above 1?
These are the wrong questions, or at least the wrong first questions. The right first question is: what is the half-life of this signal? — and immediately after that, what is the decision latency of the institution receiving it?
If the signal half-life is two trading sessions and the institutional decision latency is seven calendar days, the signal is worthless before it is ever acted on. It doesn't matter how good the model is. The information has decayed past the point of profitability by the time capital arrives.
A signal with a one-week half-life delivered to an institution with a one-month decision cycle is not a signal. It is a historical artifact.
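The arithmetic behind this claim is worth making explicit. A minimal sketch, assuming the simplest idealized model of signal decay (exponential, with a constant half-life — the function name and figures here are illustrative, not from the original):

```python
def remaining_information(latency_days: float, half_life_days: float) -> float:
    """Fraction of a signal's predictive value left after a given latency,
    under idealized exponential decay: value halves every half-life."""
    return 0.5 ** (latency_days / half_life_days)

# A two-session half-life met with five trading days of decision latency:
print(round(remaining_information(5, 2), 3))  # ~0.177: under a fifth survives

# A one-week half-life met with a one-month (~21-session) decision cycle:
print(round(remaining_information(21, 5), 3))  # ~0.054: a historical artifact
```

Real decay curves are rarely this clean, but the shape of the conclusion is robust: latency costs compound geometrically, not linearly, in units of half-lives.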
Where decision latency actually lives
Decision latency is not a single number. It is the sum of several components, and most institutions can only measure one or two of them with any accuracy.
Component 1 — Signal arrival to internal consensus
The time between a signal entering the building and the investment committee agreeing that it should be acted on. For most wealth managers this is measured in days, sometimes weeks.
Component 2 — Consensus to execution authorization
The time between an agreed-upon trade and the release of an execution authorization. Compliance review, risk sign-off, custodian notification. Typically one to three business days.
Component 3 — Execution window
The time between authorization and the order actually reaching the market. For managers who batch trades into weekly or monthly rebalancing windows, this can be another five to twenty business days.
Component 4 — Fill to settled position
The time between the first fill and the full intended position being in place. For large rotations executed via TWAP or VWAP, this can be another one to three days.
Summed across all four components, the median decision latency at traditional wealth-management institutions is approximately seven to fifteen business days from signal arrival to full position.
The median half-life of a quantitative equity signal of moderate frequency is approximately two to five trading sessions.
The gap is a factor of three to seven. By the time capital moves, three to seven half-lives have elapsed, leaving somewhere between an eighth and less than one percent of the signal's original predictive value.
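To see how the four components compound, here is a sketch of a latency budget using midpoint figures for each component. The component names are from the text above; the specific numbers are assumptions chosen inside the stated ranges:

```python
# Illustrative latency budget (all figures in business days, assumed midpoints)
latency_components = {
    "arrival_to_consensus": 3.0,        # Component 1: days, sometimes weeks
    "consensus_to_authorization": 2.0,  # Component 2: one to three days
    "execution_window": 5.0,            # Component 3: five to twenty days
    "fill_to_settled": 2.0,             # Component 4: one to three days
}
total_latency = sum(latency_components.values())  # 12.0 business days

half_life = 3.0  # mid-range signal half-life, in trading sessions
elapsed_half_lives = total_latency / half_life    # 4.0 half-lives elapse
value_remaining = 0.5 ** elapsed_half_lives       # ~6% of predictive value left
print(total_latency, elapsed_half_lives, round(value_remaining, 3))
```

Even with midpoint assumptions, roughly 94% of the signal's predictive value is gone before the position is fully in place.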
Shortening decision latency is a software problem
The solution is not to find better signals. It is to shorten the distance between signal generation and capital movement. This distance is almost entirely a software problem, and it is one that the legacy wealth-management stack cannot solve because it was built around human decision-making cadences.
The architectural changes that compress decision latency:
- Continuous signal computation. Signals are refreshed on every new market tick, not on an analyst's review schedule. There is no batch cycle, no Tuesday-morning note. The signal is a live number.
- Pre-authorized execution envelopes. The investment committee agrees in advance on a bounded set of rebalancing actions the architecture may take autonomously — within fixed risk limits, within a whitelist of instruments, within a maximum position change per session. This removes compliance review from the decision path.
- Direct API routing to execution. Orders route from the decision engine to the broker without a manual touch. The custodian is reconciled post-hoc rather than pre-authorized on each trade.
- Continuous rather than windowed rebalancing. Trades execute when the signal says to trade, not on the first Thursday of the month.
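The pre-authorized execution envelope is the piece that most often draws skepticism, so it helps to see how small it is in code. A minimal sketch, with hypothetical field names and limits — the actual bounds would be whatever a given committee signs off on:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ExecutionEnvelope:
    """Bounds the committee pre-authorizes once (illustrative fields)."""
    whitelist: frozenset          # instruments the engine may trade
    max_weight_change: float      # max absolute weight change per session
    max_gross_exposure: float     # portfolio-level gross exposure cap

def within_envelope(env: ExecutionEnvelope, symbol: str,
                    weight_change: float, resulting_gross: float) -> bool:
    """A proposed trade executes autonomously only if every bound holds;
    anything outside the envelope escalates to the committee."""
    return (symbol in env.whitelist
            and abs(weight_change) <= env.max_weight_change
            and resulting_gross <= env.max_gross_exposure)

env = ExecutionEnvelope(frozenset({"IWM", "SPY"}), 0.03, 1.0)
print(within_envelope(env, "IWM", -0.03, 0.95))  # True: inside all bounds
print(within_envelope(env, "IWM", -0.05, 0.95))  # False: exceeds per-session cap
```

The compliance review does not disappear; it moves from the per-trade critical path to the quarterly envelope definition.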
Each of these changes is available today. None of them are technically difficult. All of them are absent from the overwhelming majority of institutional wealth-management platforms, because the platforms are structured around the cadence of human committees rather than the cadence of market information.
The bottleneck is not the model. The bottleneck is the seven days between the model being right and the portfolio being allowed to act on it.
What this looks like when it is solved
Quark's reference implementation runs its full signal stack on every market tick for every asset in its universe. When a regime-transition signal crosses its decision threshold, the allocation engine computes a new target weight, the executor routes the resulting trades to a broker API, and the portfolio is in its new posture within minutes of the threshold being crossed.
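The control flow described above — threshold crossing, target computation, order routing — reduces to a short event handler. This is a generic sketch of that pattern, not Quark's implementation; the callback names and the stand-in allocation and routing functions are assumptions:

```python
def on_tick(signal_value, threshold, compute_target_weights,
            route_orders, current_weights):
    """Event-driven rebalance: act the moment the threshold is crossed.
    Returns the routed weight changes, or None if no threshold was crossed."""
    if abs(signal_value) < threshold:
        return None  # signal below threshold: no action this tick
    targets = compute_target_weights(signal_value)
    orders = {sym: targets[sym] - current_weights.get(sym, 0.0)
              for sym in targets}
    route_orders(orders)  # broker API call in a real system
    return orders

# Demo: signal 0.8 crosses threshold 0.5; target weight 25% vs current 50%
trades = on_tick(
    signal_value=0.8, threshold=0.5,
    compute_target_weights=lambda s: {"IWM": 0.25},  # stand-in allocation engine
    route_orders=lambda orders: None,                # stand-in broker API
    current_weights={"IWM": 0.50},
)
print(trades)  # {'IWM': -0.25}
```

Note what is absent: there is no meeting, no batch window, and no manual approval step anywhere between the threshold crossing and the order.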
The investment-committee equivalent is a risk-budget and universe-whitelist configuration that the committee signs off on once per quarter. Day-to-day decisions happen below that envelope. The committee's role shifts from trade-by-trade approval to setting the boundaries within which the architecture operates.
This is not a small change in how wealth management works. It is a different organizational architecture entirely — and it is the one that extracts the full value of modern signal research rather than watching most of it decay in a review queue.
Practical implications for allocators
If you are evaluating a quantitative research partner, the questions to ask are not only about the research itself:
- What is the median half-life of the signals you produce?
- What is the latency between a signal changing and the downstream subscriber receiving it?
- What is the minimum round-trip from signal to execution for a subscriber who wanted to act on it immediately?
- Do you produce the signal, or do you produce the signal and the execution architecture to consume it?
The answers to those four questions will tell you more about whether a research partner will deliver durable alpha than any number of backtest graphs.
See where Quark's signals sit on the half-life spectrum
Our daily signal dashboard, weekly desk notes, and live model portfolios are designed around a decision cadence closer to the signal half-life than to the traditional committee cycle.
View Live Performance · Follow on X