
Artificial intelligence was supposed to make markets smarter. The standard argument has been simple: if investors can process more data, read more filings, test more scenarios, and react more quickly to changing information, then markets should become more efficient. Prices should adjust faster. Mispricings should shrink. Human emotion should matter less. Alpha should become harder to find.
But a different view is now gaining traction across Wall Street’s most sophisticated trading platforms: AI may make markets faster without necessarily making them more efficient.
That is the emerging warning from the quantitative-investment world, where Osman Ali, global co-head of quantitative investment strategies at Goldman Sachs Asset Management, recently argued that AI could create more predictable, more correlated responses across investment models. His point was not that AI is useless. It was that when many investors ask similar models similar questions, those models may generate similar answers — and similar trades. Goldman’s own discussion framed the issue bluntly: “Will AI make markets less efficient?”
For hedge funds, pod shops, quant managers, market makers, and institutional allocators, that is a major shift in how AI should be understood. The risk is not simply that machines trade faster. The risk is that machines increasingly trade alike.
That is where the Citadel angle becomes important. Citadel has been one of the clearest examples of how elite hedge funds are embedding AI into the research and investment process. Reuters reported that Citadel launched an AI assistant for equities investors, trained on licensed third-party content such as filings, transcripts, brokerage research and the firm’s own investment strategies. Citadel CTO Umesh Subramanian said the tool helps investors highlight risks, generate tailored reading lists and accelerate research, while emphasizing that final investment judgment remains with humans.
That distinction — AI as a research amplifier rather than a replacement for judgment — is now central to the debate. The most powerful investment firms are not simply asking AI to pick stocks. They are using it to compress research time, surface signals, standardize analysis, monitor risks and test investment theses at scale. In the right hands, that can be a formidable edge. In the wrong market environment, it can also become a source of crowding.
The paradox is that the more widely AI tools are adopted, the more likely they are to identify the same factors, the same narratives and the same risk signals. If dozens or hundreds of sophisticated investors are training models on similar earnings transcripts, macro data, filings, analyst notes, price histories and alternative data feeds, the output may converge. A model does not need to be identical to another model to produce correlated behavior. It only needs to weigh the same signals at the same time.
That is why AI could make markets more fragile even as it makes investors more productive.
The concern is not theoretical. Crowded trades have already become one of the defining risks in the modern hedge fund ecosystem. In early 2026, several reports pointed to pressure across systematic and quant strategies as crowded U.S. equity positions faltered. Investing.com reported that quant hedge funds began the year in the red after losses in crowded U.S. stocks, citing Goldman prime-brokerage data that described the first ten trading days of January as the worst period for systematic long-short equity managers since October.
The same problem showed up again during the AI-led tech selloff. Reuters reported in February 2026 that hedge funds suffered their worst trading day in nearly a year as technology stocks sold off sharply, with Goldman describing the move as a momentum event where funds rushed to exit concentrated long positions. Multi-strategy funds, systematic strategies and fundamental stock pickers all felt the pressure.
That is the environment into which AI is now being deployed: not a clean, frictionless market, but a highly levered, highly competitive ecosystem where the same themes can attract enormous capital. AI does not eliminate that dynamic. In some cases, it may intensify it.
The most obvious example is the AI trade itself. Over the past several years, investors have crowded into semiconductors, cloud infrastructure, data centers, power demand, software productivity and other AI-linked themes. Goldman Sachs’ own market commentary has repeatedly focused on the breadth and durability of the AI trade, including how AI disruption is driving sector rotations and creating new investment opportunities beyond the obvious megacap technology names.
But when a trade becomes too dominant, it can begin to behave less like a fundamental thesis and more like a positioning problem. That is especially true when hedge funds, ETFs, retail flows, options activity, quant signals and factor models all reinforce the same direction. A stock or sector may still have strong long-term fundamentals, but near-term price action can become vulnerable to forced selling, volatility spikes and liquidity gaps.
AI makes this problem more complex because it can increase the speed at which crowded trades form and unwind. In the past, a fundamental thesis might spread through analyst reports, conferences, earnings calls and manager meetings. Today, models can ingest the same information instantly, summarize the same conclusions and flag the same trades across thousands of desks. That does not mean every investor will trade at once. But it does mean narrative convergence can happen faster.
In liquid markets, speed is usually celebrated. Faster information processing should reduce inefficiencies. But speed also reduces the time available for disagreement to develop. If many models identify the same “surprise,” “risk,” “inflection,” or “downward revision” at the same time, the market can jump before human investors fully assess whether the reaction is justified.
That is the new flash-volatility risk.
Traditional flash crashes were often associated with market plumbing: high-frequency trading, liquidity withdrawal, stop-loss triggers, exchange fragmentation, or automated order-book dynamics. The AI-era version could be more narrative-driven. A machine-readable signal emerges. Models detect it. Similar portfolios adjust. Risk systems reduce exposure. Liquidity providers widen spreads. Momentum strategies follow the move. Options dealers hedge. What began as an information event becomes a positioning event.
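That chain can be illustrated with a deliberately stylized cascade (all numbers hypothetical): a group of funds holds the same crowded position, their stop-out levels sit in a narrow band, and each forced exit pushes the price far enough to breach the next fund's threshold, turning a 2.5% information shock into a much larger positioning move:

```python
# Stylized cascade sketch (every number here is hypothetical): 100 funds
# hold the same crowded position, with stop-out levels spread evenly
# between -2% and roughly -5%. An information shock moves the price;
# every breached stop forces a sale, and each forced sale moves the
# price further, breaching more stops.
n_funds = 100
stops = [-0.02 - 0.0003 * i for i in range(n_funds)]  # -2.0% .. -4.97%

price_move = -0.025         # the initial, "fundamental" reaction
impact_per_seller = -0.001  # extra price impact from each forced exit

exited = set()
while True:
    breached = [i for i in range(n_funds)
                if i not in exited and stops[i] >= price_move]
    if not breached:
        break  # no new stops hit: the cascade has burned out
    exited.update(breached)
    price_move += impact_per_seller * len(breached)

print(f"final move: {price_move:.1%}, funds forced out: {len(exited)}")
# → final move: -12.5%, funds forced out: 100
```

In this sketch the fundamental shock accounts for only a fifth of the final move; the rest is pure positioning, which is exactly the distinction between an information event and a positioning event.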
In that kind of market, volatility is not just a measure of uncertainty. It becomes the product of synchronized interpretation.
This is where the debate over market efficiency becomes more nuanced. AI may make markets informationally efficient in the narrow sense that data is processed quickly. But it may make markets behaviorally less efficient if too many participants process that data the same way. Prices can overshoot when models converge. Correlations can rise when systems react together. Liquidity can disappear when the same risk thresholds are triggered across portfolios.
The result is a market that appears highly efficient most of the time — until it suddenly becomes unstable.
Citadel Securities’ own market commentary has warned about clustered downside moves and positioning stress. In February, the firm noted that the number of S&P 500 constituents experiencing statistically extreme downside moves had surged into the top 5% of historical observations, a pattern it said has historically coincided with positioning stress and forced deleveraging episodes.
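A crude version of that kind of breadth gauge (illustrative data and thresholds only, not Citadel Securities' actual methodology) simply counts how many index members posted a daily loss beyond some multiple of their own historical volatility:

```python
import random
import statistics

random.seed(2)

# Hypothetical daily returns for 500 index members over 250 trading days,
# standing in for real price data.
n_stocks, n_days = 500, 250
returns = [[random.gauss(0, 0.02) for _ in range(n_days)]
           for _ in range(n_stocks)]

def extreme_downside_count(returns, z=-3.0):
    """Count members whose latest return is below z standard deviations
    of their own history (a crude 'statistically extreme' flag)."""
    count = 0
    for series in returns:
        history, latest = series[:-1], series[-1]
        sigma = statistics.stdev(history)
        if latest < z * sigma:
            count += 1
    return count

print(extreme_downside_count(returns))
```

In a calm, roughly normal market that count stays near zero; the warning sign is when it surges toward the top of its historical range, which is the pattern the February commentary flagged.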
That language is important because it reflects the modern structure of risk. Markets no longer move only because investors change their minds about fundamentals. They also move because risk systems, leverage constraints, volatility targets, factor exposures and crowding indicators force investors to change their positioning.
AI can strengthen every part of that chain. It can identify risk faster. It can suggest de-risking faster. It can translate market shocks into portfolio actions faster. It can also make managers more confident in the same signals at the same time.
For large multi-strategy platforms, this presents both opportunity and danger. Firms such as Citadel, Millennium, Point72, Balyasny, D. E. Shaw and other major platforms are built around diversified teams, strict risk management and constant capital allocation. AI can help them allocate research resources, detect anomalies, improve execution and monitor exposures across a vast number of strategies. In many cases, the biggest firms may benefit from AI more than smaller rivals because they have better data, better infrastructure, better talent and better internal feedback loops.
But even the best platforms are not immune to crowding. If the broader hedge fund ecosystem is long the same growth stocks, short the same underperformers, exposed to the same factors, or relying on similar macro assumptions, an unwind can hit multiple strategies at once. AI may help individual firms see the risk earlier, but it may also help everyone else see it earlier. That means the exit can become crowded too.
The real edge, then, may not come from simply using AI. It may come from using AI differently.
That is why Subramanian’s point about human judgment matters. Reuters quoted him saying that simply using AI will not automatically make someone a much better investor; performance depends on how the tool is used.
That may become one of the most important distinctions in the next phase of alternative investing. AI as a productivity tool is becoming table stakes. AI as an alpha engine is much harder. If every major hedge fund can summarize earnings calls, map supply chains, compare sentiment, scan filings and monitor real-time news, those capabilities stop being differentiators. The differentiator becomes proprietary data, portfolio construction, risk discipline, model design, and the willingness to disagree with consensus machine output.
In other words, the future of quant investing may not reward the managers who ask AI the most questions. It may reward the managers who ask better questions — and who know when the answer is too obvious.
That creates a new form of meta-analysis for hedge funds. Managers must now analyze not only companies, rates, currencies and commodities, but also how other machines may analyze those same assets. The question is no longer only “What does the data say?” It is also “What will the models say the data says?” and “How crowded will that response become?”
This has major implications for allocators. Pension funds, endowments, family offices and private banks increasingly need to understand how AI is being used inside hedge fund portfolios. It is not enough to ask whether a manager has AI tools. Nearly everyone will. The better questions are: Does AI influence trade selection or only research? Does the manager track AI-driven crowding? Are models trained on proprietary or widely available data? How are AI-generated signals stress-tested? What happens when model outputs conflict with human judgment? How does the firm prevent multiple teams from unknowingly expressing the same AI-derived view?
Those questions could become part of the next generation of operational due diligence.
The issue also matters for regulators. AI-driven trading does not fit neatly into old categories. It is not exactly high-frequency trading, although it may influence execution. It is not exactly discretionary investing, although humans remain involved. It is not exactly passive indexing, although it can produce systematic flows. It is a hybrid layer that sits across research, portfolio construction, execution and risk management.
Regulators are likely to become more interested in whether AI tools create herding, amplify volatility or introduce hidden dependencies into market structure. The challenge is that much of the risk will be difficult to observe from the outside. AI models may be proprietary. Data inputs may be private. Signals may be embedded inside broader investment processes. A sudden market move may look like ordinary volatility even if it was amplified by synchronized machine interpretation.
For now, the industry is moving faster than the rulebook.
That does not mean AI should be viewed negatively. The technology has obvious benefits. It can help investors identify risks earlier, reduce manual errors, monitor more securities, detect fraud, translate complex disclosures, improve research coverage and democratize access to information. Smaller firms may be able to cover more ground. Analysts may spend less time on routine tasks and more time on judgment. Portfolio managers may see scenario analysis that would previously have required large teams.
But the same technology that improves the individual investor’s process may create collective risk when adopted broadly. This is not unique to AI. Quant models, value-at-risk systems, volatility targeting, risk parity, factor investing and indexation have all shown that tools designed to manage risk can sometimes concentrate it.
AI is simply the latest and potentially most powerful version of that dynamic.
The key difference is adaptability. AI systems can learn, summarize, classify and generate responses in ways that feel more flexible than traditional quantitative models. That makes them more useful, but also more opaque. A factor model may tell a manager that a portfolio is overweight momentum or growth. An AI system may instead produce a seemingly nuanced narrative that still leads to the same trade everyone else is making.
That is why language matters. If many investors are using large language models to interpret market narratives, the words themselves become part of the market structure. Earnings transcripts, CEO comments, Fed statements, analyst downgrades and regulatory headlines can be processed not only as information, but as machine-readable catalysts.
In that world, the market can move because models agree on meaning.
The danger is that agreement may not always equal truth. AI can be persuasive, but it is not immune to bias, incomplete information or training-set limitations. It can overweight recent patterns. It can understate regime shifts. It can produce confident summaries of uncertain situations. It can miss subtle context that experienced investors recognize. And when models are trained on overlapping information, the errors may also overlap.
This is the heart of the efficiency paradox. AI can reduce some inefficiencies while creating new ones. It can make obvious mispricings disappear faster, but it can also create crowded “machine consensus” trades that overshoot fundamentals. It can improve research productivity while increasing correlation. It can help managers see risk faster while making exits more crowded.
For the hedge fund industry, that means AI is no longer just a technology story. It is a market-structure story.
The next phase of competition will likely separate managers into three groups. The first group will use AI mainly as a cost-saving tool. They may become more efficient operationally but not necessarily better investors. The second group will use AI as a signal engine, but may risk blending into the crowd if their inputs and methods are not sufficiently differentiated. The third group will use AI as one component of a broader investment architecture — combining proprietary data, human judgment, differentiated research, disciplined risk management and awareness of crowding.
That third group is where the real alpha may reside.
The irony is that AI could make human judgment more valuable, not less. As machines become better at summarizing consensus, the premium may shift toward investors who can challenge consensus. As models become better at finding obvious patterns, the edge may move toward non-obvious interpretation. As everyone gains access to faster analysis, patience, skepticism and differentiated positioning may become more important.
For Citadel and its peers, this is not a theoretical exercise. These firms operate at the frontier of capital, technology and risk. Their AI adoption will shape how the rest of the industry thinks about research productivity, trading efficiency and portfolio oversight. But their warnings — explicit or implied — should also be taken seriously. Faster tools do not automatically create better markets. They can also create faster mistakes.
The broader lesson for investors is clear: AI is not eliminating market cycles. It is changing how they form.
Crowding may happen faster. Unwinds may accelerate. Flash volatility may become more narrative-driven. Liquidity may become more conditional. The difference between a useful signal and a crowded signal may become harder to detect.
That makes risk management more important than ever.
The winners in the AI era will not simply be the firms with the largest models or the most impressive dashboards. They will be the firms that understand the second-order effects of AI adoption: correlation, crowding, reflexivity, liquidity and the psychology of machine-assisted consensus.
AI may indeed make markets faster. It may make research cheaper. It may make information more accessible. But whether it makes markets more efficient is now an open question.
For alternative investment managers, that question has become one of the defining issues of 2026. The rise of AI is not just changing how trades are found. It is changing how trades become crowded, how volatility erupts and how alpha must be defended.
In the old market, investors worried about crowded rooms.
In the new market, they may need to worry about crowded algorithms.