AlphaAgent: Regularized Exploration to Fight Alpha Decay

The previous AlphaGPT review left an open question: when everyone uses LLMs to mine factors, how long can those factors stay effective? AlphaAgent (paper, KDD 2025) tackles this head-on. Its core observation: LLM-generated factors lean too heavily on existing knowledge, producing homogeneous signals that crowd into the same trades and accelerate alpha decay. The fix is three regularization constraints injected into the factor generation process, forcing the model to explore factors that are structurally novel, logically coherent, and controlled in complexity. ...

Posted on 2026-04-21 ·  In Quant ·  7 min read

AlphaGPT: Mining Quantitative Factors with LLMs

One of the core tasks in quantitative investing is mining alpha factors: finding signals that predict asset returns. The traditional approach relies on researchers manually constructing factor expressions, or on automated search methods such as Genetic Programming (GP) that exhaustively explore combinations in the operator space. The former depends on human experience and intuition, trading efficiency for interpretability. The latter searches efficiently but produces deeply nested operator expressions that are nearly impossible for researchers to interpret. AlphaGPT (paper) brings large language models into the factor mining pipeline, using an LLM as the factor “generator.” The follow-up work, AlphaGPT 2.0 (paper), further introduces a human-in-the-loop closed cycle. ...

Posted on 2026-04-10 ·  In Quant ·  5 min read