Foundation Models, Surrogate Biology

An analogy: harvesting knowledge from language models


The origin of this idea traces back to a conversation with a CS colleague in the summer of 2022. At the time, I was fascinated by questions in efficient learning theory and the role of model architecture in determining learning capacity. Our topic was: “Why haven’t vision models achieved results comparable to language models?” This was shortly after the emergence of RLHF, when large language models were attracting global attention.

I proposed a hypothesis: “Perhaps language itself is inherently structured for learning, serving as a highly compressed and organized form of representation.”

He then added an intriguing point: “If large language models capture this structure, we can think of them not just as tools but as datasets in themselves. For small-scale researchers, one good strategy is to harvest the knowledge embedded in these models.”

This conversation planted a question in my mind: Could similar principles apply in biology? Could large-scale models, trained on vast biological observations, serve as surrogates for experimental validation or as proxies for biological knowledge?

Validation of high-throughput measurements


Three years later, these questions have become increasingly relevant in functional genomics, where high-throughput measurements produce enormous yet noisy datasets. One emerging approach asks whether conclusions drawn from large-scale studies (both observational and perturbational), often summarized as global statements about systemic architecture, can be recapitulated in silico by models trained via self-supervision on rich data.

For example, models trained on genomic sequences have been used to impute data in a composite$^\dagger$ fashion[1] and to identify cis-regulatory syntax[2], and they have themselves been approximated by simpler statistical surrogates such as linear predictors[1,2]. These examples demonstrate how deep models can complement, or even substitute for, experimental measurements by serving as structured surrogates for biological information.

$\dagger$ This reminds me of the term ‘Socratic Models’, coined in this seminal paper[3].
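
To make the surrogate idea concrete, here is a minimal sketch in the spirit of [1,2]: distill a deep genomic model’s predictions into a linear model over one-hot encoded sequence, whose coefficients can then be read as a local, position-wise regulatory code. `deep_model_predict` is a toy stand-in for a real pretrained model, so everything below is illustrative.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
_w = rng.normal(scale=0.1, size=50 * 4)  # weights of the toy "deep" model

def one_hot(seq: str) -> np.ndarray:
    """One-hot encode a DNA sequence into a flat (len(seq) * 4,) vector."""
    lut = {"A": 0, "C": 1, "G": 2, "T": 3}
    x = np.zeros((len(seq), 4))
    for i, base in enumerate(seq):
        x[i, lut[base]] = 1.0
    return x.ravel()

def deep_model_predict(seq: str) -> float:
    """Toy stand-in for a pretrained genomic model; swap in predicted
    expression/accessibility from a real model in practice."""
    z = one_hot(seq) @ _w
    return float(np.tanh(z) + 0.1 * z**2)  # mildly nonlinear on purpose

def fit_linear_surrogate(sequences: list[str], alpha: float = 1.0) -> Ridge:
    """Distill the deep model into a ridge surrogate; its coefficients,
    reshaped to (L, 4), read as position-wise additive base effects."""
    X = np.stack([one_hot(s) for s in sequences])
    y = np.array([deep_model_predict(s) for s in sequences])
    return Ridge(alpha=alpha).fit(X, y)

# Usage: fit on sequences tiling a locus, then inspect the coefficient map.
seqs = ["".join(rng.choice(list("ACGT"), size=50)) for _ in range(500)]
effect_map = fit_linear_surrogate(seqs).coef_.reshape(50, 4)
```

The appeal of this pattern is that the expensive, opaque model is queried freely in silico, while the cheap surrogate exposes an interpretable (if local) approximation of what it has learned.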

In silico recapitulation of global biology


It has become increasingly clear that equating scale with foundational capability is problematic. Models trained on massive datasets in a self-supervised manner do not inherently qualify as foundation models; rather, a true foundation model should demonstrate robust generalization beyond its pretraining objectives and reproduce established biological principles without task-specific tuning.

Biology offers a proper testbed for this argument. Biologists possess a wealth of experimentally validated propositions about the architecture of biological systems, and they have both natural systems and synthetic tools for validating novel hypotheses experimentally. Consider an example from enhancer biology: Gasperini et al.[4] produced large-scale CRISPRi screens, and another study[5] re-analyzed the data to argue that enhancer action is predominantly multiplicative and that evidence for enhancer interactions was undetectable. There, the Enformer model was used as a computational surrogate to validate these claims in silico. Reversing this logic is also instructive: if a proposed model aspires to be a foundation model for biology, it should reproduce such propositions zero-shot, without explicit fine-tuning.
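
As a sketch of what such an in silico check could look like: silence each enhancer alone and both together with a hypothetical `predict_expression` wrapper (standing in for a real sequence model such as Enformer), then ask whether the joint effect matches the product of the single effects (multiplicative) or their sum (additive). The toy model here is multiplicative by construction, purely to make the test’s output interpretable.

```python
import numpy as np

# Toy stand-in for a pretrained sequence model's expression head.
# Replace with a real model wrapper in practice; this toy is
# multiplicative by construction, so the test below returns ~0.
ENHANCERS = [(10, 20), (40, 50)]

def predict_expression(seq: str) -> float:
    expr = 1.0
    for start, end in ENHANCERS:
        active = sum(b != "N" for b in seq[start:end]) / (end - start)
        expr *= 0.2 + 0.8 * active  # silencing removes this enhancer's boost
    return expr

def silence(seq: str, start: int, end: int) -> str:
    """Crude in-silico CRISPRi: mask an enhancer with neutral bases."""
    return seq[:start] + "N" * (end - start) + seq[end:]

def interaction_test(seq: str, e1: tuple, e2: tuple) -> tuple:
    """Compare multiplicative vs additive models of joint enhancer effects.

    Fold-changes (fc) are relative to the unperturbed sequence. Under a
    purely multiplicative model (cf. [5]), fc_both == fc1 * fc2, so the
    log-space residual is ~0; under an additive model, fc_both == fc1 + fc2 - 1."""
    base = predict_expression(seq)
    fc1 = predict_expression(silence(seq, *e1)) / base
    fc2 = predict_expression(silence(seq, *e2)) / base
    fc_both = predict_expression(silence(silence(seq, *e1), *e2)) / base
    mult_residual = np.log(fc_both) - (np.log(fc1) + np.log(fc2))
    add_residual = fc_both - (fc1 + fc2 - 1.0)
    return mult_residual, add_residual

print(interaction_test("A" * 60, (10, 20), (40, 50)))  # (~0.0, 0.64)
```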

Some might find this claim aggressive, but propositions about biological systems that enjoy established consensus now serve as good benchmarks. Evidence must be systematic and comprehensive, not cherry-picked or anecdotal. Notably, increasing effort is being devoted to verifying whether so-called “foundation models” enable zero-shot inference of verifiable biological properties derived from large-scale data[6,7].
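
One common zero-shot protocol in such evaluations[6,7] scores a biological property directly from the pretrained model’s likelihoods and measures rank agreement with experimental data, with no task-specific tuning at any point. `model_loglik` below is a hypothetical wrapper around any pretrained sequence model; this is a sketch of the protocol, not of any specific benchmark.

```python
from scipy.stats import spearmanr

def zero_shot_benchmark(model_loglik, variants, measured_effects):
    """Score each (ref_seq, alt_seq) pair by the pretrained model's
    log-likelihood ratio and report rank agreement with experimentally
    measured effect sizes. No fine-tuning is involved at any point."""
    scores = [model_loglik(alt) - model_loglik(ref) for ref, alt in variants]
    rho, pval = spearmanr(scores, measured_effects)
    return rho, pval
```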

Learning constraints from biological feedback


An even more ambitious direction is to learn models directly from biological systems by leveraging endogenous mechanisms. Biological systems impose strong selective constraints, determining which states persist and which are eliminated. Negative examples—e.g., states that fail selection—are particularly challenging to obtain under natural conditions, leading to biased datasets where unfavorable configurations are underrepresented.

Among biological systems, the immune repertoire offers a unique opportunity in this regard. For instance, B-cell development is shaped by stringent selection. Productive B cell receptor (BCR) sequences are preferentially retained, while nonproductive or autoreactive sequences are eliminated. This process provides a natural source of implicit preference data, analogous to reward signals in reinforcement learning.

Building on recent work[8], which exploits allelic inclusion to classify suboptimal BCR sequences, one could extend this idea to a generative framework. Specifically, incorporating preference-based reinforcement learning—where productive versus nonproductive repertoires provide implicit ranking signals—could enable models to internalize selective constraints shaping BCR diversity. Such models would not only generate biologically plausible antibody sequences but also simulate affinity maturation pathways, offering new tools for immunological research and therapeutic design.
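
One concrete instantiation of that preference-based idea would be direct preference optimization (DPO) over paired repertoires. The sketch below assumes tokenized amino-acid sequences and a hypothetical autoregressive BCR language model (`policy`, with a frozen `reference` copy); batches of productive and nonproductive sequences are paired one-to-one. This is one possible design, not a description of [8].

```python
import torch
import torch.nn.functional as F

def sequence_logprob(model, tokens: torch.Tensor) -> torch.Tensor:
    """Total log-likelihood of token sequences (B, L) under an
    autoregressive model that returns (B, L-1, vocab) logits."""
    logits = model(tokens[:, :-1])
    logp = F.log_softmax(logits, dim=-1)
    targets = tokens[:, 1:].unsqueeze(-1)  # next-token targets
    return logp.gather(-1, targets).squeeze(-1).sum(dim=-1)

def dpo_loss(policy, reference, productive, nonproductive, beta: float = 0.1):
    """DPO over BCR sequences: treat selection outcomes (productive
    retained, nonproductive eliminated) as implicit preference pairs,
    pushing the policy toward selected sequences relative to a frozen
    reference model."""
    with torch.no_grad():
        ref_w = sequence_logprob(reference, productive)
        ref_l = sequence_logprob(reference, nonproductive)
    pol_w = sequence_logprob(policy, productive)
    pol_l = sequence_logprob(policy, nonproductive)
    margin = (pol_w - ref_w) - (pol_l - ref_l)
    return -F.logsigmoid(beta * margin).mean()
```

The frozen reference keeps the policy anchored to the pretraining distribution, so the model internalizes selective constraints without collapsing onto a few high-scoring clones.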

References

[1] Zhou, Yichao, et al. “scPrediXcan integrates advances in deep learning and single-cell data into a powerful cell-type–specific transcriptome-wide association study framework.” bioRxiv (2025).
[2] Seitz, Evan E., et al. “Interpreting cis-regulatory mechanisms from genomic deep neural networks using surrogate models.” Nature Machine Intelligence 6.6 (2024): 701-713.
[3] Zeng, Andy, et al. “Socratic models: Composing zero-shot multimodal reasoning with language.” arXiv preprint arXiv:2204.00598 (2022).
[4] Gasperini, Molly, et al. “A genome-wide framework for mapping gene regulation via cellular genetic screens.” Cell 176.1 (2019): 377-390.
[5] Zhou, Jessica L., et al. “Analysis of single-cell CRISPR perturbations indicates that enhancers predominantly act multiplicatively.” Cell Genomics 4.11 (2024).
[6] Wang, Yihui, et al. “Genomic Touchstone: Benchmarking Genomic Language Models in the Context of the Central Dogma.” bioRxiv (2025).
[7] Tang, Ziqi, et al. “Evaluating the representational power of pre-trained DNA language models for regulatory genomics.” Genome Biology 26.1 (2025): 203.
[8] Jagota, Milind, et al. “Learning antibody sequence constraints from allelic inclusion.” bioRxiv (2024).
