Trends in Oncology
Not All Tumors Look Alike: Using Surfaceome Diversity to Enrich Responders
The case for measuring the surface directly

Biologics and antibody drug conjugates act at the cell surface, yet many programs still guide patient selection with RNA expression or whole cell protein abundance. That practice can misclassify candidates, because transcripts and total protein often correlate poorly owing to post-transcriptional and post-translational regulation. Moreover, whole cell bulk proteomics cannot quantify cellular localization or receptor density, so it cannot guarantee that a receptor is exposed on the exterior membrane above drug relevant thresholds. In 2024, investigators profiled the surfaceome of 100 genetically diverse, primary human AML specimens and resolved antigen patterns on primitive and stem like cells with limited expression in essential normal tissues. The work demonstrated that surface level heterogeneity is real, clinically meaningful, and different from what one would infer from bulk RNA or whole cell proteomics alone.

Calls to better map the surfaceome are growing. A 2024 Cancer Discovery commentary noted that only a few dozen cell surface targets currently anchor FDA or EMA approved therapies, even though the surface is the most accessible compartment for antibodies, CAR T cells, and radiopharmaceuticals. The authors argued for intensified efforts to chart the universe of surface proteins across cancers, which would accelerate target discovery and improve translational relevance for modalities that require true surface accessibility.

Solving the Surfaceome Problem

Why a surface resolved view changes development plans

Modern reviews describe how advances in mass spectrometry, fractionation, and enrichment are making it feasible to survey the cancer surfaceome at scale.
These approaches improve the identification of drug accessible proteins that whole cell proteomics can miss, because abundant intracellular proteins dominate unfractionated measurements and mask low abundance, surface localized receptors. The reviews also outline practical ways to mitigate technical challenges such as low copy number, hydrophobic domains, and contamination by intracellular compartments, for example through careful membrane preparation, parallel intracellular and whole cell fractions, and stringent enrichment and quality control thresholds. The program level implication is straightforward: when a therapy acts at the surface, the biomarker strategy should resolve surface localization directly rather than infer it from RNA or whole cell protein alone.

A second implication concerns internalization and trafficking. For ADCs and some bispecifics, internalization kinetics and routing to lysosomes affect payload delivery and potency. Surface resolved workflows can be coupled to orthogonal assays, for example flow cytometry with ligand induced internalization or live cell imaging, to determine whether a receptor is not only present at the surface but also behaves in a way that supports the intended mechanism. This behavioral dimension rarely appears in transcript based selection, yet it influences both efficacy and safety.

From heterogeneity to practical enrichment

Turning surface heterogeneity into clinical signal requires a sequence of disciplined steps. First, measure what matters: use fractionated surface proteomics, or validated surface specific immunohistochemistry and flow assays, that distinguish the exterior membrane from total abundance. Second, define actionable thresholds that tie expression to benefit and that can be reproduced across sites.
Educational content from the ASCO Educational Book emphasizes that quantitative cutoffs and standardized assays are central to patient selection for antibody drug conjugates, because target accessibility and abundance determine benefit, and because inconsistent thresholds erode the interpretability of early phase studies. Third, map prevalence by histology and by clinically relevant subgroups, since an attractive target with low prevalence may not support enrollment, or may require a targeted site strategy. When these elements are present, enrichment reflects how the drug actually works rather than a proxy signal.

Surface level diversity can also sharpen indication strategy. If a receptor is highly enriched at the surface in a subset of colorectal cancer but not in pancreatic cancer, a program can prioritize the former for first in human evaluation even if RNA levels are similar in both. This is particularly relevant where surface and whole cell abundances diverge. The AML study noted above showed precisely this phenomenon: antigen exposure on stem like compartments could be quantified and connected to therapeutic concepts that require surface binding.

Selectivity and safety as design constraints

Surface heterogeneity intersects with safety because many candidate antigens have some expression in normal tissues. A 2024 review in Trends in Pharmacological Sciences surveyed strategies that increase antibody selectivity in oncology, including superselectivity through avidity and multivalency, conditional or pH sensitive binding, dual targeting that requires co-expression to achieve high affinity, and engineering designs that leverage tissue context. These concepts translate directly into earlier program decisions: targets that can be paired with selective engineering approaches should score higher than those that would require unrealistic discrimination from a conventional binder.
A transparent scoring framework, for example one that penalizes normal tissue expression in essential organs and rewards tumor specific co-expression patterns, makes those choices auditable during governance reviews.

The safety argument is not theoretical. ADCs can cause off tumor toxicities when a payload is delivered to normal tissues that express the target at modest levels. A biomarker plan that quantifies true surface exposure in disease, and screens for surface exposure in a curated panel of normal tissues, improves the chance of a workable therapeutic window. Reviews that focus on ADC biomarkers, together with trial design guidance, point in the same direction: selection should be quantitative and assayable, not an exploratory cutpoint chosen after the fact.

Concrete examples of surfaceome driven discovery

Evidence that surface resolved discovery translates into programs is accumulating. In late 2024, a Cancer Cell study used an integrative proteogenomic surfaceome approach to credential DLK1 as an immunotherapeutic target in neuroblastoma. The authors combined mass spectrometry based surface profiling with genomic and transcriptomic context to demonstrate surface exposure, tumor specificity, and functional relevance, then moved to validation experiments that supported drug development. This work illustrates how surface level datasets, anchored in clinical material, can identify and qualify targets that a transcript only screen might underrate.

Surfaceome maps are also becoming more accessible, which will help teams generalize findings across institutions. The AML dataset is public in GEO, enabling independent inspection of QC criteria, antigen lists, and statistical methods. Commentary from Cancer Discovery underscores the systemic opportunity, arguing that better cartography of the surfaceome across cancers is likely to grow the small set of currently actionable surface targets.
Together, public data and field level commentary support a move from opportunistic target picking toward systematic, population informed discovery.

TALK: Decoding the Cell Surface to Accelerate Discovery

Program design, trial execution, and measurement

Programs that embrace surface resolved selection can operationalize three habits that improve downstream signal. First, connect discovery assays to clinical screening early: if discovery uses a fractionated surface proteomics threshold, define the clinical assay that will mirror that readout, for example an IHC score or a validated flow protocol, and harmonize cutoffs before first patient in. Second, pre-specify enrichment rules in the protocol. Enrolling only the top percentiles of surface expressors may seem restrictive, but it increases the chance of a pharmacodynamic signal that validates the mechanism and informs dose expansion. Third, measure what happens when selection criteria are varied in sensitivity analyses, and share these details in publications and regulatory interactions. This transparency builds cumulative knowledge that benefits the field and informs the next iteration of thresholds.

There is also value in linking surface metrics to pharmacology. If a surface antigen is abundant but shows slow internalization, a payload with a bystander effect may be preferable to one that requires rapid lysosomal routing. If surface expression is heterogeneous at the lesion level, a radionuclide therapy that can exploit crossfire may offer advantages over a conventional ADC. These are not general rules; they are examples of how a surface resolved view can shape the choice of modality and payload in a way that reflects the physical constraints of the target.

Where Champions Oncology contributes

Champions Oncology generates surface resolved datasets from low passage, clinically relevant tumor models and integrates them with deep multi omic profiles and in vivo pharmacology.
This enables teams to quantify true surface positivity, to set and test actionable thresholds, and to understand prevalence by indication before committing to costly trials. The same datasets support orthogonal validation and method standardization so that discovery assays translate into clinical screening. The approach is designed to be non promotional and data first: the objective is to clarify risk and to help programs make better decisions earlier.
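The transparent, auditable target scoring described earlier can be made concrete in a few lines. The sketch below is a hypothetical illustration: the feature names, inputs, and weights are assumptions chosen for clarity, not a published Champions Oncology method. The only design principle carried over from the text is that surface exposure in essential normal tissues is penalized more heavily than tumor enrichment is rewarded.

```python
# Hypothetical target-scoring sketch; all names and weights are illustrative.
from dataclasses import dataclass

@dataclass
class TargetProfile:
    tumor_surface_score: float     # 0-1, surface enrichment in tumor material
    essential_normal_score: float  # 0-1, surface exposure in essential normal tissues
    coexpression_bonus: float      # 0-1, tumor-specific co-expression usable for dual targeting
    prevalence: float              # 0-1, fraction of patients above the surface threshold

def score_target(t: TargetProfile,
                 w_tumor: float = 1.0,
                 w_normal: float = 2.0,   # penalize normal-tissue exposure more heavily
                 w_coexpr: float = 0.5,
                 w_prev: float = 0.5) -> float:
    """Higher is better; exposure in essential organs drags the score down."""
    return (w_tumor * t.tumor_surface_score
            - w_normal * t.essential_normal_score
            + w_coexpr * t.coexpression_bonus
            + w_prev * t.prevalence)

# A tumor-selective target outscores one with a normal-tissue liability,
# even when the liability target is more prevalent.
selective = TargetProfile(0.9, 0.05, 0.6, 0.4)
liability = TargetProfile(0.9, 0.40, 0.0, 0.7)
```

Because the weights are explicit, a governance review can audit and challenge them, which is the point of the framework rather than the particular numbers.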
Beyond Single Omic Biomarkers: How Proteogenomic ML Reveals Therapy Vulnerabilities
Why a functional view changes predictions

Genomics and transcriptomics remain foundational for precision oncology, but they do not fully represent the functional state that determines how tumors respond to therapy. Proteins and phosphoproteins capture activity at the level where drugs actually engage, for example receptor density and localization, complex assembly, and pathway signaling. That distinction is not academic. In a 2024 pan-cancer analysis from the Clinical Proteomic Tumor Analysis Consortium (CPTAC), investigators integrated proteogenomic data from 1,043 patients across 10 tumor types, surveyed 2,863 druggable proteins, and quantified biological factors that weaken mRNA to protein correlation, making the case for models that learn directly from protein and phosphoprotein context rather than inferring from transcripts alone.

From data to models that travel

The practical bottleneck has been access to harmonized, well-annotated cohorts that support training, testing, and independent validation. In August 2023, the National Cancer Institute announced a standardized pan-cancer proteogenomic dataset that aligns genomics, proteomics, imaging, and clinical data for more than 1,000 tumors across 10 cancer types, explicitly to enable reproducible discovery and model benchmarking. The Proteomic Data Commons (PDC) now serves these resources in a way that supports programmatic access and cross-study comparisons, a requirement if machine-learning outputs are going to generalize beyond a single study.

What the evidence shows when proteins are included

Two Cell papers from 2024 illustrate why adding protein-level information changes conclusions. An immune-landscape analysis derived distinct immune subtypes by integrating genomic, epigenomic, transcriptomic, and proteomic features and connected oncogenic drivers to downstream protein states that influence immune surveillance and evasion.
A companion pan-cancer study expanded the landscape of therapeutic opportunities by evaluating thousands of druggable proteins across tissues and documenting where mRNA is a poor proxy for protein, especially in pathways relevant to therapy response. Together, they show that multi-omic modeling, including protein and phosphoprotein features, improves biological interpretability and exposes actionable biology that single-omic approaches overlook.

A broader signal from the field

The trend is not confined to CPTAC. The Pan-Cancer Proteome Atlas (TPCPA), published in Cancer Cell in 2025, quantified 9,670 proteins across 999 primary tumors representing 22 cancer types using DIA-MS. The atlas offers a tissue-based substrate for target nomination, biomarker discovery, and external validation, and it has been highlighted in the trade press for its global availability and immediate relevance to oncology research. Such atlases are valuable because they capture proteomic variability directly in clinical material, not only in cell lines, providing realistic distributions for the features that ML models attempt to learn.

Why proteogenomic ML improves prediction

Integrating proteins and phosphoproteins adds information that is both mechanistic and measurable. First, pathway activity is reflected in phosphorylation states, which function as on-off or rheostat-like controls for signaling. Second, receptor exposure and complex formation at the protein level determine whether a therapy can bind or disrupt a process. Third, protein degradation and post-translational regulation often decouple mRNA abundance from target availability, which explains why transcript-only biomarkers can fail at the bedside. When these features are engineered into models, the performance gains are not just numeric; they tend to be more interpretable, mapping to drug-actionable pathways and receptors that clinicians recognize. The 2024 CPTAC studies provide concrete examples.
Immune subtypes defined by proteogenomic features correlate with differences in antigen presentation, cytokine signaling, and interferon responses, features with obvious translational implications. The survey of druggable proteins shows wide variation in abundance and localization across tumors and details the contexts where transcript and protein diverge, arguing for protein-aware rules when nominating targets or stratifying patients.

What good practice looks like in model building

There is a growing consensus on practical guardrails. Independent validation across cohorts is essential to avoid overfitting, and the infrastructure now exists to support that step through the PDC and related CPTAC resources. Feature construction should prioritize pathway-level signals that aggregate individual phospho-sites into kinase or pathway activity, because these are more stable across cohorts and easier to interpret for clinical decision making. Finally, clinically annotated samples, including treatment history and outcomes, are indispensable if models are expected to inform responder enrichment and mechanism-of-resistance hypotheses rather than only classify molecular subtypes.

Translational payoffs, with appropriate caution

When executed with these guardrails, proteogenomic ML offers tangible benefits. Programs can generate earlier responder and non-responder hypotheses and test them prospectively in preclinical systems before committing to costly clinical designs. Resistance pathways inferred from phospho-proteomic features can motivate combination strategies, for example pairing an antibody-drug conjugate with a kinase inhibitor when signaling indicates a plausible escape route. Educational content from ASCO has emphasized the centrality of quantitative thresholds and validated assays for patient selection, particularly for ADCs, where surface accessibility and abundance determine benefit. The lesson is consistent across modalities.
Predictive features must be connected to assays that can be deployed consistently in trials, and thresholds should be defined in a way that anticipates real-world variability.

Where limitations still matter

Several limitations deserve explicit mention. Proteomic and phospho-proteomic data remain technically variable across platforms and laboratories. Although CPTAC and the PDC mitigate this through standardization, modelers should evaluate batch effects and apply normalization strategies suited to proteomic data. Coverage of kinase-substrate relationships and post-translational networks is incomplete, which constrains inference. Tumor heterogeneity adds another layer, particularly when bulk tissue averages mask subclonal or microenvironmental signals. These caveats do not negate the value of proteogenomic ML, but they do argue for conservative claims, orthogonal validation, and a bias toward features that can be measured reproducibly in clinical settings.

Implications for trial design and portfolio focus

The immediate implication is a more disciplined approach to enrichment. If protein-level features identify a subgroup with plausible sensitivity, early designs can incorporate eligibility criteria and stratification based on validated assays rather than exploratory cutpoints. Conversely, if pathway-level features suggest multiple escape routes, it may be more efficient to prioritize combinations earlier instead of iterating single-agent studies. At a portfolio level, proteogenomic evidence can help prioritize programs with a mechanistic rationale supported by functional data, not only by mutation prevalence or gene expression.

How Champions Oncology contributes

Champions Oncology builds models on tumor-derived systems that preserve patient biology and heterogeneity.
Our datasets link genomics and transcriptomics with proteomics, phospho-proteomics, and cell surface proteomics, and they are annotated with pharmacologic phenotypes. This combination supports models that tie features to functional biology and drug accessibility, making it possible to move from correlation to causation, and from causation to druggable targets.
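The pathway-level feature construction recommended above, aggregating individual phospho-sites into kinase or pathway activity, can be sketched with a simple z-score-and-average scheme. This is a minimal illustration, not a standard pipeline; the phospho-site names and pathway membership below are hypothetical placeholders, and real workflows would draw memberships from curated kinase-substrate resources.

```python
# Sketch: turn a phospho-site matrix (sites x samples) into per-sample
# pathway activity scores. Sites are z-scored across samples so that
# sites on different intensity scales contribute comparably, then
# averaged within each (hypothetical) pathway.
import numpy as np

def pathway_activity(site_matrix: np.ndarray,
                     site_names: list[str],
                     pathway_sites: dict[str, list[str]]) -> dict[str, np.ndarray]:
    """Return a mapping of pathway name -> per-sample activity vector."""
    mu = site_matrix.mean(axis=1, keepdims=True)
    sd = site_matrix.std(axis=1, keepdims=True)
    sd[sd == 0] = 1.0                      # guard flat sites against divide-by-zero
    z = (site_matrix - mu) / sd
    index = {name: i for i, name in enumerate(site_names)}
    activity = {}
    for pathway, members in pathway_sites.items():
        rows = [index[m] for m in members if m in index]
        activity[pathway] = z[rows].mean(axis=0)
    return activity

# Toy example with made-up site identifiers across three samples.
sites = ["EGFR_Y1068", "MAPK1_T185", "AKT1_S473"]
data = np.array([[1.0, 2.0, 3.0],
                 [2.0, 4.0, 6.0],
                 [5.0, 5.0, 5.0]])
activity = pathway_activity(data, sites, {"MAPK": ["EGFR_Y1068", "MAPK1_T185"]})
```

Aggregated scores like these tend to travel better across cohorts than single phospho-sites, which is why the text favors them as model features.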
Reducing Clinical Attrition: Why Stronger Data Needs to Be the Starting Point for Oncology R&D
Clinical attrition has been oncology’s oldest problem and, in many ways, is still its biggest. The pattern is painfully familiar: a promising therapy emerges with encouraging preclinical data, advances through IND-enabling studies, shows early signals of activity in Phase I, and then fails in Phase II or Phase III. The financial costs of these failures are staggering; billions of dollars are lost globally each year. But the greater cost is measured in time and opportunity: years of development work invested, only to leave patients still waiting for new therapies.

Despite decades of innovation, attrition rates in oncology haven’t shifted as much as the industry hoped. Better trial design and precision medicine strategies have helped in some areas, but the fundamental problem remains: the data we use to make early decisions often doesn’t capture the full reality of patient biology.

Why attrition remains so stubborn

To understand why attrition persists, it’s worth looking at the foundation. Much of oncology R&D still relies on models and datasets that, while powerful, were never meant to carry the full burden of translational decision-making. Genomics is a prime example. Sequencing technologies have revolutionized how we classify tumors and identify potential targets. But tumors are not defined by their mutations alone. Transcriptional programs, proteomic signaling networks, post-translational modifications, and dynamic adaptations under treatment all contribute to how a tumor grows, evades therapy, and eventually resists intervention. A therapeutic strategy built solely on genetic alterations may miss the downstream biology that ultimately determines clinical outcome.

Cell lines are another example. They are convenient, reproducible, and cost-effective, which is why they remain a staple of preclinical research. But they lack the heterogeneity and clinical context of patient tumors.
They rarely reflect the complexity of pretreated, metastatic disease, which is exactly the patient population that new oncology drugs are tested in. When early models don’t reflect the biology of the intended clinical population, it is not surprising that translation breaks down.

Even when multi-omic data is available, it is often sparse, fragmented, or drawn from public repositories that were never built for translational research. These datasets may be useful for generating hypotheses, but they are rarely robust enough to support critical go/no-go decisions. And yet, in the absence of better resources, they are often asked to do just that.

The gap between data and patients

The result of this reliance on incomplete models is a gap between what we believe about a therapy and what happens when it is tested in patients. That gap is where attrition lives. It’s the difference between a drug that looks compelling in preclinical settings and one that can’t demonstrate sufficient efficacy or durability in the clinic.

One concrete example comes from RNA and protein data. In acute myeloid leukemia (AML), large-scale analyses have shown that only about 17% of genes show a positive correlation between RNA expression and protein expression. If you rely on transcriptomics alone to predict biology, you’re often looking at signals that don’t translate to the level where drugs actually act. This divergence isn’t unique to AML; it’s a reminder that single-omic views can give an incomplete or even misleading picture of tumor biology.

Another example is resistance biology. In pretreated patient-derived xenografts (PDX), resistance pathways are often “baked in” from the start, reflecting real-world clinical histories. These mechanisms are invisible in naïve cell lines, which haven’t experienced therapy. By working with tumors that already carry resistance features, researchers can anticipate escape mechanisms before they derail late-stage trials.
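RNA-to-protein concordance of the kind cited above is typically estimated by correlating the two layers gene by gene across matched samples and counting how many genes clear a positive-correlation cutoff. The sketch below uses synthetic data and an assumed cutoff of 0.3; it is meant to show the shape of the calculation, not to reproduce the AML result, which came from dedicated large-scale profiling.

```python
# Sketch: per-gene Pearson correlation between matched RNA and protein
# matrices (genes x samples), and the fraction of genes whose
# correlation exceeds a chosen positive cutoff.
import numpy as np

def rna_protein_concordance(rna: np.ndarray, protein: np.ndarray,
                            threshold: float = 0.3) -> float:
    """Fraction of genes with Pearson r above `threshold` across samples."""
    rna_c = rna - rna.mean(axis=1, keepdims=True)
    prot_c = protein - protein.mean(axis=1, keepdims=True)
    num = (rna_c * prot_c).sum(axis=1)
    denom = np.sqrt((rna_c ** 2).sum(axis=1) * (prot_c ** 2).sum(axis=1))
    denom[denom == 0] = np.inf          # flat genes get r = 0, not NaN
    r = num / denom
    return float((r > threshold).mean())

# Synthetic check: 50 genes where protein tracks RNA exactly, 50 where
# it is anti-correlated, so exactly half the genes clear the cutoff.
base = np.arange(20.0)
rna = np.tile(base, (100, 1))
protein = np.vstack([rna[:50], -rna[50:]])
frac = rna_protein_concordance(rna, protein)   # 0.5
```

In practice the same per-gene correlation vector, computed on real matched cohorts, is what motivates protein-aware rather than transcript-only selection rules.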
What better data could look like

If we accept that the root of the problem lies in the misalignment between early data and patient biology, then the question becomes: what would better data look like?

First, it would need to come from models that are closer to the clinic. Patient-derived tumors, especially those from pretreated and metastatic populations, preserve the genetic complexity, phenotypic heterogeneity, and resistance mechanisms that cell lines cannot replicate. Studying these tumors allows us to see not just what cancer looks like in theory, but how it behaves in practice.

Second, it would need to move beyond genomics into multi-omic depth. Genes matter, but so do the transcripts they produce, the proteins they encode, the phosphorylation states that regulate those proteins, and the cell surface markers that mediate interactions with the immune system or targeted therapies. Each of these layers adds context, and each reveals discrepancies that can’t be seen in isolation.

Take cell surface proteomics as an example. Traditional workflows for mapping the surfaceome are plagued by noise and misclassification, which can lead to wasted effort on false targets. By capturing both plasma membrane and intracellular fractions, newer approaches provide cleaner enrichment and reduce false positives. The result is surface protein datasets that can actually be used to prioritize antibody, ADC, or CAR-T targets with confidence. That’s not a small improvement; it’s the difference between pursuing targets that work in patients and chasing dead ends.

Third, it would need to incorporate functional context. Static descriptions of tumors, no matter how deep, tell us what’s there, but not how the tumor behaves under pressure. Functional assays that perturb tumors directly, whether through gene knockdowns or compound exposure, provide causal insights that correlation alone cannot. They show us how pathways respond, how resistance emerges, and how biology adapts.
For example, siRNA knockdown studies in 3D PDX models can reveal dependencies that aren’t obvious from genomics alone. When combined with high-resolution transcriptomic profiling (what we call FunctionalSeq), these experiments identify pathways that are not only present but functionally essential. That’s the kind of information that can distinguish a biomarker from a true therapeutic target.

What this means for pharma decision-making

For pharma R&D leaders, the implications of this kind of data are significant. Instead of evaluating a candidate on a narrow slice of biology, you can assess it in the context of real patient tumors profiled across multiple dimensions. You can compare across cohorts, understand potential resistance pathways earlier, and align therapeutic strategies with the biology most likely to be encountered in the clinic.

Consider the decision to advance an asset into IND-enabling studies. In many organizations, this call is based primarily on genomic alignment, preliminary efficacy signals, and a limited view of resistance. Adding multi-omic and functional data changes the conversation. It allows teams to say, “Yes, the target is present at the DNA level, but the protein expression isn’t concordant,” or, “The mechanism looks strong in cell lines, but resistance emerges rapidly in pretreated PDX.” These insights don’t just inform science; they directly affect which assets receive investment and how development strategies are shaped.

A future with fewer blind spots

Attrition will always be a risk in oncology. Biology is unpredictable, and even the most carefully designed program may fail in the clinic. But the scale of today’s attrition, and the cost it imposes, does not have to be inevitable. By aligning our early data more closely with patient reality, we can reduce blind spots, strengthen translational confidence, and make smarter decisions about which programs deserve to move forward.
For pharma leaders, the payoff is not just fewer late-stage failures. It’s a more rational, efficient, and patient-centered pipeline. And for patients, it’s a better chance that the therapies entering trials are the ones with the greatest likelihood of success. That is the promise of stronger data, and the reason it should be the starting point for oncology R&D.

This isn’t just data. It’s a foundation for discovery.