Oncoscope-AI is now available on iPad
Request free access to get started

Anna Forsythe at ESMO AI & Digital Oncology 2025

ESMO AI & Digital Oncology Podium Presentation

Last week, Oncoscope-AI Founder & CEO Anna Forsythe gave a podium presentation to a packed room of clinicians, researchers, and industry leaders at ESMO AI & Digital Oncology in Berlin. The subject? What Oncoscope-AI does best: Living Systematic Review Linked to Guidelines and Regulatory Approvals as a Treatment Decision Support Tool. You can now watch Anna’s full talk, presented in Berlin, Germany on 12 November 2025.

Why Chatbots Aren’t Enough In Oncology

This article was originally published by Anna Forsythe in Forbes on 13 November 2025.

In the fast-moving world of oncology, clinical decision making has never been more complex—or more urgent. Thousands of new cancer studies are published every month, each with findings that could alter treatment pathways or reshape guidelines. For oncologists, research teams, hospitals and payers, the challenge isn’t simply finding information—it’s finding the right information, quickly and confidently. The market is full of AI-powered tools promising help. Many rely on large language models (LLMs) and chatbot-style interfaces that offer answers in conversational form. The appeal is obvious: type in a query, get an instant response. But in oncology—where the stakes are measured in survival rates—ease of use is not enough.

Why Decisions Are So Complicated

Consider a patient with late-stage lung cancer whose tumor harbors a rare genetic mutation. This is the reality of modern oncology, which offers targeted therapies for specific genetic mutations. The physician must weigh the disease stage, prior therapies, co-morbidities and preferences. They must verify whether a targeted therapy exists, check FDA approvals, review guideline recommendations and explore whether a clinical trial could provide access to the latest investigational drug. This involves combing through journal articles, conference abstracts and regulatory documents—each a piece of the puzzle. There is no “one-size-fits-all” solution in an era where targeted therapies produce individualized pathways. A chatbot might return a single response based on an editorial or opinion piece it “remembers,” presenting it as definitive. The nuance—say, that another trial showed limited efficacy in heavily pre-treated patients, or that guidelines recommend a different approach after immunotherapy—can easily be lost.

The Gold Standard: Systematic, Comprehensive, Expert-Vetted

Medicine relies on the hierarchy of evidence.
At its peak sit systematic reviews and meta-analyses—studies that evaluate and synthesize all available research. Regulatory agencies like the FDA, as well as organizations such as the American Society of Clinical Oncology (ASCO) and the National Comprehensive Cancer Network (NCCN), have long required systematic reviews as the foundation for guidelines and approvals. An effective oncology decision support tool must therefore also be systematic, with transparent, reproducible searches of all relevant research. It must be comprehensive, drawing from peer-reviewed journals, guidelines, conference abstracts and regulatory filings. It must be robust in distinguishing between high-quality randomized trials and weaker evidence. Just as importantly, it must update continuously (ideally daily) to reflect the latest research. Medical decisions based on outdated knowledge risk outdated care. Trained oncologists and other specialists can ensure the conclusions are accurate.

Where Chatbots Fall Short

I’ve found that even the most advanced LLMs cannot meet those criteria. Their weaknesses are structural. Built for speed and limited in transparency, chatbots rarely disclose their sources. They may omit references entirely, and without systematic searching, key studies are often missed. Their datasets often exclude recent guideline updates or pivotal conference results. Moreover, as black boxes reliant on opaque algorithms, chatbots provide no evidence grading. An editorial can appear with the same weight as a phase 3 trial. They may even fabricate references—the widely reported “hallucinations.” In my experiments, queries have sometimes returned outdated or false information. In one instance, a chatbot cited a non-existent study to me. Transparency of the dataset is critical, especially in a field where thousands of new studies are published each month.
Using AI on an iPhone to call a taxi is convenient, but in oncology, where each decision can alter survival, these shortcomings aren’t just inconvenient; for a patient with a rare mutation, they can mean the difference between hope and harm.

Beyond Oncology: A Universal Lesson

The risk of relying on incomplete or unverified evidence isn’t unique to cancer care. In finance, successful portfolio managers don’t bet other people’s money on one analyst’s hunch; they use meta-analyses of market data. In aviation, flight safety depends on synthesizing thousands of reports and assessments. No pilot would fly based on a chatbot’s opinion about turbulence. In public health, vaccine rollouts depend on systematic reviews of global trial data, not a handful of preliminary studies. Across industries, convenience cannot replace rigor. The ideal system in oncology—and other data-driven fields—is an expert-driven partner that can provide trustworthy insights.

The Human and AI Solution

Despite its limitations within chatbots, the beauty of AI is that it can scan millions of documents in seconds, helping detect patterns and surface relevant studies. With the mountain of data produced every day, that capability is undeniably important. But human experts are needed to bring judgment, clinical context and critical thinking to the mix. I think the winning model is a living systematic literature review (SLR)—continuously updated by AI, structured through reproducible methodology and validated by experts. (Disclosure: I lead an AI-assisted oncology evidence platform built on this type of approach.) LLMs power today’s chatbots—but they can also hallucinate or misread complex evidence. The approach I champion still uses LLMs, but with continuous expert oversight. Every data point should be verified by trained analysts and clinicians, eliminating hallucinations and ensuring full transparency. That said, I find this hybrid model effective but demanding.
It requires capital, expertise and time to build for each cancer type. And even then, people still prefer someone or something they can talk to. The future may lie in combining both approaches—a conversational chatbot connected to a rigorously curated, expert-verified database. By working to overcome these hurdles, pharmaceutical companies, payers and healthcare networks stand to benefit as much as clinicians. Beyond oncology, systematic, AI-augmented evidence synthesis has the potential to streamline internal decision making, support value-based care initiatives, strengthen negotiations and reduce duplication across research teams.

The Bottom Line

AI is here to stay, and its potential in healthcare is enormous. But in oncology—and in every field where lives or livelihoods are at stake—it must be deployed with discipline. Chatbots may offer instant, conversational answers, but approachability is not the same as reliability.

Anna Forsythe, pharmacist & health economist, is the Founder & CEO of Oncoscope-AI.

Smarter Oncology, Faster Access: How Anna Forsythe and Oncoscope-AI Are Reengineering Cancer Care

This article was originally published in Entrepreneur on 25 September 2025.

In oncology, a cascade of new trials, approvals, and guideline updates has become the norm. Yet the systems meant to translate that progress into care haven’t kept pace. Clinicians and product teams are inundated with data, but rarely is it organized to support fast, defensible decisions at the point of need. The result is often delays in care and lost time for patients.

Recognizing this disconnect, Anna Forsythe, a pharmacist, health economist, and founder of Oncoscope-AI, built a solution. Motivated by personal experience, including friends and colleagues who received care that lagged behind the evidence, she fused clinical insight with commercial acumen to create a platform that supports clinical decision-making and strategic evidence planning.

Her timing couldn’t be more critical. Today’s environment is split between regulatory acceleration and payer caution. Regulators increasingly offer accelerated pathways for medicines addressing unmet needs. Meanwhile, national payers demand deeply contextualized evidence before granting public reimbursement. This tension, compounded by external reference pricing and strategic launch sequencing, has led to uneven access across countries. Oncoscope-AI was designed to operate precisely in this gap between clinical urgency and regulatory rigor.

For physicians, it can compress hours of literature review into seconds. By entering stage, genetic markers, and prior therapies, clinicians receive a human-reviewed, guideline-aligned summary that surfaces survival outcomes, progression metrics, toxicity data, and approval status. Each data point is linked directly to primary studies and relevant guidelines for transparency and traceability. This clinical precision is matched by commercial depth.
Market access teams can define Population, Intervention, Comparator, Outcome (PICO) criteria, retrieve relevant studies, map drug availability across jurisdictions, and run simulations to model market impact and reimbursement risk. These capabilities are increasingly vital amid European Health Technology Assessment (HTA) reform, where Joint Clinical Assessments (JCAs) will standardize evidence evaluation across Member States. For qualifying products, developers must now submit evidence simultaneously to both the European Medicines Agency and the HTA secretariat, raising the bar for dossier preparation.

Anticipating this evolution, Oncoscope-AI’s roadmap now integrates European regulatory guidance, reimbursement decisions, and localized guideline text. It also provides exportable, auditable evidence tables to support dossier preparation. Its simulation engine runs on a continuously updated, expert-validated dataset. This helps ensure that market models reflect current trial outcomes and regulatory activity, not static literature snapshots.

Forsythe shares her observations of industry behavior. She acknowledges why companies sequence launches and manage pricing. “These are rational responses to fiscal realities and international price governance. But I believe technology can mitigate the inequities those strategies often produce,” she says.

Oncoscope-AI blends trained AI with human curation. The AI scans registries, preprints, journals, and filings to surface signals at scale. Domain experts validate relevance, extract numerical endpoints, and provide regulatory context. “Physicians don’t need more reading material,” Forsythe says. “They need the timely, relevant information that is tailored to the patient in front of them.” For pharmaceutical teams, this translates into strategic preparedness.
By identifying emerging comparators, simulating comparative effectiveness, and organizing evidence into auditable PICO-driven exports, companies can build stronger, timelier market access dossiers and anticipate reimbursement questions before they escalate. Industry analysts and consultancies have urged similar readiness strategies as the JCA takes effect.

Users are already seeing results. Oncoscope-AI’s simulation outputs pinpoint country-level evidence gaps and shorten dossier preparation. Exportable, PICO-aligned tables and country trackers allow teams to respond the moment a guideline or reimbursement decision changes, without restarting literature reviews from scratch.

It’s worth emphasizing that Forsythe frames equitable access not as a moral debate, but as a design challenge. She argues that system-level fixes, rather than focusing solely on industry behavior, will expand reach. Oncoscope-AI positions itself as a bridge between AI innovation and regulatory rigor at a time when scientific velocity often outpaces legacy workflows. The platform isn’t built for shortcuts. It’s built for readiness: an auditable, clinician-trusted channel from discovery to delivery. For Forsythe, the mission is both professional and ethical. She says, “If we want better outcomes in cancer care, we don’t need more information; we need smarter information.”
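To make the PICO-driven export workflow concrete, here is a minimal illustrative sketch in Python. All names, fields, and study data are hypothetical and do not reflect Oncoscope-AI’s actual schema or implementation; the point is only to show how predefined PICO criteria turn a study pool into an auditable, exportable table.

```python
import csv
import io
from dataclasses import dataclass


@dataclass
class Study:
    """One indexed publication (hypothetical schema, for illustration only)."""
    title: str
    population: str
    intervention: str
    comparator: str
    outcome: str


def matches_pico(study: Study, pico: dict) -> bool:
    """A study is retained only if it satisfies every predefined PICO criterion."""
    return all(getattr(study, field) == value for field, value in pico.items())


def export_table(studies: list[Study], pico: dict) -> str:
    """Write the PICO-matching studies to a CSV string, giving an auditable export."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["title", "population", "intervention", "comparator", "outcome"])
    for s in studies:
        if matches_pico(s, pico):
            writer.writerow([s.title, s.population, s.intervention, s.comparator, s.outcome])
    return buf.getvalue()


# Hypothetical study pool and PICO definition.
studies = [
    Study("Trial A", "NSCLC, EGFR+", "osimertinib", "chemotherapy", "overall survival"),
    Study("Trial B", "NSCLC, ALK+", "alectinib", "chemotherapy", "overall survival"),
]
pico = {"population": "NSCLC, EGFR+", "intervention": "osimertinib"}
print(export_table(studies, pico))
```

Because the criteria are declared up front and every row traces back to a named study, the same filter can be rerun whenever the study pool updates, rather than restarting the review from scratch.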

The Market Access Podcast: Will AI and Living Reviews Define the Next Era of Health Care Market Access?

Oncoscope-AI Founder & CEO Anna Forsythe was recently on the Market Access Podcast with Dr. Stefan Walzer to discuss how Living Systematic Literature Reviews (Living SLRs) are redefining evidence generation in oncology and beyond – highlighting the power of real-time updates, advanced automation, and the essential role of human insight. Traditional SLRs are static snapshots, while Living SLRs are real-time, dynamic, and AI-powered—delivering continuously updated insights crucial for life-or-death decisions and payer evaluations. Join this discussion as they explore the myth of AI chatbots as true decision support tools, the need for actionable data over summaries, and the future of evidence synthesis, clinical decision-making, and smarter market access. The episode is available on Spotify, YouTube, and PocketCasts.

The Danger of Imperfect AI: Incomplete Results Can Steer Cancer Patients in the Wrong Direction

This article was originally published in International Business Times on 09 October 2025.

Cancer patients cannot wait for us to perfect chatbots or AI systems. They need reliable solutions now—and not all chatbots, at least so far, are up to the task. I often think of the dedicated and overworked oncologists I have interviewed who find themselves drowning in an ever-expanding sea of data: genomics, imaging, treatment trials, side-effect profiles, and patient co-morbidities. No human can process all of that unaided. Many physicians, in an understandable and even laudable effort to stay afloat, are turning to AI chatbots, decision-support models, and clinical-data assistants to help make sense of it all. But in oncology, the stakes are too high for blind faith in black boxes.

AI tools offer incredible promise for the future, and AI-augmented decision systems can improve accuracy. One integrated AI agent increased decision accuracy from 30.3% to 87.2% compared to the baseline GPT-4 model. Clinical decision AI systems in oncology already assist in treatment selection, prognosis estimates, and synthesizing patient data. In England, for example, an AI tool called “C the Signs” helped boost cancer detection in GP practices from 58.7% to 66.0%. These are encouraging steps. But anything below 100 percent is not enough when life is at stake. Cancer patients cannot afford to wait for us to resolve the issues these technologies still have. We risk something far worse than delay; we risk bad decisions born from incomplete, outdated, or altogether fabricated information.

One of the worst issues is “AI hallucination”: cases where the AI has been found to present false information, invented studies, nonexistent anatomical structures, and incorrect treatment protocols. In one shocking example, Google’s health AI misdiagnosed damage to a “basilar ganglia,” an anatomical part that doesn’t exist.
The confidently presented output looked authoritative until physicians recognized the error. Recent testing of six leading models, including models from OpenAI and Google’s Gemini, revealed just how unreliable these systems can be in medicine. They produced confident, step-by-step explanations that looked persuasive but were riddled with errors, ranging from incomplete logic to entirely fabricated conclusions. In oncology, where every patient is an outlier, that margin of error is unacceptable. Even specialized medical chatbots, which may sound authoritative, still present opaque and untraceable reasoning—their sources inconsistent, and their statistics often meaningless. This is decision distortion.

The legal and ethical implications are real. If a treatment based on AI guidance causes harm, who is liable? The physician? The hospital? The AI developer? Medical-legal frameworks are scrambling to catch up, with some warning that overreliance on AI without human oversight could itself constitute negligence. The problem of AI hallucination extends beyond the medical realm. In the legal world, AI hallucinations have already led to serious consequences: in at least seven recent cases, courts disciplined lawyers for citing fake case law generated by AI. In one high-profile case, Morgan & Morgan attorneys were sanctioned after submitting motions containing bogus citations. If courts are demanding accountability for AI mistakes in law, how long before the medical malpractice lawsuits start being filed?

In oncology, especially, reliance on AI amplifies risk because of how the tools are trained. Many large language models or decision systems depend on fixed journal cohorts or curated datasets. New oncology breakthroughs may remain outside that training collection for months or years. When we query such a system, it may omit the newest trial, ignore emerging biomarkers, or default to an outmoded standard of care.
When AI invents studies or hallucinates efficacy, and doctors rely on it, patients pay the price. Moreover, cutting-edge medical data is often fragmented, diversified, and non-standardized; imaging formats differ, electronic health record notes are not uniform, and rare biomarkers may exist only in supplementary data. AI does best with well-structured, consistent data; it struggles with the disorder at the frontier of research. That means decisions about novel or borderline cases may be precisely where AI is least reliable.

I’m not arguing that we scrap AI in cancer care. On the contrary, we must keep developing these tools, pushing boundaries, harnessing the power of computation to spot patterns no human sees. But we must not hand over ultimate decision-making authority to them, at least not yet. Cancer patients deserve better than experiments. They deserve human physicians who remain in the loop, who audit, challenge, and interrogate AI outputs.

We need an architecture of human and AI collaboration. When a chatbot suggests a regimen, the oncologist should review supporting evidence, check for newly published trials, and confirm that the model’s assumptions match the patient’s specifics. The physician must own the decision. We can establish effective guardrails by implementing regular validation of AI systems with updated clinical data. By promoting transparency in training sources and mandating human review of all AI-suggested decisions, we can enhance overall trust in these technologies. Additionally, developing clear liability rules will help ensure accountability and foster responsible innovation. In practice, that means clinics deploying AI decision tools should monitor AI output, compare outcomes, run audits, and allow physicians to override or correct AI suggestions. We must also push for standardization of data, sharing across institutions, open and timely inclusion of new studies, and rigorous mechanisms to flag contradictions or hallucinations.
Without that, the models will always lag the frontier. Cancer patients cannot wait for us to achieve AI perfection. But they deserve the best possible care now, and that requires that we never abdicate human responsibility in the name of speed. AI must serve as an assistant, not a dictator. Humans are in charge of deliberation and decision-making, and they must always err on the side of caution when faced with unverified or ambiguous algorithmic output. AI chatbots are tools, not authorities. When we start letting algorithms decide instead of doctors, we have crossed from medicine into potential malpractice. Cancer patients don’t need perfect chatbots. They don’t have the time for the technology to catch up, and they cannot afford doctors who make decisions based on incomplete or outdated information. For patients and their families, the stakes are too high, and they deserve a much higher standard of care.

OncoDaily Interview: Could Oncoscope-AI Save Clinicians Hours – and Spare Patients Side Effects?

It’s good to talk about our why sometimes. That’s why we appreciate Emma Ter-Azaryan’s interview so much. Not just for her insightful questions, but for giving us an opportunity to publicly reflect on what Oncoscope-AI means to us. In this interview with OncoDaily, Oncoscope-AI Founder & CEO Anna Forsythe shares what drives her personally, an example of an oncologist using the tool and the impact it had, and the frustration of seeing people we love treated with chemotherapy because their doctors weren’t aware of updates in the guidelines and the research behind them. You can watch the full interview, “Could Oncoscope-AI Save Clinicians Hours – and Spare Patients Side Effects?”, on OncoDaily.

From OncoDaily: “In this episode of OncoDaily TV, host Emma Ter-Azaryan speaks with Anna Forsythe, CEO & Founder of Oncoscope-AI, to unpack how clinicians can cut through oncology’s data overload—FDA labels, guidelines, congress abstracts, and papers—and get to the right evidence in just a few clicks. What you’ll learn:
✅ What Oncoscope-AI is (in simple terms): a clinician-friendly “Expedia for evidence” that pulls from major medical databases, guidelines, regulatory updates, and congress outputs—cross-linked in one place.
✅ Essential vs. Edge: two workflows—patient-first decision support vs. deep-dive topic exploration (e.g., ADCs in lung cancer, mutation-specific updates).
✅ Power features: clickable disease maps, filter by congress (ASCO, World Lung, etc.), tumor-board prep, and one-click prior-auth reports with citations.
✅ Real-world impact: how a brand-new FDA approval surfaced that week and helped a patient access a better-tolerated therapy sooner.”

From Evidence To AI: Why The Future Of Oncology Decision Support Must Be Built On Living Evidence

This article was originally published in Forbes on 18 September 2025.

How do oncologists decide which treatment to give their patients? It’s rarely an easy choice. Physicians must weigh multiple levels of information, such as the patient’s disease stage, genetic markers, previous therapies, overall health and even personal preferences. Then comes the quest for evidence. To validate the optimal way forward, oncologists need to know not only what is effective, but also whether it is FDA-approved, guideline-adherent or available through a clinical trial. To find the best, most up-to-date information, that validation typically involves toggling between PubMed, society guidelines, journal notifications and conference summaries, and then reconciling information that doesn’t always align.

All of this is tedious and time-consuming. Time that most oncologists don’t have. In a high-volume clinic, a medical oncologist may see 30 to 50 patients in a day. But even with all of those time pressures, each and every decision should be made with the latest, most complete and scientifically valid evidence available. The stakes are high. With the mountain of new research and evidence published in oncology journals constantly expanding, evidence literally shifts by the day. Those shifts in evidence—the decisions between the right and wrong treatment—can be life or death.

The Enduring Value Of Evidence Hierarchies

Medicine has long recognized that not all evidence is created equal. A single case report may stimulate ideas, but it cannot guide practice. Observational studies provide associations but not certainty. Randomized controlled trials minimize bias and provide more insight. But at the very top of the hierarchy are systematic reviews and meta-analyses, which combine the entire weight of the evidence. This hierarchy matters because medicine is complicated.
If we relied on anecdotes or headlines in isolation, patients would be subjected to treatments that look promising by themselves but prove ineffective or even counterproductive when considered in context. For this reason, organizations from the FDA to the WHO mandate Systematic Literature Reviews (SLRs) when shaping guidelines, approvals and policies. Systematic reviews are the gold standard for evaluating medical evidence—the safety net for modern medicine. They protect us from the risks of cherry-picking studies, overvaluing anecdotes or relying on unverified opinions.

The Lure And Risk Of Chatbots

Given the deluge of new medical information—and the tedium of just reading it all, let alone evaluating it—it’s no wonder that AI chatbots have captured attention. Faced with information overload, the idea of typing a quick question and receiving a fluent, confident paragraph or two is more than just appealing. It can be viewed as a lifeline for busy oncologists. But that’s where the danger lies. Chatbots don’t conduct systematic reviews. They can’t distinguish between high-quality trials and weak studies. They don’t verify whether a therapy is FDA-approved or buried in an outdated guideline. And in some cases, they even fabricate references, miss key data or rank that data inappropriately. Convenience can be seductive, but in oncology, where the margin for error is minute, the cost of incomplete or inaccurate information is disastrous. That convenience might be harmless if you’re asking Siri to find the nearest grocery store. But in cancer treatment, the right choice can extend life. The wrong choice can cut it short.

Evidence Hierarchies Matter Everywhere

The lesson extends well beyond oncology. In cardiology, guidelines for heart failure shift frequently. Missing an update could mean prescribing a less effective therapy.
In infectious disease, choosing the wrong antibiotic fuels global resistance—making “tried and true” therapies less potent and newly approved therapies a better option. Outside of medicine, the same principle holds true. Financial advisors trust portfolio strategies grounded in decades of cumulative analysis, not a single trader’s hunch. Aviation safety regulations are shaped by the aggregation of countless investigations, not anecdotal exceptions. Across industries, systematic, comprehensive evidence beats selective inputs every time.

From Static Reviews To Living Evidence

If chatbots aren’t the solution, then what is? The answer lies in bringing evidence hierarchies into the era of AI. Imagine a living systematic review in real time, providing a comprehensive, up-to-date synthesis of the evidence—backed by AI and vetted by humans. Instead of replacing systematic reviews, AI in this new paradigm augments them. Algorithms filter through the sheer volume of new publications, screen for relevance, raise quality issues and update evidence maps in real time. And then experts evaluate the results before they reach the physician’s desktop. This model is rigorous yet addresses medicine’s biggest bottleneck—time. Doctors would no longer be forced to sort through hundreds of studies manually. Instead, they would access a dynamic, physician-ready summary rooted in the totality of evidence. AI does the heavy lifting of scanning and sorting, while human experts remain the arbiters of interpretation.

A Human-AI Partnership

This combination is the future that I am dedicated to and the foundation of the work that my team is producing. At Oncoscope, we don’t rely on generative AI to spin out answers. Instead, we use a suite of AI models to reproduce and accelerate the standardized steps of a systematic review. Think of it like a symphony. AI can tune the instruments, arrange the sheet music and keep the score updated in real time.
But only the conductor—the oncologist—can interpret the music for the audience. This collaboration leverages each party’s strength: machines are better at speed and repetition, while humans are better at judgment and context. The end product is evidence, both thorough and up-to-date, that doesn’t overwhelm the clinicians who need to implement it.

Why Caution Matters Now

The enthusiasm around AI in healthcare is understandable. Physicians are busy, patients are better informed than ever and the pace of discovery keeps accelerating. But in our rush to adopt new technology, we risk abandoning the very safeguards that make modern medicine safe. It would be unthinkable to prescribe chemotherapy based on a single press release, yet we risk doing something similar if we accept unverified chatbot outputs at face value. In oncology, where decisions can never be undone, shortcuts are dangerous. Archibald Cochrane, the father

Systematic Literature Review Versus Chatbots: Why In Oncology, It’s Not a Choice

In the age of artificial intelligence, speed is often mistaken for rigor. Nowhere is this more dangerous than in oncology, where treatment decisions can mean the difference between life and death. Some technology companies tout “systematic literature reviews” (SLRs) generated in minutes by chatbots that claim to scan thousands of papers across the internet. The appeal is obvious: quick, accessible, and seemingly comprehensive. But in reality, these outputs are neither systematic nor reliable. For oncologists, payers, and researchers, understanding the distinction between a true SLR and a chatbot’s surface-level search is not just academic—it’s essential.

The Gold Standard: What a True SLR Involves

A systematic literature review is the gold standard for evidence synthesis in medicine. It is the foundation of evidence-based practice because it minimizes bias, ensures completeness, and enables decisions to rest on the strongest available science. A rigorous SLR begins with a protocol: a predefined roadmap that frames the research question and methods. It requires carefully constructed search strategies, typically using combinations of keywords and controlled vocabulary, to capture every relevant publication across peer-reviewed databases. The process doesn’t stop there. Grey literature—such as abstracts from scientific conferences—must also be included, since cutting-edge oncology data often appears in congress presentations long before it reaches a journal. From there, studies undergo multi-step screening against strict inclusion and exclusion criteria: patient population, interventions, comparators, outcomes, and study design (the classic PICO framework). Each selected paper is then critically appraised for quality and relevance. Only after this painstaking filtering does the work of synthesis and interpretation begin. This is not a clerical exercise.
It requires advanced training, sound judgment, and clinical insight to evaluate conflicting results, contextualize findings, and translate them into actionable conclusions.

Why Chatbots Fall Short

Chatbots, even those powered by large language models (LLMs), cannot replicate this process. At best, they skim unstructured text. At worst, they hallucinate citations or omit critical studies. They lack protocols, inclusion criteria, appraisal of study quality, and a transparent audit trail. What results may look convincing on the surface—but it lacks the depth and reliability required in oncology. When a chatbot says it can “review 1,000 studies in seconds,” what it’s really doing is producing a text summary based on whatever sources it happens to ingest. There is no guarantee that the sources are peer-reviewed, complete, current, or even real. That is not an SLR.

Why It Matters in Oncology

Oncology is not forgiving of shortcuts. Selecting the right therapy for a patient is an exercise in precision: choosing between regimens, sequencing targeted therapies, balancing efficacy and toxicity, and staying current on breakthroughs that can extend survival or improve quality of life. In this context, incomplete, outdated, or fabricated evidence isn’t a minor flaw—it’s a threat to patient safety. The rigor of a systematic literature review is not a “nice to have”; it’s the foundation for making responsible decisions in cancer care.

The Path Forward

AI absolutely has a role to play in evidence synthesis. When paired with human expertise and transparent methodology, it can accelerate searches, streamline screening, and reduce administrative burden. But AI must serve the process—not replace it. In oncology, the choice isn’t between a chatbot and a systematic literature review. It’s between cutting corners and saving lives. The stakes are too high for anything less than living, rigorous, and human-guided evidence.
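The multi-step screening described above—each record passing or failing explicit criteria at each stage, with every exclusion reason recorded—can be sketched in a few lines of Python. This is a simplified, hypothetical illustration of the pattern (the rules and records are invented), not a real SLR tool; its point is the transparent audit trail that a chatbot summary lacks.

```python
def screen(records, title_rule, fulltext_rule):
    """Two-stage screen: records failing a stage are logged with the reason,
    so every exclusion is traceable and the whole process is reproducible."""
    included, excluded = [], []
    for rec in records:
        ok, reason = title_rule(rec)
        if not ok:
            excluded.append((rec["id"], "title/abstract: " + reason))
            continue
        ok, reason = fulltext_rule(rec)
        if not ok:
            excluded.append((rec["id"], "full text: " + reason))
            continue
        included.append(rec["id"])
    return included, excluded


# Hypothetical inclusion rules: keep randomized trials in the target population.
def title_rule(rec):
    return ("NSCLC" in rec["abstract"], "population out of scope")


def fulltext_rule(rec):
    return (rec["design"] == "RCT", "not a randomized trial")


records = [
    {"id": "S1", "abstract": "RCT in NSCLC", "design": "RCT"},
    {"id": "S2", "abstract": "melanoma case report", "design": "case report"},
    {"id": "S3", "abstract": "NSCLC cohort", "design": "observational"},
]
included, excluded = screen(records, title_rule, fulltext_rule)
print(included)   # ['S1']
print(excluded)   # S2 fails at title/abstract, S3 at full text
```

Because the rules are explicit functions rather than prompts, two reviewers running the same screen on the same records will always get the same result—the reproducibility that distinguishes an SLR from a chatbot search.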
Anna Forsythe is the Founder and President of Oncoscope-AI, the first platform to bring together real-time oncology treatment data, clinical guidelines, research publications, and regulatory approvals — all in one place, just like Expedia for cancer care. Available free to oncology professionals worldwide, Oncoscope-AI is redefining how cancer care information is accessed and applied.

Login

Essential

Log in to the Oncoscope-AI Essential platform.

Edge

Log in to the Oncoscope-AI Edge platform.