Controversial New Guidelines Would Diagnose Alzheimer’s Before Symptoms Appear

According to new expert recommendations, Alzheimer’s can be diagnosed by detecting the disease’s underlying biology, even before the onset of cognitive decline


It is impossible, of course, to identify the precise moment we first suspected the changes in my mother were something other than normal aging. In my own imperfect memory, what rises up is the first morning of a weeklong trip to Rome, when my mother woke up at 2 A.M., got dressed and went down for breakfast. A hotel employee found her wandering from room to room, looking for toast and coffee. She was jet-lagged, my brother and I assured each other uneasily. It could happen to anyone. But weren’t there cues? Didn’t she notice the darkened lobby, the stillness, the clock?

If we had known then, would it have helped? To date, no Food and Drug Administration–approved therapy exists for asymptomatic people at risk of Alzheimer’s disease (AD). My mother was not a smoker, drank in moderation, read books, took classes, and spent the week soaking up everything the tour guide had to tell her about Caravaggio and Bernini like she was prepping for the quiz.

It was five years before my mother received a diagnosis of dementia. Today, a simple blood test can detect changes in the brain that predict AD up to 15 years before the first symptoms emerge. For researchers, tools for early detection give a peek at the full spectrum of AD, pinpointing early seeds of pathology deep inside the brain. Cognitive decline—what we typically think of as the disease itself—is merely an end-stage denouement. “Dementia is a result. Dementia is a symptom,” explains Clifford R. Jack, Jr., a neuroradiologist at the Mayo Clinic in Rochester, Minn., and chair of the Alzheimer’s Association (AA) working group responsible for new and controversial guidelines for the diagnosis of AD based on the underlying biology, not clinical presentation.


Biomarkers for AD—signs of the physical changes in the brain that contribute to disease progression—have been available for more than two decades. In 2007 an international working group (IWG) of dementia experts described biomarkers as supporting evidence for a diagnosis of the disease, which was still defined largely as neuropathologist Alois Alzheimer had characterized it back in 1906: progressive memory loss, confusion and personality changes caused by distinctive plaques and tangles in the brain. For almost a century, those brain changes could be confirmed only on autopsy; while the affected person was alive, the label was merely presumptive. In fact, postmortem studies have found that up to 30 percent of people who received a clinical diagnosis of AD did not have the characteristic plaques and tangles.

Does it matter what caused a person’s incurable dementia if ultimately the result is the same? Yes, for many reasons, it does. A lack of specificity means that affected people and their families receive less accurate information about disease prognosis and the risk that it may recur for other family members. And it stymies researchers, whose study populations are a confounding mix of true and false positives. “Can you imagine a clinical trial for cancer where a third of the people didn’t have cancer?” Jack asks, somewhat indignantly.

Incorporating biomarkers into clinical care was envisioned as a way of reducing that uncertainty in the diagnosis of AD. But in 2018 a group sponsored jointly by the AA and the National Institute on Aging (NIA) made a more radical proposal: biomarkers didn’t confirm a diagnosis of AD; they were a diagnosis of AD. Under this paradigm, AD, a condition characterized by abnormal protein deposits in the brain, is a disease that begins with an asymptomatic phase and progresses—if the patient lives long enough—to mild cognitive impairment (MCI) and eventually to a level of dementia that interferes with the processes of daily living.

For researchers, it was helpful to reconceptualize the disease as a continuum with distinct stages that correspond to physiological changes in the brain. As Bruno Dubois, a professor of neurology at Pitié-Salpêtrière University Hospital in Paris and a leading member of the IWG, wrote in a paper published in 2018, “Considering AD only at a dementia stage is detrimental to care for patients affected by the disease.” Dementia, he argued, was evidence of damage that was already beyond repair; identifying people earlier opened the door to preventive care—were that to exist—and created a valuable pool of candidates for research. For this reason, and to clarify that the recommendations were not intended for general clinical practice, the 2018 guidelines were designated a “research framework.”

Since 2018 two fundamental new developments have changed the stakes for early diagnosis. First, there is the arrival of “disease-modifying” treatments—not cures, but drugs that slow the course of the disease. Lecanemab, approved by the FDA in July 2023 for people with MCI or mild dementia, is given by infusion every two weeks and has been shown to reduce disease progression by 27 percent. Its effectiveness in the later stages of the disease, as well as in asymptomatic people, is unproven. Donanemab, approved by the FDA almost exactly one year later, is a monthly infusion with a similar side effect profile and comparable effectiveness.

Second, testing may soon become more accessible because of new blood-based biomarkers. In 2018, Jack explains, “biologically based diagnosis required either a PET [positron-emission tomography] scan, which is obviously expensive and not widely available, or required a lumbar puncture for cerebral spinal fluid—again, invasive.” A simple, inexpensive and highly accurate blood test that can be done anywhere has greatly expanded the potential role of biomarker testing in routine clinical care.

And so in 2023 the AA convened another working group, chaired by Jack, to revisit the 2018 criteria. The updated recommendations, which were published on June 27, 2024, double down on the idea of biomarkers as diagnostic. Because there are currently no validated treatments for those without symptoms, the new guidelines discourage testing asymptomatic people except in the context of research. But clearly the authors envision a time when presymptomatic testing will be routine. “The point is,” Jack explains, “we’re setting the stage for the future when there are approved treatments that have been proven to lessen developing impairment in people who are now cognitively unimpaired.”

Under the new proposal—now a “clinical framework” rather than a “research framework”—it is not inconceivable that, someday soon, you will walk into your annual physical feeling perfectly healthy and leave with a diagnosis of AD. In Jack’s view, that is a matter of empowering patients. “Paternalism in medicine is a bad idea,” he insists. “Tell people the truth, and educate people on what it means.” In some future universe, those diagnosed might be offered a simple pill or other treatment to delay or prevent dementia altogether. “That,” Jack says, “is the holy grail.”

But other groups, including the IWG of dementia experts and the American Geriatrics Society, disagree vehemently on the use of the word “diagnosis.” The introduction of biomarkers made the disease a “clinical-biological entity,” Dubois explained in an e-mail to Scientific American. For the IWG, a diagnosis requires a clinical phenotype of disease in combination with biomarkers. “Both are necessary,” Dubois added.

The difference between “diagnosed” and “at risk” might seem like semantics, but the word “diagnosis” has significant real-world implications. It carries weight with clinicians and insurance companies, increasing the odds that therapeutics will be prescribed and covered. This can be a good thing, and it can be a problem. As one of the commenters on the second draft of the 2024 diagnostic criteria pointed out, a substantial number of the people diagnosed with AD under these criteria will die without ever having exhibited signs of dementia. The possibility of developing a screening test to identify presymptomatic AD merits our attention, but, the commenter added, screening programs elsewhere in medicine have shown mixed results: some “have yielded clear benefits and little harm to individuals, [whereas] other efforts have resulted in wasted resources and even potential harm.” Both of these outcomes are possible consequences of the expanded use of lecanemab and donanemab, which cost $26,500 and $32,000 per year, respectively, and come with a side effect profile that includes a risk of swelling and bleeding in the brain.

Using biomarkers for AD as a basis for diagnosis rather than risk assessment also raises questions about how this information will be received by those who are diagnosed. Is it burdensome to live with the knowledge that you are at risk for dementia? This question is familiar to geneticists, who have been wrestling with it since testing became available for genes associated with AD more than two decades ago.

These genes generally fall into two buckets. Mutations in three genes cause an early-onset version of AD, with signs of dementia generally in evidence by age 60 to 65. Collectively, these rare mutations account for less than 1 percent of people with AD, and they are deterministic in a way that is atypical for genetics. If you have one of these mutations, it is a virtual certainty that you will develop symptomatic AD by age 65. Given this stark reality, it was understood that finding out you carried one of these mutations could be traumatic, and genetics professionals developed a protocol to help people decide if they wanted to test for the genes and to help them cope with the result.

In the other bucket, there are susceptibility genes, principally APOE, which has a profound but still incremental effect on the risk of AD. Carriers of a single copy of the variant APOE4 are two to three times more likely than the general population to get AD by age 85. Dosage creates a multiplier effect; people who inherit APOE4 from both of their parents are 12 times more likely to get the disease. These individuals, called APOE4 homozygotes, have a 60 percent chance of developing AD by age 85. And this gene, unlike its more deterministic cousins, is not really rare at all; 2 percent of the U.S. population are APOE4 homozygotes.
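To make those multipliers concrete, here is a minimal sketch, in Python, of the arithmetic implied by the figures above; the general-population baseline is inferred from the article’s own numbers for illustration, not taken from independent epidemiological data.

```python
# Back-of-the-envelope arithmetic for the APOE4 risk figures cited above.
# The baseline risk is inferred from those figures for illustration only;
# it is not an independent epidemiological estimate.

HOMOZYGOTE_RELATIVE_RISK = 12    # two copies of APOE4: ~12x the general population
HOMOZYGOTE_ABSOLUTE_RISK = 0.60  # ~60 percent chance of AD by age 85

# Implied general-population risk of AD by age 85: 0.60 / 12 = 0.05
baseline_risk = HOMOZYGOTE_ABSOLUTE_RISK / HOMOZYGOTE_RELATIVE_RISK

# One copy of APOE4: two to three times the baseline
single_copy_low, single_copy_high = 2 * baseline_risk, 3 * baseline_risk

print(f"Implied baseline risk of AD by 85:  {baseline_risk:.0%}")        # ~5%
print(f"One copy of APOE4 (2-3x baseline):  "
      f"{single_copy_low:.0%}-{single_copy_high:.0%}")                   # ~10-15%
print(f"Two copies of APOE4 (12x baseline): "
      f"{HOMOZYGOTE_ABSOLUTE_RISK:.0%}")                                 # 60%
```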

Robert C. Green, a medical geneticist at Harvard Medical School, who led a series of trials examining the impact of disclosing APOE AD risk to asymptomatic adults, recalls the uncertainty that swirled around the issue of APOE testing in the early 2000s. “We didn’t know what percentage among families would want this. And most importantly, we didn’t know whether we would be creating catastrophic distress in those family members if we disclosed a risk factor for a then—as now—untreatable condition.” This decades-long effort, called the Risk Evaluation and Education for Alzheimer’s Disease (REVEAL) study, established that many but not all people wanted the information—and that while it was of course upsetting news for those who tested APOE4-positive, it was information that people could handle. In fact, Green says, those who chose to participate in the study reported that getting their APOE results had “personal utility”: “They actually did things with the information that were important to them,” he says. “They purchased a different kind of insurance. They had conversations with their family. They at least considered making changes in their career planning.”

But a key element in the REVEAL messaging was that APOE was not deterministic. Now, as with biomarkers, some researchers are making the argument that two copies of APOE4 equal a diagnosis of AD, potentially muddling that message. According to Jack, under the 2024 AA criteria, APOE4 homozygosity, like a mutation in an early-onset AD gene, is considered a “stage 0 diagnosis.” This logic reflects the fact that people are born with a condition that likely affects processes in their brain throughout their lifetime; it does not mean they will necessarily get dementia, and many will not.

A similar conclusion was reached by the authors of a large study that compared genotype with biomarkers and postmortem investigations of the brain and was led by neurologist Juan Fortea, director of the Sant Pau Memory Unit in Barcelona. In a paper published in May 2024, they presented evidence that almost all APOE4 homozygotes had, at minimum, early evidence of disease-associated changes in their brain by age 65. Earlier onset and worse prognosis, Fortea and his colleagues argued, mean that this was a distinct form of AD for which the genotype was diagnostic. Affecting 2 percent of the world’s population, they concluded, made it “one of the most frequent Mendelian diseases.”

Fortea and his co-authors, like the authors of the 2024 AA guidelines, emphasized that the point of reconceptualizing the disease was not to label healthy people with AD before a treatment becomes available. But that in fact may be the real-world consequence of adopting these changes. Bioethicist Emily A. Largent, an associate professor of medical ethics and chief of the Division of Medical Ethics at the University of Pennsylvania, has written about the social context in which a redefinition of AD will play out. “People are learning risk information in the absence of meaningful protections in key areas,” Largent explains. “We really need to be thinking about what happens when the patient or research participant leaves the clinic, and now they have this information, and they’re operating in the world.”

Largent and her colleagues conducted a study, published in 2020, about the impact of revealing AD biomarker status to healthy adults aged 65 and older, and in many key ways, their findings echo the lessons of the REVEAL study. Participants with positive biomarkers experienced negative feelings but not “extreme distress.” They perceived the information as useful and made changes in their lives, including getting their finances in order, purchasing insurance, prioritizing “bucket list” activities and moving closer to family. But test results also changed the lens through which they saw their own experiences. Those who tested negative waved off minor lapses and “senior moments” as normal aging, while those who tested positive saw them as evidence of disease. “We have some haunting quotes from people,” Largent adds, “asking, basically, Is this how it starts? Is this how it begins?”

People who tested positive also worried about being viewed by others as dementia adjacent. Largent found that concerns about stigma and discrimination were common and included fears of being treated differently by friends and family members. “They’re like, I’m worried that my kids are going to take away the car keys or won’t let me babysit my grandchildren,” she says.

Participants also expressed concerns about potential discrimination by employers and insurance companies. Green has spoken many times about the extreme distress experienced not by APOE carriers but by insurance executives when they first heard about APOE4 testing. “I got invited to all sorts of insurance meetings,” he recalls. Companies were worried that informed customers would buy long-term care insurance only if they were at increased risk, shifting the actuarial landscape. Now ethicists worry that as testing becomes easier to access and more routine, insurance companies may turn the tables on their customers, refusing coverage to people who are biomarker-positive or APOE4 homozygotes.

Worries about employment discrimination are not unfounded. Under the Genetic Information Nondiscrimination Act (GINA), passed in 2008, employers are forbidden from discriminating based on genetic information about their employees. But redefining biomarkers and genotype as diagnostic may make employees more vulnerable. GINA’s protections do not extend to what is called “manifested disease,” which the government defines as the stage at which a disease could reasonably be diagnosed by a health care professional. What protections survive when the genotype itself, one’s genetic makeup, is declared diagnostic?

People who test biomarker-positive might find their employers disinclined to embrace a wait-and-see attitude, especially if they perceive liability in having an employee at high risk for cognitive impairment. Would you go to a surgeon or hire a lawyer who was biomarker-positive for AD? Would you consider that something they should be required to disclose? Viewing AD as a continuum is intended to facilitate early intervention but could end up making people hesitant to seek care or participate in clinical trials that require they be tested for biomarkers or AD genes. “Ultimately,” Largent says, “I think that we might really limit our ability to recruit people for research if we don’t come up with adequate protections for people outside the research setting.”

Both the 2024 AA guidelines and Fortea’s recent paper stress that the answer is to discourage testing healthy people outside of research settings until a treatment is available. The wild card here is that this is no longer entirely in the control of clinicians or professional organizations. APOE testing has been available direct-to-consumer for more than a decade, and the first biomarker blood test available for purchase was announced in 2023.

But even if you assume that you are only offering the test to people who are symptomatic, where exactly do you draw that line? Dementia does not emerge fully formed like Venus from her shell. By all reasonable measures, my mother was cognitively intact that morning in Rome. Later, as her confusion advanced, she agreed when I urged her to see a doctor and then reneged, telling me tearfully that she was afraid to hear the word. Would the delay afforded by medication have been compensation enough for living that last decent year of her life under the cloud of a diagnosis? These are new questions for an aging population.

“It’s an important step forward in medicine,” Green notes, to redefine conditions by their biology and not by clinical manifestations that show up “late in the game.” That, Green adds, “is a strategy that has proved helpful in cystic fibrosis and sickle cell disease.” But at the same time, he concedes that the use of the word “diagnosis” might incorrectly suggest inevitability. For his APOE4 homozygote patients, Green frames it as “you’re at increased risk for AD, but, you know, it’s not for sure. And there’s a good chance you’ll die of something else. So let’s not get too excited about this. Party on.” REVEAL and other studies have shown that affected people who receive counseling can understand what is meant by being at risk. But as Largent’s work demonstrates, educating such a person alone may not be sufficient if the society around them is unable to disentangle dementia from its literal plaques and tangles.

Early identification of those at risk for dementia may be a mix of hazards and benefits for affected people and their families, but it is a straight-up boon for researchers and those marketing therapeutics. Presymptomatic diagnoses will boost sales of preventive measures and provide a population of the worried well who will raise money and put pressure on government agencies to fund research, approve drugs and make sure they’re covered. Identifying those most likely to develop dementia also helps researchers, making clinical trials more efficient and less expensive.

For this reason, there has been criticism of the makeup of the AA working group; a New York Times article reports that about a third of its members “are employed by companies developing drugs and diagnostics” and that another third disclose “research grants or contracts, consulting fees, honorariums or other payments from industry sources.” When asked by Scientific American, Jack says that he himself has no conflict of interest and “honestly didn’t see any kind of bias or commercial interest seeping in.”

Let’s assume that eventually there will be preventive measures that justify routine testing for early intervention. The fact is that a healthy adult diagnosed with AD today will face many uncertainties and few options, even as they potentially contribute to that better future as participants in ongoing research.

Our ability to redefine diseases by what they are and not how they manifest is a powerful tool and one that permits us to better predict, diagnose and treat a whole range of conditions. “It’s a pretty amazing moment,” Largent acknowledges, “but ... the science is really outpacing policy right now. We are having all these changes in the midst of a system that needs to find new ethical, legal, social, clinical ways of helping patients and families.”

Laura Hercher is a genetic counselor and director of student research at the Joan H. Marks Graduate Program in Human Genetics at Sarah Lawrence College. She has written broadly on ethical, legal and social issues related to genetic medicine. Hercher is the host of The Beagle Has Landed, a podcast for the clinical genetics community and other sci-curious individuals.
