Long COVID continues to be a COMPLICATED story

Most people know somebody who's had COVID and then suffered for a long time as a consequence – just never quite right since.  Those are the "usual" Long COVID people.  Unfortunately, many haven't gotten connected with folks who can help them, and so their suffering persists longer than it may need to (we can help those people).  Now, though, there's another group of patients who may be developing Long COVID MONTHS after seemingly being "cured".  

Most Long COVID patients (about 84%) tend to improve within a year (doesn't that sound like a really long time? It does to me.).  But "it (was) common for symptoms to resolve then re-emerge months later."

Yikes.  

Interestingly, when they compared COVID vs non-COVID groups for symptoms, there really wasn’t a significant difference (18.3% vs 16.1%).  That means that 1 in every 5 or 6 people is walking around every day with symptoms that fall into the Long COVID-type buckets, whether or not they ever had COVID.  This seems like the basic issue – lots of people are walking around with “stuff” that’s not being handled (we can help those people, too.)

Oh, and by the way, the CDC reports that Long COVID has fallen to only 6% of the population. Heck, nothing to worry about – it’s ONLY 20 million people!

Long COVID Symptoms May Emerge Months After Infection

— Fewer Americans have long COVID but many have significant activity limitations, CDC data show

Long COVID symptoms may emerge months after SARS-CoV-2 infection, data from the prospective multicenter INSPIRE study suggested.

Symptom prevalence decreased over 1 year among long COVID patients, but persisted or emerged at different time points in some cases, reported Sharon Saydah, PhD, of the CDC's National Center for Immunization and Respiratory Diseases, and co-authors in the Morbidity and Mortality Weekly Report.

For about 16% of study participants, symptoms lasted 12 months after their initial SARS-CoV-2 test. At 3, 6, 9, and 12 months after testing, some people had ongoing symptoms, while others had emerging symptoms not reported previously.

"It was common for symptoms to resolve then re-emerge months later," noted co-author Juan Carlos Montoy, MD, PhD, of the University of California San Francisco.

"A lot of prior research has focused on symptoms at one or two points in time, but we were able to describe symptom trajectory with greater clarity and nuance," Montoy said in a statement. "It suggests that measurements at a single point in time could underestimate or mischaracterize the true burden of disease."

INSPIRE was designed to assess long-term symptoms and outcomes among people with COVID-like illness who had a positive or negative SARS-CoV-2 test result at study enrollment. Participants who completed baseline and 3-, 6-, 9-, and 12-month surveys were included to identify emerging and ongoing symptoms.

A total of 1,741 people completed all quarterly surveys through 12 months, including 1,288 COVID test-positive and 453 COVID test-negative participants. Most participants were female.

Outcomes included self-reported symptoms in eight categories: extreme fatigue; cognitive difficulties; cardiovascular; pulmonary; musculoskeletal; gastrointestinal; constitutional; or head, eyes, ears, nose, and throat.

The prevalence of any symptom decreased substantially from baseline to 3-month follow-up -- from 98.4% to 48.2% for COVID-positive participants, and from 88.2% to 36.6% for COVID-negative participants.

Persistent symptoms decreased over the year. Emerging symptoms were reported for every symptom category at each follow-up period for both groups.

At 12 months, symptom prevalence was similar between groups, at 18.3% in the COVID-positive group and 16.1% in the COVID-negative group (P>0.05).

"We were surprised to see how similar the patterns were between the COVID-positive and COVID-negative groups," Montoy noted. "It shows that the burden after COVID may be high, but it might also be high for other non-COVID illnesses. We have a lot to learn about post-illness processes for COVID and other conditions."

In other research published in the Morbidity and Mortality Weekly Report, a national survey showed the prevalence of long COVID fell to 6.0%.

The survey also found that one in four people with long COVID (26.4%) had significant activity limitations, reported Nicole Ford, PhD, of the CDC's National Center for Immunization and Respiratory Diseases, and co-authors. The findings came from the Census Bureau's Household Pulse Survey from June 1-13, 2022 to June 7-19, 2023.

Among people who reported a history of previous SARS-CoV-2 infection, long COVID prevalence fell from 18.9% in 2022 to 11% in 2023. In the overall U.S. population -- irrespective of history of previous COVID-19 -- the prevalence of long COVID dropped from 7.5% to 6.0%.

Among both groups, prevalence declined from June 2022 through January 2023 before stabilizing.

The percentage of people with significant activity limitations didn't change over time, the researchers said. Only adults under age 60 experienced significant rates of decline (P<0.01).

"These findings highlight the importance of COVID prevention, including staying up to date with recommended COVID-19 vaccination, and could inform healthcare service needs planning, disability policy, and other support services for persons experiencing severe activity limitation from long COVID," Ford and colleagues wrote.

"Limited ability to carry out day-to-day activities because of long COVID symptoms can have a significant impact on quality of life, functional status, and ability to work or provide care to others," they added. "Long COVID in U.S. adults has also been associated with lower likelihood of working full time and higher likelihood of being unemployed."

Source: https://www.medpagetoday.com/neurology/lon...

The reports of (my) death are greatly exaggerated

ChatGPT again tolls the bell of impending doom, etc.  Or not.  

The whole generative AI space is very promising, and some really interesting results have come out of evaluating the technology in the healthcare space.  This article's title again makes the case that the computer is better than the doctor.  And in some cases, that's true – where you might think it would be helpful, like with real oddities, complex collections of uncommon stuff, etc.  But there continue to be issues with things the regular doc would be thinking of long before the computer got around to them.  And don't get me started on the MSU (Make S**t Up) quotient that I've touched on before – it continues to be a real problem.  

The future of AI-assisted healthcare is, indeed, very bright.  But don't get ahead of yourself – it's like consulting Dr. Google – oh, I have cancer… but it really just looks like a splinter…

AI Beat Clinicians at Figuring Out Difficult Diagnoses

GPT-4 may help with diagnoses that have been missed by clinicians, study author says

A generative artificial intelligence (AI) program diagnosed elderly patients with extensive medical histories and long hospital stays more accurately than clinicians, suggesting the technology could help identify missed diagnoses, according to a new study.

An analysis of medical histories for six patients over the age of 65 with delayed diagnoses revealed that GPT-4 (Generative Pre-trained Transformer 4, made by OpenAI) accurately diagnosed four out of six patients, according to Yat-Fung Shea, MBBS, of the Department of Medicine at Queen Mary Hospital and University of Hong Kong, and coauthors.

By comparison, clinicians accurately diagnosed only two out of six of those same patients, according to a research letter published in JAMA Network Open.

When differential diagnoses were included, AI's accuracy improved to five out of six patient diagnoses, compared with three out of six correct patient diagnoses made by clinicians.

Differential diagnoses were also generated using a medical diagnostic decision tool known as Isabel DDx Companion. This tool accurately diagnosed none of the patients in the initial attempt, and two out of six patients when provided differential diagnosis information.

"GPT-4 may be able to provide potential diagnoses which have been missed by clinicians," Shea told MedPage Today in an email. "If a doctor encounters elderly patients, who have been admitted into hospital for work-up for at least a month but [are] still without a definite diagnosis, he/she can consider using GPT-4 to analyze the medical histories."

"GPT-4 may help clinicians to analyze clinical situations with diagnostic difficulties, especially in alerting clinicians to possible underlying malignancies or side effects of drugs," he added.

The AI program was able to successfully diagnose patients as a result of the extensive medical histories available for each of them, Shea said, including radiological and pharmacological information.

Shea noted that they chose to work with older patients because they often suffered from multiple comorbidities, which can require prolonged efforts to achieve a correct diagnosis. With GPT-4, clinicians could potentially identify diagnoses they might have otherwise missed, which would help close the time to initial diagnosis in this population.

The AI program successfully diagnosed patients with a range of conditions, including polymyalgia rheumatica (patient 2), Mycobacterium tuberculosis-related hemophagocytic lymphohistiocytosis (patient 3), metronidazole-induced encephalopathy (patient 5), and lymphoma (patient 6).

Still, GPT-4 had trouble with certain aspects of diagnosing patients, including multifocal infections. The AI program failed to pinpoint the source of a recurrent infection in one patient, and it did not suggest the use of clinically relevant testing for infections in most of the patients in the study.

Shea noted that GPT-4 should be seen as a tool that can increase a clinician's confidence in a diagnosis or even offer clinicians suggestions similar to those of a specialist. This would be especially beneficial in lower-income countries that lack the wide availability of specialists to assist with consulting on older patients.

"Our results showed that GPT-4 has the potential to improve clinician responses," Shea said. "GPT-4 may alert clinicians [of] certain overlooked things in the clinical history, e.g. potential side effects of drugs or abnormal findings on imaging. These may be relevant especially when certain subspecialties are not immediately available for consultation."

Shea also noted that the study was limited by a very small sample size. The analysis was conducted using the medical histories for six patients (two women and four men) in a single hospital unit -- the Division of Geriatrics in the Department of Medicine at Queen Mary Hospital. All of the patients had delayed definitive diagnosis for longer than 1 month in 2022. Their histories were entered into GPT-4 chronologically starting at admission, at 1 week after admission, and before final diagnosis. The data were entered into the AI program on April 16, 2023.

The authors also cautioned that the AI program is susceptible to regurgitating wrong information based on incorrect medical histories.

They concluded that use of generative AI in diagnosing patients showed promise, especially when extensive medical histories were available, but it also presented several new challenges for clinicians.

Source: https://www.medpagetoday.com/special-repor...

Could there be anything more OBVIOUS concerning Long COVID?

DUH!  It takes an expert panel to recommend that we need to study Long COVID more.  This is specifically around cognitive issues – brain fog, trouble with executive function, stuff like that.  There's a report from a modeling study of 1.2 million people showing that 2.2% still had issues months after being sick – that's about 26,000 people in the study.  Plus, long-term neurodegeneration is possible in those folks, maybe even likely.  If we consider that 35% of Americans have tested positive for COVID (very much an understated number), 2.2% of that group translates to roughly 2.5 million cognitively impaired Americans.  That's a huge problem, still largely undefined and, in conventional practices, largely unaddressed.  
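The scaling above is simple back-of-envelope arithmetic, and it's worth checking. A minimal sketch, assuming a US population of about 332 million (a figure I'm supplying, not one from the study) and the 35% ever-tested-positive share mentioned above:

```python
# Back-of-envelope scaling of the 2.2% persistent-cognitive-symptom rate.
# Assumptions NOT from the study: US population ~332 million, and the
# newsletter's 35% ever-tested-positive figure.

study_n = 1_200_000          # patients in the modeling study
rate = 0.022                 # share with cognitive problems lasting 3+ months

us_pop = 332_000_000         # assumed US population
infected_share = 0.35        # assumed share of Americans ever testing positive

in_study = study_n * rate                    # people affected within the study
national = us_pop * infected_share * rate    # national extrapolation

print(f"In the modeled cohort: {in_study:,.0f}")
print(f"National extrapolation: {national:,.0f}")
```

Under these assumptions the cohort figure is about 26,400 and the national extrapolation lands in the millions, not the hundreds of thousands.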

If you are one of the folks who feels like things still aren’t quite right, PLEASE call – don’t expect things to improve on their own if they haven’t by now.  There is help available.

Long COVID Cognitive Research Needs an Overhaul, Task Force Says

Expert group issues recommendations for future studies

Long COVID cognitive research needs better studies, an international task force urged.

The approach to assessing cognitive dysfunction after SARS-CoV-2 infection requires an overhaul to better understand long COVID prevalence, trajectory, mechanisms, phenotypes, and psychosocial factors, said experts from the NeuroCOVID International Neuropsychology Taskforce.

"As one of the most common symptoms of post-COVID-19 condition and one for which affected individuals may seek accommodations and disability benefits in accordance with the Americans With Disabilities Act, it is imperative that we use more rigorous studies of cognitive outcomes," wrote task force member Sara Weisenbach, PhD, of McLean Hospital and Harvard Medical School in Boston, and co-authors, in a viewpoint paper published in JAMA Psychiatry.

Long COVID cognitive dysfunction, including "brain fog," can affect even relatively young people and can last for months. A modeling study based on 1.2 million COVID patients showed that 2.2% had cognitive problems lasting 3 months or longer after symptomatic infection. Moreover, data from patients with severe COVID suggested SARS-CoV-2 infection may raise the risk of subsequent neurodegeneration.

"Since the beginning of the SARS-CoV-2 pandemic, the medical community has experienced an influx of patients reporting new cognitive difficulties months after infection clearance," Weisenbach told MedPage Today.

"There is evidence in the research literature of objective cognitive impairment in some individuals following infection; however, many studies have methodological weaknesses that limit the conclusions that can be drawn and applied in clinical settings," she said.

The task force outlined three recommendations based on initial guidelines the group proposed in 2021.

The first calls for a rigorous assessment of post-COVID cognitive dysfunction. Studies relying on self-reported data early in the pandemic have skewed perceptions about the frequency of cognitive dysfunction, Weisenbach and co-authors pointed out, and objective and subjective findings often don't align with each other. Comprehensive test batteries should be used, and studies should include control groups, diverse samples, and when possible, pre-pandemic and post-pandemic data, they argued.

The group's second recommendation was for new research to identify clinical phenotypes. COVID severity, age, family history, and pre-existing cognitive or psychiatric disorders are factors to consider, the task force observed. Other phenotypes may be based on COVID-19 variants, vaccination status, or history of other viral illnesses or pre-existing autoimmune conditions.

Finally, psychosocial factors need to be assessed given the controversies surrounding post-COVID-19 cognitive dysfunction, including skepticism of its existence and disagreement on its cause, Weisenbach and co-authors said.

"This controversy is familiar to neuropsychologists, who frequently evaluate patients with similarly controversial conditions, such as myalgic encephalomyelitis, or chronic fatigue syndrome," the task force said. "Perhaps because psychiatric disorders can co-occur with these multifaceted conditions, many have dismissed these conditions as being psychosomatic with nonbiologic underpinnings."

"This broad dismissal is contrary to scientific evidence and can be harmful for patients and communities affected," the group added. It's possible that in some people, cognitive symptoms may reflect an interplay between illness and psychological and social factors, and in others it's associated with a postviral syndrome and persistent inflammation, they suggested.

Clinical studies will likely have different results than those from large cohorts, Weisenbach and colleagues noted.

"Together, these data will allow improved clarity regarding the pathophysiology of post-COVID-19 cognitive dysfunction and factors that contribute to symptom persistence," they wrote. "Ultimately, this will create opportunities for the development of effective treatment interventions using a personalized medicine approach."

Source: https://www.medpagetoday.com/neurology/lon...

What if you are NOT "Normal"? I guess you'll keep sniffling.

The FDA is now announcing that an ingredient in many cold and allergy medications, phenylephrine, is no more effective than placebo.  This story gets complicated, but I know that, for myself, phenylephrine ABSOLUTELY works to combat nasal congestion.  But then again, I know I'm not normal (😊).  

If the FDA decides to pull this ingredient off the shelves, there will be essentially no appropriate pharmaceutical options for folks with colds (there are some homeopathic remedies, and other options, that of course, conventional practices won’t tell you about).  

The FDA bases its evaluations on population effects.  If you're "not normal" (not in the middle of the pack), you might fall outside the usual recommendations.  FeldMed looks at the individual, not the population.  This is the basis for "Personalized Medicine" – it doesn't have to work for everybody – it just has to work for you! 

FROM NEWSDAY / BY LISA L. COLANGELO

A decongestant found in popular nonprescription cold medications including some types of Sudafed, DayQuil and Mucinex doesn’t work, according to a panel of experts reviewing the ingredient phenylephrine for the U.S. Food and Drug Administration.

The unanimous vote on Tuesday could lead the FDA to pull medications with phenylephrine from store shelves, if it accepts the findings. Sales of products with the ingredient were worth $1.7 billion in 2022.

“Modern studies, when well conducted, are not showing any improvement in congestion with phenylephrine,” said Dr. Mark Dykewicz, a member of the panel and an allergy specialist at the Saint Louis University School of Medicine.

Findings from recent studies, as well as interpretation of older data, show phenylephrine is not effective, said Dr. David Rosenthal, attending physician in the division of allergy/immunology at Northwell Health.

“Science progresses over time, and even though it was approved as being effective in the 1970s, it doesn’t meet the current standards for effectiveness,” he said. “This was typically used as a medication that shrank blood vessels in the nose so people would not be as congested.”


Members of the FDA’s Nonprescription Drug Advisory Committee had been convened to examine data and help determine whether oral phenylephrine is an effective nasal decongestant. The review did not include nasal sprays with phenylephrine.

The debate over the effectiveness of phenylephrine has gone on for over a decade, led by some medical experts and researchers. But sales of products containing the ingredient are strong, especially during the cold and flu seasons. More than 242 million bottles/packages of over-the-counter cough, cold, allergy oral medications with phenylephrine were sold in retail stores in 2022, according to an FDA briefing document.

FDA reviewers said the research shows how quickly phenylephrine is metabolized when taken orally, leaving only trace levels that reach nasal passages to relieve congestion. The drug appears more effective when applied directly to the nose, in sprays or drops.

Rosenthal said people suffering from allergy symptoms are better off addressing symptoms with antihistamines and seeing an allergist. Cold symptoms will improve when the cold virus goes away with the help of rest, nasal saline and drinking warm liquids.

Rosenthal said preventing infections by getting vaccinated against the flu, COVID-19 and RSV is also important.

Source: https://www.newsday.com/news/health/nasal-...

Who's Trippin' Now?

I love Pink Floyd.  Complex layers, mind-bending instrumentals and "mess with your head" lyrics.  What's not to like?  That's what a bunch of neuroscientists thought, too, when they used Pink Floyd as their brainwave subject matter.  The attempt to recreate the music from actual brainwaves was somewhat successful, which has all kinds of crazy implications that I'll let your imaginations run away with! 

FROM SCIENTIFIC AMERICAN / BY LUCY TU

Neuroscientists Re-create Pink Floyd Song from Listeners’ Brain Activity

Artificial intelligence has turned the brain’s electrical signals into somewhat garbled classic rock

Researchers hope brain implants will one day help people who have lost the ability to speak to get their voice back—and maybe even to sing. Now, for the first time, scientists have demonstrated that the brain’s electrical activity can be decoded and used to reconstruct music.

A new study analyzed data from 29 people who were already being monitored for epileptic seizures using postage-stamp-size arrays of electrodes that were placed directly on the surface of their brain. As the participants listened to Pink Floyd’s 1979 song “Another Brick in the Wall, Part 1,” the electrodes captured the electrical activity of several brain regions attuned to musical elements such as tone, rhythm, harmony and lyrics. Employing machine learning, the researchers reconstructed garbled but distinctive audio of what the participants were hearing. The study results were published on Tuesday in PLOS Biology.

Neuroscientists have worked for decades to decode what people are seeing, hearing or thinking from brain activity alone. In 2012 a team that included the new study’s senior author—cognitive neuroscientist Robert Knight of the University of California, Berkeley—became the first to successfully reconstruct audio recordings of words participants heard while wearing implanted electrodes. Others have since used similar techniques to reproduce recently viewed or imagined pictures from participants’ brain scans, including human faces and landscape photographs. But the recent PLOS Biology paper by Knight and his colleagues is the first to suggest that scientists can eavesdrop on the brain to synthesize music.

“These exciting findings build on previous work to reconstruct plain speech from brain activity,” says Shailee Jain, a neuroscientist at the University of California, San Francisco, who was not involved in the new study. “Now we’re able to really dig into the brain to unearth the sustenance of sound.”

To turn brain activity data into musical sound in the study, the researchers trained an artificial intelligence model to decipher data captured from thousands of electrodes that were attached to the participants as they listened to the Pink Floyd song while undergoing surgery.

Why did the team choose Pink Floyd—and specifically “Another Brick in the Wall, Part 1”? “The scientific reason, which we mention in the paper, is that the song is very layered. It brings in complex chords, different instruments and diverse rhythms that make it interesting to analyze,” says Ludovic Bellier, a cognitive neuroscientist and the study’s lead author. “The less scientific reason might be that we just really like Pink Floyd.”

The AI model analyzed patterns in the brain’s response to various components of the song’s acoustic profile, picking apart changes in pitch, rhythm and tone. Then another AI model reassembled this disentangled composition to estimate the sounds that the patients heard. Once the brain data were fed through the model, the music returned. Its melody was roughly intact, and its lyrics were garbled but discernible if one knew what to listen for: “All in all, it was just a brick in the wall.”

The model also revealed which parts of the brain responded to different musical features of the song. The researchers found that some portions of the brain’s audio processing center—located in the superior temporal gyrus, just behind and above the ear—respond to the onset of a voice or a synthesizer, while other areas groove to sustained hums.

Although the findings focused on music, the researchers expect their results to be most useful for translating brain waves into human speech. No matter the language, speech contains melodic nuances, including tempo, stress, accents and intonation. “These elements, which we call prosody, carry meaning that we can’t communicate with words alone,” Bellier says. He hopes the model will improve brain-computer interfaces, assistive devices that record speech-associated brain waves and use algorithms to reconstruct intended messages. This technology, still in its infancy, could help people who have lost the ability to speak because of conditions such as stroke or paralysis.

Jain says future research should investigate whether these models can be expanded from music that participants have heard to imagined internal speech. “I’m hopeful that these findings would translate because similar brain regions are engaged when people imagine speaking a word, compared with physically vocalizing that word,” she says. If a brain-computer interface could re-create someone’s speech with the inherent prosody and emotional weight found in music, it could reconstruct far more than just words. “Instead of robotically saying, ‘I. Love. You,’ you can yell, ‘I love you!’” Knight says.

Several hurdles remain before we can put this technology in the hands—or brains—of patients. For one thing, the model relies on electrical recordings taken directly from the surface of the brain. As brain recording techniques improve, it may be possible to gather these data without surgical implants—perhaps using ultrasensitive electrodes attached to the scalp instead. The latter technology can be employed to identify single letters that participants imagine in their head, but the process takes about 20 seconds per letter—nowhere near the speed of natural speech, which hurries by at around 125 words per minute.

The researchers hope to make the garbled playback crisper and more comprehensible by packing the electrodes closer together on the brain’s surface, enabling an even more detailed look at the electrical symphony the brain produces. Last year a team at the University of California, San Diego, developed a densely packed electrode grid that offers brain-signal information at a resolution that is 100 times higher than that of current devices. “Today we reconstructed a song,” Knight says. “Maybe tomorrow we can reconstruct the entire Pink Floyd album.”

Source: https://www.scientificamerican.com/article...

Make your STEPS count

If you've been paying attention AT ALL, then you know we've talked about getting up and moving.  We've previously debunked the "you gotta get 10,000 steps" rule, but at the same time, more is better.  Here's a tidy summary of years of walking data that again supports the idea that doing more is better.  Simply put, adding 1,000 steps to your daily routine is associated with a 15% reduction in overall mortality (that's a population risk – we usually only get to die once), while an extra 500 steps buys you a 7% reduction in cardiovascular mortality.  

I’m going to say it again – EXERCISE is the single most important thing you can do to improve your overall health, reduce your risk of chronic disease, and extend your life and healthspan.  I am happy to assist you in developing a specific program to directly address your specific needs.  Just give me a call and we can get started.

FROM THE EUROPEAN JOURNAL OF PREVENTIVE CARDIOLOGY / BY MACIEJ BANACH, JOANNA LEWEK, STANISŁAW SURMA, PETER E PENSON, AMIRHOSSEIN SAHEBKAR, SETH S MARTIN, GANI BAJRAKTARI, MICHAEL Y HENEIN, ŽELJKO REINER, AGATA BIELECKA-DĄBROWA, IBADETE BYTYÇI

The association between daily step count and all-cause and cardiovascular mortality: a meta-analysis


Lay Summary

  • There is strong evidence showing that sedentary life may significantly increase the risk of cardiovascular (CV) disease and shorten the lifespan. However, the optimal number of steps, both the cut-off points over which we can see health benefits, and the upper limit (if any), and their role in health are still unclear.

  • In this meta-analysis of 17 studies with almost 227 000 participants that assessed the health effects of physical activity expressed by walking measured in the number of steps, we showed that a 1000-step increment correlated with a significant reduction of all-cause mortality of 15%, and similarly, a 500-step increment correlated with a reduced risk of CV mortality of 7%. In addition, using the dose–response model, we observed a strong inverse nonlinear association between step count and all-cause mortality with significant differences between younger and older groups.

  • It is the first analysis that not only looked at age and sex but also regional differences based on the weather zones, and for the first time, it assesses the effect of up to 20 000 steps/day on outcomes (confirming the more the better), which was missed in previous analyses. The analysis also revealed that depending on the outcomes, we do not need so many steps to have health benefits starting with even 2500/4000 steps/day, which, in fact, undermines the hitherto definition of a sedentary life.

Abstract Aims

There is good evidence showing that inactivity and walking minimal steps/day increase the risk of cardiovascular (CV) disease and general ill-health. The optimal number of steps and their role in health is, however, still unclear. Therefore, in this meta-analysis, we aimed to evaluate the relationship between step count and all-cause mortality and CV mortality.

Methods and results

We systematically searched relevant electronic databases from inception until 12 June 2022. The main endpoints were all-cause mortality and CV mortality. An inverse-variance weighted random-effects model was used to assess the association between the number of steps/day and mortality. Seventeen cohort studies with a total of 226 889 participants (generally healthy or patients at CV risk) with a median follow-up of 7.1 years were included in the meta-analysis. A 1000-step increment was associated with a 15% decreased risk of all-cause mortality [hazard ratio (HR) 0.85; 95% confidence interval (CI) 0.81–0.91; P < 0.001], while a 500-step increment was associated with a 7% decrease in CV mortality (HR 0.93; 95% CI 0.91–0.95; P < 0.001). Compared with the reference quartile with median steps/day 3967 (2500–6675), the Quartile 1 (Q1, median steps: 5537), Quartile 2 (Q2, median steps 7370), and Quartile 3 (Q3, median steps 11 529) were associated with lower risk for all-cause mortality (48, 55, and 67%, respectively; P < 0.05, for all). Similarly, compared with the lowest quartile of steps/day used as reference (median steps 2337, interquartile range 1596–4000), higher quartiles of steps/day (Q1 = 3982, Q2 = 6661, and Q3 = 10 413) were linearly associated with a reduced risk of CV mortality (16, 49, and 77%; P < 0.05, for all). Using a restricted cubic splines model, we observed a nonlinear dose–response association between step count and all-cause and CV mortality (Pnonlinearity < 0.001, for both) with a progressively lower risk of mortality with an increased step count.

Conclusion

This meta-analysis demonstrates a significant inverse association between daily step count and all-cause mortality and CV mortality, with more being better, above cut-off points of 3967 steps/day for all-cause mortality and only 2337 steps/day for CV mortality.
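To get a feel for what the per-increment hazard ratios imply, here is a minimal sketch that treats them as compounding multiplicatively per increment. That is a simplification of my own, not the paper's method: the authors report a nonlinear dose–response, so this is illustrative arithmetic only.

```python
# Illustrative compounding of the reported per-increment hazard ratios.
# CAUTION: the meta-analysis found a NONLINEAR dose-response, so treating
# each increment as an independent multiplicative step is an assumption.

HR_ALLCAUSE_PER_1000 = 0.85   # 15% lower all-cause mortality per +1000 steps/day
HR_CV_PER_500 = 0.93          # 7% lower CV mortality per +500 steps/day

def risk_reduction(hr: float, increments: int) -> float:
    """Percent risk reduction after `increments` compounding increments."""
    return (1 - hr ** increments) * 100

# Going from 4,000 to 7,000 steps/day = three 1000-step increments:
print(f"+3000 steps/day, all-cause: {risk_reduction(HR_ALLCAUSE_PER_1000, 3):.0f}% lower")
# The same 3,000 extra steps = six 500-step increments for CV mortality:
print(f"+3000 steps/day, CV: {risk_reduction(HR_CV_PER_500, 6):.0f}% lower")
```

Under this (simplified) compounding, 3,000 extra daily steps would correspond to a risk reduction in the mid-to-high 30s percent for both endpoints, consistent with the paper's "more is better" message.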

FOR THE FULL STUDY CLICK HERE


Source: https://academic.oup.com/eurjpc/advance-ar...

AI is everywhere, has real advantages....BUT...is it real?

Hallucinations – it’s a technical term in the Artificial Intelligence (AI) community for “making s*** up”.  It’s a big problem in the healthcare space when you’re trying to get AI to legitimately move a process forward.  There have been studies now that show AI can be more empathic and more helpful than a doctor.  That’s great news, unless of course, you’re the doctor.  What these studies don’t share is the downside risk – what’s being made up?  It turns out that a study of the MSUQ (my term – Making S*** Up Quotient, or a percentage of material presented that’s made up) for ChatGPT was around 20%.  The more specific the “question”, the higher the percentage of MSU.  Yikes!  

AI is the future – unfortunately, it’s still in the future.  With careful consideration, AI can help now, but CAREFUL CONSIDERATION best be the guiding principle.  What does that mean?  Dr. Google still has nothing on me!

FROM JAMA NETWORK / BY ANJUN CHEN AND DRAKE O. CHEN

Accuracy of Chatbots in Citing Journal Articles

Introduction

The recently released generative pretrained transformer chatbot ChatGPT from OpenAI has shown unprecedented capabilities ranging from answering questions to composing new content. Its potential applications in health care and education are being explored and debated. Researchers and students may use it as a copilot in research. It excels at creating new content but falls short in providing scientific references. Journals such as Science have banned chatbot-generated text in their published reports. However, the accuracy of reference citing by ChatGPT is unclear; therefore, this investigation aimed to quantify ChatGPT’s citation error rate.

Methods

This study tested the value of the ChatGPT copilot in creating content for training of learning health systems (LHS). A large range of LHS topics were discussed with the latest GPT-4 model from OpenAI from April 20 to May 6, 2023. We used prompts for broad topics, such as LHS and data, as well as specific topics, such as building a stroke risk prediction model using the XGBoost library. Since chatbot responses depended on the prompts, we first asked questions about specific LHS topics, then requested journal articles as references. This study followed the Strengthening the Reporting of Observational Studies in Epidemiology reporting guideline.

We verified each cited journal article by checking its existence in the cited journal and by searching its title using Google Scholar. The article’s title, authors, publication year, volume, issue, and pages were compared. Any article that failed this verification was considered fake. To determine a reliable error rate, over 300 article references were produced on the LHS topics. For comparison, we chatted with OpenAI’s default GPT-3.5 model for the same LHS topics. Exact 95% CIs for error rate were constructed. The error rate between the GPT-4 and GPT-3.5 models was compared using the Fisher exact test, with 2-sided P < .05 indicating statistical significance.

Results

From the default GPT-3.5 model, 162 reference journal articles were fact-checked, 159 (98.1% [95% CI, 94.7%-99.6%]) of which were verified as fake articles. From the GPT-4 model, 257 articles were fact-checked, 53 (20.6% [95% CI, 15.8%-26.1%]) of which were verified as fake articles. The error rate of reference citing for GPT-4 was significantly lower than that for GPT-3.5 (P < .001) but remains non-negligible. Narrower topics tended to have more fake articles than broader topics.
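The arithmetic behind those error rates is easy to reproduce. The sketch below is a hypothetical stand-in for the authors' verification workflow — the citation fields and the tiny title "index" are my assumptions, not their actual tooling (they checked titles manually against journals and Google Scholar):

```python
# A minimal sketch of the fact-checking bookkeeping the Methods describe.
# The citation fields and the tiny "index" below are hypothetical stand-ins
# for the authors' manual journal and Google Scholar checks.

def is_verified(citation: dict, index: set) -> bool:
    """A citation counts as real only if its exact title is found."""
    return citation["title"].casefold() in index

scholar_index = {"learning health systems: a review"}       # hypothetical
citations = [
    {"title": "Learning Health Systems: A Review"},         # found -> real
    {"title": "Deep LHS Pipelines in Stroke Care"},         # not found -> fake
]
fake = sum(not is_verified(c, scholar_index) for c in citations)

def error_rate(fake_count: int, total: int) -> float:
    """Fake-citation rate as a percentage, one decimal place."""
    return round(100 * fake_count / total, 1)

# Counts reported in the Results:
print(error_rate(159, 162))  # GPT-3.5 -> 98.1 (% fake)
print(error_rate(53, 257))   # GPT-4   -> 20.6 (% fake)
```

That 98.1% versus 20.6% gap is the improvement from GPT-3.5 to GPT-4 — large, but one in five fabricated references is still far too many to use without checking.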

GPT-4 provided answers that could be used as supplementary materials for LHS training after fact-checking and editing. However, it failed to provide information about the latest LHS developments.

Discussion

Our findings suggest that GPT-4 can be a helpful copilot in preparing new LHS education and training materials, although it may lack the latest information. Because GPT-4 cites some fake journal articles, they must be verified manually by humans; GPT-3.5–cited references should not be used.

When asked why it returned fake references, ChatGPT explained that the training data may be unreliable, or the model may not be able to distinguish between reliable and unreliable sources. As generative chatbots are deployed as copilots in health care education and training, understanding their unique abilities (eg, the ability to answer any questions) and inherent defects (eg, the inability to fact-check responses) will help make more effective use of the new GPT technology for improving health care education and training. Additionally, potential ethical issues such as misinformation and data bias should be considered for GPT applications.

This study has some limitations, such as the chat topics not representing all subject areas. However, since the LHS topics covered many subject areas of health care, the findings should be applicable in the health care domain. Furthermore, the findings should be more applicable to deeper discussions with ChatGPT as opposed to superficial discussions.

Source: https://jamanetwork.com/journals/jamanetwo...

The question is not "Am I toxic?" but rather "How toxic am I?"

We are continually exposed to toxic agents (known technically as toxicants).  Throughout our history on this planet, these toxicants were essentially natural, and our bodies evolved to rid our systems of them, as would be appropriate, according to their toxic levels.  Of course, over the last 100 years or so, technology has allowed mankind to create chemicals that never existed before.  Many of these substances will not reliably leave our bodies without help, causing increasing damage, such as cancer and chronic disease.  Some report that about 10 million chemicals have been created, virtually none of which have been tested for safety.  (My favorite government certification is GRAS – “generally recognized as safe”, which you can read as “seems okay, we don’t need to test it”.) This article is the latest in seriously scary stuff that has been approved as “safe”, or maybe safe enough?  UGH.

EPA approved fuel ingredient with sky-high lifetime cancer risk, document reveals

Chevron component approved even though it could cause cancer in virtually every person exposed over a lifetime

The Environmental Protection Agency approved a component of boat fuel made from discarded plastic that the agency’s own risk formula determined was so hazardous, everyone exposed to the substance continually over a lifetime would be expected to develop cancer.

Current and former EPA scientists said that threat level is unheard of. It is a million times higher than what the agency usually considers acceptable for new chemicals and six times worse than the risk of lung cancer from a lifetime of smoking.

Federal law requires the EPA to conduct safety reviews before allowing new chemical products on to the market. If the agency finds that a substance causes unreasonable risk to health or the environment, the EPA is not allowed to approve it without first finding ways to reduce that risk.

But the agency did not do that in this case. Instead, the EPA decided its scientists were overstating the risks and gave Chevron the go-ahead to make the new boat fuel ingredient at its refinery in Pascagoula, Mississippi. Though the substance can poison air and contaminate water, EPA officials mandated no remedies other than requiring workers to wear gloves, records show.

ProPublica and the Guardian in February reported on the risks of other new plastic-based Chevron fuels that were also approved under an EPA program that the agency had touted as a “climate-friendly” way to boost alternatives to petroleum-based fuels. That story was based on an EPA consent order, a legally binding document the agency issues to address risks to health or the environment. In the Chevron consent order, the highest noted risk came from a jet fuel that was expected to create air pollution so toxic that one out of four people exposed to it over a lifetime could get cancer.

In February, ProPublica and the Guardian asked the EPA for its scientists’ risk assessment, which underpinned the consent order. The agency declined to provide it, so ProPublica requested it under the Freedom of Information Act. The 203-page risk assessment revealed that, for the boat fuel ingredient, there was a far higher risk that was not in the consent order. EPA scientists included figures that made it possible for ProPublica to calculate the lifetime cancer risk from breathing air pollution that comes from a boat engine burning the fuel. That calculation, which was confirmed by the EPA, came out to 1.3 in 1, meaning every person exposed to it over the course of a full lifetime would be expected to get cancer.

Such risks are exceedingly unusual, according to Maria Doa, a scientist who worked at EPA for 30 years and once directed the division that managed the risks posed by chemicals. The EPA division that approves new chemicals usually limits lifetime cancer risk from an air pollutant to one additional case of cancer in a million people. That means that if a million people are continuously exposed over a presumed lifetime of 70 years, there would likely be at least one case of cancer on top of those from other risks people already face.

When Doa first saw the one-in-four cancer risk for the jet fuel, she thought it must have been a typo. The even higher cancer risk for the boat fuel component left her struggling for words. “I had never seen a one-in-four risk before this, let alone a 1.3-in-1,” said Doa. “This is ridiculously high.”

Another serious cancer risk associated with the boat fuel ingredient that was documented in the risk assessment was also missing from the consent order. For every 100 people who ate fish raised in water contaminated with that same product over a lifetime, seven would be expected to develop cancer – a risk that’s 70,000 times what the agency usually considers acceptable.
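To put those multiples in perspective against the agency's usual one-in-a-million benchmark, here's a quick back-of-envelope check (my arithmetic, not the EPA's):

```python
# Back-of-envelope check of the risk comparisons in the article (my
# arithmetic, not the EPA's). The agency's usual acceptable threshold is
# one extra cancer case per million people exposed over a lifetime.

ACCEPTABLE = 1e-6            # one in a million

boat_fuel_air = 1.3          # 1.3 in 1, boat-fuel exhaust (risk assessment)
jet_fuel_air = 1 / 4         # one in four, jet-fuel exhaust (consent order)
fish_ingestion = 7 / 100     # 7 in 100, contaminated fish (risk assessment)

print(round(boat_fuel_air / ACCEPTABLE))   # ~1.3 million times the limit
print(round(jet_fuel_air / ACCEPTABLE))    # ~250,000 times the limit
print(round(fish_ingestion / ACCEPTABLE))  # ~70,000 times the limit
```

Those ratios are exactly the "a million times higher" and "70,000 times" figures the article cites.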

When asked why it didn’t include those sky-high risks in the consent order, the EPA acknowledged having made a mistake. This information “was inadvertently not included in the consent order”, an agency spokesperson said in an email.

Nevertheless, in response to questions, the agency wrote, “EPA considered the full range of values described in the risk assessment to develop its risk management approach for these” fuels. The statement said that the cancer risk estimates were “extremely unlikely and reported with high uncertainty.” Because it used conservative assumptions when modeling, the EPA said, it had significantly overestimated the cancer risks posed by both the jet fuel and the component of marine fuel. The agency assumed, for instance, that every plane at an airport would be idling on a runway burning an entire tank of fuel, that the cancer-causing components would be present in the exhaust and that residents nearby would breathe that exhaust every day over their lifetime.

In addition, the EPA also said that it determined the risks from the new chemicals were similar to those from fuels that have been made for years, so the agency relied on existing laws rather than calling for additional protections. But the Toxic Substances Control Act requires the EPA to review every new chemical – no matter how similar to existing ones. Most petroleum-based fuels were never assessed under the law because existing chemicals were exempted from review when it passed in 1976. Studies show people living near refineries have elevated cancer rates.

“EPA recognizes that the model it used in its risk assessments was not designed in a way that led to realistic risk estimates for some of the transportation fuel uses,” an agency spokesperson wrote. For weeks, ProPublica asked what a realistic cancer risk estimate for the fuels would be, but the agency did not provide one by the time of publication.

New chemicals are treated differently under federal law than ones that are already being sold. If the agency is unsure of the dangers posed by a new chemical, the law allows the EPA to order tests to clarify the potential health and environmental harms. The agency can also require that companies monitor the air for emissions or reduce the release of pollutants. It can also restrict the use of new products or bar their production altogether. But in this case, the agency didn’t do any of those things.

Six environmental organizations concerned about the risks from the fuels – the Sierra Club, Natural Resources Defense Council, Moms Clean Air Force, Toxic-Free Future, Environmental Defense Fund and Beyond Plastics – are challenging the agency’s characterization of the cancer risks. “EPA’s assertion that the assumptions in the risk assessment are overly conservative is not supported,” the groups wrote in a letter sent Wednesday to EPA administrator Michael Regan. The groups accused the agency of failing to protect people from dangers posed by the fuels and urged the EPA to withdraw the consent order approving them.

Chevron has not started making the new fuels, the agency said.

Separately, the EPA acknowledged that it had mislabeled critical information about the harmful emissions. The consent order said the one-in-four lifetime cancer risk referred to “stack air” – a term for pollution released through a smokestack. The cancer burden from smokestack pollution would fall on residents who live near the refinery. And indeed a community group in Pascagoula sued the EPA, asking the US court of appeals in Washington to invalidate the agency’s approval of the chemicals.

But the agency now says that those numbers in the consent order do not reflect the cancer risk posed by air from refinery smokestacks. When the consent order said stack emissions, the EPA says, it really meant pollution released from the exhaust of the jets and boats powered by these fuels.

“We understand that this may have caused a misunderstanding,” the EPA wrote in its response to ProPublica.

Based on that explanation, the extraordinary cancer burden would fall on people near boats or idling airplanes that use the fuels – not those living near the Chevron refinery in Pascagoula.

Each of the two cancer-causing products is expected to be used at 100 sites, the EPA confirmed. ProPublica asked for the exact locations where the public might encounter them, but Chevron declined to say. The EPA said it didn’t know the locations and didn’t even know whether the marine fuel would be used for a Navy vessel, a cruise ship or a motorboat.

In an email, a Chevron spokesperson referred questions to the EPA and added: “The safety of our employees, contractors and communities are our first priority. We place the highest priority on the health and safety of our workforce and protection of our assets, communities and the environment.”

Doa, the former EPA scientist who worked at the agency for three decades, said she had never known the EPA to misidentify a source of pollution in a consent order. “When I was there, if we said something was stack emissions, we meant that they were stack emissions,” she said.

During multiple email exchanges with ProPublica and the Guardian leading up to the February story, the EPA never said that cancer risks listed as coming from stack emissions were actually from boat and airplane exhaust. The agency did not explain why it initially chose not to tell ProPublica and the Guardian that the EPA had mislabeled the emissions.

The agency faced scrutiny after the February story in ProPublica and the Guardian. In an April letter to Regan, Senator Jeff Merkley, the Oregon Democrat who chairs the Senate’s subcommittee on environmental justice and chemical safety, said he was troubled by the high cancer risks and the fact that the EPA approved the new chemicals using a program meant to address the climate crisis.

EPA assistant administrator Michal Freedhoff told Merkley in a letter earlier this year that the one-in-four cancer risk stemmed from exposure to the exhaust of idling airplanes and the real risk to the residents who live near the Pascagoula refinery was “on the order of one in a hundred thousand,” meaning it would cause one case of cancer in 100,000 people exposed over a lifetime.

Told about the even higher cancer risk from the boat fuel ingredient, Merkley said in an email: “It remains deeply concerning that fossil fuel companies are spinning what is a complicated method of burning plastics, that is actually poisoning communities, as beneficial to the climate. We don’t understand the cancer risks associated with creating or using fuels derived from plastics.”

Merkley said he is “leaving no stone unturned while digging into the full scope of the problem, including looking into EPA’s program”.

He added: “Thanks to the dogged reporting from ProPublica we are getting a better sense of the scale and magnitude of this program that has raised so many concerns.”

The risk assessment makes it clear that cancer is not the only problem. Some of the new fuels pose additional risks to infants, the document said, but the EPA did not quantify the effects or do anything to limit those harms, and the agency would not answer questions about them.

Some of these newly approved toxic chemicals are expected to persist in nature and accumulate in living things, the risk assessment said. That combination is supposed to trigger additional restrictions under EPA policy, including prohibitions on releasing the chemicals into water. Yet the agency lists the risk from eating fish contaminated with several of the compounds, suggesting they are expected to get into water. When asked about this, an EPA spokesperson wrote that the agency’s testing protocols for persistence, bioaccumulation and toxicity are “unsuitable for complex mixtures” and contended that these substances are similar to existing petroleum-based fuels.

The EPA has taken one major step in response to concerns about the plastic-based chemicals. In June, it proposed a rule that would require companies to contact the agency before making any of 18 fuels and related compounds listed in the Chevron consent order. The EPA would then have the option of requiring tests to ensure that the oil used to create the new fuels doesn’t contain unsafe contaminants often found in plastic, including certain flame retardants, heavy metals, dioxins and PFAS. If approved, the rule will require Chevron to undergo such a review before producing the fuels, according to the EPA.

But environmental advocates say that the new information about the plastic-based chemicals has left them convinced that, even without additional contamination, the fuels will pose a grave risk.

“This new information just raises more questions about why they didn’t do this the right way,” said Daniel Rosenberg, director of federal toxics policy at NRDC. “The more that comes out about this, the worse it looks.”

Source: https://www.theguardian.com/environment/20...

You don't have to Squat 400 pounds -- just your body weight will work great.

Here we are again talking about exercise.  Why?  Because it’s the single most important thing you can do to extend your life and healthspan.  Recent discussions supported that isometric exercise actually has a little advantage (time-wise and effectiveness) over other forms (HIIT, aerobic, etc.), while pointing out that the best benefit is really seen in combination.  In any case, here’s an easy way to get going – squat against a wall for a couple of minutes.  Rest. Repeat.  Can’t do two minutes – work up to it.  Do it every day, do a little more every day.  Soon you’ll be stronger, your belly won’t stick out as much, your butt will be firmer, and chances are your blood pressure will be lower.  What have you got to lose?  I mean really?  I’ve been saying it’s just not that hard – IT’S NOT!  Get MOVING (or still, whichever the case may be)!

FROM THE NY TIMES / BY DANI BLUM

A Simple 14-Minute Workout That Could Lower Your Blood Pressure

A new study points to the humble wall squat as the most effective tool to fight hypertension.

It has become almost a cliché across doctor’s offices: One of the most trusted tools to lower blood pressure is to exercise.

A jog or stroll around the block, experts consistently find, can have big payoffs in terms of heart health. A new study, however, points to a somewhat surprising exercise that may be able to dramatically reduce someone’s blood pressure: the wall squat.

A team of researchers based in Britain analyzed 270 previous studies that examined the link between exercise and blood pressure. They found that, predictably, exercises like running, walking, cycling, strength training and high-intensity interval workouts all helped to reduce blood pressure; mixing cardio and strength training also appeared to help.

But the most effective type of workout they looked at, especially for those who already had some form of hypertension, was isometric exercise, which involves contracting a set of muscles without moving — think planks.

This new research adds to a growing body of evidence that quick bursts of exercise — like speeding up your walk during a commute or carrying groceries with a bit more vigor — can have significant benefits for people’s overall health.

“Everybody feels this incredible threat to their time — everybody feels like they don’t have enough time,” said Dr. Tamanna Singh, co-director of the Sports Cardiology Center at Cleveland Clinic, who was not involved with the study. “It’s so interesting to see more studies coming out showing, actually, time really is not the limiting factor.”

The British researchers looked at three kinds of isometric workouts in particular: squeezing a handgrip, holding a leg extension machine in place and squatting with your back against a wall. The wall squat (sometimes called a wall sit) is probably the easiest option for people to try, as it doesn’t require any equipment, said Jamie J. Edwards, a researcher at Canterbury Christ Church University and the lead author on the study.

Even though isometric exercises may appear relatively easy, they are often quite intense, Dr. Edwards said — as you hold yourself in place, sweating and straining. He recommends a 14-minute routine you can add to your regular workout perhaps three times a week: a two-minute wall squat, followed by two minutes of rest, repeated four times in total.

You should stay at the same squat height for all four rounds, but the exercise will feel more challenging the more times you do it, said Jim Wiles, a principal lecturer at Canterbury Christ Church University who was also an author on the study. The first bout should feel as if you are exerting yourself at a level of four (out of a possible 10, with 10 feeling as if you could not hold it any longer). The last bout should be around an eight, he said. You should feel reasonably exhausted by the end.
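The 14-minute total works out only if no rest is taken after the final squat — that reading is my inference from the stated total, sketched here:

```python
# The routine described above: four 2-minute wall squats separated by
# 2-minute rests. The 14-minute total implies no rest after the final
# squat -- that reading is my assumption, inferred from the stated total.

SQUAT_MIN, REST_MIN, ROUNDS = 2, 2, 4

total = ROUNDS * SQUAT_MIN + (ROUNDS - 1) * REST_MIN
print(f"Total routine time: {total} minutes")  # 14 minutes
```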

And be careful to not hold your breath while you do it, Dr. Edwards added.

The researchers aren’t entirely sure why isometric exercises seem to be so effective for combating hypertension. One prominent theory, Dr. Edwards said, is that when you clench your muscles without moving, the local blood vessels around them compress — and then when you release, blood flushes back, causing the vessels to widen or dilate if you perform the exercise frequently enough, in a way they don’t during a dynamic exercise like a run.

That change can be critical, because over time, high blood pressure can stiffen our arteries and prevent them from dilating properly, which restricts how much oxygen-rich blood they can deliver. This increases the risk of having a heart attack or stroke, Dr. Singh said.

The study doesn’t mean you should ditch your run and go straight for wall squats — isometric exercise should complement, not replace, your favorite workout, Dr. Edwards said, whether that’s cardio or weight lifting. And if you have any underlying medical conditions, you should consult with your doctor to check that isometric exercise is safe for you, Dr. Wiles suggested.

But if you are looking for a heart-healthy addition to your workout, you could do worse than the humble wall squat.

“You truly only need your body,” Dr. Singh said. “You don’t even need shoes.”

Source: https://www.nytimes.com/2023/07/26/well/bl...

Kombucha COULD be good for diabetics...maybe

The medical “literature” has become largely polluted with “quick-as-you-can” “let’s get something published” so we can claim some part of the zeitgeist.  Huh?  

Ok – I got this idea that something might have utility, so we devise a study that could test the idea.  But, we don’t really have any money, or staff, or time, so we’re gonna do it on the cheap and hope it pans out.  

That’s kind of what this study looks like.  Take 12 people, have them drink kombucha, test blood sugar a couple of times, don’t drink kombucha for a while and test blood sugar again a couple of times.  And guess what?  The kombucha seems to help control blood sugar.  Or at least on a couple of measurements they were better.  And there’s some statistical significance.  

Mark Twain said it best – “There are three kinds of lies: lies, damned lies, and statistics.”

What am I saying?  Yes, there are reasons to believe that kombucha could be good for a diabetic, despite it being “sugary”.  It’s a cool idea, and it might end up being true.  But this study does very little to support the idea, other than telling us that more study is needed.  But that’s never going to be the headline.  Who wants to read – “well, it MIGHT be good, but we don’t really know”?

FROM FRONTIERS IN NUTRITION / BY Chagai Mendelson, Sabrina Sparkes, Daniel J. Merenstein, Chloe Christensen, Varun Sharma, Sameer Desale, Jennifer M. Auchtung, Car Reen Kok, Heather E. Hallen-Adams, Robert Hutkins

Kombucha tea as an anti-hyperglycemic agent in humans with diabetes – a randomized controlled pilot investigation

Introduction: Kombucha is a popular fermented tea that has attracted considerable attention due, in part, to its suggested health benefits. Previous results from animal models led us to hypothesize kombucha may reduce blood sugar levels in humans with diabetes. The objective of this pilot clinical study was to evaluate kombucha for its anti-hyperglycemic activities in adults with diabetes mellitus type II.

Methods: The study was organized as a prospective randomized double-blinded crossover study at a single-center urban hospital system. Participants (n = 12) were instructed to consume either a kombucha product or a placebo control (each 240 mL) for 4 weeks. After an 8-week washout period, participants consumed the alternate product. Fasting blood glucose levels were self-determined at baseline and at 1 and 4 weeks during each treatment period. Secondary health outcomes, including overall health, insulin requirement, gut health, skin health, mental health, and vulvovaginal health were measured by questionnaire at the same time points. The kombucha microbiota was assessed by selective culturing and 16S rRNA gene (bacteria) and ITS (fungi) sequencing. Fermentation end products were assessed by HPLC. Statistical significance of changes in fasting blood glucose was determined using paired, two-tailed Student’s t-tests.
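For readers curious what a "paired, two-tailed Student's t-test" actually computes here, this is a minimal stdlib sketch run on hypothetical fasting-glucose values — not the study's per-participant data, which the abstract does not report:

```python
# A stdlib sketch of the paired, two-tailed Student's t-test the Methods
# describe, run on HYPOTHETICAL fasting-glucose values (mg/dL) -- not the
# study's per-participant data, which the abstract does not report.
from math import sqrt
from statistics import mean, stdev

baseline = [170, 155, 180, 160, 150, 168]   # hypothetical, week 0
week4    = [120, 118, 130, 110, 105, 115]   # hypothetical, week 4

diffs = [after - before for before, after in zip(baseline, week4)]
t_stat = mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))  # paired t statistic
print(round(t_stat, 2))  # negative here: glucose fell

# The p-value comes from the t distribution with len(diffs) - 1 degrees of
# freedom; in practice scipy.stats.ttest_rel(baseline, week4) does all this.
```

With only 12 participants, as in the study, that t statistic comes with wide uncertainty — which is exactly why a "significant" p = 0.035 in a pilot of this size warrants the larger follow-up the authors call for.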

Results: Kombucha lowered average fasting blood glucose levels at 4 weeks compared to baseline (164 vs. 116 mg/dL, p = 0.035), whereas the placebo did not (162 vs. 141 mg/dL, p = 0.078). The kombucha microbiota, as assessed by cultural enumeration, was mainly comprised of lactic acid bacteria, acetic acid bacteria, and yeast, with each group present at about 10⁶ colony forming units (CFU)/mL. Likewise, 16S rRNA gene sequencing confirmed that lactic acid and acetic acid bacteria were the most abundant bacteria, and ITS sequencing showed Dekkera was the most abundant yeast. The primary fermentation end products were lactic and acetic acids, both less than 1%. Ethanol was present at 1.5%.

Discussion: Although this pilot study was limited by a small sample size, kombucha was associated with reduced blood glucose levels in humans with diabetes. Larger follow-up studies are warranted.

Clinical trial registration: ClinicalTrials.gov, identifier NCT04107207.

TO SEE COMPLETE CLINICAL TRIAL CLICK HERE

Source: https://www.frontiersin.org/articles/10.33...