Honey vs Maple Syrup – Which is Healthier?
10almonds is reader-supported. We may, at no cost to you, receive a portion of sales if you purchase a product through a link in this article.
Our Verdict
When comparing honey to maple syrup, we picked the honey.
Why?
It was very close, as both have small advantages:
• Honey has some medicinal properties (and depending on type, may contain an antihistamine)
• Maple syrup is a good source of manganese, as well as low-but-present amounts of other minerals
However, you wouldn’t want to eat enough maple syrup to rely on it as a source of those minerals, and honey has the lower GI (average 46 vs 54; for comparison, refined sugar is 65), which works well as a tie-breaker.
(If GI is very important to you, though, the easy winner here would be agave syrup, if we let it compete, with its GI of 15.)
Read more:
• Can Honey Relieve Allergies?
• From Apples to Bees, and High-Fructose C’s
Don’t Forget…
Did you arrive here from our newsletter? Don’t forget to return to the email to continue learning!
Recommended
Learn to Age Gracefully
Join the 98k+ American women taking control of their health & aging with our 100% free (and fun!) daily emails:
-
Good (Or Bad) Health Starts With Your Blood
Blood Should Be Only Slightly Thicker Than Water
This is Dr. Casey Means: a physician, lecturer (mostly at Stanford), and CMO of the metabolic health company Levels, as well as Associate Editor of the International Journal of Diabetes Reversal and Prevention. There she serves alongside such names as Dr. Colin Campbell, Dr. Joel Fuhrman, Dr. Michael Greger, Dr. William Li, and Dr. Dean Ornish; you get the idea: it’s a star-studded cast.
What does she want us to know?
The big blood problem:
❝We’re spending 3.8 trillion dollars a year on healthcare costs in the U.S., and the reality is that people are getting sicker, fatter, and more depressed.
Over 50% of Americans have pre-diabetes or type 2 diabetes; it’s insane, that number should be close to zero.❞
~ Dr. Casey Means
Indeed, pre-diabetes and especially type 2 diabetes should be very avoidable in any wealthy nation.
Unfortunately, the kind of diet that avoids it tends to rely on having at least two of the following three:
- Money
- Time
- Knowledge
For example:
- if you have money and time, you can buy lots of fresh ingredients without undue worry, and take the time to carefully prep and cook them
- if you have money and knowledge you can have someone else shop and cook for you, or at least get meal kits delivered
- if you have time and knowledge, you can actually eat very healthily on a shoestring budget
If you have all three, then the world’s your oyster mushroom steak sautéed in extra virgin olive oil with garlic and cracked black pepper served on a bed of Swiss chard and lashed with Balsamic vinegar.
However, many Americans aren’t in the happy position of having at least two of those three, and a not-insignificant portion of the population doesn’t have even one.
As an aside: there is a food scientist and chef who’s made it her mission to educate people about food that’s cheap, easy, and healthy:
…but today is about Dr. Means, so, what does she suggest?
Know thy blood sugars
Dr. Means argues (reasonably; this is well-backed up by general scientific consensus) that much of human disease stems from the diabetes and pre-diabetes that she mentioned above, and so we should focus on that most of all.
Our blood sugar levels being unhealthy will swiftly lead to other metabolic disorders:
Heart disease and non-alcoholic fatty liver disease are perhaps first in line, but waiting in the wings are inflammation-mediated autoimmune disorders, and even dementia. Neuroinflammation is at least as bad as inflammation anywhere else (arguably worse), and our brain can only be as healthy as the blood that feeds it and carries away the things that shouldn’t be there.
Indeed,
❝Alzheimer’s dementia is now being called type 3 diabetes because it’s so related to blood sugar❞
~ Dr. Casey Means
…which sounds like a bold claim, but it’s true: even if the name is not yet “official”, it’s well-established in professional circles:
❝We conclude that the term “type 3 diabetes” accurately reflects the fact that AD represents a form of diabetes that selectively involves the brain and has molecular and biochemical features that overlap with both T1DM and T2DM❞
~ Dr. Suzanne M. de la Monte & Dr. Jack Wands
Read in full: Alzheimer’s Disease Is Type 3 Diabetes–Evidence Reviewed ← this is from the very respectable Journal of Diabetes Science and Technology.
What to do about it
Dr. Means suggests we avoid the “glucose roller-coaster” that most Americans are on, meaning dramatic sugar spikes, or to put it in sciencese: high glycemic variability.
This leads to inflammation, oxidative stress, glycation (where sugar sticks to proteins and DNA), and metabolic dysfunction. Then there’s the flipside: reactive hypoglycemia, a result of a rapid drop in blood sugar after a spike, can cause anxiety, fatigue, weakness/trembling, brain fog, and of course cravings. And so the cycle repeats.
But it doesn’t have to!
By taking it upon ourselves to learn about what causes our blood sugars to rise suddenly or gently, we can manage our diet and other lifestyle factors accordingly.
And yes, it’s not just about diet, Dr. Means tells us. While added sugar and refined carbohydrates are indeed the main drivers of glycemic variability, our sleep, movement, stress management, and even toxin exposure play important parts too.
One way to do this, that Dr. Means recommends, is with a continuous glucose monitor:
Track Your Blood Sugars For Better Personalized Health
Another way is to just apply principles that work for almost everyone:
10 Ways To Balance Blood Sugars
Want to know more from Dr. Means?
You might like her book:
Good Energy – by Dr. Casey Means
…which goes into this in far more detail than we have room to today.
Enjoy!
-
The Blue Zones, Second Edition – by Dan Buettner
Eat beans & greens, take walks, have a purpose; you can probably list off the top of your head some of the “advices from Blue Zones”, so what makes this book stand out?
This is perhaps one of the most thoughtful investigations; the author (a National Geographic researcher) toured and researched all the Blue Zones, took many, many notes (we get details), and asked a lot of questions that others skipped.
For example, a lot of books about the Blue Zones mention the importance of community, but they don’t go into much detail about what that looks like, and they certainly don’t tend to explain what we should do about it.
And that’s because community is often viewed as environmental in a way that we can’t control. If we want to take supplements, eat a certain way, exercise, etc, we can do all those things alone if we want. But if we want community? We’re reliant on other people—and that’s a taboo in the US, and US-influenced places.
So, one way this book excels is in describing how exactly people foster community in the Blue Zones (hint: the big picture—the form of the community—is different in each place, but the individual actions taken are similar), with particular attention to the roles actively taken on by the community elders.
In a similar vein, “reduce stress” is good, but what mindsets and mechanisms do they use that are still reproducible if we are not, for example, Okinawan farmers? Again, Buettner delivers in spades.
Bottom line: this is the Blue Zones book that digs deeper than the others, and makes the advice much more applicable no matter where we live.
Click here to check out The Blue Zones, and build these 9 things into your life!
-
When the Body Says No – by Dr. Gabor Maté
We know that chronic stress is bad for us because of what it does to our cortisol levels, so what is the rest of this book about?
Dr. Gabor Maté is a medical doctor, heavily specialized in the impact of psychological trauma on long term physical health.
Here, he examines—as the subtitle promises—the connection between stress and disease. As it turns out, it’s not that simple.
We learn not just about the impact that stress has on our immune system (including increasing the risk of autoimmune disorders like rheumatoid arthritis), the cardiovascular system, and various other critical systems of the body… But also:
- how environmental factors and destructive coping styles contribute to the onset of disease,
- how traumatic events can warp people’s physical perception of pain, and
- how certain illnesses are associated with particular personality types.
This last point is not “astrology for doctors”, by the way. It has more to do with which coping strategies people are likely to employ, and thus which diseases become more likely to take hold.
The book has practical advice too, and it’s not just “reduce your stress”. Ideally, of course, indeed reduce your stress. But that’s a) obvious b) not always possible. Rather, Dr. Maté explains which coping strategies result in the least prevalence of disease.
In terms of writing style, the book is very much easy reading, but be warned that (ironically) this isn’t exactly a feel-good book. There are a lot of tragic stories in it. But even those are very much worth reading.
Bottom line: if you (and/or a loved one) are suffering from stress, this book will give you the knowledge and understanding to minimize the harm that it will otherwise do.
Click here to check out When The Body Says No, and take good care of yourself; you’re important!
-
154 million lives saved in 50 years: 5 charts on the global success of vaccines
We know vaccines have been a miracle for public health. Now, new research led by the World Health Organization has found vaccines have saved an estimated 154 million lives in the past 50 years from 14 different diseases. Most of these have been children under five, and around two-thirds children under one year old.
In 1974 the World Health Assembly launched the Expanded Programme on Immunization with the goal to vaccinate all children against diphtheria, tetanus, pertussis (whooping cough), measles, polio, tuberculosis and smallpox by 1990. The program was subsequently expanded to include several other diseases.
The modelling, marking 50 years since this program was established, shows a child aged under ten has about a 40% greater chance of living until their next birthday, compared to if we didn’t have vaccines. And these positive effects can be seen well into adult life. A 50-year-old has a 16% greater chance of celebrating their next birthday thanks to vaccines.
What the study did
The researchers developed mathematical and statistical models which took in vaccine coverage data and population numbers from 194 countries for the years 1974–2024. Not all diseases were included (for example smallpox, which was eradicated in 1980, was left out).
The analysis includes vaccines for 14 diseases, with 11 of these included in the Expanded Programme on Immunization. For some countries, additional vaccines such as Japanese encephalitis, meningitis A and yellow fever were included, as these diseases contribute to major disease burden in certain settings.
The models were used to simulate how diseases would have spread from 1974 to now, as vaccines were introduced, for each country and age group, incorporating data on increasing vaccine coverage over time.
Children are the greatest beneficiaries of vaccines
Since 1974, the rate of deaths in children before their first birthday has more than halved. The researchers calculated that almost 40% of this reduction is due to vaccines.
The effects have been greatest for children born in the 1980s because of the intensive efforts made globally to reduce the burden of diseases like measles, polio and whooping cough.
Some 60% of the 154 million lives saved would have been lives lost to measles. This is likely due to its ability to spread rapidly. One person with measles can spread the infection to 12–18 people.
The study also found some variation across different parts of the world. For example, vaccination programs have had a much greater impact on the probability of children living longer across low- and middle-income countries and settings with weaker health systems such as the eastern Mediterranean and African regions. These results highlight the important role vaccines play in promoting health equity.
Vaccine success is not assured
Low or declining vaccine coverage can lead to epidemics which can devastate communities and overwhelm health systems.
Notably, the COVID pandemic saw an overall decline in measles vaccine coverage, falling from 86% of children having received their first dose in 2019 to 83% in 2022. This is concerning because very high levels of vaccination coverage (more than 95%) are required to achieve herd immunity against measles.
In Australia, the coverage for childhood vaccines, including measles, mumps and rubella, has declined compared to before the pandemic.
This study is a reminder of why we need to continue to vaccinate – not just against measles, but against all diseases we have safe and effective vaccines for.
The results of this research don’t tell us the full story about the impact of vaccines. For example, the authors didn’t include data for some vaccines such as COVID and HPV (human papillomavirus). Also, like with all modelling studies, there are some uncertainties, as data was not available for all time periods and countries.
Nonetheless, the results show the success of global vaccination programs over time. If we want to continue to see lives saved, we need to keep investing in vaccination locally, regionally and globally.
Meru Sheel, Associate Professor and Epidemiologist, Infectious Diseases, Immunisation and Emergencies Group, Sydney School of Public Health, University of Sydney and Alexandra Hogan, Mathematical epidemiologist, UNSW Sydney
This article is republished from The Conversation under a Creative Commons license. Read the original article.
-
How do science journalists decide whether a psychology study is worth covering?
Complex research papers and data flood academic journals daily, and science journalists play a pivotal role in disseminating that information to the public. This can be a daunting task, requiring a keen understanding of the subject matter and the ability to translate dense academic language into narratives that resonate with the general public.
Several resources and tip sheets, including the Know Your Research section here at The Journalist’s Resource, aim to help journalists hone their skills in reporting on academic research.
But what factors do science journalists look for to decide whether a social science research study is trustworthy and newsworthy? That’s the question researchers at the University of California, Davis, and the University of Melbourne in Australia examine in a recent study, “How Do Science Journalists Evaluate Psychology Research?” published in September in Advances in Methods and Practices in Psychological Science.
Their online survey of 181 mostly U.S.-based science journalists looked at how and whether they were influenced by four factors in fictitious research summaries: the sample size (number of participants in the study), sample representativeness (whether the participants in the study were from a convenience sample or a more representative sample), the statistical significance level of the result (just barely statistically significant or well below the significance threshold), and the prestige of a researcher’s university.
The researchers found that sample size was the only factor that had a robust influence on journalists’ ratings of how trustworthy and newsworthy a study finding was.
University prestige had no effect, while the effects of sample representativeness and statistical significance were inconclusive.
But there’s nuance to the findings, the authors note.
“I don’t want people to think that science journalists aren’t paying attention to other things, and are only paying attention to sample size,” says Julia Bottesini, an independent researcher, a recent Ph.D. graduate from the Psychology Department at UC Davis, and the first author of the study.
Overall, the results show that “these journalists are doing a very decent job” vetting research findings, Bottesini says.
Also, the findings from the study are not generalizable to all science journalists or other fields of research, the authors note.
“Instead, our conclusions should be circumscribed to U.S.-based science journalists who are at least somewhat familiar with the statistical and replication challenges facing science,” they write. (Over the past decade a series of projects have found that the results of many studies in psychology and other fields can’t be reproduced, leading to what has been called a ‘replication crisis.’)
“This [study] is just one tiny brick in the wall and I hope other people get excited about this topic and do more research on it,” Bottesini says.
More on the study’s findings
The study’s findings can be useful for researchers who want to better understand how science journalists read their research and what kind of intervention — such as teaching journalists about statistics — can help journalists better understand research papers.
“As an academic, I take away the idea that journalists are a great population to try to study because they’re doing something really important and it’s important to know more about what they’re doing,” says Ellen Peters, director of the Center for Science Communication Research at the School of Journalism and Communication at the University of Oregon. Peters, who was not involved in the study, is also a psychologist who studies human judgment and decision-making.
Peters says the study was “overall terrific.” She adds that understanding how journalists do their work “is an incredibly important thing to do because journalists are who reach the majority of the U.S. with science news, so understanding how they’re reading some of our scientific studies and then choosing whether to write about them or not is important.”
The study, conducted between December 2020 and March 2021, is based on an online survey of journalists who said they at least sometimes covered science or other topics related to health, medicine, psychology, social sciences, or well-being. They were offered a $25 Amazon gift card as compensation.
Among the participants, 77% were women, 19% were men, 3% were nonbinary and 1% preferred not to say. About 62% said they had studied physical or natural sciences at the undergraduate level, and 24% at the graduate level. Also, 48% reported having a journalism degree. The study did not include the journalists’ news reporting experience level.
Participants were recruited through the professional network of Christie Aschwanden, an independent journalist and consultant on the study, which could be a source of bias, the authors note.
“Although the size of the sample we obtained (N = 181) suggests we were able to collect a range of perspectives, we suspect this sample is biased by an ‘Aschwanden effect’: that science journalists in the same professional network as C. Aschwanden will be more familiar with issues related to the replication crisis in psychology and subsequent methodological reform, a topic C. Aschwanden has covered extensively in her work,” they write.
Participants were randomly presented with eight of 22 one-paragraph fictitious social and personality psychology research summaries with fictitious authors. The summaries are posted on Open Science Framework, a free and open-source project management tool for researchers by the Center for Open Science, with a mission to increase openness, integrity and reproducibility of research.
For instance, one of the vignettes reads:
“Scientists at Harvard University announced today the results of a study exploring whether introspection can improve cooperation. 550 undergraduates at the university were randomly assigned to either do a breathing exercise or reflect on a series of questions designed to promote introspective thoughts for 5 minutes. Participants then engaged in a cooperative decision-making game, where cooperation resulted in better outcomes. People who spent time on introspection performed significantly better at these cooperative games (t (548) = 3.21, p = 0.001). ‘Introspection seems to promote better cooperation between people,’ says Dr. Quinn, the lead author on the paper.”
In addition to answering multiple-choice survey questions, participants were given the opportunity to answer open-ended questions, such as “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?”
Bottesini says those responses illuminated how science journalists analyze a research study. Participants often mentioned the prestige of the journal in which it was published or whether the study had been peer-reviewed. Many also seemed to value experimental research designs over observational studies.
Considering statistical significance
When it came to considering p-values, “some answers suggested that journalists do take statistical significance into account, but only very few included explanations that suggested they made any distinction between higher or lower p values; instead, most mentions of p values suggest journalists focused on whether the key result was statistically significant,” the authors write.
Also, many participants mentioned that it was very important to talk to outside experts or researchers in the same field to get a better understanding of the finding and whether it could be trusted, the authors write.
“Journalists also expressed that it was important to understand who funded the study and whether the researchers or funders had any conflicts of interest,” they write.
Participants also “indicated that making claims that were calibrated to the evidence was also important and expressed misgivings about studies for which the conclusions do not follow from the evidence,” the authors write.
In response to the open-ended question, “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?” some journalists wrote they checked whether the study was overstating conclusions or claims. Below are some of their written responses:
- “Is the researcher adamant that this study of 40 college kids is representative? If so, that’s a red flag.”
- “Whether authors make sweeping generalizations based on the study or take a more measured approach to sharing and promoting it.”
- “Another major point for me is how ‘certain’ the scientists appear to be when commenting on their findings. If a researcher makes claims which I consider to be over-the-top about the validity or impact of their findings, I often won’t cover.”
- “I also look at the difference between what an experiment actually shows versus the conclusion researchers draw from it — if there’s a big gap, that’s a huge red flag.”
Peters says the study’s findings show that “not only are journalists smart, but they have also gone out of their way to get educated about things that should matter.”
What other research shows about science journalists
A 2023 study, published in the International Journal of Communication, based on an online survey of 82 U.S. science journalists, aims to understand what they know and think about open-access research, including peer-reviewed journals and articles that don’t have a paywall, and preprints. Data was collected between October 2021 and February 2022. Preprints are scientific studies that have yet to be peer-reviewed and are shared on open repositories such as medRxiv and bioRxiv. The study finds that its respondents “are aware of OA and related issues and make conscious decisions around which OA scholarly articles they use as sources.”
A 2021 study, published in the Journal of Science Communication, looks at the impact of the COVID-19 pandemic on the work of science journalists. Based on an online survey of 633 science journalists from 77 countries, it finds that the pandemic somewhat brought scientists and science journalists closer together. “For most respondents, scientists were more available and more talkative,” the authors write. The pandemic has also provided an opportunity to explain the scientific process to the public, and remind them that “science is not a finished enterprise,” the authors write.
More than a decade ago, a 2008 study, published in PLOS Medicine, and based on an analysis of 500 health news stories, found that “journalists usually fail to discuss costs, the quality of the evidence, the existence of alternative options, and the absolute magnitude of potential benefits and harms,” when reporting on research studies. Giving time to journalists to research and understand the studies, giving them space for publication and broadcasting of the stories, and training them in understanding academic research are some of the solutions to fill the gaps, writes Gary Schwitzer, the study author.
Advice for journalists
We asked Bottesini, Peters, Aschwanden and Tamar Wilner, a postdoctoral fellow at the University of Texas, who was not involved in the study, to share advice for journalists who cover research studies. Wilner is conducting a study on how journalism research informs the practice of journalism. Here are their tips:
1. Examine the study before reporting it.
Does the study claim match the evidence? “One thing that makes me trust the paper more is if their interpretation of the findings is very calibrated to the kind of evidence that they have,” says Bottesini. In other words, if the study makes a claim in its results that’s far-fetched, the authors should present a lot of evidence to back that claim.
Not all surprising results are newsworthy. If you come across a surprising finding from a single study, Peters advises you to step back and remember Carl Sagan’s quote: “Extraordinary claims require extraordinary evidence.”
How transparent are the authors about their data? For instance, are the authors posting information such as their data and the computer codes they use to analyze the data on platforms such as Open Science Framework, AsPredicted, or The Dataverse Project? Some researchers ‘preregister’ their studies, which means they share how they’re planning to analyze the data before they see them. “Transparency doesn’t automatically mean that a study is trustworthy,” but it gives others the chance to double-check the findings, Bottesini says.
Look at the study design. Is it an experimental study or an observational study? Observational studies can show correlations but not causation.
“Observational studies can be very important for suggesting hypotheses and pointing us towards relationships and associations,” Aschwanden says.
Experimental studies can provide stronger evidence toward a cause, but journalists must still be cautious when reporting the results, she advises. “If we end up implying causality, then once it’s published and people see it, it can really take hold,” she says.
Know the difference between preprints and peer-reviewed, published studies. Peer-reviewed papers tend to be of higher quality than those that are not peer-reviewed. Read our tip sheet on the difference between preprints and journal articles.
Beware of predatory journals. Predatory journals are journals that “claim to be legitimate scholarly journals, but misrepresent their publishing practices,” according to a 2020 journal article, published in the journal Toxicologic Pathology, “Predatory Journals: What They Are and How to Avoid Them.”
2. Zoom in on data.
Read the methods section of the study. The methods section of the study usually appears after the introduction and background section. “To me, the methods section is almost the most important part of any scientific paper,” says Aschwanden. “It’s amazing to me how often you read the design and the methods section, and anyone can see that it’s a flawed design. So just giving things a gut-level check can be really important.”
What’s the sample size? Not all good studies have large numbers of participants but pay attention to the claims a study makes with a small sample size. “If you have a small sample, you calibrate your claims to the things you can tell about those people and don’t make big claims based on a little bit of evidence,” says Bottesini.
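To see concretely why a small sample forces more modest claims, here is a minimal sketch (with hypothetical numbers, not taken from any study mentioned above) of how the margin of error of a survey mean shrinks with the square root of the sample size:

```python
import math

def margin_of_error(sd: float, n: int, z: float = 1.96) -> float:
    """Approximate half-width of a 95% confidence interval for a mean.

    sd is the standard deviation of the measurement; z = 1.96 is the
    normal critical value for 95% confidence.
    """
    return z * sd / math.sqrt(n)

# Hypothetical: the same spread (sd = 10 points on some scale),
# measured in a small study vs. a larger one
small = margin_of_error(10, 40)    # ~3.1 points either way
large = margin_of_error(10, 550)   # ~0.8 points either way
print(f"n=40:  ±{small:.1f}")
print(f"n=550: ±{large:.1f}")
```

The point isn’t the exact numbers (which are invented here), but the shape of the relationship: quadrupling precision requires roughly sixteen times the participants, which is why claims from 40 college students deserve wide error bars.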
But also remember that factors such as sample size and p-value are not “as clear cut as some journalists might assume,” says Wilner.
How representative of a population is the study sample? “If the study has a non-representative sample of, say, undergraduate students, and they’re making claims about the general population, that’s kind of a red flag,” says Bottesini. Aschwanden points to the acronym WEIRD, which stands for “Western, Educated, Industrialized, Rich, and Democratic,” and is used to highlight a lack of diversity in a sample. Studies based on such samples may not be generalizable to the entire population, she says.
Look at the p-value. Statistical significance is both confusing and controversial, but it’s important to consider. Read our tip sheet, “5 Things Journalists Need to Know About Statistical Significance,” to better understand it.
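For readers who want to sanity-check a reported result, here is a minimal sketch of converting a t statistic into an approximate two-sided p-value. The fictitious vignette earlier in this article reports t(548) = 3.21, p = 0.001; with that many degrees of freedom the t distribution is close to standard normal, so the standard library’s erfc function is enough. This is an illustrative approximation, not a replacement for proper statistical software:

```python
import math

def two_sided_p_from_t(t_stat: float, df: int) -> float:
    """Approximate two-sided p-value for a t statistic.

    Uses the normal-tail formula p = erfc(|t| / sqrt(2)), which is a
    good approximation when df is large; for small df, use a proper
    t distribution (e.g. scipy.stats.t.sf) instead.
    """
    if df < 30:
        raise ValueError("normal approximation is poor for small df")
    return math.erfc(abs(t_stat) / math.sqrt(2))

# The vignette reports t(548) = 3.21:
p = two_sided_p_from_t(3.21, 548)
print(f"p ≈ {p:.4f}")  # roughly 0.0013, consistent with the reported p = 0.001
```

A result like this is well below the conventional 0.05 threshold, but as the study’s authors note, the more informative question is how far below, since a p-value just barely under the threshold is weaker evidence than one an order of magnitude smaller.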
3. Talk to scientists not involved in the study.
If you’re not sure about the quality of a study, ask for help. “Talk to someone who is an expert in study design or statistics to make sure that [the study authors] use the appropriate statistics and that methods they use are appropriate because it’s amazing to me how often they’re not,” says Aschwanden.
Get an opinion from an outside expert. It’s always a good idea to present the study to other researchers in the field, who have no conflicts of interest and are not involved in the research you’re covering and get their opinion. “Don’t take scientists at their word. Look into it. Ask other scientists, preferably the ones who don’t have a conflict of interest with the research,” says Bottesini.
4. Remember that a single study is simply one piece of a growing body of evidence.
“I have a general rule that a single study doesn’t tell us very much; it just gives us proof of concept,” says Peters. “It gives us interesting ideas. It should be retested. We need an accumulation of evidence.”
Aschwanden says as a practice, she tries to avoid reporting stories about individual studies, with some exceptions such as very large, randomized controlled studies that have been underway for a long time and have a large number of participants. “I don’t want to say you never want to write a single-study story, but it always needs to be placed in the context of the rest of the evidence that we have available,” she says.
Wilner advises journalists to spend some time looking at the scope of research on the study’s specific topic and learn how it has been written about and studied up to that point.
“We would want science journalists to be reporting the balance of evidence, and not focusing unduly on the findings that are just in front of them in the most recent study,” Wilner says. “And that’s a very difficult thing to ask journalists to do, because they’re being asked to make their article very newsy. So it’s a difficult balancing act, but we can try and push journalists to do more of that.”
5. Remind readers that science is always changing.
“Science is always two steps forward, one step back,” says Peters. Give the public a notion of uncertainty, she advises. “This is what we know today. It may change tomorrow, but this is the best science that we know of today.”
Aschwanden echoes the sentiment. “All scientific results are provisional, and we need to keep that in mind,” she says. “It doesn’t mean that we can’t know anything, but it’s very important that we don’t overstate things.”
Authors of a study published in PNAS in January analyzed more than 14,000 psychology papers and found that replication success rates differ widely by psychology subfields. That study also found that papers that could not be replicated received more initial press coverage than those that could.
The authors note that the media “plays a significant role in creating the public’s image of science and democratizing knowledge, but it is often incentivized to report on counterintuitive and eye-catching results.”
Ideally, the news media would have a positive relationship with replication success rates in psychology, the authors of the PNAS study write. “Contrary to this ideal, however, we found a negative association between media coverage of a paper and the paper’s likelihood of replication success,” they write. “Therefore, deciding a paper’s merit based on its media coverage is unwise. It would be valuable for the media to remind the audience that new and novel scientific results are only food for thought before future replication confirms their robustness.”
Additional reading
Uncovering the Research Behaviors of Reporters: A Conceptual Framework for Information Literacy in Journalism
Katerine E. Boss, et al. Journalism & Mass Communication Educator, October 2022.
The Problem with Psychological Research in the Media
Steven Stosny. Psychology Today, September 2022.
Critically Evaluating Claims
Megha Satyanarayana, The Open Notebook, January 2022.
How Should Journalists Report a Scientific Study?
Charles Binkley and Subramaniam Vincent. Markkula Center for Applied Ethics at Santa Clara University, September 2020.
What Journalists Get Wrong About Social Science: Full Responses
Brian Resnick. Vox, January 2016.
From The Journalist’s Resource
8 Ways Journalists Can Access Academic Research for Free
5 Things Journalists Need to Know About Statistical Significance
5 Common Research Designs: A Quick Primer for Journalists
5 Tips for Using PubPeer to Investigate Scientific Research Errors and Misconduct
What’s Standard Deviation? 4 Things Journalists Need to Know
This article first appeared on The Journalist’s Resource and is republished here under a Creative Commons license.
-
When Carbs, Proteins, & Fats Switch Metabolic Roles
Strange Things Happening In The Islets Of Langerhans
It is generally known and widely accepted that carbs have the biggest effect on blood sugar levels (and thus insulin response), fats less so, and protein least of all.
And yet, there was a groundbreaking study published yesterday which found:
❝Glucose is the well-known driver of insulin, but we were surprised to see such high variability, with some individuals showing a strong response to proteins, and others to fats, which had never been characterized before.
Insulin plays a major role in human health, in everything from diabetes, where it is too low*, to obesity, weight gain and even some forms of cancer, where it is too high.
These findings lay the groundwork for personalized nutrition that could transform how we treat and manage a range of conditions.❞
*saying “too low” here is potentially misleading without clarification. Yes, Type 1 diabetics have too little [endogenous] insulin, because the pancreas is at war with itself and thus isn’t producing useful quantities of insulin, if any. Type 2, however, is more a case of acquired insulin insensitivity: having had too much insulin at once too often, the body stops listening to it, “boy who cried wolf”-style. The pancreas also becomes fatigued from producing so much insulin that’s often being ignored, and eventually produces less and less while more and more insulin is needed to get the same response. So it can legitimately be said that “there’s not enough”, but that’s more of a subjective outcome than an objective cause.
Back to the study itself, though…
What they found, and how they found it
Researchers took pancreatic islets from 140 heterogeneous donors (varied in age and sex; ostensibly mostly non-diabetic donors, but they acknowledge type 2 diabetes could potentially have gone undiagnosed in some donors*) and tested cell cultures from each with various carbs, proteins, and fats.
They found the expected results in most of the cases, but around 9% responded more strongly to the fats than to the carbs (even more strongly than to glucose specifically), and, even more surprisingly, 8% responded more strongly to the proteins.
*there were also some known type 2 diabetics amongst the donors; as expected, those had a poor insulin response to glucose, but their insulin responses to proteins and fats were largely unaffected.
What this means
While this is, in essence, a pilot study (the researchers called for larger and more varied studies, as well as in vivo human studies), the implications so far are important:
It appears that, for a minority of people, a lot of (generally considered very good) antidiabetic advice may not be working in the way previously understood. They’re going to (for example) put fat on their carbs to reduce the blood sugar spike, which will technically still work, but for them the fats will briefly spike the insulin response anyway, and it is that very insulin response that lowers the blood sugars.
In practical terms, there’s not a lot we can do about this at home just yet. Even continuous glucose monitors won’t tell us precisely, because they’re monitoring glucose, not the insulin response. We could probably measure everything and do some math to work out what our insulin response has been like, based on the pace of change in blood sugar levels (which won’t decrease without insulin to allow it), but even that is at best grounds for a hypothesis for now.
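For the curious, the back-of-the-envelope math described above might look something like this sketch: it takes evenly spaced glucose readings (as from a CGM) and uses the rate at which glucose falls between readings as a very crude proxy for insulin activity. The function names, the 15-minute interval, and the example trace are all illustrative assumptions, not anything from the study, and this is not a validated clinical measure.

```python
# Rough sketch: infer a crude proxy for insulin activity from the rate
# at which blood glucose falls, using evenly spaced CGM readings.
# Illustrative arithmetic only; not a validated clinical measure.

def glucose_decline_rates(readings_mg_dl, interval_min=15):
    """Drop in mg/dL per minute between consecutive readings.

    Positive values mean glucose is falling, i.e. insulin is
    (presumably) acting; negative values mean glucose is rising.
    """
    return [
        (earlier - later) / interval_min
        for earlier, later in zip(readings_mg_dl, readings_mg_dl[1:])
    ]

def peak_decline(readings_mg_dl, interval_min=15):
    """Steepest observed fall: a crude stand-in for peak insulin effect."""
    return max(glucose_decline_rates(readings_mg_dl, interval_min))

# Hypothetical post-meal trace, one reading every 15 minutes:
trace = [110, 145, 160, 150, 130, 115, 108]
rates = glucose_decline_rates(trace)  # mg/dL per minute between readings
```

Even with perfect arithmetic, though, this only tells us how fast glucose fell, not how much insulin it took to make that happen, which is exactly the gap the study highlights.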
Hopefully, more publicly-available tests will be developed soon, enabling us all to know our “insulin response type” per the proteome predictors discovered in this study, rather than having to just blindly bet on it being “normal”.
Ironically, this very response may have hidden itself for a while: if taking fats raised the insulin response without raising blood sugar levels, then so long as blood sugar levels are the only thing being measured, all we’ll see is “took fats at dinner; blood sugars returned to normal more quickly than when taking carbs without fats”.
You can read the study in full here:
Proteomic predictors of individualized nutrient-specific insulin secretion in health and disease
Want to know more about blood sugar management?
You might like to catch up on:
- 10 Ways To Balance Your Blood Sugars
- Track Your Blood Sugars For Better Personalized Health
- How To Turn Back The Clock On Insulin Resistance
Take care!