How do science journalists decide whether a psychology study is worth covering?

10almonds is reader-supported. We may, at no cost to you, receive a portion of sales if you purchase a product through a link in this article.

Complex research papers and data flood academic journals daily, and science journalists play a pivotal role in disseminating that information to the public. This can be a daunting task, requiring a keen understanding of the subject matter and the ability to translate dense academic language into narratives that resonate with the general public.

Several resources and tip sheets, including the Know Your Research section here at The Journalist’s Resource, aim to help journalists hone their skills in reporting on academic research.

But what factors do science journalists look for to decide whether a social science research study is trustworthy and newsworthy? That’s the question researchers at the University of California, Davis, and the University of Melbourne in Australia examine in a recent study, “How Do Science Journalists Evaluate Psychology Research?” published in September in Advances in Methods and Practices in Psychological Science.

Their online survey of 181 mostly U.S.-based science journalists looked at how and whether they were influenced by four factors in fictitious research summaries: the sample size (number of participants in the study), sample representativeness (whether the participants in the study were from a convenience sample or a more representative sample), the statistical significance level of the result (just barely statistically significant or well below the significance threshold), and the prestige of a researcher’s university.

The researchers found that sample size was the only factor that had a robust influence on journalists’ ratings of how trustworthy and newsworthy a study finding was.

University prestige had no effect, while the effects of sample representativeness and statistical significance were inconclusive.

But there’s nuance to the findings, the authors note.

“I don’t want people to think that science journalists aren’t paying attention to other things, and are only paying attention to sample size,” says Julia Bottesini, an independent researcher, a recent Ph.D. graduate from the Psychology Department at UC Davis, and the first author of the study.

Overall, the results show that “these journalists are doing a very decent job” vetting research findings, Bottesini says.

Also, the findings from the study are not generalizable to all science journalists or other fields of research, the authors note.

“Instead, our conclusions should be circumscribed to U.S.-based science journalists who are at least somewhat familiar with the statistical and replication challenges facing science,” they write. (Over the past decade a series of projects have found that the results of many studies in psychology and other fields can’t be reproduced, leading to what has been called a ‘replication crisis.’)

“This [study] is just one tiny brick in the wall and I hope other people get excited about this topic and do more research on it,” Bottesini says.

More on the study’s findings

The study’s findings can be useful for researchers who want to better understand how science journalists read their research and what kind of intervention — such as teaching journalists about statistics — can help journalists better understand research papers.

“As an academic, I take away the idea that journalists are a great population to try to study because they’re doing something really important and it’s important to know more about what they’re doing,” says Ellen Peters, director of the Center for Science Communication Research at the School of Journalism and Communication at the University of Oregon. Peters, who was not involved in the study, is also a psychologist who studies human judgment and decision-making.

Peters says the study was “overall terrific.” She adds that understanding how journalists do their work “is an incredibly important thing to do because journalists are who reach the majority of the U.S. with science news, so understanding how they’re reading some of our scientific studies and then choosing whether to write about them or not is important.”

The study, conducted between December 2020 and March 2021, is based on an online survey of journalists who said they at least sometimes covered science or other topics related to health, medicine, psychology, social sciences, or well-being. They were offered a $25 Amazon gift card as compensation.

Among the participants, 77% were women, 19% were men, 3% were nonbinary and 1% preferred not to say. About 62% said they had studied physical or natural sciences at the undergraduate level, and 24% at the graduate level. Also, 48% reported having a journalism degree. The study did not record the journalists’ level of news reporting experience.

Participants were recruited through the professional network of Christie Aschwanden, an independent journalist and consultant on the study, which could be a source of bias, the authors note.

“Although the size of the sample we obtained (N = 181) suggests we were able to collect a range of perspectives, we suspect this sample is biased by an ‘Aschwanden effect’: that science journalists in the same professional network as C. Aschwanden will be more familiar with issues related to the replication crisis in psychology and subsequent methodological reform, a topic C. Aschwanden has covered extensively in her work,” they write.

Participants were randomly presented with eight of 22 one-paragraph fictitious social and personality psychology research summaries with fictitious authors. The summaries are posted on the Open Science Framework, a free, open-source project management tool for researchers created by the Center for Open Science, whose mission is to increase the openness, integrity and reproducibility of research.

For instance, one of the vignettes reads:

“Scientists at Harvard University announced today the results of a study exploring whether introspection can improve cooperation. 550 undergraduates at the university were randomly assigned to either do a breathing exercise or reflect on a series of questions designed to promote introspective thoughts for 5 minutes. Participants then engaged in a cooperative decision-making game, where cooperation resulted in better outcomes. People who spent time on introspection performed significantly better at these cooperative games (t (548) = 3.21, p = 0.001). ‘Introspection seems to promote better cooperation between people,’ says Dr. Quinn, the lead author on the paper.”
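
One practical way to gut-check a summary like this is to confirm that the reported statistics are internally consistent: the t-statistic and its degrees of freedom imply the p-value. Here is a minimal sketch, assuming Python with SciPy installed, that recomputes the p-value from the fictitious figures in the vignette above; it is an illustration, not part of the study.

```python
# Minimal consistency check for the vignette's reported statistics.
# Assumes SciPy is installed; the numbers are fictitious, taken from
# the example summary above, not from any real study.
from scipy import stats

t_value = 3.21   # reported t-statistic
df = 548         # reported degrees of freedom, t(548)

# Two-tailed p-value implied by that t-statistic
p_two_tailed = 2 * stats.t.sf(t_value, df)
print(f"p = {p_two_tailed:.4f}")  # about 0.001, consistent with the reported value
```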

In addition to answering multiple-choice survey questions, participants were given the opportunity to answer open-ended questions, such as “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?”

Bottesini says those responses illuminated how science journalists analyze a research study. Participants often mentioned the prestige of the journal in which it was published or whether the study had been peer-reviewed. Many also seemed to value experimental research designs over observational studies.

Considering statistical significance

When it came to considering p-values, “some answers suggested that journalists do take statistical significance into account, but only very few included explanations that suggested they made any distinction between higher or lower p values; instead, most mentions of p values suggest journalists focused on whether the key result was statistically significant,” the authors write.

Also, many participants mentioned that it was very important to talk to outside experts or researchers in the same field to get a better understanding of the finding and whether it could be trusted, the authors write.

“Journalists also expressed that it was important to understand who funded the study and whether the researchers or funders had any conflicts of interest,” they write.

Participants also “indicated that making claims that were calibrated to the evidence was also important and expressed misgivings about studies for which the conclusions do not follow from the evidence,” the authors write.

In response to the open-ended question, “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?” some journalists wrote they checked whether the study was overstating conclusions or claims. Below are some of their written responses:

  • “Is the researcher adamant that this study of 40 college kids is representative? If so, that’s a red flag.”
  • “Whether authors make sweeping generalizations based on the study or take a more measured approach to sharing and promoting it.”
  • “Another major point for me is how ‘certain’ the scientists appear to be when commenting on their findings. If a researcher makes claims which I consider to be over-the-top about the validity or impact of their findings, I often won’t cover.”
  • “I also look at the difference between what an experiment actually shows versus the conclusion researchers draw from it — if there’s a big gap, that’s a huge red flag.”

Peters says the study’s findings show that “not only are journalists smart, but they have also gone out of their way to get educated about things that should matter.”

What other research shows about science journalists

A 2023 study, published in the International Journal of Communication and based on an online survey of 82 U.S. science journalists, aims to understand what they know and think about open-access (OA) research, including peer-reviewed journal articles that don’t have a paywall, as well as preprints, which are scientific studies that have yet to be peer-reviewed and are shared on open repositories such as medRxiv and bioRxiv. Data was collected between October 2021 and February 2022. The study finds that its respondents “are aware of OA and related issues and make conscious decisions around which OA scholarly articles they use as sources.”

A 2021 study, published in the Journal of Science Communication, looks at the impact of the COVID-19 pandemic on the work of science journalists. Based on an online survey of 633 science journalists from 77 countries, it finds that the pandemic brought scientists and science journalists somewhat closer together. “For most respondents, scientists were more available and more talkative,” the authors write. The pandemic also provided an opportunity to explain the scientific process to the public and to remind them that “science is not a finished enterprise,” the authors write.

More than a decade ago, a 2008 study, published in PLOS Medicine and based on an analysis of 500 health news stories, found that “journalists usually fail to discuss costs, the quality of the evidence, the existence of alternative options, and the absolute magnitude of potential benefits and harms” when reporting on research studies. Giving journalists more time to research and understand studies, more space to publish and broadcast their stories, and training in how to read academic research are some of the ways to close those gaps, writes Gary Schwitzer, the study’s author.

Advice for journalists

We asked Bottesini, Peters, Aschwanden and Tamar Wilner, a postdoctoral fellow at the University of Texas, who was not involved in the study, to share advice for journalists who cover research studies. Wilner is conducting a study on how journalism research informs the practice of journalism. Here are their tips:

1. Examine the study before reporting it.

Does the study claim match the evidence? “One thing that makes me trust the paper more is if their interpretation of the findings is very calibrated to the kind of evidence that they have,” says Bottesini. In other words, if the study makes a claim in its results that’s far-fetched, the authors should present a lot of evidence to back that claim.

Not all surprising results are newsworthy. If you come across a surprising finding from a single study, Peters advises you to step back and remember Carl Sagan’s quote: “Extraordinary claims require extraordinary evidence.”

How transparent are the authors about their data? For instance, are the authors posting information such as their data and the computer codes they use to analyze the data on platforms such as Open Science Framework, AsPredicted, or The Dataverse Project? Some researchers ‘preregister’ their studies, which means they share how they’re planning to analyze the data before they see them. “Transparency doesn’t automatically mean that a study is trustworthy,” but it gives others the chance to double-check the findings, Bottesini says.

Look at the study design. Is it an experimental study or an observational study? Observational studies can show correlations but not causation.

“Observational studies can be very important for suggesting hypotheses and pointing us towards relationships and associations,” Aschwanden says.

Experimental studies can provide stronger evidence toward a cause, but journalists must still be cautious when reporting the results, she advises. “If we end up implying causality, then once it’s published and people see it, it can really take hold,” she says.

Know the difference between preprints and peer-reviewed, published studies. Peer-reviewed papers tend to be of higher quality than those that are not peer-reviewed. Read our tip sheet on the difference between preprints and journal articles.

Beware of predatory journals. Predatory journals are journals that “claim to be legitimate scholarly journals, but misrepresent their publishing practices,” according to a 2020 journal article published in the journal Toxicologic Pathology, “Predatory Journals: What They Are and How to Avoid Them.”

2. Zoom in on data.

Read the methods section of the study. The methods section of the study usually appears after the introduction and background section. “To me, the methods section is almost the most important part of any scientific paper,” says Aschwanden. “It’s amazing to me how often you read the design and the methods section, and anyone can see that it’s a flawed design. So just giving things a gut-level check can be really important.”

What’s the sample size? Not all good studies have large numbers of participants but pay attention to the claims a study makes with a small sample size. “If you have a small sample, you calibrate your claims to the things you can tell about those people and don’t make big claims based on a little bit of evidence,” says Bottesini.

But also remember that factors such as sample size and p-value are not “as clear cut as some journalists might assume,” says Wilner.

How representative of a population is the study sample? “If the study has a non-representative sample of, say, undergraduate students, and they’re making claims about the general population, that’s kind of a red flag,” says Bottesini. Aschwanden points to the acronym WEIRD, which stands for “Western, Educated, Industrialized, Rich, and Democratic,” and is used to highlight a lack of diversity in a sample. Studies based on such samples may not be generalizable to the entire population, she says.

Look at the p-value. Statistical significance is both confusing and controversial, but it’s important to consider. Read our tip sheet, “5 Things Journalists Need to Know About Statistical Significance,” to better understand it.
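
For intuition about why the researchers distinguished results that are just barely significant from those well below the threshold, and why sample size matters so much, a small simulation can help. The sketch below is a hypothetical illustration, assuming Python with NumPy and SciPy installed; the effect size and group sizes are made up for demonstration and are not taken from the study.

```python
# Hypothetical illustration (not from the study): the same modest group
# difference can miss significance, be "just barely" significant, or land
# well below the 0.05 threshold depending on how much data backs it up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
effect = 0.25  # assumed standardized difference between the two groups

for n_per_group in (40, 550):  # a small convenience sample vs. a larger sample
    control = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    treated = rng.normal(loc=effect, scale=1.0, size=n_per_group)
    t_stat, p_value = stats.ttest_ind(treated, control)
    print(f"n = {n_per_group:3d} per group: t = {t_stat:5.2f}, p = {p_value:.3f}")
```

With 40 people per group, a difference of this size will often fail to reach significance; with hundreds per group, it will typically fall well below 0.05, which is one reason the sample size behind a claim deserves attention.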

3. Talk to scientists not involved in the study.

If you’re not sure about the quality of a study, ask for help. “Talk to someone who is an expert in study design or statistics to make sure that [the study authors] use the appropriate statistics and that methods they use are appropriate because it’s amazing to me how often they’re not,” says Aschwanden.

Get an opinion from an outside expert. It’s always a good idea to present the study to other researchers in the field who have no conflicts of interest and are not involved in the research you’re covering, and to get their opinion. “Don’t take scientists at their word. Look into it. Ask other scientists, preferably the ones who don’t have a conflict of interest with the research,” says Bottesini.

4. Remember that a single study is simply one piece of a growing body of evidence.

“I have a general rule that a single study doesn’t tell us very much; it just gives us proof of concept,” says Peters. “It gives us interesting ideas. It should be retested. We need an accumulation of evidence.”

Aschwanden says as a practice, she tries to avoid reporting stories about individual studies, with some exceptions such as very large, randomized controlled studies that have been underway for a long time and have a large number of participants. “I don’t want to say you never want to write a single-study story, but it always needs to be placed in the context of the rest of the evidence that we have available,” she says.

Wilner advises journalists to spend some time looking at the scope of research on the study’s specific topic and learn how it has been written about and studied up to that point.

“We would want science journalists to be reporting balance of evidence, and not focusing unduly on the findings that are just in front of them in a most recent study,” Wilner says. “And that’s a very difficult thing to ask journalists to do because they’re being asked to make their article very newsy, so it’s a difficult balancing act, but we can try and push journalists to do more of that.”

5. Remind readers that science is always changing.

“Science is always two steps forward, one step back,” says Peters. Give the public a notion of uncertainty, she advises. “This is what we know today. It may change tomorrow, but this is the best science that we know of today.”

Aschwanden echoes the sentiment. “All scientific results are provisional, and we need to keep that in mind,” she says. “It doesn’t mean that we can’t know anything, but it’s very important that we don’t overstate things.”

Authors of a study published in PNAS in January analyzed more than 14,000 psychology papers and found that replication success rates differ widely by psychology subfields. That study also found that papers that could not be replicated received more initial press coverage than those that could. 

The authors note that the media “plays a significant role in creating the public’s image of science and democratizing knowledge, but it is often incentivized to report on counterintuitive and eye-catching results.”

Ideally, the news media would have a positive relationship with replication success rates in psychology, the authors of the PNAS study write. “Contrary to this ideal, however, we found a negative association between media coverage of a paper and the paper’s likelihood of replication success,” they write. “Therefore, deciding a paper’s merit based on its media coverage is unwise. It would be valuable for the media to remind the audience that new and novel scientific results are only food for thought before future replication confirms their robustness.”

Additional reading

Uncovering the Research Behaviors of Reporters: A Conceptual Framework for Information Literacy in Journalism
Katherine E. Boss, et al. Journalism & Mass Communication Educator, October 2022.

The Problem with Psychological Research in the Media
Steven Stosny. Psychology Today, September 2022.

Critically Evaluating Claims
Megha Satyanarayana, The Open Notebook, January 2022.

How Should Journalists Report a Scientific Study?
Charles Binkley and Subramaniam Vincent. Markkula Center for Applied Ethics at Santa Clara University, September 2020.

What Journalists Get Wrong About Social Science: Full Responses
Brian Resnick. Vox, January 2016.

From The Journalist’s Resource

8 Ways Journalists Can Access Academic Research for Free

5 Things Journalists Need to Know About Statistical Significance

5 Common Research Designs: A Quick Primer for Journalists

5 Tips for Using PubPeer to Investigate Scientific Research Errors and Misconduct

Percent Change versus Percentage-Point Change: What’s the Difference? 4 Tips for Avoiding Math Errors

What’s Standard Deviation? 4 Things Journalists Need to Know

This article first appeared on The Journalist’s Resource and is republished here under a Creative Commons license.

Don’t Forget…

Did you arrive here from our newsletter? Don’t forget to return to the email to continue learning!

Recommended

  • California Becomes Latest State To Try Capping Health Care Spending
  • Radical Longevity – by Dr. Ann Gittleman
    Dr. Gittleman’s top-tier anti-aging book offers a comprehensive approach to health and wellness, with a focus on nutrition and lifestyle adjustments.

Learn to Age Gracefully

Join the 98k+ American women taking control of their health & aging with our 100% free (and fun!) daily emails:

  • Health Care AI, Intended To Save Money, Turns Out To Require a Lot of Expensive Humans

    Preparing cancer patients for difficult decisions is an oncologist’s job. They don’t always remember to do it, however. At the University of Pennsylvania Health System, doctors are nudged to talk about a patient’s treatment and end-of-life preferences by an artificially intelligent algorithm that predicts the chances of death.

    But it’s far from being a set-it-and-forget-it tool. A routine tech checkup revealed the algorithm decayed during the covid-19 pandemic, getting 7 percentage points worse at predicting who would die, according to a 2022 study.

    There were likely real-life impacts. Ravi Parikh, an Emory University oncologist who was the study’s lead author, told KFF Health News the tool failed hundreds of times to prompt doctors to initiate that important discussion — possibly heading off unnecessary chemotherapy — with patients who needed it.

    He believes several algorithms designed to enhance medical care weakened during the pandemic, not just the one at Penn Medicine. “Many institutions are not routinely monitoring the performance” of their products, Parikh said.

    Algorithm glitches are one facet of a dilemma that computer scientists and doctors have long acknowledged but that is starting to puzzle hospital executives and researchers: Artificial intelligence systems require consistent monitoring and staffing to put them in place and to keep them working well.

    In essence: You need people, and more machines, to make sure the new tools don’t mess up.

    “Everybody thinks that AI will help us with our access and capacity and improve care and so on,” said Nigam Shah, chief data scientist at Stanford Health Care. “All of that is nice and good, but if it increases the cost of care by 20%, is that viable?”

    Government officials worry hospitals lack the resources to put these technologies through their paces. “I have looked far and wide,” FDA Commissioner Robert Califf said at a recent agency panel on AI. “I do not believe there’s a single health system, in the United States, that’s capable of validating an AI algorithm that’s put into place in a clinical care system.”

    AI is already widespread in health care. Algorithms are used to predict patients’ risk of death or deterioration, to suggest diagnoses or triage patients, to record and summarize visits to save doctors work, and to approve insurance claims.

    If tech evangelists are right, the technology will become ubiquitous — and profitable. The investment firm Bessemer Venture Partners has identified some 20 health-focused AI startups on track to make $10 million in revenue each in a year. The FDA has approved nearly a thousand artificially intelligent products.

    Evaluating whether these products work is challenging. Evaluating whether they continue to work — or have developed the software equivalent of a blown gasket or leaky engine — is even trickier.

    Take a recent study at Yale Medicine evaluating six “early warning systems,” which alert clinicians when patients are likely to deteriorate rapidly. A supercomputer ran the data for several days, said Dana Edelson, a doctor at the University of Chicago and co-founder of a company that provided one algorithm for the study. The process was fruitful, showing huge differences in performance among the six products.

    It’s not easy for hospitals and providers to select the best algorithms for their needs. The average doctor doesn’t have a supercomputer sitting around, and there is no Consumer Reports for AI.

    “We have no standards,” said Jesse Ehrenfeld, immediate past president of the American Medical Association. “There is nothing I can point you to today that is a standard around how you evaluate, monitor, look at the performance of a model of an algorithm, AI-enabled or not, when it’s deployed.”

    Perhaps the most common AI product in doctors’ offices is called ambient documentation, a tech-enabled assistant that listens to and summarizes patient visits. Last year, investors at Rock Health tracked $353 million flowing into these documentation companies. But, Ehrenfeld said, “There is no standard right now for comparing the output of these tools.”

    And that’s a problem, when even small errors can be devastating. A team at Stanford University tried using large language models — the technology underlying popular AI tools like ChatGPT — to summarize patients’ medical history. They compared the results with what a physician would write.

    “Even in the best case, the models had a 35% error rate,” said Stanford’s Shah. In medicine, “when you’re writing a summary and you forget one word, like ‘fever’ — I mean, that’s a problem, right?”

    Sometimes the reasons algorithms fail are fairly logical. For example, changes to underlying data can erode their effectiveness, like when hospitals switch lab providers.

    Sometimes, however, the pitfalls yawn open for no apparent reason.

    Sandy Aronson, a tech executive at Mass General Brigham’s personalized medicine program in Boston, said that when his team tested one application meant to help genetic counselors locate relevant literature about DNA variants, the product suffered “nondeterminism” — that is, when asked the same question multiple times in a short period, it gave different results.

    Aronson is excited about the potential for large language models to summarize knowledge for overburdened genetic counselors, but “the technology needs to improve.”

    If metrics and standards are sparse and errors can crop up for strange reasons, what are institutions to do? Invest lots of resources. At Stanford, Shah said, it took eight to 10 months and 115 man-hours just to audit two models for fairness and reliability.

    Experts interviewed by KFF Health News floated the idea of artificial intelligence monitoring artificial intelligence, with some (human) data whiz monitoring both. All acknowledged that would require organizations to spend even more money — a tough ask given the realities of hospital budgets and the limited supply of AI tech specialists.

    “It’s great to have a vision where we’re melting icebergs in order to have a model monitoring their model,” Shah said. “But is that really what I wanted? How many more people are we going to need?”

    KFF Health News is a national newsroom that produces in-depth journalism about health issues and is one of the core operating programs at KFF—an independent source of health policy research, polling, and journalism. Learn more about KFF.

    Subscribe to KFF Health News’ free Morning Briefing.

    This article first appeared on KFF Health News and is republished here under a Creative Commons license.

  • Going for a bushwalk? 3 handy foods to have in your backpack (including muesli bars)

    This time of year, many of us love to get out and spend time in nature. This may include hiking through Australia’s many beautiful national parks.

    Walking in nature is a wonderful activity, supporting both physical and mental health. But there can be risks and it’s important to be prepared.

    You may have read the news about hiker Hadi Nazari, who was recently found alive after spending 13 days lost in Kosciuszko National Park.

    He reportedly survived for almost two weeks in the Snowy Mountains region of New South Wales by drinking fresh water from creeks, and eating foraged berries and two muesli bars.

    So next time you’re heading out for a day of hiking, what foods should you pack?

    Here are my three top foods to carry on a bushwalk that are dense in nutrients and energy, lightweight and available from the local grocery store.

    1. Muesli bars

    Nazari reportedly ate two muesli bars he found in a mountain hut. Whoever left the muesli bars there made a great choice.

    Muesli bars come individually wrapped, which helps them last longer and makes them easy to transport.

    They are also a good source of energy. Muesli bars typically contain about 1,500-1,900 kilojoules per 100 grams. The average energy content for a 35g bar is about 614kJ.

    This may be a fraction of what you’d usually need in a day. However, the energy from muesli bars is released at a slow to moderate pace, which will help keep you going for longer.

    Muesli bars are also packed with nutrients. They contain all three macronutrients (carbohydrate, protein and fat) that our body needs to function. They’re a good source of carbohydrates, in particular, which are a key energy source. An average Australian muesli bar contains 14g of whole grains, which provide carbohydrates and dietary fibre for long-lasting energy.

    Muesli bars that contain nuts are typically higher in fat (19.9g per 100g) and protein (9.4g per 100g) than those without.

    Fat and protein are helpful for slowing down the release of energy from foods and the protein will help keep you feeling full for longer.

    There are many different types of muesli bars to choose from. I recommend looking for those with whole grains, higher dietary fibre and higher protein content.

    2. Nuts

    Nuts are nature’s savoury snack and are also a great source of energy. Cashews, pistachios and peanuts contain about 2,300-2,400kJ per 100g while Brazil nuts, pecans and macadamias contain about 2,700-3,000kJ per 100g. So a 30g serving of nuts will provide about 700-900kJ depending on the type of nut.

    Just like muesli bars, the energy from nuts is released slowly. So even a relatively small quantity will keep you powering on.

    Nuts are also full of nutrients, such as protein, fat and fibre, which will help to stave off hunger and keep you moving for longer.

    When choosing which nuts to pack, almost any type of nut is going to be great.

    Peanuts are often the best value for money, or go for something like walnuts that are high in omega-3 fatty acids, or a nut mix.

    Whichever nut you choose, go for the unsalted natural or roasted varieties. Salted nuts will make you thirsty.

    Nut bars are also a great option and have the added benefit of coming in pre-packed serves (although nuts can also be easily packed into re-usable containers).

    If you’re allergic to nuts, roasted chickpeas are another option. Just try to avoid those with added salt.

    3. Dried fruit

    If nuts are nature’s savoury snack, fruit is nature’s candy. Fresh fruits (such as grapes, frozen in advance) are wonderfully refreshing and perfect as an everyday snack, although they can add a bit of weight to your hiking pack.

    So if you’re looking to reduce the weight you’re carrying, go for dried fruit. It’s lighter and will withstand various conditions better than fresh fruit, so is less likely to spoil or bruise on the journey.

    There are lots of varieties of dried fruits, such as sultanas, dried mango, dried apricots and dried apple slices.

    These are good sources of sugar for energy, fibre for fullness and healthy digestion, and contain lots of vitamins and minerals. So choose one (or a combination) that works for you.

    Don’t forget water

    Next time you head out hiking for the day, you’re all set with these easily available, lightweight, energy- and nutrient-dense snacks.

    This is not the time to be overly concerned about limiting your sugar or fat intake. Hiking, particularly in rough terrain, places extra demands on your body and increases your energy needs. For instance, an adult hiking in rough terrain can burn upwards of about 2,000kJ per hour.

    And of course, don’t forget to take plenty of water.

    Having access to even limited food, and plenty of fresh water, will not only make your hike more pleasurable, it can save your life.

    Margaret Murray, Senior Lecturer, Nutrition, Swinburne University of Technology

    This article is republished from The Conversation under a Creative Commons license. Read the original article.

  • Missing Microbes – by Dr. Martin Blaser

    You probably know that antibiotic resistance is a problem, but you might not realize just what a many-headed beast antibiotic overuse is.

    From breeding antibiotic-resistant superbugs, to killing off the friendly bacteria that normally keep pathogens down to harmless numbers (which can result in the death of the host as the pathogens multiply unopposed), to the multiple levels of danger in antibiotic overuse in animal farming, this book is scary enough that you might want to save it for Halloween.

    But Dr. Blaser does not argue against antibiotic use when it’s necessary; many people are alive because of antibiotics, and he himself recovered from typhoid thanks to them.

    The style of the book is narrative, but information-dense. It does not succumb to undue sensationalization, but it’s also far from being a dry textbook.

    Bottom line: if you’d like to understand the real problems caused by antibiotics, and how we can combat that beyond merely “try not to take them unnecessarily”, this book is very worthy reading.

    Click here to check out Missing Microbes, and learn more about yours!

Related Posts

  • California Becomes Latest State To Try Capping Health Care Spending
  • Do Hard Things – by Steve Magness

    It’s easy to say that we must push ourselves if we want to achieve worthwhile things—and it’s also easy to push ourselves into an early grave by overreaching. So, how to do the former, without doing the latter?

    That’s what this book’s about. The author, speaking from a background in the science of sports psychology, applies his accumulated knowledge and understanding to the more general problems of life.

    Most of us are, after all, not sportspeople, or if we are, not serious ones. Those few who are will get benefit from this book too! But it’s mostly aimed at the rest of us who are trying to work out whether/when we should scale up, scale back, change track, or double down:

    • How much can we really achieve in our career?
    • How about in retirement?
    • Do we ever really get too old for athletic feats, or should we keep pressing on?

    Magness brings philosophy and psychological science together, to help us sort our way through.

    Nor is this just a pep talk—there’s readily applicable, practical, real-world advice here, things to enable us to do our (real!) best without getting overwhelmed.

    The style is pop-science, very easy-reading, and clear and comprehensible throughout—without succumbing to undue padding either.

    Bottom line: this is a very pleasant read, that promises to make life more meaningful and manageable at the same time. Highly recommendable!

    Click here to check out Do Hard Things, and get the most out of life!

  • Tight Hamstrings? Here’s A Test To Know If It’s Actually Your Sciatic Nerve

    Tight hamstrings are often not actually due to hamstring issues, but rather, are often being limited by the sciatic nerve. This video offers a home test to determine if the sciatic nerve is causing mobility problems (and how to improve it, if so):

    The Connection

    Try this test:

    • Sit down with a slumped posture.
    • Extend one leg with the ankle flexed.
    • Note any stretching or pulling sensation behind the knee or in the calf.
    • Bring your head down to your chest.

    If this increases the sensation, it likely indicates sciatic nerve involvement.

    If only the hamstrings are tight, head movement won’t change the stretch sensation.

    This is because the nervous system is a continuous structure, so head movement can affect nerve tension throughout the body. While this can cause problems, it can also be integral in the solution. Here are two ways:

    • Flossing method: sit with “poor” slumped posture, extend the knee, keep the ankle flexed, and lift the head to relieve nerve tension. This movement helps the sciatic nerve slide without stretching it.
    • Even easier method: lie on your back, grab behind the knee, and extend the leg while extending the neck. This position avoids compression in the gluteal area, making it suitable for severely compromised nerves. Perform the movement without significant stretching or pain.

    In both cases: move gently to avoid straining the nerve, which can worsen muscle tension. Do 10 repetitions per leg, multiple times a day; after a week, increase to 20 reps.

    A word of caution: speak with your doctor before trying these exercises if you have underlying neurological diseases, cut or infected nerves, or other severe conditions.

    For more on all of this, plus visual demonstrations, enjoy:

    Click Here If The Embedded Video Doesn’t Load Automatically!

    Want to learn more?

    You might also like to read:

    Exercises for Sciatica Pain Relief

    Take care!

  • Lies I Taught in Medical School – by Dr. Robert Lufkin

    There seems to be a pattern of doctors who practice medicine one way, get a serious disease personally, and then completely change their practice of medicine afterwards. This is one of those cases.

    Dr. Lufkin here presents, on a chapter-by-chapter basis, the titularly promised “lies” or, in more legally compliant speak (as he acknowledges in his preface), flawed hypotheses that are generally taught as truths. In many cases, the “lie” is some manner of “xyz is normal and nothing to worry about”, and/or “there is nothing to be done about xyz; suck it up”.

    The end result of the information is not complicated—enjoy a plants-forward, whole-foods, low-carb diet to avoid metabolic disease and all the other things that branch off from it (Dr. Lufkin makes a fair case for metabolic disease leading to a lot of secondary diseases that aren’t considered metabolic diseases per se). But the journey there is actually important, as it answers a lot of questions that are much less commonly understood, and often not even especially talked about, despite their great import and how they may affect health decisions beyond the dietary. Things like understanding the downsides of statins, or the statistical models that can be used to skew studies, such as relative risk reduction and so forth.

    Bottom line: this book gives the ins and outs of what can go right or wrong with metabolic health and why, and how to make sure you don’t sabotage your health through missing information.

    Click here to check out Lies I Taught In Medical School, and arm yourself with knowledge!
