Why Has Nobody Told Me This Before? – by Dr. Julie Smith

10almonds is reader-supported. We may, at no cost to you, receive a portion of sales if you purchase a product through a link in this article.

Superficially, this can be called a “self-help” book, but that undersells it rather. It’s a professionally-written (as in, by a professional psychologist) handbook full of resources. Its goal? Optimizing your mental health to help you stay resilient no matter what life throws your way.

While the marketing of this book is heavily centered around Dr. Smith’s Internet Celebrity™ status, a lot of her motivation for writing it seems to be precisely so that she can delve deeper into the ideas that her social media “bites” don’t allow room for.

Many authors of this genre pad their chapters with examples; there are no lengthy story-telling asides here, and her style doesn’t need them. She knows her field well, and knows well how to communicate the ideas that may benefit the reader.

The main “meat” of the book? Tips, tricks, guides, resources, systems, flowcharts, mental frameworks, and “if all else fails, do this” guidance. The style of the book is clear and simple, with very readable content that she keeps free from jargon without “dumbing down” or patronizing the reader.

All in all, a fine set of tools for anyone’s “getting through life” toolbox.

Get Your Personal Copy Of “Why Has Nobody Told Me This Before?” on Amazon Now!

Recommended

  • The Menopause Manifesto – by Dr. Jen Gunter
  • The Seven Principles for Making Marriage Work – by Dr. John Gottman
    Dr. Gottman’s groundbreaking research reveals the four factors that predict divorce with 91% accuracy, along with seven principles for a successful marriage. A must-read for all couples.

  • Treadmill vs Road

    Have a question or a request? We love to hear from you!

    In cases where we’ve already covered something, we might link to what we wrote before, but will always be happy to revisit any of our topics again in the future too—there’s always more to say!

    As ever: if the question/request can be answered briefly, we’ll do it here in our Q&A Thursday edition. If not, we’ll make a main feature of it shortly afterwards!

    So, no question/request too big or small 😎

    ❝Why do I get tired much more quickly running outside, than I do on the treadmill? Every time I get worn out quickly but at home I can go for much longer!❞

    Short answer: the reason is Newton’s laws of motion.

    In other words: on a treadmill, you need only maintain your position in space relative to the Earth while the treadmill moves beneath you, whereas on the road, you need to push against the Earth with sufficient force to move it relative to your body.

    Illustrative thought experiment to make that clearer: if you were to stand on a treadmill with roller skates, and hold onto the bar with even just one finger, you would maintain your speed as far as the treadmill’s computer is concerned—whereas to maintain your speed on a flat road, you’d still need to push with your back foot every few yards or so.

    More interesting answer: it’s a qualitatively different exercise (i.e. not just quantitatively different). This is because of all that pushing you have to do on the road, while on a treadmill, the only pushing you have to do is just enough to counteract gravity (i.e. to keep you upright).

    As such, both forms of running are cardio exercise (because simply moving your legs quickly, even without having to apply much force, still requires oxygenated blood feeding the muscles), but road-running adds an extra element of resistance exercise for the muscles of your lower body. Thus, road-running will enable you to build and maintain muscle much more than treadmill-running will.

    Some extra things to bear in mind, however:

    1) You can increase the resistance work for either form of running, by adding weight (such as by wearing a weight vest):

    Weight Vests Against Osteoporosis: Do They Really Build Bone?

    …and while road-running will still be the superior form of resistance work (for the reasons we outlined above), adding a weight vest will still be improving your stabilization muscles, just as it would if you were standing still while holding the weight up.

    2) Stationary cycling does not have the same physics differences as stationary running. By this we mean: an exercise bike will require your muscles to do just as much pushing as they would on a road. This makes stationary cycling an excellent choice for high intensity resistance training (HIRT):

    HIIT, But Make It HIRT

    3) The best form of exercise is the one that you will actually do. Thus, when it’s raining sideways outside, running on a treadmill indoors beats not running at all. Similarly, a treadmill exercise session takes a lot less preparation (“switch it on”) than a running session outside (“get dressed appropriately for the weather, apply sunscreen if necessary, remember to bring water, etc.”), and thus is also much more likely to actually happen. The ability to stop whenever one wants is also a reassuring factor that makes one much more likely to start. See for example:

    How To Do HIIT (Without Wrecking Your Body)

    Take care!

  • Top 10 Causes Of High Blood Pressure

    As Dr. Frita Fisher explains, these are actually the top 10 known causes of high blood pressure. Number zero on the list would be “primary hypertension”, which means high blood pressure with no clear underlying cause.

    Superficially, this feels a little like the occasional practice of writing the catch-all “heart failure” as the cause of death on a death certificate, because yes, that heart sure did stop beating. But in reality, primary hypertension is likely often caused by things like unmanaged chronic stress—something that doesn’t show up on most health screenings.

    Dr. Fisher’s Top 10

    • Thyroid disease: both hyperthyroidism and hypothyroidism can cause high blood pressure.
    • Obstructive sleep apnea: characterized by snoring, daytime sleepiness, and headaches, this condition can lead to hypertension.
    • Chronic kidney disease: diseases ranging from diabetic nephropathy to renal vascular disease can cause high blood pressure.
    • Elevated cortisol levels: conditions like Cushing’s syndrome or disease, which involve high cortisol levels, can lead to hypertension—as can a lifestyle with a lot of chronic stress, but that’s less readily diagnosed as such than something one can tell from a blood test.
    • Elevated aldosterone levels: excess aldosterone from the adrenal glands causes the body to retain salt and water, increasing blood pressure, because more stuff = more pressure.
    • Brain tumor: tumors that increase intracranial pressure can cause a rise in blood pressure to ensure adequate brain perfusion. In these cases, the hypertension is keeping you alive—unless it kills you first. If this seems like a strange bodily response, remember that our bodily response to an infection is often fever, to kill off the infection which can’t survive at such high temperatures (but neither can we, so it becomes a game of chicken with our life on the line), so sometimes our body does kill us with one thing while trying to save us from another.
    • Coarctation of the aorta: this congenital heart defect results in narrowing of the aorta, leading to hypertension, especially in the upper body.
    • Pregnancy: pregnancy can either induce or worsen existing hypertension.
    • Obesity: excess weight increases blood flow and pressure on arteries, raising the risk of hypertension and of associated conditions such as diabetes.
    • Drugs: certain medications and recreational drugs (including, counterintuitively, alcohol!) can elevate blood pressure.

    For more information on each of these, enjoy:

    Click Here If The Embedded Video Doesn’t Load Automatically!

    Want to learn more?

    You might also like to read:

    Hypertension: Factors Far More Relevant Than Salt

    Take care!

  • Small Changes For A Healthier Life

    It’s Q&A Day at 10almonds!

    Have a question or a request? You can always hit “reply” to any of our emails, or use the feedback widget at the bottom!

    So, no question/request too big or small

    ❝I am interested in what I can substitute for ham in bean soup?❞

    Well, that depends on what the ham was like! You can certainly buy ready-made vegan lardons (i.e. small bacon/ham bits, often in tiny cubes or similar) in any reasonably-sized supermarket. Being processed, they’re not amazing for your health, but they’re still an improvement on pork.

    Alternatively, you can make your own seitan! Again, seitan is really not a health food, but again, it’s still relatively less bad than pork (unless you are allergic to gluten, in which case, definitely skip this one).

    Alternatively alternatively, in a soup that already contains beans (so the protein element is already covered), you could just skip the ham as an added ingredient, and instead bring the extra flavor by means of a little salt, a little yeast extract, and a little smoked paprika. (If you don’t like yeast extract, don’t worry: it won’t taste of it if you just use a teaspoon in a big pot, or half a teaspoon in a smaller pot.) If you want to go healthier, you can swap out the salt for MSG, which enhances flavor in a similar fashion while containing less sodium.
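
    How much less sodium? That part is simple chemistry arithmetic; here’s a minimal sketch in Python (our own illustration, using approximate standard molar masses; not from any of the sources above):

        # Sodium content by weight: table salt (NaCl) vs MSG (monosodium glutamate)
        na, cl = 22.99, 35.45         # atomic masses of sodium and chlorine
        msg_molar_mass = 169.1        # approximate molar mass of anhydrous MSG
        salt_na = na / (na + cl)      # -> ~0.39, i.e. ~39% sodium by weight
        msg_na = na / msg_molar_mass  # -> ~0.14, i.e. ~14% sodium by weight
        print(f"salt: {salt_na:.0%} sodium, MSG: {msg_na:.0%} sodium")

    In other words, weight for weight, MSG delivers roughly a third of the sodium of table salt.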

    Wondering about the health aspects of MSG? Check out our main feature on this, from last month:

    What’s the deal with MSG?

    ❝I thoroughly enjoy your daily delivery. I’d love to see one for teens too!❞

    That’s great to hear! The average age of our subscribers is generally rather older, but it’s good to know there’s an interest in topics for younger people. We’ll bear that in mind, and see what we can do to cater to that without alienating our older readers!

    That said: it’s never too soon to be learning about stuff that affects us when we’re older—there are lifestyle factors at 20 that affect Alzheimer’s risk at 60 (e.g. excessive drinking at 20* is correlated with higher Alzheimer’s risk at 60).

    *This one may be less of an issue for our US readers, since the US doesn’t have nearly as much of a culture of drinking under 21 as some places. Compare for example with general European practices of drinking moderately from the mid-teens, or the (happily, diminishing—but historically notable) British practice of drinking heavily from the mid-teens.

    ❝How much turmeric should I take each day?❞

    Dr. Michael Greger (of “Dr. Greger’s Daily Dozen” and “How Not To Die” fame) recommends, based on his research, getting at least ¼ tsp of turmeric per day.

    Remember to take it with black pepper though, for a 2000% (i.e. roughly 20-fold) absorption bonus!

    A great way to get it, if you don’t want to take capsules and don’t want to eat spicy food every day, is to throw a teaspoon of turmeric in when making a pot of (we recommend wholegrain!) rice. Turmeric is very water-soluble, so it’ll be transferred into the rice easily during cooking. It’ll make the rice a nice golden yellow color, and/but won’t noticeably change the taste.

    Again, remember to throw in some black pepper, and if you really want to boost the nutritional content, some chia seeds are a great addition too (they’ll get cooked with the rice, so it won’t be like eating seeds later, but the nutrients will be there in the rice dish).

    You can do the same with par-boiled potatoes or other root vegetables, but because cooking those involves water that gets thrown away at the end (unlike with rice), you’ll lose some of the turmeric in the water.

    ❝Request: more people need to be aware of suicidal tendencies and what they can do to ward them off❞

    That’s certainly a very important topic! We’ll cover that properly in one of our Psychology Sunday editions. In the meantime, we’ll mention a previous special that we did, that was mostly about handling depression (in oneself or a loved one), and obviously there’s a degree of crossover:

    The Mental Health First-Aid That You’ll Hopefully Never Need

Related Posts

  • The Menopause Manifesto – by Dr. Jen Gunter
  • How do science journalists decide whether a psychology study is worth covering?

    Complex research papers and data flood academic journals daily, and science journalists play a pivotal role in disseminating that information to the public. This can be a daunting task, requiring a keen understanding of the subject matter and the ability to translate dense academic language into narratives that resonate with the general public.

    Several resources and tip sheets, including the Know Your Research section here at The Journalist’s Resource, aim to help journalists hone their skills in reporting on academic research.

    But what factors do science journalists look for to decide whether a social science research study is trustworthy and newsworthy? That’s the question researchers at the University of California, Davis, and the University of Melbourne in Australia examine in a recent study, “How Do Science Journalists Evaluate Psychology Research?” published in September in Advances in Methods and Practices in Psychological Science.

    Their online survey of 181 mostly U.S.-based science journalists looked at how and whether they were influenced by four factors in fictitious research summaries: the sample size (number of participants in the study), sample representativeness (whether the participants in the study were from a convenience sample or a more representative sample), the statistical significance level of the result (just barely statistically significant or well below the significance threshold), and the prestige of a researcher’s university.

    The researchers found that sample size was the only factor that had a robust influence on journalists’ ratings of how trustworthy and newsworthy a study finding was.

    University prestige had no effect, while the effects of sample representativeness and statistical significance were inconclusive.

    But there’s nuance to the findings, the authors note.

    “I don’t want people to think that science journalists aren’t paying attention to other things, and are only paying attention to sample size,” says Julia Bottesini, an independent researcher, a recent Ph.D. graduate from the Psychology Department at UC Davis, and the first author of the study.

    Overall, the results show that “these journalists are doing a very decent job” vetting research findings, Bottesini says.

    Also, the findings from the study are not generalizable to all science journalists or other fields of research, the authors note.

    “Instead, our conclusions should be circumscribed to U.S.-based science journalists who are at least somewhat familiar with the statistical and replication challenges facing science,” they write. (Over the past decade a series of projects have found that the results of many studies in psychology and other fields can’t be reproduced, leading to what has been called a ‘replication crisis.’)

    “This [study] is just one tiny brick in the wall and I hope other people get excited about this topic and do more research on it,” Bottesini says.

    More on the study’s findings

    The study’s findings can be useful for researchers who want to better understand how science journalists read their research and what kind of intervention — such as teaching journalists about statistics — can help journalists better understand research papers.

    “As an academic, I take away the idea that journalists are a great population to try to study because they’re doing something really important and it’s important to know more about what they’re doing,” says Ellen Peters, director of the Center for Science Communication Research at the School of Journalism and Communication at the University of Oregon. Peters, who was not involved in the study, is also a psychologist who studies human judgment and decision-making.

    Peters says the study was “overall terrific.” She adds that understanding how journalists do their work “is an incredibly important thing to do because journalists are who reach the majority of the U.S. with science news, so understanding how they’re reading some of our scientific studies and then choosing whether to write about them or not is important.”

    The study, conducted between December 2020 and March 2021, is based on an online survey of journalists who said they at least sometimes covered science or other topics related to health, medicine, psychology, social sciences, or well-being. They were offered a $25 Amazon gift card as compensation.

    Among the participants, 77% were women, 19% were men, 3% were nonbinary and 1% preferred not to say. About 62% said they had studied physical or natural sciences at the undergraduate level, and 24% at the graduate level. Also, 48% reported having a journalism degree. The study did not include the journalists’ news reporting experience level.

    Participants were recruited through the professional network of Christie Aschwanden, an independent journalist and consultant on the study, which could be a source of bias, the authors note.

    “Although the size of the sample we obtained (N = 181) suggests we were able to collect a range of perspectives, we suspect this sample is biased by an ‘Aschwanden effect’: that science journalists in the same professional network as C. Aschwanden will be more familiar with issues related to the replication crisis in psychology and subsequent methodological reform, a topic C. Aschwanden has covered extensively in her work,” they write.

    Participants were randomly presented with eight of 22 one-paragraph fictitious social and personality psychology research summaries with fictitious authors. The summaries are posted on Open Science Framework, a free and open-source project management tool for researchers by the Center for Open Science, with a mission to increase openness, integrity and reproducibility of research.

    For instance, one of the vignettes reads:

    “Scientists at Harvard University announced today the results of a study exploring whether introspection can improve cooperation. 550 undergraduates at the university were randomly assigned to either do a breathing exercise or reflect on a series of questions designed to promote introspective thoughts for 5 minutes. Participants then engaged in a cooperative decision-making game, where cooperation resulted in better outcomes. People who spent time on introspection performed significantly better at these cooperative games (t (548) = 3.21, p = 0.001). ‘Introspection seems to promote better cooperation between people,’ says Dr. Quinn, the lead author on the paper.”
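
    As an aside for readers who like to sanity-check reported statistics: a two-tailed p-value can be recovered from a t statistic and its degrees of freedom. Here’s a minimal sketch in Python (assuming SciPy is installed), run against the fictitious vignette’s numbers:

        from scipy import stats

        # The vignette reports t(548) = 3.21, p = 0.001
        t_value, df = 3.21, 548
        p = 2 * stats.t.sf(t_value, df)  # two-tailed: P(|T| > t)
        print(f"p = {p:.4f}")            # ~0.0013, consistent with the reported p = 0.001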

    In addition to answering multiple-choice survey questions, participants were given the opportunity to answer open-ended questions, such as “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?”

    Bottesini says those responses illuminated how science journalists analyze a research study. Participants often mentioned the prestige of the journal in which it was published or whether the study had been peer-reviewed. Many also seemed to value experimental research designs over observational studies.

    Considering statistical significance

    When it came to considering p-values, “some answers suggested that journalists do take statistical significance into account, but only very few included explanations that suggested they made any distinction between higher or lower p values; instead, most mentions of p values suggest journalists focused on whether the key result was statistically significant,” the authors write.

    Also, many participants mentioned that it was very important to talk to outside experts or researchers in the same field to get a better understanding of the finding and whether it could be trusted, the authors write.

    “Journalists also expressed that it was important to understand who funded the study and whether the researchers or funders had any conflicts of interest,” they write.

    Participants also “indicated that making claims that were calibrated to the evidence was also important and expressed misgivings about studies for which the conclusions do not follow from the evidence,” the authors write.

    In response to the open-ended question, “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?” some journalists wrote they checked whether the study was overstating conclusions or claims. Below are some of their written responses:

    • “Is the researcher adamant that this study of 40 college kids is representative? If so, that’s a red flag.”
    • “Whether authors make sweeping generalizations based on the study or take a more measured approach to sharing and promoting it.”
    • “Another major point for me is how ‘certain’ the scientists appear to be when commenting on their findings. If a researcher makes claims which I consider to be over-the-top about the validity or impact of their findings, I often won’t cover.”
    • “I also look at the difference between what an experiment actually shows versus the conclusion researchers draw from it — if there’s a big gap, that’s a huge red flag.”

    Peters says the study’s findings show that “not only are journalists smart, but they have also gone out of their way to get educated about things that should matter.”

    What other research shows about science journalists

    A 2023 study, published in the International Journal of Communication, based on an online survey of 82 U.S. science journalists, aims to understand what they know and think about open-access research, including peer-reviewed journals and articles that don’t have a paywall, and preprints. Data was collected between October 2021 and February 2022. Preprints are scientific studies that have yet to be peer-reviewed and are shared on open repositories such as medRxiv and bioRxiv. The study finds that its respondents “are aware of OA and related issues and make conscious decisions around which OA scholarly articles they use as sources.”

    A 2021 study, published in the Journal of Science Communication, looks at the impact of the COVID-19 pandemic on the work of science journalists. Based on an online survey of 633 science journalists from 77 countries, it finds that the pandemic somewhat brought scientists and science journalists closer together. “For most respondents, scientists were more available and more talkative,” the authors write. The pandemic has also provided an opportunity to explain the scientific process to the public, and remind them that “science is not a finished enterprise,” the authors write.

    More than a decade ago, a 2008 study, published in PLOS Medicine, and based on an analysis of 500 health news stories, found that “journalists usually fail to discuss costs, the quality of the evidence, the existence of alternative options, and the absolute magnitude of potential benefits and harms,” when reporting on research studies. Giving time to journalists to research and understand the studies, giving them space for publication and broadcasting of the stories, and training them in understanding academic research are some of the solutions to fill the gaps, writes Gary Schwitzer, the study author.

    Advice for journalists

    We asked Bottesini, Peters, Aschwanden and Tamar Wilner, a postdoctoral fellow at the University of Texas, who was not involved in the study, to share advice for journalists who cover research studies. Wilner is conducting a study on how journalism research informs the practice of journalism. Here are their tips:

    1. Examine the study before reporting it.

    Does the study claim match the evidence? “One thing that makes me trust the paper more is if their interpretation of the findings is very calibrated to the kind of evidence that they have,” says Bottesini. In other words, if the study makes a claim in its results that’s far-fetched, the authors should present a lot of evidence to back that claim.

    Not all surprising results are newsworthy. If you come across a surprising finding from a single study, Peters advises you to step back and remember Carl Sagan’s quote: “Extraordinary claims require extraordinary evidence.”

    How transparent are the authors about their data? For instance, are the authors posting information such as their data and the computer codes they use to analyze the data on platforms such as Open Science Framework, AsPredicted, or The Dataverse Project? Some researchers ‘preregister’ their studies, which means they share how they’re planning to analyze the data before they see them. “Transparency doesn’t automatically mean that a study is trustworthy,” but it gives others the chance to double-check the findings, Bottesini says.

    Look at the study design. Is it an experimental study or an observational study? Observational studies can show correlations but not causation.

    “Observational studies can be very important for suggesting hypotheses and pointing us towards relationships and associations,” Aschwanden says.

    Experimental studies can provide stronger evidence toward a cause, but journalists must still be cautious when reporting the results, she advises. “If we end up implying causality, then once it’s published and people see it, it can really take hold,” she says.

    Know the difference between preprints and peer-reviewed, published studies. Peer-reviewed papers tend to be of higher quality than those that are not peer-reviewed. Read our tip sheet on the difference between preprints and journal articles.

    Beware of predatory journals. Predatory journals are journals that “claim to be legitimate scholarly journals, but misrepresent their publishing practices,” according to a 2020 journal article published in the journal Toxicologic Pathology, “Predatory Journals: What They Are and How to Avoid Them.”

    2. Zoom in on data.

    Read the methods section of the study. The methods section of the study usually appears after the introduction and background section. “To me, the methods section is almost the most important part of any scientific paper,” says Aschwanden. “It’s amazing to me how often you read the design and the methods section, and anyone can see that it’s a flawed design. So just giving things a gut-level check can be really important.”

    What’s the sample size? Not all good studies have large numbers of participants, but pay attention to the claims a study makes with a small sample size (the short sketch at the end of this tip illustrates why). “If you have a small sample, you calibrate your claims to the things you can tell about those people and don’t make big claims based on a little bit of evidence,” says Bottesini.

    But also remember that factors such as sample size and p-value are not “as clear cut as some journalists might assume,” says Wilner.

    How representative of a population is the study sample? “If the study has a non-representative sample of, say, undergraduate students, and they’re making claims about the general population, that’s kind of a red flag,” says Bottesini. Aschwanden points to the acronym WEIRD, which stands for “Western, Educated, Industrialized, Rich, and Democratic,” and is used to highlight a lack of diversity in a sample. Studies based on such samples may not be generalizable to the entire population, she says.

    Look at the p-value. Statistical significance is both confusing and controversial, but it’s important to consider. Read our tip sheet, “5 Things Journalists Need to Know About Statistical Significance,” to better understand it.
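
    To make the sample-size and p-value points concrete, here is a minimal, hypothetical sketch in Python (the effect size, sample sizes, and one-sample test are invented for illustration and do not come from any study discussed above):

        import math
        from scipy import stats

        d = 0.3                                # a modest, made-up effect size (Cohen's d)
        for n in (40, 200, 1000):              # made-up sample sizes
            t = d * math.sqrt(n)               # t statistic for a one-sample t-test
            p = 2 * stats.t.sf(t, df=n - 1)    # two-tailed p-value
            print(f"n = {n:>4}: t = {t:5.2f}, p = {p:.4f}")

    With n = 40, this (fixed) effect is not statistically significant (p ≈ 0.07); with n = 1000, it is overwhelmingly so: same effect, very different p-value.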

    3. Talk to scientists not involved in the study.

    If you’re not sure about the quality of a study, ask for help. “Talk to someone who is an expert in study design or statistics to make sure that [the study authors] use the appropriate statistics and that methods they use are appropriate because it’s amazing to me how often they’re not,” says Aschwanden.

    Get an opinion from an outside expert. It’s always a good idea to present the study to other researchers in the field who have no conflicts of interest and are not involved in the research you’re covering, and to get their opinion. “Don’t take scientists at their word. Look into it. Ask other scientists, preferably the ones who don’t have a conflict of interest with the research,” says Bottesini.

    4. Remember that a single study is simply one piece of a growing body of evidence.

    “I have a general rule that a single study doesn’t tell us very much; it just gives us proof of concept,” says Peters. “It gives us interesting ideas. It should be retested. We need an accumulation of evidence.”

    Aschwanden says as a practice, she tries to avoid reporting stories about individual studies, with some exceptions such as very large, randomized controlled studies that have been underway for a long time and have a large number of participants. “I don’t want to say you never want to write a single-study story, but it always needs to be placed in the context of the rest of the evidence that we have available,” she says.

    Wilner advises journalists to spend some time looking at the scope of research on the study’s specific topic and learn how it has been written about and studied up to that point.

    “We would want science journalists to be reporting balance of evidence, and not focusing unduly on the findings that are just in front of them in a most recent study,” Wilner says. “And that’s a very difficult thing to ask journalists to do because they’re being asked to make their article very newsy, so it’s a difficult balancing act, but we can try and push journalists to do more of that.”

    5. Remind readers that science is always changing.

    “Science is always two steps forward, one step back,” says Peters. Give the public a notion of uncertainty, she advises. “This is what we know today. It may change tomorrow, but this is the best science that we know of today.”

    Aschwanden echoes the sentiment. “All scientific results are provisional, and we need to keep that in mind,” she says. “It doesn’t mean that we can’t know anything, but it’s very important that we don’t overstate things.”

    Authors of a study published in PNAS in January analyzed more than 14,000 psychology papers and found that replication success rates differ widely by psychology subfields. That study also found that papers that could not be replicated received more initial press coverage than those that could. 

    The authors note that the media “plays a significant role in creating the public’s image of science and democratizing knowledge, but it is often incentivized to report on counterintuitive and eye-catching results.”

    Ideally, the news media would have a positive relationship with replication success rates in psychology, the authors of the PNAS study write. “Contrary to this ideal, however, we found a negative association between media coverage of a paper and the paper’s likelihood of replication success,” they write. “Therefore, deciding a paper’s merit based on its media coverage is unwise. It would be valuable for the media to remind the audience that new and novel scientific results are only food for thought before future replication confirms their robustness.”

    Additional reading

    Uncovering the Research Behaviors of Reporters: A Conceptual Framework for Information Literacy in Journalism
    Katherine E. Boss, et al. Journalism & Mass Communication Educator, October 2022.

    The Problem with Psychological Research in the Media
    Steven Stosny. Psychology Today, September 2022.

    Critically Evaluating Claims
    Megha Satyanarayana, The Open Notebook, January 2022.

    How Should Journalists Report a Scientific Study?
    Charles Binkley and Subramaniam Vincent. Markkula Center for Applied Ethics at Santa Clara University, September 2020.

    What Journalists Get Wrong About Social Science: Full Responses
    Brian Resnick. Vox, January 2016.

    From The Journalist’s Resource

    8 Ways Journalists Can Access Academic Research for Free

    5 Things Journalists Need to Know About Statistical Significance

    5 Common Research Designs: A Quick Primer for Journalists

    5 Tips for Using PubPeer to Investigate Scientific Research Errors and Misconduct

    Percent Change versus Percentage-Point Change: What’s the Difference? 4 Tips for Avoiding Math Errors

    What’s Standard Deviation? 4 Things Journalists Need to Know

    This article first appeared on The Journalist’s Resource and is republished here under a Creative Commons license.

  • Shrimp vs Caviar – Which is Healthier?

    Our Verdict

    When comparing shrimp to caviar, we picked the caviar.

    Why?

    Both of these seafoods share a common history (also shared with lobster, by the way) of “nutrient-dense peasant-food that got gentrified and now it’s more expensive despite being easier to source”. But, cost and social quirks aside, what are their strengths and weaknesses?

    In terms of macros, both are high in protein, but caviar is much higher in fat. You may be wondering: are those fats healthy? The answer: it’s a fairly even mix of monounsaturated (healthy), polyunsaturated (healthy), and saturated (unhealthy). The fact that caviar is generally enjoyed in very small portions is its saving grace here, but quantity for quantity, shrimp is the natural winner on macros.

    …unless we take into account the omega-3 and omega-6 balance, in which case, it’s worthy of note that caviar has more omega-3 (which most people could do with consuming more of) while shrimp has more omega-6 (which most people could do with consuming less of).

    When it comes to vitamins, caviar has more of vitamins A, B1, B2, B5, B6, B9, B12, D, K, and choline; nor are the margins small in most cases, often being multiples (or sometimes tens of multiples) higher. Shrimp, meanwhile, boasts more of only vitamin B3.

    In the category of minerals, caviar leads with more calcium, iron, magnesium, manganese, phosphorus, potassium, and selenium, while shrimp has more copper and zinc.

    All in all, while shrimp has its benefits for being lower in fat (and thus also, for those whom that may interest, lower in calories), caviar wins the day by virtue of its overwhelming nutritional density.

    Want to learn more?

    You might like to read:

    What Omega-3 Fatty Acids Really Do For Us

    Take care!

  • How to Stay Sane – by Philippa Perry

    First, what this book is not: a guide of “how to stay sane” in the popular sense of the word “sane”, meaning free from any and all serious mental illness, and especially free from psychotic delusions. Alas, this book will not help with those.

    What, then, is it? A guide of “how to stay sane” in the more casual sense of resiliently and adaptively managing stress, anxiety, and suchlike. The “light end” of mental health struggles, that nonetheless may not always feel light when dealing with them.

    The author, a psychotherapist, draws from her professional experience and training to lay out psychological tools for our use, as well as giving the reader a broader understanding of the most common ills that may ail us.

    The writing style is relaxed and personable; it’s not at all like reading a textbook.

    The psychotherapeutic style is not tied to one model, and rather hops from one to another, per what is most likely to help for a given thing. This is, in this reviewer’s opinion at least, far better than the all-too-common attempt by many writers to present their personal favorite model as the cure for all ills, instead of embracing the whole toolbox as this one does.

    Bottom line: if your mental health is anywhere between “mostly good” and “a little frayed around the edges but hanging on by at least a few threads”, then this book likely can help you gain/maintain the surer foundation you’re surely seeking.

    Click here to check out How To Stay Sane, and do just that!
