The Five Pillars Of Longevity

10almonds is reader-supported. We may, at no cost to you, receive a portion of sales if you purchase a product through a link in this article.

This is Dr. Mark Hyman. He’s a medical doctor, and he’s the board president of clinical affairs of the Institute for Functional Medicine. He’s also the founder and medical director of the UltraWellness Center!

What he’d like you to know about is what he calls the “Five Pillars of Longevity”.

Now, here at 10almonds, we often talk about certain things that science finds to be good for almost any health condition, and have made a habit of referencing what we call “The Usual Five Things™” (not really a trademark, by the way—just a figure of speech), which are:

  1. Have a good diet
  2. Get good exercise
  3. Get good sleep
  4. Reduce (or eliminate) alcohol consumption
  5. Don’t smoke

…and when we’re talking about a specific health consideration, we usually provide sources as to why each of them is particularly relevant, and pointers as to the what/how associated with them (i.e., what diet is good, how to get good sleep, etc).

Dr. Hyman’s “Five Pillars of Longevity” are based on observations from the world’s “Blue Zones”, the popular name for areas with an unusually high concentration of centenarians—Sardinia and Okinawa being famous examples, with a particular village in each being especially exemplary.

These Five Pillars of Longevity overlap with ours on three of the five points, and they are:

  1. Good nutrition
  2. Optimized workouts
  3. Reduce stress
  4. Get quality sleep
  5. Find (and live) your purpose

We won’t argue against those! But what does he have to say, for each of them?

Good nutrition

Dr. Hyman advocates for a diet he calls “pegan”, which he considers a combination of the paleo and vegan diets. Here at 10almonds, we generally advocate for the Mediterranean Diet because of the mountains of evidence for it, but his approach is similar in some ways, since it centers on a majority-plant diet, with some unprocessed meat/fish, limited dairy, and no grains.

By the science, honestly, we stand by the Mediterranean Diet (which includes whole grains), but if your body has issues of some kind with grains, for example, his approach may be a worthy consideration.

Optimized workouts

For Dr. Hyman, this means getting in three kinds of exercise regularly:

  • Aerobic/cardio, to look after your heart health
  • Resistance training (e.g. weights or bodyweight strength-training) to look after your skeletal and muscular health
  • Yoga or similar suppleness training, to look after your joint health

Can’t argue with that, and it can be all too easy to fall into the trap of thinking “I’m healthy because I do x” while forgetting y and/or z! Thus, a three-pronged approach definitely has its merits.

Reduce stress

Acute stress (say, a cold shower) can confer some health benefits, but chronic stress is ruinous to our health and ages us. So, reducing it is critical. Dr. Hyman advocates for the practice of mindfulness and meditation, as well as journaling.

Get quality sleep

Quality here, not just quantity. As well as the usual “sleep hygiene” advice, he has some more unorthodox methods, such as the use of binaural beats to increase theta-wave activity in the brain (and thus induce more restful sleep), and the practice of turning off Wi-Fi, on the grounds that Wi-Fi signals interfere with our sleep.

We were curious about these recommendations, so we checked out what the science had to say! Here’s what we found:

In short: probably not too much to worry about in either regard. On the other hand, worrying less, unlike those two things, is a well-established way to improve sleep!

(Surprised we disagreed with our featured expert on a piece of advice? Please know: you can always rely on us to stand by what the science says; we pride ourselves on being as reliable as possible!)

Find (and live!) your purpose

This one’s an ikigai thing, to borrow a word from Japanese, or finding one’s raison d’être, as we say in English using French, because English is like that. It’s about having purpose.

Dr. Hyman’s advice here is consistent with what many write on the subject, and it’d be interesting to have more science on it. Meanwhile, it definitely seems consistent with commonalities in the Blue Zone longevity hotspots, where people foster community, have a sense of belonging, know what they are doing for others and keep doing it because they want to, and try to make the world—or even just their little part of it—better for those who will follow.

Being bitter, resentful, and self-absorbed is not, it seems, a path to longevity. But a life of purpose, or even just random acts of kindness, may well be.


Recommended

  • The Physical Exercises That Build Your Brain
  • Sweet Dreams Are Made of THC (Or Are They?)
    Got questions on THC, sleep, or CBD? Dive into our latest Q&A to find evidence-based insights on improving rest and understanding cannabis impacts on health.


  • What’s the difference between ADD and ADHD?

    Around one in 20 people has attention-deficit hyperactivity disorder (ADHD). It’s one of the most common neurodevelopmental disorders in childhood and often continues into adulthood.

    ADHD is diagnosed when people experience problems with inattention and/or hyperactivity and impulsivity that negatively impacts them at school or work, in social settings and at home.

    Some people call the condition attention-deficit disorder, or ADD. So what’s the difference?

    In short, what was previously called ADD is now known as ADHD. So how did we get here?

    Let’s start with some history

    The first clinical description of children with inattention, hyperactivity and impulsivity was in 1902. British paediatrician Professor George Still presented a series of lectures about his observations of 43 children who were defiant, aggressive, undisciplined and extremely emotional or passionate.

    Since then, our understanding of the condition evolved and made its way into the Diagnostic and Statistical Manual of Mental Disorders, known as the DSM. Clinicians use the DSM to diagnose mental health and neurodevelopmental conditions.

    The first DSM, published in 1952, did not include a specific related child or adolescent category. But the second edition, published in 1968, included a section on behaviour disorders in young people. It referred to ADHD-type characteristics as “hyperkinetic reaction of childhood or adolescence”. This described the excessive, involuntary movement of children with the disorder.

    [Image: kids in the 60s playing] It took a while for ADHD-type behaviour to make it into the diagnostic manual. Elzbieta Sekowska/Shutterstock

    In the early 1980s, the third DSM added a condition it called “attention deficit disorder”, listing two types: attention deficit disorder with hyperactivity (ADDH) and attention deficit disorder as the subtype without the hyperactivity.

    However, seven years later, a revised DSM (DSM-III-R) replaced ADD (and its two sub-types) with ADHD and the three sub-types we have today:

    • predominantly inattentive
    • predominantly hyperactive-impulsive
    • combined.

    Why change ADD to ADHD?

    ADHD replaced ADD in the DSM-III-R in 1987 for a number of reasons.

    First was the controversy and debate over the presence or absence of hyperactivity: the “H” in ADHD. When ADD was initially named, little research had been done to determine the similarities and differences between the two sub-types.

    The next issue was around the term “attention-deficit” and whether these deficits were similar or different across both sub-types. Questions also arose about the extent of these differences: if these sub-types were so different, were they actually different conditions?

    Meanwhile, a new focus on inattention (an “attention deficit”) recognised that children with inattentive behaviours may not necessarily be disruptive and challenging but are more likely to be forgetful and daydreamers.

    [Image: woman daydreaming] People with inattentive behaviours may be more forgetful or daydreamers. fizkes/Shutterstock

    Why do some people use the term ADD?

    There was a surge of diagnoses in the 1980s. So it’s understandable that some people still hold onto the term ADD.

    Some may identify as having ADD out of habit, because this is what they were originally diagnosed with, or because they don’t have hyperactivity/impulsivity traits.

    Others who don’t have ADHD may use the term they came across in the 80s or 90s, not knowing the terminology has changed.

    How is ADHD currently diagnosed?

    The three sub-types of ADHD, outlined in the DSM-5, are:

    • predominantly inattentive. People with the inattentive sub-type have difficulty sustaining concentration, are easily distracted and forgetful, lose things frequently, and are unable to follow detailed instructions
    • predominantly hyperactive-impulsive. Those with this sub-type find it hard to be still, need to move constantly in structured situations, frequently interrupt others, talk non-stop and struggle with self control
    • combined. Those with the combined sub-type experience the characteristics of those who are inattentive and hyperactive-impulsive.

    ADHD diagnoses continue to rise among children and adults. And while ADHD was commonly diagnosed in boys, more recently we have seen growing numbers of girls and women seeking diagnoses.

    However, some international experts contest the expanded definition of ADHD, driven by clinical practice in the United States. They argue the challenges of unwanted behaviours and educational outcomes for young people with the condition are uniquely shaped by each country’s cultural, political and local factors.

    Regardless of the name change to reflect what we know about the condition, ADHD continues to impact educational, social and life situations of many children, adolescents and adults.

    Kathy Gibbs, Program Director for the Bachelor of Education, Griffith University

    This article is republished from The Conversation under a Creative Commons license. Read the original article.

  • Survival of the Prettiest – by Dr. Nancy Etcoff

    Beauty is in the eye of the beholder, right? And what does it matter, in this modern world, especially if we are already in a happy stable partnership?

    The science of it, as it turns out, is less poetic. Not only is evolutionary psychology still the foundation of our perception of human beauty (yes, even if we have zero possibility of further procreation personally), but also, its effects are far, far wider than partner selection.

    From how nice people are to you, to how much they trust you, to how easily they will forgive a (real or perceived) misdeed, to what kind of medical care you get (or don’t), your looks shape your experiences.

    In this very easy-reading work that nevertheless contains very many references, Dr. Etcoff explores the science of beauty. Not just what traits are attractive and why, but also, what they will do for (or against) us—in concrete terms, with numbers.

    Bottom line: if you’d like to better understand the subconscious biases held by yourself and others, this book is a top-tier primer.

    Click here to check out Survival of the Prettiest, and learn more about how this blessing/curse affects you and those around you!

  • How do science journalists decide whether a psychology study is worth covering?

    Complex research papers and data flood academic journals daily, and science journalists play a pivotal role in disseminating that information to the public. This can be a daunting task, requiring a keen understanding of the subject matter and the ability to translate dense academic language into narratives that resonate with the general public.

    Several resources and tip sheets, including the Know Your Research section here at The Journalist’s Resource, aim to help journalists hone their skills in reporting on academic research.

    But what factors do science journalists look for to decide whether a social science research study is trustworthy and newsworthy? That’s the question researchers at the University of California, Davis, and the University of Melbourne in Australia examine in a recent study, “How Do Science Journalists Evaluate Psychology Research?” published in September in Advances in Methods and Practices in Psychological Science.

    Their online survey of 181 mostly U.S.-based science journalists looked at how and whether they were influenced by four factors in fictitious research summaries: the sample size (number of participants in the study), sample representativeness (whether the participants in the study were from a convenience sample or a more representative sample), the statistical significance level of the result (just barely statistically significant or well below the significance threshold), and the prestige of a researcher’s university.

    The researchers found that sample size was the only factor that had a robust influence on journalists’ ratings of how trustworthy and newsworthy a study finding was.

    University prestige had no effect, while the effects of sample representativeness and statistical significance were inconclusive.

    But there’s nuance to the findings, the authors note.

    “I don’t want people to think that science journalists aren’t paying attention to other things, and are only paying attention to sample size,” says Julia Bottesini, an independent researcher, a recent Ph.D. graduate from the Psychology Department at UC Davis, and the first author of the study.

    Overall, the results show that “these journalists are doing a very decent job” vetting research findings, Bottesini says.

    Also, the findings from the study are not generalizable to all science journalists or other fields of research, the authors note.

    “Instead, our conclusions should be circumscribed to U.S.-based science journalists who are at least somewhat familiar with the statistical and replication challenges facing science,” they write. (Over the past decade a series of projects have found that the results of many studies in psychology and other fields can’t be reproduced, leading to what has been called a ‘replication crisis.’)

    “This [study] is just one tiny brick in the wall and I hope other people get excited about this topic and do more research on it,” Bottesini says.

    More on the study’s findings

    The study’s findings can be useful for researchers who want to better understand how science journalists read their research and what kind of intervention — such as teaching journalists about statistics — can help journalists better understand research papers.

    “As an academic, I take away the idea that journalists are a great population to try to study because they’re doing something really important and it’s important to know more about what they’re doing,” says Ellen Peters, director of the Center for Science Communication Research at the School of Journalism and Communication at the University of Oregon. Peters, who was not involved in the study, is also a psychologist who studies human judgment and decision-making.

    Peters says the study was “overall terrific.” She adds that understanding how journalists do their work “is an incredibly important thing to do because journalists are who reach the majority of the U.S. with science news, so understanding how they’re reading some of our scientific studies and then choosing whether to write about them or not is important.”

    The study, conducted between December 2020 and March 2021, is based on an online survey of journalists who said they at least sometimes covered science or other topics related to health, medicine, psychology, social sciences, or well-being. They were offered a $25 Amazon gift card as compensation.

    Among the participants, 77% were women, 19% were men, 3% were nonbinary and 1% preferred not to say. About 62% said they had studied physical or natural sciences at the undergraduate level, and 24% at the graduate level. Also, 48% reported having a journalism degree. The study did not include the journalists’ news reporting experience level.

    Participants were recruited through the professional network of Christie Aschwanden, an independent journalist and consultant on the study, which could be a source of bias, the authors note.

    “Although the size of the sample we obtained (N = 181) suggests we were able to collect a range of perspectives, we suspect this sample is biased by an ‘Aschwanden effect’: that science journalists in the same professional network as C. Aschwanden will be more familiar with issues related to the replication crisis in psychology and subsequent methodological reform, a topic C. Aschwanden has covered extensively in her work,” they write.

    Participants were randomly presented with eight of 22 one-paragraph fictitious social and personality psychology research summaries with fictitious authors. The summaries are posted on Open Science Framework, a free and open-source project management tool for researchers by the Center for Open Science, with a mission to increase openness, integrity and reproducibility of research.

    For instance, one of the vignettes reads:

    “Scientists at Harvard University announced today the results of a study exploring whether introspection can improve cooperation. 550 undergraduates at the university were randomly assigned to either do a breathing exercise or reflect on a series of questions designed to promote introspective thoughts for 5 minutes. Participants then engaged in a cooperative decision-making game, where cooperation resulted in better outcomes. People who spent time on introspection performed significantly better at these cooperative games (t (548) = 3.21, p = 0.001). ‘Introspection seems to promote better cooperation between people,’ says Dr. Quinn, the lead author on the paper.”

    In addition to answering multiple-choice survey questions, participants were given the opportunity to answer open-ended questions, such as “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?”

    Bottesini says those responses illuminated how science journalists analyze a research study. Participants often mentioned the prestige of the journal in which it was published or whether the study had been peer-reviewed. Many also seemed to value experimental research designs over observational studies.

    Considering statistical significance

    When it came to considering p-values, “some answers suggested that journalists do take statistical significance into account, but only very few included explanations that suggested they made any distinction between higher or lower p values; instead, most mentions of p values suggest journalists focused on whether the key result was statistically significant,” the authors write.

    Also, many participants mentioned that it was very important to talk to outside experts or researchers in the same field to get a better understanding of the finding and whether it could be trusted, the authors write.

    “Journalists also expressed that it was important to understand who funded the study and whether the researchers or funders had any conflicts of interest,” they write.

    Participants also “indicated that making claims that were calibrated to the evidence was also important and expressed misgivings about studies for which the conclusions do not follow from the evidence,” the authors write.

    In response to the open-ended question, “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?” some journalists wrote they checked whether the study was overstating conclusions or claims. Below are some of their written responses:

    • “Is the researcher adamant that this study of 40 college kids is representative? If so, that’s a red flag.”
    • “Whether authors make sweeping generalizations based on the study or take a more measured approach to sharing and promoting it.”
    • “Another major point for me is how ‘certain’ the scientists appear to be when commenting on their findings. If a researcher makes claims which I consider to be over-the-top about the validity or impact of their findings, I often won’t cover.”
    • “I also look at the difference between what an experiment actually shows versus the conclusion researchers draw from it — if there’s a big gap, that’s a huge red flag.”

    Peters says the study’s findings show that “not only are journalists smart, but they have also gone out of their way to get educated about things that should matter.”

    What other research shows about science journalists

    A 2023 study, published in the International Journal of Communication, based on an online survey of 82 U.S. science journalists, aims to understand what they know and think about open-access research, including peer-reviewed journals and articles that don’t have a paywall, and preprints. Data was collected between October 2021 and February 2022. Preprints are scientific studies that have yet to be peer-reviewed and are shared on open repositories such as medRxiv and bioRxiv. The study finds that its respondents “are aware of OA and related issues and make conscious decisions around which OA scholarly articles they use as sources.”

    A 2021 study, published in the Journal of Science Communication, looks at the impact of the COVID-19 pandemic on the work of science journalists. Based on an online survey of 633 science journalists from 77 countries, it finds that the pandemic somewhat brought scientists and science journalists closer together. “For most respondents, scientists were more available and more talkative,” the authors write. The pandemic has also provided an opportunity to explain the scientific process to the public, and remind them that “science is not a finished enterprise,” the authors write.

    More than a decade ago, a 2008 study, published in PLOS Medicine, and based on an analysis of 500 health news stories, found that “journalists usually fail to discuss costs, the quality of the evidence, the existence of alternative options, and the absolute magnitude of potential benefits and harms,” when reporting on research studies. Giving time to journalists to research and understand the studies, giving them space for publication and broadcasting of the stories, and training them in understanding academic research are some of the solutions to fill the gaps, writes Gary Schwitzer, the study author.

    Advice for journalists

    We asked Bottesini, Peters, Aschwanden and Tamar Wilner, a postdoctoral fellow at the University of Texas, who was not involved in the study, to share advice for journalists who cover research studies. Wilner is conducting a study on how journalism research informs the practice of journalism. Here are their tips:

    1. Examine the study before reporting it.

    Does the study claim match the evidence? “One thing that makes me trust the paper more is if their interpretation of the findings is very calibrated to the kind of evidence that they have,” says Bottesini. In other words, if the study makes a claim in its results that’s far-fetched, the authors should present a lot of evidence to back that claim.

    Not all surprising results are newsworthy. If you come across a surprising finding from a single study, Peters advises you to step back and remember Carl Sagan’s quote: “Extraordinary claims require extraordinary evidence.”

    How transparent are the authors about their data? For instance, are the authors posting information such as their data and the computer codes they use to analyze the data on platforms such as Open Science Framework, AsPredicted, or The Dataverse Project? Some researchers ‘preregister’ their studies, which means they share how they’re planning to analyze the data before they see them. “Transparency doesn’t automatically mean that a study is trustworthy,” but it gives others the chance to double-check the findings, Bottesini says.

    Look at the study design. Is it an experimental study or an observational study? Observational studies can show correlations but not causation.

    “Observational studies can be very important for suggesting hypotheses and pointing us towards relationships and associations,” Aschwanden says.

    Experimental studies can provide stronger evidence toward a cause, but journalists must still be cautious when reporting the results, she advises. “If we end up implying causality, then once it’s published and people see it, it can really take hold,” she says.

    Know the difference between preprints and peer-reviewed, published studies. Peer-reviewed papers tend to be of higher quality than those that are not peer-reviewed. Read our tip sheet on the difference between preprints and journal articles.

    Beware of predatory journals. Predatory journals are journals that “claim to be legitimate scholarly journals, but misrepresent their publishing practices,” according to a 2020 journal article published in the journal Toxicologic Pathology, “Predatory Journals: What They Are and How to Avoid Them.”

    2. Zoom in on data.

    Read the methods section of the study. The methods section of the study usually appears after the introduction and background section. “To me, the methods section is almost the most important part of any scientific paper,” says Aschwanden. “It’s amazing to me how often you read the design and the methods section, and anyone can see that it’s a flawed design. So just giving things a gut-level check can be really important.”

    What’s the sample size? Not all good studies have large numbers of participants but pay attention to the claims a study makes with a small sample size. “If you have a small sample, you calibrate your claims to the things you can tell about those people and don’t make big claims based on a little bit of evidence,” says Bottesini.

    But also remember that factors such as sample size and p-value are not “as clear cut as some journalists might assume,” says Wilner.

    How representative of a population is the study sample? “If the study has a non-representative sample of, say, undergraduate students, and they’re making claims about the general population, that’s kind of a red flag,” says Bottesini. Aschwanden points to the acronym WEIRD, which stands for “Western, Educated, Industrialized, Rich, and Democratic,” and is used to highlight a lack of diversity in a sample. Studies based on such samples may not be generalizable to the entire population, she says.

    Look at the p-value. Statistical significance is both confusing and controversial, but it’s important to consider. Read our tip sheet, “5 Things Journalists Need to Know About Statistical Significance,” to better understand it.

    3. Talk to scientists not involved in the study.

    If you’re not sure about the quality of a study, ask for help. “Talk to someone who is an expert in study design or statistics to make sure that [the study authors] use the appropriate statistics and that methods they use are appropriate because it’s amazing to me how often they’re not,” says Aschwanden.

    Get an opinion from an outside expert. It’s always a good idea to present the study to other researchers in the field, who have no conflicts of interest and are not involved in the research you’re covering and get their opinion. “Don’t take scientists at their word. Look into it. Ask other scientists, preferably the ones who don’t have a conflict of interest with the research,” says Bottesini.

    4. Remember that a single study is simply one piece of a growing body of evidence.

    “I have a general rule that a single study doesn’t tell us very much; it just gives us proof of concept,” says Peters. “It gives us interesting ideas. It should be retested. We need an accumulation of evidence.”

    Aschwanden says as a practice, she tries to avoid reporting stories about individual studies, with some exceptions such as very large, randomized controlled studies that have been underway for a long time and have a large number of participants. “I don’t want to say you never want to write a single-study story, but it always needs to be placed in the context of the rest of the evidence that we have available,” she says.

    Wilner advises journalists to spend some time looking at the scope of research on the study’s specific topic and learn how it has been written about and studied up to that point.

    “We would want science journalists to be reporting balance of evidence, and not focusing unduly on the findings that are just in front of them in a most recent study,” Wilner says. “And that’s a very difficult thing to ask journalists to do because they’re being asked to make their article very newsy, so it’s a difficult balancing act, but we can try and push journalists to do more of that.”

    5. Remind readers that science is always changing.

    “Science is always two steps forward, one step back,” says Peters. Give the public a notion of uncertainty, she advises. “This is what we know today. It may change tomorrow, but this is the best science that we know of today.”

    Aschwanden echoes the sentiment. “All scientific results are provisional, and we need to keep that in mind,” she says. “It doesn’t mean that we can’t know anything, but it’s very important that we don’t overstate things.”

    Authors of a study published in PNAS in January analyzed more than 14,000 psychology papers and found that replication success rates differ widely by psychology subfields. That study also found that papers that could not be replicated received more initial press coverage than those that could. 

    The authors note that the media “plays a significant role in creating the public’s image of science and democratizing knowledge, but it is often incentivized to report on counterintuitive and eye-catching results.”

    Ideally, the news media would have a positive relationship with replication success rates in psychology, the authors of the PNAS study write. “Contrary to this ideal, however, we found a negative association between media coverage of a paper and the paper’s likelihood of replication success,” they write. “Therefore, deciding a paper’s merit based on its media coverage is unwise. It would be valuable for the media to remind the audience that new and novel scientific results are only food for thought before future replication confirms their robustness.”

    Additional reading

    Uncovering the Research Behaviors of Reporters: A Conceptual Framework for Information Literacy in Journalism
    Katerine E. Boss, et al. Journalism & Mass Communication Educator, October 2022.

    The Problem with Psychological Research in the Media
    Steven Stosny. Psychology Today, September 2022.

    Critically Evaluating Claims
    Megha Satyanarayana, The Open Notebook, January 2022.

    How Should Journalists Report a Scientific Study?
    Charles Binkley and Subramaniam Vincent. Markkula Center for Applied Ethics at Santa Clara University, September 2020.

    What Journalists Get Wrong About Social Science: Full Responses
    Brian Resnick. Vox, January 2016.

    From The Journalist’s Resource

    8 Ways Journalists Can Access Academic Research for Free

    5 Things Journalists Need to Know About Statistical Significance

    5 Common Research Designs: A Quick Primer for Journalists

    5 Tips for Using PubPeer to Investigate Scientific Research Errors and Misconduct

    Percent Change versus Percentage-Point Change: What’s the Difference? 4 Tips for Avoiding Math Errors

    What’s Standard Deviation? 4 Things Journalists Need to Know

    This article first appeared on The Journalist’s Resource and is republished here under a Creative Commons license.


Related Posts

  • The Physical Exercises That Build Your Brain
  • The Two-Second Advantage – by Vivek Ranadivé and Kevin Maney

    10almonds is reader-supported. We may, at no cost to you, receive a portion of sales if you purchase a product through a link in this article.

    The titular “two-second advantage” can in some cases be literal (imagine you got a two-second head-start in a boxing match!), in other cases can refer to being just a little ahead of things in a way that can confer a great advantage, often cumulatively—as anyone who’s played Monopoly can certainly attest.

    Vivek Ranadivé and Kevin Maney give us lots of examples from business, sports, politics, economics, and more, seeking to cultivate a habit of asking the right questions in order to anticipate the future: not merely to stay ahead of the competition (some areas of life, such as health, don’t involve competition for most people), but to generally have things “in hand”.

    When it comes to personal finances, health, personal projects, and the like, those tiny initial advantages that lead to incremental further improvements can be the difference between continually (and frantically) playing catch-up, and making the jump past breaking even to going from strength to strength.

    Check out today’s book on Amazon!

    Don’t Forget…

    Did you arrive here from our newsletter? Don’t forget to return to the email to continue learning!

    Learn to Age Gracefully

    Join the 98k+ American women taking control of their health & aging with our 100% free (and fun!) daily emails:

  • Science of Pilates – by Tracy Ward

    We’ve reviewed other books in this series, “Science of Yoga” and “Science of HIIT” (they’re great too; check them out!). What does this one add to the mix?

    Pilates is a top-tier “combination exercise” insofar as it checks a lot of boxes, e.g:

    • Strength—especially core strength, but also limbs
    • Mobility—range of motion and resultant reduction in injury risk
    • Stability—impossible without the above two things, but Pilates trains this too
    • Fitness—many dynamic Pilates exercises can be performed as cardio and/or HIIT.

    The author, a physiotherapist, explains (as the title promises!) the science of Pilates, with:

    • the beautifully clear diagrams we’ve come to expect of this series,
    • equally clear explanations, with a great balance of simplicity of terms and depth where necessary, and
    • plenty of citations for the claims made, linking to lots of the best up-to-date science.

    Bottom line: if you can make a little time for Pilates (or already do), there is nobody who would not benefit from reading this book.

    Click here to check out Science of Pilates, and keep your body well!


  • Quinoa vs Couscous – Which is Healthier?

    Our Verdict

    When comparing quinoa to couscous, we picked the quinoa.

    Why?

    Firstly, quinoa is by far the less processed of the two. Couscous, even if wholewheat, has by necessity been processed into what is more or less the same general “stuff” as pasta. Now, the degree to which something has been processed is a common indicator of healthiness, but it isn’t decisive: some processed foods are healthy (e.g. many fermented products), and some unprocessed plant or animal products can kill you (e.g. red meat, with its associated health risks, or the wrong mushrooms). In the case of quinoa vs couscous, though, it all bears out pretty much as expected.

    For the purposes of the following comparisons, we’ll be looking at uncooked/dry weights.

    In terms of macros, quinoa has a little more protein, slightly lower carbs, and several times the fiber. The amino acids making up quinoa’s protein are also much more varied.

    In the category of vitamins, quinoa has more of vitamins A, B1, B2, B6, and B9, while couscous boasts a little more of vitamins B3 and B5. Given the respective margins of difference, as well as the total vitamins contained, this category is an easy win for quinoa.

    When it comes to minerals, this one’s even clearer. Quinoa has a lot more calcium, copper, iron, magnesium, manganese, phosphorus, potassium, selenium, and zinc. Couscous, meanwhile, has more of just one mineral: sodium. So, hardly one you want more of.

    All in all, today’s is an easy pick: quinoa!


    Take care!
