Is chocolate milk a good recovery drink after a workout? A dietitian reviews the evidence

10almonds is reader-supported. We may, at no cost to you, receive a portion of sales if you purchase a product through a link in this article.

Whether you enjoy chocolate milk regularly, as a weekend treat, or as an occasional dose of childhood nostalgia, it probably wouldn’t be the first option you think of for post-workout recovery.

Unless you’re on TikTok, perhaps. According to many people on the social media platform, chocolate milk is not only delicious, but it offers benefits comparable to sports drinks after a workout.

So is there any evidence to support this? Let’s take a look.


Rehydrating after a workout is important

Water accounts for somewhere between 50% and 60% of our body weight. Water has many important functions in the body, including helping to keep our body at the right temperature through sweating.

We lose water naturally from our bodies when we sweat, as well as through our breathing and when we go to the toilet. So it’s important to stay hydrated to replenish the water we lose.

When we don’t, we become dehydrated, which can put a strain on our bodies. Signs and symptoms of dehydration can range from thirst and dizziness to low blood pressure and confusion.

Athletes, because of their higher levels of exertion, lose more water through sweating and from respiration (when their breathing rate gets faster). If they’re training or competing in hot or humid environments they will sweat even more.

Dehydration impacts athletes’ performance and, as for all of us, can affect their health.

So finding ways to ensure athletes rehydrate quickly during and after they train or compete is important. Fortunately, sports scientists and dietitians have done research looking at the composition of different fluids to understand which ones rehydrate athletes most effectively.

The beverage hydration index

The best hydrating drinks are those the body retains the most of once they’ve been consumed. By doing studies where they give people different drinks in standardised conditions, scientists have been able to determine how various options stack up.

To this end, they’ve developed something called the beverage hydration index, which measures to what degree different fluids hydrate a person compared to still water.
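
As a rough guide to how the index works (a detail the article doesn’t spell out, but reflecting how the index is defined in the research literature): each drink is scored by comparing urine output in the hours after drinking it against urine output after the same volume of still water, so a score above 1 means the body retained more of the test drink than of plain water:

\[
\text{BHI} = \frac{\text{urine produced after 1 L of still water}}{\text{urine produced after 1 L of the test drink}}
\]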

According to this index, beverages with similar fluid retention to still water include sparkling water, sports drinks, cola, diet cola, tea, coffee, and beer below 4% alcohol. That said, alcohol is probably best avoided when recovering from exercise.

Beverages with superior fluid retention to still water include milk (both full-fat and skim), soy milk, orange juice and oral rehydration solutions.

This body of research indicates that when it comes to rehydration after exercise, unflavoured milk (full fat, skim or soy) is better than sports drinks.

But what about chocolate milk?

A small study looked at the effects of chocolate milk compared to plain milk on rehydration and exercise performance in futsal players (futsal is similar to soccer but played on a court indoors). The researchers found no difference in rehydration between the two. There’s no other published research to my knowledge looking at how chocolate milk compares to regular milk for rehydration during or after exercise.

But rehydration isn’t the only thing athletes look for in sports drinks. In the same study, drinking chocolate milk after play (referred to as the recovery period) increased the time it took for the futsal players to become exhausted in further exercise (a shuttle run test) four hours later.

This was also shown in a review of several clinical trials. The analysis found that, compared to different placebos (such as water) or other drinks containing fat, protein and carbohydrates, chocolate milk lengthened the time to exhaustion during exercise.

What’s in chocolate milk?

Milk contains protein, carbohydrates and electrolytes, each of which can affect hydration, performance, or both.

Protein is important for building muscle, which is beneficial for performance. The electrolytes in milk (including sodium and potassium) help to replace electrolytes lost through sweating, so can also be good for performance, and aid hydration.

Compared to regular milk, chocolate milk contains added sugar. This provides extra carbohydrates, which are likewise beneficial for performance. Carbohydrates provide an immediate source of energy for athletes’ working muscles, where they’re stored as glycogen. This might contribute to the edge chocolate milk appears to have over plain milk in terms of athletic endurance.

The added sugar in chocolate milk provides extra carbohydrates.

Coffee-flavoured milk has an additional advantage. It contains caffeine, which can improve athletic performance by reducing the perceived effort that goes into exercise.

One study showed that a frappe-type drink prepared with filtered coffee, skim milk and sugar led to better muscle glycogen levels after exercise compared to plain milk with an equivalent amount of sugar added.

So what’s the verdict?

Evidence shows chocolate milk can rehydrate better than water or sports drinks after exercise. But there isn’t evidence to suggest it can rehydrate better than plain milk. Chocolate milk does appear to improve athletic endurance compared to plain milk though.

Ultimately, the best drink for athletes to consume to rehydrate is the one they’re most likely to drink.

While many TikTok trends are not based on evidence, it seems chocolate milk could actually be a good option for recovery from exercise. And it will be cheaper than specialised sports nutrition products. You can buy different brands from the supermarket or make your own at home with a drinking chocolate powder.

This doesn’t mean everyone should look to chocolate milk when they’re feeling thirsty. Chocolate milk does have more calories than plain milk and many other drinks because of the added sugar. For most of us, chocolate milk may be best enjoyed as an occasional treat.

Evangeline Mantzioris, Program Director of Nutrition and Food Sciences, Accredited Practising Dietitian, University of South Australia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Don’t Forget…

Did you arrive here from our newsletter? Don’t forget to return to the email to continue learning!

Recommended

  • Forget Ringing the Button for the Nurse. Patients Now Stay Connected by Wearing One.
  • Creamy Fortifying Cauliflower Soup
    Whip up a health-packed vegan feast! This creamy soup combines protein-rich ingredients with fiber, healthy fats, and potent spices for a hearty, nourishing meal.

Learn to Age Gracefully

Join the 98k+ American women taking control of their health & aging with our 100% free (and fun!) daily emails:

  • Aging with Grace – by Dr. David Snowdon


    First, what this book is not: a book about Christianity. Don’t worry, we didn’t suddenly change the theme of 10almonds.

    Rather, what this book is: a book about a famous, large (n=678) study into the biology of aging, which took a population sample of women with many factors already controlled for (they ate the same food, had the same schedule, did the same activities, and so on) over many years on end. In other words: a convent of nuns.

    This allowed for a lot more to be learned about other factors that influence aging, such as:

    • Heredity / genetics in general
    • Speaking more than one language
    • Supplementing with vitamins or not
    • Key adverse events (e.g. stroke)
    • Key chronic conditions (e.g. depression)

    The book does also cover (as one might expect) the role that community and faith can play in healthy longevity, but since the subjects were 678 communally-dwelling people of faith (thus: no control group of faithless loners), this aspect is discussed only in anecdote, or in reference to other studies.

    The author of this book, by the way, was the lead researcher of the study, and he is a well-recognised expert in the field of Alzheimer’s in particular (and Alzheimer’s does feature quite a bit throughout).

    The writing style is largely narrative, and/but with a lot of clinical detail and specific data; this is by no means a wishy-washy book.

    Bottom line: if you’d like to know what nuns were doing in the 1980s to disproportionately live into three-figure ages, then this book will answer those questions.

    Click here to check out Aging with Grace, and indeed age with grace!


  • How do science journalists decide whether a psychology study is worth covering?


    Complex research papers and data flood academic journals daily, and science journalists play a pivotal role in disseminating that information to the public. This can be a daunting task, requiring a keen understanding of the subject matter and the ability to translate dense academic language into narratives that resonate with the general public.

    Several resources and tip sheets, including the Know Your Research section here at The Journalist’s Resource, aim to help journalists hone their skills in reporting on academic research.

    But what factors do science journalists look for to decide whether a social science research study is trustworthy and newsworthy? That’s the question researchers at the University of California, Davis, and the University of Melbourne in Australia examine in a recent study, “How Do Science Journalists Evaluate Psychology Research?” published in September in Advances in Methods and Practices in Psychological Science.

    Their online survey of 181 mostly U.S.-based science journalists looked at how and whether they were influenced by four factors in fictitious research summaries: the sample size (number of participants in the study), sample representativeness (whether the participants in the study were from a convenience sample or a more representative sample), the statistical significance level of the result (just barely statistically significant or well below the significance threshold), and the prestige of a researcher’s university.

    The researchers found that sample size was the only factor that had a robust influence on journalists’ ratings of how trustworthy and newsworthy a study finding was.

    University prestige had no effect, while the effects of sample representativeness and statistical significance were inconclusive.

    But there’s nuance to the findings, the authors note.

    “I don’t want people to think that science journalists aren’t paying attention to other things, and are only paying attention to sample size,” says Julia Bottesini, an independent researcher, a recent Ph.D. graduate from the Psychology Department at UC Davis, and the first author of the study.

    Overall, the results show that “these journalists are doing a very decent job” vetting research findings, Bottesini says.

    Also, the findings from the study are not generalizable to all science journalists or other fields of research, the authors note.

    “Instead, our conclusions should be circumscribed to U.S.-based science journalists who are at least somewhat familiar with the statistical and replication challenges facing science,” they write. (Over the past decade a series of projects have found that the results of many studies in psychology and other fields can’t be reproduced, leading to what has been called a ‘replication crisis.’)

    “This [study] is just one tiny brick in the wall and I hope other people get excited about this topic and do more research on it,” Bottesini says.

    More on the study’s findings

    The study’s findings can be useful for researchers who want to better understand how science journalists read their research and what kind of intervention — such as teaching journalists about statistics — can help journalists better understand research papers.

    “As an academic, I take away the idea that journalists are a great population to try to study because they’re doing something really important and it’s important to know more about what they’re doing,” says Ellen Peters, director of the Center for Science Communication Research at the School of Journalism and Communication at the University of Oregon. Peters, who was not involved in the study, is also a psychologist who studies human judgment and decision-making.

    Peters says the study was “overall terrific.” She adds that understanding how journalists do their work “is an incredibly important thing to do because journalists are who reach the majority of the U.S. with science news, so understanding how they’re reading some of our scientific studies and then choosing whether to write about them or not is important.”

    The study, conducted between December 2020 and March 2021, is based on an online survey of journalists who said they at least sometimes covered science or other topics related to health, medicine, psychology, social sciences, or well-being. They were offered a $25 Amazon gift card as compensation.

    Among the participants, 77% were women, 19% were men, 3% were nonbinary and 1% preferred not to say. About 62% said they had studied physical or natural sciences at the undergraduate level, and 24% at the graduate level. Also, 48% reported having a journalism degree. The study did not include the journalists’ news reporting experience level.

    Participants were recruited through the professional network of Christie Aschwanden, an independent journalist and consultant on the study, which could be a source of bias, the authors note.

    “Although the size of the sample we obtained (N = 181) suggests we were able to collect a range of perspectives, we suspect this sample is biased by an ‘Aschwanden effect’: that science journalists in the same professional network as C. Aschwanden will be more familiar with issues related to the replication crisis in psychology and subsequent methodological reform, a topic C. Aschwanden has covered extensively in her work,” they write.

    Participants were randomly presented with eight of 22 one-paragraph fictitious social and personality psychology research summaries with fictitious authors. The summaries are posted on Open Science Framework, a free and open-source project management tool for researchers by the Center for Open Science, with a mission to increase openness, integrity and reproducibility of research.

    For instance, one of the vignettes reads:

    “Scientists at Harvard University announced today the results of a study exploring whether introspection can improve cooperation. 550 undergraduates at the university were randomly assigned to either do a breathing exercise or reflect on a series of questions designed to promote introspective thoughts for 5 minutes. Participants then engaged in a cooperative decision-making game, where cooperation resulted in better outcomes. People who spent time on introspection performed significantly better at these cooperative games (t (548) = 3.21, p = 0.001). ‘Introspection seems to promote better cooperation between people,’ says Dr. Quinn, the lead author on the paper.”

    In addition to answering multiple-choice survey questions, participants were given the opportunity to answer open-ended questions, such as “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?”

    Bottesini says those responses illuminated how science journalists analyze a research study. Participants often mentioned the prestige of the journal in which it was published or whether the study had been peer-reviewed. Many also seemed to value experimental research designs over observational studies.

    Considering statistical significance

    When it came to considering p-values, “some answers suggested that journalists do take statistical significance into account, but only very few included explanations that suggested they made any distinction between higher or lower p values; instead, most mentions of p values suggest journalists focused on whether the key result was statistically significant,” the authors write.

    Also, many participants mentioned that it was very important to talk to outside experts or researchers in the same field to get a better understanding of the finding and whether it could be trusted, the authors write.

    “Journalists also expressed that it was important to understand who funded the study and whether the researchers or funders had any conflicts of interest,” they write.

    Participants also “indicated that making claims that were calibrated to the evidence was also important and expressed misgivings about studies for which the conclusions do not follow from the evidence,” the authors write.

    In response to the open-ended question, “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?” some journalists wrote they checked whether the study was overstating conclusions or claims. Below are some of their written responses:

    • “Is the researcher adamant that this study of 40 college kids is representative? If so, that’s a red flag.”
    • “Whether authors make sweeping generalizations based on the study or take a more measured approach to sharing and promoting it.”
    • “Another major point for me is how ‘certain’ the scientists appear to be when commenting on their findings. If a researcher makes claims which I consider to be over-the-top about the validity or impact of their findings, I often won’t cover.”
    • “I also look at the difference between what an experiment actually shows versus the conclusion researchers draw from it — if there’s a big gap, that’s a huge red flag.”

    Peters says the study’s findings show that “not only are journalists smart, but they have also gone out of their way to get educated about things that should matter.”

    What other research shows about science journalists

    A 2023 study, published in the International Journal of Communication, based on an online survey of 82 U.S. science journalists, aims to understand what they know and think about open-access research, including peer-reviewed journals and articles that don’t have a paywall, and preprints. Data was collected between October 2021 and February 2022. Preprints are scientific studies that have yet to be peer-reviewed and are shared on open repositories such as medRxiv and bioRxiv. The study finds that its respondents “are aware of OA and related issues and make conscious decisions around which OA scholarly articles they use as sources.”

    A 2021 study, published in the Journal of Science Communication, looks at the impact of the COVID-19 pandemic on the work of science journalists. Based on an online survey of 633 science journalists from 77 countries, it finds that the pandemic somewhat brought scientists and science journalists closer together. “For most respondents, scientists were more available and more talkative,” the authors write. The pandemic has also provided an opportunity to explain the scientific process to the public, and remind them that “science is not a finished enterprise,” the authors write.

    More than a decade ago, a 2008 study, published in PLOS Medicine, and based on an analysis of 500 health news stories, found that “journalists usually fail to discuss costs, the quality of the evidence, the existence of alternative options, and the absolute magnitude of potential benefits and harms,” when reporting on research studies. Giving time to journalists to research and understand the studies, giving them space for publication and broadcasting of the stories, and training them in understanding academic research are some of the solutions to fill the gaps, writes Gary Schwitzer, the study author.

    Advice for journalists

    We asked Bottesini, Peters, Aschwanden and Tamar Wilner, a postdoctoral fellow at the University of Texas, who was not involved in the study, to share advice for journalists who cover research studies. Wilner is conducting a study on how journalism research informs the practice of journalism. Here are their tips:

    1. Examine the study before reporting it.

    Does the study claim match the evidence? “One thing that makes me trust the paper more is if their interpretation of the findings is very calibrated to the kind of evidence that they have,” says Bottesini. In other words, if the study makes a claim in its results that’s far-fetched, the authors should present a lot of evidence to back that claim.

    Not all surprising results are newsworthy. If you come across a surprising finding from a single study, Peters advises you to step back and remember Carl Sagan’s quote: “Extraordinary claims require extraordinary evidence.”

    How transparent are the authors about their data? For instance, are the authors posting information such as their data and the computer codes they use to analyze the data on platforms such as Open Science Framework, AsPredicted, or The Dataverse Project? Some researchers ‘preregister’ their studies, which means they share how they’re planning to analyze the data before they see them. “Transparency doesn’t automatically mean that a study is trustworthy,” but it gives others the chance to double-check the findings, Bottesini says.

    Look at the study design. Is it an experimental study or an observational study? Observational studies can show correlations but not causation.

    “Observational studies can be very important for suggesting hypotheses and pointing us towards relationships and associations,” Aschwanden says.

    Experimental studies can provide stronger evidence toward a cause, but journalists must still be cautious when reporting the results, she advises. “If we end up implying causality, then once it’s published and people see it, it can really take hold,” she says.

    Know the difference between preprints and peer-reviewed, published studies. Peer-reviewed papers tend to be of higher quality than those that are not peer-reviewed. Read our tip sheet on the difference between preprints and journal articles.

    Beware of predatory journals. Predatory journals are journals that “claim to be legitimate scholarly journals, but misrepresent their publishing practices,” according to a 2020 article published in the journal Toxicologic Pathology, “Predatory Journals: What They Are and How to Avoid Them.”

    2. Zoom in on data.

    Read the methods section of the study. The methods section of the study usually appears after the introduction and background section. “To me, the methods section is almost the most important part of any scientific paper,” says Aschwanden. “It’s amazing to me how often you read the design and the methods section, and anyone can see that it’s a flawed design. So just giving things a gut-level check can be really important.”

    What’s the sample size? Not all good studies have large numbers of participants but pay attention to the claims a study makes with a small sample size. “If you have a small sample, you calibrate your claims to the things you can tell about those people and don’t make big claims based on a little bit of evidence,” says Bottesini.

    But also remember that factors such as sample size and p-value are not “as clear cut as some journalists might assume,” says Wilner.

    How representative of a population is the study sample? “If the study has a non-representative sample of, say, undergraduate students, and they’re making claims about the general population, that’s kind of a red flag,” says Bottesini. Aschwanden points to the acronym WEIRD, which stands for “Western, Educated, Industrialized, Rich, and Democratic,” and is used to highlight a lack of diversity in a sample. Studies based on such samples may not be generalizable to the entire population, she says.

    Look at the p-value. Statistical significance is both confusing and controversial, but it’s important to consider. Read our tip sheet, “5 Things Journalists Need to Know About Statistical Significance,” to better understand it.

    3. Talk to scientists not involved in the study.

    If you’re not sure about the quality of a study, ask for help. “Talk to someone who is an expert in study design or statistics to make sure that [the study authors] use the appropriate statistics and that methods they use are appropriate because it’s amazing to me how often they’re not,” says Aschwanden.

    Get an opinion from an outside expert. It’s always a good idea to present the study to other researchers in the field, who have no conflicts of interest and are not involved in the research you’re covering and get their opinion. “Don’t take scientists at their word. Look into it. Ask other scientists, preferably the ones who don’t have a conflict of interest with the research,” says Bottesini.

    4. Remember that a single study is simply one piece of a growing body of evidence.

    “I have a general rule that a single study doesn’t tell us very much; it just gives us proof of concept,” says Peters. “It gives us interesting ideas. It should be retested. We need an accumulation of evidence.”

    Aschwanden says as a practice, she tries to avoid reporting stories about individual studies, with some exceptions such as very large, randomized controlled studies that have been underway for a long time and have a large number of participants. “I don’t want to say you never want to write a single-study story, but it always needs to be placed in the context of the rest of the evidence that we have available,” she says.

    Wilner advises journalists to spend some time looking at the scope of research on the study’s specific topic and learn how it has been written about and studied up to that point.

    “We would want science journalists to be reporting balance of evidence, and not focusing unduly on the findings that are just in front of them in a most recent study,” Wilner says. “And that’s a very difficult thing to ask journalists to do because they’re being asked to make their article very newsy, so it’s a difficult balancing act, but we can try and push journalists to do more of that.”

    5. Remind readers that science is always changing.

    “Science is always two steps forward, one step back,” says Peters. Give the public a notion of uncertainty, she advises. “This is what we know today. It may change tomorrow, but this is the best science that we know of today.”

    Aschwanden echoes the sentiment. “All scientific results are provisional, and we need to keep that in mind,” she says. “It doesn’t mean that we can’t know anything, but it’s very important that we don’t overstate things.”

    Authors of a study published in PNAS in January analyzed more than 14,000 psychology papers and found that replication success rates differ widely by psychology subfields. That study also found that papers that could not be replicated received more initial press coverage than those that could. 

    The authors note that the media “plays a significant role in creating the public’s image of science and democratizing knowledge, but it is often incentivized to report on counterintuitive and eye-catching results.”

    Ideally, the news media would have a positive relationship with replication success rates in psychology, the authors of the PNAS study write. “Contrary to this ideal, however, we found a negative association between media coverage of a paper and the paper’s likelihood of replication success,” they write. “Therefore, deciding a paper’s merit based on its media coverage is unwise. It would be valuable for the media to remind the audience that new and novel scientific results are only food for thought before future replication confirms their robustness.”

    Additional reading

    Uncovering the Research Behaviors of Reporters: A Conceptual Framework for Information Literacy in Journalism
    Katherine E. Boss, et al. Journalism & Mass Communication Educator, October 2022.

    The Problem with Psychological Research in the Media
    Steven Stosny. Psychology Today, September 2022.

    Critically Evaluating Claims
    Megha Satyanarayana, The Open Notebook, January 2022.

    How Should Journalists Report a Scientific Study?
    Charles Binkley and Subramaniam Vincent. Markkula Center for Applied Ethics at Santa Clara University, September 2020.

    What Journalists Get Wrong About Social Science: Full Responses
    Brian Resnick. Vox, January 2016.

    From The Journalist’s Resource

    8 Ways Journalists Can Access Academic Research for Free

    5 Things Journalists Need to Know About Statistical Significance

    5 Common Research Designs: A Quick Primer for Journalists

    5 Tips for Using PubPeer to Investigate Scientific Research Errors and Misconduct

    Percent Change versus Percentage-Point Change: What’s the Difference? 4 Tips for Avoiding Math Errors

    What’s Standard Deviation? 4 Things Journalists Need to Know

    This article first appeared on The Journalist’s Resource and is republished here under a Creative Commons license.


  • Is it OK if my child eats lots of fruit but no vegetables?


    Does it seem like most vegetables you serve your children end up left on the plate, or worse, strewn across the floor? But mention dessert, and your fruit skewers are polished off in an instant.

    Or maybe the carrot and cucumber sticks keep coming home in your child’s lunchbox untouched, yet the orange slices are nowhere to be seen.

    If you’re facing these struggles with your child, you’re not alone. Many children prefer fruit to vegetables.

    So if your child eats lots of fruit but minimal or no vegetables, is that OK? And how can you get them to eat more veggies?

    Children have an innate preference for fruit

    The Australian Dietary Guidelines’ recommended daily intakes for vegetables and fruit depend on a child’s age.

    [Chart: fruit and vegetable serving sizes by age, for ages 4–18. Source: National Health and Medical Research Council, CC BY-SA]

    Consumption among Australian children falls well below recommendations. Around 62.6% of children aged over two meet the recommended daily fruit intake, but only 9% meet the recommended vegetable intake.

    This is not surprising given children have a natural preference for fruit. At least in part, this is due to its sweetness and texture, whether crispy, crunchy or juicy. The texture of fruit has been linked to a positive sensory experience among children.

    Vegetables, on the other hand, are more of an acquired taste, and certain types, such as cruciferous vegetables, can be perceived by children as bitter.

    The reason children often prefer fruit over vegetables could also be related to the parents’ preferences. Some research has even suggested we develop food preferences before birth based on what our mother consumes during pregnancy.

    Balance is key

    So, a preference for fruit is common. But is it OK if your child eats lots of fruit but little to no vegetables? This is a question we, as dietitians, get asked regularly.

    You might be thinking, at least my child is eating fruit. They could be eating no veggies and no fruit. This is true. But while it’s great your child loves fruit, vegetables are just as important as part of a balanced eating pattern.

    Vegetables provide us with energy, essential vitamins and minerals, as well as water and fibre, which help keep our bowels regular. They also support a strong immune system.

    If your child is only eating fruit, they are missing some essential nutrients. But the same is true if they are eating only veggies.

    Fruit likewise provides the body with a variety of essential vitamins and minerals, as well as phytochemicals, which can help reduce inflammation.

    Evidence shows healthy consumption of fruit and vegetables protects against chronic diseases including high blood pressure, heart disease and stroke.

    Consumed together, fruit and vegetables in a variety of colours provide different nutrients we need, some of which we can’t get from other foods. We should encourage kids to eat a “rainbow” of fruit and vegetables each day to support their growth and development.

    What if my child eats too much fruit?

    If your child is eating slightly more fruit than what’s recommended each day, it’s not usually a problem.

    Fruit contains natural sugar, which is good for you. But too much of a good thing, even if it’s natural, can create problems. Fruit also contains virtually no fat and very little to no protein, both essential for a growing child.

    When overindulging in fruit starts to displace other food groups such as vegetables, dairy products and meat, that’s when things can get tricky.

    6 tips to get your kids to love vegetables

    1. Get them involved

    Take your child with you when you go shopping. Let them choose new vegetables. See if you can find vegetables even you haven’t tried, so you’re both having a new experience. Then ask them to help you with preparing or cooking the vegetables using a recipe you have chosen together. This will expose your child to veggies in a positive way and encourage them to eat more.

    2. Sensory learning

    Try to expose your child to vegetables rather than hiding them. Kids are more likely to eat veggies when they see, smell and feel them. This is called sensory learning.

    3. Have fun with food

    Use colourful vegetables of different sizes and textures. Make them fun by creating scenes or faces on your child’s plate. Add edible flowers or mint for decoration. You can even serve this with a side of veggie-based dip such as hummus or guacamole for some bonus healthy fats.

    4. Teach them to grow their own

    Teach your child how to grow their own vegetables. Evidence shows kids are more inclined to try the food they have helped and watched grow. You don’t need to have a big backyard to do this. A windowsill with a pot plant is a perfect start.

    5. Lead by example

    Your child learns from you, and your eating habits will influence theirs. Ensure they see you eating and enjoying veggies, whether in meals or as snacks.

    6. Practise persistence

    If your child refuses a particular vegetable once, don’t give up. It can take many attempts to encourage children to try a new food.

    Yasmine Probst, Associate Professor, School of Medical, Indigenous and Health Sciences, University of Wollongong; Olivia Wills, Accredited Practising Dietitian, PhD candidate, University of Wollongong, and Shoroog Allogmanny, Accredited Practising Dietitian, PhD candidate, University of Wollongong

    This article is republished from The Conversation under a Creative Commons license. Read the original article.


Related Posts

  • Forget Ringing the Button for the Nurse. Patients Now Stay Connected by Wearing One.
  • The Checklist Manifesto – by Dr. Atul Gawande


    Dr. Gawande, himself a general surgeon, uses checklists a lot. He is, unequivocally, an expert in his field. He “shouldn’t” need a checklist to tell him to do such things as “Check you have the correct patient”. But checklists are there as a safety net. And, famously, “safety regulations are written in blood”, after all.

    And, who amongst us has never made such a “silly” error? From forgetting to turn the oven on, to forgetting to take the handbrake off, it takes only a momentary distraction to think we’ve done something we haven’t.

    You may be wondering: why a whole book on this? Is it just many examples of the usefulness of checklists? Because I’m already sold on that, so, what else am I going to get out of it?

    Dr. Gawande also explains in clear terms:

    • How to optimize “all necessary steps” with “as few steps as possible”
    • The important difference between read-do checklists and do-confirm checklists
    • To what extent we should try to account for the unexpected
    • How to improve compliance (i.e., making sure you actually use it, no matter how tempting it will be to go “yeah this is automatic for me now” and gloss over it)
    • The role of checklists in teams, and in passing on knowledge

    …and more.

    Bottom line: if you’ve ever tried to make tea without putting the tea-leaves in the pot, this is the book that will help you avoid making more costly mistakes—whatever your area of activity or interest.

    Click here to check out the Checklist Manifesto, and make fewer mistakes!


  • Get Past Executive Dysfunction


    In mathematics, there is a thing called the “travelling salesman problem”, and it is hard. Not just subjectively; it is classified in mathematical terms as an “NP-hard” problem, where NP stands for “nondeterministic polynomial time”.

    The problem is: a travelling salesman must visit a certain list of cities, order undetermined, by the shortest possible route that visits them all.

    To work out what the shortest route is involves either very advanced mathematics, or else solving it by brute force, which means measuring every possible visiting order (a number that grows factorially, becoming enormous after just the first few cities) and then selecting the shortest.
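
    To make the brute-force option concrete, here is a minimal illustrative sketch in Python (the cities, coordinates, and function names are invented for this example; nothing like it appears in the original):

    ```python
    from itertools import permutations
    from math import dist

    # Hypothetical cities with (x, y) coordinates, purely for illustration
    cities = {"A": (0, 0), "B": (3, 4), "C": (6, 1), "D": (2, 7)}

    def route_length(route):
        """Total distance travelled visiting the cities in the given order."""
        return sum(dist(cities[a], cities[b]) for a, b in zip(route, route[1:]))

    # Brute force: fix a starting city, then try every ordering of the rest.
    # With n cities this checks (n - 1)! routes, which explodes very quickly.
    start, *rest = cities
    best = min(((start, *perm) for perm in permutations(rest)), key=route_length)
    print(f"Shortest route: {best}, length {route_length(best):.2f}")
    ```

    For four cities that is only 3! = 6 routes to check; for twenty cities it is 19! (over 10¹⁷ routes), which is why nobody plans their morning this way.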

    Why are we telling you this?

    Executive dysfunction’s analysis paralysis

    Executive dysfunction is the state of knowing you have things to do, wanting to do them, intending to do them, and then simply not doing them.

    Colloquially, this can be called “analysis paralysis” and is considered a problem of planning and organizing, as much as it is a problem of initiating tasks.

    Let’s give a simple example:

    You wake up in the morning, and you need to go to the bathroom. But the bathroom will be cold, so you’ll want to get dressed first. However, it will be uncomfortable to get dressed while you still need to use the bathroom, so you contemplate doing that first. Those two items are already a closed loop now. You’re thirsty, so you want to have a drink, but the bathroom is calling to you. Sitting up, it’s colder than under the covers, so you think about getting dressed. Maybe you should have just a sip of water first. What else do you need to do today anyway? You grab your phone to check, drink untouched, clothes unselected, bathroom unvisited.

    That was a simple example; now apply that to other parts of your day that have much more complex planning possible.

    This is like the travelling salesman problem, except that now, some things are better if done before or after certain other things. Sometimes, possibly, they are outright required to be done before or after certain other things.

    So you have four options:

    • Solve the problem of your travelling-salesman-like tasklist using advanced mathematics (good luck if you don’t have advanced mathematics)
    • Solve the problem by brute force, calculating all possible variations and selecting the shortest (good luck getting that done the same day)
    • Go with a gut feeling and stick to it (people without executive dysfunction do this)
    • Go towards the nearest item, notice another item on the way, go towards that, notice a different item on the way there, and another one, get stuck for a while choosing between those two, head towards one, notice another one, and so on until you’ve done a very long scenic curly route that has narrowly missed all of your targeted items (this is the executive dysfunction approach).

    So instead, just pick one, do it, pick another one, do it, and so forth.

    That may seem “easier said than done”, but there are tools available…

    Task zero

    We’ve mentioned this before in the little section at the top of our daily newsletter that we often use for tips.

    One of the problems that leads to executive dysfunction is a shortage of “working memory” (like the RAM of a computer), so it’s easy to get overwhelmed by lists of things to do.

    So instead, hold only two items in your mind:

    • Task zero: the thing you are doing right now
    • Task one: the thing you plan to do next

    When you’ve completed task zero, move on to task one, renaming it task zero, and select a new task one (a short sketch after the list below makes this concrete).

    With this approach, you will never:

    • Think “what did I come into this room for?”
    • Get distracted by alluring side-quests
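
    For the programmatically inclined, the method can be pictured as a two-slot buffer: only two tasks are ever held in memory, and everything else stays out of sight. A playful sketch in Python (the task names are invented; this is our illustration, not part of the method itself):

    ```python
    from collections import deque

    # Hypothetical to-do items, purely for illustration
    backlog = deque(["use the bathroom", "get dressed", "drink water", "check messages"])

    task_zero = backlog.popleft()                      # what you are doing right now
    task_one = backlog.popleft() if backlog else None  # what you plan to do next

    while task_zero is not None:
        print(f"Doing: {task_zero} (next up: {task_one})")
        # ...finish task zero completely, ignoring any alluring side-quests...
        task_zero, task_one = task_one, (backlog.popleft() if backlog else None)
    ```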

    Do not get corrupted by the cursed artefact

    In fantasy, and occasionally science fiction, there is a trope: an item that people are drawn towards, but which corrupts them, changes their motivations and behaviors for the worse, as well as making them resistant to giving the item up.

    An archetypal example of this would be the One Ring from The Lord of the Rings.

    It’s easy to read/watch and think “well I would simply not be corrupted by the cursed artefact”.

    And then pick up one’s phone to open the same three apps in a cycle for the next 40 minutes.

    This is because technology that is designed to be addictive hijacks our dopamine processing, and takes advantage of executive dysfunction, while worsening it.

    There are some ways to mitigate this:

    Rebalancing Dopamine (Without “Dopamine Fasting”)

    …but one way to avoid it entirely is to mentally narrate your choices. It’s a lot harder to make bad choices with an internal narrator going:

    • “She picked up her phone absent-mindedly, certain that this time it really would be only a few seconds”
    • “She picked up her phone for the eleventy-third time”
    • “Despite her plan to put her shoes on, she headed instead for the kitchen”

    This method also helps against other bad choices aside from those pertaining to executive dysfunction, too:

    • “Abandoning her plan to eat healthily, she lingered in the confectionary aisle, scanning the shelves for sugary treats”
    • “Monday morning will be the best time to start my new exercise regime”, she thought, for the 35th week so far this year

    Get pharmaceutical or nutraceutical help

    While it’s not for everyone, many people with executive dysfunction benefit from ADHD meds. However, they have their pros and cons (perhaps we’ll do a run-down one of these days).

    There are also gentler options that can significantly ameliorate executive dysfunction, for example:

    Bacopa Monnieri: A Well-Evidenced Cognitive Enhancer For Focus & More

    Enjoy!


  • Beetroot vs Carrot – Which is Healthier?


    Our Verdict

    When comparing beetroot to carrot, we picked the carrot.

    Why?

    It was close! And beetroot does have its advantages, but we say carrot wins on balance.

    In terms of macros, these two root vegetables are close to identical, down to both having 9.57g carbs per 100g, and 2.8g fiber per 100g. Technically, beetroot has a smidgen more protein, but nobody’s eating these for their tiny protein content.

    When it comes to vitamins, it’s not close and the margins are mostly huge: carrots have a lot more of vitamins A, B1, B2, B3, B5, B6, C, E, K, and choline, while beetroot has more vitamin B9.

    In the category of minerals, superficially it swings the other way, but the margins this time are small. Nevertheless, beetroot has more copper, iron, magnesium, manganese, phosphorus, potassium, selenium, and zinc, while carrots have more calcium.

    This would make things, on balance, a tie: equal on macros, carrots win on vitamins, beetroot wins on minerals.

    But because of the relative margins of difference, carrots win the day, because they’re almost as good as beetroot on those minerals, whereas beetroot doesn’t come close to carrot on the vitamins.

    Want to learn more?

    You might like to read:

    From Apples to Bees, and high-fructose C’s: Which Sugars Are Healthier, And Which Are Just The Same?

    Take care!
