No, sugar doesn’t make your kids hyperactive
10almonds is reader-supported. We may, at no cost to you, receive a portion of sales if you purchase a product through a link in this article.
It’s a Saturday afternoon at a kids’ birthday party. Hordes of children are swarming between the spread of birthday treats and party games. Half-eaten cupcakes, biscuits and lollies litter the floor, and the kids seem to have gained superhuman speed and bounce-off-the-wall energy. But is sugar to blame?
The belief that eating sugary foods and drinks leads to hyperactivity has steadfastly persisted for decades. And parents have curtailed their children’s intake accordingly.
Balanced nutrition is critical during childhood. As a neuroscientist who has studied the negative effects of high sugar “junk food” diets on brain function, I can confidently say excessive sugar consumption has no benefits for the young mind. In fact, neuroimaging studies show the brains of children who eat more processed snack foods are smaller in volume, particularly in the frontal cortices, than those of children who eat a more healthful diet.
But today’s scientific evidence does not support the claim sugar makes kids hyperactive.
The hyperactivity myth
Sugar is a rapid source of fuel for the body. The myth of sugar-induced hyperactivity can be traced to a handful of studies conducted in the 1970s and early 1980s. These were focused on the Feingold Diet as a treatment for what we now call Attention Deficit Hyperactivity Disorder (ADHD), a neurodivergent profile where problems with inattention and/or hyperactivity and impulsivity can negatively affect school, work or relationships.
Devised by American paediatric allergist Benjamin Feingold, the diet is extremely restrictive. Artificial colours, sweeteners (including sugar) and flavourings, salicylates including aspirin, and three preservatives (butylated hydroxyanisole, butylated hydroxytoluene, and tert-butylhydroquinone) are eliminated.
Salicylates occur naturally in many healthy foods, including apples, berries, tomatoes, broccoli, cucumbers, capsicums, nuts, seeds, spices and some grains. So, as well as eliminating processed foods containing artificial colours, flavours, preservatives and sweeteners, the Feingold diet eliminates many nutritious foods helpful for healthy development.
However, Feingold believed avoiding these ingredients improved focus and behaviour. He conducted some small studies, which he claimed showed a large proportion of hyperactive children responded favourably to his diet.
Flawed by design
The methods used in these studies were flawed, particularly in their lack of adequate control groups (children who did not restrict foods), and they failed to establish a causal link between sugar consumption and hyperactive behaviour.
Subsequent studies suggested fewer than 2% of children responded to the restrictions, rather than Feingold’s claimed 75%. But the idea still took hold in the public consciousness and was perpetuated by anecdotal experiences.
Fast forward to the present day. The scientific landscape looks vastly different. Rigorous research conducted by experts has consistently failed to find a connection between sugar and hyperactivity. Numerous placebo-controlled studies have demonstrated sugar does not significantly impact children’s behaviour or attention span.
One landmark meta-analysis study, published almost 20 years ago, compared the effects of sugar versus a placebo on children’s behaviour across multiple studies. The results were clear: in the vast majority of studies, sugar consumption did not lead to increased hyperactivity or disruptive behaviour.
Subsequent research has reinforced these findings, providing further evidence sugar does not cause hyperactivity in children, even in those diagnosed with ADHD.
While Feingold’s original claims were overstated, a small proportion of children do experience allergies to artificial food flavourings and dyes.
Pre-school aged children may be more sensitive to food additives than older children. This is potentially due to their smaller body size, or their still-developing brain and body.
Hooked on dopamine?
Although the link between sugar and hyperactivity is murky at best, there is a proven link between the neurotransmitter dopamine and increased activity.
The brain releases dopamine when a reward is encountered – such as an unexpected sweet treat. A surge of dopamine also invigorates movement – we see this increased activity after taking psychostimulant drugs like amphetamine. The excited behaviour of children towards sugary foods may be attributed to a burst of dopamine released in expectation of a reward, although the level of dopamine release is much less than that of a psychostimulant drug.
Dopamine function is also critically linked to ADHD, which is thought to be due to diminished dopamine receptor function in the brain. Some ADHD treatments such as methylphenidate (labelled Ritalin or Concerta) and lisdexamfetamine (sold as Vyvanse) are also psychostimulants. But in the ADHD brain the increased dopamine from these drugs recalibrates brain function to aid focus and behavioural control.
Why does the myth persist?
The complex interplay between diet, behaviour and societal beliefs endures. Expecting sugar to change your child’s behaviour can influence how you interpret what you see. In a study where parents were told their child had received either a sugary drink or a placebo drink (with a non-sugar sweetener), those parents who expected their child to be hyperactive after having sugar perceived this effect, even when the child had only had the sugar-free placebo.
The allure of a simple explanation – blaming sugar for hyperactivity – can also be appealing in a world filled with many choices and conflicting voices.
Healthy foods, healthy brains
Sugar itself may not make your child hyperactive, but it can affect your child’s mental and physical health. Rather than demonising sugar, we should encourage moderation and balanced nutrition, teaching children healthy eating habits and fostering a positive relationship with food.
In both children and adults, the World Health Organization (WHO) recommends limiting free sugar consumption to less than 10% of energy intake, and a reduction to 5% for further health benefits. Free sugars include sugars added to foods during manufacturing, and naturally present sugars in honey, syrups, fruit juices and fruit juice concentrates.
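To put those percentages into grams: sugars provide roughly 4 kcal per gram, so the WHO limits convert directly from daily energy intake. A minimal sketch of that conversion (the 2,000 kcal daily intake is just an illustrative figure, not part of the WHO guideline):

```python
KCAL_PER_GRAM_SUGAR = 4  # approximate energy provided by 1 g of sugar

def free_sugar_limit_grams(daily_kcal: float, fraction: float) -> float:
    """Convert a daily energy intake and a free-sugar fraction
    (e.g. 0.10 for the WHO's 10% limit) into grams of sugar per day."""
    return daily_kcal * fraction / KCAL_PER_GRAM_SUGAR

print(free_sugar_limit_grams(2000, 0.10))  # 50.0 g/day at the 10% limit
print(free_sugar_limit_grams(2000, 0.05))  # 25.0 g/day at the stricter 5% target
```

On a 2,000 kcal diet, the 10% limit works out to about 50 g of free sugar a day (roughly 12 teaspoons), and the 5% target to about 25 g.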
Treating sugary foods as rewards can result in them becoming highly valued by children. Non-sugar rewards work just as well as incentives, so it’s a good idea to use stickers, toys or a fun activity to encourage positive behaviour instead.
While sugar may provide a temporary energy boost, it does not turn children into hyperactive whirlwinds.
Amy Reichelt, Senior Lecturer (Adjunct), Nutritional neuroscientist, University of Adelaide
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Don’t Forget…
Did you arrive here from our newsletter? Don’t forget to return to the email to continue learning!
Recommended
Learn to Age Gracefully
Join the 98k+ American women taking control of their health & aging with our 100% free (and fun!) daily emails:
Red Lentils vs Green Lentils – Which is Healthier?
Our Verdict
When comparing red lentils to green lentils, we picked the green.
Why?
Yes, they’re both great. But there are some clear distinctions!
First, know: red lentils are, secretly, hulled brown lentils. Brown lentils are similar to green lentils, just a little less popular and with (very) slightly lower nutritional values, as a rule.
The first thing to mention about hulling is that the lentils lose some of their fiber, since the hull is what’s removed. While we’re talking macros, this does mean that red lentils have proportionally more protein, because of the fiber weight lost. However, because green lentils are still a good source of protein, we think the fact that green lentils have much more fiber is a point in their favor.
In terms of micronutrients, they’re quite similar in vitamins (mostly B-vitamins, of which, mostly folate / vitamin B9), and when it comes to minerals, they’re similarly good sources of iron, but green lentils contain more magnesium and potassium.
Green lentils also contain more antioxidants.
All in all, they both continue to be very respectable parts of anyone’s diet—but in a head-to-head, green lentils do come out on top (unless you want to prioritize slightly higher protein above everything else, in which case, red).
Want to get some in? Here are the specific products we featured today:
Enjoy!
Want to learn more?
You might like to read:
- Why You’re Probably Not Getting Enough Fiber (And How To Fix It)
- Eat More (Of This) For Lower Blood Pressure ← incidentally, the potassium content of green lentils also helps minimize the harm done by sodium in one’s diet
Take care!
How do science journalists decide whether a psychology study is worth covering?
Complex research papers and data flood academic journals daily, and science journalists play a pivotal role in disseminating that information to the public. This can be a daunting task, requiring a keen understanding of the subject matter and the ability to translate dense academic language into narratives that resonate with the general public.
Several resources and tip sheets, including the Know Your Research section here at The Journalist’s Resource, aim to help journalists hone their skills in reporting on academic research.
But what factors do science journalists look for to decide whether a social science research study is trustworthy and newsworthy? That’s the question researchers at the University of California, Davis, and the University of Melbourne in Australia examine in a recent study, “How Do Science Journalists Evaluate Psychology Research?” published in September in Advances in Methods and Practices in Psychological Science.
Their online survey of 181 mostly U.S.-based science journalists looked at how and whether they were influenced by four factors in fictitious research summaries: the sample size (number of participants in the study), sample representativeness (whether the participants in the study were from a convenience sample or a more representative sample), the statistical significance level of the result (just barely statistically significant or well below the significance threshold), and the prestige of a researcher’s university.
The researchers found that sample size was the only factor that had a robust influence on journalists’ ratings of how trustworthy and newsworthy a study finding was.
University prestige had no effect, while the effects of sample representativeness and statistical significance were inconclusive.
But there’s nuance to the findings, the authors note.
“I don’t want people to think that science journalists aren’t paying attention to other things, and are only paying attention to sample size,” says Julia Bottesini, an independent researcher, a recent Ph.D. graduate from the Psychology Department at UC Davis, and the first author of the study.
Overall, the results show that “these journalists are doing a very decent job” vetting research findings, Bottesini says.
Also, the findings from the study are not generalizable to all science journalists or other fields of research, the authors note.
“Instead, our conclusions should be circumscribed to U.S.-based science journalists who are at least somewhat familiar with the statistical and replication challenges facing science,” they write. (Over the past decade a series of projects have found that the results of many studies in psychology and other fields can’t be reproduced, leading to what has been called a ‘replication crisis.’)
“This [study] is just one tiny brick in the wall and I hope other people get excited about this topic and do more research on it,” Bottesini says.
More on the study’s findings
The study’s findings can be useful for researchers who want to better understand how science journalists read their research and what kind of intervention — such as teaching journalists about statistics — can help journalists better understand research papers.
“As an academic, I take away the idea that journalists are a great population to try to study because they’re doing something really important and it’s important to know more about what they’re doing,” says Ellen Peters, director of the Center for Science Communication Research at the School of Journalism and Communication at the University of Oregon. Peters, who was not involved in the study, is also a psychologist who studies human judgment and decision-making.
Peters says the study was “overall terrific.” She adds that understanding how journalists do their work “is an incredibly important thing to do because journalists are who reach the majority of the U.S. with science news, so understanding how they’re reading some of our scientific studies and then choosing whether to write about them or not is important.”
The study, conducted between December 2020 and March 2021, is based on an online survey of journalists who said they at least sometimes covered science or other topics related to health, medicine, psychology, social sciences, or well-being. They were offered a $25 Amazon gift card as compensation.
Among the participants, 77% were women, 19% were men, 3% were nonbinary and 1% preferred not to say. About 62% said they had studied physical or natural sciences at the undergraduate level, and 24% at the graduate level. Also, 48% reported having a journalism degree. The study did not include the journalists’ news reporting experience level.
Participants were recruited through the professional network of Christie Aschwanden, an independent journalist and consultant on the study, which could be a source of bias, the authors note.
“Although the size of the sample we obtained (N = 181) suggests we were able to collect a range of perspectives, we suspect this sample is biased by an ‘Aschwanden effect’: that science journalists in the same professional network as C. Aschwanden will be more familiar with issues related to the replication crisis in psychology and subsequent methodological reform, a topic C. Aschwanden has covered extensively in her work,” they write.
Participants were randomly presented with eight of 22 one-paragraph fictitious social and personality psychology research summaries with fictitious authors. The summaries are posted on Open Science Framework, a free and open-source project management tool for researchers by the Center for Open Science, with a mission to increase openness, integrity and reproducibility of research.
For instance, one of the vignettes reads:
“Scientists at Harvard University announced today the results of a study exploring whether introspection can improve cooperation. 550 undergraduates at the university were randomly assigned to either do a breathing exercise or reflect on a series of questions designed to promote introspective thoughts for 5 minutes. Participants then engaged in a cooperative decision-making game, where cooperation resulted in better outcomes. People who spent time on introspection performed significantly better at these cooperative games (t (548) = 3.21, p = 0.001). ‘Introspection seems to promote better cooperation between people,’ says Dr. Quinn, the lead author on the paper.”
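As an aside, the statistic quoted in this vignette can be sanity-checked without specialist software: with 548 degrees of freedom, the t distribution is nearly normal, so a two-tailed p-value for t = 3.21 can be approximated from the standard library alone. A rough sketch (normal approximation, not the exact t distribution):

```python
import math

def two_tailed_p(t: float) -> float:
    """Two-tailed p-value for a test statistic t, using the normal
    approximation (accurate when degrees of freedom are large, e.g. 548)."""
    return math.erfc(abs(t) / math.sqrt(2.0))

p = two_tailed_p(3.21)
print(f"p ≈ {p:.4f}")  # ≈ 0.0013, consistent with the reported p = 0.001
```

The result agrees with the vignette’s reported value once rounded, which is exactly the kind of quick plausibility check the surveyed journalists describe doing.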
In addition to answering multiple-choice survey questions, participants were given the opportunity to answer open-ended questions, such as “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?”
Bottesini says those responses illuminated how science journalists analyze a research study. Participants often mentioned the prestige of the journal in which it was published or whether the study had been peer-reviewed. Many also seemed to value experimental research designs over observational studies.
Considering statistical significance
When it came to considering p-values, “some answers suggested that journalists do take statistical significance into account, but only very few included explanations that suggested they made any distinction between higher or lower p values; instead, most mentions of p values suggest journalists focused on whether the key result was statistically significant,” the authors write.
Also, many participants mentioned that it was very important to talk to outside experts or researchers in the same field to get a better understanding of the finding and whether it could be trusted, the authors write.
“Journalists also expressed that it was important to understand who funded the study and whether the researchers or funders had any conflicts of interest,” they write.
Participants also “indicated that making claims that were calibrated to the evidence was also important and expressed misgivings about studies for which the conclusions do not follow from the evidence,” the authors write.
In response to the open-ended question, “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?” some journalists wrote they checked whether the study was overstating conclusions or claims. Below are some of their written responses:
- “Is the researcher adamant that this study of 40 college kids is representative? If so, that’s a red flag.”
- “Whether authors make sweeping generalizations based on the study or take a more measured approach to sharing and promoting it.”
- “Another major point for me is how ‘certain’ the scientists appear to be when commenting on their findings. If a researcher makes claims which I consider to be over-the-top about the validity or impact of their findings, I often won’t cover.”
- “I also look at the difference between what an experiment actually shows versus the conclusion researchers draw from it — if there’s a big gap, that’s a huge red flag.”
Peters says the study’s findings show that “not only are journalists smart, but they have also gone out of their way to get educated about things that should matter.”
What other research shows about science journalists
A 2023 study, published in the International Journal of Communication, based on an online survey of 82 U.S. science journalists, aims to understand what they know and think about open-access research, including peer-reviewed journals and articles that don’t have a paywall, and preprints. Data was collected between October 2021 and February 2022. Preprints are scientific studies that have yet to be peer-reviewed and are shared on open repositories such as medRxiv and bioRxiv. The study finds that its respondents “are aware of OA and related issues and make conscious decisions around which OA scholarly articles they use as sources.”
A 2021 study, published in the Journal of Science Communication, looks at the impact of the COVID-19 pandemic on the work of science journalists. Based on an online survey of 633 science journalists from 77 countries, it finds that the pandemic somewhat brought scientists and science journalists closer together. “For most respondents, scientists were more available and more talkative,” the authors write. The pandemic has also provided an opportunity to explain the scientific process to the public, and remind them that “science is not a finished enterprise,” the authors write.
More than a decade ago, a 2008 study, published in PLOS Medicine, and based on an analysis of 500 health news stories, found that “journalists usually fail to discuss costs, the quality of the evidence, the existence of alternative options, and the absolute magnitude of potential benefits and harms,” when reporting on research studies. Giving journalists time to research and understand the studies, giving them space for publication and broadcasting of the stories, and training them in understanding academic research are some of the solutions to fill the gaps, writes Gary Schwitzer, the study author.
Advice for journalists
We asked Bottesini, Peters, Aschwanden and Tamar Wilner, a postdoctoral fellow at the University of Texas, who was not involved in the study, to share advice for journalists who cover research studies. Wilner is conducting a study on how journalism research informs the practice of journalism. Here are their tips:
1. Examine the study before reporting it.
Does the study claim match the evidence? “One thing that makes me trust the paper more is if their interpretation of the findings is very calibrated to the kind of evidence that they have,” says Bottesini. In other words, if the study makes a claim in its results that’s far-fetched, the authors should present a lot of evidence to back that claim.
Not all surprising results are newsworthy. If you come across a surprising finding from a single study, Peters advises you to step back and remember Carl Sagan’s quote: “Extraordinary claims require extraordinary evidence.”
How transparent are the authors about their data? For instance, are the authors posting information such as their data and the computer codes they use to analyze the data on platforms such as Open Science Framework, AsPredicted, or The Dataverse Project? Some researchers ‘preregister’ their studies, which means they share how they’re planning to analyze the data before they see them. “Transparency doesn’t automatically mean that a study is trustworthy,” but it gives others the chance to double-check the findings, Bottesini says.
Look at the study design. Is it an experimental study or an observational study? Observational studies can show correlations but not causation.
“Observational studies can be very important for suggesting hypotheses and pointing us towards relationships and associations,” Aschwanden says.
Experimental studies can provide stronger evidence toward a cause, but journalists must still be cautious when reporting the results, she advises. “If we end up implying causality, then once it’s published and people see it, it can really take hold,” she says.
Know the difference between preprints and peer-reviewed, published studies. Peer-reviewed papers tend to be of higher quality than those that are not peer-reviewed. Read our tip sheet on the difference between preprints and journal articles.
Beware of predatory journals. Predatory journals are journals that “claim to be legitimate scholarly journals, but misrepresent their publishing practices,” according to a 2020 journal article, published in the journal Toxicologic Pathology, “Predatory Journals: What They Are and How to Avoid Them.”
2. Zoom in on data.
Read the methods section of the study. The methods section of the study usually appears after the introduction and background section. “To me, the methods section is almost the most important part of any scientific paper,” says Aschwanden. “It’s amazing to me how often you read the design and the methods section, and anyone can see that it’s a flawed design. So just giving things a gut-level check can be really important.”
What’s the sample size? Not all good studies have large numbers of participants but pay attention to the claims a study makes with a small sample size. “If you have a small sample, you calibrate your claims to the things you can tell about those people and don’t make big claims based on a little bit of evidence,” says Bottesini.
But also remember that factors such as sample size and p-value are not “as clear cut as some journalists might assume,” says Wilner.
How representative of a population is the study sample? “If the study has a non-representative sample of, say, undergraduate students, and they’re making claims about the general population, that’s kind of a red flag,” says Bottesini. Aschwanden points to the acronym WEIRD, which stands for “Western, Educated, Industrialized, Rich, and Democratic,” and is used to highlight a lack of diversity in a sample. Studies based on such samples may not be generalizable to the entire population, she says.
Look at the p-value. Statistical significance is both confusing and controversial, but it’s important to consider. Read our tip sheet, “5 Things Journalists Need to Know About Statistical Significance,” to better understand it.
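One back-of-envelope check that ties the sample-size and significance points above together: the margin of error of an estimated proportion shrinks only with the square root of the sample size. A minimal sketch (the 1.96 z-value for 95% confidence and the example sample sizes are illustrative assumptions, not figures from the study):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for an estimated proportion p from a simple
    random sample of n respondents (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# Quadrupling the sample only halves the margin of error:
print(round(margin_of_error(0.5, 100), 3))  # 0.098, i.e. about ±10 points
print(round(margin_of_error(0.5, 400), 3))  # 0.049, i.e. about ±5 points
```

This is why journalists in the survey leaned so heavily on sample size: small samples carry wide uncertainty no matter how striking the headline finding.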
3. Talk to scientists not involved in the study.
If you’re not sure about the quality of a study, ask for help. “Talk to someone who is an expert in study design or statistics to make sure that [the study authors] use the appropriate statistics and that methods they use are appropriate because it’s amazing to me how often they’re not,” says Aschwanden.
Get an opinion from an outside expert. It’s always a good idea to present the study to other researchers in the field, who have no conflicts of interest and are not involved in the research you’re covering and get their opinion. “Don’t take scientists at their word. Look into it. Ask other scientists, preferably the ones who don’t have a conflict of interest with the research,” says Bottesini.
4. Remember that a single study is simply one piece of a growing body of evidence.
“I have a general rule that a single study doesn’t tell us very much; it just gives us proof of concept,” says Peters. “It gives us interesting ideas. It should be retested. We need an accumulation of evidence.”
Aschwanden says as a practice, she tries to avoid reporting stories about individual studies, with some exceptions such as very large, randomized controlled studies that have been underway for a long time and have a large number of participants. “I don’t want to say you never want to write a single-study story, but it always needs to be placed in the context of the rest of the evidence that we have available,” she says.
Wilner advises journalists to spend some time looking at the scope of research on the study’s specific topic and learn how it has been written about and studied up to that point.
“We would want science journalists to be reporting balance of evidence, and not focusing unduly on the findings that are just in front of them in a most recent study,” Wilner says. “And that’s a very difficult thing to ask journalists to do because they’re being asked to make their article very newsy, so it’s a difficult balancing act, but we can try and push journalists to do more of that.”
5. Remind readers that science is always changing.
“Science is always two steps forward, one step back,” says Peters. Give the public a notion of uncertainty, she advises. “This is what we know today. It may change tomorrow, but this is the best science that we know of today.”
Aschwanden echoes the sentiment. “All scientific results are provisional, and we need to keep that in mind,” she says. “It doesn’t mean that we can’t know anything, but it’s very important that we don’t overstate things.”
Authors of a study published in PNAS in January analyzed more than 14,000 psychology papers and found that replication success rates differ widely by psychology subfields. That study also found that papers that could not be replicated received more initial press coverage than those that could.
The authors note that the media “plays a significant role in creating the public’s image of science and democratizing knowledge, but it is often incentivized to report on counterintuitive and eye-catching results.”
Ideally, the news media would have a positive relationship with replication success rates in psychology, the authors of the PNAS study write. “Contrary to this ideal, however, we found a negative association between media coverage of a paper and the paper’s likelihood of replication success,” they write. “Therefore, deciding a paper’s merit based on its media coverage is unwise. It would be valuable for the media to remind the audience that new and novel scientific results are only food for thought before future replication confirms their robustness.”
Additional reading
Uncovering the Research Behaviors of Reporters: A Conceptual Framework for Information Literacy in Journalism
Katerine E. Boss, et al. Journalism & Mass Communication Educator, October 2022.

The Problem with Psychological Research in the Media
Steven Stosny. Psychology Today, September 2022.

Critically Evaluating Claims
Megha Satyanarayana, The Open Notebook, January 2022.

How Should Journalists Report a Scientific Study?
Charles Binkley and Subramaniam Vincent. Markkula Center for Applied Ethics at Santa Clara University, September 2020.

What Journalists Get Wrong About Social Science: Full Responses
Brian Resnick. Vox, January 2016.

From The Journalist’s Resource
8 Ways Journalists Can Access Academic Research for Free
5 Things Journalists Need to Know About Statistical Significance
5 Common Research Designs: A Quick Primer for Journalists
5 Tips for Using PubPeer to Investigate Scientific Research Errors and Misconduct
What’s Standard Deviation? 4 Things Journalists Need to Know
This article first appeared on The Journalist’s Resource and is republished here under a Creative Commons license.
Pistachios vs Pine Nuts – Which is Healthier?
Our Verdict
When comparing pistachios to pine nuts, we picked the pistachios.
Why?
First looking at the macros, pistachios have nearly 2x the protein while pine nuts have nearly 2x the fat. The fats are healthy in moderation (mostly polyunsaturated, a fair portion of monounsaturated, and a little saturated), but we’re going to value the protein content higher. Also, pistachios have approximately 2x the carbs, and/but nearly 3x the fiber. All in all, we’ll call this section a moderate win for pistachios.
When it comes to vitamins, pistachios have more of vitamins A, B1, B5, B6, B9, and C, while pine nuts have more of vitamins B2, B3, E, K, and choline. All in all, pistachios are scraping a 6:5 win here, or we could call it a tie if we want to value pine nuts’ vitamins more (due to the difference in how many foods each vitamin is found in, and thus the likelihood of having a deficiency or not).
In the category of minerals, pistachios have more calcium, copper, potassium, and selenium, while pine nuts have more iron, magnesium, manganese, and zinc. This would be a tie if we just call it 4:4, but what’s worth noting is that while both of these nuts are a good source of most of the minerals mentioned, pine nuts aren’t a very good source of calcium or selenium, so we’re going to declare this section a very marginal win for pistachios.
The moderate win, the scraped win, and the barely scraped win all add up to an overall win for pistachios. However, as you might have noticed, both are great, so do enjoy both if you can!
Want to learn more?
You might like to read:
Why You Should Diversify Your Nuts
Take care!
Related Posts
6 Lifestyle Factors To Measurably Reduce Biological Age
Julie Gibson Clark competes on a global leaderboard of people actively fighting aging (including billionaire Bryan Johnson, who is famously dedicated to the pursuit). She’s currently ahead of him on that leaderboard, so what’s she doing?
Top tips
We’ll not keep the six factors a mystery; they are:
- Exercise: her weekly exercise includes VO2 Max training, strength training, balance work, and low-intensity cardio. She exercises outdoors on Saturdays and takes rest days on Fridays and Sundays.
- Diet: she follows a 16-hour intermittent fasting schedule (eating between 09:00 and 17:00), consumes a clean omnivore diet with an emphasis on vegetables and adequate protein, and avoids junk food.
- Brain: she meditates for 20 minutes daily, prioritizes mental health, and ensures sufficient quality sleep, helped by morning sunlight exposure and time in nature.
- Hormesis: she engages in 20-minute sauna sessions followed by cold showers four times per week to support recovery and longevity.
- Supplements: she takes longevity supplements and bioidentical hormones to optimize her health and aging process.
- Testing: she regularly monitors her biological age and health markers through various tests, including DEXA scans, VO2 Max tests, lipid panels, and epigenetic aging clocks, allowing her to adjust her routine accordingly.
For more on all of these, enjoy:
Click Here If The Embedded Video Doesn’t Load Automatically!
Want to learn more?
You might also like to read:
Age & Aging: What Can (And Can’t) We Do About It?
Take care!
Don’t Forget…
Did you arrive here from our newsletter? Don’t forget to return to the email to continue learning!
Learn to Age Gracefully
Join the 98k+ American women taking control of their health & aging with our 100% free (and fun!) daily emails:
-
How To Grow New Brain Cells (At Any Age)
It was long believed that brain growth could not occur later in life, due to expending our innate stock of pluripotent stem cells. However, this was mostly based on rodent studies.
Rodent studies are often used for brain research, because it’s difficult to find human volunteers willing to have their brains sliced thinly (so that the cells can be viewed under a microscope) at the end of the study.
However, neurobiologist Dr. Maura Boldrini led a team that did a lot of research by means of autopsies on the hippocampi of (previously) healthy individuals ranging in age from 14 to 79.
What she found is that while indeed the younger subjects did predictably have more young brain cells (neural progenitors and immature neurons), even the oldest subject, at the age of 79, had been producing new brain cells up until death.
Read her landmark study: Human Hippocampal Neurogenesis Persists throughout Aging
There was briefly a flurry of news articles about a study by Dr. Shawn Sorrells that refuted this; however, it later came to light that Dr. Sorrells had accidentally destroyed his own evidence during the cell-fixing process—these things happen; it’s just unfortunate the mistake was not picked up until after publication.
A later study, led by Dr. Elena Moreno-Jiménez, addressed this flaw by using a shorter fixation time for the cell samples, and found tens of thousands of newly-made brain cells in samples from adults ranging in age from 43 to 87.
Now, there was still a difference: the samples from the youngest adult had 30% more newly-made brain cells than those from the 87-year-old. But given that previous science held that brain cell generation stopped in childhood, the fact that an 87-year-old was generating new brain cells only 30% more slowly than a 43-year-old is hardly much of a criticism!
As an aside: samples from patients with Alzheimer’s also showed a 30% reduction in new brain cell generation, compared to samples from patients of the same age without Alzheimer’s. But again: even patients with Alzheimer’s were still growing some new brain cells.
Read it for yourself: Adult hippocampal neurogenesis is abundant in neurologically healthy subjects and drops sharply in patients with Alzheimer’s disease
Practical advice based on this information
Since we can do neurogenesis at any age, but the rate does drop with age (and drops sharply in the case of Alzheimer’s disease), we need to:
Feed your brain. The brain is the most calorie-consuming organ we have, by far, and it’s also made mostly of fat* and water. So, get plenty of healthy fats, and get plenty of water.
*Fun fact: while depictions in fiction (and/or chemically preserved brains) may lead many to believe the brain has a rubbery consistency, the untreated brain being made of mostly fat and water gives it more of a blancmange-like consistency in reality. That thing is delicate and spatters easily. There’s a reason it’s kept cushioned inside the strongest structure of our body, far more protected than anything in our torso.
Exercise. Specifically, exercise that gets your blood pumping. This, as the video featured earlier in today’s edition noted, is one of the biggest things we can do to boost Brain-Derived Neurotrophic Factor, or BDNF.
Here be science: Brain-Derived Neurotrophic Factor, Depression, and Physical Activity: Making the Neuroplastic Connection
However, that’s not the only way to increase BDNF; another is to enjoy a diet rich in polyphenols. These can be found in, for example, berries, tea, coffee, and chocolate. Technically those last two are also botanically berries, but given how we usually consume them, and given how rich they are in polyphenols, they merit a special mention.
See for example: Effects of nutritional interventions on BDNF concentrations in humans: a systematic review
Some supplements can help neuron (re)growth too, so if you haven’t already, you might want to check out our previous main feature on lion’s mane mushroom, a supplement which does exactly that.
For those who like videos, you may also enjoy this TED talk by neuroscientist Dr. Sandrine Thuret:
Prefer text? Click here to read the transcript
-
Which gut drugs might end up in a lawsuit? Are there really links with cancer and kidney disease? Should I stop taking them?
10almonds is reader-supported. We may, at no cost to you, receive a portion of sales if you purchase a product through a link in this article.
Common medicines used to treat conditions including heartburn, reflux, indigestion and stomach ulcers may be the subject of a class action lawsuit in Australia.
Lawyers are exploring whether long-term use of these over-the-counter and prescription drugs is linked to stomach cancer or kidney disease.
The potential class action follows the settlement of a related multi-million dollar lawsuit in the United States. Last year, international pharmaceutical company AstraZeneca settled for US$425 million (A$637 million) after patients made the case that two of its drugs caused significant and potentially life-threatening side effects.
Specifically, patients claimed the company’s drugs Nexium (esomeprazole) and Prilosec (omeprazole) increased the risk of kidney damage.
Which drugs are involved in Australia?
The class of drugs we’re talking about are “proton pump inhibitors” (sometimes called PPIs). In the case of the Australian potential class action, lawyers are investigating:
- Nexium (esomeprazole)
- Losec, Asimax (omeprazole)
- Somac (pantoprazole)
- Pariet (rabeprazole)
- Zoton (lansoprazole).
Depending on their strength and quantity, these medicines are available over-the-counter in pharmacies or by prescription.
They have been available in Australia for more than 20 years and are in the top ten medicines dispensed through the Pharmaceutical Benefits Scheme.
They are used to treat conditions exacerbated by stomach acid. These include heartburn, gastric reflux and indigestion. They work by blocking the protein responsible for pumping acid into the stomach.
These drugs are also prescribed with antibiotics to treat the bacterium Helicobacter pylori, which causes stomach ulcers and stomach cancer.
What do we know about the risks?
Appropriate use of proton pump inhibitors plays an important role in treating several serious digestive problems. Like all medicines, there are risks associated with their use depending on how much and how long they are used.
When proton pump inhibitors are used appropriately for the short-term treatment of stomach problems, they are generally well tolerated, safe and effective.
Their risks are mostly associated with long-term use (more than a year), due to the negative effects of chronically reduced stomach acid. In elderly people, these include an increased risk of gut and respiratory tract infections, nutrient deficiencies and fractures. Long-term use of these drugs in elderly people has also been associated with an increased risk of dementia.
In children, there is an increased risk of serious infection associated with using these drugs, regardless of how long they are used.
How about the cancer and kidney risk?
Currently, the Australian consumer medicine information sheets that come with the medicines, like this one for esomeprazole, do not list stomach cancer or kidney injury as a risk associated with using proton pump inhibitors.
So what does the evidence say about the risk?
Over the past few years, there have been large studies based on observing people in the general population who have used proton pump inhibitors. These studies have found people who take them are almost two times more likely to develop stomach cancer and 1.7 times more likely to develop chronic kidney disease when compared with people who are not taking them.
In particular, these studies report that users of the drugs lansoprazole and pantoprazole have about a three to four times higher risk than non-users of developing chronic kidney disease.
While these observational studies show a link between using the drugs and these outcomes, we cannot say from this evidence that one causes the other.
What can I do if I’m worried?
Several digestive conditions, especially reflux and heartburn, may benefit from simple dietary and lifestyle changes. But the overall evidence for these is not strong and how well they work varies between individuals.
Still, it may help to avoid large meals in the two to three hours before bed, and to reduce your intake of fatty food, alcohol and coffee. Eating slowly, and losing weight if you are overweight, may also help your symptoms.
There are also medications other than proton pump inhibitors that can be used for heartburn, reflux and stomach ulcers.
These include over-the-counter antacids (such as Gaviscon and Mylanta), which work by neutralising the acidic environment of the stomach.
Alternatives for prescription drugs include nizatidine and famotidine. These work by blocking histamine receptors in the stomach, which decreases stomach acid production.
If you are concerned about your use of proton pump inhibitors it is important to speak with your doctor or pharmacist before you stop using them. That’s because when you have been using them for a while, stopping them may result in increased or “rebound” acid production.
Nial Wheate, Professor and Director – Academic Excellence, Macquarie University; Joanna Harnett, Senior Lecturer – Sydney Pharmacy School, Faculty of Medicine and Health, University of Sydney, and Wai-Jo Jocelin Chan, Pharmacist and Associate Lecturer, University of Sydney
This article is republished from The Conversation under a Creative Commons license. Read the original article.