Are Supplements Worth Taking?
10almonds is reader-supported. We may, at no cost to you, receive a portion of sales if you purchase a product through a link in this article.
It’s Q&A Day at 10almonds!
Have a question or a request? We love to hear from you!
In cases where we’ve already covered something, we might link to what we wrote before, but will always be happy to revisit any of our topics again in the future too—there’s always more to say!
As ever: if the question/request can be answered briefly, we’ll do it here in our Q&A Thursday edition. If not, we’ll make a main feature of it shortly afterwards!
So, no question/request too big or small 😎
❝There seems to be a lot of suggestions to take supplements for every thing, from your head to your toes. I know it’s up to the individual but what are the facts or stats to support taking them versus not?❞
Short answer:
- supplementary vitamins and minerals are probably neither needed nor beneficial for most (more on this later) people, with the exception of vitamin D, which most people over a certain age need unless they have pale skin and get plenty of sunlight.
- other kinds of supplement can be very beneficial or useless, depending on what they are, of course, and also your own personal physiology.
With regard to vitamins and minerals, in most cases they should be covered by a healthy balanced diet, and the bioavailability is usually better from food anyway (bearing in mind, we say vitamin such-and-such, or name an elemental mineral, but there are usually multiple, often many, forms of each—and supplements will usually use whatever is cheapest to produce and most chemically stable).
However! It is also quite common for food to be grown in whatever way is cheapest and produces the greatest visible yield, rather than for micronutrient coverage.
This goes for most if not all plants, and it goes extra for animals (because of the greater costs and inefficiencies involved in rearing animals).
We wrote about this a while back in a mythbusting edition of 10almonds, covering:
- Food is less nutritious now than it used to be: True or False?
- Supplements aren’t absorbed properly and thus are a waste of money: True or False?
- We can get everything we need from our diet: True or False?
You can read the answers and explanations, and see the science that we presented, here:
Do We Need Supplements, And Do They Work?
You may be wondering: what was that about “most (more on this later) people”?
Sometimes someone will have a nutrient deficiency that can’t be easily remedied with diet. Often this occurs when their body:
- has trouble absorbing that nutrient, or
- does something inconvenient with it, rendering much of it unusable once absorbed.
…which is why calcium, iron, vitamin B12, and vitamin D are quite common supplements to get prescribed by doctors after a certain age.
Still, it’s best to try getting things from one’s diet first of all, of course.
Things we can’t (reasonably) get from food
This is another category entirely. There are many supplements that are convenient forms of things readily found in a lot of food, such as vitamins and minerals, or phytochemicals like quercetin, fisetin, and lycopene (to name just a few of very many).
Then there are things not readily found in food, or at least, not in food that’s readily available in supermarkets.
For example, if you go to your local supermarket and ask where the mimosa is, they’ll try to sell you a cocktail mix instead of the roots, bark, or leaves of a tropical tree. It is also unlikely they’ll stock lion’s mane mushroom, or reishi.
If perchance you do get the chance to acquire fresh lion’s mane mushroom, by the way, give it a try! It’s delicious shallow-fried in a little olive oil with black pepper and garlic.
In short, this last category, the things most of us can’t reasonably get from food without going far out of our way, are the kind of thing whereby supplements actually can be helpful.
And yet, still, not every supplement has evidence to support the claims made by its sellers, so it’s good to do your research beforehand. We do that on Mondays, with our “Research Review Monday” editions, which you can find in our searchable research review archive ← we also review some drugs that can’t be classified as supplements, but mostly, it’s supplements.
Take care!
Don’t Forget…
Did you arrive here from our newsletter? Don’t forget to return to the email to continue learning!
Recommended
Learn to Age Gracefully
Join the 98k+ American women taking control of their health & aging with our 100% free (and fun!) daily emails:
Imposter Syndrome (and why almost everyone has it)
Imposter syndrome is the pervasive idea that we’re not actually good enough, people think we are better than we are, and at any moment we’re going to get found out and disappoint everyone.
Beyond the workplace
Imposter syndrome is most associated with professionals. It can range from a medical professional who feels like they’ve been projecting an image of confidence too much, to a writer or musician who is sure that their next piece will never live up to the acclaim of previous pieces and everyone will suddenly realize they don’t know what they’re doing, to a middle-manager who feels like nobody above or below them realizes how little they know how to do.
But! Less talked-about (but no less prevalent) is imposter syndrome in other areas of life. New parents tend to feel this strongly, as can the “elders” of a family that everyone looks to for advice and strength and support. Perhaps worst is when the person most responsible for the finances of a household feels like everyone just trusts them to keep everything running smoothly, and maybe they shouldn’t because it could all come crashing down at any moment and everyone will see them for the hopeless shambles of a human being that they really are.
Feelings are not facts
And yet (while everyone makes mistakes sometimes) the reality is that we’re all doing our best. Given that imposter syndrome affects up to 82% of people, let’s remember to have some perspective. Everyone feels like they’re winging it sometimes. Everyone feels the pressure.
Well, perhaps not everyone. There’s that other 18%. Some people are sure they’re the best thing ever. Then again, there’s probably some in that 18% that actually feel worse than the 82%—they just couldn’t admit it, even in an anonymized study.
But one thing’s for sure: it’s very, very common. Especially in high-performing women, by the way, and people of color. In other words, people who typically “have to do twice as much to get recognized as half as good”.
That said, the flipside of this is that people who are not in any of those categories may feel “everything is in my favor, so I really have no excuse to not achieve the most”, and can sometimes take very extreme actions to try to avoid perceived failure, and it can be their family that pays the price.
Things to remember
If you find imposter syndrome nagging at you, remember these things:
- There are people far less competent than you, doing the same thing
- Nobody knows how to do everything themselves, especially at first
- If you don’t know how to do something, you can usually find out
- There is always someone to ask for help, or at least advice, or at least support
At the end of the day, we evolved to eat fruit and enjoy the sun. None of us are fully equipped for all the challenges of the modern world, but if we do our reasonable best, and look after each other (and that means that you too, dear reader, deserve looking after as well), we can all do ok.
‘Noisy’ autistic brains seem better at certain tasks. Here’s why neuroaffirmative research matters
Pratik Raul, University of Canberra; Jeroen van Boxtel, University of Canberra, and Jovana Acevska, University of Canberra
Autism is a neurodevelopmental difference associated with specific experiences and characteristics.
For decades, autism research has focused on behavioural, cognitive, social and communication difficulties. These studies highlighted how autistic people face issues with everyday tasks that allistic (meaning non-autistic) people do not. Some difficulties may include recognising emotions or social cues.
But some research, including our own study, has explored specific advantages in autism. Studies have shown that in some cognitive tasks, autistic people perform better than allistic people. Autistic people may have greater success in identifying a simple shape embedded within a more complex design, arranging blocks of different shapes and colours, or spotting an object within a cluttered visual environment (similar to Where’s Wally?). Such enhanced performance has been recorded in babies as young as nine months who show emerging signs of autism.
How and why do autistic individuals do so well on these tasks? The answer may be surprising: more “neural noise”.
What is neural noise?
Generally, when you think of noise, you probably think of auditory noise, the ups and downs in the amplitude of sound frequencies we hear.
A similar thing happens in the brain with random fluctuations in neural activity. This is called neural noise.
This noise is always present, and comes on top of any brain activity caused by things we see, hear, smell and touch. This means that in the brain, an identical stimulus that is presented multiple times won’t cause exactly the same activity. Sometimes the brain is more active, sometimes less. In fact, even the response to a single stimulus or event will fluctuate continuously.
Neural noise in autism
There are many sources of neural noise in the brain. These include how the neurons become excited and calm again, changes in attention and arousal levels, and biochemical processes at the cellular level, among others. An allistic brain has mechanisms to manage and use this noise. For instance, cells in the hippocampus (the brain’s memory system) can make use of neural noise to enhance memory encoding and recall.
Evidence for high neural noise in autism can be seen in electroencephalography (EEG) recordings, where increased levels of neural fluctuations were observed in autistic children. This means their neural activity is less predictable, showing a wider range of activity (higher ups and downs) in response to the same stimulus.
In simple terms, if we imagine the EEG responses like a sound wave, we would expect to see small ups and downs (amplitude) in allistic brains each time they encounter a stimulus. But autistic brains seem to show bigger ups and downs, demonstrating greater amplitude of neural noise.
Many studies have linked this noisy autistic brain with cognitive, social and behavioural difficulties.
But could noise be a bonus?
The diagnosis of autism has a long clinical history. A shift from the medical to a more social model has also seen advocacy for it to be reframed as a difference, rather than a disorder or deficit. This change has also entered autism research. Neuroaffirming research can examine the uniqueness and strengths of neurodivergence.
Psychology and perception researcher David Simmons and colleagues at the University of Glasgow were the first to suggest that while high neural noise is generally a disadvantage in autism, it can sometimes provide benefits due to a phenomenon called stochastic resonance. This is where optimal amounts of noise can enhance performance. In line with this theory, high neural noise in the autistic brain might enhance performance for some cognitive tasks.
Our 2023 research explores this idea. We recruited participants from the general population and investigated their performance on letter-detection tasks. At the same time, we measured their level of autistic traits.
We performed two letter-detection experiments (one in a lab and one online) where participants had to identify a letter when displayed among background visual static of various intensities.
By using the static, we added additional visual noise to the neural noise already present in our participants’ brains. We hypothesised the visual noise would push participants with low internal brain noise (or low autistic traits) to perform better (as suggested by previous research on stochastic resonance). The more interesting prediction was that noise would not help individuals who already had a lot of brain noise (that is, those with high autistic traits), because their own neural noise already ensured optimal performance.
Indeed, one of our experiments showed people with high neural noise (high autistic traits) did not benefit from additional noise. Moreover, they showed superior performance (greater accuracy) relative to people with low neural noise when the added visual static was low. This suggests their own neural noise already caused a natural stochastic resonance effect, resulting in better performance.
It is important to note we did not include clinically diagnosed autistic participants, but overall, we showed the theory of enhanced performance due to stochastic resonance in autism has merits.
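For readers who like to see the mechanism, stochastic resonance is easy to demonstrate in a toy simulation. The sketch below (ours, purely illustrative—the signal, threshold, and noise levels are arbitrary choices, not the study’s actual letter-detection stimuli) shows a weak periodic signal that never crosses a detection threshold on its own: a moderate dose of added noise makes the threshold crossings track the signal, while too little noise registers nothing and too much makes the crossings near-random.

```python
import numpy as np

def detection_score(noise_sd: float, threshold: float = 1.0,
                    n: int = 10_000, seed: int = 0) -> float:
    """Correlation between threshold crossings and a subthreshold signal."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0, 20 * np.pi, n)        # ten cycles of the signal
    signal = 0.9 * np.sin(t)                 # peaks at 0.9: never crosses 1.0 alone
    noisy = signal + rng.normal(0.0, noise_sd, n)
    crossings = (noisy > threshold).astype(float)
    if crossings.std() == 0.0:               # no crossings at all: nothing detected
        return 0.0
    return float(np.corrcoef(crossings, signal)[0, 1])

low, medium, high = (detection_score(sd) for sd in (0.05, 0.3, 3.0))
# The moderate-noise condition tracks the hidden signal best: low < medium > high
```

The inverted-U relationship—performance peaking at an intermediate noise level—is the signature of stochastic resonance; the suggestion in the research above is that high internal neural noise may already sit near that peak for some visual tasks.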
Why is this important?
Autistic people face ignorance, prejudice and discrimination that can harm wellbeing. Poor mental and physical health, reduced social connections and increased “camouflaging” of autistic traits are some of the negative impacts that autistic people face.
So, research underlining and investigating the strengths inherent in autism can help reduce stigma, allow autistic people to be themselves and acknowledge autistic people do not require “fixing”.
The autistic brain is different. It comes with limitations, but it also has its strengths.
Pratik Raul, PhD candidate, University of Canberra; Jeroen van Boxtel, Associate professor, University of Canberra, and Jovana Acevska, Honours Graduate Student, University of Canberra
This article is republished from The Conversation under a Creative Commons license. Read the original article.
What’s the difference between ‘strep throat’ and a sore throat? We’re developing a vaccine for one of them
What’s the difference? is a new editorial product that explains the similarities and differences between commonly confused health and medical terms, and why they matter.
It’s the time of the year for coughs, colds and sore throats. So you might have heard people talk about having a “strep throat”.
But what is that? Is it just a bad sore throat that goes away by itself in a day or two? Should you be worried?
Here’s what we know about the similarities and differences between strep throat and a sore throat, and why they matter.
How are they similar?
It’s difficult to tell the difference between a sore throat and strep throat as they look and feel similar.
People usually have a fever, a bright red throat and sometimes painful lumps in the neck (swollen lymph nodes). A throat swab can help diagnose strep throat, but the results can take a few days.
Thankfully, both types of sore throat usually get better by themselves.
How are they different?
Most sore throats are caused by viruses such as common cold viruses, the flu (influenza virus), or the virus that causes glandular fever (Epstein-Barr virus).
These viral sore throats can occur at any age. Antibiotics don’t work against viruses so if you have a viral sore throat, you won’t get better faster if you take antibiotics. You might even have some unwanted antibiotic side-effects.
But strep throat is caused by Streptococcus pyogenes bacteria, also known as strep A. Strep throat is most common in school-aged children, but can affect other age groups. In some cases, you may need antibiotics to avoid some rare but serious complications.
In fact, the potential for complications is one key difference between a viral sore throat and strep throat.
Generally, a viral sore throat is very unlikely to cause complications (one exception is those caused by Epstein-Barr virus which has been associated with illnesses such as chronic fatigue syndrome, multiple sclerosis and certain cancers).
But strep A can cause invasive disease, a rare but serious complication. This is when bacteria living somewhere on the body (usually the skin or throat) get into another part of the body where there shouldn’t be bacteria, such as the bloodstream. This can make people extremely sick.
Invasive strep A infections and deaths have been rising in recent years around the world, especially in young children and older adults. This may be due to a number of factors such as increased social mixing at this stage of the COVID pandemic and an increase in circulating common cold viruses. But overall the reasons behind the increase in invasive strep A infections are not clear.
Another rare but serious side effect of strep A is autoimmune disease. This is when the body’s immune system makes antibodies that react against its own cells.
The most common example is rheumatic heart disease. This is when the body’s immune system damages the heart valves a few weeks or months after a strep throat or skin infection.
Around the world more than 40 million people live with rheumatic heart disease and more than 300,000 die from its complications every year, mostly in developing countries.
However, parts of Australia have some of the highest rates of rheumatic heart disease in the world. More than 5,300 Indigenous Australians live with it.
Strep throat is caused by Streptococcus bacteria and can be treated with antibiotics if needed.
Why do some people get sicker than others?
We know strep A infections and rheumatic heart disease are more common in low socioeconomic communities where poverty and overcrowding lead to increased strep A transmission and disease.
However, we don’t fully understand why some people only get a mild infection with strep throat while others get very sick with invasive disease.
We also don’t understand why some people get rheumatic heart disease after strep A infections when most others don’t. Our research team is trying to find out.
How about a vaccine for strep A?
There is no strep A vaccine but many groups in Australia, New Zealand and worldwide are working towards one.
For instance, Murdoch Children’s Research Institute and Telethon Kids Institute have formed the Australian Strep A Vaccine Initiative to develop strep A vaccines. There’s also a global consortium working towards the same goal.
Companies such as Vaxcyte and GlaxoSmithKline have also been developing strep A vaccines.
What if I have a sore throat?
Most sore throats will get better by themselves. But if yours doesn’t get better in a few days or you have ongoing fever, see your GP.
Your GP can examine you, consider running some tests and help you decide if you need antibiotics.
Kim Davis, General paediatrician and paediatric infectious diseases specialist, Murdoch Children’s Research Institute; Alma Fulurija, Immunologist and the Australian Strep A Vaccine Initiative project lead, Telethon Kids Institute, and Myra Hardy, Postdoctoral Researcher, Infection, Immunity and Global Health, Murdoch Children’s Research Institute
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The Brain As A Work-In-Progress
And The Brain Goes Marching On!
In Tuesday’s newsletter, we asked you “when does the human brain stop developing?” and got the below-described set of responses:
- About 64% of people said “Never”
- About 16% of people said “25 years”
- About 9% of people said “65 years”
- About 5% of people said “13 years”
- About 3% of people said “18 years”
- About 3% of people said “45 years”
Some thoughts, before we get into the science:
An alternative wording for the original question was “when does the human brain finish developing”; the meaning is the same but the feeling is slightly different:
- “When does the human brain stop developing?” focuses attention on the idea of cessation, and will skew responses to later ages
- “When does the human brain finish developing?” focuses attention on a kind of “is it done yet?”, and will skew responses to earlier ages
Ultimately, since we had to choose one word or another, we picked the shortest one, but it would have been interesting if we could have done an A/B test, and asked half one way, and half the other way!
Why we picked those ages
We picked those ages as poll options for reasons people might be drawn to them:
- 13 years: in English-speaking cultures, an important milestone of entering adolescence (note that the concept of a “teenager” is not precisely universal as most languages do not have “-teen” numbers in the same way; the concept of “adolescent” may thus be tied to other milestones)
- 18 years: age of legal majority in N. America and many other places
- 25 years: age popularly believed to be when the brain is finished developing, due to a study that we’ll talk about shortly (we guess that’s why there’s a spike in our results for this, too!)
- 45 years: age where many midlife hormonal changes occur, and many professionals are considered to have peaked in competence and start looking towards retirement
- 65 years: age considered “senior” in much of N. America and many other places, as well as the cut-off and/or starting point for a lot of medical research
Notice, therefore, how a lot of things are coming from places they really shouldn’t. For example, because there are many studies saying “n% of people over 65 get Alzheimer’s” or “n% of people over 65 get age-related cognitive decline”, etc, 65 becomes the age where we start expecting this—because of an arbitrary human choice of where to draw the cut-off for the study enrollment!
Similarly, we may look at common ages of legal majority, or retirement pensions, and assume “well it must be for a good reason”, and dear reader, those reasons are more often economically motivated than they are biologically reasoned.
So, what does the science say?
Our brains are never finished developing: True or False?
True! At least, if we define “finished developing” as “neurogenesis has ceased and neuroplasticity is no longer in effect”—neither of which ever actually happens.
Glossary:
- Neurogenesis: the process of creating new brain cells
- Neuroplasticity: the process of the brain adapting to changes by essentially rebuilding itself to suit our perceived current needs
We say “perceived” because sometimes neuroplasticity can do very unhelpful things to us (e.g: psychological trauma, or even just bad habits), but on a biological level, it is always doing its best to serve our overall success as an organism.
For a long time it was thought that we don’t do neurogenesis at all as adults, but this was found to be untrue:
How To Grow New Brain Cells (At Any Age)
Summary of conclusions of the above: we’re all growing new brain cells at every age, even if we be in our 80s and with Alzheimer’s disease, but there are things we can do to enhance our neurogenic potential along the way.
Neuroplasticity will always be somewhat enhanced by neurogenesis (after all, new neurons get given jobs to do), and we reviewed a great book about the marvels of neuroplasticity including in older age:
Our brains are still developing up to the age of 25: True or False?
True! And then it keeps on developing after that, too. Now this is abundantly obvious considering what we just talked about, but see what a difference the phrasing makes? Now it makes it sound like it stops at 25, which this statement doesn’t claim at all—it only speaks for the time up to that age.
A lot of the popular press about “the brain isn’t fully mature until the age of 25” stems from a 2006 study that found:
❝For instance, frontal gray matter volume peaks at about age 11.0 years in girls and 12.1 years in boys, whereas temporal gray matter volume peaks at about age 16.7 years in girls and 16.2 years in boys. The dorsal lateral prefrontal cortex, important for controlling impulses, is among the latest brain regions to mature without reaching adult dimensions until the early 20s.❞
Source: Structural Magnetic Resonance Imaging of the Adolescent Brain
There are several things to note here:
- The above statement is talking about the physical size of the brain growing
- Nowhere does it say “and stops developing at 25”
However… The study only looked at brains up to the age of 25. After that, they stopped looking, because the study was about “the adolescent brain” so there has to be a cut-off somewhere, and that was the cut-off they chose.
This is the equivalent of saying “it didn’t stop raining until four o’clock” when the reality is that four o’clock is simply when you gave up on checking.
The study didn’t misrepresent this, by the way, but the popular press did!
Another 2012 study looked at various metrics of brain development, and found:
- Synapse overproduction into the teens
- Cortex pruning into the late 20s
- Prefrontal pruning into middle age at least (they stopped looking)
- Myelination beyond middle age (they stopped looking)
Source: Experience and the developing prefrontal cortex ← check out figure 1, and make sure you’re looking at the human data not the rat data
So how’s the most recent research looking?
Here’s a 2022 study that looked at 123,984 brain scans spanning the age range from mid-gestation to 100 postnatal years, and as you can see from its own figure 1, most (if not all) brain metrics keep developing for life; even though most slow down at some point, they don’t stop:
Brain charts for the human lifespan ← check out figure 1; don’t get too excited about the ventricular volume column as that is basically “brain that isn’t being a brain”. Do get excited about the rest, though!
Want to know how not to get caught out by science being misrepresented by the popular press? Check out:
How Science News Outlets Can Lie To You (Yes, Even If They Cite Studies!)
Take care!
See what other 10almonds subscribers are asking!
It’s Q&A Day at 10almonds!
Q: I would be interested in learning more about collagen and especially collagen supplements/powders and of course if needed, what is the best collagen product to take. What is collagen? Why do we need to supplement the collagen in our body? Thank you PS love the information I am receiving in the news letters. Keep it up
We’re glad you’re enjoying them! Your request prompted us to do our recent Research Review Monday main feature on collagen supplementation—we hope it helped, and if you’ve any more specific (or other) question, go ahead and let us know! We love questions and requests
Q: Great article about the health risks of salt to organs other than the heart! Is pink Himalayan salt healthier?
Thank you! And, no, sorry. Any salt that is sodium chloride has the exact same effect because it’s chemically the same substance, even if impurities (however pretty) make it look different.
If you want a lower-sodium salt, we recommend the kind that says “low sodium” or “reduced sodium” or similar. Check the ingredients, it’ll probably be sodium chloride cut with potassium chloride. Potassium chloride is not only not a source of sodium, but also, it’s a source of potassium, which (unlike sodium) most of us could stand to get a little more of.
For your convenience: here’s an example on Amazon!
Bonus: you can get a reduced sodium version of pink Himalayan salt too!
Q: Can you let us know about more studies that have been done on statins? Are they really worth taking?
That is a great question! We imagine it might have been our recent book recommendation that prompted it? It’s quite a broad question though, so we’ll do that as a main feature in the near future!
Q: Is MSG healthier than salt in terms of sodium content or is it the same or worse?
Great question, and for that matter, MSG itself is a great topic for another day. But your actual question, we can readily answer here and now:
- Firstly, by “salt” we’re assuming from context that you mean sodium chloride.
- Both salt and MSG do contain sodium. However…
- MSG contains only about a third of the sodium that salt does, gram-for-gram.
- It’s still wise to be mindful of it, though. Same with sodium in other ingredients!
- Baking soda contains about twice as much sodium, gram for gram, as MSG.
Wondering why this happens?
Salt (sodium chloride, NaCl) is equal parts sodium and chlorine, by atom count, but sodium’s atomic mass is lower than chlorine’s, so 100g of salt contains only 39.34g of sodium.
Baking soda (sodium bicarbonate, NaHCO₃) is one part sodium for one part hydrogen, one part carbon, and three parts oxygen. Taking each of their diverse atomic masses into account, we see that 100g of baking soda contains 27.4g sodium.
MSG (monosodium glutamate, C₅H₈NO₄Na, usually sold as its monohydrate, C₅H₈NO₄Na·H₂O) is only one part sodium for 5 parts carbon, 8 parts hydrogen, 1 part nitrogen, and 4 parts oxygen, plus a molecule of water… And all those other atoms put together weigh a lot (comparatively), so 100g of MSG contains only about 12.3g sodium.
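If you’d like to check that arithmetic yourself, here’s a quick Python sketch using standard atomic masses (note: commercial MSG is usually sold as the monohydrate, C₅H₈NO₄Na·H₂O, which is what gives the ~12.3g figure; the anhydrous form would come out nearer 13.6g):

```python
# Standard atomic masses (g/mol), rounded to three decimal places
ATOMIC_MASS = {"Na": 22.990, "Cl": 35.453, "H": 1.008,
               "C": 12.011, "N": 14.007, "O": 15.999}

def sodium_per_100g(formula: dict) -> float:
    """Grams of sodium in 100 g of a compound given as {element: atom count}."""
    molar_mass = sum(ATOMIC_MASS[element] * count
                     for element, count in formula.items())
    return 100 * ATOMIC_MASS["Na"] * formula["Na"] / molar_mass

salt = {"Na": 1, "Cl": 1}                         # NaCl
baking_soda = {"Na": 1, "H": 1, "C": 1, "O": 3}   # NaHCO3
msg = {"Na": 1, "C": 5, "H": 10, "N": 1, "O": 5}  # monohydrate: water's 2 H and 1 O folded in

print(round(sodium_per_100g(salt), 2))            # → 39.34
print(round(sodium_per_100g(baking_soda), 2))     # → 27.37
print(round(sodium_per_100g(msg), 2))             # → 12.29
```

The small differences from the figures quoted above come down to which decimal places of the atomic masses you use.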
Q: Thanks for the info about dairy. As a vegan, I look forward to a future comment about milk alternatives
Thanks for bringing it up! What we research and write about is heavily driven by subscriber feedback, so notes like this really help us know there’s an audience for a given topic!
We’ll do a main feature on it, to do it justice. Watch out for Research Review Monday!
Pine Nuts vs Peanuts – Which is Healthier?
Our Verdict
When comparing pine nuts to peanuts, we picked the pine nuts.
Why?
An argument could be made for either, honestly, as it depends on what we prioritize the most. These are both very high-calorie foods, and/but are far from empty calories, as they both contain many nutrients. Obviously, if you are allergic to nuts, this one is just not a comparison for you, sorry.
Looking at the macros first, peanuts are higher in protein, carbs, and fiber, while pine nuts are higher in fats—though the fats are healthy, being mostly polyunsaturated, with about a third of the total fats monounsaturated, and a low amount of saturated fat (peanuts have nearly 2x the saturated fat). On balance, we’ll call the macros category a moderate win for peanuts, though.
In terms of vitamins, peanuts have more of vitamins B1, B3, B5, B6, and B9, while pine nuts have more of vitamins A, B2, C, E, K, and choline. All in all, a marginal win for pine nuts.
In the category of minerals, peanuts have more calcium and selenium, while pine nuts have more copper, iron, magnesium, manganese, phosphorus, and zinc. An easy win for pine nuts, even before we take into account that peanuts have nearly 10x as much sodium. And yes, we are talking about the raw nuts, not nuts that have been roasted and salted.
Adding up the categories gives a win for pine nuts—but if you have certain particular priorities, you might still prefer peanuts for the areas in which peanuts are stronger.
Of course, the best solution is to enjoy both!
Want to learn more?
You might like to read:
Why You Should Diversify Your Nuts!
Take care!