
Fennel vs Artichoke – Which is Healthier?
10almonds is reader-supported. We may, at no cost to you, receive a portion of sales if you purchase a product through a link in this article.
Our Verdict
When comparing fennel to artichoke, we picked the artichoke.
Why?
Both are great! But artichoke wins on nutritional density.
In terms of macros, artichoke has more protein and more fiber, for only slightly more carbs.
Vitamins are another win for artichoke, boasting more of vitamins B1, B2, B3, B5, B6, B9, and choline. Meanwhile, fennel has more of vitamins A, E, and K, which is also very respectable but does allow artichoke a 7:3 lead.
In the category of minerals, artichoke has a lot more copper, iron, magnesium, manganese, and phosphorus, while fennel has a little more calcium, potassium, and selenium.
One other relevant factor is that fennel is a moderate appetite suppressant, which may be good or bad depending on your food-related goals.
All in all though, we say the artichoke wins by virtue of its greater abundance of nutrients!
Want to learn more?
You might like to read:
What Matters Most For Your Heart? ← appropriately enough, with fennel hearts and artichoke hearts!
Take care!
Don’t Forget…
Did you arrive here from our newsletter? Don’t forget to return to the email to continue learning!
Learn to Age Gracefully
Join the 98k+ American women taking control of their health & aging with our 100% free (and fun!) daily emails:
-
Cherries’ Very Healthy Wealth Of Benefits!
Cherries’ Health Benefits Simply Pop
First, be aware, there are different kinds:
Sweet & Sour
Cherries can be divided into sweet vs sour. These are mostly nutritionally similar, though sour ones do have some extra benefits.
Sweet and sour cherries are closely related but botanically different plants; it’s not simply a matter of ripeness (or preparation).
These can mostly be sorted into varieties of Prunus avium and Prunus cerasus, respectively:
Cherry Antioxidants: From Farm to Table
Sour cherry varieties include morello and montmorency, so look out for those names in particular when doing your grocery-shopping.
You may remember that it’s a good rule of thumb that foods that are more “bitter, astringent, or pungent” will tend to have a higher polyphenol content (that’s good):
Enjoy Bitter Foods For Your Heart & Brain
Juiced up
Almost certainly for reasons of budget and convenience, as much as for standardization, most studies into the benefits of cherries have been conducted using concentrated cherry juice as a supplement.
At home, we need not worry so much about standardization, and our budget and convenience are ours to manage. To this end, as a general rule of thumb, whole fruits are pretty much always better than juice:
Which Sugars Are Healthier, And Which Are Just The Same?
Antioxidant & anti-inflammatory!
Cherries are a very good source of antioxidants, and as such they also reduce inflammation, which in turn means ameliorating inflammatory conditions, from common things like arthritis…
…to less common things like gout:
Cherry Consumption and the Risk of Recurrent Gout Attacks
This can also be measured by monitoring uric acid metabolites:
Consumption of cherries lowers plasma urate in healthy women
Anti-diabetic effect
Most of the studies on this have been rat studies, and the human studies have been less “the effect of cherry consumption on diabetes” and more a matter of separate studies adding up to this conclusion, in the manner of “cherries have this substance, this substance has this effect, therefore cherries will have this effect”. You can see an example of this discussed over the course of 15 studies, here:
A Review of the Health Benefits of Cherries ← skip to section 2.2.1: “Cherry Intake And Diabetes”
In short, the jury is out on cherry juice, but eating cherries themselves (much like getting plenty of fruit in general) is considered good against diabetes.
Good for healthy sleep
For this one, the juice suffices (actual cherries are still recommended, but the juice gave clear significant positive results):
Pilot Study of the Tart Cherry Juice for the Treatment of Insomnia and Investigation of Mechanisms ← this was specifically in people over the age of 50
Importantly, it’s not that cherries have a sedative effect, but rather they support the body’s ability to produce melatonin adequately when the time comes:
Effect of tart cherry juice (Prunus cerasus) on melatonin levels and enhanced sleep quality
Post-exercise recovery
Cherries are well-known for boosting post-exercise recovery, though they may actually improve performance during exercise too, if eaten beforehand.
For example, these marathon-runners ran on average 13% faster than their placebo-control counterparts:
As for its recovery benefits, we wrote about this before:
How To Speed Up Recovery After A Workout (According To Actual Science)
Want to get some?
We recommend your local supermarket (or farmer’s market!), but if for any reason you prefer to take a supplement, here’s an example product on Amazon
Enjoy!
-
What’s the difference between ADD and ADHD?
Around one in 20 people has attention-deficit hyperactivity disorder (ADHD). It’s one of the most common neurodevelopmental disorders in childhood and often continues into adulthood.
ADHD is diagnosed when people experience problems with inattention and/or hyperactivity and impulsivity that negatively impact them at school or work, in social settings and at home.
Some people call the condition attention-deficit disorder, or ADD. So what’s the difference?
In short, what was previously called ADD is now known as ADHD. So how did we get here?
Let’s start with some history
The first clinical description of children with inattention, hyperactivity and impulsivity was in 1902. British paediatrician Professor George Still presented a series of lectures about his observations of 43 children who were defiant, aggressive, undisciplined and extremely emotional or passionate.
Since then, our understanding of the condition evolved and made its way into the Diagnostic and Statistical Manual of Mental Disorders, known as the DSM. Clinicians use the DSM to diagnose mental health and neurodevelopmental conditions.
The first DSM, published in 1952, did not include a specific related child or adolescent category. But the second edition, published in 1968, included a section on behaviour disorders in young people. It referred to ADHD-type characteristics as “hyperkinetic reaction of childhood or adolescence”. This described the excessive, involuntary movement of children with the disorder.
It took a while for ADHD-type behaviour to make it into the diagnostic manual.

In the early 1980s, the third DSM added a condition it called “attention deficit disorder”, listing two types: attention deficit disorder with hyperactivity (ADDH) and attention deficit disorder as the subtype without the hyperactivity.
However, seven years later, a revised DSM (DSM-III-R) replaced ADD (and its two sub-types) with ADHD and the three sub-types we have today:
- predominantly inattentive
- predominantly hyperactive-impulsive
- combined.
Why change ADD to ADHD?
ADHD replaced ADD in the DSM-III-R in 1987 for a number of reasons.
First was the controversy and debate over the presence or absence of hyperactivity: the “H” in ADHD. When ADD was initially named, little research had been done to determine the similarities and differences between the two sub-types.
The next issue was around the term “attention-deficit” and whether these deficits were similar or different across both sub-types. Questions also arose about the extent of these differences: if these sub-types were so different, were they actually different conditions?
Meanwhile, a new focus on inattention (an “attention deficit”) recognised that children with inattentive behaviours may not necessarily be disruptive and challenging but are more likely to be forgetful and daydreamers.
Why do some people use the term ADD?
There was a surge of diagnoses in the 1980s. So it’s understandable that some people still hold onto the term ADD.
Some may identify as having ADD out of habit, because this is what they were originally diagnosed with, or because they don’t have hyperactivity/impulsivity traits.
Others who don’t have ADHD may use the term they came across in the 80s or 90s, not knowing the terminology has changed.
How is ADHD currently diagnosed?
The three sub-types of ADHD, outlined in the DSM-5, are:
- predominantly inattentive. People with the inattentive sub-type have difficulty sustaining concentration, are easily distracted and forgetful, lose things frequently, and are unable to follow detailed instructions
- predominantly hyperactive-impulsive. Those with this sub-type find it hard to be still, need to move constantly in structured situations, frequently interrupt others, talk non-stop and struggle with self control
- combined. Those with the combined sub-type experience the characteristics of those who are inattentive and hyperactive-impulsive.
ADHD diagnoses continue to rise among children and adults. And while ADHD was commonly diagnosed in boys, more recently we have seen growing numbers of girls and women seeking diagnoses.
However, some international experts contest the expanded definition of ADHD, driven by clinical practice in the United States. They argue the challenges of unwanted behaviours and educational outcomes for young people with the condition are uniquely shaped by each country’s cultural, political and local factors.
Regardless of the name change to reflect what we know about the condition, ADHD continues to impact educational, social and life situations of many children, adolescents and adults.
Kathy Gibbs, Program Director for the Bachelor of Education, Griffith University
This article is republished from The Conversation under a Creative Commons license. Read the original article.
-
Honey vs Maple Syrup – Which is Healthier?
Our Verdict
When comparing honey to maple syrup, we picked the honey.
Why?
It was very close, as both have small advantages:
• Honey has some medicinal properties (and depending on type, may contain an antihistamine)
• Maple syrup is a good source of manganese, as well as low-but-present amounts of other minerals

However, you wouldn’t want to eat enough maple syrup to rely on it as a source of those minerals, and honey has the lower GI (average 46 vs 54; for comparison, refined sugar is 65), which works well as a tie-breaker.
(If GI’s very important to you, though, the easy winner here would be agave syrup if we let it compete, with its GI of 15)
Read more:
• Can Honey Relieve Allergies?
• From Apples to Bees, and High-Fructose C’s
-
Beetroot vs Red Cabbage – Which is Healthier?
Our Verdict
When comparing beetroot to red cabbage, we picked the red cabbage.
Why?
Both are great, and both have their strengths!
In terms of macros, beetroot has very slightly more protein, carbs, and fiber, but the margins of difference are very small in each case. However, in terms of glycemic index, red cabbage has the considerably lower glycemic index, of 32 (low) as opposed to beetroot’s GI of 64 (medium). On the strength of this GI difference, we call this category a win for red cabbage.
In the category of vitamins, beetroot has more of vitamin B9, while red cabbage has a lot more of vitamins A, B1, B2, B3, B6, C, E, K, and choline. By strength of numbers and also by having very large margins of difference on most of those, red cabbage is the clear winner here.
When it comes to minerals, beetroot has more copper, magnesium, manganese, phosphorus, and potassium, while red cabbage has more calcium (and about ⅓ of the sodium). By the numbers, this is a win for beetroot, though it’s worth noting that the margins of difference were small, i.e. red cabbage was right behind beetroot on each of those.
Adding up the sections makes for an overall red cabbage win, but as we say, beetroot is great too, especially when it comes to minerals!
As ever, enjoy either or both; diversity is good.
Want to learn more?
You might like to read:
No, beetroot isn’t vegetable Viagra. But here’s what it can do!
Enjoy!
-
Neurotransmitter Cheatsheet
Which Neurotransmitter?
There are a lot of neurotransmitters that are important for good mental health (and, by way of knock-on effects, physical health).
However, when pop-science headlines refer to them as “feel-good chemicals” (yes but which one?!) or “the love molecule” (yes but which one?!) or other such vague names when referring to a specific neurotransmitter, it’s easy to get them mixed up.
So today we’re going to do a little disambiguation of some of the main mood-related neurotransmitters (there are many more, but we only have so much room), and what things we can do to help manage them.
Dopamine
This one predominantly regulates reward responses, though it’s also necessary for critical path analysis (e.g. planning), language faculties, and motor functions. It makes us feel happy, motivated, and awake.
To have more:
- eat foods that are rich in dopamine or its precursors such as tyrosine (bananas and almonds are great)
- do things that you find rewarding
Downsides: is instrumental in most addictions, and also too much can result in psychosis. For most people, that level of “too much” isn’t obtainable due to the homeostatic system, however.
See also: Rebalancing Dopamine (Without “Dopamine Fasting”)
Serotonin
This one predominantly helps regulate our circadian rhythm. It also makes us feel happy, calm, and awake.
To have more:
- get more sunlight, or if the light must be artificial, then (ideally) full-spectrum light, or (if it’s what’s available) blue light
- spend time in nature; we are hardwired to feel happy in the environments in which we evolved, which for most of human history was large open grassy expanses with occasional trees (however, for modern purposes, a park or appropriate garden will suffice).
Downsides: this is what keeps us awake at night if we had too much light before bed, and also too much serotonin can result in (potentially fatal) serotonin syndrome. Most people can’t get that much serotonin due to our homeostatic system, but some drugs can force it upon us.
See also: Seasonal Affective Disorder Strategies
Oxytocin
This one predominantly helps us connect to others on an emotional level. It also makes us feel happy, calm, and relaxed.
To have more:
- hug a loved one (or even just think about doing so, if they’re not available)
- look at pictures/videos of cute puppies, kittens, and the like—this triggers a similar response
Downsides: negligible. Socially speaking, it can cause us to drop our guard, but for most people most of the time, this is not a problem. It can also reduce sexual desire—it’s in large part responsible for the peaceful lulled state post-orgasm. It’s not responsible for the sleepiness in men though; that’s mostly prolactin.
See also: Only One Kind Of Relationship Promotes Longevity This Much!
Adrenaline
This one predominantly affects our sympathetic nervous system; it elevates heart rate, blood pressure, and other similar functions. It makes us feel alert, ready for action, and energized.
To have more:
- listen to a “power anthem” piece of music; what that is will depend on your musical tastes: whatever gets you riled up in an empowering way.
- engage in something competitive that you feel strongly about while doing it—or by the same mechanism, a solitary activity where the stakes feel high even if it’s actually quite safe (e.g. watching a thriller or a horror movie, if that’s your thing).
Downsides: its effects are not sustainable, and (in cases of chronic stress) the body will try to sustain them anyway, which has a deleterious effect. Because adrenaline and cortisol are closely linked, chronically high adrenal action will tend to mean chronically high cortisol also.
See also: Lower Your Cortisol! (Here’s Why & How)
PS: adrenaline is also called epinephrine, and its chemically different but functionally almost identical counterpart is noradrenaline (also called norepinephrine)
Some final words
You’ll notice that in none of the “how to have more” did we mention drugs. That’s because:
- a drug-free approach is generally the best thing to try first, at the very least
- there are simply a lot of drugs that affect each one (or several at once), and talking about them would require covering each drug in some detail.
However, the following may be of interest for some readers:
Antidepressants: Personalization Is Key!
Take care!
-
Regular Nail Polish vs Gel Nail Polish – Which is Healthier?
Our Verdict
When comparing regular nail polish to gel nail polish, we picked the regular.
Why?
This one’s less about what’s in the bottle, and more about what gets done to your hands:
- Regular nail polish application involves carefully brushing it on.
- Regular nail polish removal involves wiping with acetone.
…whereas:
- Gel nail polish application involves deliberately damaging (roughing up) the nail to allow the color coat to adhere, then when the top coat is applied, holding the nails (and thus, the attached fingers) under a UV light to set it. That UV lamp exposure is very bad for the skin.
- Gel nail polish removal involves soaking in acetone, which is definitely worse than wiping with acetone. Failure to adequately soak it will result in further damage to the nail while trying to get the base coat off the nail that you already deliberately damaged when first applying it.
All in all, regular nail polish isn’t amazing for nail health (healthiest is for nails to be free and naked), but for those of us who like a little bit of color there, regular is a lot better than gel.
Gel nail polish damages the nail itself by necessity, and presents a cumulative skin cancer risk and accelerated aging of the skin, by way of the UV lamp use.
For your interest, here are the specific products that we compared, but the above goes for any of this kind:
Regular nail polish | Gel nail polish
If you’d like to read more about nail health, you might enjoy reading:
The Counterintuitive Dos and Don’ts of Nail Health
Take care!