Neuroepigenetics: How our environment changes our genes

DNA wrapping around an octamer of histones (known together as a nucleosome). Many neuroepigenetic mechanisms work at this level.

On August 21, 2017, there was a total solar eclipse–the first one in my quarter century on earth–and I missed it. While it is true that it would have only been a partial solar eclipse where I live, there is something distressing about missing out on a major public event that all your friends and family are participating in across the United States.

On the other hand, I was in Italy participating in a one week intensive course on neuroepigenetics (which, the nerd that I am, was very exciting as well).

If “neuroepigenetics” is a word you have never heard before, then you are not alone. My spell check doesn’t even know this word, so I thought I would take this opportunity to write about the history and concepts I learned about this past week (this post goes out to my parents, who keep asking what it is that I do). It’s going to be a little technical, so bear with me and message me if there are topics I need to go more in depth on.

What is “Neuroepigenetics?”

First, we have the concept of “epigenetics.” This was classically defined as a stable and heritable set of changes to our phenotype (which genes are expressed) without any change to the genotype (the DNA sequence itself). Basically, it’s how our environment can change gene expression, without changing our DNA sequence, in ways that can then be inherited by our children. Let’s unpack this a little further with an analogy or two:

If you give a musical conductor nothing but notes (in this case the notes would be the DNA) with no indication of when or how to play those notes, the music played as-is wouldn’t sound very pretty–assuming the conductor could make any sense of the composition at all. Epigenetics is the set of annotations on the music telling someone how to play it (time signature, clef, tempo, whether notes are half or quarter notes, etc.).

Visualization of the epigenetics music analogy.

If that was too far from genetics, you can think of the epigenome as the sticky notes stuck to the DNA, telling each cell which parts are important to express and which are not.

But what about “neuroepigenetics?” This is a newer term, coined in the past fifteen years, that broadens the rigid classical ideas of epigenetics to include other mechanisms, and also loosens the definition to “partially stable, sometimes heritable” changes in phenotype without changes to the genotype. This gives us as researchers more wiggle room: we don’t have to prove that a phenotype still stands three generations out, because oftentimes it does not. The change may still persist throughout one lifetime, or two, and these intergenerational changes are still interesting and important.

How Neuroepigenetics Works (Briefly)

There are many different ways in which our environment can modify our genes. I will write more in depth posts on each of these later on, but I wanted to give a brief explanation of the concepts involved.

First, we need to talk about how DNA is packaged into the nucleus of the cell. DNA winds itself around octamers (sets of 8) of proteins called histones. These further condense together to create chromatin, which is dense enough to fit into the nucleus of every cell.

DNA winds around histones, which compact together to create chromatin which is what your chromosomes are made of.

There are structures called “histone tails” (labeled in the image depicting DNA packaging) that can be modified by acetyl, methyl, and other chemical groups. These histone modifications either loosen the DNA around the histone, making it accessible for gene transcription, or tighten it so it cannot be accessed for gene transcription. These additions can either suppress or enhance gene expression, depending on how many methyl/acetyl/other groups are added, and to which parts of the histone tail of a specific histone variant. For example, three methyl groups added to the fourth amino acid (a lysine) of the H3 histone variant (written H3K4me3) are often associated with increased gene expression, whereas two methyl groups added to the ninth amino acid (also a lysine–lysines are often what gets acetylated or methylated) of the H3 histone variant (H3K9me2) are often associated with decreased gene expression.
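For fellow nerds who like things spelled out, here is a tiny, purely illustrative Python sketch of the shorthand scientists use for these marks (the two mark names and their typical associations are real conventions; the code itself is just my toy example, not anything from the course):

```python
# Toy lookup of two well-studied histone marks. The shorthand reads:
# histone variant, residue, modification -- so H3K4me3 means histone H3,
# lysine (K) 4, trimethylated (me3).
HISTONE_MARKS = {
    "H3K4me3": "often associated with increased gene expression",
    "H3K9me2": "often associated with decreased gene expression",
}

def describe(mark: str) -> str:
    """Return the typical association for a known histone mark."""
    return f"{mark}: {HISTONE_MARKS.get(mark, 'association unknown')}"

print(describe("H3K4me3"))
# → H3K4me3: often associated with increased gene expression
```

The real picture is far messier (dozens of marks, context-dependent effects), but the naming convention itself is this regular.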

Methyl groups can also be added to the DNA itself to “silence” that part of the DNA.

The chromatin itself can also be remodeled by protein complexes that reposition or loosen nucleosomes, changing which stretches of DNA are accessible.

If that seemed complicated, you and I are in the same boat. After a week of hearing from experts in neuroepigenetics, I’m overwhelmed with the types of modifications and the amount of histone variants and the immense quantity of things I don’t know yet.

Real-Life Implications

All of these individual mechanisms and their integrations are pretty cool if you’re a nerd like me, but why does anyone else have to think about it? Well, I’m glad you asked.

When we think about, say, memory problems as we all get older (which will happen regardless of whether someone has dementia or not), we may think it is something that is just built into our genetics and that there is nothing we can do about it. Well, if Dr. Marcelo Wood, one of the professors who lectured at this neuroepigenetics course I just returned from, has anything to say about it, epigenetics is also at play.

Aged animals got either saline (left) or NaB, which blocks de-acetylation (right). Both groups of animals had the same ability to recognize a novel object (white bars), but there is a larger increase in learning in the NaB group (seen in the taller filled bar), which looks more like young animal learning.

I will do a further blog post on the study itself, but in essence Dr. Wood’s lab noticed that there was hypoacetylation (not enough acetyl groups added) of histones in the hippocampus (an area associated with memory formation) in aged rats. When they inhibited the molecules that de-acetylate (remove acetyl groups from) histones in the hippocampus, these rats had much better memory than other aged rats that received no treatment. While there is no human treatment for something like this yet, it has major implications for both aging and intellectual disability populations.

I have given only one of many, many examples of topics that are being studied with neuroepigenetics. Dr. Farah Lubin told us about not only learning and memory, but also epilepsy. Dr. Art Petronis discussed epigenetic mechanisms that may be contributing to the risk and development of schizophrenia. I’m personally planning on looking into the neuroepigenetics underlying opiate addiction. The future prospects are pretty interesting, and I hope that the immense amount of the field that is unknown means job security rather than a dead end.

If any of this confused you, do not despair. I have plans to do more in depth posts on different aspects of neuroepigenetics. If there are specifics you want to know more about, let me know and I can focus in on those!

Have a science news article you want a closer look at? A concept you want to learn more about? Leave it in a comment or email us at!
About the author
Mari is a graduate student studying psychology and neuroscience at the University of Minnesota. Her research focuses on affective, behavioral, and molecular aspects of opiate addiction. Mari loves all science and wants everyone else to be as excited about it and as critical of it as she is.

This is your brain [A1 adenosine receptor] on sleep deprivation

PET scan of A1 receptor availability after 52 hours of sleep deprivation (top) compared to after 14 hours of recovery sleep (bottom). 

So I just finished my finals, moved, got a new puppy, and am trying to settle into my summer lab schedule, which means I am a little late on getting this post out (but just a little late, right?). My puppy likes to wake me up early every morning with sad whining, and if not her, then the other dog finds ways to kick me in her sleep. Needless to say, I am not sleeping as well as I used to, so I thought a post on sleep deprivation might be fun! I came across this article (“How the brain reacts to sleep deprivation”) a few weeks ago on social media and thought, ‘I wanna know what happens to my brain when I don’t sleep enough.’ Upon clicking on the article I realized it wasn’t just about those nights when you don’t get enough sleep, but is instead about what can happen to the adenosine A1 receptors in your brain after 52 hours of being awake (which I do not do, nor do I plan on ever doing if I can help it). Still, it seemed pretty cool. I kept reading.

According to the news article, researchers had some men stay awake for 52 hours straight with no caffeine while they measured cognitive abilities (specifically reaction time and memory). At the end they used positron emission tomography (PET) to look at the available adenosine A1 receptors in the brain. ‘Wait, what is adenosine?’ you may be asking. Adenosine is one of the main neural “ingredients” of feeling sleepy, and its receptors are what caffeine blocks, so that’s why they focused on it for a sleep study.

Well, apparently when you deprive yourself of sleep, you end up with more available A1 receptors. This might mean that you have less adenosine binding to receptors, or maybe your brain is actually making more of the receptors themselves. What I really found interesting about this study was that people who had a larger increase in available A1 receptors did better on those cognitive tasks I mentioned earlier. Finally, the news brief states that this might matter for the clinical literature on depression, because sleep deprivation can decrease depressive symptoms (though once you sleep again, that relief goes away). I really wanted to know more about this last point, and the only place to find more information was the actual article, so that’s where I went.

I clicked on the nifty link at the bottom of the page that took me right to the original article titled, “Recovery sleep after extended wakefulness restores elevated A1 receptor availability in the human brain” (scientific journals haven’t incorporated “clickbait” into their article titles just yet). This is not an open access article, so here is my PDF copy of “Recovery sleep after extended wakefulness restores elevated A1 receptor availability in the human brain” (god please no one report me to the authorities or whoever will get mad at me for sharing this I just want science to be more available to the public).

Article Summary

What they did: Previous studies had found an increase in A1 receptor availability after sleep deprivation, so this research wanted to both replicate that and add another component: recovery sleep (like getting a good night’s sleep after pushing through finals week). Finally, they wanted to explore whether there might be a link between A1 receptor availability and cognitive performance during sleep deprivation.

To do this, 14 healthy male volunteers participated in the study. One week before the study, participants had to keep a strict sleep-wake cycle (asleep at 11pm, awake at 7am), and four days before the study they had to stop drinking any caffeine. This was to ensure that all participants started from the same amount of sleep and that there were no confounding effects of caffeine or caffeine withdrawal. Next, they came into the lab, stayed up for 52 hours straight, and had a PET scan to look at A1 receptor availability. During this time they also did a bunch of cognitive tasks, such as reaction time and working memory tests, and reported on general sleepiness. Finally, they slept for 14 hours and had another PET scan at the end of that to see if A1 receptor availability went back to baseline after a good night’s sleep. A separate group of people had their A1 receptor availability measured after a baseline 8 hours of sleep for comparison.

Brain regions with significant increase in A1 receptor availability due to sleep deprivation. Error bars denote standard error of the mean (SEM). Asterisks show significant difference between sleep deprivation and recovery sleep.

What they found: The study compared the 8hr baseline group’s PET data with the experimental group’s sleep deprivation and recovery PET data using an analysis of variance (ANOVA), post-hoc t-tests, and a false discovery rate (FDR) correction for multiple comparisons. They found that after sleep deprivation, A1 receptor availability increased. This was mainly seen in the striatum and thalamus (but also in other areas). Confirming their main hypothesis, they found that a good night’s sleep (aka 14 hours following sleep deprivation) restores A1 receptor availability to that 8hr baseline level.
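If “false discovery rate correction” sounds mysterious, here is a minimal Python sketch of the Benjamini-Hochberg procedure, one standard FDR method for deciding which of many tests (here, many brain regions) survive correction. The p-values below are made up for illustration; they are NOT from the study:

```python
# Benjamini-Hochberg FDR: keeps the expected fraction of false positives
# among your "significant" results at alpha, rather than correcting each
# test as harshly as a Bonferroni correction would.

def benjamini_hochberg(p_values, alpha=0.05):
    """Return indices of tests that survive FDR correction."""
    m = len(p_values)
    # Sort p-values, remembering their original positions.
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k such that p_(k) <= (k/m) * alpha.
    threshold_k = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= (rank / m) * alpha:
            threshold_k = rank
    # Every test at or below that rank is declared significant.
    return sorted(order[:threshold_k])

# One made-up p-value per hypothetical brain region.
p_vals = [0.001, 0.008, 0.039, 0.041, 0.20, 0.74]
print(benjamini_hochberg(p_vals))  # → [0, 1]
```

Notice that 0.039 and 0.041 would pass an uncorrected 0.05 cutoff but do not survive the correction; that is exactly the kind of over-optimism FDR is there to prevent.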

A1 receptor availability correlates negatively with A. PVT lapses (attention/cognition errors), B. 3-back Omission (working memory errors), C. amount of time spent in slow wave sleep during 8hr baseline, and D. sleepiness ratings

There were a lot of individual differences in how much A1 receptor availability increased, as well as how well participants did on cognitive tasks and sleepiness ratings, and these differences correlated with each other. That means that participants with a larger increase in A1 receptor availability had fewer errors and felt less sleepy. The article reports this as though there are these correlations in all the brain areas they found to have significantly increased A1 receptor availability, but the graphs (which I have included here) only show the striatum, the insula, and the temporal cortex. I had to search a little to find the supplementary material for this article, but I got to it and found that PVT lapses correlated with A1 increases in three areas (anterior cingulate cortex, orbitofrontal cortex, and striatum), 3-back omissions correlated with all those areas shown on that initial graph I showed you guys, slow wave sleep correlated with four areas (anterior cingulate cortex, insula, parietal cortex, and striatum), and sleepiness ONLY correlated with increases in the temporal cortex. But that would have made a very cluttered graph so they used the strongest relationships to showcase the correlations.
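For anyone who wants to see what “correlated” means in practice, here is a toy Python sketch of Pearson’s correlation coefficient, the usual measure behind plots like these. The numbers are invented; a perfect negative correlation like this one would never show up in real data:

```python
import math

# Pearson's r ranges from -1 to 1. Values near -1 mean "as one goes up,
# the other goes down" -- e.g., bigger A1 increase, fewer attention lapses.

def pearson_r(xs, ys):
    """Return the Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

a1_increase = [1.0, 2.0, 3.0]  # made-up increases in A1 availability
pvt_lapses = [6.0, 4.0, 2.0]   # made-up attention lapses
print(pearson_r(a1_increase, pvt_lapses))  # → -1.0
```

A negative r here is the code version of the study’s finding: participants with larger A1 increases made fewer errors.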

What is this link with depression? I had absolutely no idea that sleep deprivation could help with depressive symptoms. It looks like sleep deprivation can be used as a way of curbing depressive symptoms (see this review for more details), but these symptoms come right back after a night of recovery sleep. This study thinks that adenosine must play an important role. There is actually an over-the-counter supplement used to combat depression that is a chemical precursor of adenosine. This, along with a few rodent studies suggesting an important role for adenosine A1 receptors in depressive symptoms, made the researchers of this study think that maybe these differences in A1 receptor availability could help tease apart some of the underlying reasons for the depressive symptoms coming back after recovery sleep. More research will be necessary for this specific interest, obviously.

Inner Skeptic

First, there were only 14 healthy men involved in this study. Not only is this a very small sample, but it means that if women have any differences in adenosine receptors, the results are not generalizable to half the population. This is a pretty big problem that a lot of studies have, mostly because it’s expensive and time-consuming to double your sample size and include sex or gender as a statistical factor. I would still like to see it done. You know, for science.

Next, receptor availability does not give us the full story. Are more receptors being made (or inserted into the cell membrane), or are the existing receptors binding adenosine less (maybe through a change in how the receptor protein folds)? For understanding the specific mechanisms of sleep deprivation, and how it might affect mood and cognitive abilities, that seems like an important distinction.

Finally, and this is more of an interesting thing that came up while talking to other people in science, one of the professors I work with asked me why they didn’t measure glucocorticoid receptors (involved in stress). Apparently glucocorticoid receptors are also very important in the sleep-wake cycle and cognitive performance, so if you’re in the field, here’s your golden idea you’ve been waiting for.

Final Thoughts

Even with current technology, it’s still difficult to look at the living human brain without damaging it. Sleep is also pretty hard to study, because it’s difficult to tell how similar sleep is between species. So I’m gonna say props to this study for doing what it can with what it has. I’m not sure whether the same things happen during normal sleep fluctuations, or how caffeine interacts with all of that, but that’s something for one of you readers out there to find out and for me to read in the future.


Why My Brain Size is Different from Yours: Biological Sex as a Factor

Complicated-looking brain maps from today’s research article

I do brain outreach with children, meaning I go around to elementary and middle schools and teach about brain health, anatomy, and function. Without fail, someone always asks me why some of the human brains we bring (donated to the university pre-mortem for the purposes of education) are smaller than others. Part of this is due to differences in how the brains were preserved, but part of it is, I assume, due to how big the brain was to begin with. I would tell them this and would IMMEDIATELY remind them that just because a brain is bigger doesn’t mean it’s better, because bigger people have bigger brains.

Wrinkly bits seen here in this photo by Fred Hossler from the National Geographic website

Well, that’s true and that isn’t true, apparently. According to the recent Science news article, “Study Finds Some Significant Differences in Brains of Men and Women,” biological sex* might also be a factor. The gist of the article is that there are neuroanatomical differences between men and women in this sample of 5,216 citizens of the UK between the ages of 44 and 77. They looked at the thickness of the cerebral cortex (the cortex is the wrinkly stuff you think of when you think of a brain) and the volume of 68 different “areas within the brain” (which I took to mean the areas of the brain underneath the wrinkly bits).

Look at all those areas that aren’t wrinkly bits! (Thanks University of British Columbia for this excellent image)

According to Science (the magazine, not the systematic study of the structure and behavior of the physical and natural world through observation and experiment), the research found that men have bigger subcortical regions (all 68 of them the way it sounds) but women have thicker cortex. But then they tell me that when a bunch of statistical controls were done (which I feel like probably should have been done in the first place) 14 subcortical areas were larger in men and 10 were larger in women (If you’re thinking, “wait, I thought you said they were all larger in men?” then you and me both, my friend).

As always, I wanted to go to the source. This one was less easy to find because Science didn’t directly cite it for me (the nerve of them, amiright?). I looked up “Ritchie UK Biobank” in Google Scholar and this research article by Ritchie, Cox, Shen, Lombardo, Reus, Alloza, Harris, Alderson, Hunter, Neilson, Liewald, Auyeung, Whalley, Lawrie, Gale, Bastin, McIntosh, and Deary (science means never publishing a paper alone, apparently) was the only one that fit the description.

I read this one on a plane flight complete with TWO tiny bags of pretzels

 Article Summary

What they did: The UK has this thing called a Biobank, where biological data on the population is stored (all volunteers!). This study used the data from 2,750 women and 2,466 men. This data was collected by putting people in a really big magnet and getting pretty pictures of their brains, essentially. Then they took all those images and compared them between the sexes to see if there were any differences. They specifically looked at the volume of 7 subcortical structures (hippocampus, nucleus accumbens, amygdala, caudate nucleus, dorsal pallidum, putamen, and thalamus, in case you were interested) as well as the volume, surface area, and thickness of 68 cortical regions. They also looked at white matter tract directionality/complexity as well as functional connectivity, but as that is not discussed in the original online article and I’m trying to keep this to ~1,500 words, I won’t be discussing that in this blog post (I’ll find another paper specifically on things like that for you lovely people, don’t worry!)

What they found: They found a lot of things. Overall, men had larger brains, even accounting for their larger bodies. So when they tell me that all seven subcortical structures were larger in men than in women, I’m thinking, ‘of course they are, men have larger brains in general.’ When they finally decided to control for overall brain volume, the results were more interesting (and, I’d like to think, more accurate). They found that of those seven structures, men actually only had three larger than women, and women had one larger than men (the nucleus accumbens, if anyone is interested).

Here’s a nifty table I drew of their results in case you needed a visual representation of the chaos.

At this point I’m a little annoyed because they dragged me through all this suspense of men having larger volumes of all these things when they don’t actually, but I kept reading because I love science and also because I had already written the first half of this post.

Let’s get onto those cortical areas. Without adjusting for overall brain size, ALL 68 areas were larger in men than in women. But, and this should surprise no one, when overall brain volume was taken into consideration, men had 14 areas larger than women, women had 10 areas larger than men, and in the remaining 44 areas there were no significant differences. Thanks, statistics (see the image at the top of the page and look at the difference between the first and middle panels for a visualization of this). Men also had larger cortical surface areas than women; I could not tell whether this result controlled for overall brain volume or not. Don’t worry ladies, even if men have larger cortices by volume and surface area, our slightly smaller cortices are thicker. This was found in 47 areas of the brain unadjusted for brain size and still 46 areas when adjusted. So, go us for being consistent I guess?

Inner Skeptic:

I wanted to take a quick moment to point out that the online news release from Science got the subcortical and cortical areas confused (saying there were 68 subcortical regions looked at, but also talking about their thickness). I wanted to remind people that science writers, regardless of their scientific background, might not always get everything right, which is why we need to think critically about each article we come across (including, for example, mine!)

I had a few issues with this paper. I think the largest issue is that it wasn’t actually a peer reviewed publication. I know this because it told me at the top of each page (see below). This doesn’t mean it’s not science, nor does it mean it can’t be good, but it does mean that it hasn’t been critically looked at by other scientists yet, and the results should probably be questioned a bit more than your average paper. I will be interested to see if they publish in a peer reviewed journal and how similar that paper will look to this one.

Second, I didn’t like that I had to read through all that crap about brain area sizes being different when the overall brain volume wasn’t accounted for. I felt like they should have just automatically done that and used those results as their only results.

Third, their age range was 44-77. This does not take into account a very large part of the population. It is also the age range when women are going through menopause, which does things to their brains and bodies that aren’t well understood, and that no one controlled for in this study. This means the results might not generalize to the rest of the population.

Final Thoughts:

My first thought is that I should have chosen an article with fewer list-like results. But sometimes that doesn’t happen so I will forgive myself.

More importantly, brain size doesn’t necessarily mean anything with regard to intelligence or daily activities. Remember that. It’s not the same as smartness or talent or anything like that. It is simply a physical attribute unique to you as a person. Overall I think it’s a really interesting thing to look at, because women and men have different outcomes in psychiatric situations, so it would be amazing if we could find a good underlying reason for that. They weren’t able to do that in this study (which was never meant to be anything more than descriptive, according to the authors), but if you want to go get a bunch of brain images and measure volume/surface area/thickness against different psychiatric disorders, I’m sure there’s a pretty sum of grant money in it for you.


*Because the study in question did not take into account gender versus biological sex, and did not discuss intersex, this is going to be about biologically male vs biologically female brains with complete disregard to whether a participant’s gender lines up with their sex.


Tea May or May Not Lower Your Risk of Developing Dementia Later in Life

This is the first in our new series of blog posts looking into current and past scientific research and scientific concepts that are often misinterpreted or generally misunderstood. The goal of this blog series is for us as scientists and writers to struggle through articles and research so that you don’t have to. If you have a science news piece you’ve read, a concept you want to know more about, or general science questions, email them to and you might see your topic in a blog post! 

To begin our new blog series, I wanted to talk about this article that I came across recently on Facebook. It was posted by a neuroscience research page that I follow, and also it’s about tea. I ADORE tea (and neuroscience), so of course I had to read it and subsequently tell the world about it.

The gist of the article is that drinking tea every day may help fend off cognitive decline as we get older and decrease the risk of developing dementia and Alzheimer’s disease. If this is true, then this is HUGE. Alzheimer’s Disease is a devastating neurodegenerative disease that affects a lot of people and comes with some pretty severe memory, mood, and physical issues. As someone who has had a family member die from Alzheimer’s Disease, I’m always on the lookout for research toward a cure. As a neuroscientist, though, I’m always skeptical.

So let’s look a little more closely at the research that this article is citing. First, I went to the original National University of Singapore news release (found here). It begins, “A cup of tea a day can keep dementia away,” (what a freaking amazing first sentence, am I right?) before proceeding to describe the study and how drinking tea every day can lower the risk of cognitive decline in older age. This is pretty similar to the article that cited it–similar enough to make me question how plagiarism works in the news.

At this point I’m reaching for my tea kettle, partly because I adore tea but also partly because I don’t want to develop Alzheimer’s Disease. But then I realized that as a skeptic I should probably go to the original article to see what’s up.

Proof that I not only read this paper but was drinking tea the whole time

I realize not everyone has access to articles the way I do because I’m a graduate student and my university pays an enormous amount of money for access to academic journals. I don’t know how illegal it is for me to share the PDF, but here it is if you’re interested and please no one report me to the feds: Tea consumption reduces the incidence of neurocognitive disorders: Findings from the Singapore longitudinal aging study (a catchy title, I know). If reading academic articles makes your brain hurt, you’re not alone. In theory I’ve been doing this for years and should be able to understand many types of academic articles well enough to breeze through them. In practice, different fields of science have such vastly different ways of describing their experiments, peppered with mass quantities of jargon and–god forbid–mathematical models specific only to that field. So don’t worry, I’ve already struggled through it for you and here’s what it says and what I thought:

Article Summary:

What they did: 957 Singaporeans 55 years or older participated in this study. They had normal cognitive functioning between 2003-2005 (measured through something called the Mini Mental State Exam, which is legit according to my clinical psychologist friends). During those years, participants answered questions about their tea drinking habits: how often did they drink tea and what type of tea did they drink? The options for types of tea were “Ceylon/English” tea aka black tea, “Chinese tea” which I think is oolong, and “Green” tea which–correct me if I’m wrong–refers to green tea. They also asked for gender (male or female, in this case) and whether participants had a specific allele of a gene that is a risk factor for developing Alzheimer’s disease (the APOEε4 allele, if you’re interested). Finally, they looked at the incidence of developing a “neurocognitive disorder” such as Alzheimer’s disease between 2006 and 2010 (72 of the participants fell into this category). They controlled for a bunch of variables, too, like BMI, whether people smoked, education, age, etc., to try to look only at the effect of tea.

What they found: This is where it gets cool. They did some magic stats (not really, they did some two-sided Chi-square tests and logistic regression models) where they looked at whether drinking tea was associated with a lower incidence of neurocognitive disorders at the later time point. They did this first with just tea as their comparison, and then also while controlling for a bunch of other variables (education level, alcohol consumption, physical activity, etc.) that might also have an effect on cognition. They partly control for other variables because they want to make sure they’re testing JUST FOR TEA, but also because their groups had some baseline differences in these measures, so those need to go into the statistical models in an effort to say, “if I hold this constant, does tea still have an effect?”
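If “two-sided Chi-square test” sounds like magic, here is a toy Python sketch of the chi-square statistic for a 2x2 table (tea drinker vs. not, developed a disorder vs. not). The counts are completely made up, not the study’s data; a bigger statistic means the observed counts stray further from what you’d expect if tea made no difference:

```python
# Chi-square statistic for a 2x2 contingency table: compare each observed
# count to the count expected under independence, sum the squared
# (scaled) differences.

def chi_square_2x2(table):
    """Return the chi-square statistic for a 2x2 table of counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            # Expected count if rows and columns were independent.
            expected = row_totals[i] * col_totals[j] / grand
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# rows: tea drinkers, non-drinkers; cols: disorder, no disorder (invented!)
observed = [[10, 490], [20, 280]]
print(round(chi_square_2x2(observed), 2))  # → 11.31
```

In practice you would compare that statistic against the chi-square distribution (with 1 degree of freedom here) to get a p-value; the study’s actual analysis also layered logistic regression on top to handle all those control variables.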

Guys, it totally does. They found that the group with consistent tea consumption throughout the experiment (whether black, oolong, or green) had a much lower incidence of neurocognitive disorders (aka percentage of people who developed neurocognitive disorders) later on. This protection, however, is ONLY for women and maybe also (but the stats aren’t sure) for people with that APOE risk allele we talked about earlier. So men without an APOE risk gene, you don’t have extra protection (but good job on not having a risk allele, I guess?) but can still enjoy tea for its other helpful properties like attention or alertness.

So ladies (and everyone with an Alzheimer’s risk allele), go forth and buy tea.

Inner Skeptic:

Let’s talk about how to look at this paper critically. I’ll go through just a few of my thoughts (not all of them but some). First, there were some big baseline differences between tea drinkers and non-tea drinkers in 2003-2005. Does this mean these two groups of people are fundamentally different other than their tea drinking? Are there separate factors that drive both tea drinking AND a lower chance of neurocognitive disorders? Maybe. Science is hard and we can’t always control for every little thing. These guys actually did a pretty good job with what they had to work with. I’ll give them the benefit of the doubt.

Also, what about white tea? I love me some white tea and I keep hearing about how good it is for you. How am I supposed to know if these claims are real if science doesn’t include it? I assume it wasn’t included because white tea isn’t as prominently imbibed in this participant pool. Still would have loved to see it.

Finally, and they say this in their discussion section, statistical power is better with more participants, and they didn’t have as many as they wanted in some of their groups (like green-tea-only drinkers). That makes it hard to say for sure whether the effect is real or whether it appeared by chance in a small group. It’s a pretty big sample overall, and the oolong/black tea group is definitely large enough to show support for tea drinking, but to be certain it would be nice to see more participants in the future.

Final Thoughts:

Let me start by just saying that tea is delicious and cozy and perfect, so you may as well drink a cup every day. But also tea may or may not decrease your risk for developing a neurocognitive disorder when you get older. So, best-case scenario, you fend off dementia. Worst-case scenario, you drink delicious tea your whole life.

I’m hoping to see future studies looking at the underlying reasons for this protective effect. They mention the components of tea that they think are at play, but, you know, I want to see concrete evidence and all that jazz.

Have a science news article you want a closer look at? A concept you want to learn more about? Leave it in a comment or email us at!
About the author
Mari is a graduate student studying psychology and neuroscience at the University of Minnesota. Her research focuses on affective, behavioral, and molecular aspects of opiate addiction. Mari loves all science and wants everyone else to be as excited about it and as critical of it as she is.

Cream Scone Recipe


You know what goes well with tea? Scones. You can eat them with your breakfast tea, your afternoon tea, or your evening tea (everyone has that much tea in one day, right?). And there is no better place to start than with the basic cream scone. This is a scone that can–and should–be eaten with jam, butter, or clotted cream.

Basic Scone


2 cups flour
1/3 cup granulated sugar
1 tsp baking powder
1/4 tsp baking soda
1/2 tsp salt
1/4 cup (1/2 stick) unsalted butter, cold and either cubed or shredded
4 Tbsp heavy cream, divided
1/2 cup sour cream
1 large egg


  1. Preheat oven to 400°F.
  2. In a medium bowl, combine flour, sugar, baking powder, baking soda, and salt
  3. Cut in butter until mixture is crumbly/grainy
  4. In a larger bowl, combine 2 Tbsp heavy cream, sour cream, and egg
  5. Slowly mix dry ingredients into wet ingredients until just barely combined. Knead dough to bind it.
  6. Shape dough into a rectangle or circle on a floured surface, and cut into triangles (I personally got a fancy mini scone pan for Christmas this year, so I smush the dough into that instead)
  7. Brush the remaining heavy cream on top of the scones, and sprinkle with sugar.
  8. Bake 18-22 minutes or until golden brown.

Happy baking!


Steeped in Despair


On a cold, windy evening she sat drinking tea,

finals bearing down, quite tempted to flee.

Alas, she resigned and walked to the stove,

flipped on the burner, and the kettle did glow.

‘Perhaps one more pot will make life less bleak’,

she said to herself, defeated and meek.

The water was boiled, into the pot it was poured,

with thoughts of studying, a bit less abhorred.