Culinary Chemistry
Chemistry is frequently likened to cooking, only involving more dangerous and inedible materials.
The comparisons are, to be fair, obvious. Both involve the conversion of starting ingredients into a final product that differs in appearance or chemical structure, be it cooking ingredients such as meat or vegetables, or chemical reagents with long complex names. Both require the conversion to follow a specific set of regimented instructions that maximise a delicacy’s flavour or the yield – the total amount – of product formed.
The comparison ends when it comes to creating edible materials. As a general rule, one shouldn’t eat or drink in laboratories. It might not end well.
So yes, I concede that an argument can be made that chemists are scientific chefs.
But I would extend this logic: we are all chemists.
Now, before I receive any hatred for making such a claim, allow me to argue that we are all actually chemists, even if we are currently unaware of the fact.
I am willing to bet that at some point in our lives every one of us will have cooked food. But aside from the need to create a sustaining meal, have we ever thought about what is happening in our pan or oven? Have we considered the detailed and complex chemical composition changes that are taking place as we cook our Sunday roast chicken dinner? Well, let me change your cooking perceptions as we understand the theory behind the chemical experiments we are all, guiltily or happily, performing on a daily basis.
To some the process of cooking might seem secondary, a means to an end. But I thoroughly enjoy it. My interests lie in the chemistry of everyday processes, including those individual chemical reactions that govern the fine line between a culinary delicacy and burnt cinders.
Let us take the cooking of a chicken breast as an example and apply the scientific method to the cooking process. So begins the (tasty) experiment. To cook chicken, we must apply heat, typically for about 20-30 minutes at 200 °C in an oven. Throughout cooking we can make a series of observations about the exterior changes occurring. The most obvious changes are in colour and texture. The chicken breast turns from soft, rubbery and pink to firm and white inside, with a soft golden exterior (clearly assuming that it wasn’t burnt in some cooking mishap). While this might sound patronising to those who know how to cook chicken and how to tell when it is cooked, does everyone know why food turns a different colour when cooked?
The answer, of course, is chemistry. Whether you want to admit it or not, you, as the budding chef of your chicken dish, have just instigated, supervised and observed a chemical reaction. Or, to be more specific, a cascading series of Maillard reactions.
The Route to Culinary Prowess
In layman’s terms, Maillard reactions produce the distinctive flavours and textures of food. Have you ever wondered why chicken tastes like chicken and not beef? Well, it all comes down to the specific chemicals created during the Maillard reaction. Nor is the reaction confined to the cooking of chicken: the beautiful brown glaze of cooked meat, bread browning in a toaster, the roasting of coffee beans – the Maillard reaction forms an integral part of all of these processes.
The chemical process is named after the French chemist Louis-Camille Maillard, whose initial research in 1912 detailed the series of reactions.1 So let’s get more specific. The Maillard reaction defines a set of chemical reactions between amino acids and reducing sugars in the presence of heat, creating new molecules that are responsible for food’s distinctive flavours, colours and aromas.
But this reaction has been around for thousands of years, arguably since the discovery of fire and the subsequent cooking of meat. As our society has advanced, so too has our understanding of the complex chemistry being undertaken to the point where we are now able to control the reaction with such precision as to prevent unwanted side reactions and create the perfect palate pleasing flavour blend.
So, let us delve deeper into the individual components of the Maillard reaction and understand what makes each one essential. Amino acids are the building blocks of life and exist in all foods. They are the individual building blocks of larger molecules known as proteins – large, complicated biological molecules responsible for swathes of life’s processes. A large variety of proteins exist, with specific types found only in certain foods. This diversity is the reason for the different flavour molecules, known as flavouroids, in food. The second essential component is the reducing sugar, such as glucose. Heating the raw food initiates a cascade of reactions and sees the formation of molecules with a variety of different functional groups, including aldehyde, ester and amine products, that form the basis of flavouroids. The chemistry is extremely detailed and complicated and has been discussed further in several other sources.2 The longer food is heated, the more diverse these flavour compounds become. One particularly important class of molecules is known as Strecker aldehydes, which are responsible for flavours in coffee, beers and other foods.3
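To sketch the overall process described in the footnotes below (a deliberately simplified scheme – the real cascade involves many parallel pathways):

$$\underbrace{\text{C=O}}_{\text{reducing sugar}} + \underbrace{\text{H}_2\text{N–R}}_{\text{amino acid}} \;\xrightarrow{\ \text{heat}\ }\; \text{glycosylamine} \;\xrightarrow{\ \text{Amadori rearrangement}\ }\; \text{aminoketoses} \;\longrightarrow\; \text{flavour, aroma and colour compounds}$$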
But how do you explain the colour change? Well, towards the end of the Maillard process, compounds known as melanoidins are formed. These are long polymeric molecules that act as brown pigments, hence turning food brown as it cooks. This is why the Maillard reaction is also described as non-enzymatic browning – a classification that distinguishes it from enzymatic browning, where enzymes are responsible for the colour change, as is the case in avocado browning.
Maillard reactions produce hundreds of different compounds, specific to the amino acids and sugars present in the raw food and the conditions under which cooking occurs. Consider a few examples. The reason chicken tastes like chicken and not beef comes down to the protein structure and how this structure changes throughout the cooking process. This structural rearrangement is temperature dependent, so the development of flavour differs depending on the cooking process. Different cooking conditions can lead to the formation of different molecules, and thus potentially new flavours and textures.
Undesirable Maillard Reactions
Perfecting cooking depends on one’s ability to strike the right balance in the Maillard reaction. Cooking, as Goldilocks would say, must be just right. Everything discussed so far has involved favourable Maillard reactions, but this is not always the case. Excessive cooking has been known to result in the formation of toxic by-products such as acrylamide or furans.4 Acrylamide in particular is a molecule worthy of its own discussion. Animal studies investigating the effects of acrylamide and its metabolite glycidamide have suggested they are genotoxic – meaning they affect genetic information – and carcinogenic.5 Since these molecules accumulate in overcooked or processed foods, there have been several mainstream news reports referring to, for example, the cancer-causing effects of burnt toast. However, before we all make the sudden decision to never eat toast (or, to be more extreme, cooked food) again, it should be noted that human studies on the effects of acrylamide are inconclusive.
We are all chemists at heart
Cooking is a complex chemical process. A task as simple as heating a chicken breast in the oven results in a variety of chemical reactions that distinctly change the flavour and aroma of certain foods. Food science is a fascinating topic that coalesces the fields of chemistry and biology. But I think you will now agree that at the heart of every kitchen lies a good chef chemist.
Do you agree that we are all chemists, or did I whet your appetite for more chemistry cooking facts? Let me know your thoughts at @JoeAtNotch
- Maillard, L. C., C. R. Acad. Sci., 1912, 154, 66
- OK, I’m a chemist, so I must discuss the (initially simple) chemical reactions that occur during the Maillard reactions and cascade into complex products. A model of the reactions was described by John Hodge in 1953 as a three-stage process. First, the carbonyl group of the sugar undergoes nucleophilic attack by an amine group of the amino acid to produce an unstable glycosylamine intermediate. This intermediate undergoes a rearrangement (known as the Amadori rearrangement) to produce several aminoketose compounds. Then finally, these aminoketose compounds undergo further rearrangements and reactions to produce the final flavour, aroma, colour and other compounds. The chemistry is, as you can now probably understand, extremely complex.
- Lund, M. & Ray, C., J. Agric. Food Chem., 2017, 65, 4537
- Tareke, E., et al., J. Agric. Food Chem., 2002, 50, 4998
- European Food Safety Authority: Acrylamide
Mankind’s deadliest chemicals: Nerve Agents
Warfare. Weapons of mass destruction. Poison. Death. All four of these phrases adequately summarise nerve agents.
But what are nerve agents, how do they affect the body, and what are the differences between the agents implicated in the murder of Syrian civilians and of the North Korean Kim Jong-Nam, and most recently in the attempted murder of Sergei Skripal?
This is the scientific history behind the deadly nerve agent.
What are nerve agents?
Unlike poisons such as arsenic, chlorine gas and cyanide, nerve agents are not naturally occurring molecules or elements concentrated into lethal doses. Instead, they must be synthesised in a laboratory. They are known as organophosphates, a term describing their composition: compounds built around phosphorus bonded to carbon and other elements. Organophosphates are best known for one widespread use: as insecticides. It is perhaps unsurprising, then, that nerve agents have the same effect on humans as insecticides have on insects. A chilling thought.
Nerve agents themselves are liquids at normal temperatures. They are sometimes referred to under the misnomer of nerve gases, often due to dispersal as an aerosol, since this has the potential to deliver a lethal dose faster. Because they are readily absorbed through the skin, eyes and respiratory tract, there are various routes of exposure: vapour or aerosol dispersion, contact with the skin, or even ingestion.
As the public is now only too aware, nerve agents are lethal. The exposure necessary to deliver a fatal dose differs between the modes of administration – inhalation is the deadliest. In the case of sarin, the exposure likely to be fatal through inhalation is only around 100 mg·min/m³, whereas around 1,700 mg·min/m³ is needed to achieve the same effect through contact with the skin. In both cases, the figure is a concentration–time product: the dose delivered depends on both the concentration of agent and the time exposed.
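In other words, what matters is the concentration multiplied by the exposure time, often written as a simple rule of thumb (Haber’s rule – an approximation that real toxicology only loosely obeys):

$$Ct = C \times t$$

So, taking the sarin inhalation figure above, breathing air containing 50 mg/m³ of agent for two minutes delivers the same nominal exposure as 100 mg/m³ for one minute:

$$50\ \text{mg/m}^3 \times 2\ \text{min} = 100\ \text{mg·min/m}^3$$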
Furthermore, the latency period – the period before the manifestation of symptoms – varies between the agents. Some have been reported to be as short as 30 seconds, while others can stretch to hours.
How do nerve agents affect the body?
As the name suggests, nerve agents affect the body by disrupting the central nervous system’s messaging channels, effectively shutting down the body’s nervous system. Cellular messaging is a key component of the nervous system: electrical impulses are transmitted along nerves and pass from neuron to neuron via neurotransmitters. A very common neurotransmitter is acetylcholine (ACh), which transmits these essential electrical impulses down the neuronal network and also mediates muscle contraction. Once ACh has performed its role transmitting cell messages, it must be destroyed to prevent overstimulation of the nervous system. The body performs this naturally using the enzyme acetylcholinesterase (AChE) before the process begins anew.
Organophosphate nerve agents disrupt this process. The compounds bind to the active site of the AChE enzyme, rendering it ineffective. Thus, a toxic accumulation of acetylcholine occurs, resulting in the overstimulation of the nervous system.1
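Schematically, and simplifying the enzymology considerably, the healthy reaction and the poisoned one look like this (the agent attaches to the serine residue in the enzyme’s active site – the bond the antidotes discussed below must contend with):

$$\text{ACh} + \text{H}_2\text{O} \;\xrightarrow{\ \text{AChE}\ }\; \text{choline} + \text{acetic acid}$$

$$\text{AChE–Ser–OH} + \text{nerve agent} \;\longrightarrow\; \text{AChE–Ser–O–P(O)R}_2 \quad (\text{enzyme blocked})$$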
As the effects of nerve agent exposure increase, the victim suffers a loss of muscular function, leading to constriction of the pupils, drooling, convulsions, paralysis and respiratory arrest. Without treatment, death is inevitable.
Fortunately, these effects can be reversed using an antidote – but it must be administered rapidly. First, the bond joining the nerve agent and enzyme can be broken using specific “oxime” drugs, regenerating the enzyme for normal use. Speed is of the essence: the bond “ages” over time, strengthening the link between the agent and the enzyme’s active-site serine and rendering oxime drugs ineffective.
A second antidote, atropine, blocks the ACh receptors, thus inhibiting transmission between cells. In a healthy patient, atropine is poisonous, precisely because it disrupts this communication. But for patients poisoned with a nerve agent, it has the potential to save their lives.
The G-series agents
The G-series were the first synthetic nerve agents created. The first dates back to 1936, when German scientist Gerhard Schrader attempted to synthesise a new insecticide that was cheaper than nicotine. What he created was far more toxic than he had imagined. In fact, it is reported that the spilling of a single drop in the laboratory forced Schrader and his assistant to stop work for three weeks. The compound would later become known as tabun.
It is perhaps no wonder then that upon the outbreak of World War II, the German military began preparing for the large-scale production of tabun as a replacement for chlorine and mustard gas, chemical weapons synonymous with World War I. It was during this process that further research resulted in more potent, deadlier weapons being discovered. Thankfully, no nerve agents were ever used during WWII, as production plants were not fully operational before the collapse of Nazi Germany. But Schrader’s research birthed the G-series of nerve agents, which have continued to be used well into the modern day.
The agents in this series include, in order of potency: tabun (GA), sarin (GB), soman (GD) and cyclosarin (GF).
Sarin is perhaps the most widely recognised agent from this class. Named after the scientists behind its discovery, Schrader, Ambros, Ritter & Van der Linde, it is the most volatile of the G-series, which makes dispersal as a gas easier.
G-series agents are also reported as being non-persistent. Persistence is an expression of how long the chemical remains effective, which impacts the ease and feasibility of decontamination. Common decontamination methods for sarin include simply washing the exposed area with copious amounts of water to dilute the agent.
The V-series agents
After WWII, pesticide research continued and, ironically, more lethal nerve agents were again developed from attempts to synthesise an effective insecticide. It was here in the UK, at Imperial Chemical Industries (ICI), where amiton was created. This insecticide eventually had to be removed from sale due to its toxicity. However, the research was continued at Porton Down Chemical Weapons Research Centre, near Salisbury, where amiton was given a new name: VE.
Between 1952 and 1955, other V-series nerve agents were synthesised at Porton Down, most notably VX. Often described as a colourless or amber-coloured liquid, the V-series agents have lower volatilities than agents such as sarin. This low volatility means the primary route of exposure is through skin contact, since aerosol dispersion is difficult. What sets the V-series apart from their G-series brethren is their toxicity. VX is many times more toxic than sarin or tabun, with a lethal exposure of around 10-15 mg·min/m³ for both skin contact and inhalation. Compared with sarin’s 100 and 1,700 mg·min/m³ for inhalation and skin contact respectively, VX is clearly a deadly chemical.
The V-series are also persistent agents, due to their low volatility and reactivity. Bleach and alkali are effective means of decontaminating any exposed area, and often a mix of water and bleach is used.
The Novichok agents
The most recent example of nerve agent use, in the poisoning of Sergei Skripal, has now been identified as belonging to the Novichok class of nerve agents.
The Novichok, or N-series, nerve agents are a secretive class of chemicals that, prior to leaks made in the 1990s by Russian defectors, were unknown to the world. The spotlight has now been shone on these agents, but very little is known even now. What is known is that they were developed in the Soviet Union in the 1970s or 80s, and various sources, including a compendium of chemical warfare agents, have estimated that the Novichok agents are around 10 times more lethal than VX.
When have nerve agents been used?
Thankfully, nerve agent usage has been largely absent from warfare.
VX itself has been used on multiple occasions. In 1968, the accidental discharge of VX during military testing resulted in the death of over 3,000 sheep in Dugway, Utah. VX has also been implicated in two assassinations. The first was committed by members of the Japanese Aum Shinrikyo cult in 1994 to assassinate a former cult member in Osaka (this cult was also responsible for the Tokyo subway sarin release in 1995, in which 13 people died). Until recently, this was the only confirmed human fatality attributed to VX. That changed in 2017 with the murder of Kim Jong-Nam, the half-brother of North Korean leader Kim Jong-un, alleged to have been caused by the smearing of VX across his face.
Turning to the darker side of history, sarin has unfortunately been used to inflict mass, indiscriminate carnage. The oldest documented use was in March 1988 by Saddam Hussein, who is believed to have used sarin against Kurdish citizens in Halabja, leaving 5,000 people dead. There have been two other documented uses of sarin, alongside other chemical weapons such as mustard and chlorine gases, during the Syrian civil war in 2013 and 2017.
Fortunately, due to the serious nature of these chemicals, their synthesis requires expertise, facilities, equipment and funding. It is therefore unlikely that individual terrorist organisations or rogue chemists would be capable of synthesising them without extensive assistance. Furthermore, the Chemical Weapons Convention, which came into effect in 1997, outlawed the stockpiling and production of nerve agents, including sarin and VX. Ratified by 192 countries, the treaty commits its signatories to the disposal of existing agents, and in November 2017 a pivotal goal was reached when it was announced that over 96% of global chemical weapons stockpiles had been destroyed.
There are few words that I can offer that adequately summarise nerve agents. They can kill or injure indiscriminately. Throughout my research, I was both amazed and terrified by the lethality and potency of what scientists have created over the years. Nerve agents are weapons of war, synonymous with lethality, suffering, pain and death, and are without a doubt amongst the worst of mankind’s self-made monsters.
- Allgardsson, A., et al., PNAS, 2016, 113, 5514-5519
I wish to thank the following publications, all accessed on 14/03/2018, for the information that constitutes the bulk of this article; I encourage the reader to visit them.
- Scientific American: Nerve Agents – What are they and how do they work
- Chemistry World: VX
- Chemistry World: What we know about Russia’s Novichok nerve agents
- University of Birmingham: Nerve gas – the dark side of warfare
- Medscape: CBRNE – Nerve Agents, V-series – VE, VG, VM, VX
- Medscape: CBRNE – Nerve Agents, G-series – Tabun, Sarin, Soman
- Compound Interest: Chemical Warfare & Nerve Agents – Part I: The G Series
- Compound Interest: Chemical Warfare & Nerve Agents – Part II: The V Series
Due to the constant stream of updated information, all data and facts published here are correct at the time of writing.
What is Science?
No, this is not a rhetorical question, nor is it an extension of The Big Bang Theory, where Sheldon incessantly asks Penny the age-old question “What is physics?” With the rise of fake news and scientific inaccuracies, it is prudent to return to our basic understanding of science. Therefore, I ask you, the reader, again: what is science?
For such a seemingly simple question, the answer is not so obvious. The challenge in defining science is the need to summarise its major concepts while also addressing its inherent limitations. Science is a process, the means by which we achieve a greater understanding of the world around us. How can that be captured in a single sentence?
In 2009, the Science Council saw fit to spend a year coining a definition of science. In a world where pseudoscience, popularised by practices such as homeopathy, mingles with genuine science, an authoritative definition was needed. In response, the Science Council proposed the following:
“Science is the pursuit of knowledge and understanding of the natural and social world following a systematic methodology based on evidence.”
Beautiful. This definition truly is a marvellous choice of phrasing. It captures three of the most fundamental aspects of science in a single sentence.
“… systematic methodology based on evidence.”
Let us start with the definition’s finale. Once again, the phrasing is vital, as “systematic methodology” addresses several of science’s core principles.
Science is built on hypotheses. A hypothesis is an idea that offers an explanation for an observed effect, and hypotheses in turn lead us to define theories. Any idea or suggestion can be considered a hypothesis; so long as its fate is fundamentally driven by observation and data, leading to acceptance or refutation, it can be considered valid.
What separates scientific hypotheses from others is the ability to apply experimentation that will either support or refute the proposed hypothesis. A scientific hypothesis can never be proven; instead, it is considered yet to be refuted. The ability to test a given hypothesis is fundamental to the scientific method. Any hypothesis must be subject to quantitative methodology that assembles data to help deliver an informed result.
An example of a scientific hypothesis would be:
The mass of a popcorn kernel is proportional to the mass of a popcorn flake.
Now, this hypothesis is valid since mass is a measurable quantity. This brings us to the “based on evidence” component of the definition. Science is a journey driven by evidence: any hypothesis is accepted or refuted based on data obtained from testing. If, after taking a series of measurements, I found there was no correlation between the masses of kernels and flakes, I would refute my hypothesis. The process then begins anew, constructing an alternative hypothesis for experimentation.
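As a minimal sketch of what that test might look like in practice – the masses below are invented purely for illustration, and what counts as a “correlation” is a judgement call in any real experiment:

```python
# Testing the popcorn hypothesis: is kernel mass proportional to flake
# mass? The measurements below are invented purely for illustration.
from scipy.stats import pearsonr

kernel_mass = [0.12, 0.14, 0.15, 0.16, 0.18, 0.20]  # grams, before popping
flake_mass = [0.10, 0.12, 0.13, 0.14, 0.16, 0.17]   # grams, after popping

r, p = pearsonr(kernel_mass, flake_mass)
print(f"correlation r = {r:.2f}, p = {p:.4f}")

# A strong correlation (r close to 1) with a small p-value supports the
# hypothesis; no correlation would refute it, and the cycle begins anew
# with an alternative hypothesis.
```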
Testing and experimentation are the platforms on which science is built. But we must also delve deeper into the word “methodology”. Methodology means that the process of hypothesis, testing, data collection, analysis and communication is performed methodically. This rigour is essential because it enables repetition and criticism. Incidentally, there are numerous examples of scientific research that has been retracted due to flawed or falsified data, uncovered through other researchers attempting to reproduce the results.
“… natural and social world …”
We must always consider the environment under investigation. While it is perhaps obvious that we should only consider the real world, the limits need to be expressed in any definition to prevent science being unjustly extended to the realms of the supernatural and science fiction. By phrasing the parameters as “the natural and social world”, genuine scientific research is confined to this realm of reality.
“Science is the pursuit of knowledge and understanding …”
I summarise by returning to the beginning. Science is a journey. It is built on the principles of experimentation, testing and reproducibility. Its path is forged through testable evidence, with the overarching goal of addressing the world’s greatest challenges. From understanding our complex internal biological processes to the composition and lifespan of stars and planets, science presents the ability to answer life’s biggest mysteries. It can be applied at any scale, from the infinitesimally small to the magnitude of the cosmos. The scientific process is a wonderful, circular, repeatable process that is ever-changing with the advent of new technology and our improved understanding. The iterative nature of scientific research, which in some cases can be lifetimes in the making, makes for an engaging, fascinating journey.
What do you think about the definition of science? Let me know at @JoeAtNotch
What is circadian rhythm?
Circadian rhythm is integral to our functioning; yet many have never heard of the phenomenon, or don’t know why it’s so significant. With the 2017 Nobel Prize in Physiology or Medicine having recently been awarded to Jeffrey C. Hall, Michael Rosbash and Michael W. Young for their “discoveries of molecular mechanisms controlling the circadian rhythm”, perhaps this will be the year circadian rhythms reach the science spotlight.
So what exactly is circadian rhythm, and why did the researchers exploring it deserve the Nobel Prize?
Circadian rhythm is essentially our body clock: the approximately 24-hour rhythm that occurs in cellular processes in almost every tissue of the body. The 24-hour rhythmicity of the circadian system is driven by environmental time cues, such as the natural cycle of day and night, cycles of rest and activity, and even feeding behaviour. Our bodies then translate these timing cues into molecular oscillations within individual cells to drive our functioning.
Circadian rhythm is controlled by the suprachiasmatic nucleus (SCN), a small area in the centre of the brain. The SCN is not actually required for peripheral organs to generate their own rhythms. Rather, it acts more like the conductor of an orchestra, guiding each organ to oscillate in the ideal phase for that specific tissue.
Intrinsic circadian clocks are evolutionarily conserved across the living world, with organisms as small and as ancient as cyanobacteria running clock systems of their own, just as we humans do. In experiments using fruit flies, the 2017 Nobel Prize laureates identified several of the genes that make up the core molecular clock controlling daily biological rhythm.
This includes the period gene, which encodes the protein PER. They discovered that PER accumulates in the nucleus of cells during the night and degrades during the day, producing rhythmic 24-hour oscillations. Beyond this first discovery, the team went on to show that these oscillations are auto-regulatory: PER represses the activity of its own gene, so its levels continuously self-regulate in a cyclic fashion through a negative feedback loop.
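To see why a self-repressing gene produces oscillations, here is a toy simulation of a delayed negative feedback loop. This is not the real PER/TIM network – the rate constants and delay are invented for illustration – but it captures the basic recipe of repression plus delay:

```python
import numpy as np

# Toy model: protein P represses production of its own mRNA M, with a
# time lag between transcription and repression. Repression plus delay
# is the basic recipe for the self-sustaining oscillations described
# above; all rate constants here are made up.
dt = 0.01                            # time step (hours)
t = np.arange(0, 240, dt)            # simulate ten days
lag = int(6 / dt)                    # 6 h before the protein feeds back

M = np.zeros(t.size)
P = np.zeros(t.size)
M[0] = 1.0

for i in range(1, t.size):
    P_past = P[max(i - lag, 0)]
    dM = 1.0 / (1.0 + P_past**4) - 0.2 * M[i - 1]  # repressed synthesis, decay
    dP = 0.5 * M[i - 1] - 0.2 * P[i - 1]           # translation, degradation
    M[i] = M[i - 1] + dM * dt
    P[i] = P[i - 1] + dP * dt

# Successive peaks in P mark the cycle of accumulation and degradation.
peaks = (P[1:-1] > P[:-2]) & (P[1:-1] > P[2:])
print("protein peaks at t =", np.round(t[1:-1][peaks], 1), "hours")
```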
With many of our genes coordinated by the circadian clock, the impact on our complex physiology is vast. Circadian research since the initial discoveries by this year’s Nobel laureates has exposed how disruptions to our circadian system – for example a nocturnal lifestyle, working night shifts, and jet lag – can cause an array of problems with sleep, behaviour, body temperature and metabolism, and may even influence cancer.
Circadian disruption therefore poses a major public health issue that has yet to receive the recognition it deserves. One can only hope that the awarding of the Nobel Prize to circadian researchers will increase public awareness of the circadian influences on health risks, leading to enhanced lifestyle choices that improve the alignment of physiological systems with the daily body clock.
Tori Blakeman is PR Account Manager & Writer at Notch. Follow her on Twitter @ttttor.
For more information on circadian clocks and their implication in breast cancer, read Tori’s review in Breast Cancer Research.
What makes a left-hander?
Imagine someone is passing you a cup of coffee – which hand will you use to take it? Did you have to think about that? In an everyday situation, this is usually a subconscious decision and you simply reach with your dominant hand, just as you do to write and eat. However, as a lefty I’ve always been curious about this, so I’ve decided to have a look into what might cause the difference. About 9 in 10 people are right-handed, leaving a small minority of left-handers,1 with famous examples including Barack Obama, Leonardo da Vinci and Buzz Aldrin.
Over the years, left-handers have been the subject of many superstitions2, ranging from a bit of bad luck all the way to associations with the devil. If you have a left-handed grandparent, they may have even been forced to use their right hand when learning to write at school3. In a slightly more modern context, any left-hander will know that tools such as scissors and tin openers are often designed for easy use in the right hand, and this can leave us struggling with what should be simple tasks! So why does the world have such a bias towards right-handedness?
Among many other factors, language is a strong contributor to the ongoing negative reputation of left-handedness. For example, in French, “droit” not only means “right”, but also straight, law and right in the legal sense, so it has strong positive connotations. Left, “gauche”, is similar to more negative words with meanings such as clumsy, graceless and unhappy. This trend is echoed in many other languages around the world; in fact, the Latin word for left is “sinister”. Even in certain religions the right-hand side is preferred – in Christianity, for example, the right hand of God is the favoured hand.
So, we know that there is a long-standing negative reputation for use of the left hand, but this isn’t something a baby will take into account when developing a dominant hand. Here follows a quick run-through of some theories behind hand dominance.
Hand preference does in fact begin to develop in the womb, with around 40 genes chipping in to choose a dominant side6, but this isn’t the only influence! In fact, in around 20% of identical twin pairs, one twin is right-handed and the other is left-handed, which means there must be more factors at play. One study showed that a significant right-hand bias is established by 13 months4, and further insight suggests that handedness develops at different ages for different tasks5. It is therefore likely that you have an overall bias, but that your choice of hand for less important tasks is a mixture of nature and nurture.
Another idea is that the spinal cord sets the preference for right- or left-handedness9. During pregnancy, a foetus will develop a preference for moving one hand, and certain precursors of handedness develop before the motor cortex in the brain has formed a connection to the spine. It is because of this that researchers now think the spinal cord is more significant than the brain in the development of a dominant hand.
The final factor I have explored is “social cooperation”. Humans are a social species, placing high value on cooperativity in society, and as a result, the trend of right-handedness has evolved8. This TED-Ed video gives a brilliant explanation of social competition vs cooperation as well as a good overview of some extra factors10.
Although you may use different hands for different tasks, you are likely to have a definite overall tendency to use one hand more. Ambidexterity – the ability to use both left and right hands equally well – is a rare phenomenon, with less than 1% of the population classified as truly ambidextrous. There was a time when being ambidextrous was encouraged; it was even claimed that learning to use your weaker hand could give you “cross dominance” and improved brain function, but science has since shown that this is not true11.
It is of course possible to learn to use your non-dominant hand – after a stroke or serious injury, for example, many people have had to learn to write with their weaker hand – but unless this happens at a very early age it is unlikely to result in complete ambidexterity12. Being naturally ambidextrous from birth, however, has been associated with disadvantages such as academic difficulties and developmental conditions13.
It’s clear that there are many complicated factors behind hand dominance, and there’s still a lot more to learn before we fully understand it. If you want to find out more, this quick test can tell you just how left/right inclined you are. Your result might surprise you! Tweet me @HelenAtNotch and let me know what you get.
Storytelling – why and how does it work?
In a recent blog post, Gaby told us about eight ways to communicate science better. One of them was “Tell a story”. Indeed, storytelling has become a buzzword for many aspects of marketing, not just for science communication. But how and why does this powerful mechanism work? Neuroscience, and a bit of Shakespeare, can give us the answer!
Storytelling has a long history, with cave paintings in France dating back more than 40,000 years. So it is not surprising that our brains are designed to remember engaging stories.
Memory experts have shown that narratives make things easier to remember and understand1. You may have also noticed that words can trigger memories and emotions. So, consider this:
- Just verbally describing an intense situation is enough to activate areas of the brain that deal with emotional responses2 and release a neurochemical called oxytocin3.
- Listening to a story activates the brain areas involved in imagining scenarios4.
- Imagining a visual scene activates the same areas of the brain as when we actually see the scene5.
Oxytocin is a small peptide made in the hypothalamus of mammalian brains3. It is synthesised when we are shown kindness or trusted, and motivates cooperation with others. Release of this molecule can also be induced when we hear or see a character-driven story. We can consider oxytocin as the neural substrate for the Golden Rule: If you treat me well, in most cases my brain will synthesize oxytocin and this will motivate me to treat you well in return. In marketing, this means that a well-told story could get a customer to buy from your company, or donate to your cause.
Therefore, a story must sustain attention and contain emotional content to induce the release of oxytocin and make a positive, lasting impression in your listener’s brain.
What makes a story successful?
Keith Quesenberry, marketing professor at Johns Hopkins University, and Michael Coolsen from Shippensburg University conducted a two-year analysis of 108 Super Bowl commercials to investigate what makes an ad successful6. In doing so, they realised that the secret ingredient wasn’t that secret, but had already been used by good old Shakespeare in his famous five-act plays. As early as 335 BC, Aristotle began to develop dramatic theory, and his ideas were later expanded by the German novelist and playwright Gustav Freytag into what is known as Freytag’s Pyramid – the five-act dramatic structure that Shakespeare’s plays exemplify.
Analysing the Super Bowl commercials, Quesenberry and Coolsen found that ads with more acts (a more complete story with a plot) achieved higher ratings.
When we hear a story, the classical language regions in our brain are activated. In addition, a story following Freytag’s Pyramid is associated with activations in areas beyond these conventional verbal regions7.
But why do the stories stick with us? Well, it is because we don’t just listen to them: we see the images and feel the emotions; we actually experience the story as if it were happening to us (and therefore the oxytocin is released). And again, our brain is involved in this ‘experience’ – more specifically, the mirror neuron system. Mirror neurons are a class of nerve cells that modulate their activity both when an individual executes a specific motor act and when they observe the same or a similar act performed by another individual8. A study by Ramachandra et al. employed psychophysiological methods to elucidate the role of this system in processing vocal emotions9. Skin conductance and heart rate were measured in 25 undergraduate students while they were both listening to emotional vocalisations and thinking about them (internal production). The results revealed changes in skin conductance response and heart rate during both the “listening” and “thinking” conditions, suggesting an active role of the mirror neuron system in processing emotions from stories.
To sum up: our brain, nerve cells and chemicals are why storytelling is such a powerful tool – one that, correctly used, can work miracles in marketing. And when building your story, turn to Shakespeare to make it memorable!
What do you think about storytelling? Does it work? Tweet me your thoughts at @fraidifrida
- Baddeley, A. D. (1999). Essentials of human memory. New York: Psychology Press.
- Wallentin, M., Nielsen, A. H., Vuust, P., Dohn, A., Roepstorff, A., & Lund, T. E. (2011). Amygdala and heart rate variability responses from listening to emotionally intense parts of a story. NeuroImage, 58, 963-973.
- Zak, P.J. (2015). Cerebrum, February 02.
- AbdulSabur, N. Y., Xu, Y., Liu, S., Chow, H., Baxter, M., Carson, J., & Braun, A. R. (2014). Neural correlates and network connectivity underlying narrative production and comprehension: A combined fMRI and PET study. Cortex, 57, 107-127.
- Kosslyn, S. M., Alpert, N. M., Thompson, W. L., Maljkovic, V., Weise, S., Chabris, C., Hamilton, S. E., Rauch, S. L., & Buonanno, F. S. (1993). Visual mental imagery activates topographically organized visual cortex: PET investigations. Journal of Cognitive Neuroscience, 5, 263-287.
- Quesenberry, K.A. & Coolsen, M.K. (2014). What Makes a Super Bowl Ad Super? Five-Act Dramatic Form Affects Consumer Super Bowl Advertising Ratings. Journal of Marketing Theory and Practice, 4, 437-454.
- Babajani-Feremi, A. (2017). Neural Mechanism Underlying Comprehension of Narrative Speech and Its Heritability: Study in a Large Population. Brain Topography, Feb 18.
- Kilner, J.M and Lemon, R.N. (2013) What We Know Currently about Mirror Neurons. Current Biology 23, R1057–R1062
- Ramachandra, V., Depalma, N., & Lisiewski, S. (2009). The role of mirror neurons in processing vocal emotions: Evidence for psychophysiological data. International Journal of Neuroscience, 119, 681-690
How to Communicate Complex Science to the Public
Scientific research is moving faster than ever before, with ground-breaking discoveries moving from the lab to our everyday lives more quickly with each new breakthrough. Innovative science is now such an important part of the world we live in that it is crucial for the public to be able to understand the science behind these discoveries. The cutting edge of science, for example, is often surrounded by debate over ethical implications and complex benefits and risks. To take an active role in these discussions and make informed decisions, people need to be educated in a way they can understand.
However, until recently, scientists were not trained to communicate their science effectively to the public. Published papers communicating the latest scientific research focus heavily on accuracy of information and high levels of detail – so much so that, in some cases, a translator would be needed to make sense of them.
So how do you go about communicating these complex subjects to the public when they are hard enough to convey to a trained scientist?
1. Know your audience
Number one, and by far the most important: know and define whom you are talking to. How old are they? What is their previous education? What are their interests or life experiences? These are all important factors to consider when communicating science and when applying the next six pieces of advice. A 10-year-old at school, a college-educated 50-year-old and a high-school-educated 30-year-old are all very different audiences and will require different tactics to reach them. And whoever they are, talk to them as a human, not as a ‘scientist’ – what you are talking about affects everyone.
2. Talk to them as an individual
Once you know who your audience is, talk to them. I recommend using the first person as much as possible when communicating to the public in general, but especially when trying to educate. Half the battle with communicating complex subjects is engaging the audience, and the first person lends itself to this well.
3. Tell the story
When conveying anything, not just science, it is most engaging to do this as a story. Have a clear beginning, middle and end. Think about how to set the scene, get into the ‘whys’ before the ‘hows’. This way you will engage the reader and keep them interested as one point clearly and simply leads on to the next. Don’t delve into the nitty-gritty too early (or at all if it isn’t relevant), as you will defeat the whole point of trying to tell the story.
4. Make it relevant
The truth is, as interesting as you might find this area of science, the public won’t care about the story unless it has a relevant impact. The good thing is that science will inevitably have an impact on everyone in some way; you just have to find it. Start your story with an experience, event or feeling that they can identify with and go from there. I find it useful to start with the big picture: what is the end goal of this research?
5. Show and tell
We all know that a picture is worth 1000 words, but what is important is that those 1000 words are in a language everyone on the planet can understand. Whether your audience has previous knowledge of science or none at all, illustrating your point with an image/diagram will always help.
If you cannot find a suitable image, or you are trying to explain a concept, then you can paint a picture in the imagination of your audience. Analogies and examples go a long way to making a complex concept easy to understand. Talking about a nucleus doesn’t mean a lot to many but the “brain of a cell” explains it well. Even if it is not 100% accurate it gets across your point without confusing the reader. Which leads me on to…
6. Let the little things go
The process of simplifying a complex scientific concept can be painful for those who have deep and detailed knowledge of the subject. The nucleus is not the brain of the cell. It’s just not, and I understand why that annoys cell biologists and neuroscientists alike. A nucleus has no consciousness, can’t think and isn’t structurally comparable to a central nervous system in any way. But it’s a good enough comparison if you are trying to convey that it controls the cell.
My advice (that may be easier said than done) is to let the little things slide and resist the urge to explain too much too soon. No analogy will be perfect, but try not to criticise one that gets across the point. Who knows, if you really engage a reader they may go on to learn more about the subject and discover the details themselves.
7. Leave the politics aside
Science is very often the subject of political, ethical and religious debates. For a lay audience, it can be even more difficult to isolate fact from opinion, so it is up to scientists either to make the differentiation clear or to remove opinion altogether. My view is that it is up to science communicators to convey the facts and discuss the amazing job that science has done to achieve these breakthroughs; it is the job of political commentators, ethical debaters and even philosophers to argue whether it is right or wrong. By all means give an opinion, but make sure your audience knows that is what it is.
So that is it for my top tips on how to communicate your science. Let me know if you found this helpful, and if you have any tips you would add, tweet me @GabyAtNotch.
Asthma, what is it and how do we treat it?
Today, 2nd May 2017, is World Asthma Day, a day dedicated to asthma prevention, diagnosis and treatment.
What is asthma?
Asthma is a heterogeneous disease characterised by chronic airway inflammation and variable airway obstruction that is reversible, either spontaneously or after treatment. It affects people of all ages and often starts in childhood, although it can also appear for the first time in adults. The disease is long-term, or chronic, and its prevalence varies widely between countries, although the disparity is narrowing as prevalence rises in low- and middle-income countries and plateaus in high-income countries.
An estimated 300 million people worldwide suffer from asthma, and the number is expected to grow by more than 100 million by 2025. Approximately 250,000 people die prematurely from the disease each year – and almost all of these deaths are avoidable.
There’s currently no cure for asthma, but there are simple treatments that can help keep the symptoms under control so the disease doesn’t have a significant impact on the patient’s life. Some people, particularly children, may eventually grow out of asthma, but for many it is a lifelong condition.
Inhaled corticosteroids are the dominant anti-inflammatory treatment for asthma and are recommended at all stages of the disease except the mildest. The inhaled corticosteroids can be combined with long-acting beta-2 agonists – symptom controllers that help open the airways. (Reference: http://www.aaaai.org/conditions-and-treatments/asthma)
In addition, leukotriene modifiers can further relieve symptoms for some patients, as leukotrienes are important mediators in asthma. Produced by eosinophils, mast cells and macrophages, they contribute to the chronic inflammation seen in asthma.
New drug treatments
In addition to traditional treatments, new drugs are being developed to relieve the different symptoms of asthma. One of them, the anti-IL-5 antibody mepolizumab, has recently been approved in both Sweden and the UK.
This drug is used to help patients with severe, difficult to treat asthma. Approximately five per cent of asthma patients fall within this category, but since asthma is such a prevalent disease, this proportion adds up to quite a few people.
Mepolizumab targets severe eosinophilic asthma – where the inflammation of the airways is linked to a particular type of white blood cell (eosinophils). It is a humanised monoclonal antibody that binds to interleukin-5 (IL-5) and hinders IL-5 from binding to its receptor on eosinophils, leading to a decrease of eosinophils in blood, tissue and sputum. It is believed that around 40% of people with severe asthma will have an eosinophilic phenotype – meaning that they may be able to benefit from the new treatment.
Mepolizumab is administered by subcutaneous injection every two to four weeks. Despite the high cost of the drug, doctors are positive.
“A very badly affected group of patients can get help and if a few of these individuals can get a better control over their asthma, their need for healthcare would decrease and their ability to work would increase. This could mean economic benefits for both healthcare and the society,” says Christer Jansson, professor and consultant at the lung and allergy clinic at Akademiska sjukhuset, Uppsala, Sweden.
Benralizumab is another drug targeting eosinophilic asthma that is undergoing testing right now. It works through a different pathway from mepolizumab, targeting the IL-5 receptor itself and causing eosinophil apoptosis (cell death). One potential advantage of benralizumab is that it can be given less often – every two months instead of every two to four weeks – which may lower the cost of treatment.
Into the future
Hopefully these drugs are just the first of a new line of treatments available targeted at severe asthma. Research is needed to help patients with other types of severe asthma and better diagnostic tests are needed to help ensure that people can have a confirmed diagnosis quickly. This will mean appropriate treatments can be offered, freeing people to go to work, school, raise families and live unrestricted lives that are not overshadowed by asthma.
What are your thoughts on future treatments and diagnostics for asthma? Let me know @fraidifrida
Are we the generation to eliminate one of the biggest killers in human history?
April 25th marks World Malaria Day, a day dedicated to promoting the global efforts to understand and control malaria – one of the biggest killers in human history. A disease so deadly, some researchers believe it may be responsible for the deaths of almost half of all people who have ever lived.
Caused by different forms of the Plasmodium parasite, there are four types of this life-threatening disease, of varying severities. In its most serious form it can affect the kidneys and brain, causing anaemia, coma and death. Malaria is present in over 90 countries and roughly half of the world’s population is currently at risk of catching the disease, with the greatest burden falling on the least developed areas, where there is very limited access to life-saving prevention, diagnosis and treatment.
How is malaria spread?
It is quite fitting that this lethal disease is transmitted to people by the deadliest animal on the planet – the mosquito. The mosquito itself does not benefit from transmitting the malaria parasite; it is merely the disease vector. But, having survived for hundreds of millennia, with a population in the trillions and the ability to lay hundreds of eggs at a time, it is an organism that certainly makes a very effective carrier.
A mosquito bite is simply the beginning of the process for the Plasmodium sporozoites (an immature form of the parasite), which have accumulated in the mosquito’s salivary glands, ready to be released into your body once your skin has been penetrated. This is where the human infection begins: the sporozoites parasitise the liver, where they lie dormant as they mature and multiply into merozoites. The cells they inhabit eventually erupt and the merozoites are released into the bloodstream, cunningly disguising themselves with the liver cell membranes to avoid an immune attack. Here they begin their second assault, causing red blood cells to erupt and release toxins that stimulate an immune response – it is this that leads you to experience flu-like symptoms such as fever. In severe cases, if the blood-brain barrier is breached, this can lead to a coma, neurological damage or even death.
The current situation
There have been large-scale efforts to eradicate malaria over the last 75 years. During the WHO’s anti-malarial campaign of the 1950s and 60s, for example, DDT was used and, at the time, hailed as kryptonite to mosquitoes. Bill Gates has famously stated that the world’s fight against malaria is one of the greatest success stories in the history of human health, and over the last couple of decades there certainly has been a significant decline in the global burden of malaria. In fact, since 2000, almost 60 countries have seen a drop of at least 75% in new malaria cases, contributing to a 37% drop globally. However, the 2016 WHO report shows that in 2015 alone more than 400,000 people died of malaria and 214 million were infected. So the job is far from finished.
Target Malaria – a new approach
There have been remarkable advances in gene-editing technologies in recent years, so one of the main focuses in malaria research lies in exploring different strategies to reduce or modify the populations of Anopheles mosquitoes – specifically, the three species in this genus that are responsible for most of the malaria transmission in Africa. Target Malaria is a not-for-profit research consortium that aims to develop and share technology for malaria control. Its focus is reducing the numbers of the deadliest malaria-transmitting mosquito in Africa – Anopheles gambiae. Specifically, the consortium is interested in targeting female mosquitoes, as these are the only ones that bite, making them an effective lever for controlling population size. Target Malaria is investigating the potential of using nuclease enzymes, which cut specific sequences of DNA, to modify mosquito genes. By changing certain genes, malarial resistance, female infertility or almost exclusively male offspring can be induced. The researchers are inserting genes that code for these enzymes into mosquito eggs, with the hope of affecting their reproduction. One example involves nucleases that cut the X chromosome while males are making their sperm, resulting in mainly male offspring. Alongside this, researchers are also investigating how to disrupt the fertility of female mosquitoes to reduce the number of offspring, as well as engineering mosquitoes that are unable to transmit malaria.
These scientists are utilising a method called ‘gene drive’, a powerful emerging technology that is able to override the normal rules of inheritance to ensure that (almost) all offspring acquire a trait, as opposed to just half, as would normally be the case, allowing the trait to spread extremely quickly.
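To put rough numbers on that, here is a toy calculation comparing a perfect drive with ordinary inheritance. It assumes random mating, no fitness cost and 100% conversion of carriers – real drives are leakier than this, so treat it as an upper bound on the speed of spread:

```python
# Toy gene drive arithmetic. With a perfect drive, every heterozygote is
# converted to a homozygote, so the allele's frequency gains an extra
# p*(1-p) each generation. Under ordinary Mendelian inheritance and
# random mating, the allele frequency simply stays where it started.
def next_freq(p: float, drive: bool) -> float:
    return p + p * (1 - p) if drive else p

p_drive = p_mendel = 0.01   # release carriers at 1% of the population
for gen in range(1, 11):
    p_drive = next_freq(p_drive, drive=True)
    p_mendel = next_freq(p_mendel, drive=False)
    print(f"generation {gen:2d}: drive {p_drive:.3f}   mendelian {p_mendel:.3f}")

# The driven allele approaches fixation in around ten generations -
# months, for a fast-breeding insect - while the ordinary allele never
# rises above its release frequency.
```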
Nowhere are the devastating effects of malaria as obvious as in sub-Saharan Africa, where hundreds of thousands fall victim each year, making up 90% of the total mortality count for the disease. Target Malaria researchers are currently working in Mali, Uganda and Burkina Faso, and Bana, a small village in Burkina Faso, has the potential to be the site of a revolutionary genetic experiment. At Imperial College London, gene drive mosquitoes are being designed to produce fewer female offspring, or to be unable to reproduce at all, with the plan of eventually releasing them into the wild in Bana. The hope is that this would nearly eradicate Anopheles gambiae, to a point sufficient to prevent malaria transmission.
So what are we waiting for?
For one thing, the communities need to be prepared for the release. There needs to be education, not just regarding genetic engineering and the impact the release would have, but also basic genetics – a challenge in a community where there is no equivalent term even for the word gene. Additionally, it will still be years before scientists are able to fully develop and test gene drive mosquitoes in this manner.
If an experiment of this type is successful in the future, not only could this essentially eradicate malaria, but it could also pave the way for eliminating other mosquito-borne diseases such as dengue fever or even other insect-transmitted diseases like Lyme disease. However, humans have never before changed the genetic code of a free-living organism on this scale and released it into the wild. This genetic-engineering technology is very powerful and definitely needs to be treated as such. But, with millions dying and suffering at the hands of malaria each year, should we look to do this sooner rather than later?
What do you think? Could we be the generation that ends one of the oldest and deadliest diseases in human history? Tweet me your thoughts @PranikaAtNotch.
Progress for patients with Parkinson’s disease
On April 11th this year, World Parkinson’s Day will mark 262 years since James Parkinson was born and 200 years since he published his essay ‘On the Shaking Palsy’, which led to an official recognition of Parkinson’s Disease (PD). Today, it’s estimated that over 10 million people worldwide have PD. Despite widespread awareness of PD and its most common symptoms, scientists don’t know why PD develops, and there is no cure. As a result, treatment has been restricted mostly to drugs that ease the symptoms, and physiotherapy.
Researchers have been exploring PD extensively over the decades and are closer to understanding its underlying biology. These studies are leading to promising new drug treatments that are now entering clinical trials, as well as new possibilities for reversing PD by repairing patients’ brains. Here’s a quick summary of a few recent developments.
What is Parkinson’s?
PD is a progressive neurodegenerative disease. It causes nerve cells in parts of the brain that control movement to stop working and die off. In healthy brains, these neurons rely on the brain chemical, dopamine, to communicate with one another. Replacing the lost dopamine in PD patients’ brains has therefore been the focus of many treatments over the decades.
Although PD is a degenerative disease that is more common in older people, we now know it is not specifically a disease of old age: around five to ten per cent of PD patients are aged under 50. Currently, there are no biochemical tests for PD; diagnosis depends on observation of the patient by a clinical and/or neurological specialist.
Every patient’s experience of PD can be different, but common symptoms include tremors – especially in hands or fingers when the limbs are at rest, slowness of movement and stiff, rigid muscles. These effects can be painful as well as debilitating, and become progressively worse.
It’s a challenging disease to diagnose, predict and treat for several reasons. The speed at which the disease progresses and symptoms develop can vary from one patient to the next. Sometimes Parkinson’s is hereditary, but most of the time it’s not. More recently, scientists have discovered that Parkinson’s can also affect parts of the brain that don’t control movement, resulting in a variety of ‘non-motor’ effects that include mental illness such as depression.
Since the 1960s, PD patients have been prescribed drugs, such as levodopa, that increase dopamine in the brain. These drugs help to improve patients’ mobility but are associated with unpleasant side effects that typically get worse over time and can add to the patient’s burden of illness. It’s also common for patients on these drugs to experience sudden “off periods”, when the treatments just stop working. In the long term, the side effects can seriously outweigh the benefits of the treatment, and there is an urgent need for more effective drugs.
Finding new drug treatments
In recent years, scientists have learned more about the biology of Parkinson’s and how it causes nerve cells to malfunction. Researchers have been particularly interested in Lewy bodies, which are clumps of proteins that typically appear in the affected brain cells of PD patients. One of the main components of Lewy bodies is alpha-synuclein, and a number of experiments have shown that alpha-synuclein could play a role in the development of PD. As a result, drug companies are now investigating whether new therapies targeting alpha-synuclein could prevent PD development, or at least slow down the disease progression in patients. Clinical trials have recently started for some of these potential new drugs and the Parkinson’s community is eagerly awaiting the results.
Replacing damaged brain cells
An alternative approach to PD treatment is to transplant new cells into the brain, to replace the dead cells. Several different methods have been tried over the past few decades, including transplants of dopamine-producing foetal cells and, more recently, stem cell grafts. In the late 1980s, researchers at Lund University in Sweden successfully transplanted dopaminergic foetal cells into the brains of 18 patients with Parkinson’s. The majority of the patients showed long-term improvements in their symptoms and some of them were able to stop taking levodopa.
One of the patients from the study died recently, 24 years after the transplant, and post-mortem analysis provided a detailed picture of what happened to the transplant in his brain. During his life, the patient had initially responded very well to the transplant: he was able to come off levodopa completely for a few years, then continued for ten years on a reduced dose. He then started to decline and, by 18 years after the transplant, his disease symptoms were similar to those he had shown before the study. In line with these behavioural observations, post-mortem analysis of the brain showed that the transplanted cells had grown into the damaged brain areas and successfully formed new nerve connections (re-innervation). However, signs of Parkinson’s disease, such as Lewy bodies, were found in a small proportion of the transplanted cells.
Further transplant studies have been carried out since the pioneering Lund study, but with mixed success. However, it has been generally accepted that cell replacement could be beneficial for PD, and researchers are now investigating modified approaches using stem cells that can develop into dopamine-producing neurons when transplanted into the brain.
Stem cells have attracted a lot of interest for repairing human brains and other organs in recent years. These immature cells have not yet differentiated into their final cell type (such as skin, muscle or brain cells) and therefore have important advantages for brain repair. Importantly, stem cells are much more widely available than foetal tissue because they can come from a variety of sources, including adult humans, and can also be grown in the lab. A special type, the induced pluripotent stem cell (iPSC), can be manipulated to grow into almost any type of cell that’s specialised for the brain or body region of interest. Scientists are now researching iPSCs as well as other types of stem cell for transplanting into Parkinson’s brains, and it’s expected that these will soon be ready for testing in PD patients.
Tailoring treatments to patients
Another area of research that could be beneficial for PD in the future is personalised medicine. This approach relies on collecting individual patients’ biological information and using that to decide the best course of treatment for the patient. For example, the data might include details about a patient’s immune system, their genes, and levels of hormones and other proteins or biomarkers. This can provide important information about the patient’s stage of disease and response to treatment. In turn, this helps with their prognosis and finding more tailored treatment regimes. Although much work has yet to be done before new Parkinson’s treatments become widely available, the personalised medicine approach could be particularly beneficial for PD given the variation seen in patients’ symptoms, disease progression and response to existing treatments.
What are your thoughts on future treatments for PD? Let me know Kate@Notch
Lindvall O, Rehncrona S, Brundin P et al. (1989). Arch Neurol 46(6): 615-631.
Lindvall O, Brundin P, Widner H et al. (1990). Science 247(4942): 574-577.
Stoker TB & Barker RA (2016). Regenerative Medicine 11(8): 778-786.
Parkinson’s Disease Foundation
The Michael J Fox Foundation