Mankind’s deadliest chemicals: Nerve Agents
Warfare. Weapons of mass destruction. Poison. Death. All four of these phrases adequately summarise nerve agents.
But what are nerve agents, how do they affect the body, and what are the differences between the agents implicated in the killing of Syrian civilians, the murder of North Korea’s Kim Jong-Nam and, most recently, the attempted murder of Sergei Skripal?
This is the scientific history behind these deadly chemicals.
What are nerve agents?
Unlike poisons such as arsenic and cyanide, nerve agents are not naturally occurring molecules or elements concentrated into lethal doses; they must be synthesised in a laboratory. They belong to a class of compounds known as organophosphates, a term reflecting their chemical makeup of phosphorus, carbon and other elements. The most widespread commercial use of organophosphates is as insecticides. It is perhaps unsurprising, then, that nerve agents have the same effect on humans as insecticides have on insects. A chilling thought.
Nerve agents themselves are liquids at normal temperatures. They are sometimes referred to by the misnomer “nerve gases”, largely because they are often dispersed as aerosols, which can deliver a lethal dose faster. Because they are readily absorbed through the skin, the eyes and the respiratory tract, there are several routes of exposure: inhalation of vapour or aerosol, contact with the skin, or even ingestion.
As the public is now only too aware, nerve agents are lethal, but the dose needed to kill differs between routes of exposure; inhalation is the deadliest. In the case of sarin, the dosage likely to be fatal through inhalation is only around 100 mg·min/m³, whereas roughly 1700 mg·min/m³ is needed to achieve the same effect through contact with the skin. These figures are not simple concentrations but dosages: the product of the airborne concentration of agent and the duration of exposure.
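To make those figures concrete, here is a rough worked example. It assumes Haber’s rule, the approximation that toxic load depends on the product of airborne concentration and exposure time:

dosage = concentration × time, e.g. 50 mg/m³ × 2 min = 100 mg·min/m³

On this estimate, breathing air containing 50 mg of sarin per cubic metre for just two minutes would deliver the dosage likely to be fatal by inhalation.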
Furthermore, the latency period – the time before symptoms appear – varies between the agents. It can be as short as 30 seconds, or on the order of hours.
How do nerve agents affect the body?
As the name suggests, nerve agents work by disrupting the messaging channels of the central nervous system, effectively shutting it down. Cellular messaging is a key component of the nervous system and is achieved through electrical impulses that travel along nerves and pass from neuron to neuron via chemical messengers called neurotransmitters. One very common neurotransmitter is acetylcholine (ACh), which carries these essential impulses through the neuronal network and also mediates muscle contraction. Once ACh has performed its role in transmitting a cell message, it must be broken down to prevent overstimulation of the nervous system. The body does this naturally using the enzyme acetylcholinesterase (AChE), before the process begins anew.
Organophosphate nerve agents disrupt this process. The compounds bind to a serine residue in the active site of the AChE enzyme, rendering it ineffective. A toxic accumulation of acetylcholine results, overstimulating the nervous system.1
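To build an intuition for why inhibiting AChE is so dangerous, here is a minimal numerical sketch in Python. It is a toy model with made-up rate constants, not physiological values: ACh is released at a steady rate and broken down in proportion to the amount of active enzyme, so as the agent knocks out AChE, ACh piles up.

```python
# Toy model of acetylcholine (ACh) build-up as a nerve agent
# inactivates acetylcholinesterase (AChE).
# All rate constants are illustrative, not physiological values.

release = 1.0      # ACh release rate (arbitrary units per hour)
clearance = 0.5    # breakdown rate per unit of active enzyme
inhibition = 0.05  # fraction of remaining AChE inactivated per hour
dt = 0.1           # time step (hours)

ach, enzyme = 0.0, 1.0   # no free ACh at first; enzyme fully active

for step in range(301):
    if step % 50 == 0:
        print(f"t={step * dt:5.1f} h  active AChE={enzyme:4.2f}  ACh={ach:5.2f}")
    enzyme *= 1 - inhibition * dt                     # agent binds AChE irreversibly
    ach += (release - clearance * enzyme * ach) * dt  # release minus enzymatic breakdown
```

With the enzyme intact, ACh settles at a low steady level; as the enzyme is progressively inactivated, the same release rate pushes ACh ever higher, which is the toxic accumulation described above.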
As exposure to a nerve agent increases, this overstimulation and loss of muscular control cause constriction of the pupils, drooling, convulsions, paralysis and respiratory arrest. Without treatment, death is almost inevitable.
Fortunately, these effects can be reversed with an antidote, but it must be administered rapidly. First, the bond joining the nerve agent to the enzyme can be broken using specific “oxime” drugs, regenerating the enzyme for normal use. Speed is of the essence because the bond “ages” over time, strengthening the link between the agent and the active-site serine and rendering oxime drugs ineffective.
The second component of the antidote, atropine, blocks the ACh receptors, inhibiting transmission between cells. In a healthy person atropine is itself a poison, precisely because it disrupts this communication; but for patients poisoned with a nerve agent, it has the potential to save their lives.
The G-series agents
The G-series were the first nerve agents to be created. The story dates back to 1936, when the German scientist Gerhard Schrader was attempting to synthesise a new insecticide cheaper than nicotine. What he created was far more toxic than he had imagined: it is reported that the spilling of a single drop in the laboratory left Schrader and his assistant unable to work for three weeks. The compound would later become known as tabun.
It is perhaps no wonder, then, that upon the outbreak of World War II the German military began preparing for large-scale production of tabun as a replacement for chlorine and mustard gas, the chemical weapons synonymous with World War I. During this effort, further research uncovered even more potent, deadlier compounds. Thankfully, no nerve agents were used in combat during WWII, as the production plants were not fully operational before the collapse of Nazi Germany. But Schrader’s research birthed the G-series of nerve agents, which have continued to be used well into the modern day.
The agents in this series include, in increasing order of potency: tabun (GA), sarin (GB), soman (GD) and cyclosarin (GF).
Sarin is perhaps the most widely recognised agent in this class. Named after the scientists behind its discovery (Schrader, Ambros, Ritter and Van der Linde), it is the most volatile of the G-series, which makes dispersal as a vapour easier.
G-series agents are also classed as non-persistent: they evaporate and disperse relatively quickly, which determines how long an area remains dangerous and how easily it can be decontaminated. Common decontamination methods for sarin include simply washing the exposed area with copious amounts of water to dilute the agent.
The V-series agents
After WWII, pesticide research continued and, ironically, more lethal nerve agents were again developed from attempts to synthesise an effective insecticide. It was in the UK, at Imperial Chemical Industries (ICI), that amiton was created. This insecticide eventually had to be withdrawn from sale because of its toxicity. The research was continued at Porton Down, the chemical weapons research centre near Salisbury, where amiton was given a new name: VE.
Between 1952 and 1955, other V-series nerve agents were synthesised at Porton Down, most notably VX. Typically described as colourless to amber liquids, the V-series agents have lower volatilities than agents such as sarin, so the primary route of exposure is skin contact; aerosol dispersal is difficult. What sets the V-series apart from their G-series brethren is their toxicity. VX is many times more toxic than sarin or tabun, with a lethal dosage of around 10-15 mg·min/m³ for both skin contact and inhalation. Compared with sarin’s figures of 100 and 1700 mg·min/m³ for inhalation and skin contact respectively, VX is clearly a deadly chemical.
The V-series are also persistent agents, owing to their low volatility and low reactivity. Bleach and alkali are effective decontaminants, and often a mix of water and bleach is used on exposed areas.
The Novichok agents
The most recent example of nerve agent use, in the poisoning of Sergei Skripal, has now been identified as belonging to the Novichok class of nerve agents.
The Novichok, or N-series, nerve agents are a secretive class of chemicals that, prior to leaks by Russian defectors in the 1990s, were unknown to the wider world. The spotlight has now been turned on these agents, but very little is known even now. What is known is that they were developed in the Soviet Union in the 1970s or 80s, and various sources, including a compendium of chemical warfare agents, have estimated that Novichok agents are around 10 times more lethal than VX.
When have nerve agents been used?
Thankfully, usage has been largely absent from warfare.
VX itself has been used on multiple occasions. In 1968, the accidental discharge of VX during military testing killed over 3,000 sheep near Dugway, Utah. VX has also been implicated in two assassinations. The first was committed by members of the Japanese Aum Shinrikyo cult, who used it in 1994 to kill a former cult member in Osaka (the same cult was responsible for the Tokyo subway sarin release in 1995, in which 13 people died). Until recently, this was the only confirmed human fatality attributed to VX. That changed in 2017 with the murder of Kim Jong-Nam, the half-brother of North Korean leader Kim Jong-un, alleged to have been carried out by smearing VX across his face.
Turning to the darker side of history, sarin has unfortunately been used to inflict mass, indiscriminate carnage. The first well-documented use was in March 1988, when Saddam Hussein’s forces attacked Kurdish civilians in Halabja, leaving an estimated 5,000 people dead. Sarin has since been documented twice more, alongside other chemical weapons such as mustard and chlorine gas, during the Syrian civil war in 2013 and 2017.
Fortunately, given the serious nature of these chemicals, their synthesis requires expertise, facilities, equipment and funding. It is therefore unlikely that terrorist organisations or rogue chemists would be capable of producing them without extensive assistance. Furthermore, the Chemical Weapons Convention, which came into effect in 1997, outlawed the stockpiling and production of nerve agents such as sarin and VX. Ratified by 192 countries, the treaty also commits its members to destroying existing agents, and in November 2017 a pivotal goal was reached when it was announced that over 96% of declared global chemical weapon stockpiles had been destroyed.
There are few words that I can offer that adequately summarise nerve agents. They can kill or injure indiscriminately. Throughout my research, I was both amazed and terrified by the lethality and potency of what scientists have created over the years. Nerve agents are weapons of war, synonymous with lethality, suffering, pain and death, and are without a doubt amongst the worst of mankind’s self-made monsters.
1. Anders Allgardsson et al., PNAS, 2016, 113, 5514–5519.
The following publications, all accessed on 14/03/2018, provided the bulk of the information in this article, and I encourage the reader to visit them.
- Scientific American: Nerve Agents What are they and how do they work
- Chemistry World: VX
- Chemistry World: What we know about Russia’s Novichok nerve agents
- University of Birmingham: Nerve gas – the dark side of warfare
- Medscape: CBRNE – Nerve Agents, V-series – VE, VG, VM, VX
- Medscape: CBRNE – Nerve Agents, G-series – Tabun, Sarin, Soman
- Compound Interest: Chemical Warfare & Nerve Agents – Part I: The G Series
- Compound Interest: Chemical Warfare & Nerve Agents – Part II: The V Series
Due to the constant stream of updated information, all data and facts published here are correct at the time of writing.
What is circadian rhythm?
Circadian rhythm is integral to our functioning; yet many have never heard of the phenomenon, or don’t know why it’s so significant. With the 2017 Nobel Prize in Physiology or Medicine having recently been awarded to Jeffrey C. Hall, Michael Rosbash and Michael W. Young for their “discoveries of molecular mechanisms controlling the circadian rhythm”, perhaps this will be the year circadian rhythms reach the science spotlight.
So what exactly is circadian rhythm, and why did the researchers exploring it deserve the Nobel Prize?
Circadian rhythm is essentially our body clock: the approximately 24-hour rhythm that occurs in cellular processes in almost every tissue of the body. The 24-hour rhythmicity of the circadian system is driven by environmental time cues, such as the natural cycle of day and night, cycles of rest and activity, and even feeding behaviour. Our bodies then translate these timing cues into molecular oscillations within individual cells to drive our functioning.
Circadian rhythm is controlled by the suprachiasmatic nucleus (SCN), a small region of the hypothalamus near the centre of the brain. The SCN is not actually required for peripheral organs to generate their own rhythms. Rather, it acts like the conductor of an orchestra, guiding each organ to oscillate in the ideal phase for that specific tissue.
Intrinsic circadian clocks are evolutionarily ancient and widespread, with organisms as small and as old as cyanobacteria keeping time with clock systems of their own. In experiments using fruit flies, the 2017 Nobel Prize laureates identified several of the genes that make up the core molecular clock controlling daily biological rhythms.
These include the period gene, which encodes the protein PER. The laureates discovered that PER accumulates in the nucleus of cells during the night and is degraded during the day, producing rhythmic 24-hour oscillations. Beyond this first discovery, the team went on to show that the oscillation is self-regulating: PER suppresses the activity of its own gene, forming a negative feedback loop that cycles roughly once a day.
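To see how a single negative feedback loop can generate sustained cycles, here is a deliberately simplified sketch in Python. It is a toy model with illustrative parameters, not a model of the real PER network: protein made earlier suppresses new production after a delay, so levels repeatedly overshoot and undershoot.

```python
# Toy delayed-negative-feedback oscillator, loosely inspired by the
# period/PER loop. All parameters are illustrative, not measured values.

dt = 0.1                     # time step (hours)
delay = 6.0                  # hours before protein suppresses its own production
lag = round(delay / dt)      # the delay expressed in time steps

protein = [0.1] * (lag + 1)  # history of protein levels (needed for the delay)

for step in range(720):      # simulate three days
    past = protein[-lag - 1]              # protein level `delay` hours ago
    production = 1.0 / (1.0 + past ** 4)  # high past levels shut production down
    decay = 0.12 * protein[-1]            # steady degradation
    protein.append(protein[-1] + (production - decay) * dt)
    if step % 60 == 0:                    # report every 6 simulated hours
        print(f"t = {step * dt:5.1f} h   protein = {protein[-1]:.2f}")
```

Run it and the protein level rises and falls in regular waves: the feedback and the delay alone are enough to make a rudimentary clock.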
With many of our genes coordinated by the circadian clock, its impact on our complex physiology is vast. Circadian research since the laureates’ initial discoveries has exposed how disruptions to the circadian system, for example a nocturnal lifestyle, night-shift work or jet lag, can cause an array of problems with sleep, behaviour, body temperature and metabolism, and may even influence cancer.
Circadian disruption therefore poses a major public health issue that has yet to receive the recognition it deserves. One can only hope that the awarding of the Nobel Prize to circadian researchers will increase public awareness of circadian influences on health, leading to lifestyle choices that better align our physiology with the daily body clock.
Tori Blakeman is PR Account Manager & Writer at Notch. Follow her on Twitter @ttttor.
For more information on circadian clocks and their implication in breast cancer, read Tori’s review in Breast Cancer Research.
Does the weather affect our mood?
You’re on the way to work; it’s overcast and grey, and rain has just started spitting from the sky. How do you feel? And if instead the sun were shining down from clear blue skies, would you feel different? There has been plenty of speculation about the extent to which the weather can affect our mood and actions, but which claims lack evidence, and which are right as rain?
The effect of sunlight
We are all aware of some of the harmful effects of the sun’s UV rays, such as skin cancer, premature skin ageing and retina damage. Yet most of us would agree that exposure to the sun can make us feel happier and more energised. This is because sunlight affects our levels of serotonin, a neurotransmitter that helps regulate mood. When we are exposed to sunlight, the light reaching our eyes triggers signals via the optic nerve to the serotonin-producing regions of the brain, which increase production of serotonin and elevate our mood. Serotonin production is further linked to sunlight because vitamin D helps to maintain high serotonin levels, and our bodies make vitamin D using the sun’s UV rays.

As well as serotonin, sunlight affects our levels of melatonin, a hormone that helps to regulate sleep. When we are exposed to the sun, melatonin secretion decreases, then increases again when the sun goes down. This means that with more sun exposure we feel more awake and energised, while with less sun we can feel tired and sluggish. Serotonin and melatonin thus move in opposite directions, leaving us more awake and happier in sunnier conditions.
Seasonal affective disorder (SAD)
Seasonal affective disorder, often referred to as ‘winter depression’ or the ‘winter blues’, is a form of depression that occurs in a seasonal pattern, typically when a sufferer experiences a lack of light in the winter. There is a higher percentage of sufferers in regions with fewer hours of winter sunlight, such as Scandinavia, the north-western US and Canada. Symptoms can include tiredness, low mood and a lack of motivation, and they resemble the behaviour some animal species show during winter hibernation. To relieve the symptoms of SAD, doctors recommend light therapy, in which a lightbox emitting bright white light provides the body and brain with the light levels they need.
The effect of temperature
The weather often controls the temperature of the environment we live, work and play in, so what are the effects of this temperature? Although sunlight energises us through its effect on melatonin, too much sun can push our bodies away from their optimum temperature, forcing them to work hard to cool down and dissipate heat by perspiring and by dilating blood vessels to radiate warmth. Similarly, when we are too cold our bodies work hard to keep warm, contracting muscles to shiver and raising our body hairs. These processes require energy, leaving us feeling more tired.
Bad weather effects
Most of us would consider rain to be bad weather, and rain may actually cause us slight physical discomfort as well as bringing us down emotionally. One suggested culprit is the drop in atmospheric pressure that accompanies rain, which causes body fluids to move from the blood vessels into the tissues, leading to reduced mobility. Many patients claim that changes in the weather influence their chronic pain, but this may be related to the effects of serotonin and melatonin described above, rather than to changes in humidity, heat or atmospheric pressure.
The weather does seem to have an effect on our mood and how we feel, which in turn touches many aspects of our lives. Either way, there appears to be plenty of good evidence with which to convince your boss that the sunny Barbados office needs a bit more attention.
If you think the weather has an influential effect on your mood tweet me @SarahatNotch
1. http://www.nytimes.com/1981/06/23/science/from-fertility-to-mood-sunlight-found-to-affect-human-biology.html?sec=health&spon=&partner=permalink&exprod=permalink
6. https://www.mind.org.uk/information-support/types-of-mental-health-problems/seasonal-affective-disorder-sad/?gclid=EAIaIQobChMIiuG4jb7g1QIVLJPtCh3L0At9EAAYASAAEgJeTfD_BwE#.WZ6rUZOGPVp
8. https://www.bustle.com/articles/113278-6-scientific-ways-weather-affects-your-mood-so-you-can-adapt-your-mind-and-body-through
Asthma, what is it and how do we treat it?
Today, 2nd May 2017, is World Asthma Day, a day dedicated to asthma prevention, diagnosis and treatment.
What is asthma?
Asthma is a heterogeneous disease characterised by chronic airway inflammation and variable airway obstruction that is reversible, either spontaneously or with treatment. It affects people of all ages and often starts in childhood, although it can also appear for the first time in adults. The disease is long-term, or chronic, and although its prevalence varies widely between countries, the disparity is narrowing as prevalence rises in low- and middle-income countries and plateaus in high-income countries.
An estimated 300 million people worldwide suffer from asthma, and approximately 250,000 die prematurely from the disease each year; almost all of these deaths are avoidable. The number of people with asthma is expected to grow by more than 100 million by 2025.
There’s currently no cure for asthma, but there are simple treatments that can help keep the symptoms under control so the disease doesn’t have a significant impact on the patient’s life. Some people, particularly children, may eventually grow out of asthma, but for many it is a lifelong condition.
Inhaled corticosteroids are the mainstay of anti-inflammatory treatment for asthma and are recommended at all stages of the disease except the mildest. They can be combined with long-acting beta-2 agonists, symptom controllers that help to open the airways. (Reference: http://www.aaaai.org/conditions-and-treatments/asthma)
In addition, leukotriene modifiers can further relieve symptoms for some patients, as leukotrienes are important mediators in asthma: produced by eosinophils, mast cells and macrophages, they contribute to the chronic inflammation of the airways.
New drug treatments
In addition to traditional treatments, new drugs are being developed to relieve the different symptoms of asthma. One of them, the anti-IL-5 antibody mepolizumab, has recently been approved in both Sweden and the UK.
This drug is used to help patients with severe, difficult to treat asthma. Approximately five per cent of asthma patients fall within this category, but since asthma is such a prevalent disease, this proportion adds up to quite a few people.
Mepolizumab targets severe eosinophilic asthma – where the inflammation of the airways is linked to a particular type of white blood cell (eosinophils). It is a humanised monoclonal antibody that binds to interleukin-5 (IL-5) and hinders IL-5 from binding to its receptor on eosinophils, leading to a decrease of eosinophils in blood, tissue and sputum. It is believed that around 40% of people with severe asthma will have an eosinophilic phenotype – meaning that they may be able to benefit from the new treatment.
Mepolizumab is administered by subcutaneous injection every four weeks. Despite the high cost of the drug, doctors are positive.
“A very badly affected group of patients can get help and if a few of these individuals can get a better control over their asthma, their need for healthcare would decrease and their ability to work would increase. This could mean economic benefits for both healthcare and the society,” says Christer Jansson, professor and consultant at the lung and allergy clinic at Akademiska sjukhuset, Uppsala, Sweden.
Benralizumab is another drug targeting eosinophilic asthma that is currently undergoing testing. It works through a different route from mepolizumab: it binds the IL-5 receptor itself, triggering eosinophil apoptosis (cell death). One potential advantage of benralizumab is that it can be given less often, every two months rather than every four weeks, which may lower the cost of treatment.
Into the future
Hopefully these drugs are just the first of a new line of treatments for severe asthma. Research is needed to help patients with other types of severe asthma, and better diagnostic tests are needed so that people can receive a confirmed diagnosis quickly. This will mean appropriate treatments can be offered, freeing people to go to work, attend school, raise families and live unrestricted lives that are not overshadowed by asthma.
What are your thoughts on future treatments and diagnostics for asthma? Let me know @fraidifrida
Are we the generation to eliminate one of the biggest killers in human history?
April 25th marks World Malaria Day, a day dedicated to promoting the global efforts to understand and control malaria, one of the biggest killers in human history. The disease is so deadly that some researchers believe it may be responsible for the deaths of almost half of all people who have ever lived.
Caused by different species of the Plasmodium parasite, this life-threatening disease comes in four main forms of varying severity. At its most serious it can affect the kidneys and brain, causing anaemia, coma and death. Malaria is present in over 90 countries, and roughly half of the world’s population is currently at risk of catching the disease, with the greatest burden falling on the least developed areas, where access to life-saving prevention, diagnosis and treatment is very limited.
How is malaria spread?
It is quite fitting that this lethal disease is transmitted to people by the deadliest animal on the planet: the mosquito. The mosquito itself does not benefit from transmitting the malaria parasite; it is merely the disease vector. But, having survived for hundreds of millennia, with a population in the trillions and the ability to lay hundreds of eggs at a time, it is an organism that makes a very effective carrier.
A mosquito bite is simply the beginning of the process for the Plasmodium sporozoites (an immature form of the parasite), which accumulate in the mosquito’s salivary glands, ready to be released into your body once your skin has been penetrated. This is where the human infection begins: the sporozoites parasitise the liver, lying low as they mature and multiply into merozoites. The cells they inhabit eventually erupt and the merozoites are released into the bloodstream, cunningly disguising themselves with liver cell membranes to avoid an immune attack. Here they begin their second assault, invading red blood cells, which erupt and release toxins that stimulate an immune response; it is this that produces flu-like symptoms such as fever. In severe cases, if the blood-brain barrier is breached, infection can lead to coma, neurological damage or even death.
The current situation
There have been large-scale efforts to eradicate malaria over the last 75 years: during the WHO’s anti-malarial campaign of the 1950s and 60s, for example, DDT was deployed and hailed at the time as kryptonite to mosquitoes. Bill Gates has famously stated that the world’s fight against malaria is one of the greatest success stories in the history of human health, and over the last couple of decades there certainly has been a significant decline in the global burden of the disease. In fact, since 2000, almost 60 countries have seen a drop of at least 75% in new malaria cases, contributing to a 37% drop globally. However, the WHO’s 2016 report shows that in 2015 alone more than 400,000 people died of malaria and 214 million were infected. The job is far from finished.
Target Malaria – a new approach
There have been remarkable advances in gene-editing technologies in recent years, so one of the main focuses in malaria research lies in exploring strategies to reduce or modify populations of Anopheles mosquitoes; specifically, the three species in this genus responsible for most malaria transmission in Africa. Target Malaria is a not-for-profit research consortium that aims to develop and share technology for malaria control. Its focus is reducing the numbers of the deadliest malaria-transmitting mosquito in Africa, Anopheles gambiae, and in particular targeting female mosquitoes, as these are the only ones that bite, making them an effective lever for controlling population size.

Target Malaria is investigating the potential of nuclease enzymes, which cut specific DNA sequences, to modify mosquito genes. By changing certain genes, malarial resistance, female infertility or almost exclusively male offspring can be induced. The researchers are inserting genes that code for these enzymes into mosquito eggs, with the hope of affecting their reproduction. One example involves nucleases that cut the X chromosome while males are making their sperm, resulting in mainly male offspring. Alongside this, researchers are investigating how to disrupt the fertility of female mosquitoes to reduce the number of offspring, as well as engineering mosquitoes that are unable to transmit malaria at all.
These scientists are using a method called ‘gene drive’, a powerful emerging technology that overrides the normal rules of inheritance to ensure virtually all offspring acquire a trait, rather than just half, as would normally be the case, allowing the trait to spread extremely quickly.
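To see why that matters, here is a back-of-the-envelope sketch in Python. It is a deliberately idealised model (random mating, no fitness cost and 100% conversion in carriers, which real drives only approximate): under ordinary Mendelian inheritance a rare allele stays rare, while a perfect drive converts carriers so that they pass the allele to all of their offspring.

```python
# Toy comparison of Mendelian inheritance vs an idealised gene drive.
# Assumes random mating, no fitness cost and 100% drive conversion.

def next_generation(p, drive=False):
    """Return the allele frequency after one generation of random mating."""
    q = 1 - p
    transmit_het = 1.0 if drive else 0.5   # a drive converts heterozygotes
    # offspring frequency = homozygous carriers + heterozygotes * transmission
    return p * p + 2 * p * q * transmit_het

p_mendel = p_drive = 0.01   # start the allele at 1% of the population
for gen in range(1, 11):
    p_mendel = next_generation(p_mendel, drive=False)
    p_drive = next_generation(p_drive, drive=True)
    print(f"gen {gen:2d}: Mendelian {p_mendel:.3f}   gene drive {p_drive:.3f}")
```

In this toy run the Mendelian allele sits at 1% indefinitely, while the driven allele sweeps to nearly 100% of the population within about ten generations.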
Nowhere are the devastating effects of malaria as obvious as in sub-Saharan Africa, where hundreds of thousands fall victim each year, accounting for 90% of the disease’s total death toll. Target Malaria researchers are currently working in Mali, Uganda and Burkina Faso, and Bana, a small village in Burkina Faso, could become the site of a revolutionary genetic experiment. At Imperial College London, gene drive mosquitoes are being designed to produce fewer female offspring, or to be unable to reproduce at all, with the eventual plan of releasing them into the wild in Bana. The hope is that this would suppress Anopheles gambiae to the point where malaria transmission can no longer be sustained.
So what are we waiting for?
For one thing, the communities need to be prepared for the release. There must first be education, not just about genetic engineering and the impact the release would have, but also about basic genetics, which may be a challenge in a community whose language has no equivalent term even for the word ‘gene’. Additionally, it will be years before scientists are able to fully develop and test gene drive mosquitoes in this way.
If an experiment of this type succeeds in the future, it could not only essentially eradicate malaria but also pave the way for eliminating other mosquito-borne diseases such as dengue fever, or even other insect-transmitted diseases like Lyme disease. However, humans have never before changed the genetic code of a free-living organism on this scale and released it into the wild. This genetic-engineering technology is very powerful and needs to be treated as such. But with millions dying and suffering at the hands of malaria each year, should we look to do this sooner rather than later?
What do you think? Could we be the generation that ends one of the oldest and deadliest diseases in human history? Tweet me your thoughts @PranikaAtNotch.
Progress for patients with Parkinson’s disease
On April 11th this year, World Parkinson’s Day will mark 262 years since James Parkinson was born and 200 years since he published his essay ‘On the Shaking Palsy’, which led to an official recognition of Parkinson’s Disease (PD). Today, it’s estimated that over 10 million people worldwide have PD. Despite widespread awareness of PD and its most common symptoms, scientists don’t know why PD develops, and there is no cure. As a result, treatment has been restricted mostly to drugs that ease the symptoms, and physiotherapy.
Researchers have been exploring PD extensively over the decades and are closer to understanding its underlying biology. These studies are leading to promising new drug treatments that are now entering clinical trials, as well as new possibilities for reversing PD by repairing patients’ brains. Here’s a quick summary of a few recent developments.
What is Parkinson’s?
PD is a progressive neurodegenerative disease. It causes nerve cells in parts of the brain that control movement to stop working and die off. In healthy brains, these neurons rely on the brain chemical dopamine to communicate with one another. Replacing the lost dopamine in PD patients’ brains has therefore been the focus of many treatments over the decades.
Although PD is a degenerative disease that is more common in older people, we now know it is not specifically a disease of old age: around five to ten per cent of PD patients are aged under 50. Currently, there are no biochemical tests for PD; diagnosis depends on observation of the patient by a clinical and/or neurological specialist.
Every patient’s experience of PD can be different, but common symptoms include tremors (especially in the hands or fingers when the limbs are at rest), slowness of movement and stiff, rigid muscles. These effects can be painful as well as debilitating, and become progressively worse.
It’s a challenging disease to diagnose, predict and treat for several reasons. The speed at which the disease progresses and symptoms develop can vary from one patient to the next. Sometimes Parkinson’s is hereditary, but most of the time it’s not. More recently, scientists have discovered that Parkinson’s can also affect parts of the brain that don’t control movement, resulting in a variety of ‘non-motor’ effects that include mental illness such as depression.
Since the 1960s, PD patients have been prescribed drugs such as levodopa that increase dopamine in the brain. Such drugs help to improve patients’ mobility but are associated with unpleasant side effects that typically get worse over time and can contribute to the patient’s illness. It’s also common for patients on these drugs to experience sudden “off periods” when the treatments just stop working. In the long term, the side effects can seriously outweigh the benefits of the treatment and there is an urgent need for more effective drugs.
Finding new drug treatments
In recent years, scientists have learned more about the biology of Parkinson’s and how it causes nerve cells to malfunction. Researchers have been particularly interested in Lewy bodies, which are clumps of proteins that typically appear in the affected brain cells of PD patients. One of the main components of Lewy bodies is alpha-synuclein, and a number of experiments have shown that alpha-synuclein could play a role in the development of PD. As a result, drug companies are now investigating whether new therapies targeting alpha-synuclein could prevent PD development, or at least slow down the disease progression in patients. Clinical trials have recently started for some of these potential new drugs and the Parkinson’s community is eagerly awaiting the results.
Replacing damaged brain cells
An alternative approach to PD treatment is to transplant new cells into the brain, to replace the dead cells. Several different methods have been tried over the past few decades, including transplants of dopamine-producing foetal cells and, more recently, stem cell grafts. In the late 1980s, researchers at Lund University in Sweden successfully transplanted dopaminergic foetal cells into the brains of 18 patients with Parkinson’s. The majority of the patients showed long-term improvements in their symptoms and some of them were able to stop taking levodopa.
One of the patients from the study died recently, 24 years after the transplant, and post-mortem analysis provided a detailed picture of what happened to the transplant in the patient’s brain. During his life, the patient had initially responded very well to the transplant: he was able to come off levodopa completely for a few years, then continued for ten years on a reduced drug dose. The patient then started to decline and, by 18 years after the transplant, the patient’s disease symptoms were similar to those shown before the study. In line with these behavioural observations, post-mortem analysis of the patient’s brain showed that the transplanted cells had grown into the damaged brain areas and successfully formed new nerve connections (re-innervation). However, signs of Parkinson’s disease, such as Lewy bodies, were found in a small proportion of the transplanted cells.
Further transplant studies have been carried out since the pioneering Lund study, but with mixed success. However, it has been generally accepted that cell replacement could be beneficial for PD, and researchers are now investigating modified approaches using stem cells that can develop into dopamine-producing neurons when transplanted into the brain.
Stem cells have attracted a lot of interest for repairing human brains and other organs in recent years. These immature cells have not yet differentiated into their final cell type (such as skin, muscle or brain cells) and therefore have important advantages for brain repair. Importantly, stem cells are much more widely available than foetal tissue because they can come from a variety of sources, including adult humans, and can also be grown in the lab. One special type, the induced pluripotent stem cell (iPSC), can be coaxed to grow into almost any cell type specialised for the brain or body region of interest. Scientists are now researching iPSCs, as well as other types of stem cell, for transplanting into Parkinson’s brains, and it’s expected that these will soon be ready for testing in PD patients.
Tailoring treatments to patients
Another area of research that could be beneficial for PD in the future is personalised medicine. This approach relies on collecting individual patients’ biological information and using that to decide the best course of treatment for the patient. For example, the data might include details about a patient’s immune system, their genes, and levels of hormones and other proteins or biomarkers. This can provide important information about the patient’s stage of disease and response to treatment. In turn, this helps with their prognosis and finding more tailored treatment regimes. Although much work has yet to be done before new Parkinson’s treatments become widely available, the personalised medicine approach could be particularly beneficial for PD given the variation seen in patients’ symptoms, disease progression and response to existing treatments.
What are your thoughts on future treatments for PD? Let me know Kate@Notch
Lindvall O, Rehncrona S, Brundin P et al. (1989). Arch Neurol 46(6): 615-631.
Lindvall O, Brundin P, Widner H et al. (1990). Science 247(4942): 574-577.
Stoker TB & Barker RA (2016). Regenerative Medicine 11(8): 778-786.
Parkinson’s Disease Foundation
The Michael J Fox Foundation
Revealing the mind of a synaesthete
“Human life is but a series of footnotes to a vast obscure unfinished masterpiece” – Vladimir Nabokov, Lolita
Have you ever wondered if you’re not seeing the whole picture? Can science even define what the ‘whole picture’ is and categorise human sensation? And what can we learn from the experiences of synaesthetes today?
Synaesthesia is a neurological demonstration that our brains do not all divide up the senses in the same way. It takes many varied forms, but all share one core feature: stimulation of one sense, such as hearing, triggers a sensation in another, such as taste.
Brain imaging studies have found that synaesthetic colour experience activates colour regions in the occipito-temporal cortex. Activation has also been reported in six further brain regions, spanning motor and sensory areas as well as ‘higher-level’ regions in the parietal and frontal lobes. This has led to speculation that a synaesthete’s brain is wired differently, or has extra connections.
Interestingly, synaesthesia surfaces across a spectrum of artistic work in music, art and literature, and it is studied with as much interest by the arts as by science. I was first introduced to the phenomenon at a talk hosted by the Manchester Literary and Philosophical Society.
Many of today’s music artists have said they have synaesthetic senses, among them Pharrell, Kanye West, Billy Joel and Stevie Wonder. Pharrell describes his 2013 single ‘Happy’ as “yellow with accents of mustard and sherbet orange”. Synaesthesia almost neurologically embodies the idea that people can have different ‘taste’ in the arts.
My favourite example of synaesthesia at work is in Vladimir Nabokov’s masterpiece Lolita, where much of the language is chiastic (ABBA) to create alliterative, musical sounds. Nabokov himself had grapheme-colour synaesthesia, the association of colours with numbers and letters. Some synaesthetes say his prose reads as if it were meant to be visually pleasing, almost like a word painting.
“Lolita, light of my life, fire of my loins. My sin, my soul. Lo-lee-ta: the tip of the tongue taking a trip of three steps down the palate to tap, at three, on the teeth. Lo. Lee. Ta.”
Here, Nabokov imagines the syllables of Lolita walking from his ‘palate’ to his ‘teeth’. The passage suggests that his beautiful idea of Lolita does not come from a physical person but is created from the sensory feeling of her syllables inside his head. We could argue the narrator’s love for Lolita stems from a vivid synaesthetic experience.
So are we non-synaesthetes missing out? Many synaesthetes say they feel sorry for those without the condition. Yet it is not without its downsides, such as feeling intensely disgusted by common sounds or words. Personally, I think synaesthesia is still relatable to those without the condition. Nabokov himself framed it as an extension of taste, a way of experiencing the world more intensely:
“I am therefore inclined to think of my synaesthesia as an extension of the typical writer’s overinvestment in words: an extension…”
What are your thoughts on taste and synaesthesia? Do you think science can understand it better by analysing art? Send me your Qs via Twitter @ZaraAtNotch
Gaby’s Top 5 Science moments 2016
This year, for my top 5 science moments, I have taken a different tack from past yearly reviews. I have resisted the temptation to choose a discovery from each discipline of science for the sake of balance and have instead included the stories that spoke to me. So, if you are looking for a wide-reaching view of the science of 2016, this may not be for you. But if you are interested in the discoveries that captured the imagination and hopes of this geneticist, grab a cuppa. Here are my top 5 moments from 2016.
5. Pocket-sized DNA sequencer
The ability to sequence a genome and read the code of life is arguably one of the greatest breakthroughs in the history of modern science. However, the hardware involved is normally at least the size of a microwave oven and can be very fragile. This year, a biotechnology company made a significant breakthrough with its sequencing machine, the MinION. The sequencer weighs only 86 grams and is small enough to be forgotten in a pocket, and this year it was shown to work not just in the lab but also in microgravity.
In June this year the MinION was sent to the International Space Station to be tested on board. The future holds great things for this technology and space exploration. In theory, the crew could use it to quickly identify the precise cause of any illness to ensure that it is treated effectively. This type of diagnosis is imperative for future missions to Mars and beyond when there is no possibility to restock the limited supply of antibiotics.
However, it is not only space travel that will benefit from this development. Shrinking DNA sequencing to pocket size means it could be combined with other technologies to allow patients to monitor levels of certain DNA sequences at home. In theory, cancer patients could track the progress of their disease by the level of fusion chromosomes, and HIV patients could monitor viral levels as easily as diabetics monitor their blood sugar.
Whatever the future uses are, the pocket-sized DNA sequencing technology opens new doors for genomics, therapeutics and disease management.
4. Promising results from stem cell treatments for stroke
Stroke research, especially developing therapies, is a complex field that is subject to many challenges. For a long time, the industry belief was that the most effective treatment for stroke would be one that can be administered to patients as soon as possible after the fact, even in the back of an ambulance.
However, new research from Stanford University has broken new ground with a treatment that can be administered up to 3 years after a stroke. Adult mesenchymal stem cells were injected into the brains of volunteer stroke patients between 6 months and 3 years after their stroke. Normally, doctors would expect no further improvement after 6 months; yet after the procedure, one patient regained movement in her right arm and right leg despite having been confined to a wheelchair for the previous few years.
Mesenchymal stem cells have interesting therapeutic potential, as they have been shown to suppress the immune system, which may have contributed to the high success rate and low number of side effects observed in this trial.
Whatever the theory and the reason behind the success, this trial has paved the way for more successful therapies for stroke victims and has given hope to those that currently live with a disability as a result.
3. Progress in the field of human CRISPR research
2015 was undoubtedly the year of gene editing. As Science’s breakthrough of the year and with multiple advances, it was the beginning of the gene editing revolution. As a result, this year was expected to be when all of that research and progress was finally applied and the true value of CRISPR was revealed. It did not disappoint.
2016 saw the first human trial, in China, using CRISPR-Cas9 in an experimental therapy for a patient with advanced lung cancer. In this trial, CRISPR was used to disable the gene for PD-1 in immune cells taken from the patient, with the aim of unleashing those cells against the tumour and halting the growth of the cancer.
Equally notable progress was made in the USA with the start of a safety test of CRISPR for human use. The trial is administering CRISPR to 18 patients with various cancers, but efficacy will not be assessed. Completion of this safety screen should clear the way for developing CRISPR therapeutics in the USA and encourage investment in applying CRISPR to proven gene editing based therapies, such as the removal of rejection genes with TALENs at Great Ormond Street Hospital, or the addition of HIV resistance genes to patients using ZFNs.
The approval of these trials is a big moment for gene editing based therapeutics. After the death of Jesse Gelsinger in 1999, the industry is understandably cautious about such techniques. However, recent developments, improvements and conflict-of-interest precautions all bring CRISPR-based therapeutics that little bit closer.
2. The continued race for a Zika vaccine
Two years ago, the first reports began to surface of outbreaks of microcephaly in South America. Research quickly converged on the cause, and the Zika virus made headlines worldwide. Reminiscent of the Ebola outbreak, a known virus had become more dangerous and was posing a real threat to millions of people.
The response was instant. Never before have so many corporations, research groups and academics reacted so quickly to develop a vaccine for an outbreak. Some vaccines are on track to finish development in a remarkable and record-breaking 2-year turnaround. Lessons have obviously been learned from the Ebola outbreak and teams are reacting quickly to not miss the critical window for a vaccine.
Many have taken the opportunity of the outbreak to develop innovative vaccine technologies. One such technique involves administering spliced viral DNA. The DNA enters the nuclei of cells, where it directs the production of partial viral particles; antibodies can then be created in response, preparing the body for a future infection. To improve on this, some manufacturers are using RNA as a more flexible alternative that does not need to enter the nucleus at all.
The development of the Zika vaccines has made it into my top 5, not only because new and innovative techniques are being used. The response by the science industry has given me a lot of hope for the future of science. In the face of the crisis, the industry has shown how teams from across the world can work together to create solutions.
1. Discovery of a key moment in evolutionary history
Few moments in evolutionary history can be argued to be as impactful as the point when life transitioned from single cells to complex multicellular organisms. The ability to form a multicellular organism is the point at which life as we know it became possible. This year it was revealed that this breakthrough in evolution might have been the result of a single mutation, the consequence of simple dumb luck.
For multicellular organisms to form, communication between cells is imperative; a failure to communicate can lead to cancer, developmental abnormalities and death. Researchers found that, approximately one billion years ago, a single mutation occurred in the gene GK-PID.
This mutation allowed the GK-PID protein to orient the direction in which cells divide by dictating the position of the mitotic spindle. It works by linking an anchor in the cell membrane to the spindle, and therein lies an intriguing twist: at the time GK-PID mutated, the anchor had not yet evolved!
The reason that this discovery is my number 1 of the year is simple. As a geneticist, I enjoy how this discovery reveals the seldom-admitted secret of biology. Life as we know it, and the key moments of evolution, all came down to plain, old, boring, dumb luck!
So, which of my top 5 got you excited about what science has to offer in 2017? Do you agree with my list? Is there something missing?
Let me know on Twitter @GabyAtNotch
Phantom Limbs and Virtual Reality
Watching the inspirational Paralympic Games this September got me thinking about amputees and the challenges they face. As a neuroscientist, my thoughts went immediately to a condition called Phantom Limb Syndrome, a very peculiar sensation that occurs in around 90% of amputees. It made me wonder how on earth this happens! It didn’t seem logical, so I looked into the background and causes of the syndrome, which led me to some very interesting treatments for patients, spanning a period of over 450 years!
What is a Phantom Limb?
Firstly, let’s get a little background on the term Phantom Limb Syndrome. It is the sensation that a limb is still part of a person after that limb has been amputated. The sensations can be either painful or non-painful: the non-painful ones include feelings of touch, temperature, pressure and often itching, whilst the painful ones usually take the form of burning and shooting pains. This phantom pain does not originate from the site of amputation; it is a completely separate experience. For me it raises the question: how can you ‘feel’ anything in a limb that doesn’t physically exist?
Why does it occur?
This question has baffled scientists since 1552, when the syndrome was first described, and to this day the exact causes remain unclear. More recently, research using Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) scans has led current thinking to place the origin of the feelings in the brain and spinal cord. The portions of the brain that had once been neurologically connected to the nerves of the amputated limb showed activity in the scans while the patient was experiencing phantom pain. The patient was experiencing a genuine sensation of pain in a limb that shouldn’t be able to feel anything; a slightly confusing concept, right?

The explanation for this baffling condition is by no means set in stone, but it is mostly thought to arise from a lack of sensory input from the missing limb. In everyday life, when you move your limbs, the brain constantly receives sensory feedback from the moving limb. After a limb is amputated, that sensory input ceases; the brain becomes confused and triggers the body’s most rudimentary response: pain. A comparable example is tinnitus, where individuals hear a high-pitched noise that doesn’t physically exist, as no one else can hear it; current theory holds that this too is caused by an anomaly in the brain and spinal cord.
Phantom limb pain is extremely common in amputees but varies in duration and severity between patients. It is a type of nerve pain and can be severe and debilitating; it commonly has a serious impact on patients’ quality of life, with some people driven to near madness by the constant pain. The condition is usually treated with drugs such as painkillers, sedative-hypnotics and anticonvulsants, but sadly without much success, often leaving patients to cope with chronic pain. An effective form of treatment is therefore very much in need. But what would happen if we could trick the brain into thinking the limb still existed? This concept was first exploited by the neuroscientist Ramachandran in the 1990s, who placed a mirror between a patient’s limbs and asked them to move their healthy limb and phantom limb whilst looking at the mirror. This effectively tricked the brain into thinking the phantom limb was moving, removing some of the discordance in brain signals and relieving the patient of pain.
Cutting edge treatments
Usually, when I think of the term ‘Virtual Reality’ (VR), I immediately think of state-of-the-art video games, where users are so completely immersed that they feel they are in the game. A recent study has turned this technology to helping sufferers of phantom limb pain. Through VR, users can see an image of their phantom limb moving. The technology makes the virtual image move in accordance with the intact parts of the limb, creating a life-like and extremely believable sensation of an intact limb. This tricks the brain, stops the confusion and therefore reduces the patient’s pain. It is a modern take on Ramachandran’s mirror therapy concept and has proven to be even more effective. It’s exciting to see how technology can improve people’s lives; what are your predictions for the future?
Tweet me your thoughts to @MegAtNotch
Removing Barriers to Technology Innovation
Science, technology and healthcare have advanced dramatically over the past few decades, but there is still great scope for new innovation as technologies continue to develop. True innovation requires stepping into the unknown, and this is often limited by perceived hurdles – including tangible barriers, such as lack of resources, and emotional barriers such as fear. What can be done to help drive innovation forwards? Aside from the obvious factors, such as time, money and fresh ideas, I’d like to consider some of the influences that societal and workplace cultures can have on promoting or preventing progression. I’ve classed them into three broad areas of relevance for the life sciences and pharmaceutical industries.
Collaboration vs Competition
Many industries are moving away from closed, secretive cultures towards more open approaches that allow collaboration and sharing of information between organisations, including private companies. The common aim is to accelerate progress, such as finding new therapies more quickly through sharing academic and industrial scientific research data (eg, Cancer Research Technology’s various programmes). In software, there have been attempts to pool technical expertise across groups of developers and across industries for rapid creation of new software tools and platforms, notably the well-established Linux community and, more recently, the Open Compute Project.
This movement towards greater collaboration could be seen as very risky. It is driven by urgent consumer or end-user needs – conflicting with the usual corporate drivers of increased profit and gain of market share. Furthermore, collaboration between academics and/or companies requires sharing of data that not only gives away perceived knowledge advantages to potential competitors, but ultimately risks losing ownership of intellectual property. Why, then, does it occur? Is it the result of a philanthropic urge, or could there be advantages for participating organisations in addition to producing end-user benefits?
It seems there are potential advantages and these are emerging due to recent economic shifts. The life sciences industry, and particularly pharmaceuticals, remains permanently changed by recent recessions that have resulted in significant layoffs within numerous R&D departments, and many ongoing mergers and acquisitions. There’s less funding available for fundamental academic research and more emphasis on grants with tangible outputs. The industry as a whole is facing greater requirements for accountability, with justification of budgets through demonstrating return on investment.
As a result, many organisations lack the internal resources and expertise they need for scientific discoveries or innovative product development, which are essential to remain successful in the life sciences. Some companies can outsource or insource certain R&D projects and niche expertise, but this still requires budget, project management and building trust with third parties. The alternative is to form true collaborations that rely on different capabilities from each party to achieve the desired goals. There is no client-supplier relationship in such arrangements, and the investment can often be jointly managed, typically requiring time and internal resources as opposed to significant cash budgets. Importantly, the risks can be shared by all contributing parties.
To be successful, this model requires truly equal commitment to the project from all parties and total agreement on the desired outcomes. The priority has to be the success of the project, and this necessitates a change in employee mentality and business cultures.
Whether or not this can be sustained in the long-term remains questionable. Firstly, products arising from inter-organisational collaborations may be innovative but their profits would be diluted across different contributing parties or, in some cases, non-existent: collaborative efforts in the software industry usually aim for open-source software. Secondly, it would have the effect of reducing competition, which would not only be damaging to the economy and reduce consumer choice, but ultimately would take away the need for companies to innovate. Allowing more collaboration between organisations can be beneficial for innovation, but only when it enables true synergy.
Progression vs Privacy
The arrival of smart phones, along with improvements in wireless technologies and mobile data collection, has led to significant changes in the way we make purchases, consume entertainment, and read and engage with media. In turn this has led to large-scale developments in rapid data collection and analysis that have allowed major innovations to emerge, such as fitness bands and other wearable technologies.
These changes also offer great advantages for healthcare, opening new possibilities for automatic submission and monitoring of live outpatient data via smart phone apps. One example is monitoring blood glucose levels in people with diabetes, where digital collection and submission of patient data provides a more accurate, reliable and traceable approach than current self-monitoring methods. Similarly, these technologies hold the key to improved collection and submission of data for clinical trials, which could greatly enhance the quality of trials data as well as reducing the economic and labour burden of current data collection methods.
In countries such as Sweden, where healthcare records and drug dispensation are fully digitalised and linked with every citizen’s personal ID number, these emerging developments are becoming a real possibility. A compulsory ID card system has numerous advantages because the personal ID number can be used for storing almost all personal data. This allows reliable keeping of electronic medical records, as well as instant and hassle-free systems for numerous daily activities, from collecting loyalty points when shopping to receiving parcels, borrowing library books or hiring a car.
However, these ID numbers also hold the key to vital information such as the individual’s address, mobile phone number and even their income and tax returns. In some populations there remains a general aversion to sharing of personal data, despite the widespread embracement of smart phone technologies, and self-submission of data and content to all kinds of apps and platforms. Polling in the UK has established that the majority of Brits are strongly against compulsory ID cards, which are perceived as representing an invasion of privacy. The UK is also relatively over-populated and vital changes – such as an electronic medical records system – that would be necessary to underpin revolutionary digital healthcare innovations remain exceptionally difficult to implement. Furthermore, the country’s over-burdened mobile phone network still can’t guarantee even 3G networks nationwide, which removes the practicality of many new data-collecting technology developments. By contrast, less populated countries, such as Sweden and Finland, that are leading digitalisation of healthcare are also implementing 5G.
Digitalisation of healthcare has great potential to change the lives of patients and healthcare providers, but in some countries the decaying infrastructure combined with societal privacy concerns are impeding implementation of such innovative, life-changing technologies.
Democracy vs Decisiveness
Successful innovation across the life science and pharmaceutical sectors also depends on agility. This is essential for allowing businesses or researchers to respond to new developments, to rethink their strategies and to reshape their ideas accordingly.
Although few business decisions are made by a single person, the way in which decisions are made and information is handled varies from one organisation to the next. This is strongly related to the organisation’s degree of democracy and culture of equality. In the corporate world, it has been traditional to empower small groups with appropriate decision-making responsibilities. These groups may report directly to the senior management and the outcomes of their decisions are fed downwards through the organisation in a single-minded and relatively autocratic manner. This approach is effective and decisive, setting clear boundaries within the work environment. However, it is not particularly open or flexible for accommodating differences of opinion and, in larger organisations with long chains of command and reporting, this can become a very slow-moving and cumbersome process. Furthermore, a rigid and procedural-based mentality is not conducive to developing a creative and innovative working environment.
In some organisations, there is greater emphasis on involving wider groups in decisions. This ensures that many individual voices are heard across different areas of an organisation, and large teams can be used to discuss and finalise the outcomes. This creates a more open, democratic and transparent culture, that’s often assumed to be more conducive to creativity. In reality, too many decision makers can result in extremely prolonged decision-making that requires significant time and resources. In some cases this time and resource may be better spent simply taking the action, rather than discussing what actions to take. An agile workplace culture is vitally important for innovation and creativity, regardless of how many decision-makers are needed to purchase a new light bulb.
What other influences affect innovation and how can we remove these barriers? Contact me @kateatnoch