How to Communicate Complex Science to the Public
Scientific research is moving faster than ever, and ground-breaking discoveries are reaching our everyday lives more quickly with each new breakthrough. Innovative science is now such an important part of the world we live in that it is crucial for the public to be able to understand the science behind these discoveries. The cutting edge of science is often surrounded by debate over ethical implications and complex benefits and risks, so to take an active role in these discussions and make informed decisions, people need to be informed in a way they can understand.
However, until recently, scientists were not trained to communicate their science effectively to the public. Published papers communicating the latest in scientific research focus so heavily on accuracy and detail that, in some cases, they practically require a translator to make sense of.
So how do you go about communicating these complex subjects to the public when they are hard enough to convey to a trained scientist?
1. Know your audience
Number one, and by far the most important: know and define whom you are talking to. How old are they? What is their previous education? What are their interests or life experiences? These are all important factors to consider when communicating science and when applying the next six pieces of advice. A 10-year-old at school, a college-educated 50-year-old and a high-school-educated 30-year-old are all very different audiences, and each requires a different approach. Talk to them as a human, not as a 'scientist', because what you are talking about affects everyone.
2. Talk to them as an individual
Once you know who your audience is, talk to them. I recommend using the first person as much as possible when communicating to the public in general, but especially when trying to educate. Half the battle with communicating complex subjects is engaging the audience, and the first person lends itself to this well.
3. Tell the story
When conveying anything, not just science, a story is the most engaging approach. Have a clear beginning, middle and end. Think about how to set the scene, and get into the 'whys' before the 'hows'. This way you will engage readers and keep them interested, as one point clearly and simply leads on to the next. Don't delve into the nitty-gritty too early (or at all, if it isn't relevant), as you will defeat the whole point of telling the story.
4. Make it relevant
The truth is, as interesting as you might find this area of science, the public won't care about the story unless it has a relevant impact. The good thing is that science will inevitably have an impact on everyone in some way; you just have to find it. Start your story with an experience, event or feeling that your audience can identify with and go from there. I find it useful to start with the big picture: what is the end goal of this research?
5. Show and tell
We all know that a picture is worth 1000 words, but what is important is that those 1000 words are in a language everyone on the planet can understand. Whether your audience has previous knowledge of science or none at all, illustrating your point with an image/diagram will always help.
If you cannot find a suitable image, or you are trying to explain a concept, then paint a picture in the imagination of your audience. Analogies and examples go a long way towards making a complex concept easy to understand. Talking about a nucleus doesn't mean a lot to many people, but the "brain of a cell" explains it well. Even if it is not 100% accurate, it gets across your point without confusing the reader. Which leads me on to…
6. Let the little things go
Simplifying a complex scientific concept can be a painful process for those with deep and detailed knowledge of the subject. The nucleus is not the brain of the cell. It's just not, and I understand why that annoys cell biologists and neuroscientists alike. A nucleus has no consciousness, can't think and isn't structurally comparable to a central nervous system in any way. But it's a good enough comparison if you are trying to convey that it controls the cell.
My advice (that may be easier said than done) is to let the little things slide and resist the urge to explain too much too soon. No analogy will be perfect, but try not to criticise one that gets across the point. Who knows, if you really engage a reader they may go on to learn more about the subject and discover the details themselves.
7. Leave the politics aside
Science is very often the subject of political, ethical and religious debates. For a lay audience, it can be even more difficult to separate fact from opinion, so it is up to scientists either to make the distinction clear or to remove opinion altogether. My view is that it is up to science communicators to convey the facts and discuss the amazing job that science has done to achieve these breakthroughs; it is the job of political commentators, ethical debaters and even philosophers to argue whether it is right or wrong. By all means give an opinion, but make sure your audience knows that's what it is.
So that's it for my top tips on how to communicate your science. Let me know if you found this helpful, and if you have any tips you would add, tweet me @GabyAtNotch.
Asthma, what is it and how do we treat it?
Today, 2nd May 2017, is World Asthma Day, a day dedicated to asthma prevention, diagnosis and treatment.
What is asthma?
Asthma is a heterogeneous disease characterised by chronic airway inflammation and variable airway obstruction that is reversible, either spontaneously or after treatment. It affects people of all ages and often starts in childhood, although it can also appear for the first time in adults. The disease is long-term or chronic and the prevalence in different countries varies widely, but the disparity is narrowing due to rising prevalence in low- and middle-income countries and plateauing in high-income countries.
An estimated 300 million people worldwide suffer from asthma, and approximately 250,000 die prematurely from the disease each year; almost all of these deaths are avoidable. The number of people with asthma is expected to grow by more than 100 million by 2025.
There’s currently no cure for asthma, but there are simple treatments that can help keep the symptoms under control so it doesn’t have a significant impact on the patient’s life. Some people, particularly children, may eventually grow out of asthma, but for many it is a lifelong condition.
Inhaled corticosteroids are the dominant anti-inflammatory treatment for asthma and are recommended at all stages of the disease except the mildest. They can be combined with long-acting beta-2 agonists, symptom controllers that help to open the airways. (Reference: http://www.aaaai.org/conditions-and-treatments/asthma)
In addition, leukotriene modifiers can further relieve symptoms for some patients, as leukotrienes are important mediators in asthma. Produced by eosinophils, mast cells and macrophages, they contribute to the chronic inflammation seen in asthma.
New drug treatments
In addition to traditional treatments, new drugs are being developed to relieve the different symptoms of asthma. One of them, mepolizumab (an anti-IL-5 antibody), has recently been approved in both Sweden and the UK.
This drug is used to help patients with severe, difficult-to-treat asthma. Approximately five per cent of asthma patients fall within this category, but since asthma is such a prevalent disease, this proportion adds up to quite a few people.
Mepolizumab targets severe eosinophilic asthma – where the inflammation of the airways is linked to a particular type of white blood cell (eosinophils). It is a humanised monoclonal antibody that binds to interleukin-5 (IL-5) and hinders IL-5 from binding to its receptor on eosinophils, leading to a decrease of eosinophils in blood, tissue and sputum. It is believed that around 40% of people with severe asthma will have an eosinophilic phenotype – meaning that they may be able to benefit from the new treatment.
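To put those percentages together, here is a quick back-of-the-envelope calculation in Python, treating the estimates above as exact figures (they are, of course, only approximations):

    # Rough estimate of how many people might be eligible for mepolizumab.
    # All inputs are the approximate figures quoted in this post.
    total_asthma_patients = 300_000_000   # estimated worldwide
    severe_fraction = 0.05                # ~5% have severe, difficult-to-treat asthma
    eosinophilic_fraction = 0.40          # ~40% of severe cases are eosinophilic

    severe = total_asthma_patients * severe_fraction
    eligible = severe * eosinophilic_fraction
    print(f"Severe asthma: ~{severe:,.0f} people")      # ~15,000,000
    print(f"Potentially eligible: ~{eligible:,.0f}")    # ~6,000,000

So even this narrow slice of asthma patients could number in the millions worldwide.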
Mepolizumab is administered by subcutaneous injection every two to four weeks. Despite the high cost of the drug, doctors are positive.
“A very badly affected group of patients can get help and if a few of these individuals can get a better control over their asthma, their need for healthcare would decrease and their ability to work would increase. This could mean economic benefits for both healthcare and the society,” says Christer Jansson, professor and consultant at the lung and allergy clinic at Akademiska sjukhuset, Uppsala, Sweden.
Benralizumab is another drug targeting eosinophilic asthma that is currently undergoing testing. It uses a different pathway from mepolizumab, targeting the IL-5 receptor itself and causing eosinophil apoptosis (cell death). One potential advantage of benralizumab is that it can be given less often, every two months instead of every two to four weeks, which may lower the cost of treatment.
Into the future
Hopefully these drugs are just the first of a new line of treatments targeted at severe asthma. Research is needed to help patients with other types of severe asthma, and better diagnostic tests are needed so that people can receive a confirmed diagnosis quickly. This will mean appropriate treatments can be offered, freeing people to go to work or school, raise families and live unrestricted lives that are not overshadowed by asthma.
What are your thoughts on future treatments and diagnostics for asthma? Let me know @fraidifrida
Are we the generation to eliminate one of the biggest killers in human history?
April 25th marks World Malaria Day, a day dedicated to promoting the global efforts to understand and control malaria – one of the biggest killers in human history. A disease so deadly, some researchers believe it may be responsible for the deaths of almost half of all people who have ever lived.
Malaria is caused by different forms of the Plasmodium parasite, and there are four types of this life-threatening disease, of varying severity. In its most serious form it can affect the kidneys and brain, causing anaemia, coma and death. Malaria is present in over 90 countries and roughly half of the world's population is currently at risk of catching the disease, with the greatest burden falling on the least developed areas, where there is very limited access to life-saving prevention, diagnosis and treatment.
How is malaria spread?
It is quite fitting that this lethal disease is transmitted to people by the deadliest animal on the planet – the mosquito. The mosquito itself does not benefit from transmitting the malaria parasite; it is merely the disease vector. But, having survived for hundreds of millennia, with a population in the trillions and the ability to lay hundreds of eggs at a time, it is an organism that certainly makes a very effective carrier.
A mosquito bite is simply the beginning of the process for the Plasmodium sporozoites (an immature form of the parasite), which accumulate in the mosquito's salivary glands, ready to be released into your body once your skin has been penetrated. This is where the human infection begins: the sporozoites parasitize the liver, lying apparently dormant as they mature and multiply into merozoites. The cells they inhabit eventually erupt, and the merozoites are released into the bloodstream, cunningly disguising themselves with the liver cells' membranes to avoid an immune attack. Here they begin their second assault, causing red blood cells to erupt and release toxins that stimulate an immune response – it is this that leads to flu-like symptoms such as fever. In severe cases, if the blood-brain barrier is breached, the infection can lead to a coma, neurological damage or even death.
The current situation
There have been large-scale efforts to eradicate malaria over the last 75 years. During the WHO's anti-malarial campaign in the 1950s and 60s, for example, DDT was used and, at the time, hailed as kryptonite to mosquitoes. Bill Gates has famously stated that the world's fight against malaria is one of the greatest success stories in the history of human health, and over the last couple of decades there certainly has been a significant decline in the global burden of the disease. In fact, since 2000, almost 60 countries have seen a drop of at least 75% in new malaria cases, contributing to a 37% drop globally. However, the 2016 WHO report shows that in 2015 alone more than 400,000 people died of malaria and 214 million were infected. The job is far from finished.
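As a quick sanity check on those numbers, a little Python arithmetic (a back-of-the-envelope sketch that treats the rounded 37% figure as exact) shows what the 214 million infections in 2015 imply about the caseload at the turn of the millennium:

    # Implied global malaria caseload in 2000, given a 37% drop by 2015.
    cases_2015 = 214_000_000
    global_drop = 0.37
    implied_cases_2000 = cases_2015 / (1 - global_drop)
    print(f"Implied cases in 2000: ~{implied_cases_2000 / 1e6:.0f} million")  # ~340 million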
Target Malaria – a new approach
There have been remarkable advances in gene-editing technologies in recent years, so one of the main focuses in malaria research lies in exploring different strategies to reduce or modify the populations of Anopheles mosquitoes – specifically, the three species in this genus that are responsible for most of the malaria transmission in Africa. Target Malaria is a not-for-profit research consortium that aims to develop and share technology for malaria control. Its focus is reducing the number of the deadliest malaria-transmitting mosquito in Africa, Anopheles gambiae, and in particular targeting female mosquitoes, as these are the only ones that bite, which makes them an effective lever for controlling population size.
Target Malaria is investigating the potential of using nuclease enzymes, which cut specific sequences of DNA, to modify mosquito genes. By changing certain genes, malarial resistance, female infertility or almost exclusively male offspring can be induced. The researchers are inserting genes that code for these enzymes into mosquito eggs, with the hope of affecting their reproduction. One example involves nucleases that cut the X chromosome while males are making their sperm, resulting in mainly male offspring. Alongside this, researchers are investigating how to disrupt the fertility of female mosquitoes to reduce the number of offspring, as well as engineering mosquitoes that are unable to transmit malaria.
These scientists are utilising a method called 'gene drive', a powerful emerging technology that overrides the usual genetic rules to ensure that virtually all offspring inherit a trait, as opposed to just half, as would normally be the case, allowing the trait to spread extremely quickly.
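To get an intuition for why that difference matters, here is a deliberately simplified toy model in Python (made-up starting numbers, random mating, and a perfect drive with no fitness costs) comparing ordinary Mendelian inheritance with a gene drive:

    # Toy model of allele spread under random mating.
    # Mendelian inheritance: a heterozygote passes the allele to half of its
    # offspring, so under random mating the allele frequency p is unchanged.
    # Perfect gene drive: heterozygotes transmit the allele to (nearly) all
    # offspring, so next generation's frequency is p^2 + 2p(1-p) = p(2 - p).
    def next_gen(p, drive=False):
        return p * (2 - p) if drive else p

    p_mendel = p_drive = 0.01   # allele starts in 1% of the gene pool
    for gen in range(1, 11):
        p_mendel = next_gen(p_mendel)
        p_drive = next_gen(p_drive, drive=True)
        print(f"gen {gen:2d}: mendelian {p_mendel:.3f}, gene drive {p_drive:.3f}")

    # The Mendelian allele stays at 1% forever; the drive allele exceeds
    # 99% of the gene pool within about nine generations.

In this sketch the drive allele takes over the population from a 1% starting point within a dozen generations, which is exactly what makes the approach so powerful – and why researchers are treating it so carefully.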
Nowhere are the devastating effects of malaria as obvious as in sub-Saharan Africa, where hundreds of thousands fall victim each year, accounting for 90% of total deaths from the disease. Target Malaria researchers are currently working in Mali, Uganda and Burkina Faso, and Bana, a small village in Burkina Faso, could become the site of a revolutionary genetic experiment. At Imperial College London, gene drive mosquitoes are being designed to produce fewer female offspring, or to be unable to reproduce at all, with the plan of eventually releasing them into the wild in Bana. The hope is that this would suppress Anopheles gambiae to a point sufficient to prevent malaria transmission.
So what are we waiting for?
For one thing, the communities need to be prepared for the release. There needs to be education, not just about genetic engineering and the impact a release would have, but also about basic genetics – which may be a challenge in communities whose languages have no equivalent term even for the word gene. Additionally, it will still be years before scientists are able to fully develop and test gene drive mosquitoes in this manner.
If an experiment of this type is successful in the future, not only could this essentially eradicate malaria, but it could also pave the way for eliminating other mosquito-borne diseases such as dengue fever or even other insect-transmitted diseases like Lyme disease. However, humans have never before changed the genetic code of a free-living organism on this scale and released it into the wild. This genetic-engineering technology is very powerful and definitely needs to be treated as such. But, with millions dying and suffering at the hands of malaria each year, should we look to do this sooner rather than later?
What do you think? Could we be the generation that ends one of the oldest and deadliest diseases in human history? Tweet me your thoughts @PranikaAtNotch.
Progress for patients with Parkinson's disease
On April 11th this year, World Parkinson’s Day will mark 262 years since James Parkinson was born and 200 years since he published his essay ‘On the Shaking Palsy’, which led to an official recognition of Parkinson’s Disease (PD). Today, it’s estimated that over 10 million people worldwide have PD. Despite widespread awareness of PD and its most common symptoms, scientists don’t know why PD develops, and there is no cure. As a result, treatment has been restricted mostly to drugs that ease the symptoms, and physiotherapy.
Researchers have been exploring PD extensively over the decades and are closer to understanding its underlying biology. These studies are leading to promising new drug treatments that are now entering clinical trials, as well as new possibilities for reversing PD by repairing patients’ brains. Here’s a quick summary of a few recent developments.
What is Parkinson’s?
PD is a progressive neurodegenerative disease. It causes nerve cells in parts of the brain that control movement to stop working and die off. In healthy brains, these neurons rely on the brain chemical, dopamine, to communicate with one another. Replacing the lost dopamine in PD patients’ brains has therefore been the focus of many treatments over the decades.
Although PD is a degenerative disease that is more common in older people, we now know it is not specifically a disease of old age: around five to ten per cent of PD patients are aged under 50. Currently, there are no biochemical tests for PD; diagnosis depends on observation of the patient by a clinical and/or neurological specialist.
Every patient's experience of PD can be different, but common symptoms include tremors (especially in the hands or fingers when the limbs are at rest), slowness of movement, and stiff, rigid muscles. These effects can be painful as well as debilitating, and become progressively worse.
It’s a challenging disease to diagnose, predict and treat for several reasons. The speed at which the disease progresses and symptoms develop can vary from one patient to the next. Sometimes Parkinson’s is hereditary, but most of the time it’s not. More recently, scientists have discovered that Parkinson’s can also affect parts of the brain that don’t control movement, resulting in a variety of ‘non-motor’ effects that include mental illness such as depression.
Since the 1960s, PD patients have been prescribed drugs such as levodopa that increase dopamine in the brain. Such drugs help to improve patients’ mobility but are associated with unpleasant side effects that typically get worse over time and can contribute to the patient’s illness. It’s also common for patients on these drugs to experience sudden “off periods” when the treatments just stop working. In the long term, the side effects can seriously outweigh the benefits of the treatment and there is an urgent need for more effective drugs.
Finding new drug treatments
In recent years, scientists have learned more about the biology of Parkinson’s and how it causes nerve cells to malfunction. Researchers have been particularly interested in Lewy bodies, which are clumps of proteins that typically appear in the affected brain cells of PD patients. One of the main components of Lewy bodies is alpha-synuclein, and a number of experiments have shown that alpha-synuclein could play a role in the development of PD. As a result, drug companies are now investigating whether new therapies targeting alpha-synuclein could prevent PD development, or at least slow down the disease progression in patients. Clinical trials have recently started for some of these potential new drugs and the Parkinson’s community is eagerly awaiting the results.
Replacing damaged brain cells
An alternative approach to PD treatment is to transplant new cells into the brain, to replace the dead cells. Several different methods have been tried over the past few decades, including transplants of dopamine-producing foetal cells and, more recently, stem cell grafts. In the late 1980s, researchers at Lund University in Sweden successfully transplanted dopaminergic foetal cells into the brains of 18 patients with Parkinson’s. The majority of the patients showed long-term improvements in their symptoms and some of them were able to stop taking levodopa.
One of the patients from the study died recently, 24 years after the transplant, and post-mortem analysis provided a detailed picture of what happened to the transplant in the patient’s brain. During his life, the patient had initially responded very well to the transplant: he was able to come off levodopa completely for a few years, then continued for ten years on a reduced drug dose. The patient then started to decline and, by 18 years after the transplant, the patient’s disease symptoms were similar to those shown before the study. In line with these behavioural observations, post-mortem analysis of the patient’s brain showed that the transplanted cells had grown into the damaged brain areas and successfully formed new nerve connections (re-innervation). However, signs of Parkinson’s disease, such as Lewy bodies, were found in a small proportion of the transplanted cells.
Further transplant studies have been carried out since the pioneering Lund study, but with mixed success. However, it has been generally accepted that cell replacement could be beneficial for PD, and researchers are now investigating modified approaches using stem cells that can develop into dopamine-producing neurons when transplanted into the brain.
Stem cells have attracted a lot of interest for repairing human brains and other organs in recent years. These immature cells have not yet differentiated into their final cell type (such as skin, muscle or brain cells) and, therefore, have important advantages for brain repair. Importantly, stem cells are much more widely available than foetal tissue because stem cells can come from a variety of sources, including adult humans, and can also be grown in the lab. A special type, the induced pluripotent stem cell (iPSC), can be manipulated to grow into almost any type of cell that's specialised for the brain or body region of interest. Scientists are now researching iPSCs as well as other types of stem cell for transplanting into Parkinson's brains, and it's expected that these will soon be ready for testing in PD patients.
Tailoring treatments to patients
Another area of research that could be beneficial for PD in the future is personalised medicine. This approach relies on collecting individual patients’ biological information and using that to decide the best course of treatment for the patient. For example, the data might include details about a patient’s immune system, their genes, and levels of hormones and other proteins or biomarkers. This can provide important information about the patient’s stage of disease and response to treatment. In turn, this helps with their prognosis and finding more tailored treatment regimes. Although much work has yet to be done before new Parkinson’s treatments become widely available, the personalised medicine approach could be particularly beneficial for PD given the variation seen in patients’ symptoms, disease progression and response to existing treatments.
What are your thoughts on future treatments for PD? Let me know @KateAtNotch
Lindvall O, Rehncrona S, Brundin P et al. (1989). Arch Neurol 46(6): 615-631.
Lindvall O, Brundin P, Widner H et al. (1990). Science 247(4942): 574-577.
Stoker TB & Barker RA (2016). Regenerative Medicine 11(8): 778-786.
Parkinson’s Disease Foundation
The Michael J Fox Foundation
Revealing the mind of a synaesthete
“Human life is but a series of footnotes to a vast obscure unfinished masterpiece” – Vladimir Nabokov, Lolita
Have you ever wondered if you're not seeing the whole picture? Can science even define what the 'whole picture' is and categorise human sensation? And what can we learn from the experiences of synaesthesia today?
Synaesthesia is one neurological example of how differently our brains can separate, or blend, the senses. There are many varied manifestations of synaesthesia, but all share a common feature: stimulation of one sense, such as hearing, triggers a sensation in another, such as taste.
Brain imaging studies have found that synaesthetic colour experience activates colour regions in the occipito-temporal cortex. Six further brain regions are also activated, spanning motor and sensory areas as well as 'higher-level' regions in the parietal and frontal lobes. This has led to scientific speculation that a synaesthete's brain is wired differently or has extra connections.
Interestingly, synaesthesia manifests across a spectrum of artistic work in music, art and literature, and it is studied with as much interest by the arts as it is by science. I was first introduced to the phenomenon at a talk hosted by the Manchester Literary and Philosophical Society.
Many music artists of today have claimed possession of synaesthetic senses, such as Pharrell, Kanye West, Billy Joel and Stevie Wonder. Pharrell describes his 2013 single 'Happy' as "yellow with accents of mustard and sherbet orange". Synaesthesia almost neurologically embodies the idea that people can have different 'taste' in the arts.
My favourite example of synaesthesia at work is in Vladimir Nabokov's masterpiece Lolita, where much of the language is chiastic (ABBA) to create alliterative and musical sounds. Nabokov himself had grapheme-colour synaesthesia, which is the association of colours with numbers and letters. Some synaesthetes say his prose reads as if it were meant to be visually pleasing, almost like a word painting.
“Lolita, light of my life, fire of my loins. My sin, my soul. Lo-lee-ta: the tip of the tongue taking a trip of three steps down the palate to tap, at three, on the teeth. Lo. Lee. Ta.”
Here, Nabokov imagines the syllables of Lolita stepping from 'palate' to 'teeth'. The passage suggests that his beautiful idea of Lolita does not come from a physical person but is created from the sensory feeling of her syllables inside his head. We could argue that the narrator's love for Lolita stems from a vivid synaesthetic experience.
So are we non-synaesthetes missing out? Most synaesthetes claim that they feel sorry for those without the condition. Yet the condition is not without its negatives, such as feeling intensely disgusted by common sounds or words. Personally, I think synaesthesia is still relatable to those without it. Nabokov reiterates this idea that synaesthesia is an extrapolation of taste, one that allows a person to experience the world more intensely:
“I am therefore inclined to think of my synaesthesia as an extension of the typical writer’s overinvestment in words: an extension…”
What are your thoughts on taste and synaesthesia? Do you think science can understand it better by analysing art? Send me your Qs via Twitter @ZaraAtNotch
Gaby's Top 5 Science Moments 2016
This year for my top 5 science moments, I have taken a different approach from past yearly reviews. I have resisted the temptation to choose a discovery from each discipline of science for the sake of balance and, instead, have included the stories that spoke to me. So, if you are looking for a wide-reaching view of the science of 2016, then this may not be for you. But if you are interested in the science discoveries that captured the imaginations and hopes of this geneticist, then grab a cuppa. Here are my top 5 moments from 2016.
5. Pocket-sized DNA sequencer
The ability to sequence a genome and read the code of life is arguably one of the greatest breakthroughs in the history of modern science. However, the hardware involved is normally at least the size of a microwave oven and can be very fragile. This year, a biotechnology company made a significant breakthrough with the MinION, a sequencing machine that weighs only 86 grams and is small enough to be forgotten in a pocket! It has now been shown to be not only functional but also able to work in microgravity.
In June this year, the MinION was sent to the International Space Station to be tested on board. This technology holds great promise for space exploration: in theory, a crew could use it to quickly identify the precise cause of an illness and ensure it is treated effectively. That kind of diagnosis will be imperative for future missions to Mars and beyond, when there will be no possibility of restocking a limited supply of antibiotics.
However, this development will be useful for more than space travel. Shrinking DNA sequencing to such a small size means it could be combined with other technologies to let patients monitor levels of certain DNA sequences at home. In theory, cancer patients could track the progress of their disease by the level of fusion chromosomes, and HIV patients could monitor viral levels as easily as diabetics monitor their blood sugar.
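As a purely illustrative sketch of the software side of such monitoring, counting a known DNA signature in sequencing reads takes only a few lines of Python. Both the reads and the marker sequence below are invented, and real analysis pipelines use error-tolerant alignment rather than exact matching, since nanopore reads are noisy:

    # Count occurrences of a known marker sequence in a set of sequencing reads.
    # Reads and marker are invented for illustration only.
    reads = [
        "TTGACCGATGCGGTACCAAT",
        "GGTACCAATTTGACCGATGC",
        "ACCGATGCAACCTTGGAACC",
    ]
    marker = "ACCGATGC"   # hypothetical disease-associated signature

    hits = sum(read.count(marker) for read in reads)
    print(f"Marker found {hits} times across {len(reads)} reads")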
Whatever the future uses are, the pocket-sized DNA sequencing technology opens new doors for genomics, therapeutics and disease management.
4. Promising results from stem cell treatments for stroke
Stroke research, especially developing therapies, is a complex field that is subject to many challenges. For a long time, the industry belief was that the most effective treatment for stroke would be one that can be administered to patients as soon as possible after the fact, even in the back of an ambulance.
However, new research from Stanford University has broken new ground with a treatment that can be administered up to 3 years after a stroke. Adult mesenchymal stem cells were injected into the brains of patients who had suffered a stroke between 6 months and 3 years earlier. Normally, doctors would expect no further improvement after 6 months. Yet after the procedure, one patient regained movement in her right arm and right leg, despite having been confined to a wheelchair for the previous few years.
Mesenchymal stem cells have interesting therapeutic potential as they have been shown to suppress the immune system, which may have contributed to the high success rate and low number of side effects observed in this trial.
Whatever the theory and the reason behind the success, this trial has paved the way for more successful therapies for stroke victims and has given hope to those that currently live with a disability as a result.
3. Progress in the field of human CRISPR research
2015 was undoubtedly the year of gene editing. Named Science's Breakthrough of the Year, and with multiple advances, it marked the beginning of the gene editing revolution. As a result, this year was expected to be the one in which all of that research and progress was finally applied and the true value of CRISPR was revealed. It did not disappoint.
2016 saw the first human trial of CRISPR-Cas9, in China, as an experimental therapy for a patient with advanced lung cancer. In this trial, CRISPR was used to disable the gene for PD-1 in immune cells taken from the patient, with the aim of unleashing a stronger immune attack to halt the growth of the cancer.
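For the curious, here is a rough idea of how a CRISPR target site is chosen at the sequence level. Cas9 only cuts next to a short 'NGG' motif called the PAM, so candidate sites are 20-base stretches followed by any base plus 'GG'. This Python sketch scans an invented DNA string (not the real PD-1 gene) for such sites:

    import re

    # Scan a DNA sequence for candidate Cas9 target sites: 20 bases
    # followed by an NGG PAM. The sequence is invented for illustration.
    dna = ("ATGCGGTACCTTAGCAGGATCCGTTAACGGTCATGCAGGA"
           "TTCCGGATCAGTACGTTAGCCTGAGGCATCGATTACGCGG")

    # A lookahead is used so that overlapping candidate sites are all reported.
    for m in re.finditer(r"(?=([ACGT]{20}[ACGT]GG))", dna):
        site = m.group(1)
        print(f"target: {site[:20]}  PAM: {site[20:]}")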
Equally notable progress was made closer to home in the USA with the start of a safety trial of CRISPR for human use. The trial will administer CRISPR-edited cells to 18 patients with various cancers, assessing safety rather than efficacy. Completion of this safety screen should clear the way for developing CRISPR therapeutics in the USA and encourage investment in applying CRISPR to proven gene editing based therapies, such as the removal of rejection genes with TALENs at Great Ormond Street Hospital or the addition of HIV-resistance genes using zinc-finger nucleases (ZFNs).
The approval of these trials is a big moment for gene editing based therapeutics. Since the death of Jesse Gelsinger in 1999, the industry has been understandably cautious about these techniques. However, recent developments, improvements and precautions against conflicts of interest all bring CRISPR-based therapeutics that little bit closer.
2. The continued race for a Zika vaccine
Two years ago, the first reports began to surface of outbreaks of microcephaly in South America. Research quickly identified the cause, and the Zika virus made headlines worldwide. Reminiscent of the Ebola outbreak, a known virus had become more dangerous and was posing a real threat to millions of people.
The response was instant. Never before have so many corporations, research groups and academics reacted so quickly to develop a vaccine for an outbreak. Some vaccines are on track to finish development in a remarkable, record-breaking two-year turnaround. Lessons have clearly been learned from the Ebola outbreak, and teams are reacting quickly so as not to miss the critical window for a vaccine.
Many have taken the opportunity of the outbreak to develop innovative vaccine technologies. One such technique involves administering engineered viral DNA. The DNA enters the nuclei of cells, where it directs the production of partial viral particles; the body can then produce antibodies against them, preparing it for a future infection. Some manufacturers are developing RNA-based versions as a more flexible alternative, with the advantage that RNA does not need to enter the nucleus at all.
The development of the Zika vaccines has made it into my top 5 not only because new and innovative techniques are being used. The response has given me a lot of hope for the future of science: in the face of a crisis, the industry has shown how teams from across the world can work together to create solutions.
1. Discovery of a key moment in evolutionary history
Few moments in evolutionary history can be argued to be as impactful as the point at which life transitioned from single-celled organisms to complex multicellular ones. The ability to form a multicellular organism is the point at which life, as we know it, became possible. This year it was revealed that this breakthrough in evolution might have been the result of a single mutation and the consequence of simple dumb luck.
For the formation of multicellular organisms, communication between cells is imperative; a failure to communicate can lead to cancer, developmental abnormalities and death. Researchers found that, approximately one billion years ago, a single mutation occurred in the gene GK-PID.
This mutation allowed the protein to orient the direction of cell division by dictating the position of the mitotic spindle in the cell. However, the mutation has an intriguing history when you consider how it functions: it gave GK-PID the ability to link an anchor in the cell membrane to the mitotic spindle, yet at the time of GK-PID's mutation, the anchor had not yet evolved!
The reason that this discovery is my number 1 of the year is simple. As a geneticist, I enjoy how this discovery reveals the seldom-admitted secret of biology. Life as we know it, and the key moments of evolution, all came down to plain, old, boring, dumb luck!
So, which of my top 5 got you excited about what science has to offer in 2017? Do you agree with my list? Is there something missing?
Let me know on Twitter @GabyAtNotch
The Men Who Enabled Us to View the World in Colour
Bringing colour to the living room
50 years ago this year, the British Broadcasting Corporation (BBC) announced its intention to begin broadcasting in colour. One year later, the British people saw the green lawns of Wimbledon in the first colour broadcast. On that day, Britain became the first country in Europe to offer a regular colour television service, starting at just four hours a week.
John Logie Baird was the man behind the colour transmission system used by the BBC, but he was also an important pioneer in developing the first television set, in collaboration with other inventors. He first created and demonstrated colour transmission in 1928, nearly 40 years before it would make its way into British homes.
He died in 1946, before he could see television become a widespread phenomenon just a few years later. Without doubt, John Logie Baird can be credited not only with bringing television into the homes of millions but also, years later, with bringing those images in full colour.
Bringing colour to the lab
Roger Tsien was another pioneer who sought to bring colour to images, but rather than viewing sport and the news in colour, he brought colour to the images of science.
Roger Tsien died in September of this year, but he certainly lived to see his work change how we look at biology forever. GFP (green fluorescent protein) was identified in Aequorea victoria, the North American crystal jelly, as the protein responsible for the ethereal glow at the edges of the jellyfish. Tsien began working on this intriguing protein in 1994, and 14 years later he shared the Nobel Prize in Chemistry for his part in turning it into an irreplaceable asset to researchers across all disciplines.
Tsien found that GFP was able to fluoresce with no cofactor other than oxygen. This breakthrough led to him proving that GFP could be tagged to proteins in cells, bacteria and living organisms to visualise individual proteins.
Tsien and his team went on to improve GFP, creating mutants whose fluorescence exceeded anything found in nature. They also developed a whole palette of colours so that multiple proteins could be tagged, and seen, at the same time.
There are now dozens of colours making up a whole toolbox of GFP-like proteins, allowing scientists to view the subjects of their research in all the colours of the rainbow. Although it began as a single protein, GFP has enabled scientists to see almost anything in biology, from a single molecule of calcium in a heart cell to an entire organism.
Without a doubt, I am most thankful to Roger Tsien for all of the beautiful images his discovery has allowed scientists to capture in the name of research. The British Society for Cell Biology runs a competition every year to find some of the most stunning images and is certainly worth a browse. The 2016 winner revealed the substructures within the head of a fruit fly.
Tweet me your thoughts and favourite images to @GabyAtNotch.
Phantom Limbs and Virtual Reality
Watching the inspirational Paralympic Games this September got me thinking about amputees and the challenges they face. As a neuroscientist, my thoughts went immediately to a condition called Phantom Limb Syndrome, a very peculiar sensation that occurs in around 90% of amputees. It made me wonder: how on earth does this happen? It didn't seem logical, so I looked into the background and causes of the syndrome, which led me to some very interesting treatments for patients – spanning a period of over 450 years!
What is a Phantom Limb?
Firstly, let's get a little background on the term Phantom Limb Syndrome. It is the sensation that a limb is still part of a person after that limb has been amputated. The sensations can be characterised as either painful or non-painful. Non-painful sensations include feelings of touch, temperature, pressure and often itching, whilst the painful sensations usually come in the form of burning and shooting pains. This phantom limb pain does not originate from the site of amputation; it is a completely separate experience. But for me it begs the question: how can you 'feel' anything in a limb that doesn't physically exist?
Why does it occur?
This question has baffled scientists since 1552, when the syndrome was first described, but to this day the exact causes remain unclear. More recently, research using Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) scans has led to the current thinking that the feelings originate in the brain and spinal cord. The portions of the brain that had once been neurologically connected to the nerves of the amputated limb showed activity in the scans when the patient was experiencing their phantom pain. The patient was experiencing the genuine sensation of pain in a limb that shouldn't be able to feel anything; a confusing concept, right?
The explanation for this baffling condition is by no means set in stone, but it is mostly thought to occur due to a lack of sensory input from the missing limb. In everyday life, when you move your limbs, the brain constantly receives sensory feedback from the moving limb. After a limb is amputated, this sensory input ceases; the brain becomes confused and so triggers the body's most rudimentary response: pain. A comparable example is the noise heard by people suffering from tinnitus, who hear a high-pitched sound that doesn't actually exist, as no-one else can hear it; current theory holds that this too is caused by an anomaly in the brain and spinal cord.
Phantom limb pain is extremely common in amputees but varies in duration and severity between patients. It is a type of nerve pain and can be severe and debilitating; it frequently damages patients' quality of life, with some people driven to near madness by the constant pain. The condition is usually treated with drugs such as painkillers, sedative-hypnotics and anticonvulsants, but sadly without much success, often leaving patients still having to cope with chronic pain. An effective form of treatment is therefore very much needed. But what would happen if we could trick the brain into thinking the limb still existed? This concept was first applied by the neuroscientist Ramachandran in the 1990s: he placed a mirror between a patient's limbs and asked them to move both the healthy limb and the phantom limb whilst looking at the mirror. This effectively tricked the brain into thinking that the phantom limb was moving, removing some of the discordance in brain signals and relieving the patient's pain.
Cutting edge treatments
Usually, when thinking of the term 'Virtual Reality' (VR), I immediately picture state-of-the-art video games, where users are so completely immersed that they think they are in the game. A recent study has put this technology to work helping sufferers of phantom limb pain. Through VR, users can see a virtual image of their phantom limb, which moves in accordance with the intact parts of the limb, creating a life-like and extremely believable sensation of an intact limb. This tricks the brain, stops the confusion and therefore reduces the patient's pain. It's a modern take on Ramachandran's mirror therapy concept and has proven to be even more effective. It's exciting to see how technology can improve people's lives. What are your predictions for the future?
Tweet me your thoughts to @MegAtNotch
Climate Change: 2016 Set to be the Warmest Year on Record
2016 is set to be the hottest year on record, beating the records set in 2015, and 2014 before that. Global temperatures have always fluctuated due to natural factors such as shifts in the Earth's orbit; however, with an ever-increasing population, there are now many human factors influencing our climate as well.
Scientists believe that at the end of past ice ages the planet warmed by around 4-7 degrees Celsius over roughly 5,000 years. The warming we are experiencing today is occurring approximately eight times faster than that ice-age warming, which strongly suggests it is not a purely natural event.
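That 'eight times faster' figure is easy to sanity-check with a little Python, using the midpoint of the 4-7 degree range (a back-of-the-envelope calculation, treating the rounded figures above as exact):

    # Compare the ice-age warming rate with a rate eight times faster.
    ice_age_warming_c = 5.5            # midpoint of the 4-7 degree C range
    ice_age_years = 5000
    ice_age_rate = ice_age_warming_c / ice_age_years   # degrees C per year

    modern_rate = 8 * ice_age_rate     # "approximately eight times faster"
    print(f"Ice-age warming: {ice_age_rate * 100:.2f} C per century")  # ~0.11
    print(f"Modern warming:  {modern_rate * 100:.2f} C per century")   # ~0.88

A rate of roughly 0.9 degrees Celsius per century is broadly in line with the warming actually observed since the late nineteenth century.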
How are humans responsible?
Greenhouse gases and aerosols produced and used by humans have changed the composition of the atmosphere, leading to temperature imbalances in the climate system. Our largest contribution to climate change, however, is the release of carbon dioxide from the burning of fossil fuels. Human effects on the climate are far more significant than most natural influences, such as changes in solar output. But there is one natural force contributing hugely to the current upward trend in temperatures: El Niño.
El Niño is a natural phenomenon that drives up global temperatures as a consequence of weaker winds across the tropical Pacific. Usually, these ‘trade’ winds will push warmer surface water to the west side of the ocean. This leaves colder water at the eastern side, and subsequently causes a temperature difference between the east and west. Warmer air caused by the accumulation of warm water to the west rises up and can cause wetter, unsettled weather.
El Niño comes into effect when certain conditions result in weaker trade winds. This allows the temperature difference to equalise somewhat across the ocean, with the warmest water now moving away from the west coast. Movement of the warm water results in changes in rain and wind patterns, and causes a knock-on effect on weather that can travel around the globe. The increase in global temperatures is caused by the release of extra heat at the surface of the Pacific – hence El Niño years can be warmer than usual.
For more information about El Niño, watch this video
So El Niño may be partially responsible for the recent increases in temperature, but this is not a long-term effect. To what extent is the increase also linked to greenhouse gases and global warming? Now that El Niño is fading, it is likely that 2017 will in fact be slightly cooler than 2016. Despite this, overall increases in temperature are still expected over the longer term. So what are we doing to try to minimise global warming?
The Paris Climate Treaty
In December 2015, 195 countries came together to make a treaty with the aim of keeping the long-term global temperature increase to below 2°C. The countries will reconvene every 5 years to review targets and ensure the latest science and technology is being employed. Not only does the treaty address the idea of limiting climate change, but the governments have also agreed to strengthen societies’ abilities to deal with the inevitable impacts of a warmer globe.
A country whose efforts to minimise climate change cannot be overlooked is Bhutan. This year Bhutan boasts not just a carbon-neutral footprint: this small, landlocked country is actually carbon negative! Seventy-two percent of the country is under forest cover, acting as a large carbon sink that, in time, has the potential to offset 50 million tonnes of carbon dioxide per year. Bhutan's promise to remain carbon neutral forever gained recognition at the Paris climate conference in 2015. With a 15-year transitional funding plan and numerous strategies, including investing in sustainable transport, subsidising the cost of LED lights and aiming for a paperless government, Bhutan looks set to be the leader in carbon-neutral living for the foreseeable future.
Find out more about Bhutan’s plans from this TED talk.
Do you think we can slow down climate change? What are you doing to look after our planet? Let me know at @HelenAtNotch
Removing Barriers to Technology Innovation
Science, technology and healthcare have advanced dramatically over the past few decades, but there is still great scope for new innovation as technologies continue to develop. True innovation requires stepping into the unknown, and this is often limited by perceived hurdles – including tangible barriers, such as lack of resources, and emotional barriers such as fear. What can be done to help drive innovation forwards? Aside from the obvious factors, such as time, money and fresh ideas, I’d like to consider some of the influences that societal and workplace cultures can have on promoting or preventing progression. I’ve classed them into three broad areas of relevance for the life sciences and pharmaceutical industries.
Collaboration vs Competition
Many industries are moving away from closed, secretive cultures towards more open approaches that allow collaboration and sharing of information between organisations, including private companies. The common aim is to accelerate progress, such as finding new therapies more quickly through sharing academic and industrial scientific research data (eg, Cancer Research Technology’s various programmes). In software, there have been attempts to pool technical expertise across groups of developers and across industries for rapid creation of new software tools and platforms, notably the well-established Linux community and, more recently, the Open Compute Project.
This movement towards greater collaboration could be seen as very risky. It is driven by urgent consumer or end-user needs – conflicting with the usual corporate drivers of increased profit and gain of market share. Furthermore, collaboration between academics and/or companies requires sharing of data that not only gives away perceived knowledge advantages to potential competitors, but ultimately risks losing ownership of intellectual property. Why, then, does it occur? Is it the result of a philanthropic urge, or could there be advantages for participating organisations in addition to producing end-user benefits?
It seems there are potential advantages and these are emerging due to recent economic shifts. The life sciences industry, and particularly pharmaceuticals, remains permanently changed by recent recessions that have resulted in significant layoffs within numerous R&D departments, and many ongoing mergers and acquisitions. There’s less funding available for fundamental academic research and more emphasis on grants with tangible outputs. The industry as a whole is facing greater requirements for accountability, with justification of budgets through demonstrating return on investment.
As a result, many organisations lack the internal resources and expertise they need for scientific discoveries or innovative product development, which are essential to remain successful in the life sciences. Some companies can outsource or insource certain R&D projects and niche expertise, but this still requires budget, project management and building trust with third parties. The alternative is to form true collaborations that rely on different capabilities from each party to achieve the desired goals. There is no client-supplier relationship in such arrangements, and the investment can often be jointly managed, typically requiring time and internal resources as opposed to significant cash budgets. Importantly, the risks can be shared by all contributing parties.
To be successful, this model requires truly equal commitment to the project from all parties and total agreement on the desired outcomes. The priority has to be the success of the project, and this necessitates a change in employee mentality and business cultures.
Whether or not this can be sustained in the long-term remains questionable. Firstly, products arising from inter-organisational collaborations may be innovative but their profits would be diluted across different contributing parties or, in some cases, non-existent: collaborative efforts in the software industry usually aim for open-source software. Secondly, it would have the effect of reducing competition, which would not only be damaging to the economy and reduce consumer choice, but ultimately would take away the need for companies to innovate. Allowing more collaboration between organisations can be beneficial for innovation, but only when it enables true synergy.
Progression vs Privacy
The arrival of smart phones, along with improvements in wireless technologies and mobile data collection, has led to significant changes in the way we make purchases, consume entertainment, and read and engage with media. In turn this has led to large-scale developments in rapid data collection and analysis that have allowed major innovations to emerge, such as fitness bands and other wearable technologies.
These changes also offer great advantages for healthcare, opening new possibilities for automatic submission and monitoring of live outpatient data via smart phone apps. One example is monitoring blood glucose levels in people with diabetes, where digital collection and submission of patient data provides a more accurate, reliable and traceable approach than current self-monitoring methods. Similarly, these technologies hold the key to improved collection and submission of data for clinical trials, which could greatly enhance the quality of trials data as well as reducing the economic and labour burden of current data collection methods.
In countries such as Sweden, where healthcare records and drug dispensation are fully digitalised and linked with every citizen’s personal ID number, these emerging developments are becoming a real possibility. A compulsory ID card system has numerous advantages because the personal ID number can be used for storing almost all personal data. This allows reliable keeping of electronic medical records, as well as instant and hassle-free systems for numerous daily activities, from collecting loyalty points when shopping to receiving parcels, borrowing library books or hiring a car.
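As a concrete aside on how these numbers work in practice: the final digit of the 10-digit Swedish personal ID number is a Luhn checksum over the preceding nine digits, which lets any system catch most typing mistakes before a record is ever looked up. A minimal Python sketch (the example number is a standard-format test number, not a real person's):

    # Validate the Luhn check digit of a 10-digit Swedish personnummer.
    def luhn_valid(pnr: str) -> bool:
        digits = [int(c) for c in pnr if c.isdigit()]
        if len(digits) != 10:
            return False
        total = 0
        for i, d in enumerate(digits):
            d *= 2 if i % 2 == 0 else 1   # weights 2,1,2,1,... left to right
            total += d - 9 if d > 9 else d
        return total % 10 == 0

    print(luhn_valid("811218-9876"))   # True for this test-format number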
However, these ID numbers also hold the key to vital information such as the individual's address, mobile phone number and even their income and tax returns. In some populations there remains a general aversion to sharing personal data, despite the widespread embrace of smart phone technologies and the self-submission of data and content to all kinds of apps and platforms. Polling in the UK has established that the majority of Brits are strongly against compulsory ID cards, which are perceived as an invasion of privacy. The UK is also relatively densely populated, and the vital changes – such as an electronic medical records system – that would be needed to underpin revolutionary digital healthcare innovations remain exceptionally difficult to implement. Furthermore, the country's over-burdened mobile phone network still can't guarantee even 3G coverage nationwide, which removes the practicality of many new data-collecting technologies. By contrast, less populated countries such as Sweden and Finland, which are leading the digitalisation of healthcare, are already implementing 5G.
Digitalisation of healthcare has great potential to change the lives of patients and healthcare providers, but in some countries decaying infrastructure, combined with societal privacy concerns, is impeding the implementation of such innovative, life-changing technologies.
Democracy vs Decisiveness
Successful innovation across the life science and pharmaceutical sectors also depends on agility. This is essential for allowing businesses or researchers to respond to new developments, to rethink their strategies and to reshape their ideas accordingly.
Although few business decisions are made by a single person, the way in which decisions are made and information is handled varies from one organisation to the next. This is strongly related to the organisation’s degree of democracy and culture of equality. In the corporate world, it has been traditional to empower small groups with appropriate decision-making responsibilities. These groups may report directly to the senior management and the outcomes of their decisions are fed downwards through the organisation in a single-minded and relatively autocratic manner. This approach is effective and decisive, setting clear boundaries within the work environment. However, it is not particularly open or flexible for accommodating differences of opinion and, in larger organisations with long chains of command and reporting, this can become a very slow-moving and cumbersome process. Furthermore, a rigid and procedural-based mentality is not conducive to developing a creative and innovative working environment.
In some organisations, there is greater emphasis on involving wider groups in decisions. This ensures that many individual voices are heard across different areas of an organisation, and large teams can be used to discuss and finalise the outcomes. This creates a more open, democratic and transparent culture, that’s often assumed to be more conducive to creativity. In reality, too many decision makers can result in extremely prolonged decision-making that requires significant time and resources. In some cases this time and resource may be better spent simply taking the action, rather than discussing what actions to take. An agile workplace culture is vitally important for innovation and creativity, regardless of how many decision-makers are needed to purchase a new light bulb.
What other influences affect innovation and how can we remove these barriers? Contact me @KateAtNotch