Electric Kool-Aid Acid Therapy?

 

By Danielle Gerhard

"When the light turns green, you go. When the light turns red, you stop. But what do you do when the light turns blue with orange and lavender spots?"

- Shel Silverstein, A Light in the Attic

 

Research and development of drug therapies for mental illness burgeoned in the early to mid-20th century, coinciding with more open public attitudes toward the origins of psychological disorders. Gradually, psychopharmacology shifted from serendipitous discoveries to rational drug design targeting specific chemical systems in the brain. However, many treatments, such as selective serotonin reuptake inhibitors (SSRIs) for depression or atypical antipsychotics for schizophrenia and bipolar disorder, can take weeks to months to work and require chronic dosing, which often produces undesirable, and sometimes permanent, side effects through unintended off-target actions. Many researchers have therefore turned their attention to rapid-acting, acute treatments, particularly psychedelics.

 

Psychedelics entered the experimental world because users reported not only sensory hallucinations but, importantly, an expansion of consciousness following use. Popular psychedelics include MDMA, LSD, ketamine, peyote, psilocybin (magic mushrooms), and marijuana. Like many drugs used to treat mood disorders, psychedelics act on the brain's serotonin system. The general American public's stance on the legalization of illicit drugs has grown more permissive since the days of prohibition and "reefer madness."

 

One example of this societal shift can be seen with the most popular illicit drug in the US: marijuana. Marijuana legalization has attracted so much attention lately that it has entered daily political rhetoric. Gallup polling on illegal drugs found that the percentage of Americans in favor of legalizing marijuana rose from 12% in 1969 to 51% in 2014, while the percentage who report having tried marijuana climbed from 4% in 1969 to 38% in 2013. And although only 38% of those polled had tried marijuana, 70% approved of the drug's use to alleviate pain and suffering.

 

Given increasing public support for the legalization of marijuana, why is it still illegal at the federal level, and why is it still classified as a Schedule I drug under the Controlled Substances Act (CSA), enacted in 1970? Schedule I drugs are characterized as having a high potential for abuse, no accepted medical use, and a lack of accepted safety. Other drugs in this category include heroin and methaqualone, as well as psychedelics like MDMA, LSD, and psilocybin. Advocates of marijuana legalization, along with those urging a revised categorization of psychedelics, are calling on Congress to revise the CSA scheduling of these drugs so that it reflects a science-based scheduling process.

 

A great deal of stigma and misconception surrounds the effects of psychedelics, stemming largely from conservative backlash against Vietnam-era youth rebellion, which was widely associated with psychedelic use. Opponents raise three main concerns about psychedelics: safety, addiction, and long-term effects on mental health. While drug safety should be a concern regardless of legal status, two legal drugs in particular, alcohol and tobacco, have been shown to be more harmful to the brain and body than psychedelics. Recent reports from government agencies concerned with drug safety found that only 0.005% of hospitalizations in 2013 were related to LSD or psilocybin, a rate significantly lower than for alcohol or non-medical abuse of prescription pills. Furthermore, psychedelics show very low rates of abuse compared with alcohol and tobacco. The National Institute on Drug Abuse (NIDA), a government-funded research agency, describes LSD as non-addictive.

 

While there is a growing push to let doctors prescribe marijuana to treat the symptoms accompanying chronic and painful diseases like cancer or multiple sclerosis, fewer studies have investigated the use of other psychedelics to treat another chronic disease: mental illness. This is largely due to the third concern mentioned above. Many individuals opposed to loosening restrictions on psychedelics worry that drugs like LSD, which transiently mimic aspects of schizophrenia, could themselves trigger the onset of a mental illness.

 

A group from Norway has recently published a paper in the Journal of Psychopharmacology presenting data from a large-scale US population study examining the relationship between psychedelic use and mental illness or suicidality within the year following use. The authors, Johansen and Krebs, analyzed survey data from 139,095 randomly selected individuals, approximately 20,000 of whom were psychedelic users. After controlling for potentially confounding factors such as childhood mental illness, demographics, and other drug use, they found no link between mental illness and psychedelic use. More studies like this are needed to inform research, policy, and the scheduling of psychedelic drugs.
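
The kind of adjustment the authors describe is easier to picture in code. Below is a minimal sketch, in Python, of a logistic regression in which psychedelic use predicts a mental-health outcome while confounders are held in the model; the data are randomly generated stand-ins and every variable name is hypothetical, not drawn from the authors' survey.

# Illustrative sketch only: what "controlling for confounders" typically looks
# like in a survey analysis. The data are randomly generated stand-ins, not the
# survey the authors analyzed, and all variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "psychedelic_use": rng.integers(0, 2, n),        # lifetime use (0/1)
    "age": rng.integers(18, 65, n),
    "sex": rng.integers(0, 2, n),
    "childhood_depression": rng.integers(0, 2, n),
    "other_drug_use": rng.integers(0, 2, n),
})
# Simulated outcome driven by the confounders but NOT by psychedelic use.
logit_p = -2 + 1.2 * df.childhood_depression + 0.8 * df.other_drug_use
df["distress"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Logistic regression: the coefficient on psychedelic_use is interpreted after
# holding the listed confounders constant.
model = smf.logit(
    "distress ~ psychedelic_use + age + C(sex)"
    " + childhood_depression + other_drug_use",
    data=df,
).fit(disp=False)

print(np.exp(model.params["psychedelic_use"]))  # adjusted odds ratio, near 1 here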

 

A few interesting and promising clinical studies are currently underway to investigate the therapeutic potential of different psychedelics in individuals who have failed to respond to mainstream treatments. The non-profit Multidisciplinary Association for Psychedelic Studies (MAPS) recently gained attention for a study, successfully funded through crowd-sourcing, of the additive benefit of MDMA-assisted psychotherapy in treating posttraumatic stress disorder (PTSD). Other large ongoing MAPS studies include LSD-assisted psychotherapy for anxiety, ibogaine (from the West African shrub iboga) therapy for drug addiction, and a handful of studies using psilocybin in cancer patients and individuals diagnosed with obsessive-compulsive disorder.

 

The purpose of this article is not to advocate for the widespread use of psychedelics but to discuss key empirical findings that support reclassifying these drugs so that scientists can more effectively study their potential benefits in treatment-resistant patients. While Johansen and Krebs found no link between psychedelic use and mental health problems or suicide risk, many researchers are interested in the potential of these drugs to treat mental illness. It is important to remember that taking psychedelics still carries risks that should be taken into consideration.

 

As with all prescribed or non-prescribed drugs, there are individual differences in pharmacokinetics and pharmacodynamics, that is, how our body affects the drug and how the drug affects our body. While many users may experience an expansion of consciousness and feel they have benefited from taking these drugs, others may have a deeply negative subjective experience with lasting consequences. Another risk to consider is that because these drugs are illegal and therefore unregulated, they can be laced with harmful or more addictive substances. For the most part, the studies discussed in this article investigate these drugs not in healthy individuals but in patients who suffer from a mental illness and have failed to respond to any commercially available treatment.

 

 

 


The “Big Data” Future of Neuroscience

 

By John McLaughlin

In the scientific world, the increasingly popular trend towards “big data” has overtaken several disciplines, including many fields in biology. What exactly is “big data?” This buzz phrase usually signifies research with one or more key attributes: tackling problems with the use of large high-throughput data sets, large-scale “big-picture” projects involving collaborations among several labs, and heavy use of informatics and computational tools for data collection and analysis. Along with the big data revolution has come an exploding number of new “omics”: genomics, proteomics, regulomics, metabolomics, connectomics, and many others which promise to expand and integrate our understanding of biological systems.

 

The field of neuroscience is no exception to this trend, and has the added bonus of capturing the curiosity and enthusiasm of the public. In 2013, the United States’ BRAIN Initiative and the European Union’s Human Brain Project were both announced, each committing hundreds of millions of dollars over the next decade to funding a wide variety of projects, directed toward the ultimate goal of completely mapping the neuronal activity of the human brain. A sizeable portion of the funding will be directed towards informatics and computing projects for analyzing and integrating the collected data. Because grant funding will be distributed among many labs with differing expertise, these projects will be essential for biologists to compare and understand one another’s results.

 

In a recent “Focus on Big Data” issue, Nature Neuroscience featured editorials exploring some of the unique conceptual and technical challenges facing neuroscience today. For one, scientists seek to understand brain function at multiple levels of organization, from individual synapses up to the activity of whole brain regions, and each level of analysis requires its own set of tools with different spatial and temporal resolutions. For example, measuring the voltage inside single neurons will give us very different insights from an fMRI scan of a large brain region. How will the data acquired using disparate techniques become unified into a holistic understanding of the brain? New technologies have allowed us to observe tighter correlations between neural activity and organismal behavior. Understanding the causes underlying this behavior will require manipulating neuronal function, for example by using optogenetic tools that are now part of the big data toolkit.

 

Neuroscience has a relatively long history; the brain and nervous system have been studied in many different model systems which greatly range in complexity, from nematodes and fruit flies, to zebrafish, amphibians, mice, and humans. As another commentary points out, big data neuroscience will need to supplement the “vertical” reductionist approaches that have been successfully used to understand neuronal function, by integrating what has been learned across species into a unified account of the brain.

 

We should also wonder: will there be any negative consequences of the big data revolution? Although the costs of data acquisition and sharing are decreasing, putting the data to good use is still very complicated, and may require full-time computational biologists or software engineers in the lab. Will smaller labs, working at a more modest scale, be able to compete for funds in an academic climate dominated by large consortia? From a conceptual angle, the big data approach is sometimes criticized for not being “hypothesis-driven,” because it places emphasis on data collection rather than addressing smaller, individual questions. Will big data neuroscience help clarify the big-picture questions or end up muddling them?

 

If recent years are a reliable indicator, the coming decades in neuroscience promise to be very exciting. Hopefully we can continue navigating towards the big picture of the brain without drowning in a sea of data.



I Shall Label You Banana

 

By Alex Berardino

Humans are intensely visual creatures.  An enormous portion of our brains, nearly 25% of our cortex, is dedicated to breaking down the great big visual world into tidy little understandable bits.  Despite all the underlying complexity, seeing feels easy. The problems underlying our visual abilities don’t occur to most of us because we don’t really think that much about how we see what we see.  But all of these problems are hard problems, most of them still lacking clear explanations or working models that perform anywhere close to the level of our visual system.  The biggest of these is the problem of invariant object recognition.

 

What is invariance?  Invariance is a complex problem disguised as a simple idea.  We easily recognize objects regardless of their size, orientation, or the current lighting conditions.  We even recognize them if they are partially blocked by another object.  This is invariance.  It may not seem so incredible; a banana looks like a banana no matter its size or how it is lying.  But your conscious perception belies the complexity underlying this experience.  In reality, the light from every banana you’ve ever seen forms a completely unique image on your retina, the sheet of neurons in your eye that captures images from the world.  Your brain inherits this unique image and has to decide what it is seeing: banana or not banana.  The brain’s job, then, is to find the sweet spot between deciding that every unique image is a unique thing (complete selectivity) and deciding that every image is a banana (complete invariance).  Of course, this is a simplified version of the decision, but the question remains: how does the human brain do this?

 

A long line of research has shown that the human brain has regions specialized for processing objects it has seen.  Two of the most common object categories, faces and places, have entire areas dedicated to them: the fusiform face area (FFA) and the parahippocampal place area (PPA), respectively.  In the equivalent region in macaques, area IT, scientists have found neurons that respond to particular objects under some changes but not others.  For example, some neurons respond only to a face seen from the front, and others respond only to that face’s profile, yet both respond no matter what size their preferred view happens to be.  There are also neurons that respond to the same face no matter how it is presented, presumably by firing whenever any of the “lower-level” neurons fire.  All of this supports the current best high-level theory of invariant object recognition: that invariance is built up sequentially as signals move up through the visual system.

 

The visual system is composed of a set of layers stacked on top of each other.  You can think of it like a ladder.  At the bottom of the ladder is the retina; at the top are FFA and PPA in humans, and IT in the macaque.  Information flows up the ladder, and at each rung the data are processed before being passed to the next rung.  Theory suggests that at each level there are neurons that are a little bit more invariant to changes in how their preferred object is seen.  It is at the highest rungs that the responses of all these neurons combine to form a representation that is both selective for a particular object and invariant to how it is seen.  Since information flows up the ladder, neurons selective for particular views of an object should become active before the neurons that don’t care which view the object is seen from.  This is what we see in macaques, but until now it has been difficult to say whether it is true in humans.
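
A minimal sketch of the ladder idea, assuming nothing about any specific brain area: a few "lower-rung" units are each tuned to one view of an object, and a "higher-rung" unit pools over them with a max, so it responds to the object regardless of view. The vectors standing in for images are random and purely illustrative.

# Toy illustration of the "ladder": view-tuned units at a lower rung feed a
# max-pooling unit at a higher rung, which responds to the object regardless
# of view. Conceptual only; random vectors stand in for images.
import numpy as np

rng = np.random.default_rng(0)

banana_views = [rng.normal(size=100) for _ in range(3)]  # three views of one object
not_banana = rng.normal(size=100)                        # a different object

# Lower-rung units: each template-matches one particular view of the banana.
templates = [v / np.linalg.norm(v) for v in banana_views]

def view_specific_responses(image):
    """One response per view-tuned unit: similarity to its preferred view."""
    image = image / np.linalg.norm(image)
    return np.array([t @ image for t in templates])

def invariant_response(image):
    """Higher-rung unit: max over the view-tuned units below it."""
    return view_specific_responses(image).max()

for i, view in enumerate(banana_views):
    print(f"banana view {i}: {invariant_response(view):.2f}")
print(f"non-banana:     {invariant_response(not_banana):.2f}")
# Each banana view strongly drives only one lower-rung unit, yet the pooled
# unit responds to all of them: selective for "banana," invariant to view.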

 

Recently, a paper out of MIT by Isik et al. in the Journal of Neurophysiology has provided the first convincing evidence that invariance unfolds sequentially and hierarchically in humans.  The group ran subjects through a set of tests designed to reveal the dynamics of invariance while measuring their brain activity with magnetoencephalography (MEG).  They presented subjects with a small set of objects at different sizes and orientations, and trained a pattern classifier, a tool from machine learning, to recognize when each particular object was present based on the subject’s MEG activity.  To find activity related to invariance, they trained the classifier on responses to one particular size or orientation of each object, then tested the classifier on the other sizes and orientations.  This allowed them to find patterns of neural activity that persist no matter how the object is seen, patterns invariant to size and orientation, and to determine where and when these patterns arise.

They found that both orientation-invariant and size-invariant patterns show up after the patterns specific to one size and orientation: 150 ms, 125 ms, and 80 ms after the object was presented, respectively.  They also found that small rotations or changes in size show up earlier than larger ones, suggesting that more dramatic changes require more processing.  Using this timing data, Isik et al. were able to localize the sources of MEG activity at these different time points.  At the time that orientation- and size-specific information arises, around 80 ms after the object is shown, the sources are localized to the lower rungs of the visual ladder; during the periods when invariant information arises, around 125 to 150 ms after the object is shown, the sources are localized to the top rungs.  Taken together, this is strong evidence that invariant representations unfold as information climbs up through the visual system in the human brain as well.
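
The decoding strategy is the crux of the study, so here is a minimal sketch of the logic: train a classifier to distinguish two objects at one size, then test it at another size, where above-chance accuracy implies a size-invariant pattern. The data are simulated stand-ins rather than MEG recordings, and the off-the-shelf classifier is a generic choice, not necessarily the one Isik et al. used.

# Sketch of cross-condition decoding: train a classifier to tell two objects
# apart at one size, then test it at a different size. Above-chance accuracy
# on the untrained size implies a size-invariant pattern. All arrays here are
# simulated stand-ins, not MEG data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n_trials, n_sensors = 200, 50

# Each object has an identity pattern shared across sizes (the invariant
# signal); each size adds its own pattern; every trial adds noise.
identity = {obj: rng.normal(size=n_sensors) for obj in (0, 1)}
size_effect = {size: rng.normal(size=n_sensors) for size in ("small", "large")}

def simulate_trials(size):
    X = np.array([identity[obj] + size_effect[size]
                  + rng.normal(scale=2.0, size=n_sensors)
                  for obj in (0, 1) for _ in range(n_trials)])
    y = np.repeat([0, 1], n_trials)
    return X, y

X_train, y_train = simulate_trials("small")  # condition used for training
X_test, y_test = simulate_trials("large")    # held-out condition

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"decoding object identity at an untrained size: {acc:.2f} (chance = 0.50)")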

 

While the evidence is now unfolding in support of this particular theory of object recognition, we still lack a concrete model of exactly how each step is carried out.  At the end of the day, invariance remains a hard problem.  However, with recent investments by tech giants like Google and Facebook into recognition technologies, coupled with our increasing understanding of the way that biology has solved the problem, we may in fact be on the horizon of understanding exactly how we decide, banana or not banana.

 



Cleaning Your Body When You Sleep

Celine Cammarata

Sleep is a great mystery for scientists.  Nearly all living things do it, and sleep deprivation quickly leads to cognitive deficits, health problems, and death, so we can safely assume that sleep is important.  But for what exactly, no one is sure.  This week, a new paper in Science has made a splash by showing compelling evidence that sleep plays a key role in washing waste products from the brain, leaving it clean and refreshed for a new day of use.

 

Waste products are a natural part of life; all cellular processes produce waste, and being particularly busy cells, neurons tend to churn out a lot.  But unlike most of the body, where the lymphatic system clears metabolic waste, in the brain waste proteins are washed out of the space surrounding cells through the exchange of clean cerebrospinal fluid (CSF) from the ventricular system in and around the brain with interstitial fluid containing waste products.

 

In the current study, the authors examined how readily labeled CSF traveled around the brains of mice in various states and found that CSF influx into the brain was about 95% lower when animals were awake than when they were asleep.  Comparably high CSF flow was seen when mice were anesthetized.

 

The investigators hypothesized that the observed difference in CSF flow may be due to differences in the interstitial space when animals were asleep or awake; reduced space between cells in awake mice could impede the movement of CSF.  When they tested this, they found that indeed, interstitial space was significantly greater during sleep or anesthesia, making an easier route for CSF.

 

Better flow of CSF means solutes are more easily flushed out of the area surrounding cells.  The authors demonstrated that β-amyloid, a major waste product in the brain, was cleared much more efficiently in sleeping and anesthetized mice than in awake animals, as was an inert test tracer, ¹⁴C-inulin.

 

The finding that anesthesia acts similarly to natural sleep suggests that it is the animal’s state, rather than circadian rhythms, that dictates the solute-clearing properties of sleep, possibly via changes in cell volume that would in turn affect interstitial space.  Because they are known to be important in arousal, adrenergic neurotransmitters are a good candidate for signaling such changes.  Consistent with this idea, the authors found that blocking adrenergic signaling in awake animals improved CSF flow.

 

These findings suggest that a key role of sleep, and a reason it is so critical to brain function, may be to clear waste products and restore the brain to a healthy, clean state for the next day’s use.


Grow Your Own Brain

Sally Burn

For millennia the human brain has been an organ of infinite curiosity to scientists. Its complexity permits advanced thought processes, communicative abilities, and self-awareness at a higher level than in other species. Our earliest forays into the mechanics of the human brain, back in the Stone Age, involved trepanation: drilling holes in the skull to “cure” headaches. Since then, brain research has come a long way thanks to advances in microscopy, cellular neuroscience, transgenic animal models, and techniques such as MRI for imaging the living brain. However, one research area in which the brain has fallen behind is three-dimensional organ culture. Less complex tissues, such as the intestine, have proven amenable to growth in culture. By growing organs in a dish, researchers can easily monitor and manipulate their development, gaining valuable insights into how they normally develop and which genes are involved. Now, however, a team of scientists from Vienna and Edinburgh has found a way to grow embryonic “brains” in culture, opening up a whole world of research possibilities. Their technique, published online last week in Nature, has already provided a new insight into the etiology of microcephaly, a severe brain defect.

 

I’ll allow you a moment to get carried away with images of pulsating brains sat in petri dishes, silently controlling the actions of their lab minions before collapsing under the weight of existential angst… and then bring you back to science reality. The cultured “brains” don’t resemble miniature fully formed brains per se, instead taking the form of cerebral organoids. But the story of how they became cerebral organoids is actually a whole lot more exciting. The researchers started with human embryonic stem cells or induced pluripotent stem (iPS) cells, which they aggregated into embryoid bodies and then differentiated into neuroectoderm using neural induction media. The neuroectoderm was then cultured in a spinning bioreactor, resulting in three-dimensional cerebral organoids that recapitulate aspects of human brain development. By 20-30 days the organoids contain discrete but interdependent brain regions and even develop a cerebral cortex, the tissue that plays key roles in functions including language, consciousness, and memory. The cerebral cortex regions of the organoids are organized in a manner similar to those of real brains; they also contain neuronal progenitor cells capable of differentiating into functional mature neurons.

 

If you are at all familiar with iPS cells you can already see how useful this technique could be: cells taken from, for example, the skin of a patient with a particular brain defect could be converted into iPS cells and then cultured to generate patient-specific cerebral organoids. The researchers demonstrated this using skin-derived iPS cells from an individual with severe microcephaly, a brain disorder characterized by greatly reduced brain size. No accurate animal model of microcephaly has yet been generated, making this organoid model particularly important. Indeed, the patient-derived organoids exhibited a defect that could feasibly underlie microcephaly: premature neural differentiation.

 

The implications of this study for future research on brain development and disease are immense. Neuroscience has previously fallen victim to the sheer complexity of the human brain: the differences between human and rodent brains have left researchers with a disconnect between human diseases and animal models. Conducting research on human embryonic brains would, however, be an ethical non-starter. In vitro analysis of cerebral organoids therefore offers an innovative new way to investigate human neuro-development and neurological disease.

 


How Cocaine Drives Associations That Drive Craving

Celine Cammarata

We all learn to associate cues and settings with the things they promise, like linking the cool, creamy delight of a vanilla cone with the tune of the ice cream truck.  But for those recovering from substance abuse, such connections have a dark side: contexts associated with drug use can elicit intense cravings, posing a significant challenge to recovery.  A look inside the brain – literally – of mice undergoing a similar experience helps reveal the basis of these problematic associations.

 

Munoz-Cuevas et al. used two-photon microscopy to visualize the dendritic spines of neurons in mouse dorsal medial prefrontal cortex – an area known to respond to cocaine-associated stimuli – through a window implanted in the skull.  Spines, a primary site of incoming synapses, normally undergo a certain degree of turnover, with some spines being lost and new ones formed at a fairly stable rate; however, this rate can be affected by a number of stimuli and events, representing an important form of plasticity.

 

Remarkably, even a single dose of cocaine significantly increased the rate of spine gain.  With continued drug application the rate of spine formation remained partially elevated, tapering off after several days, but the initial boost in spine growth was dramatic and occurred within as little as two hours of cocaine administration, which is particularly interesting because it suggests these physiological changes are driven by the drug itself rather than by sustained use or withdrawal.  Because the loss of newly formed spines is unchanged by the drug, the end result is an increase in spine density, with the spines ‘induced’ by cocaine representing a higher than normal percentage of the total spine population.

 

So why all those spines?  The answer may link this neurobiological observation to cocaine’s behavioral effects.  Generally, when an animal receives a rewarding stimulus – such as a drug – in one side of a two-chambered enclosure and a neutral substance like saline in the other, it will, given the choice, spend more time in the chamber associated with the reward, an effect termed conditioned place preference.  When Munoz-Cuevas et al. tested conditioned place preference with cocaine, they found an important connection.  First, exposure to the cocaine chamber prompted much more spine formation than did exposure to the saline chamber, such that several days later the persisting cocaine-driven spines made up a far greater percentage of the total spines than did those gained following saline exposure.  Second, this percentage was strongly correlated with the degree to which an animal showed a conditioned place preference for the cocaine chamber – that is, there was a direct connection between cocaine-driven spine gains and the association of a context with the rewarding drug.
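
For a concrete sense of that second result, the analysis reduces to a correlation across animals between the fraction of surviving cocaine-induced spines and a place-preference score. The numbers below are invented purely to illustrate the calculation; they are not data from the study.

# Illustrative only: correlating persistent drug-induced spine fraction with
# conditioned place preference across animals. Values are invented, not data
# from the study.
from scipy.stats import pearsonr

# Per animal: % of total spines that are surviving cocaine-induced spines, and
# place-preference score (seconds in cocaine-paired minus saline-paired chamber).
persistent_spine_pct = [2.1, 3.4, 4.0, 5.2, 6.1, 7.3]
place_preference_s = [40, 95, 120, 180, 210, 260]

r, p = pearsonr(persistent_spine_pct, place_preference_s)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
# A strong positive r would mirror the reported link between cocaine-driven
# spine gains and preference for the drug-paired context.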

 

While it’s clear that much work remains to clarify the precise relationship between drug use, spine growth, and contextual associations, it’s exciting to know that we are one step closer to understanding the mechanisms by which addictive substances take hold of the brain and to developing potential therapies to aid those attempting to battle addiction.



Mapping the Human Brain: The Challenges Faced

Sophia David

The human brain is made up of billions of neurons that communicate with each other via trillions of connections. Together, they make up a network of unimaginable intricacy. Perhaps it is not surprising then, given this complexity, that things frequently go wrong within the brain. Approximately 1 in 4 people suffer from a diagnosable mental health disorder within any given year and as many as five million Americans now live with Alzheimer’s disease.

Unfortunately, drugs to treat brain disorders have been slow to materialize. Many large pharmaceutical companies have withdrawn from research on mental health diseases because of the length of time it takes these drugs to be developed and the high failure rate associated with them. Essentially, to big pharma, the field is unattractive and economically unviable.

Our inability to…