
For a Healthier 2018!

Dancing together is good for your health!

 

By Jesica Levingston Mac Leod, PhD

 

Social dancers know the amazing feeling that a synchronized dance can bring. When you and your follower or leader are connected, and it feels like one mind and body following the music, it is mystical and magical… Well, it turns out that synchronized dancing is also good for your health. I started dancing salsa because a good friend raved about it and recommended it, which inspired me to join a class. Until then I had been a solitary belly dancer, following along only in team dances, where you have a choreography and, if you are coordinated enough, you feel a celestial connection with the other dancers… but without any physical contact.

In social dances like salsa, bachata, tango, zouk or swing, on the other hand, connection is the basis of a good dance. Nobody wants to be the person stepping to the left when five other dancers move to the right while performing on a stage in front of hundreds of people; likewise, nobody enjoys turning the wrong way after misreading a dance partner’s lead, or watching a follower do a completely different step than the one the leader indicated. Furthermore, being “in sync” with the group or your dance partner may help improve your health, science says. In a nutshell, a recent study found that synchronizing with others while dancing raised pain tolerance and encouraged people to feel closer to one another.

This year, Dr. Burzynska et al., at Colorado State University, separated 174 healthy adults, aged 60 to 79, with no signs of memory loss or impairment, into three activity groups: walking, stretching and balance training, or dance classes. The activities were carried out three times a week for six months; those in the dance group practiced and learned a country dance choreography. Brain scans were done on all participants and compared with scans taken before the activities began. Not surprisingly, the participants in the dancing group performed better and showed less deterioration in their brains than the other groups. Their most recent study, published in November, “The Dancing Brain: Structural and Functional Signatures of Expert Dance Training,” showed that dancers’ brains differed from non-dancers’ at both the functional and structural levels. Most of the group differences were skill-relevant and correlated with objective laboratory measures of dance skill and balance. Their results are promising in that “long-term, versatile, combined motor and coordination training may induce neural alterations that support performance demands.”

Moreover, it is well established that dance-based therapies provide outstanding results in the treatment of dementia, autism and Parkinson’s. Indeed, dance therapy improves motor and cognitive functions in patients with Parkinson’s disease, and dancing has been suggested as a powerful tool for improving motor-cognitive dual-task performance in adults. Dance movement therapy also has known benefits for cancer patients’ physical and psychological health and quality of life. Another study, by Domane and collaborators, working with a cohort of overweight and physically inactive women, showed that Zumba fitness is indeed an efficacious health-enhancing activity for adults. Park likewise concluded that “a 12-week low- to moderate-intensity exercise program appears to be beneficial for obese elderly women by improving risk factors for cardiovascular disease.”

Dancing helps generate positive connections with others, and this is one of the evolutionary reasons you feel “called” to the dance floor when a song you like starts playing; you will probably start your dance by coordinating with or copying others. This behavior likely signaled tribe membership for early humans and also brought couples together in a more romantic way, creating emotional bonds. Coordinated dances are as old as music and are found across many different cultures; the haka, for example, performed nowadays by rugby teams, originated as a Māori group dance used to intimidate rival tribes.

As for the chemistry of dancing: like any other exercise, it releases endorphins (the hormones of happiness and pain relief). For example, in a study from the University of London, anxiety sufferers enrolled in one of four settings: an exercise class, a music class, a math class or a dance class. Only the last group displayed “significantly reduced anxiety.”

In the most recent study from the same London university, Tarr and collaborators used pain thresholds as an indirect measure of endorphin release (more endorphins mean we tolerate pain better) in 264 young people in Brazil. The volunteers were divided into groups of three and did either high- or low-exertion dancing that was either synchronized or unsynchronized. The high-exertion moves were standing, full-bodied movements, while the low-exertion groups did small hand movements sitting down. The researchers measured feelings of closeness before and after via a questionnaire, and pain thresholds by attaching and inflating a blood pressure cuff on the arm and determining how much pressure each volunteer could stand.

Most of the volunteers who did full-bodied, exertive dancing had higher pain thresholds than those in the low-exertion groups. Most importantly, synchronization led to higher pain thresholds even when the synchronized movements were not exertive: when the volunteers saw others doing the same movement at the same time, their pain thresholds increased.

The results also showed that synchronized activity encouraged bonding and feelings of closeness more than unsynchronized dancing. “Dance which combined high energy and synchrony had the greatest effects. So the next time you find yourself at an awkward Christmas party or at a wedding wondering whether or not to get up and groove, just do it,” says Dr. Tarr.

Coming back to the dance floor, I reached out to the best bachata DJ, Brian el Matatan, for his opinion on the wellness benefits of dancing: “I enjoy the dancing for a few reasons. There’s the enjoyment & challenge of using what I’ve learned; socially as well as choreographed performance. Also, there is the rush of endorphins similar to ‘runner’s high’. There’s also the socializing aspect of dancing. It’s like having a conversation without speaking.” Well said, DJ!
He also offered some advice for followers: dance with many different types of leaders if you’d like to improve your following. There are many different leads, and there is an experience to be gained in social dancing that would not be gained in a dance class. Also, feel free to ask a leader to dance, and be courteous in how you decline a dance. Most importantly, communicate. Don’t “lead” a leader into thinking their lead is better than it really is, for your sake and that of your fellow followers. For example, if he almost ended your life with that risky move, let him know, so that he doesn’t try it on you or anyone else again (at least not without figuring out how to do the move properly). And some advice for leaders: be VERY courteous in how you ask for a dance, try not to take rejection personally, be patient with followers who may not be at the same skill level as you, and don’t almost end her life with risky moves.

Lastly, I asked the most sensual dancer, scientist, and project manager – Debbie McCabe – for her advice for followers. She commented, “The lady’s job is to surrender and connect to her partner… it is a 3-minute love affair and energy exchange. I love Bachata because I can get out of my head and just feel, express my sensuality, be playful and connect… it balances out my left-brained day job.”

More than 20 years ago, scientists found a connection between Mozart’s music and enhanced performance or altered neuropsychological activity, from which the theory of “The Mozart Effect” was derived. The proposed basis of the Mozart Effect lies in the super-organization of the cerebral cortex, which might resonate with the superior architecture of Mozart’s music. Basically, listening to Mozart K.448 enhances performance on spatial tasks for a period of approximately 20 minutes.

So dear reader, please stop complaining and making excuses and just dance! Or at least listen to music, as the outstanding jazz singer Tamar Korn once told me when I was in distress “music heals”.

 

This post was originally published on Dec 30, 2015 and was updated with new research on Dec 12, 2016 and on Dec 19, 2017.


The Fake Drug Problem

 

By Gesa Junge, PhD

Tablets, injections, and drops are convenient ways to administer life-saving medicine - but there is no way to tell what’s in them just by looking, and that makes drugs relatively easy to counterfeit. Counterfeit drugs are medicines that contain the wrong amount or type of active ingredient (the vast majority of cases), are sold in fraudulent packaging, or are contaminated with harmful substances. A very important distinction here: counterfeit drugs do not equal generic drugs. Generic drugs contain the same type and dose of active ingredient as a branded product and have undergone clinical trials, and they, too, can be counterfeited. In fact, counterfeiting can affect any drug, and although the main targets, particularly in Europe and North America, have historically been “lifestyle drugs” such as Viagra and weight loss products, fake versions of cancer drugs, antidepressants, anti-malaria drugs and even medical devices are increasingly reported.

The consequences of counterfeit medicines can be fatal, for example, due to toxic contaminants in medicines, or inactive drugs used to treat life-threatening conditions. According to a BBC article, over 100,000 people die each year due to ineffective malaria medicines, and overall, Interpol puts the number of deaths due to counterfeit pharmaceuticals at up to a million per year. There are also other public health implications: Antibiotics in too low doses may not help a patient fight an infection, but they can be sufficient to induce resistance in bacteria, and counterfeit painkillers containing fentanyl, a powerful opioid, are a major contributor to the opioid crisis, according to the DEA.

It seems nearly impossible to accurately quantify the global market for counterfeit pharmaceuticals, but it may be as much as $200bn, or possibly over $400bn. The profit margin of fake drugs is huge because the expensive part of a drug is the active ingredient, which can relatively easily be replaced with cheap, inert material. These inactive pills can then be sold at a fraction of the price of the real drug while still making a profit. According to a 2011 report by the Stimson Center, the large profit margin combined with comparatively low penalties for manufacturing and selling counterfeit pharmaceuticals makes counterfeiting drugs a popular revenue stream for organized crime, including global terrorist organizations.

Even though the incidence of drug counterfeiting is very hard to estimate, it is certainly a global problem. It is most prevalent in developing countries, where 10-30% of all medication sold may be fake, and less so in industrialized countries (below 1%), according to the CDC. In the summer of 2015, Interpol launched a coordinated campaign in 115 countries during which millions of counterfeit medicines with an estimated value of $81 million were seized, including everything from eye drops and tanning lotion to antidepressants and fertility drugs. The operation also shut down over 2400 websites and 550 adverts for illegal online pharmacies in an effort to combat online sales of illegal drugs.

There are several methods to help protect the integrity of pharmaceuticals, including tamper-evident packaging (e.g. blister packs) which can show customers if the packaging has been opened. However, the bigger problem lies in counterfeit pharmaceuticals making their way into the supply chain of drug companies. Tracking technology in the form of barcodes or RFID chips can establish a data trail that allows companies to follow each lot from manufacturer to pharmacy shelf, and as of 2013, tracking of pharmaceuticals throughout the supply chain is required as per the Drug Quality and Security Act. But this still does not necessarily let a customer know if the tablets they bought are fake or not.

Ingredients in a tablet or solution can fairly easily be identified by chromatography or spectroscopy. However, these methods require highly specialized, expensive equipment; most drug companies and research institutions have access to it, but it is not widely available in many parts of the world. To address this problem, researchers at the University of Notre Dame have developed a very cool, low-tech method to quickly test drugs for their ingredients: a tablet is scratched across a chemically coated paper card, and the paper is then dipped in water. Various chemicals coated on the paper react with ingredients in the drug to form colors, resulting in a “color bar code” that can then be compared to known samples of filler materials commonly used in counterfeit drugs, as well as active pharmaceutical ingredients.

Recently, there have also been policy efforts to address the problem. The European Commission released its Falsified Medicines Directive in 2011, which established counterfeit medicines as a public health threat and called for stricter penalties for producing and selling counterfeit medicines. The directive also established a common logo to be displayed on websites, allowing customers to verify they are buying through a legitimate site. In the US, VIPPS accredits legitimate online pharmacies, and in May of this year, a bill calling for stricter penalties on the distribution and import of counterfeit medicine was introduced in Congress. In addition, there have been various public awareness campaigns, for example, last year’s MHRA #FakeMeds campaign in the UK, which focused specifically on diet pills sold online, and the FDA’s “BeSafeRx” program, which offers resources for safely buying drugs online.

In spite of all the efforts to raise awareness and address the problem of fake drugs, a major complication remains: Generic drugs, as well as branded drugs, are often produced overseas and many are sold online, which saves cost and can bring the price of medication down, making it affordable to many people. The key will be to strike the balance between restricting access of counterfeiters to the supply chain while not restricting access to affordable, quality medication for patients who need them.


3D printed model of Cas9 from CRISPR

We need to talk about CRISPR

By Gesa Junge, PhD

You’ve probably heard of CRISPR, the magic new gene editing technique that will either ruin the world or save it, depending on what you read and whom you talk to. Or of the “Three Parent Baby” that scientists in the UK have created?

CRISPR is a technology based on a bacterial immune defense system which uses Cas9, a nuclease, to cut up foreign genetic material (e.g., viral DNA). Scientists have developed a method by which they can modify the recognition part of the system, the guide RNA, and make it specific to a site in the genome that Cas9 then cuts. This is often described as “gene editing,” which allows disease-causing genes to be swapped out for healthy ones.

CRISPR is now so well known that Google finally stopped suggesting I may be looking for “crisps” instead, but the real-world applications are not so well worked out yet, and there are various issues around CRISPR, including off-target effects, and also the fact that deleting genes is much easier than replacing them with something else. But, after researchers at Oregon Health and Science University managed to change the mutated version of the MYBPC3 gene to the unmutated version in a viable human embryo last month, the predictable bioethical debate was reignited, and terms such as “Designer Babies” got thrown around a lot.

A similar thing happened with the “Three Parent Baby,” an unfortunate term coined to describe mitochondrial replacement therapy (MRT). Mitochondria, the cells’ organelles for providing energy, have their own DNA (making up about 0.2% of the total genome), which is separate from the genomic DNA in the nucleus, the body’s blueprint. Mitochondrial DNA can mutate just like genomic DNA, potentially leading to mitochondrial disease, which affects 1 in 5,000-10,000 children. Mitochondrial disease can manifest in various ways, ranging from growth defects to heart or kidney disease to neuropsychological symptoms. Symptoms can range from very mild to very severe or fatal, and the disease is incurable.

MRT replaces the mutated mitochondrial DNA in a fertilized egg or embryo with a healthy version provided by a third donor, which allows the mitochondria to develop normally. The UK was the first country to allow the “cautious adoption” of this technique.

While headlines need to draw attention and engage the reader for obvious reasons, oversimplifications like “gene editing” and dramatic phrases like “three parent babies” can really get in the way of broadening the understanding of science, which is difficult enough as it is. Research is a slow and inefficient process that easily gets lost in a 24-hour news cycle, and often the context is complex and not easily summed up in 140 characters. And even when the audience can be engaged and interested, the relevant papers are probably hiding behind a paywall, making fact checking difficult.

Aside from difficulties communicating the technicalities and results of studies, there is also often a lack of context in presenting scientific studies - think for example of chocolate and red wine which may or may not protect from heart attacks. What is lost in many headlines is that scientific studies usually express their results as a change in risk of developing a disease, not a direct causation, and very few diseases are caused by one chemical or one food additive. On this topic, WNYC’s “On The Media”-team have an issue of their Breaking News Consumer Handbook that is very useful to evaluate health news.

The causation vs. correlation issue is perhaps a little easier to discuss than big ethical questions that involve changing the germline DNA of human beings because ethical questions do not usually have a scientific answer, let alone a right answer. This is a problem, not just for scientists, but for everyone, because innovation often moves out of the realm of established ethics, forcing us to re-evaluate it.

Both CRISPR and MRT are very powerful techniques that can alter a person’s DNA, and potentially the DNA of their children, which makes them both promising and scary. We are not ready to use CRISPR to cure all cancers yet, and “Three Parent Babies” are not designed by anyone, but unfortunately, it can be hard to look past Designer Babies, Killer Mutations and DNA Scissors, and have a constructive discussion about the real issues, which needs to happen! These technologies exist; they will improve and eventually, and inevitably, play a role in medicine. The question is, would we rather have this development happen in reasonably well-regulated environments where authorities are at least somewhat accountable to the public, or are we happy to let countries with more questionable human rights records and even more opaque power structures take the lead?

Scientists have a responsibility to make sure their work is used for the benefit of humanity, and part of that is taking the time to talk about what we do in terms that anyone can understand, and to clarify all potential implications (both positive and negative), so that there can be an informed public discussion, and hopefully a solution everyone can live with.

 

Further Reading:

CRISPR:

National Geographic

Washington Post

 

Mitochondrial Replacement Therapy:

A paper on clinical and ethical implications

New York Times (Op-Ed)

 



Is Your Deodorant Bad For Your Health?

 

By Jesica Levingston Mac Leod, PhD

Body odor (BO) is part of our evolution, and the ability to smell has evolved with us, making people fall in love with, or run away from, a smelly person. Sweat’s primary function is to cool the body down and avoid overheating. Sweat can also be triggered by stress, anxiety or other hormonal changes. Sweat by itself doesn’t smell; rather, the bacteria located near the glands, for example in the armpits, break down the sweat, generating the “BO.” How do we deal with the stinky fact? We apply deodorants and/or antiperspirants. Deodorants have ingredients like triclosan, which make the skin saltier or more acidic for the bacteria that grow in those areas. Deodorants therefore don’t stop you from sweating, but antiperspirants will do the trick, as they contain ingredients like aluminum and zirconium salts, which are taken up through the pores, react with water and swell, forming a gel that blocks the sweat.

Last year, Mandriota and collaborators demonstrated in a mouse cancer model that concentrations of aluminum comparable to those measured in the human breast can transform cultured mammary epithelial cells, allowing them to form tumors and metastasize. Moreover, aluminum salts have been linked with DNA damage, oxidative stress, and estrogen-like action. In 2004, a woman reported aluminum poisoning after using antiperspirants for four years; after she stopped using these products, her aluminum levels dropped and she recovered.

Breast cancer develops when cells with mutations in their DNA start growing uncontrollably, generating a tumor. Most breast cancers develop in the upper outer quadrant of the breast, near the lymph nodes that are exposed to antiperspirants. This fact was the starting point for theories that underarm cosmetic products could be carcinogenic. One of the first publications on this subject dates from 2002; it was population-based (1,606 patients, ages 20-74) and found no correlation between breast cancer and antiperspirant use. A second article found a relationship between an earlier age of breast cancer diagnosis and more frequent regular use of antiperspirants/deodorants and underarm shaving.

Aluminum salts have been linked to an increased risk of developing breast cancer, but so far the research on this has been quite inconsistent. Last month, a new study of 418 women (ages 20 to 85) examined their self-reported history of underarm cosmetic product use and health status, in order to unveil a bit more about the link between antiperspirants and breast cancer. Linhart and colleagues, from Austria, studied the relationship between the use of underarm cosmetic products and the risk of breast cancer. They divided the group in two: half of the women were breast cancer patients and the other half healthy controls. They then measured the concentration of aluminum in the breast tissue of some of the women. The results showed that the risk of breast cancer increased, with an odds ratio of 3.88, in women who described using underarm products multiple times per day starting before their 30th birthday. Importantly, aluminum traces were found in the breast tissue of both cancer patients and healthy controls, and they were significantly associated with self-reported use of underarm cosmetic products. In fact, the median concentration of aluminum was 5.8 (2.3-12.9) nmol/g in tissue from breast cancer patients versus 3.8 (2.5-5.8) nmol/g in controls. The conclusion is that more-than-daily use of these cosmetic products at younger ages may lead to the accumulation of aluminum in breast tissue and increase the risk of breast cancer.
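For readers curious what an “odds ratio of 3.88” actually means, here is a minimal sketch of how a case-control odds ratio is computed. The 2×2 counts below are invented for illustration; they are not the actual numbers from the Linhart study.

```python
# Hypothetical case-control table. "Exposed" = reported using underarm
# products multiple times per day before age 30.
def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Odds ratio = (odds of exposure among cases) / (odds among controls)."""
    case_odds = exposed_cases / unexposed_cases
    control_odds = exposed_controls / unexposed_controls
    return case_odds / control_odds

# Illustrative counts only (not the study's data):
or_value = odds_ratio(exposed_cases=35, unexposed_cases=174,
                      exposed_controls=12, unexposed_controls=197)
print(round(or_value, 2))  # odds of breast cancer ~3x higher among the "exposed"
```

An odds ratio of 1 would mean no association; values well above 1, like the study’s 3.88, indicate that the exposure is more common among patients than among controls.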

Although the American Cancer Society claims that “there are no strong epidemiologic studies in the medical literature that link breast cancer risk and antiperspirant use,” after the Linhart investigation, and knowing that 1 in 8 women will be diagnosed with breast cancer in her lifetime, I will avoid antiperspirants with aluminum. Nobody wants to be called “stinky,” so some actions to take are to wash your clothes after working out, shower regularly and/or clean your armpits with water and soap as soon as you “smell something,” apply deodorant, and consult your doctor about the best way to keep your body odors under control. The last resort: perfume. If you can’t win the fight… hide.


The Science of Solar Eclipses

By JoEllen McBride, PhD 

As the sky darkens on August 21st, we will stand in awe of the first total solar eclipse to cross over the contiguous U.S. in almost 40 years. This is also a chance for scientists to do what they do best-- science!

 

Total Eclipse of the Sun

Every month, the Moon passes between the Earth and Sun during its New Moon phase. We can’t see the New Moon because the side that faces us isn’t illuminated by the Sun, but it’s up there. Solar eclipses happen only when the Moon, during its New Moon phase, crosses the plane of the Earth-Sun orbit. During all other New Moons, the Moon sits too high or too low relative to that plane to cover the Sun.

 

A total solar eclipse is even more special. The cosmos has gifted us with a spectacular coincidence: the distance between the Moon and Earth is about 400 times less than the distance between the Sun and Earth. This wouldn’t be interesting except for the fact that the Moon is also about 400 times smaller than the Sun. Once the Moon hits that sweet spot in its orbit around Earth, it appears just large enough to completely cover the Sun.
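The coincidence can be checked with a few lines of arithmetic. This sketch uses rough average values for the radii and distances (both vary along the orbits, which is part of why some eclipses are annular rather than total):

```python
import math

# Approximate average values, in kilometers.
SUN_RADIUS_KM = 696_000
MOON_RADIUS_KM = 1_737
SUN_DISTANCE_KM = 149_600_000   # average Earth-Sun distance
MOON_DISTANCE_KM = 384_400      # average Earth-Moon distance

def angular_diameter_deg(radius_km, distance_km):
    """Apparent angular diameter of a sphere as seen from Earth, in degrees."""
    return math.degrees(2 * math.atan(radius_km / distance_km))

sun_deg = angular_diameter_deg(SUN_RADIUS_KM, SUN_DISTANCE_KM)
moon_deg = angular_diameter_deg(MOON_RADIUS_KM, MOON_DISTANCE_KM)
print(f"Sun: {sun_deg:.2f} degrees, Moon: {moon_deg:.2f} degrees")
```

Both come out to roughly half a degree on the sky, which is why the Moon can cover the Sun almost exactly when the geometry lines up.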

 

That also means that sometimes a solar eclipse occurs and the Moon doesn’t completely cover the Sun. These are partial or annular eclipses and it just means that the Moon was too far from Earth to hide the Sun completely.

 

A solar eclipse occurs approximately every year and a half (give or take a few months). What makes them seem so rare is that our planet is mostly ocean, so the chance of a solar eclipse passing over inhabited land is reduced. That’s why Monday’s total solar eclipse passing over the entire mainland U.S. is such a big deal! Don’t let Neil deGrasse Tyson put a damper on it!

 

Predicting Eclipses

It is true that for centuries solar eclipses were regarded as omens and bringers of terrible things by many human societies. But once we figured out that they were predictable, we quickly used them to learn about the universe. The first recorded eclipse prediction was made by Thales of ancient Greece, around 610 or 585 BCE, using deductive geometry borrowed from the Egyptians. Euclid, much later, formalized this into what is now known as Euclidean geometry. The historical record suggests that Thales’s prediction only worked once, though, because there are no other accounts of anyone successfully predicting an eclipse until Ptolemy used Euclidean geometry to do so around 150 CE.

 

So how can scientists use this periodic alignment of celestial bodies to their advantage? The Sun is a pretty reliable part of our day, so having it gone for a few moments allows us to study the reaction of animals to an abrupt change in their environment. You’ll hear birds stop singing and frogs and crickets will begin chirping as the sky darkens. Mammals will begin their bedtime rituals also. But we can learn the most about the Sun itself from a solar eclipse.

 

Image of the corona created by placing a disc over the Sun to mimic a solar eclipse. These instruments, called coronagraphs, still allow a little sunlight to get through which can mess up measurements of the corona. So scientists still rely on real deal total solar eclipses to study the corona in detail.

Grab a Corona

The Sun has an outer atmosphere, called the corona, extending millions of miles above its surface. At temperatures reaching a few million degrees Fahrenheit, the corona is significantly hotter than the Sun’s surface. The corona was first observed in 968 CE during a solar eclipse, and for many centuries scientists debated whether this bright, wispy envelope was part of the Sun or the Moon. It wasn’t recognized as part of the Sun until the eclipse of 1724, and this wasn’t verified until over a century later, in 1842. Then, during the 1932 and 1940 solar eclipses, scientists determined just how hot the corona is: iron atoms in the corona are stripped of their electrons, which can only happen if the atoms are heated to millions of degrees. This mystery still summons solar physicists to all parts of the planet to observe solar eclipses, and this eclipse is no different. They’re still not sure why the corona is so hot.

 

Get You Some Flare

Solar eclipses also allow scientists to study another extremity of the Sun: solar flares. Solar flares, or prominences, are as spectacular as they are dangerous-- especially today. They can disrupt satellites and other communications devices as well as short out electrical grids, so it is crucial that we understand as much as we can about them. The first solar prominence was observed with the naked eye during a partial solar eclipse in 334 CE. Knowing this probably would have helped Birger Wassenius during the total solar eclipse of 1733: he noticed solar flares but suspected they were coming from the Moon. It wasn’t until the solar eclipse of 1842 that scientists verified the ejections were coming from the Sun.

 

The Sun goes through cycles of solar flare activity about every 11 years. This year, the Sun is approaching a low point in its activity, so scientists will use this total eclipse to study how flares differ from when the Sun is more active.

 

Other Notable Discoveries Thanks to Solar Eclipses

The element helium was discovered in the Sun’s light during the 1868 and 1869 solar eclipses and named after the Sun (Helios = Sun in Greek); it wasn’t identified on Earth until 1895. Another big win for physics came during the 1919 solar eclipse, when scientists used the darkened sky to verify that the Sun is massive enough to bend the light of faraway stars before it reaches us. Stars that should have been hidden behind the Sun-- and therefore not visible during the eclipse-- were clearly seen. This confirmed part of Einstein’s theory of relativity: massive objects bend space around them.

 

Solar eclipses are awe-inspiring and also useful to science. So make sure you grab your eclipse glasses or pinhole cameras or fists and get out there!

 


What A Marshmallow Can Say About Your Brain

By Deirdre Sackett

In the 1970s, researchers at Stanford University performed a simple experiment. They offered children the chance to eat a single marshmallow right now, or wait 15 minutes to receive two marshmallows. Out of 600 children in the study, only about ⅓ were able to wait long enough for two treats. Most attempted to wait, but couldn’t make it through the whole 15 minutes. A minority of kids ate the marshmallow immediately.

 

Feeding marshmallows to children in the name of science may seem like a waste of federal funds. But it turns out that the ability to wait for a treat can actually predict a lot about someone’s personality and life trajectory.

 

Since the 70s, many scientific groups have repeated the “marshmallow test” (some of which have been hilariously documented). In some iterations, researchers recorded whether each child chose an immediate versus delayed treat, and then tracked the children’s characteristics as they grew up. Amazingly, the children’s choices predicted some important attributes later on in life. Generally, the more patient children who waited for the bigger reward would go on to score higher on the SAT, have a lower body mass index (BMI), and were more socially and cognitively competent compared to the kids who couldn’t wait and immediately ate one treat.

 

The “marshmallow test” measures a cognitive ability called delay discounting. The concept is that a big reward becomes less attractive (or “discounted”) the longer you need to wait for it. As such, delay discounting is a measure of impulsivity: how long are you willing to wait for something really good before choosing a quicker, but less ideal, option?
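One common way researchers formalize delay discounting is with a hyperbolic model, in which a reward of size A delayed by D units of time feels worth V = A / (1 + kD), where k is an individual’s discount rate (higher k = more impulsive). Here is a minimal sketch of how the marshmallow choice plays out under that model; the k values are made up for illustration, not taken from any study:

```python
# Hyperbolic discounting: the subjective value V of a reward of size A
# delayed by D units of time is V = A / (1 + k*D), where k is the
# individual's discount rate (higher k = more impulsive).
def discounted_value(amount, delay, k):
    return amount / (1 + k * delay)

# Illustrative (made-up) discount rates, in units of 1/minute:
patient_k, impulsive_k = 0.01, 0.2

one_now = discounted_value(1, 0, patient_k)             # 1 marshmallow now
two_later_patient = discounted_value(2, 15, patient_k)
two_later_impulsive = discounted_value(2, 15, impulsive_k)

# For the patient child, two marshmallows in 15 minutes still "feel"
# worth more than one right now; for the impulsive child they do not.
print(two_later_patient > one_now)    # True  (2/1.15 ≈ 1.74 > 1)
print(two_later_impulsive > one_now)  # False (2/4.0 = 0.5 < 1)
```

The same tradeoff, with the same formula, is what underlies the monetary versions of the task used with adults.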

 

While it’s okay to make the occasional spur-of-the-moment choice, steep delay discounting (increased impulsivity) is often a symptom of problem gambling, ADHD, bipolar disorder, and other mental health issues. Drug addiction in particular is accompanied by increased impulsive choices. For instance, drug users will choose immediate rewards (such as drugs of abuse) over delayed, long-term rewards (such as family life, socializing, or jobs), and they choose immediate options faster than non-drug users do. This isn’t just a human flaw; exposing rats to cocaine also increases their impulsivity during delay discounting tasks.

 

Interestingly, aspects of the “marshmallow test” hint at this impulsivity-drug addiction link. In 2011, researchers did a follow-up study with the (now adult) children from the original 1970s Stanford experiment. The scientists imaged the subjects’ brains while the subjects performed a delayed gratification task in which they had to wait for a reward. They found that patient and impulsive individuals had very different activity in two specific brain regions involved in drug addiction.

 

Firstly, the study found that impulsive individuals had greater activity in the ventral striatum, a brain region heavily linked to drug addiction and impulsivity. The greater activity in this region may imply that impulsive individuals process information about rewards differently than patient individuals. That is, the way their brain is wired may cause them to want their rewards right now.

 

Secondly, the impulsive individuals had less activity in the prefrontal cortex, which is responsible for “putting on the brakes” on impulsive actions. This finding suggests that impulsive individuals may lack the neural “supervisor” that can stop them from acting on their impulses. Drug addicts show similarly reduced prefrontal activity. So in addition to predicting worse standardized test scores, higher BMIs, and lower social competence, the marshmallow test suggests that impulsive individuals may have brain activity similar to that of drug users.

 

While it seems like a silly experiment, the marshmallow test is a great starting point to help increase our understanding of impulsivity. Using this information, researchers can start to develop treatments for impulsive behavior that negatively affects people’s lives. Specifically, treating impulsivity in drug addicts could help as part of the rehabilitation process. So think about that the next time you reach for that sweet treat!

 


A lego Gollum crouches over a metal ring

One ring to rule them all: The cohesin complex

By Johannes Buheitel, PhD

In my blog post about mitosis (http://www.myscizzle.com/blog/phases-of-mitosis/), I explained some of the challenges a human cell faces when it tries to disentangle its previously replicated chromosomes (for an overview of the cell cycle, see also http://www.myscizzle.com/blog/cell-cycle-introduction/) and segregate them in a highly ordered fashion into the newly forming daughter cells. I also mentioned a protein complex that is integral to this chromosomal ballet: the cohesin complex. To recap, cohesin is a multimeric ring complex that holds the two chromatids of a chromosome together from the time the second sister chromatid is generated in S phase until their separation in M phase. This decreases complexity, and thereby increases the fidelity of chromosome segregation, and thus of mitosis/cell division. And while this feat alone should be enough to warrant devoting a whole blog post to cohesin, you will shortly realize that the complex also performs a myriad of other functions during the cell cycle, which really makes it "one ring to rule them all".

Figure 1: The cohesin complex. The core complex consists of three subunits: Scc1/Rad21, Smc1, and Smc3. They interact to form a ring structure, which embraces ("coheses") sister chromatids.

But let’s back up a little first. Cohesin's integral ring structure is composed of three proteins: Smc1, Smc3 (Structural maintenance of chromosomes), and Scc1/Rad21 (Sister chromatid cohesion/radiation sensitive). These three proteins attach to each other in a more or less end-to-end manner, thereby forming a circular structure (see Figure 1; ONLY for the nerds: Smc1 and Smc3 form long intramolecular coiled-coils by folding back onto themselves, bringing their N- and C-termini together at the same end. This means that these two proteins actually interact via their middle parts, forming the so-called “hinge”, rather than truly “end-to-end”). Cohesin obviously gets its name from the fact that it causes “cohesion” between sister chromatids, which was first described 20 years ago in budding yeast. The theory that the complex does so by embracing DNA inside the ring’s lumen was properly formulated in 2002 by the Nasmyth group, and much evidence supporting this “ring embrace model” has been brought forth in the years since, making it widely (but not universally) accepted in the field. According to our current understanding, cohesin is loaded onto DNA (along the entire length of the decondensed one-chromatid chromosome) as early as telophase, i.e. only minutes after chromosome segregation, by opening and closing its Smc1-Smc3 interaction site (the “entry gate”). When the second sister chromatid is synthesized in S phase, cohesin establishes sister chromatid cohesion in a co-replicative manner (only once the second sister chromatid exists can you actually start talking about “cohesion”). Early in the following mitosis, in prophase to be exact, the bulk of cohesin is removed from chromosome arms in a non-proteolytic manner by opening up the Smc3-Scc1/Rad21 interface (the “exit gate”; this mechanism is also called the "prophase pathway").
However, a small but very important fraction of cohesin molecules, which is located at the chromosomes’ centromere regions, remains protected from this removal mechanism in prophase. This not only ensures that sister chromatids remain cohesed until the metaphase-to-anaphase transition, but also provides us with the stereotypical image of an X-shaped chromosome. The last stage in the life of a cohesin ring is its removal from centromeres, a tightly regulated process, which involves proteolytic cleavage of cohesin’s Scc1/Rad21 subunit (see Figure 2).

Figure 2: The cohesin cycle. Cohesin is topologically loaded onto DNA in telophase by opening up the Smc1-Smc3 interface ("entry gate"). Sister chromatid cohesion is established during S phase, coinciding with the synthesis of the second sister. In prophase of early mitosis, the bulk of cohesin molecules is removed from chromosome arms (the "prophase pathway") by opening up the interface between Scc1/Rad21 and Smc3 ("exit gate"). Centromeric cohesin is ultimately proteolytically removed at the metaphase-to-anaphase transition.

As you can see, during the 24 hours of a typical mammalian cell cycle, cohesin is pretty much always directly associated with the entire genome (the exceptions being chromosome arms during most of mitosis, i.e. 20-40 minutes, and entire chromatids during anaphase, i.e. ~10 minutes). This means that cohesin has at least the potential to influence a whole bunch of other chromosomal events, like DNA replication, gene expression and DNA topology. And you know what? Turns out it does!

Soon after cohesin was described as this guardian of sister chromatid cohesion, it also became clear that there is just more to it. Take DNA replication for example. There is good evidence that initial cohesin loading is already topological (meaning, the ring closes around the single chromatid). That poses an obvious problem during S phase: While DNA replication machineries (“replisomes”) zip along the chromosomes trying to faithfully duplicate the entire genome in a matter of just a couple of hours, they encounter – on average – multiple cohesin rings that are already wrapped around DNA. Simultaneously, cohesin's job is to take those newly generated sister chromatids and hold them tightly to the old one. Currently, we don’t really know how this works, whether the replisome can pass through closed cohesin rings, or whether cohesin gets knocked off and reloaded after synthesis. What we do know, however, is that cohesion establishment and DNA replication are strongly interdependent, with defects in cohesion metabolism causing replication phenotypes and vice versa.

Cohesin has also been shown to have functions in transcriptional regulation. It was observed quite early that cohesin can act as an insulation factor, blocking long-range promoter-enhancer association. Today we have good evidence showing that cohesin binds to chromosomal insulator elements that are usually associated with the CTCF (CCCTC-binding factor) transcriptional regulator. Here, the ring complex is thought to help CTCF's agenda by creating internal loops, i.e. inside the same sister chromatid!

Studying cohesin has, of course, not only academic value. Because of its pleiotropic functions, defects in human cohesin biology can cause a number of clinically relevant issues. Since actual cohesion defects cause mitotic failure (which almost surely results in cell death), most cohesin-associated diseases are believed to be caused by misregulation of the complex's non-canonical functions in replication/transcription. These so-called cohesinopathies (e.g. Roberts syndrome and Cornelia de Lange syndrome) are congenital disorders with widely ranging symptoms, which usually include craniofacial/upper-limb deformities as well as intellectual disability.

It is important to mention that cohesin also has a very unique role in meiosis, where it not only coheses sister chromatids but also chromosomal homologs (the two maternal/paternal versions of a chromosome, each consisting of two sisters, which themselves are cohesed). As a reminder, a human female's lifetime supply of oocytes is produced before birth. These oocytes are arrested in prophase I (prophase of the first meiotic division) with fully cohesed homologs and sisters, and resume meiosis one by one each menstrual cycle. This means that some oocytes might need to keep up their cohesion (between sisters AND homologs) over decades, which, considering the half-life of your average protein, can be challenging. This has important medical relevance, as cohesion failure is believed to be the main cause behind missegregation of homologs, and thus age-related aneuploidies such as trisomy 21.

After twenty years of research, the cohesin complex still manages to surprise us regularly, as new functions in new areas of cell cycle regulation come to light. Currently, extensive research is being conducted to better understand the role of certain cohesin mutations in cancers such as glioblastoma or Ewing's sarcoma. And while we're still far away from completely understanding this complex complex, we already know enough to say that cohesin really is "one ring to rule them all".

 


The WTF Star: Alien Mega Structure or Mega Version of Jupiter System?

 

JoEllen McBride, PhD

 

The Kepler telescope, despite technical issues, has observed over 100,000 stars in our galaxy. Its database is full of stars that show the tell-tale sign of an orbiting planet-- a periodic and repeatable dimming of the starlight. But one stellar dimming sequence doesn’t follow the expected protocol and it has astronomers getting creative to explain why.

 

Flux Lost

Tabby’s star, or more fondly, the WTF (Where’s the Flux?) star, is a yellow star slightly larger than our Sun located over 1200 light-years away in the constellation Cygnus the Swan. You can’t see it with the naked eye, but through a small 5-inch telescope it shows up just fine.

 

Kepler continuously observed the region of space where WTF lives from 2009 to 2013. Then in 2015, citizen scientists analyzing the data noticed something very peculiar about WTF’s brightness. In March of 2011, the star had dimmed by 22% of its original brightness, suggesting something big was passing in front of it. Then, 700 days later in 2013, the star dimmed significantly again, but this time it did so irregularly-- suggesting that not just one but many large objects were passing in front of the star. This is where the science gets interesting.

 

By JohnPassos (Own work) CC BY-SA 4.0, via Wikimedia Commons
Light curve for Tabby’s star.

 

When astronomers study the light from stars, we create graphs called light curves. A light curve describes how the brightness of a star changes over a period of time. We choose a star, take images of it periodically, and measure how bright it is. If the star’s brightness decreases, we will record a lower brightness value than in previous measurements.

 

Usually, when a star has planets orbiting it, the dimming is periodic-- tied to the orbit of the planet. So we measure a smooth dip in the brightness of the star at regular intervals as the planet passes in front. What’s so spectacular about WTF’s brightness is that there is a single, smooth dip in brightness followed 700 days later by irregular but large decreases that lasted for 100 days before the brightness returned to normal levels.
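As a rough rule of thumb, an opaque body crossing a star blocks a fraction of light equal to the ratio of the projected areas, so the dip depth is about (R_object/R_star)². A quick sketch of what that simple relation implies for WTF’s 22% dip (it ignores limb darkening and partially transparent material like dust):

```python
import math

def implied_radius_ratio(depth):
    """Transit depth ≈ (R_obj / R_star)**2 for an opaque occulter,
    so the implied size of the occulter is sqrt(depth) stellar radii."""
    return math.sqrt(depth)

# A Jupiter-sized planet crossing a Sun-like star (R_Jupiter ≈ 0.1 R_Sun)
# blocks only about 1% of the light:
jupiter_depth = 0.1 ** 2
print(round(jupiter_depth, 3))               # 0.01, i.e. a ~1% dip

# WTF's 22% dip would need an occulter almost half the star's radius:
print(round(implied_radius_ratio(0.22), 2))  # 0.47
```

That mismatch-- 1% for a giant planet versus 22% observed-- is why a single ordinary planet can’t explain the first event, let alone the irregular ones.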

 

After ruling out issues with the Kepler telescope and variability in WTF itself, the lead scientists considered more celestial explanations for the irregular dimming. Debris from a violent collision, like the one that formed our Moon, would probably create enough large particles to recreate the dimming-- but the likelihood of us catching such a one-off event is extremely small. A large conglomerate of comet fragments also seemed like a reasonable and likely cause, but we’ve never observed such a thing before, so we can only make educated guesses as to what its light curve would look like.

 

Other scientists have jumped in on the task of explaining these dips with suggestions ranging from weird internal variations with the WTF star itself to unfinished alien megastructures. But recently, a group of researchers has proposed an explanation that’s a little more familiar and easily testable.

 

Follow the Gravity Train

To understand their proposal, we need to discuss a little-known fact (at least, I didn’t know this) about our solar system’s largest planet, Jupiter. All massive bodies in our solar system exert a gravitational force on other massive bodies. If we think of space as a bed sheet held taut at its corners and place a bowling ball at the center, the ball would create a pit or well in the sheet due to the mass of the ball. If we then place a baseball somewhere else on the sheet, the sheet will also bend due to the mass of the baseball. The larger well in the sheet due to the bowling ball will overlap in some places with the well in the sheet due to the baseball. This is sort of how gravitational forces interact with each other.

 

But space is a bit more complicated. The interaction of the gravitational forces of two massive bodies ends up creating what are known as Lagrange points. In our sheet analogy, these would appear as five additional wells created at specific locations around the bowling ball-baseball system. In space, these points orbit the more massive body at the same speed as the smaller body. Any objects living at these points are stuck following the smaller body around the larger one, never catching up or falling behind.

 

In the case of the Sun-Jupiter system, there are three Lagrange points that lie along Jupiter’s orbit and are home to thousands of asteroids. The two large ”Trojan” swarms are located on either side of Jupiter in its orbit around the Sun and the smaller “Hilda” swarm is always located on the opposite side of the Sun from Jupiter.

 

There is evidence for Trojan-type regions in other exoplanet systems, and planet formation theory shows that these regions can persist long after planets form. This makes their detection more probable than one-off events like planetary collisions or never-before-observed ones like swarms of comet fragments.

 

Computer, Enhance

Researchers in Spain took a known idea and made it bigger to explain the weird dimming of the WTF star. Their proposal suggests the first, smoother dimming event is due to a large ringed planet-- almost five times larger than Jupiter. This large planet would also have larger Trojan swarms, which would explain the irregular dips in brightness 700 days later. Since the Jovian system has two Trojan regions, the astronomers expect another irregular dimming episode in February 2021, corresponding to the second Trojan region. Then, two years later in 2023, the giant ringed planet should pass in front of the star again, starting the roughly 12-year cycle over.

 

Their hypothesis even accounts for a smaller May 2017 dimming event which occurred at the same time their theoretical planet would have been passing behind the WTF star. If this system is similar to Jupiter, the dimming could be explained by a Hilda-like swarm of asteroids which would dim the star but not as significantly as the Trojan swarm.

 

You should still hold some reservations about this prediction though. The number of asteroids needed to produce such a large dimming is huge-- like the same mass as Jupiter huge. No one has a clue if this sort of configuration would even be stable. The team is working on a computer model for the system and plans on releasing those results in a forthcoming paper. But the key to a successful hypothesis is that it is easily testable and the Trojan hypothesis gives us something to look forward to in 2021. We only have to wait 4 years to see if these researchers are right or if we need to go back to the drawing board to figure out what’s going on with the WTF star.


Selfie photo of Curiosity rover and Mars terrain

Halos on Mars

By JoEllen McBride, PhD

Curiosity Discovery Suggests Early Mars Environment Suitable for Life Longer Than Previously Thought.

 

We have been searching desperately for evidence of life on Mars since the first Viking lander touched down in 1976. So far we’ve come up empty-handed but a recent finding from the Curiosity rover has refueled scientists’ hopes.

 

NASA’s Curiosity rover is currently puttering along the Martian surface in Gale Crater. Its mission is to determine whether Mars ever had an environment suitable for life. The clays and by-products of reactions between water and sulfuric acid (a.k.a. sulfates) that fill the crater are evidence that it once held a lake that dried up early in the planet’s history. Using its suite of instruments, Curiosity is digging, sifting and burning the soil for clues to whether the wet environment of a young Mars could ever give rise to life.

 

On Tuesday, scientists announced that they discovered evidence that groundwater existed in Gale Crater long after the lake dried up. Curiosity noticed lighter colored rock surrounding fractures in the crater which scientists recognized as a tell-tale sign of groundwater. As water flows underground on Earth, oxygen atoms from the water combine with other minerals found in the rock. The newly-formed molecules are then transported by the flowing water and absorbed by the surrounding rock. This process creates ‘halos’ within the rock that often have different coloration and composition than the original rock.

 

Curiosity used its laser instrument to analyze the composition of the lighter colored rock in Gale Crater and reported that it was full of silicates. This particular region of the crater contains rock that was not present at the same time as the lake and does not contain the minerals necessary to produce silicates. So the only way these silicates could be present is if they were transported there from older rock. Using what they know about groundwater processes on Earth, NASA scientists determined that groundwater must have reacted with silicon present in older rock creating the silicates. These new minerals then flowed to the younger bedrock and seeped in resulting in the halos Curiosity discovered. The time it would take these halos to form provide strong evidence that groundwater persisted in Gale Crater much longer than previously thought.

 

Credit: NASA/JPL-Caltech Image from Curiosity of the lighter colored halos surrounding fractures in Gale Crater.
Credit: NASA/JPL-Caltech Image from Curiosity of the lighter colored halos surrounding fractures in Gale Crater.

This news also comes on the heels of the first discovery of boron by Curiosity on Mars. Boron on Earth is present in dried-up, non-acidic water beds. Finding boron on Mars suggests that the groundwater present in Gale Crater was most likely at a temperature and acidity suitable for microbial life. The combination of the longevity of groundwater and its acceptable acidity greatly increases the window for microbial life to form on young Mars.

 

These two discoveries have not only extended the time-frame for the habitability of early Mars but lead one to wonder where else groundwater was present on the planet. We hopefully won’t have to wait too long to find out. Curiosity is still going strong and NASA has already begun work on a new set of exploratory Martian robots. The next rover mission to Mars is set to launch in 2020 and will be equipped with a drill that will remove core samples of Martian soil. The samples will be stored on the planet for retrieval at a later date. What (or who) will be sent to pick up the samples is still being determined.

 

Although we haven’t found evidence for life on Mars, the hope remains. It appears Mars had the potential for life at the same time in its formation as Earth. We just have to continue looking for organic signatures in the Martian soil or determine what kept life from getting its start on the Red Planet.

 


Fluorescently tagged cultured HeLa cells

HeLa, the VIP of cell lines

By  Gesa Junge, PhD

A month ago, The Immortal Life of Henrietta Lacks was released on HBO, an adaptation of Rebecca Skloot’s 2010 book of the same title. The book, and the movie, tell the story of Henrietta Lacks, the woman behind the first cell line ever generated, the famous HeLa cell line. From a biologist’s standpoint, this is a really unique thing, as we don’t usually know who is behind the cell lines we grow in the lab. Which, incidentally, is at the centre of the controversy around HeLa cells. HeLa was the first cell line ever made over 60 years ago and today a PubMed search for “HeLa” return 93274 search results.

Cell lines are an integral part to research in many fields, and these days there are probably thousands of cell lines. Usually, they are generated from patient samples which are immortalised and then can be grown in dishes, put under the microscope, frozen down, thawed and revived, have their DNA sequenced, their protein levels measured, be genetically modified, treated with drugs, and generally make biomedical research possible. As a general rule, work with cancer cell lines is an easy and cheap way to investigate biological concepts, test drugs and validate methods, mainly because cell lines are cheap compared to animal research, readily available, easy to grow, and there are few concerns around ethics and informed consent. This is because although they originate from patients, the cell lines are not considered living beings in the sense that they have feelings and lives and rights; they are for the most part considered research tools. This is an easy argument to make, as almost all cell lines are immortalised and therefore different from the original tissues patients donated, and most importantly they are anonymous, so that any data generated cannot be related back to the person.

But this is exactly what did not happen with HeLa cells. Henrietta Lack’s cells were taken without her knowledge nor consent after she was treated for cervical cancer at Johns Hopkins in 1951. At this point, nobody had managed to grow cells outside the human body, so when Henrietta Lack’s cells started to divide and grow, the researchers were excited, and yet nobody ever told her, or her family. Henrietta Lacks died of her cancer later that year, but her cells survived. For more on this, there is a great Radiolab episode that features interviews with the scientists, as well as Rebecca Skloot and Henrietta Lack’s youngest daughter Deborah Lacks Pullum.

In the 1970s, some researchers did reach out to the Lacks family, not because of ethical concerns or gratitude, but to request blood samples. This naturally led to confusion amongst family members around how Henrietta Lack’s cells could be alive, and be used in labs everywhere, even go to space, while Henrietta herself had been dead for twenty years. Nobody had told them, let alone explained the concept of cell lines to them.

The lack of consent and information are one side, but in addition to being an invaluable research tool, cell lines are also big business: The global market for cell lines development (which includes cell lines and the media they grow in, and other reagents) is worth around 3 billion dollars, and it’s growing fast. There are companies that specialise in making cell lines of certain genotypes that are sold for hundreds of dollars, and different cell types need different growth media and additives in order to grow. This adds a dimension of financial interest, and whether the family should share in the profit derived from research involving HeLa cells.

We have a lot to be grateful for to HeLa cells, and not just biomedical advances. The history of HeLa brought up a plethora of ethical issues around privacy, information, communication and consent that arguably were overdue for discussion. Innovation usually outruns ethics, but while nowadays informed consent is standard for all research involving humans, and patient data is anonymised (or at least pseudonomised and kept confidential), there were no such rules in 1951. There was also apparently no attempt to explain scientific concept and research to non-scientists.

And clearly we still have not fully grasped the issues at hand, as in 2013 researchers sequenced the HeLa cell genome - and published it. Again, without the family’s consent. The main argument in defence of publishing the HeLa genome was that the cell line was too different from the original cells to provide any information on Henrietta Lack’s living relatives. There may some truth in that; cell lines change a lot over time, but even after all these years there will still be information about Henrietta Lack’s and her family in there, and genetic information is still personal and should be kept private.

HeLa cells have gotten around to research labs around the world and even gone to space and on deep sea dives. And they are now even contaminating other cell lines (which could perhaps be interpreted as just karma). Sadly, the spotlight on Henrietta Lack’s life has sparked arguments amongst the family members around the use and distribution of profits and benefits from the book and movie, and the portrayal of Henrietta Lack’s in the story. Johns Hopkins say they have no rights to the cell line, and have not profited from them, and they have established symposiums, scholarships and awards in Henrietta Lack’s honour.

The NIH has established the HeLa Genome Data Access Working Group, which includes members of Henrietta Lack’s family. Any researcher wanting to use the HeLa cell genome in their research has to request the data from this committee, and explain their research plans, and any potential commercialisation. The data may only be used in biomedical research, not ancestry research, and no researcher is allowed to contact the Lacks family directly.


Cassini’s Sacrifice

 

By  JoEllen McBride, PhD

Our solar system is full of potential. From Earth to the frozen surface of Pluto, hydrocarbons and other complex organic molecules are surprisingly common. With every new space mission, we find the ingredients of life on more of our celestial neighbors.

 

The newest location to add to our list of places with potential for life comes from NASA’s Cassini spacecraft which began its study of Saturn in 2004. In the 13 years that Cassini has studied Saturn and its moons, it solved many mysteries and discovered some startling similarities to our own planet.

 

Saturn, at first glance, seems nothing like Earth. It is a gas giant, full of hydrogen and helium, with a possible Earth-sized core at the center. But Cassini revealed that there are phenomena occurring in the gas giant’s atmosphere that also occur on Earth. Cassini recorded video of lightning strikes on Saturn— the first taken on a planet other than our own. Since Saturn doesn’t have interference from mountains and other land features, jet streams can flow unimpeded forming a continuous hexagonal shape at the poles. But scientists are still unsure why that specific shape is created. Saturn also develops a planet-wide storm every 30 years that just happened to show up while Cassini was around in 2011-- 10 years early. From the data collected by Cassini, scientists were able to determine that the storms form in a similar way to thunderstorms on Earth. Instead of adjacent hot and cold fronts mixing on Saturn, layers of warm water vapor and cool hydrogen gasses mix. The storms take time to develop because water vapor is much heavier than hydrogen so it is normally positioned below the hydrogen fog. This gives the elevated hydrogen gas time to cool. Once it cools down enough, it becomes more dense which causes it to sink into the warmer water vapor. The two mix and voila!, a Saturnian thunderstorm is born. The storm also kicked up hydrocarbons from the lower atmosphere which surprised scientists.

 

Although Saturn probably can’t harbor life, two of Saturn’s moons, Titan and Enceladus, are ripe with the ingredients. The Cassini spacecraft made numerous orbits around Titan and even sent a probe (Huygens) down to the surface. Titan has land features similar to Earth, with lakes, mountains, ice caps, and deserts. The difference is methane and ethane are the chemical building blocks of the complex molecules found on the moon instead of carbon.

 

Enceladus was the biggest surprise to come out of the Cassini mission. This moon is essentially a smaller version of Jupiter’s moon Europa. Both are covered in a liquid ocean topped with a thick layer of ice that surrounds the moon. There is one big difference: Enceladus has hydrothermal vents deep within its oceans, just like on Earth, and these vents violently force liquid through cracks in the ice. The plumes are huge and powerful, extending hundreds of miles into space and traveling at hundreds of miles an hour. The Cassini spacecraft revealed that these plumes are chock full of hydrocarbons, which are the building blocks necessary for life. This tells scientists that there is the potential for life in the oceans of Enceladus and possibly Europa.

 

The other moons that Cassini visited revealed some startling information. Tethys has bright arcs of light which can only be seen at infrared wavelengths; scientists are puzzled as to what they are and what is causing them. The spongy-looking moon Hyperion builds up a static charge as it tumbles around Saturn. Mimas, aka the Death Star moon, was thought to be a dead world but shows evidence of a liquid ocean underneath its cratered surface. The moon is roughly the same size as Enceladus but has no visible jets or plumes, so the liquid is trapped beneath the surface. Why these two moons are so different, and whether Mimas’ ocean is full of hydrocarbons, is something scientists hope to study in the future.

 

The potential for life in the Saturnian system is the main reason Cassini’s mission will come to a destructive end. The spacecraft is running out of fuel, meaning that scientists on Earth will eventually lose the ability to control it. Our own planet is surrounded by defunct satellites whizzing around, just waiting to crash into other orbiting objects. The scientists in charge of the mission worry that if Cassini were left to orbit Saturn, it could eventually crash into Enceladus. This could introduce foreign microbes and chemicals, devastating any microbial life on the moon or ruining the chances of it ever forming. Instead, Cassini is performing its last dance with Saturn, orbiting the planet so closely that it passes between the rings and the gaseous atmosphere. After 22 orbits, the spacecraft will take a final dive into Saturn’s clouds on September 15, 2017, sacrificing its own metallic body for the sake of billions of potential life forms on the moons of Saturn.

 


Once Thought Elusive, A Black Hole Will Get A Close-up

 

By JoEllen McBride, PhD

Light can’t escape it, but Matthew McConaughey can use it to ‘solve gravity’. They’re the most massive things in our universe, but we can’t actually see them. Black holes emerged from Einstein’s theory of gravity in the early 1900s and have intrigued both scientists and the public for over a century. Until recently, we could only see their effects on visible matter that gets too close, but an Earth-sized telescope is about to change all that.

 

The term black hole sounds silly, but it’s pretty descriptive of this invisible phenomenon. Astronomers call things black or dark because we can’t actually see them with current technology. Black holes form when a star is so massive that its own gravity pushes in harder than the molecules and atoms that make it up can push out. The star collapses, shrinking to almost nothing. But matter can’t just disappear, so this incredibly small object still has mass, which can exert a gravitational influence on stars or gas that get too close. If our Sun became a black hole out of nowhere (don’t worry, this can’t happen), the Earth and other planets would not notice a difference gravitationally. We’d all continue orbiting as before; things would just get a lot colder. I guess that’s one way to wash away the rain.

 

So that’s how a single black hole forms, but you’ve probably heard references to ‘supermassive’ black holes before. These black holes have masses of many millions or billions of suns. So what died and made that massive a ‘hole’? Supermassive black holes are not the product of a single object but are most likely formed by the merging of many smaller black holes. We recently found evidence of this process from the ground-based gravitational wave detector LIGO, which can detect the waves produced when two smaller black holes merge. We also know supermassive black holes exist because we have seen their influence on luminous objects such as stars and heated gas. We see jets of gas being shot out of the centers of galaxies at close to light speed. Something incredibly massive at the center of our own galaxy causes nearby stars to orbit at incredible speeds. The simplest explanation for these observations is that galaxies have supermassive black holes at their centers.

 

But there is another way we could ‘see’ a black hole, one that was impossible before this year. As stated before, light cannot escape a black hole, but anything that becomes trapped in the gravitational well has to orbit for some time before it disappears. So there must be a point where we can still see material just before it’s lost forever, like an object that swirls around the edge of a whirlpool just before falling down the drain. This boundary is known as the event horizon, and it’s basically the closest we can get to seeing a black hole. Currently, the supermassive black hole at the center of our galaxy, named Sagittarius A*, isn’t taking in any material, but that doesn’t mean its surroundings are empty. Luminous material can orbit near the event horizon for a very long time; we just need to look at the right wavelength with a big enough camera.

 

The center of our galaxy is 8 kiloparsecs, or about 150,000,000,000,000,000 (1.5 × 10^17) miles, away. To put that in perspective, that’s roughly 10^14 times the distance between the U.S. coasts, 10^11 times the Earth-Moon distance, and about 6,000 times the distance to the next closest star system, Alpha Centauri. It’s really far away. The width of Sagittarius A*’s event horizon is estimated to lie somewhere between the widths of Mercury’s and Pluto’s orbits around our Sun. Even at its widest estimate, the event horizon of Sagittarius A* would span only one-millionth of a degree on the sky. For comparison, the full moon spans about half a degree. So we’re gonna need a bigger telescope-- an Earth-sized one.
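The arithmetic behind those comparisons is easy to sanity-check. Here is a quick sketch; the unit conversions are rounded, so treat the ratios as order-of-magnitude figures:

```python
# Back-of-the-envelope check of the distances above; unit values are rounded,
# so the ratios are order-of-magnitude estimates.
KPC_IN_MILES = 1.917e16            # one kiloparsec expressed in miles
distance_miles = 8 * KPC_IN_MILES  # distance to Sagittarius A*, ~1.5e17 miles

US_COAST_TO_COAST = 2.8e3          # rough NY-to-LA distance in miles (assumed)
EARTH_MOON = 2.39e5                # average Earth-Moon distance in miles

print(f"Distance to the galactic center: {distance_miles:.2e} miles")
print(f"{distance_miles / US_COAST_TO_COAST:.0e} times coast-to-coast")
print(f"{distance_miles / EARTH_MOON:.0e} times the Earth-Moon distance")
```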

 

Enter the Event Horizon Telescope (EHT). This network of telescopes operates at radio wavelengths and uses a technique that increases the effective size of a telescope without having to build a huge dish. The EHT combines telescopes in Arizona, Hawaii, Mexico, Chile, Spain and the South Pole to create an Earth-sized radio dish. A good analogy I’ve found is to picture yourself and five friends standing at various locations around the edge of a pond. You all know where you are located with respect to each other and the pond surface. Each of you also has a stopwatch and has placed a bobber in the water directly in front of you. If a pebble gets dropped somewhere in the middle of the pond, each of you waits until you see your bobber start moving, then records the time and the up-and-down motions the bobber makes as the peaks and troughs of the wave pass by. After you’ve recorded enough bobs, you can meet back up with your friends to determine where the pebble was dropped and its size, based on the ripples and when they reached each of your respective locations. The EHT works similarly, except the friends are telescopes pointed at Sagittarius A* and the water ripples are light waves.
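The pond analogy can even be turned into a toy calculation. In the sketch below, the observer positions, ripple speed, and drop point are all invented for illustration; a simple grid search recovers the drop location from the recorded arrival times, loosely mirroring how an interferometer localizes a source from differences in wave arrival:

```python
import math

# Toy pond: five observers at known spots record when a ripple of speed V,
# launched from an unknown drop point at an unknown time, reaches them.
V = 0.5                                   # ripple speed, m/s (made up)
observers = [(0, 0), (10, 0), (10, 10), (0, 10), (5, -3)]
true_source, t0 = (6.0, 4.0), 2.0         # hidden drop point and drop time

times = [t0 + math.dist(true_source, o) / V for o in observers]

# Grid-search for the point that best explains the arrivals. The unknown
# drop time t0 cancels out if we only compare spreads in (arrival - travel).
best, best_err = None, float("inf")
for x in range(0, 101):
    for y in range(0, 101):
        cand = (x / 10, y / 10)
        preds = [math.dist(cand, o) / V for o in observers]
        offs = [t - p for t, p in zip(times, preds)]
        err = max(offs) - min(offs)       # perfectly consistent point -> 0
        if err < best_err:
            best, best_err = cand, err

print("recovered drop point:", best)
```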

 

Over 10 days at the beginning of April, these telescopes were in constant contact, monitoring the weather at each site to coordinate their observations as best they could. Radio waves can usually penetrate everything, but the short wavelengths these telescopes observe are blocked by water vapor, so clouds and rain mean no observing. On April 15th, they finished their run, having successfully obtained 5 days’ worth of observations. Now each site has to mail hard drives with its data to a central location, where the signals can be properly aligned and combined. The South Pole Telescope can only send out packages after its winter season ends in October, but data is already coming in from the other sites.

 

If everything went as planned, the data should add up to the highest-resolution images ever taken of a black hole. This arrangement lets astronomers resolve objects just billionths of a degree across. The estimated size of Sagittarius A*’s event horizon is larger than this, so a faint ring surrounding darkness should be visible in the final images. Hopefully, Sagittarius A* was ready for its close-up, because humans are eager to see how their own depictions of black holes match up.
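The “Earth-sized dish” claim can be checked with the standard diffraction limit for a telescope, θ ≈ 1.22 λ / D. Assuming the EHT’s roughly 1.3 mm observing wavelength and a baseline equal to Earth’s diameter, the resolution indeed comes out at the billionths-of-a-degree scale:

```python
import math

# Diffraction-limited angular resolution: theta ≈ 1.22 * lambda / D (radians)
WAVELENGTH_M = 1.3e-3        # ~1.3 mm (230 GHz), the EHT observing band
EARTH_DIAMETER_M = 1.2742e7  # longest possible baseline: Earth's diameter

theta_rad = 1.22 * WAVELENGTH_M / EARTH_DIAMETER_M
theta_deg = math.degrees(theta_rad)
theta_uas = theta_deg * 3600 * 1e6   # microarcseconds

print(f"{theta_rad:.2e} rad = {theta_deg:.1e} deg = {theta_uas:.0f} microarcsec")
```

That is a few billionths of a degree, comfortably smaller than the estimated millionth-of-a-degree span of the event horizon, which is why a ring should be resolvable.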

 

 


Want to Watch History Burn? Check Out a Meteor Shower!

 

By JoEllen McBride, PhD

 

Fireballs streaking across the sky. Falling or shooting stars catching your eye. Meteors have fascinated humans as long as we’ve kept records. Depending on the time of year, on a clear night, you can see anywhere from 2 to 16 meteors enter our atmosphere and burn up right before your eyes. If you really want a performance, you should look up during one of the many meteor showers that happen throughout the year. These shows can provide anywhere from 10 to 100 meteors an hour! But what exactly is burning up to create these cosmic showers?

 

To answer this question we need to go back in time to the formation of our solar system. Our galaxy is full of dust particles and gas. If these tiny particles get close enough, they’ll be gravitationally attracted and forced to hang out together. The bigger a blob of gas and dust gets, the more gas and dust it can attract from its surroundings. As more and more particles occupy the same space, they collide with each other, causing the blob to heat up. At a high enough temperature, the ball of now-hot gas can fuse hydrogen and other elements, which sustains the burning orb. Our Sun formed just like this, about 4.6 billion years ago.

 

Any remaining gas and dust orbiting our newly created Sun coalesced into the eight planets and numerous dwarf planets and asteroids we know of today. Even though the major planets have done a pretty good job clearing out their orbits of large debris, many tiny particles and clumps of pristine dust remain and slowly orbit closer and closer to the Sun. If these 4.5 billion year old relics cross Earth’s path, our planet smashes into them and they burn up in our atmosphere. These account for many of the meteors that whiz through our atmosphere unexpectedly.

 

The predictable meteor showers, on the other hand, are a product of the gravitational influence of the larger gas giant planets. These behemoths forced many of the smaller bodies that dared to cross them out into the farthest reaches of our solar system. Instead of being kicked out of the solar system completely, a few are still gravitationally bound to the Sun in orbits that take them from beyond the Kuiper belt to the realm of the inner planets. As these periodic visitors approach our central star, their surfaces warm, melting the ice that holds together clumps of ancient dust. The closer the body gets to the Sun, the more ice melts, leaving behind a trail of particulates. We humans see the destruction of these icy balls as beautiful comets that grace our night skies periodically. But the trail of dust remains long after the comet heads back to the edge of our solar system.

 

The dusty remains of our cometary visitors slowly orbit the Sun along the comet’s path. There are a few well-known dust lanes that our planet plows into annually. Some of these showers produce exciting downpours with over a hundred meteors an hour and others barely produce a drip. April begins the meteor shower season and the major events for 2017 are listed below.

Shower            | Range          | Peak               | Peak Time (UT) | Moon Phase at Peak | Progenitor
Lyrid (N)         | Apr 16-25      | Apr 22             | 12:00          | Crescent           | Thatcher 1861 I
Eta Aquarid (S)   | Apr 19-May 28  | May 6              | 2:00           | Gibbous            | 1P/Halley
Delta Aquarid (S) | Jul 21-Aug 23  | Jul 30             | 6:00           | First Quarter      | 96P/Machholz
Perseid (N)       | Jul 17-Aug 24  | Aug 12/13          | 14:00/2:30     | Third Quarter      | 109P/Swift-Tuttle
Orionid           | Oct 2-Nov 7    | Oct 21             | 6:00           | First Quarter      | 1P/Halley
Taurids           | Sep 7-Nov 19   | Nov 10/11, Nov 4/5 | 12:00          | Crescent, Full     | 2P/Encke
Leonid            | Nov            | Nov 17             | 17:00          | New                | 55P/Tempel-Tuttle
Geminid           | Dec 4-16       | Dec 14             | 6:30           | Crescent           | 3200 Phaethon*
Quadrantid (N)    | Dec 26-Jan 10  | Jan 3              | 14:00          | Full               | 2003 EH1

S= best viewed from Southern Hemisphere locations

N= best viewed from Northern Hemisphere locations

*This is an asteroid with a weird orbit that takes it very close to the Sun!

 

Here is a list of things you can do to ensure the best meteor viewing experience.

[unordered_list style="star"]

  • Check the weather. If it’s going to be completely overcast your meteor shower is ruined.
  • Is the Moon up? Is it more than a crescent? If the answer to both of these is yes you will have a more difficult time seeing meteors. The big, bright ones will still shine through but those are rare.
  • When trying to catch a meteor shower, make sure the constellation the shower will radiate from is actually up that night. Hint: Meteor showers are named after the constellation they appear to radiate from.
  • You need the darkest skies possible. So get away from cities and towns. The International Dark Sky Association has a dark sky place finder you can use. Your best bet is to find an empty field far from man-made light pollution.
  • Make sure trees and buildings aren’t obscuring your view.
  • It takes about 30 minutes for your eyes to completely adjust to the darkness. If you have a flashlight, cover it with red photography gel to help keep your eyes adjusted.
  • Ditch the cell phone. Cell phones ruin your night vision. Every time you look at your screen, your eyes have to readapt to the dark when you look back up at the sky. There are apps you can download that dim your screen (iPhone, Android), but your eyes will still need time to adjust to the darkness if you glance at your phone. Also, looking away almost guarantees the biggest meteor will streak by at just that moment.
  • Dress comfortably. In the fall and winter, wear warm clothes and have hot chocolate and coffee on hand. In the spring and summer, some cool beverages will enhance your experience. Make sure you have blankets to lie on or comfortable chairs so you can keep your eyes on the skies.

[/unordered_list]

Follow these guidelines and you’ll have the best chance of watching 4.5 billion years of history burn up before your very eyes.


Paperfuges and Foldscopes: The Case for Low-Tech Science

 

By Gesa Junge, PhD

 

If you have ever been inside a lab you will know that centrifuges and microscopes come in various shapes and sizes and degrees of sophistication, but in some form they are used every day in most research labs around the world. Microscopes and centrifuges are pretty basic lab equipment, although some versions can be very high-end, for example high-speed centrifuges that can cool down to fridge temperatures, or electron microscopes that can magnify structures up to 2 million times. But even basic centrifuges and microscopes cost a few thousand dollars, and they require electricity and maintenance. These are not big issues for most universities and established research institutes, but for scientists working in the field, or in developing countries, money and electricity can be hard to come by.

With this in mind, Manu Prakash from Stanford University developed a centrifuge and a microscope made of paper. Yes, you read that right. The centrifuge is basically a paper disk on two strings that you pull to make the disk spin (kind of like a whirligig toy – remember those?) – check out this video from Wired Magazine. The whole thing costs 20 cents and fits into a jacket pocket, but it can spin samples at up to 125,000 rpm, which is fast. Fast enough, for example, to separate blood into blood cells and plasma, which is a key step in many diagnostic procedures.
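To get a feel for why spin rate matters, the standard lab formula for relative centrifugal force is RCF = 1.118 × 10⁻⁵ × r(cm) × rpm². A minimal sketch; the radius and rpm below are illustrative numbers, not measurements from the paperfuge paper:

```python
# Relative centrifugal force (in multiples of g) from spin rate and rotor
# radius, via the standard lab formula RCF = 1.118e-5 * r_cm * rpm^2.
def rcf(rpm, radius_cm):
    return 1.118e-5 * radius_cm * rpm ** 2

# Illustrative numbers (assumed): a sample 10 cm from the spin axis at a
# typical benchtop rate of 12,000 rpm already feels about 16,000 g.
print(f"{rcf(12_000, 10):,.0f} x g")
```

Because the force grows with the square of the spin rate, even a small paper disk spinning fast enough can match the separating power of a benchtop machine.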

And the foldscope is basically origami. It is printed on paper; you cut out the parts, fold them up, and insert a lens. The microscope does need electricity, but it can run on a battery for up to 50 hours, and the sample can be mounted on a piece of tape, as opposed to a glass slide. The lens determines the magnification, which can go up to 2000x. For reference, we can easily distinguish individual human cells at 10x; nuclei become clearly visible at 20x and bacteria at 40x. Using different color LEDs, the foldscope can even be converted into a fluorescence microscope, meaning it can be used to analyse different stains in tissues.

The paperfuge and the foldscope are implementations of an emerging concept called “frugal science”, which aims to bring scientific advances to inaccessible and under-developed regions. And while Manu Prakash’s ideas are very low-tech approaches, the idea of making science useful to everyone also benefits from innovation and advanced technology. For example, Dr Samuel Sia at Columbia University has developed a smartphone dongle technology called mChip which can diagnose HIV from a finger prick’s worth of blood. This device contains all the necessary reagents, which mix at the push of a button, and it plugs into the headphone jack of a phone as a power supply. Testing takes about 15 minutes and costs about $1 (the dongle is $100), which is a huge improvement over current methods. In a similar vein, a company called QuantuMDx in Newcastle in the UK is developing a handheld DNA testing tool, which could be used to identify strains of pathogens. And electronics company Philips has come up with the Minicare I-20, a handheld device that can measure troponin I levels from a single drop of blood taken from a pinprick. Troponin I is a marker of damaged heart muscle, and is often measured in emergency departments.

All of these innovations address a really important, and sometimes overlooked, point: science and technology, in all their greatness and cool fascination, will only benefit humanity if applied in the community in a way that leads to real-life changes. As with so many resources, scientific expertise and technology, and therefore the benefits of science, are distributed incredibly unevenly across the world’s population. For example, malaria and AIDS drugs are still not reaching many of the people who need them, be it for financial, infrastructural, political, or organisational reasons. Diagnostic tests often require well-equipped labs and trained technicians. And while they are limited in their applications for research, the paperfuge and the foldscope have the potential to revolutionize diagnostics as well as education around the world. Cutting-edge research may require more sophisticated centrifuges that spin faster, microscopes with better resolution, computers to store the images, and teams of scientists analyzing the data. But the frugal science approach is well-suited for the diagnosis of diseases, or to help a high school science class understand what cells are.

If you would like to find out more about the foldscope, check out Manu Prakash’s very cool TED talk. More information on Dr Sia’s mChip can be found here.

 


On Science and Values

 

By Rebecca Delker, PhD

 

In 1972 nuclear physicist Alvin Weinberg defined ‘trans-science’ as distinct from science (references here, here). Trans-science – a phenomenon that arises most frequently at the interface of science and society – includes questions that, as the name suggests, transcend science. They are questions, he says, “which can be asked of science and yet which cannot be answered by science.” While most of what concerned Weinberg were questions of scientific fact that could not (yet) be answered by available methodologies, he also understood the limits of science when addressing questions of “moral and aesthetic judgments.” It is this latter category – the differentiation of scientific fact and value – that deserves attention in the highly political climate in which we now live.

Consider this example. In 2015–2016, moves to expand the use of risk assessment algorithms in criminal sentencing received a lot of heat (and rightly so) from critics (references here, here). In an attempt to eliminate human bias from criminal justice decisions, many states rely on science in the form of risk assessment algorithms to guide decisions. Put simply, these algorithms build statistical models from population-level data covering a number of factors (e.g. gender, age, employment) to provide a probability of repeat offense for the individual in question. Until recently, the use of these algorithms has been restricted, but now states are considering expanding their use to sentencing. What this fundamentally means is that a criminal’s sentence depends not only on the past and present, but also on a statistically derived prediction of the future. While the intent may have been to reduce human bias, many argue that risk assessment algorithms achieve the opposite; and because the assessment is founded in data, it serves to generate a scientific rationalization of discrimination. This is because, while the data underpinning the statistical models do not include race, they include factors (e.g. education level, socioeconomic background, neighborhood) that are, themselves, revealing of centuries of institutionalized bias. To use Weinberg’s terminology, this falls into the first category of trans-science: the capabilities of the model fall short of capturing the complexity of race relations in this country.
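The proxy problem can be made concrete with a deliberately toy model. The weights and features below are invented for illustration and come from no real risk instrument; the point is only that a score which never sees race can still diverge for otherwise identical defendants once a race-correlated feature such as neighborhood enters the model:

```python
import math

# Toy logistic "risk score" with invented weights -- NOT a real instrument.
# Race is absent, but "neighborhood disadvantage" is a stand-in for features
# that correlate with race through historical bias.
def risk_score(years_of_education, neighborhood_disadvantage):
    z = 1.5 * neighborhood_disadvantage - 0.3 * years_of_education + 1.0
    return 1 / (1 + math.exp(-z))  # logistic: nominal probability of reoffense

# Two defendants with identical records and education; one happens to live
# in a heavily policed, disinvested neighborhood (disadvantage index 0.9).
print(f"{risk_score(12, 0.1):.2f}")   # low-disadvantage neighborhood
print(f"{risk_score(12, 0.9):.2f}")   # high-disadvantage neighborhood
```

The two scores differ substantially even though nothing about the individuals’ own conduct differs, which is the statistical shape of the critics’ objection.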

But this is not the whole story. Even if we could build a model without the above-mentioned failings, there are still more fundamental ethical questions that need addressing. Is it morally correct to sentence a person for crimes not yet committed? And, perhaps even more crucial, does committing a crime mean one forfeits the right to be viewed (and treated) as an individual – a value US society holds in high regard – and instead be reduced to a trend line derived from the actions of others? It is these questions that fall into the second category of trans-science: questions of morality that science has no place in answering. When we turn to science to resolve such questions, however, we blind ourselves to the underlying, more complex terrain of values that make up the debate at hand. By default, and perhaps inadvertently, we grant science the authority to declare our values for us.

Many would argue that this is not a problem. In fact, in a 2010 TED talk neuroscientist Sam Harris claimed that “the separation between science and human values is an illusion.” Values, he says, “are a certain kind of fact,” and thus fit into the same domain as, and are demonstrable by, science. Science and morality become one and the same because values are facts specifically “about the well-being of conscious creatures,” and our moral duty is to maximize this well-being.

The flaw in the argument (which many others have pointed out as well) is that rather than allowing science to empirically determine a value and moral code – as he argued it could – he presupposed it. That the well-being of conscious creatures should be valued, and that our moral code should maximize this, cannot actually be demonstrated by science. I will also add that science can provide no definition of ‘well-being,’ nor has it yet – if it ever can – been able to answer the questions of what consciousness is and which creatures have it. Unless human intuition steps in, this shortcoming of science can lead to dangerous and immoral acts.

What science can do, however, is help us stay true to our values. This, I imagine, is what Harris intended. Scientific studies play an indispensable role in informing us if and when we have fallen short of our values, and in generating the tools (technology/therapeutics) that help us achieve these goals. To say that science has no role in the process of ethical decision-making is as foolish as relying entirely on science: we need both facts and values.

While Harris’ claims of the equivalency of fact and value may be more extreme than most would overtly state, they are telling of a growing trend in our society to turn to science to serve as the final arbiter of even the most challenging ethical questions. This is because in addition to the tangible effects science has had on our lives, it has also shaped the way we think about truth: instead of belief, we require evidence-based proof. While this is a noble objective in the realm of science, it is a pathology in the realm of trans-science. This pathology stems from an increasing presence in our society of Scientism – the idea that science serves as the sole provider of knowledge.

But we live in the post-fact era. There is a war against science. Fact denial runs rampant through politics and media. There is not enough respect for facts and data. I agree with each of these points; but it is Scientism, ironically, that spawned this culture. Hear me out.

The ‘anti-science’ arguments – from anti-evolution to anti-vaccine to anti-GMO to climate change denial – never actually deny the authority of science. Rather, they attack scientific conclusions by creating a pseudoscience (think: creationism), pointing to flawed and/or biased scientific reporting (think: hacked climate data emails), clinging to scientific reports that support their arguments (think: the now-debunked link between vaccines and autism), or homing in on the concerns answerable by science as opposed to others (think: the safety of GMOs). These approaches are not justifiable; nor are they rigorously scientific. What they are, though, is a demonstration that even the people fighting against science recognize that the only way to do so is by appealing to its authority. As ironic as it may be, fundamental to the anti-science argument is the acceptance that the only way to ‘win’ a debate is to either provide scientific evidence or to poke holes in the scientific evidence at play. Their science may be bad, but they are working from a foundation of Scientism.

 

Scientific truth has a role in each of the above debates, and in some cases – vaccine safety, for example – it is the primary concern; but too often scientific fact is treated as the only argument worth consideration. An example from conservative writer Yuval Levin illustrates this point. While I do not agree with Levin’s values regarding abortion, the topic at hand, his points are worth considering. Levin recounts that during a hearing in the House of Representatives regarding the use of the abortion drug RU-486, a DC delegate argued that because the FDA decided the drug was safe for women, the debate should be over. As Levin summarized, “once science has spoken … there is no longer any room for ‘personal beliefs’ drawing on non-scientific sources like philosophy, history, religion, or morality to guide policy.”

When we break down the abortion debate – as well as most other political debates – we realize that it is composed of matters of both fact and value. The safety of the drug (or procedure) is of utmost importance and can, as discussed above, be determined by science; this is a fact. But, at the heart of the debate is a question of when human life begins – something that science can provide no clarity on. To use scientific fact as a façade for a value system that accepts abortion is as unfair as denying the scientific fact of human-caused climate change: both attempts focus on the science (by either using or attacking) in an effort to thwart a discussion that encompasses both the facts of the debate and the underlying terrain of values. We so crave absolute certainty that we reduce complex, nuanced issues to questions of scientific fact – a tendency that is ultimately damaging to both social progress and society’s respect for science.

By assuming that science is the sole provider of truth, our culture has so thoroughly blurred the line between science and trans-science that scientific fact and value are nearly interchangeable. Science is misused to assert a value system; and a value system is misused to selectively accept or deny scientific fact. To get ourselves out of this hole requires that we heed the advice of Weinberg: part of our duty as scientists is to “establish what the limits of scientific fact really are, where science ends and trans-science begins.” Greater respect for facts may paradoxically come from a greater respect for values – or at the very least, allowing space in the conversation for them.