The Phase That Makes The Cell Go Round

 

By Johannes Buheitel, PhD

 

There comes a moment in every cell's life when it's time to reproduce. For a mammalian cell, this moment usually comes at the ripe age of about 24 hours, at which point it undergoes the complex process of mitosis. Mitosis is one of the two main chromosomal events of the cell cycle. But in contrast to S phase (and to all the other phases of the cell cycle), it's the only phase that is initiated by a dramatic change in the cell's morphology, one that, granted, you can't see with your naked eye, but definitely under any half-decent microscope, without requiring any sort of tricks (like fluorescent proteins): mitotic cells become perfectly round. This transformation, however, as remarkable as it may seem, is merely a herald of the main event, which is about to unfold inside the cell: an elegant choreography of chromosomes, which crescendos into the perfect segregation of the cell's genetic content and the birth of two new daughter cells.

 

To better understand the challenges behind this choreography, let's start with some numbers: a human cell has 23 unique chromosomes (22 autosomes and 1 sex chromosome), but since we're diploid (each chromosome has a homolog), that brings us to a total of 46 chromosomes that are present at any given time in (nearly) every cell of our bodies. Before S phase, each chromosome consists of one continuous strand of DNA, which is called a chromatid. During S phase, a second "sister" chromatid is synthesized as a prerequisite for later chromosome segregation in M phase. A pre-mitotic cell therefore contains 92 chromatids. That's a lot! In fact, if you laid all the genetic material of a human cell, which normally fits into a 10 micrometer nucleus, end to end on a table, you would wind up with a nucleic acid string of about 2 meters (around 6 feet)! The challenge for mitosis is to disentangle this mess and ultimately divide it between the nascent daughter cells according to the following rules: 1) Each daughter gets exactly half of the chromatids. 2) Not just any chromatids! Each daughter cell requires one chromatid of each chromosome. No more, no less. And maybe the most important one: 3) Don't. Break. Anything. Sounds easy? Far from it! Especially since the stakes are high: if you fail, you die (or are at least pretty messed up)...
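For the curious, the bookkeeping above can be made explicit in a few lines of Python. The chromosome counts are the ones quoted in the text; the genome size (~6.4 billion base pairs per diploid cell) and the ~0.34 nm rise per base pair are standard figures brought in from outside this post:

```python
# Counting the DNA a human cell carries into mitosis.

unique_chromosomes = 23        # 22 autosomes + 1 sex chromosome
ploidy = 2                     # diploid: every chromosome has a homolog
sisters_after_s_phase = 2      # S phase adds a sister chromatid to each

chromosomes = unique_chromosomes * ploidy
pre_mitotic_chromatids = chromosomes * sisters_after_s_phase
print(chromosomes)             # 46
print(pre_mitotic_chromatids)  # 92

# Laid end to end: ~6.4e9 base pairs (diploid) at ~0.34 nm per base pair.
bp_per_diploid_genome = 6.4e9
nm_per_bp = 0.34
meters = bp_per_diploid_genome * nm_per_bp * 1e-9
print(round(meters, 1))        # ~2 meters of DNA from a 10 micrometer nucleus
```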

 

Anatomy of a mitotic chromosome

To escape this dreadful fate, mitosis has evolved into a highly regulated process, which breaks down the complexity of the task at hand into more manageable chunks that are then dealt with in a precise spatiotemporal manner. One important feature of chromosomes is that their two copies – or sister chromatids – are physically held together from the time of their generation in S phase until their segregation into the daughter cells in M phase. This is achieved by a ring complex called cohesin, which topologically embraces the two sisters in its lumen (we'll look at this interesting complex in a separate blog post). This helps the cell always know which two copies of a chromosome belong together, essentially cutting the complexity of the whole system in half, and all that before the cell even enters mitosis.

Actual mitosis is divided into five phases with distinct responsibilities: prophase, prometaphase, metaphase, anaphase and telophase (cytokinesis, the process of actually dividing the two daughter cells, is technically not a phase of mitosis, but still a part of M phase). In prophase, the chromosomes begin to condense, which means that each DNA double helix gets neatly wrapped up into a superstructure. Think of it like taking one of those old coiled telephone receiver cables (that's your helix) and wrapping it around your arm. Ultimately, chromosome condensation makes the chromatids more easily manageable by turning them from really long, seemingly entangled threads into a shorter (but thicker) package. At this point each chromatid is still connected to its sister by virtue of the cohesin complex (see above) at one specific point, which is called the centromere. It is this condensation of cohesed sister chromatids that is actually responsible for the transformation of chromosomes into their iconic mitotic butterfly shape that we all know and love. While our butterflies are forming, the two microtubule-organizing centers of the cell, the centrosomes, split up and wander to the cell poles, beginning to nucleate microtubules. In prometaphase, the nuclear envelope surrounding the cell's genetic content breaks down, chromosome condensation is complete, and the centrosomes have reached their destinations, still throwing out microtubules like it's nobody's business. During this whole time, the microtubules' job is to probe the cytoplasm for chromosomes by dynamically extending and collapsing, trying to find something to hold on to amidst the darkness of the cytoplasm. This something is a protein structure called the kinetochore, which sits on top of each sister chromatid's centromere region. Once a microtubule has found a kinetochore, it binds to it and stabilizes.
Not all microtubules will bind to kinetochores, though; some will interact with the cell cortex or with each other to gradually form the famous mitotic spindle, the scaffold tasked with directing the remainder of the chromosomal ballet. Chromosomes that are attached to the spindle (via their kinetochores) gradually move (driven by motor proteins like kinesins) towards the middle region of the mother cell and align on an axis perpendicular to the spindle axis, midway between the two spindle poles. This axis is called the metaphase plate and represents the visual hallmark of the eponymous phase. The transition from metaphase to anaphase is the pivotal moment of mitosis: the moment when sister chromatids become separated (by proteolytic destruction of cohesin) and subsequently move along kinetochore-associated microtubules, with the help of motor proteins, towards the cell poles. Being such a critical step, the metaphase-to-anaphase transition is tightly safeguarded by a checkpoint, the spindle assembly checkpoint (SAC), which ensures that every single chromatid is stably attached to the correct side of the spindle (we'll go into more detail in another blog post). In the following telophase, the newly separated chromosomes begin to decondense, the nuclear envelope reforms and the cell membrane begins to constrict in anticipation of cytokinesis, when the two daughter cells become physically separated.

 

Overview of the five phases of mitosis.

To recap, correctly separating the 92 chromatids of a human cell into two daughter cells is a highly difficult endeavor, which the cell cleverly deals with by (1) keeping sister chromatids bound to each other, (2) wrapping them into smaller packages by condensation, (3) attaching each of these packages to a scaffold, a.k.a. the mitotic spindle, (4) aligning the chromosomes along the division axis, so that the sister chromatids face opposite cell poles, and finally (5) moving the now separated sister chromatids along this rigid scaffold into the newly forming daughter cells. It's a beautiful but at the same time dangerous choreography. While there are many mechanisms in place that protect the fidelity of mitosis, failure can have dire consequences, of which cell death isn't the worst: segregation defects can cause chromosomal instability, which is typical of tissues transforming into cancer. In future posts we will dive deeper into the intricacies of the chromosomal ballet that is the centerpiece of the cell cycle, as well as the supporting acts that ensure the integrity of our life's code.

 


How to Live Long and Prosper - a Vulcan's Dream

 

By Jesica Levingston Mac leod, PhD

 

A new Harvard study found that we are living longer, and better too. In fact, the life expectancy of a 65-year-old in the USA has grown substantially in the last 20 years: life expectancy is now 81.2 years for females and 76.4 years for males. The three pillars of this improvement are less smoking, healthier diets and medical advances. Going straight to the latest deep-science developments, two startups (BioViva and Elysium Health) were in the news recently for their cutting-edge "anti-aging" approaches. The first group to research telomerase gene therapy was Maria Blasco's group. A study by Bernardes de Jesus et al. demonstrated how telomerase gene therapy in adult and old mice could delay aging and increase longevity, without the side effect of increasing the propensity to develop cancer.

In the study, the scientists showed how the treatment of 1- and 2-year-old mice with an adeno-associated virus expressing mouse telomerase reverse transcriptase (TERT) had beneficial effects on health and fitness, with an increase in median lifespan of 24% and 13%, respectively. Other benefits included better insulin sensitivity, reduced osteoporosis, improved neuromuscular coordination and improvements in several molecular biomarkers of aging. In cancer cells, the expression of telomerase is enhanced, giving this protein a bad reputation for "tumorigenic activity". Elizabeth Parrish, the CEO of BioViva, went all the way to Colombia to receive two gene therapies that her company had developed: one to lengthen the telomeres and the other to increase muscle mass. The results of the treatment were very positive: the telomeres in her leukocytes grew from 6.71 kb to 7.33 kb in seven months. As a side note, short leukocyte telomere length may be associated with several psychiatric disorders (including major depressive disorder) and with poor response to psychiatric medications in bipolar disorder and schizophrenia.

In a nutshell, human telomeres are composed of double-stranded repeat arrays of "TTAGGG" terminating in a single-stranded G-rich overhang. The fidelity of that sequence is maintained by the enzyme telomerase, which uses an intrinsic RNA molecule containing the CAAUCCCAAUC template region, together with its reverse transcriptase component (TERT), to synthesize telomeric DNA de novo onto the chromosome terminus. The telomeres were named after the Greek words télos (end, extremity) and méros (part). Take-home message: telomerase adds DNA to the ends of telomeres, and by lengthening telomeres it extends cellular lifespan and/or induces immortalization. Telomerase is not active in normal somatic cells; it is active only in germ-line, stem and other highly proliferative cells.
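The relation between that RNA template and the TTAGGG repeat is easy to check yourself: complementing the template base-for-base (RNA into DNA) spells out telomeric sequence. A minimal sketch in Python (the pairing table is standard Watson-Crick; strand polarity is deliberately glossed over here):

```python
# Telomerase reverse-transcribes its internal RNA template into DNA.
# Complementing each template base with its DNA partner reproduces
# the telomeric TTAGGG repeat (antiparallel polarity glossed over).

RNA_TO_DNA = {"A": "T", "U": "A", "G": "C", "C": "G"}

template = "CAAUCCCAAUC"  # template region quoted above
synthesized = "".join(RNA_TO_DNA[base] for base in template)

print(synthesized)              # GTTAGGGTTAG
print("TTAGGG" in synthesized)  # True: one full telomeric repeat
```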

 

Last year, Dr. Fagan and collaborators published in PLoS One that transcendental meditation and lifestyle changes stimulate two genes that produce the components of telomerase (hTERT and hTR). Even cheerier news was reported in Nature for Thanksgiving: in the edible dormouse (Glis glis, a super cute, small, long-tailed rodent), telomere length significantly increases from an age of 6 to an age of 9 years. As the authors state in the paper, "the findings clearly reject the notion that there is a universal and inevitable progressive shortening of telomeres that limits the number of remaining cell cycles and predicts longevity". This species skips reproduction in years with low food availability, and this "sit tight" strategy in the timing of reproduction might push "older" dormice to reproduce, which could facilitate telomere attrition; this strategy may have led to the evolution of increased somatic maintenance and telomere elongation with increasing age.

The other company, Elysium, co-founded by MIT professor Lenny Guarente, focuses on mitochondria and NAD (nicotinamide adenine dinucleotide). Mitochondria are our energy generators, and they get crumbly as we age. Dr. Guarente demonstrated in mice how it may be possible to reverse mitochondrial decay with dietary supplements that increase cellular levels of NAD, like nicotinamide riboside (NR, a precursor to NAD that is found in trace amounts in milk), resveratrol (a red wine ingredient) or pterostilbene (present in berries and grapes). Elysium has just released the results of a placebo-controlled, randomized, double-blinded clinical trial, in which they evaluated the safety and efficacy of Basis (the dietary supplement containing nicotinamide riboside and pterostilbene) in 120 healthy participants aged 60-80 over an eight-week period. Participants received either the recommended dose (250 mg NR and 50 mg pterostilbene) or double the dose. In both cases, intake of Basis safely and sustainably increased NAD+ levels in the blood, by 40% and 90%, respectively.

 

A former Guarente postdoc, Dr. Sinclair, has just published in Science the discovery of an NAD-binding region in a protein that regulates NAD's interactions with other proteins related to aging. Sinclair's lab reported that the binding of NAD+ to DBC1 (Deleted in Breast Cancer 1 protein) prevents it from inhibiting another protein, PARP1, an important DNA repair protein. Furthermore, they showed that as mice aged, the concentration of NAD+ decreased, and more DBC1 was available to bind to PARP1, culminating in the accumulation of DNA damage. On a brighter note, this process was reversed by restoring higher levels of NAD+. The good news is that NAD+ modulation might protect against cancer, radiation and aging.

 

Although all these advances are great, they won't make you live longer within the next 10 years, so what can you do now to live longer/healthier? Science comes to answer this question once again! Harvard studies have shown that living "meaningful lives" by helping others, having aims and motivations (and being conscious of the fact that we are making our own decisions), being grateful, enjoying the present and maintaining significant relationships with other humans are key aspects of a happy life. Obviously, exercising, having natural environments around us and healthy eating are also crucial points of a healthy life.

It might be an oversimplification, but as much as 70% of your risk of disease is related to diet: soda and processed food are associated with shorter telomeres. Good news: you can slow down aging with a healthier lifestyle. "Switch to a whole-food, plant-based diet, which has been repeatedly shown not just to help prevent the disease, but arrest and even reverse it," claims Dr. Greger, author of the Daily Dozen, a checklist of the foods we should try to consume every day. The superfood list includes: cruciferous vegetables (such as broccoli, Brussels sprouts, cabbage, cauliflower, kale, spring greens, radishes, turnip tops, watercress), greens (including spring greens, kale, young salad greens, sorrel, spinach, Swiss chard), other vegetables (asparagus, beetroot, peppers, carrots, corn, courgettes, garlic, mushrooms, okra, onions, pumpkin, sugar snap peas, squash, sweet potatoes, tomatoes), beans (black beans, cannellini beans, black-eyed peas, butter beans, soyabeans, baked beans, chickpeas, edamame, peas, kidney beans, lentils, miso, pinto beans, split peas, tofu, hummus), berries (including grapes, raisins, blackberries, cherries, raspberries and strawberries), other fruit (such as apples, apricots, avocados, bananas, cantaloupe melon, clementines, dates, figs, grapefruit, honeydew melon, kiwi, lemons, limes, lychees, mangos, nectarines, oranges, papaya, passion fruit, peaches, pears, pineapple, plums, pomegranates, prunes, tangerines, watermelon), flax seeds, nuts, spices (like turmeric), whole grains (buckwheat, rice, quinoa, cereal, pasta, bread) and the almighty: water.

As you might expect, a lot of research is still needed before we get a magic pill that boosts life expectancy, but you can start investing in your future now by keeping a positive attitude, eating a healthy diet, exercising and doing all the other things you already know you should be doing to feel better, without forgetting that life is too short, so eat dessert first.

 


Can we reprogram adult cells into eggs?

 

By Sophie Balmer, PhD

 

Oogenesis is the process by which females create eggs ready for fertilization. Reproducing its key steps in culture constitutes a major advance in developmental biology. Last week, a group of scientists in Japan amazingly succeeded, publishing their results in the journal Nature: they replicated the entire cycle of oogenesis in vitro, starting from adult skin cells. Upon fertilization of these in vitro eggs and transfer into adult females, they even obtained pups that grew normally to adulthood, providing a new platform for the study of developmental biology.

 

Gamete precursor cells first appear early during embryonic development and are called primordial germ cells. These precursors then migrate to the gonads, where they will remodel their genome via the two divisions of meiosis to produce either mature oocytes or sperm, depending on the sex of the embryo. In oocyte maturation, these two divisions occur at different times: the first one before or shortly after birth, and the second one at puberty. The second meiotic division is incomplete, and the oocytes remain blocked in metaphase until fertilization by a male gamete. This final event initiates the process of embryonic development, thereby closing the cycle of life.

 

Up until last week, only parts of this life cycle were reproducible in culture. For years, scientists have known how to collect and culture embryos, fertilize them and transfer them to adult females to initiate gestation. This process, called in vitro fertilization (IVF), has successfully been applied to humans and has revolutionized the lives of millions of individuals suffering from specific infertility issues, allowing them to have babies. However, only a subset of infertility problems can be solved by IVF.

Additionally, in 2012, the same Japanese group recreated another part of female gamete development: Dr. Hayashi and colleagues generated mouse primordial germ cells in vitro that, once transplanted into female embryos, recapitulated oogenesis. Both embryonic stem (ES) cells and induced pluripotent stem (iPS) cells were used for this procedure. ES cells can be derived from embryos before their implantation in the uterus, while iPS cells are derived by reprogramming adult cells. Finally, a couple of months ago, another group reported being able to transform primordial germ cells collected from mouse embryos into mature oocytes.

 

However, replicating the full cycle of oogenesis from pluripotent cell lines in a single procedure constitutes an unprecedented achievement. To achieve this, the authors proceeded in several steps: first, they produced primordial germ cells in vitro, either from skin cells (following their de-differentiation into iPS cells) or directly from ES cells. Second, they produced primary oocytes in a specific in vitro environment called "reconstituted ovaries". Third, they induced maturation of the oocytes up until their arrest in meiosis II. This process took approximately the same time as it would in the female mouse, and it is impressive to see how indistinguishable the in vivo and in vitro oocytes are. Of course, this culture system also produced a lot of non-viable eggs, and only a few make it through the whole process. For example, during the first step of directed differentiation, over half of the oocytes showed chromosome mispairing during meiosis I, which is about 10 times more than in vivo. Additionally, only 30% completed meiosis I, as shown by the exclusion of the first polar body. However, analysis of other parameters, such as the methylation pattern of several genes, showed that maternal imprinting was almost complete and that most of the mature oocytes had a normal number of chromosomes. Transcription profiling also showed very high similarity between in vitro and in vivo oocytes.

The in vitro oocytes were then used for IVF and transplanted into mice. Amazingly, some of them developed into pups that were viable, grew up to be fertile and had a normal life expectancy without apparent abnormalities. However, the efficiency of the technique is very low, as only 3.5% of transplanted embryos were born (compared to over 60% for routine IVF procedures). Embryos that did not make it to the end of the pregnancy showed delayed development at various stages, highlighting that the conditions could probably be improved for the oocytes to yield more viable embryos.

Looking at the entire process, the rate of success in obtaining eggs ready for transplant is around 7-14%, depending on the starting cell line population. Considering how much time these cells spend in culture, this rate seems reasonably good. However, as mentioned above, only a few develop to birth. Nonetheless, this work constitutes a major advance in the field of developmental biology and will allow researchers to look in greater detail at the entire process of oogenesis and fertilization without worrying about the number of animals needed. We can also expect that, as with every protocol, it will be fine-tuned in the near future. It is already very impressive that the protocol led to viable pups from 6 different cell line populations.

 

Besides its potential for increasing knowledge of the oogenesis process, the impact of such research might reach beyond the scope of developmental biology. Not surprisingly, these results came with their share of concerns that this protocol would soon be used in humans. How amazing would it be for women who cannot use IVF to have babies starting from their own skin cells? Years ago, when IVF was introduced to the world, most people thought that "test-tube" babies were a bad idea. Today, it is used as a routine treatment for infertility problems. However, there is a humongous difference between extracting male and female gametes and engineering them. I do not believe that this protocol will be used in humans any time soon, because it requires too many manipulations that we still have no idea how to control. Nonetheless, in theory, the possibility could be attractive. Also, for the most skeptical readers, one of the major reasons why this protocol is not adaptable to humans right now is that we cannot generate human "reconstituted ovaries". This step is key for mouse oocytes to grow in vitro and requires collecting the gonadal somatic cells from embryos, which is impossible in humans. So, until another research group manages to produce somatic gonadal cells from iPS cells, no need to start freaking out ;-)

 

 


Dr Frankenstein being crept up on by his monster

Dr Frankenstein’s Modern Guide to Body Building

By Sally Burn, PhD

Square head, green skin, bolt through the neck, and plagued by misconceptions: "Frankenstein" is a popular choice at the Halloween costume store. This erroneously named costume is based on the monster from Mary Shelley's Frankenstein, the titular character of which is Dr Frankenstein, the scientist who creates the in fact unnamed creature. Modern depictions of the monster tend to bear little resemblance to that in the book, where we are told that "His yellow skin scarcely covered the work of muscles and arteries beneath; his hair was of a lustrous black, and flowing; his teeth of a pearly whiteness…"

A final misconception – born from the movies - is that the monster was created from stolen body parts, animated to life with electricity. Shelley did not go into details of how exactly the monster was made, instead leaving us with just an indication that Frankenstein had the lifestyle of a postdoc (“I had worked hard for nearly two years, for the sole purpose of infusing life into an inanimate body. For this I had deprived myself of rest and health”) and a Nature Letter-size methods section: “With an anxiety that almost amounted to agony, I collected the instruments of life around me, that I might infuse a spark of being into the lifeless thing that lay at my feet.”

Clearly this is an insufficient Materials & Methods section to permit study replication. So, in the spirit of spooky science, Scizzle presents Dr Frankenstein’s Modern Guide to Body Building – how to build a brain, kidney, gut, and more in the comfort of your own lab:

 

* Building Brains:

The first thing your creation will need is a brain. While we don’t yet have the technology to grow a whole brain in the lab, we can make cerebral organoids. Scizzle first reported on these brain-like spheres two years ago, when they were published in Nature. Cerebral organoids are generated from human pluripotent stem cells (hPSC) or induced pluripotent stem (iPS) cells – so no need to go raiding the graveyard for spare body parts anymore, wannabe Frankensteins! The cells are aggregated into embryoid bodies, which are then differentiated into neuroectoderm and cultured in a spinning bioreactor, resulting in 3D cerebral organoids. After around a month in culture the organoids contain distinct brain regions and a cerebral cortex – the seat of consciousness, memory, and language.

 

* Growing Guts:

Upon waking, your monster will need a good meal so you’re going to need to build a digestive system. The gut is composed of a number of functionally, physiologically, and histologically distinct organs, including the stomach and small and large intestines. The liver and pancreas also play vital roles in digestion. Progress has been made on growing all of these tissues in the lab. Stomach-like “gastric organoids” can be generated by exposing hPSCs to a specific cocktail of proteins and growth factors. These mini stomachs even act in an organotypic manner, responding to H. pylori infection as a human stomach would. Moving down the digestive tract, our next requirement is an intestine. While we can’t yet grow an entire intestine in the lab (that would be one big culture dish), our old friend the organoid is here to help again – this time in the form of intestinal organoids, grown from crypt-derived stem cells (crypt as in small intestinal, not as in the spooky Halloween place). And yes, you will be pleased to know it is even possible to engineer your monster an anal sphincter. Even monsters need to poop.

 

* Crafting Kidneys:

To keep your creation in peak operational form, it will need to be able to process and remove toxins from its body. For drug processing look no further than iPS-derived liver buds (iPS-LBs), which can successfully metabolize drugs. Excretion of waste from the monster’s body will require kidneys. The generation of kidney organoids is a hot topic – earlier this month Melissa Little’s lab in Australia reported their iPS-derived kidney organoid system. While other groups have made similar organoids, this newest report is exciting as their kidney organoids contain numerous cell types and tissue structures found in human kidneys, arranged in an organotypic fashion. Furthermore, the engineered organs exhibit kidney-like function and nephrotoxin sensitivity.

 

So could a modern Dr Frankenstein make a monster using these techniques? No, obviously not – and thank goodness for that – but by exchanging grave-digging tools for iPS cells they could certainly have a good attempt at making replacement body parts for real humans. Such an endeavor may be possible in the near future, but already right now these lab-grown organoids offer a number of other benefits.

One use is pharmacological screening – does a new drug adversely affect the human kidney, for example? The human origin of organoids also allows researchers to gain insights into the development and disease of their related organs, in a way not possible with animal models. Lastly, by using patient-specific iPS or stem cells to generate organoids, scientists can better understand the etiology and treatment prospects of an individual's disease. Earlier this year, Hans Clevers' group reported the generation of gut organoids from colorectal cancer patients, which recapitulate characteristics of their tumor of origin and are amenable to high-throughput drug screening.

For a more in-depth look at the growing field of organoid science, see Cassandra Willyard’s article in Nature and stay tuned to Scizzle for future tales of "Frankenstein" science!


Lethal Weapon: How Many Lethal Mutations Do We Carry?

 

By John McLaughlin

Many human genetic disorders, such as cystic fibrosis and sickle cell anemia, are caused by recessive mutations with a predictable pattern of inheritance. Tracking hereditary disorders such as these is an important part of genetic counseling, for example when planning a family. In fact, there exists an online database dedicated to medical genetics, Mendelian Inheritance in Man, which contains information on most human genetic disorders and their associated phenotypes.
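The "predictable pattern of inheritance" mentioned above can be made concrete with a quick Punnett-square enumeration. A minimal Python sketch (the cross and allele labels are generic textbook bookkeeping, not data from the paper discussed below):

```python
# Enumerate the offspring genotypes of two carriers (Aa x Aa) of a
# recessive mutation "a". Each parent transmits either allele with
# probability 1/2, so the four combinations are equally likely.
from itertools import product
from collections import Counter

parent1 = ["A", "a"]
parent2 = ["A", "a"]

# Sort each pair so "Aa" and "aA" count as the same genotype.
offspring = Counter("".join(sorted(pair)) for pair in product(parent1, parent2))
print(offspring)  # Counter({'Aa': 2, 'AA': 1, 'aa': 1})

# One in four offspring is homozygous recessive (affected);
# two in four are unaffected carriers, like their parents.
print(offspring["aa"] / sum(offspring.values()))  # 0.25
```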

 

The authors of a new paper in Genetics set out to estimate the number of recessive lethal mutations carried in the average human's genome. The researchers' rationale for specifically focusing on recessive mutations is their higher potential impact on human health: because deleterious mutations that are recessive are less likely to be purged by selection, they can be maintained in heterozygotes with little impact on fitness, and therefore occur at greater frequency. For the purposes of the analysis, recessive lethal disorders (i.e., those caused by a recessive lethal mutation) were defined by two main criteria: first, when homozygous for its causative mutation, the disease leads to the death or effective sterility of its carrier before reproductive age, and second, mutant heterozygotes do not display any disease symptoms.

 

For this study, the researchers had access to an excellent sample population, a religious community known as the Hutterian Brethren. This South Dakotan community of ~1600 individuals is one of three closely related groups that migrated from Europe to North America in the 19th century. Importantly, the community has maintained a detailed genealogical record tracing back to the original 64 founders, which also contains information on individuals affected by genetic disorders since 1950. An additional bonus is that the Hutterites practice a communal lifestyle in which there is no private property; this helps to reduce the impact of confounding socioeconomic factors on the analysis.

 

Four recessive lethal genetic disorders have been identified in the Hutterite pedigree since their more detailed records began: cystic fibrosis, nonsyndromic mental retardation, restrictive dermopathy, and myopathy. To estimate the number of recessive lethal mutations carried by the original founders, the team used both the Hutterite pedigree and a type of computational simulation known as “gene dropping”. In a typical gene dropping simulation, alleles are assigned to a founder population, the Mendelian segregation and inheritance of these alleles across generations is simulated, and the output is compared with the known pedigree. One simplifying assumption made during the analysis is that no de novo lethal mutations had arisen in the population since its founding; therefore, any disorders arising in the pedigree are attributed to mutations carried by the original founder population.
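The gene dropping idea is simple enough to sketch in code. The toy three-generation, first-cousin pedigree below is invented for illustration (the real analysis used the full 64-founder Hutterite genealogy); only the Mendelian bookkeeping is the point:

```python
import random

def transmit(rng, parent):
    # Mendelian segregation: each of the parent's two alleles is
    # passed on with probability 1/2.
    return rng.choice(parent)

def one_pedigree(rng):
    # Toy pedigree: one founder is a heterozygous carrier of a
    # recessive lethal "L". Two of the founder couple's children each
    # marry an unrelated non-carrier, and the resulting first cousins
    # then mate. Returns True if their child is homozygous lethal.
    carrier, noncarrier = ("+", "L"), ("+", "+")
    sib1 = (transmit(rng, carrier), transmit(rng, noncarrier))
    sib2 = (transmit(rng, carrier), transmit(rng, noncarrier))
    spouse = ("+", "+")
    cousin1 = (transmit(rng, sib1), transmit(rng, spouse))
    cousin2 = (transmit(rng, sib2), transmit(rng, spouse))
    child = (transmit(rng, cousin1), transmit(rng, cousin2))
    return child == ("L", "L")

def gene_drop(n_trials, seed=0):
    # Drop alleles down the pedigree many times and tally outcomes.
    rng = random.Random(seed)
    hits = sum(one_pedigree(rng) for _ in range(n_trials))
    return hits / n_trials

# Each cousin inherits "L" with p = 1/4 and transmits it with p = 1/2,
# so the expected frequency is (1/8)**2 = 1/64, about 1.6%.
print(gene_drop(200_000))
```

In the real study, the simulated outcomes were compared against the observed disorders in the pedigree to infer how many lethals the founders must have carried.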

 

After combining the results from many thousands of such simulations with the Hutterite pedigree, the authors make a final estimate of roughly one or two recessive lethal mutations carried per human genome (the exact figure is ~0.58). What are the implications of this estimate for human health? Although mating between more closely related individuals has long been known to increase the probability of recessive mutations homozygosing in offspring, a more precise risk factor was generated from this study's mutation estimate. In the discussion section it is noted that mating between first cousins, although fairly rare today in the United States, is expected to increase the chance of a recessive lethal disorder in offspring by ~1.8%.

 

Perhaps the most interesting finding from this paper was the consistency of the predicted lethal mutation load across the genomes of different animal species. The authors compared their estimates for human recessive lethal mutation number to those from previous studies examining this same question in fruit fly and zebrafish genomes, and observed a similar value of one or two mutations per genome. Of course, the many simplifying assumptions made during their analyses should be kept in mind; the estimates are considered tentative and will most likely be followed up with similar future work in other human populations. It will certainly be interesting to see how large-scale studies such as this one will impact human medical genetics in the future.

 


Darwin’s Finches Revisited

 

By John McLaughlin

In 1859, Charles Darwin published the now famous “On the Origin of Species,” containing the first presentation of his theory of the common origin of all life forms and their diversification by means of natural selection. One aim of this theory was to explain the diversity of traits found in nature as a result of the gradual adaptation of populations to their environments. This point is elegantly summarized in the third chapter:

 

[quote style="boxed"]Owing to this struggle for life, any variation, however slight and from whatever cause proceeding, if it be in any degree profitable to an individual of any species, in its infinitely complex relations to other organic beings and to external nature, will tend to the preservation of that individual, and will generally be inherited by its offspring.[/quote]

 

A large contribution to this theory resulted from his five-year voyage aboard the HMS Beagle, during which he traveled in South and Central America, Africa, and Australia. Darwin collected a huge volume of notes on various plant and animal species, perhaps most famously the finch species inhabiting the Galápagos islands to the west of Ecuador. Although his finch studies were only briefly mentioned in one of his journals, “Darwin’s finches” are now a popular example of microevolution and adaptation for both students and the general public. One striking feature of these finch species is their diversity of beak shape; finches with larger, blunt beaks feed mainly on seeds from the ground while those with longer, thin beaks tend to have a diet of insects or seeds from fruit.

 

A recent study published in Nature examines the evolution of fifteen finch species that Darwin studied during his time in the Galápagos. Although previous work has helped construct phylogenetic trees based on mitochondrial and microsatellite DNA sequences from these same specimens, this is the first study to perform whole genome sequencing of all fifteen species. In addition to a more accurate phylogeny, these genome sequences allowed for new types of analyses to be performed.

 

First, the authors assessed the amount of interspecies hybridization that has taken place among the finches in their recent evolutionary history, and found evidence for both recent and more ancient hybridization between finch species on different islands. The authors then looked for specific genomic regions that could be driving the differences in beak morphology among the different finch species. To perform this analysis, they divided closely related finch species, on the basis of beak shape, into either “pointed” or “blunt” groups; the genomes from each group were then searched for differentially fixed sequences. Several of the most significant regions uncovered included genes known to be involved in mammalian and bird craniofacial development. The top hit, ALX1, is a homeobox gene with previously established roles in vertebrate cranial development. Interestingly, almost all of the blunt-beaked finches shared a specific ALX1 haplotype (“type B”) that was distinct from the haplotype shared by their pointed-beaked counterparts (“type P”). Based on the distribution of the “P” and “B” haplotypes, the authors estimated that these two groups of finches diverged approximately 900,000 years ago.

 

By applying genome-sequencing technologies, these labs were able to shed new light on a classic story in biology. Until fairly recently, phylogenetic relationships such as those described in the article could only be inferred on the basis of external morphology. In a Nature News piece commenting on this study, one of the co-authors remarked on what Darwin would think of the results: “We would have to give him a crash course in genetics, but then he would be delighted. The results are entirely consistent with his ideas.”


Drosophila Diaries: Ken and Barbie

By Michael Burel

 

The holidays are upon us, people. This is not a drill. While the greatest gift of all during the holiday season is giving rather than receiving, you can’t help but remember how amazing (and admittedly materialistic) it is to receive things from others. “Things, for free?!” Yes, things! For free! Such a life exists even for graduate students who exploit this chance to forgo extravagance in exchange for desperately needed life necessities: Tupperware, socks, food, shelter, social interaction, separation anxiety from work, relief from the dark ends of the seemingly infinite thesis tunnel. You know, the basics.

 

Perhaps one of the most interesting side effects of the holiday season is nostalgia, that wistful remembrance of holidays past. Remember getting that new LEGO set when you were six years old? The excitement of unwrapping a new video game? The screech you squealed when you finally got the newest iteration of Ken and Barbie dolls? In fact, it could have been this latter gift that spurred your desire to pursue science, seeing as Barbie herself has pursued over 150 careers in her lifetime, one of the most recent of which was computer engineering. Though her accompanying book I Can Be a Computer Engineer was met with sweeping criticism about Barbie’s reliance on male figures to code her ideas, it nevertheless was an important step towards increasing STEM awareness among the highly impressionable toddler set.

 

Quite surprisingly, Ken and Barbie dolls inspire not just receptive to-be scientists, but also the I-already-have-my-PhD-and-receive-federal-funding ones. In this segment of Drosophila Diaries, I’ll explore my favorite fruit fly gene name to date: ken and barbie.

 

You’ve heard it over and over again in your biology classes: Fruit flies provide an exceptional paradigm for studying gene function. They replicate quickly, possess evolutionarily conserved but simplified anatomy and cell behavior, and provide robust genetic tractability. It’s no wonder, then, that scientists in the early 1990s used Drosophila spermatogenesis as a means to uncover novel genes that govern stem cell identity, mitosis, meiosis, morphogenesis, and cell-cell interactions. Within a single tissue, all of these processes can be empirically observed and probed ad nauseam, providing an unprecedented means to discover new genes (and subsequently name them weird, functionally-specific things).

 

In 1993, Diego Castrillon and colleagues published in Genetics a P-element mutagenesis screen that revealed mutations altering normal tissue function in the fruit fly testis. P-element mutagenesis offers some pretty nice incentives that expedite the genetic screening process. The technique involves a transposable element (those jumpy genes in our genome that plop in and out of place) inserting itself into random genes and disrupting their function by perturbing DNA sequences. P elements can be quickly mapped to genomic locations, used to make new mutant alleles of the genes they settle into, and exploited to clone out surrounding DNA and recover molecular information about their genetic geography.

 

Castrillon et al. generated over 8,000 fly lines that contained P elements plopped into random genomic locations. Of these, over 1,900 lines were screened for altered spermatogenesis; ultimately, the authors isolated and characterized 83 fly lines in which males couldn’t produce progeny. These 83 lines were subdivided into seven phenotypic classes, the last of which was a rather peculiar one: “sperm transfer defects.”

 

Male flies with sperm transfer defects essentially had difficulty with the final part of copulation: the transfer of sperm to females. For example, male parts were sometimes in the wrong place, such as in twig mutant flies, where the anal-genital plate was incorrectly rotated. Others, like the pointed mutant, had normal levels of motile sperm stored away but just couldn’t get them from point A to point B. These two mutants had the right “tools,” so to speak, but the final mutant fly in this phenotypic category apparently forgot its toolbox altogether. Flies carrying one particular P-element insertion completely lacked external genitalia. Upon dissecting the male flies, the researchers observed that all the internal sexual organs were intact…but where were the outside parts? Whether plagued by holiday nostalgia or not, the scientists knew exactly what to name this new mutant gene: ken and barbie, after the dolls that also do not possess external genitalia.

 

As you’re wrapping up that gift for your brother or sister, niece or nephew, next-door neighbor, or local toy drive, consider two things: (1) how will this gift inspire the next generation to enter STEM fields, and (2) how will this gift inspire the hilarious naming of currently undiscovered genes, letting scientists leave their comedic mark for decades to come? And if you’re in search of gift ideas for your fellow science enthusiasts, Scizzle has you covered. ‘Tis the season.

 


Evolution is Real and I Have Seen it!

 

By Brent Wells, PhD

It has recently come to my attention that the biggest argument of opponents of evolution is that it has never been witnessed. While I could go on and on about the merits of this argument (considering it is a favorite of proponents of creation – something I’m pretty sure no one here was around to post on Instagram), I will instead give you seven examples of evolution that have occurred in recent history.

 

1) DOG BREEDS

Any evolutionist will tell you that all modern dog breeds share a common ancestor in the wolf, but even if you don’t want to believe that, consider the Chinook; this breed was developed in New Hampshire in the 1920s. Dog breeding is an example of human-driven evolution yet relies on the exact same principles that drive it in the wild. Random mutation leads to a trait that then becomes fixed in the population if it provides a reproductive advantage. In the case of dog breeds, the mutation is random and we simply select for the traits WE enjoy (docile, smart, loyal, big ears, small tail) and continue to select and select until we have a completely new breed. That’s evolution.
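The "select and select until fixation" logic above can be sketched as a toy Wright-Fisher simulation. This is a generic textbook-style model with made-up parameters, not taken from any particular study: a favored allele starts rare, each generation's gene pool is resampled with the favored allele weighted by a selection coefficient, and the loop reports when the allele fixes or is lost.

```python
import random

def generations_to_fixation(pop_size=500, s=0.2, p0=0.05, max_gens=10_000):
    """Toy Wright-Fisher model of selection: track the frequency p of a
    favored allele among 2N alleles. Each generation, the favored allele's
    chance of being sampled is re-weighted by (1 + s), then the next
    generation is drawn at random (drift). Returns the generation at which
    p reaches 1.0 (fixation), or None if the allele is lost or never fixes
    within max_gens."""
    p = p0
    for gen in range(1, max_gens + 1):
        # selection: carriers of the favored allele contribute more parents
        w = p * (1 + s) / (p * (1 + s) + (1 - p))
        # drift: binomially sample the next generation's 2N alleles
        p = sum(random.random() < w for _ in range(2 * pop_size)) / (2 * pop_size)
        if p == 1.0:
            return gen
        if p == 0.0:
            return None  # lost to drift despite its advantage
    return None
```

Setting s very high mimics a breeder's artificial selection, in which the favored trait fixes in far fewer generations than it would under a weak natural advantage; the underlying sampling process is the same either way.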

 

2) AGRICULTURE

Speaking of human-driven evolution, the crop industry is probably the single most impressive example, even before the GMO craze. Corn did not begin its time on this earth producing golden, forearm-sized ears for us to smother in butter and salt. Early maize was actually a grass that held 8-10 hard-shelled seeds and required evolutionary selection to become what it is today. In fact, corn as we know it can’t even survive in the wild on its own, so I think it’s safe to say that it wasn’t part of a diet in the Garden of Eden.

 

3) ELEPHANT TUSKS

Poachers are some spectacular scumbags who, despite the dwindling African elephant populations, continue to kill these great animals just for their ivory tusks. In the wild, most male elephants develop large tusks. I say most because a small percentage don’t, or develop smaller tusks, due to a genetic mutation. That number has gone from about 5% to as high as 35% in some areas over the last 150 years. Why? Because male elephants without tusks are undesirable to poachers, which means they are the ones who live to pass on their genes, including the mutation for tusklessness, thereby increasing the percentage of offspring that share the trait.

 

4) TOXIC-LOVING FISH

The Hudson River has always had a pollution stigma surrounding it, and maybe rightfully so. It turns out that it was one of the country’s biggest dumpsites for PCBs, which were introduced in 1929 and outlawed 50 years later. PCBs are deadly to most animals, but one fish, the Atlantic tomcod, evolved a modified gene that allows it to regulate the toxic effects of PCBs and related chemicals and continue to thrive. This adaptive evolution occurred in as few as 20-50 generations, all since 1929.

 

5) YEAST

Scientists have been able to take single-celled brewer’s yeast, allow them to grow in colonies, and at the end of 350 generations (about 60 days), see traits of multicellularity. This, in fact, might be similar to how the first multicellular organisms came to be on the earth, only this time, we were watching.

 

6) PEPPERED MOTH

The Peppered Moth, white with dark speckles, was widely distributed in England prior to the Industrial Revolution. This color scheme acted as camouflage, as it blended into similarly patterned trees, and therefore helped the moth avoid predation. As more and more factories began to clutter the landscape, soot from burning coal eventually darkened the trees, making the white moth with dark spots easy for predators to spot and eat. This is when the moths evolved an all-black color, allowing them to once again blend in with the surrounding trees and avoid becoming dinner. This is a case of evolution occurring within the last 200 years.

 

7) THE FLU

If you aren’t convinced yet, consider the viruses that put you out of commission with the common cold or the flu at least once a year. Each year your immune system develops antibodies to fight off the current strain, antibodies that stay in your body and remember that specific virus, and every year a new strain comes along and catches your immune system off guard. This is not because there are innumerable viruses; it’s because the virus is evolving from year to year into something your body cannot recognize but which can still reproduce using your cells. This is also what scientists and doctors are referring to when they warn about the emergence of a “super virus”: by doing battle with viruses, we are actually pushing them to evolve into something that we cannot effectively prevent or treat, which would then have huge impacts on the population.

 

So there you have it: seven examples of evolution happening under the watchful eye of humans. And I could go on. Regardless of how you imagine life originated (that is a completely different debate), I think it’s hard to deny that evolution is a very real process. You may have noticed a common thread in the examples above: most modern evolution is a result of gross human negligence, and since that is not on track to change, I’m betting we will see even more of nature evolving in response.


Which Came First: the Enzyme or Metabolism?

 

By Elizabeth Ohneck, PhD

Where did we come from? How did life originate? These questions are perhaps the oldest form of “existential crisis,” and questions that science has long sought to answer. The path from simple elements to complex biological systems was dependent on the development of self-organizing, self-replicating systems, as well as generation of metabolic pathways and the enzymes that drive them. Much work has been done to build a map of this evolution, but we are far from a complete understanding.

 

Metabolism is the group of chemical reactions required for life, the processes that build our DNA, our RNA, and the proteins and fats that comprise us, as well as break down molecules to provide the energy for these activities. These reactions occur within cells and are driven by proteins called enzymes. Many of these reactions are conserved among all life, from tiny bacteria, to plants, to animals, including humans, suggesting these metabolic processes are ancient and likely arose before life as we know it; in fact, the origination of these processes likely allowed the formation of life. The consideration of the origins of metabolic networks presents a chicken-or-egg scenario: which came first, metabolism or enzymes? In an exciting paper published in Molecular Systems Biology, Keller et al. provide evidence for the former, demonstrating metabolic-type reactions can occur in the absence of enzymes in an environment that plausibly mimics earth before life.

 

Keller et al. selected a series of compounds that serve as intermediates of two universal metabolic pathways: glycolysis, which breaks down the sugar glucose to release energy, and the pentose phosphate pathway, which converts sugars to ribose-5-phosphate, a building block of RNA and DNA. Using ultra-pure water and chemical preparations, they first dissolved a known concentration of each chemical in water and heated the solutions to 70°C, a plausible temperature for early earth ocean environments near heat sources such as thermal vents. After 5 hours, they examined the chemicals in the solutions by liquid chromatography-selective reaction monitoring, a highly sensitive technique to measure the types and amounts of chemicals in solution. They discovered that many of the compounds were converted to other metabolic intermediates, with the most common being pyruvate, an important branch-point metabolite that can be used to generate energy or converted to sugars, fatty acids, or amino acids.

 

The researchers then repeated this experiment in an “Archean ocean mimetic” – a solution of salts and metal ions at concentrations likely found in early earth’s oceans, as determined from geological data. While the salts alone did not change the reaction outcomes, the addition of metal ions resulted in a greater number of conversions, including the production of ribose-5-phosphate and erythrose 4-phosphate, a precursor for the formation of amino acids. In further studies, the researchers demonstrated that iron, which would have been at high concentration in early earth’s oceans, was the key metal ion in driving the conversion reactions. Additionally, they showed that an anoxic, or low-oxygen, environment, as would have been the state of early earth, facilitated these reactions.

 

The conversion of metabolites mimicked enzyme-catalyzed metabolic reactions that occur within our cells. The researchers ruled out the possibility of contaminating enzymes in their reactions in several ways. First, they conducted these reactions at a temperature that, while plausible for the early earth ocean environment, is too high for most metabolic enzymes. Importantly, these reactions were not observed below 40°C, temperatures at which common metabolic enzymes would be functional. Second, critical cofactors, or small molecules required for the function of some enzymes, were absent. Third, the researchers stringently assured purity of their reaction mixtures through physical and chemical means, including filtering the solutions through a membrane with a very small size cut-off that would exclude complex proteins such as enzymes, repeating experiments in different types of reaction tubes, and adding organic solvents that would denature or inhibit enzymes.

 

Thus, Keller et al. effectively demonstrated that metabolic reactions critical to life that we know today to be catalyzed by enzymes can occur in the absence of enzymes under conditions that mimic the environment of earth before life. These reactions include the formation of molecules that form the building blocks of RNA, DNA, and proteins, of which all living organisms are comprised. Their findings provide support for the hypothesis that metabolic networks arose first, leading to the subsequent formation of RNA and enzymes, which would eventually give rise to self-replicating systems that would evolve into the first cells.

 

Many questions remain, perhaps the most prominent being: where did the original metabolite compounds come from? One possibility is based on the Miller-Urey experiment, published in 1953. In this experiment, researchers combined water, methane, ammonia, and hydrogen, thought to be the primary components of the early earth’s atmosphere, and applied high-voltage electrical pulses, which, surprisingly, generated the amino acids alanine and glycine. Subsequent research refined the reaction setup to more accurately mimic the environment of early earth and was able to demonstrate the creation of multiple essential biomolecules. One might imagine, then, that the elemental and atmospheric conditions of earth pre-life allowed the generation of complex molecules, which, in the low-oxygen, high-iron conditions, underwent chemical conversions, creating the first metabolic networks that in turn allowed the generation of life.

 

These hypotheses are, of course, just that: hypotheses. Whether the occurrence of such a chain of events can be conclusively proven is debatable. But studies such as the Miller-Urey experiment and the research by Keller et al. present thought-provoking findings that stimulate careful consideration of the incredible set of circumstances that led to the generation of life. To imagine how simple elements combined to form the diversity and complexity of life on our planet today can be overwhelming and awe-inspiring. The study by Keller et al. provides an important piece in the puzzle of understanding our past.


A Dog’s Tale

By Sally Burn

2014 may be the Year of the Horse, but dogs have started the year as a scientist’s best friend, giving paws for thought in several recent papers. Freedman and colleagues barked up the right evolutionary trees to investigate canine evolutionary history, while Waller et al. report that puppy dog eyes give dogs a selective advantage when soliciting human care. Finally, a paper just published in Science sniffs out the genome secrets of an ancient transmissible dog cancer.

 

A Dog’s Tale Part I: The evolutionary history of dogs

Dogs and wolves share many traits but their exact evolutionary connection is unclear. A new paper out this month in PLoS Genetics attempts to address their phylogenetic relationship and reconstruct the early evolutionary history of man’s best friend. Adam Freedman and colleagues doggedly sequenced the genomes of three region-specific gray wolves, two basal dog breeds historically isolated from wolves, and a jackal outlier. Comparisons of these genomes (plus a boxer dog genome) revealed that modern wolves and dogs arose from a now-extinct common ancestor, contradicting the common notion that dogs simply descended from wolves that cozied up with humans. After the initial divergence, both the dog and wolf lineages went through severe population bottlenecks, resulting in increased disparity between their gene pools. The dog lineage was subsequently domesticated by hunter-gatherers, around 11-16 thousand years ago according to this new paper. Analysis was complicated by the fact that the genomes have not been in total isolation from one another, as extensive wolf-dog interbreeding has permitted further gene flow between the species. Such admixture and the extinction of the common ancestor have rendered the evolutionary history of dogs particularly hard to dissect, leading to vastly different conclusions from different research groups. Indeed, a study published last year concluded that population bottlenecks were not that significant during dog evolution. Another bone of contention has been the link between dietary adaptation and domestication. Grab yourself some kibble as we move on to that shaggy dog tale next…

 

A Dog’s Tale Part II: A dog’s dinner

The domestic dog is particularly fond of scraps from human tables. However, the human diet changed dramatically when we transitioned from hunter-gatherers to agriculturalists, and so too did the digestive abilities of dogs. This was the conclusion of a study published in Nature in 2013, in which Erik Axelsson and colleagues discovered that dogs possess a number of gene variants associated with starch digestion. Compared to wolves, dogs have a seven-fold increase in copy number of AMY2B, a gene involved in the breakdown of starch. This was a necessary adaptation to share the starch-rich food of humans. The advent of agriculture was, they argued, a catalytic event in domestication as humans now had attractive scrapheaps, from which genetically-equipped wolves could steal tasty morsels. All of a sudden hanging with the humans was advantageous for survival and so the ancestor of modern dogs was born. In contrast, this month’s Freedman et al. study found that AMY2B copy number varies between dog breeds and is also high in some wolves, discrediting the notion of high AMY2B copy number being an exclusively dog trait. More specifically, AMY2B copy number is low in dog breeds not associated with agricultural societies, reaffirming their conclusion that domestication predated the onset of agriculture. The contrasting conclusions of the two papers demonstrate once again the difficulties in tracing canine evolution.

 

A Dog’s Tale Part III: Puppy dog eyes

Regardless of how they came to be able to digest our food, one thing we can be sure of is that dogs have a guaranteed mechanism for obtaining it: puppy dog eyes. Now researchers at the University of Portsmouth, UK, have found evidence that puppy dog eyes provide a selective advantage when soliciting human care. The team proposed that a key factor in dog domestication was human selection against aggression; they hypothesized that, in a process of co-evolution, dogs displaying pedomorphic (puppy-like) facial characteristics were preferentially selected by humans desiring increasingly tame canine companions. To test this hypothesis they used the speed of rehoming from shelters as a proxy for artificial selection. Humans stood in front of shelter pens and the facial expressions of the dog inmates were analyzed using a novel system called DogFACS (Dog Facial Action Coding System). As predicted, dogs who displayed puppy-like facial expressions were rehomed faster than those who did not. A key facial movement was the raising of the inner brow to make their eyes look bigger and more puppy-like. So next time you acquiesce and give doe-eyed Fido a superfluous treat, take solace in the fact that your weakness is just part of your DNA.

 

A Dog’s Tale Part IV: Transmissible dog cancer genome

Our final dog bulletin concerns the world’s oldest known cancer. Canine transmissible venereal tumor (CTVT) spreads when cancer cells pass between dogs during mating. Researchers at the Wellcome Trust Sanger Institute in Cambridge, UK, sequenced the cancer cells’ genome and published their findings last week in Science. They found that the cancer originated in a single dog around 11,000 years ago; the cancerous cells have been passed on ever since as a clonal lineage, long outliving the body from which they came and making CTVT the oldest known living cancer in the world. The cancerous cells still contain the genome of the dog in which the cancer arose, allowing the team to build up a genetic “identikit” of the first infected animal. The canine patient zero was a medium to large husky-like dog, with black or agouti fur. Mutation analysis pinpointed the origin to approximately 11,368 years ago. The cancer was initially contained within an isolated dog population but it became a worldwide problem around 500 years ago, possibly as a result of humans traveling the earth and taking four-legged companions with them. Some of these voyages may have been to sunny locales as the cancer’s genome bears hallmarks of exposure to ultraviolet light. The cancer cells have also undergone many other changes during their evolution, losing 646 genes and acquiring an estimated 1.9 million somatic substitution mutations – several hundred times the number found in most human cancers. Despite this accumulation of mutations the cancer cells have survived, illustrating just how robust mammalian somatic cell lines can be. Indeed, many of the mutations may have allowed the cancer to adapt to niche changes and thrive. While the cancer itself is rare, this study is of note as it chronicles the evolutionary history of a transmissible cancer. Further analysis of the cancer’s genome may therefore provide insights into the processes underlying cancer transmissibility.
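The dating behind a figure like "approximately 11,368 years ago" rests on molecular-clock reasoning: mutations accumulate at a roughly constant rate, so elapsed time is the observed substitution count divided by the expected substitutions per year. Here is a back-of-the-envelope sketch; every number below is a hypothetical placeholder for illustration, not a value from the study:

```python
def molecular_clock_age(n_substitutions, rate_per_site_per_year, n_sites):
    """Elapsed time = observed substitutions / expected substitutions per year.
    Assumes a constant, clock-like mutation rate across the sites counted."""
    return n_substitutions / (rate_per_site_per_year * n_sites)

# Hypothetical illustration only: 5,000 substitutions at clock-like sites,
# a rate of 1e-8 substitutions per site per year, over 44 Mb of sequence.
age = molecular_clock_age(5_000, 1e-8, 44_000_000)
print(round(age))  # ≈ 11,364 years under these made-up inputs
```

In practice the rate itself must be calibrated (for example, against samples of known age), and the uncertainty in that calibration dominates the uncertainty of the final date.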


Got Stripes? How the Zebrafish Got Its Stripes.

 

By Sophia David

What do butterflies, snakes and fish all have in common? One answer could be that they all display colourful and spectacular skin pigmentation patterns. The zebrafish, for example, displays a beautiful and characteristic stripy pattern.

In the last decade, zebrafish, also known as Danio rerio, have emerged as an excellent model organism for studying vertebrate biology and, in particular, vertebrate development. This is due to the ease of maintaining large stocks of zebrafish, their quick development, and the transparent nature of zebrafish embryos and larvae. Luckily then, scientists wishing to study how pigment pattern formations develop already have a great model organism at their fingertips.

The zebrafish stripe pattern consists primarily of two types of pigment cell: melanophores (black pigment cells) and xanthophores (yellow pigment cells). Mutants that lack either of these types of cells do not show the stripy pattern.

Previous work by scientists from Osaka University in Japan showed that interactions between these two types of pigment cells are important for the development of the stripy pattern. In particular, they found that direct contact between xanthophores and melanophores causes the membrane potential of melanophores to change; this is called membrane depolarization. They hypothesized that the membrane depolarization of melanophores affects the movement of the cells, and that these movements, in turn, result in the formation of the characteristic pigment patterns.

In the study published this week in the journal PNAS, the same scientists tested and confirmed their hypothesis, and further characterized the interaction between the two types of pigment cell. They showed that the xanthophore cells reach out to touch melanophores by extending a part of their cell. These temporary projections of cells are called pseudopodia. Meanwhile, the melanophores show a repulsive response to the pseudopodia of xanthophores and move away. The xanthophores are not discouraged, however, and continue to chase the running melanophores. The authors called these “run-and-chase” movements. They believe that these movements cause the segregation of xanthophores and melanophores into distinct stripes.

The scientists further demonstrated that these run-and-chase movements are disrupted in mutant zebrafish that do not show the typical stripy patterns. For example, “jaguar” mutants have broader and fuzzier stripes. The scientists showed that the repulsive response of melanophores in jaguar zebrafish is inhibited compared with that in wild-type zebrafish, so essentially the melanophores cannot “run away” as quickly. This is thought to lead to the incomplete segregation of the two types of cell, resulting in broader and fuzzier stripes.

There is still much left to understand, however. The next steps are to understand the precise molecular mechanisms that occur when the two types of cells interact and how these lead to specific cell movements. Furthermore, the scientists want to understand how those mechanisms differ in the mutant zebrafish.


Mother Knows Best: Breast Milk Affects Cognitive Function

 

 

By Celine Cammarata

Bacteria, parents’ experiences, poverty - the list of environmental factors that affect cognition continues to grow, and now there’s a new one: breast milk.  The composition of a mother’s milk can confer cognitive advantages to her offspring.

 

It’s known that the cytokine TNF (tumor necrosis factor), which works primarily in the immune system, is also found in the central nervous system, and past evidence has even suggested that TNF knockout mice may have some cognitive gains.  Now, researchers have revealed that spatial learning and memory are improved in mice whose mothers lack one or both TNF alleles.

 

Mice born to mothers with little or no TNF outperformed their peers on the Morris Water Maze, a widely used test of spatial memory that depends on the hippocampus.  These animals also demonstrated a transient elevation of neural proliferation in the dentate gyrus region of the hippocampus during development, which may underlie their increased cognitive skills; inhibiting this increased proliferation eliminated the behavioral differences.  Furthermore, gene expression and dendrite morphology of dentate gyrus neurons were also altered in these offspring.

 

Why?  The answer could not be genetic: mice whose biological mothers were genetically normal, but who were raised by TNF-deficient foster mothers, had the same advanced skills.  So did offspring of genetically normal mothers when the mothers’ TNF levels were inhibited directly by administering antibodies against the protein postpartum.  It turns out that reducing TNF leads to lower levels of several chemokines in the mother’s milk.  Because newborn mice can’t fully break down such proteins, when these chemokines are present in milk they can reach the pup’s digestive tract in concentrations high enough to affect the immune and nervous systems from the gut; conversely, their absence from milk leaves a mark as well.  Although the precise mechanism by which milk composition alters hippocampal development is not yet clear, it may act in part by altering the number of white blood cells present in circulation.

 

TNF levels in the body can decrease as a downstream effect of mild stress and physical activity, as might occur when animals live in a challenging environment.  Perhaps this system of using low TNF to trigger increased cognitive capacity in offspring evolved as a way for parents to better “prepare” their young for the world.


Sizzling Papers of the Week - Nov 22

 

The Scizzle Team

Guys, Stop Fighting Over Me!

Male-male aggression is part of sexual selection in many species, and is affected by environment, experience, and the animal’s state - but how?  Researchers found that while male fruit flies will usually be at each other’s throats when an eligible [fly] bachelorette is around, this fighting is reduced in males who’ve had prior exposure to the ladies.  Turns out the male flies can sense females via a special pheromone-sensing ion channel, triggering activity in a pathway mediated by the inhibitory neurotransmitter GABA, which quells male aggression.  Thus, this newfound circuit represents a key means for experience to modulate aggression.

 

Female contact modulates male aggression via a sexually dimorphic GABAergic circuit in Drosophila, Yuan Q. et al., Nature Neuroscience, November 17, 2013

Create a feed for GABAergic circuit to keep up with all the fighting.

[hr]

Another Reason to Love Germs

You know how when you tell distant family members you’re a scientist, there’s always someone who asks whether you’re curing cancer?  Well it turns out the millions of microbes living in your gut can answer that question with a resounding “yes.”  These little guys can have a big effect on inflammation, which in turn plays an important role in cancer.  Investigators found that when mice lacked a robust host of microorganisms, they responded less well to cancer therapies.  Way to go bacteria!

 

Commensal Bacteria Control Cancer Response to Therapy by Modulating the Tumor Microenvironment, Iida, N. et al., Science, November 21, 2013

Create a feed for bacteria, microenvironment and cancer.

[hr]

If Only We Could Remember What This Paper Was About...

With the medical use of marijuana on the rise, it’s more important than ever to understand the mechanisms of unwanted side effects.  Researchers made important strides in clarifying how marijuana affects memory when they discovered that ∆9-THC, the active component in the plant, induces the activity of the enzyme COX-2 via CB1 receptors.  Blocking COX-2 reduces the negative impacts of ∆9-THC on memory, while permitting medicinal effects such as reducing neurodegeneration in Alzheimer’s disease.

 

∆9-THC-Caused Synaptic and Memory Impairments Are Mediated through COX-2 Signaling, Chen, R. et al., Cell, November 17, 2013

Fascinated by marijuana? Create a feed for marijuana, COX-2 and memory.

[hr]

Taking a Closer Look at Chromosomes

We know that mitotic chromosomes are critical to cell division, but much doubt remains about precisely how these structures are organized.  Now investigators have used chromosome conformation capture methods to shed more light on the issue.  They demonstrated that the organization previously inferred from nonsynchronous cells actually holds only in interphase, while a more homogenous, consistent organization occurs during metaphase.  Simulations went on to show that classical models don’t correctly explain the organization of chromosomes during mitosis.

 

Organization of the Mitotic Chromosome, Naumova, N., et al., Science, November 21, 2013

[hr]

Are You Cold?

Apparently, the mandated temperatures at which lab mice are housed are too cold for them and suppress their anti-tumor immune response. A new study published in PNAS shows that when mice were kept at thermoneutral temperatures, there were fewer immunosuppressive cells and significantly enhanced CD8+ T cell-dependent control of tumor growth. This study highlights the importance of environmental temperature conditions and shows how they may lead to a misunderstanding of the anti-tumor immune response when studying potential anti-cancer therapies.

Baseline tumor growth and immune control in laboratory mice are significantly influenced by subthermoneutral housing temperature. Kokolus KM et al., PNAS, November 2013.

[hr]


Forever Young?

 

Sally Burn

Embryos and the young can repair tissue injuries faster than adults in many different species. Wolverine, the clawed superhero of Marvel Comics, is a mutant who has harnessed this regenerative power, allowing him to rapidly heal any wound and to age more slowly than mere mortals. While this may be the stuff of comic books, a new Cell paper from the lab of George Daley at Harvard University has reported a mouse mutant with uncannily similar traits.

Researchers in Daley’s lab genetically engineered mice to postnatally produce Lin28a, an RNA-binding protein usually only active in embryos, where it is involved in tissue repair. The effects of postnatal retention of this protein were breathtaking. Much like Wolverine, the mice were huge and hairy, with an increased healing capacity. The authors got a massive shock when they carried out ear and toe clipping (both standard identification procedures): the tissue grew back. Regenerative ability varied between tissues though, indicating that Lin28a has tissue-dependent effects. Excessive hair growth was evident throughout life, and shaved mutant mice regrew hair faster than unmodified mice. Tissue removed during ear clipping also grew back in adults, whereas toe regrowth was only enhanced in juveniles; once the mice reached adulthood they lost the ability to regenerate toes. Cardiac tissue, however, could never be restored by reactivating Lin28a, suggesting that the heart may have mechanisms to resist regeneration.

Lin28a was already known to be expressed in embryonic stem cells and to play roles in cancer development. This new study shows that it is also a regulator of the ability to repair tissue damage. This ability lessens with age, as does production of Lin28a. Adults genetically modified to produce Lin28a retain an embryonic-like ability to heal, suggesting that the biological age of their cells has somehow been reset. The mechanism for resetting is not absolutely clear but the authors show that metabolism is increased in the Lin28a-producing cells, as indicated by heightened levels of oxidative enzymes required for mitochondrial function. Adult cells in which Lin28a is reactivated appear to revert to a juvenile bioenergetic state.

Harnessing Lin28a activity for therapeutic purposes in the injured or elderly is unlikely to be straightforward. Lin28a is involved in many cellular processes and increasing levels would produce many side-effects. Directly targeting the metabolic processes downstream of Lin28a may therefore be a better option. Indeed, the authors found that taking this approach in non-mutant mice replicated the effects seen in the Lin28a-producing mice. The giants of the cosmetics industry are probably now falling over themselves to target these pathways in a bid to make the ultimate anti-aging cream. Let’s just hope they iron out the excessive hair growth effects before any product hits the market…

 

Want to keep up with tissue regeneration and Lin28? Create your feeds in Scizzle.


A Peek Inside Mouse Development

Sally Burn

The humble laboratory mouse is one of the greatest tools researchers have to model human development and disease. A common approach is to create a transgenic model of a human disorder, often by “knocking out” a gene in mice and then examining the effects. Transgenic mouse models are of particular use for characterizing disorders that disrupt embryonic development. When a disease progresses through childhood or adult life we can gather information about its pathogenesis, even taking samples from the patient for research along the way. However, genetic diseases that disrupt embryonic development often result in death during gestation or at birth, limiting opportunities to observe how the disease manifested. By examining embryonic development in mouse models we can get an idea of the timeline of events involved.

Unfortunately, mouse embryonic development can usually be examined only in a piecemeal fashion. The roughly twenty days of mouse gestation cannot be observed in a single fluid motion; instead researchers must euthanize the mothers and remove the embryos for examination at set points throughout gestation. The embryos cannot survive outside the mother, and so all that can be achieved is a snapshot of that moment in development. Imagine that instead of watching a movie all you get is a series of film stills, which you must piece together to try to get the full story, potentially missing key plot twists. Now, in an effort to address this problem, researchers are turning to a non-invasive imaging technique used routinely in humans: high frequency ultrasound.

In utero ultrasounds were first reported in mice nearly twenty years ago but are still not that widely used