On Science and Values

 

By Rebecca Delker, PhD

 

In 1972 nuclear physicist Alvin Weinberg defined ‘trans-science’ as distinct from science (references here, here). Trans-science – a phenomenon that arises most frequently at the interface of science and society – includes questions that, as the name suggests, transcend science. They are questions, he says, “which can be asked of science and yet which cannot be answered by science.” While most of what concerned Weinberg were questions of scientific fact that could not (yet) be answered by available methodologies, he also understood the limits of science when addressing questions of “moral and aesthetic judgments.” It is this latter category – the differentiation of scientific fact and value – that deserves attention in the highly political climate in which we now live.

Consider this example. In 2015 – 2016, moves to increase the use of risk assessment algorithms in criminal sentencing received a lot of heat (and rightly so) from critics (references here, here). In an attempt to eliminate human bias from criminal justice decisions, many states rely on science in the form of risk assessment algorithms. Put simply, these algorithms build statistical models from population-level data covering a number of factors (e.g. gender, age, employment, etc.) to provide a probability of repeat offense for the individual in question. Until recently, the use of these algorithms has been restricted, but now states are considering expanding their use to sentencing. What this fundamentally means is that a criminal’s sentence depends not only on the past and present, but also on a statistically derived prediction of the future. While the intent may have been to reduce human bias, many argue that risk assessment algorithms achieve the opposite; and because the assessment is founded in data, it actually serves to generate a scientific rationalization of discrimination. This is because, while the data underpinning the statistical models does not include race, it requires factors (e.g. education level, socioeconomic background, neighborhood) that themselves reflect centuries of institutionalized bias. To use Weinberg’s terminology, this would fall into the first category of trans-science: the capabilities of the model fall short of capturing the complexity of race relations in this country.
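To make the mechanics concrete, here is a minimal sketch, using scikit-learn, of the kind of model these tools rely on; the features, training data, and defendant are entirely hypothetical (real instruments are proprietary and trained on far larger datasets), but it shows how a seemingly neutral probability can be built on proxy variables.

```python
# Hypothetical recidivism risk model of the kind described above.
# Feature names and data are invented; real instruments are proprietary and
# trained on far larger datasets.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [age, prior_offenses, years_of_education, employed (0 or 1)].
# Race is absent, but education and employment can act as proxies for it.
X_train = np.array([
    [19, 2, 10, 0],
    [45, 0, 16, 1],
    [23, 4, 11, 0],
    [37, 1, 14, 1],
])
y_train = np.array([1, 0, 1, 0])  # 1 = re-offended during the follow-up period

model = LogisticRegression().fit(X_train, y_train)

# The "risk score" handed to a judge is a predicted probability of re-offense.
defendant = np.array([[21, 1, 10, 0]])
risk = model.predict_proba(defendant)[0, 1]
print(f"Predicted probability of re-offense: {risk:.2f}")
```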

But this is not the whole story. Even if we could build a model without the above-mentioned failings, there are still more fundamental ethical questions that need addressing. Is it morally correct to sentence a person for crimes not yet committed? And, perhaps even more crucial, does committing a crime mean that one forfeits the right to be viewed (and treated) as an individual – a value US society holds in high regard – and is instead reduced to a trend line derived from the actions of others? It is these questions that fall into the second category of trans-science: questions of morality that science has no place in answering. When we turn to science to resolve such questions, however, we blind ourselves to the underlying, more complex terrain of values that make up the debate at hand. By default, and perhaps inadvertently, we grant science the authority to declare our values for us.

Many would argue that this is not a problem. In fact, in a 2010 TED talk neuroscientist Sam Harris claimed that “the separation between science and human values is an illusion.” Values, he says, “are a certain kind of fact,” and thus fit into the same domain as, and are demonstrable by, science. Science and morality become one and the same because values are facts specifically “about the well-being of conscious creatures,” and our moral duty is to maximize this well-being.

The flaw in the argument (which many others have pointed out as well) is that rather than allowing science to empirically determine a value and moral code – as he argued it could – he presupposed it. That the well-being of conscious creatures should be valued, and that our moral code should maximize this, cannot actually be demonstrated by science. I will also add that science can provide no definition for ‘well-being,’ nor has it yet – if it ever can – been able to answer the questions of what consciousness is and which creatures have it. Unless human intuition steps in, this shortcoming of science can lead to dangerous and immoral acts.

What science can do, however, is help us stay true to our values. This, I imagine, is what Harris intended. Scientific studies play an indispensable role in informing us if and when we have fallen short of our values, and in generating the tools (technology/therapeutics) that help us achieve these goals. To say that science has no role in the process of ethical decision-making is as foolish as relying entirely on science: we need both facts and values.

While Harris’ claims of the equivalency of fact and value may be more extreme than most would overtly state, they are telling of a growing trend in our society to turn to science to serve as the final arbiter of even the most challenging ethical questions. This is because in addition to the tangible effects science has had on our lives, it has also shaped the way we think about truth: instead of belief, we require evidence-based proof. While this is a noble objective in the realm of science, it is a pathology in the realm of trans-science. This pathology stems from an increasing presence in our society of Scientism – the idea that science serves as the sole provider of knowledge.

But we live in the post-fact era. There is a war against science. Fact denial runs rampant through politics and media. There is not enough respect for facts and data. I agree with each of these points; but it is Scientism, ironically, that spawned this culture. Hear me out.

The ‘anti-science’ arguments – from anti-evolution to anti-vaccine to anti-GMO to climate change denial – never actually deny the authority of science. Rather, they attack scientific conclusions by creating a pseudoscience (think: creationism), pointing to flawed and/or biased scientific reporting (think: hacked climate data emails), clinging to scientific reports that support their arguments (think: the now-debunked link between vaccines and autism), or homing in on the concerns answerable by science as opposed to others (think: the safety of GMOs). These approaches are not justifiable; nor are they rigorously scientific. What they are, though, is a demonstration that even the people fighting against science recognize that the only way to do so is by appealing to its authority. As ironic as it may be, fundamental to the anti-science argument is the acceptance that the only way to ‘win’ a debate is to either provide scientific evidence or to poke holes in the scientific evidence at play. Their science may be bad, but they are working from a foundation of Scientism.

 

Scientific truth has a role in each of the above debates, and in some cases – vaccine safety, for example – it is the primary concern; but too often scientific fact is treated as the only argument worth consideration. An example from conservative writer Yuval Levin illustrates this point. While I do not agree with Levin’s values regarding abortion, the topic at hand, his points are worth considering. Levin recounts that during a hearing in the House of Representatives regarding the use of the abortion drug RU-486, a DC delegate argued that because the FDA decided the drug was safe for women, the debate should be over. As Levin summarized, “once science has spoken … there is no longer any room for ‘personal beliefs’ drawing on non-scientific sources like philosophy, history, religion, or morality to guide policy.”

When we break down the abortion debate – as well as most other political debates – we realize that it is composed of matters of both fact and value. The safety of the drug (or procedure) is of utmost importance and can, as discussed above, be determined by science; this is a fact. But, at the heart of the debate is a question of when human life begins – something that science can provide no clarity on. To use scientific fact as a façade for a value system that accepts abortion is as unfair as denying the scientific fact of human-caused climate change: both attempts focus on the science (by either using or attacking it) in an effort to thwart a discussion that encompasses both the facts of the debate and the underlying terrain of values. We so crave absolute certainty that we reduce complex, nuanced issues to questions of scientific fact – a tendency that is ultimately damaging to both social progress and society’s respect for science.

By assuming that science is the sole provider of truth, our culture has so thoroughly blurred the line between science and trans-science that scientific fact and value are nearly interchangeable. Science is misused to assert a value system; and a value system is misused to selectively accept or deny scientific fact. To get ourselves out of this hole requires that we heed the advice of Weinberg: part of our duty as scientists is to “establish what the limits of scientific fact really are, where science ends and trans-science begins.” Greater respect for facts may paradoxically come from a greater respect for values – or at the very least, allowing space in the conversation for them.

 


The Danger of Absolutes in Science Communication

 

By Rebecca Delker, PhD

Complementarity, born out of quantum theory, is the idea that two different ways of looking at reality can both be true, although not at the same time. In other words, the opposite of a truth is not necessarily a falsehood. The best-known example of this in the physical world is light, which can be both a particle and a wave depending on how we measure it. Fundamentally, this principle allows for, and even encourages, the presence of multiple perspectives to gain knowledge.

 

This is something I found myself thinking about as I witnessed the Twitter feud turned blog post turned actual news story (and here) centered on the factuality of physician-scientist Siddhartha Mukherjee’s essay, “Same but Different,” published recently in The New Yorker. Weaving personal stories of his mother and her identical twin sister with experimental evidence, Mukherjee presents the influence of the epigenome – the modifications overlaying the genome – in regulating gene expression. From this perspective, the genome encodes the set of all possible phenotypes, while the epigenome shrinks this set down to one. At the cellular level – where much of the evidence for the influence of epigenetic marks resides – this is demonstrated by the phenomenon that a single genome encodes the vastly different phenotypes of cells in a multicellular organism. A neuron is different from a lymphocyte, which is different from a skin cell, not because their genomes differ but because their transcriptomes (the complete set of genes expressed at any given time) differ. Epigenetic marks play a role here.

 

While many have problems with the buzzword status of epigenetics and the use of the phrase to explain away the many unknowns in biology (here, here), the central critique of Mukherjee’s essay was the extent to which he emphasized the role of epigenetic mechanisms in gene regulation over other well-characterized players, namely transcription factors – DNA binding proteins that are undeniably critical for gene expression. However, debating whether the well-studied transcription factors or the less well-established epigenetic marks are more important is no different from the classic chicken-or-egg scenario: it is impossible to assign an order or hierarchy, let alone separate the two from one another.

 

But whether we embrace epigenetics in all of its glory or we couch the term in quotation marks – “epigenetics” – in an attempt to dilute its impact, it is still worth pausing to dissect why a public exchange brimming with such negativity occurred in the first place.

“Humans are a strange lot,” remarked primatologist Frans de Waal. “We have the power to analyze and explore the world around us, yet panic as soon as evidence threatens to violate our expectations” (de Waal, 2016, p. 113). This inclination is evident in the above debate, but it also hints at a more ubiquitous theme of the presence of bias stemming from one’s group identity. Though de Waal deals with expectations that cross species lines, even within our own species, group identity plays a powerful role in dictating relationships and guiding one’s perspective on controversial issues. Studies have shown that political identities, for example, can supplant information during decision-making. Pew surveys reveal that views on the issue of climate change divide sharply along partisan lines. When asked whether humans are at fault for changing climate patterns, a much larger percentage of Democrats (66%) than Republicans (24%) answered yes; however, when asked what the main contributor to climate change is (CO2), these two groups converged (Democrats: 56%, Republicans: 58%; taken from Field Notes From a Catastrophe, pp. 199-200). This illustrates the potential for a divide between one’s objective understanding of an issue and one’s subjective position on that issue – the latter greatly influenced by the prevailing opinion of one’s allied group.

 

Along with group identity is the tendency to eschew uncertainty and nuance, choosing solid footing no matter how shaky the turf, effectively demolishing the middle ground. This tendency has grown stronger in recent years, it seems, likely in response to an increase in the sheer amount of information available. This increased complexity, while important in allowing access to numerous perspectives on an issue, also triggers our innate response to minimize cost during decision-making by taking “cognitive shortcuts” and receiving cues from trusted authorities, including news outlets. This is exacerbated by the rise in the use of social media and shrinking attention spans, which quench our taste for nuance in favor of extremes. The constant awareness of one’s (online) identity in relation to that of a larger group encourages consolidation around these extremes. The result is the transformation of ideas into ideologies and the polarization of the people involved.

 

These phenomena are evident in the response to Mukherjee’s New Yorker article, but they can be spotted in many other areas of scientific discourse. This, unfortunately, is due in large part to a culture that rewards results, promotes an I-know-the-answer mentality, and encourages its members to adopt a binary vision of the world where there is a right and a wrong answer. Those who critiqued Mukherjee for placing too great an emphasis on the role of epigenetic mechanisms responded by placing the emphasis on transcription factors, trivializing the role of epigenetics. What got lost in this battle of extremes was a discussion of the complementary nature of both sets of discoveries – a discussion that would bridge, rather than divide, generations and perspectives.

 

While intra-academic squabbles are unproductive, the real danger of arguments fought in absolutes and along group identity lines lies at the interface of science and society. The world we live in is fraught with complex problems, and Science, humanity’s vessel of ingenuity, is called upon to provide clean, definitive solutions. This is an impossible task in many instances as important global challenges are not purely scientific in nature. They each contain a very deep human element. Political, historical, religious, and cultural views act as filters through which information is perceived and function to guide one’s stance on complex issues. When these issues include a scientific angle, confidence in the institution of science as a (trustworthy) authority plays a huge role.

 

One of the most divisive of such issues is that of genetically modified crops (GMOs). GMOs are crops produced by the introduction or modification of DNA sequence to incorporate a new trait or alter an existing trait. While the debate ranges from concerns about the safety of GMOs for human and environmental health to economic concerns over the potentially disparate benefits to large agribusiness and small farmers, these details are lost in the conversation. Instead, the debate is reduced to a binary: pro-GMO equals pro-science, anti-GMO equals anti-science. Again, the group to which one identifies, scientists included, plays a tremendous role in determining one’s stance on the issue. Polling public opinion reveals a similar pattern to that of climate change. Even though awareness of genetic engineering in crops has remained consistently low over the years, beliefs that GMOs pose a serious health hazard have increased. What’s worse, these debates treat all GMO crops the same simply because they are produced with the same methodology. While the opposition maintains a blanket disapproval of all engineered crops, the proponents fare no better, responding with indiscriminate approval.

 

Last month The National Academy of Sciences released a comprehensive, 420-page report addressing concerns about GMOs and presenting an analysis of two decades of research on the subject. While the conclusions drawn largely support the idea that GMOs pose no significant danger for human and environmental health, the authors make certain to address the caveats associated with these conclusions. Though prompted by many to provide the public with “a simple, general, authoritative answer about GE (GMO) crops,” the committee refused to participate in “popular binary arguments.” As important as the scientific analysis is this element of the report, which serves to push the scientific community away from a culture of absolutes. While the evidence at hand shows no cause-and-effect relationship between GMOs and human health problems, for example, our ability to assess this is limited to short-term effects, as well as by our current ability to know what to look for and to develop assays to do so. The presence of these unknowns is a reality in all scientific research and to ignore them, especially with regard to complex societal issues, only serves to strengthen the growing mistrust of science in our community and broaden the divide between people with differing opinions. As one review of the report states, “trust is not built on sweeping decrees.”

 

GMO crops, though, are only one of many issues of this sort; climate change and vaccine safety, for example, have been similarly fraught. And, unfortunately, our world is promising to get a whole lot more complicated. With the reduced cost of high-throughput DNA sequencing and the relative ease of genome editing, it is becoming possible to modify not just crops, but farmed animals, as well as the wild flora and fauna that we share this planet with. Like the other issues discussed, these are not purely scientific problems. In fact, the rapid rate at which technology is developing creates a scenario in which the science is the easy part; understanding the consequences and the ethics of our actions yields the complications. This is exemplified by the potential use of CRISPR-driven gene drives to eradicate mosquito species that serve as vectors for devastating diseases (malaria, dengue, zika, for example). In 2015, 214 million people were affected by malaria and, of those, approximately half a million died. It is a moral imperative to address this problem, and gene drives (or other genome modification techniques) may be the best solution at this time. But, the situation is much more complex than here-today, gone-tomorrow. For starters, the rise in the prevalence of mosquito-borne diseases has its own complex set of causes, likely involving climate change and human-caused habitat destruction and deforestation. With limited understanding of the interconnectedness of ecosystems, it is challenging to predict the effects of mosquito specicide on the environment or on the rise of new vectors of human disease. And, finally, this issue raises questions of the role of humans on this planet and the ethics of modifying the world around us. The fact is that we are operating within a space replete with unknowns and the path forward is not to ignore these nuances or to approach these problems with an absolutist’s mindset. This only encourages an equal and opposite reaction in others and obliterates all hope of collective insight.

 

It is becoming ever more common for us to run away from uncertainty and nuance in search of simple truths. It is within the shelter of each of our groups and within the language of absolutes that we convince ourselves these truths can be found; but this is a misconception. Just as embracing complementarity in our understanding of the physical world can lead to greater insight, an awareness that no single approach can necessarily answer our world’s most pressing problems can actually push science and progress forward. When thinking about the relationship of science with society, gaining trust is certainly important but not the only consideration. It is also about cultivating an understanding that in the complex world in which we live there can exist multiple, mutually incompatible truths. It is our job as scientists and as citizens of the world to navigate toward, rather than away from, this terrain to gain a richer understanding of problems and thus best be able to provide a solution. Borrowing the words of physicist Frank Wilczek, “Complementarity is both a feature of physical reality and a lesson in wisdom.”

 


Rethinking Academic Culture

 

By Rebecca Delker, PhD

At the root of science is a desire to understand ourselves and the world around us. It is this desire that underpins innovations that become world-changing technologies; and it is this desire that fuels scientists. It is the passion for the work that makes the rigors of research – the time commitment necessary, the frequent monotony, and the oh-so-many failed experiments – worth it, (mostly) every single day. Put simply, to be a scientist is to love science. But, in our current academic culture where success is measured not by the scientific process but by the product, to be a scientist is also to live and work on the rim of the disconnect between the realities of research and the expectations of academia that so often ignore them. And it is in this culture that passion for science and for success in science makes scientists susceptible to the culture of shame that is pervasive in academia today.

 

I borrowed that idea – culture of shame – from self-proclaimed shame and vulnerability researcher Brené Brown – a woman I have only recently discovered but quickly became obsessed with, immersing myself in a Brené Brown-binge of TED talks (here and here), interviews, and books (here and here). A shame-prone culture, as she states, is one where fear is used as a management tool, where self-worth is tied to achievement and productivity, where perfectionism is the way of the land, where narrow standards measure worth, and where creativity and risk-taking are suffocated (Daring Greatly, Chapter 1). I think all of us can recognize at least some of these characteristics in academic science. I certainly can. And it’s this culture, not the science, at the root of my growing frustration with academia. Brown’s words capture perfectly the feelings and thoughts that have been kicking around in my head for the last several years – thoughts that were reinvigorated this past September when a prestigious scientific journal took to Instagram to wish post-docs a “happy and productive Labor Day.” September is long gone and the post’s mildly humorous take on #postdoclife buried in the ‘gram archives, but our broken culture, of which that post is a mere symptom, persists.

 

This culture, as Brown deftly identified, is one of scarcity – or more simply put, the never enough culture. Through seeking the unifying, head-nodding laughs of a truth widely understood, the aforementioned post very accurately identified a well-known downside of academic life: the expectation for long hours, even on weekends and national holidays, because it’s just never enough. Without necessarily intending to, this post, written by one of the leading publications in biomedical research, was perpetuating the shame felt by post-docs and other scientists, derived from feelings of not having accomplished enough – essentially, of having failed.

 

In the collective mind of academia – though without much evidence to support the claim – quality and quantity tend toward equivalence such that success is linked to the quantity of hours spent in the lab. While more often than not those extra hours prove not to be essential, we have all felt the pressure to choose work over another aspect of life. I aim not to downplay the vast amount of work that research requires – it’s a lot – but rather to highlight how time has gone from a measure of seconds ticking by to a metric by which the quality of a scientist is determined. By reducing the outcomes, especially failures, of experiments down to time spent in the lab (or vacations and holidays skipped), we are in effect placing the responsibility of those failures on the scientist. The result is a culture in which the sum of hours worked (greater than the norm) is worn as a badge of honor and feelings of pride and accomplishment go hand-in-hand with feelings of being overworked and exhausted.

 

The presence of self in our science is not unexpected. If someone were to ask me to choose words that best describe me, scientist would be at the top of the list. It is part of my identity both in and out of the lab and I imagine the same is true for many of my colleagues. The problem arises when experimental failures become personal failures, and in our current culture the equation between scientific success and self-worth is too often made. As a start, simply look at the language we use to describe technical finesse: good hands produce successful experiments; bad hands do not. It’s as if the fate of the experiment were genetically encoded. In reality, though, even the best hands can’t always generate the desired results because often we (and our hands) can’t comprehend all of the unknowns at play.

 

What is paramount to understanding how this culture of shame was created and persists is our definition of success and of failure. There is a growing misunderstanding in our culture-at-large of what science actually is. The way we educate, and thus the expectations that follow, presents science as a series of facts – untouchable, black-and-white conclusions. An emphasis on the information revealed by experiments, rather than the process, strengthens this misunderstanding by glossing over the critical thinking required to interpret what is often very nuanced data.

 

While scientists may not fall victim to this mentality to such an extreme, we are not innocent either. Within academic circles, too, the process of science often comes second to the findings; and this can largely be explained by the product-driven nature of science these days. In an environment with decreased funding and insufficient academic positions for the growing number of scientists, the product – that is, publications – becomes the focus. It also becomes the means by which success and failure are defined. In this culture, success, measured by nominally quantitative metrics that rank the importance of scientific work and the quality of scientists, relies on publishing a paper – a big one, preferably, and quickly. Everything else may as well be called failure.

 

“I saw the results, and I wanted to throw myself off a bridge” (The Antidote: Happiness for People Who Can’t Stand Positive Thinking, Chapter 7). This is an actual quote from an interview with a biochemist conducted by researcher Kevin Dunbar, who wanted to understand exactly how science works. His findings, which would surprise no practicing scientist, reveal that most experiments fail; they “rarely tell us what we think they’re going to tell us.” The quote from the biochemist above is an exaggerated example, but it illustrates the point that unanticipated results that are inconsistent with initial hypotheses – the majority of the results we deal with – are treated as failures even though they may reveal a new (not yet understood) fact. Dunbar went on to show that this response is due in part to the human tendency to focus in on evidence that is consistent with current theories. I would argue, though, that the product-driven nature of academic science that exclusively rewards publishable, positive results only strengthens this. As Dunbar states, “the problem with science, then, isn’t that most experiments fail – it’s that most failures are ignored.”

 

Stuart Firestein, neuroscientist at Columbia University and author of two books (here and here), takes this idea a step further and reminds us that not only is science a process teeming with failure, but that failure is just as necessary as success to move science forward. “This iterative process – weaving from failure to failure, each one sufficiently better than the last – is how science often progresses,” he says. To forget this, as we often do, is not only psychologically damaging to the people conducting the science but horribly detrimental to the science itself. Not only does every failure pushed aside represent a lost opportunity to explore new terrain, but our narrow definition of success stifles creativity – an endeavor that requires enough time for missteps and recalculations.

 

So how do we fix our culture of shame? On this, we can extract some sage advice from Firestein and Brown: we need to become a lot more comfortable with uncertainty. Firestein advocates for an emphasis on ignorance – not stupidity, but simply the absence of knowledge – in science. Rather than obsessing over our quest to find an answer and eliminate any remnant of not knowing, we must embrace the idea that it is this not-knowing that drives science. “Answers don’t merely resolve questions; they provoke new ones.” And in doing so, drive innovation. Brown would call this same idea vulnerability. While most of us associate vulnerability with weakness, it is, as Brown defines it, uncertainty, risk, and emotional exposure; it is the courage to accept not-knowing, failure and imperfection that serves as a prerequisite for creativity in science and many other endeavors.

 

In an attempt to discover the ingredients required in making a successful team, Google uncovered that the cultural norms of the group matter more than the individual intelligence of its members. Creating an environment founded in empathy, which allows each member the freedom to take risks and expose their insecurities without fear of negative consequences, improved the success of teams more than any other individual or group characteristic. In other words, they found that allowing individuals the space to be vulnerable actually improved the output of the group. To make a perfect team, it seems, requires accepting the “usefulness of imperfection.” With this in mind we can hope to move away from a culture where shame is coupled with experimental failures.

 

It is obvious that academia is due for some much-needed structural changes – from shifting away from our reliance on impact factors and other indices to judge the quality of science and scientists, to forging a deeper connection with the public and improving funding, to increasing (and respecting!) alternate pathways for successful scientists. I wholeheartedly believe that these structural changes won’t come unless we start adjusting our culture now. We must widen our definition of success and move away from a fact-based version of science to one of inquiry and ignorance. But most importantly, we must allow ourselves and our science to be vulnerable, make space for failure, and in doing so, breathe life back into the scientific process, which has been eclipsed by a results-driven culture. As Brown advises in Daring Greatly, our approach to research ought not be guided by a fear of the possibility of failure but rather by asking ourselves the question: “What’s worth doing even if I fail?”

 



CRISPR/Cas9: More Than a Genome Editor

By Rebecca Delker, PhD

 

The bacterial defense system, CRISPR/Cas9, made huge waves in the biomedical community when the seemingly simple protein-RNA complex of Type II CRISPR systems was engineered to target DNA in vitro and in complex eukaryotic genomes. The introduction of double-strand breaks using CRISPR/Cas9 in a targeted fashion opened the portal to highly affordable and efficient site-specific genomic editing in cells derived from yeast to man.

 

To get a sense of the impact CRISPR technology has had on biological research, one simply needs to run a search of the number of publications containing CRISPR in the title or abstract over the past handful of years; the results practically scream in your face. From 2012, the year of the proof-of-principle experiment demonstrating the utility of engineered Cas9, to 2015, CRISPR publications rose steadily from a mere 138 (in 2012) to >1000 (at the time of this post). Publications more than doubled between the years of 2012 and 2013, as well as between 2013 and 2014. Prior to the use of CRISPR as a technology, when researchers studied the system for the (very cool) role it plays in bacterial defense, publications-per-year consistently fell below 100. In other words, it’s a big deal.
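For the curious, a count like this can be reproduced programmatically. The sketch below uses Biopython’s Entrez interface to query PubMed year by year; the email address is a placeholder and the query string is only one reasonable way to phrase the search.

```python
# Sketch: counting PubMed records mentioning CRISPR in the title or abstract,
# per year. Requires Biopython; the email address below is a placeholder.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # NCBI asks for a contact address

for year in range(2012, 2016):
    query = f"CRISPR[Title/Abstract] AND {year}[PDAT]"
    handle = Entrez.esearch(db="pubmed", term=query)
    record = Entrez.read(handle)
    handle.close()
    print(year, record["Count"])
```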

 

In fact, during my 10 years at the bench I have never witnessed a discovery as transformative as CRISPR/Cas9. Overnight, reverse genetics on organisms whose genomes were not amenable to classical editing techniques became possible. And with the increasing affordability of high-throughput sequencing, manipulation of the genomes of non-model organisms is now feasible. Of course there are imperfections with the technology that require greater understanding to circumvent (specificity, for example), but the development of CRISPR as a tool for genomic engineering jolted biological research, fostering advances more accurately measured in leaps rather than steps. These leaps – and those expected to occur in the future – landed the discoverers of CRISPR/Cas9 at the top of the list of predicted recipients of the Nobel Prize in Chemistry; though they didn’t win this year (the award went to researchers of the not-totally-unrelated field of DNA repair), I anticipate that a win lies ahead. The rapid success of CRISPR genome editing has also sparked patent battles and incited public debate over the ethics of applying the technology to human genomes. With all of the media attention, it’s hard not to know about CRISPR.

 

The transformative nature of CRISPR/Cas9 does not, however, end with genome editing; in fact, an even larger realm of innovation appears when you kill the enzymatic activity of Cas9. No longer able to cut DNA, dead Cas9 (dCas9) becomes an incredibly good DNA-binding protein guided to its target by a programmable RNA molecule (guide RNA, gRNA). If we think of active Cas9 as a way to better understand genes (through deletions and mutations), then dCas9 is the route to get to know the genome a bit better – a particularly enticing mission for those, including myself, invested in the field of Genomics. From high-throughput targeted gene activation and repression screens to epigenome editing, dCas9 is helping scientists probe the genome in ways that weren’t possible before. Here, I put forth some of the best (in my humble opinion) applications, actual and potential, of CRISPR technology that go beyond genome editing.

 

Cas9 and Functional (Epi)Genomics

 

For many years the genome was considered the totality of all genes in a cell; the additional ‘junk’ DNA was seen as mere filler between the necessary gene units, stitching together chromosomes. We’ve come a long way since this naiveté, especially in recent years. We understand that the so-called junk DNA contains necessary regulatory information to get the timing and position of gene expression correct; and now, more than ever, we have a greater appreciation for the genome as a complex macromolecule in its own right, participating in gene regulation rather than acting as a passive reservoir of genetic material. The genome, it has been shown, is much more than just its sequence.

 

The epigenome, consisting of a slew of modifications to the DNA and the histones around which the DNA is wrapped, as well as the 3D organization of the genome in the nucleus, collaborates with DNA binding proteins to accurately interpret sequence information to form a healthy, functional cell. While mutations and/or deletions can be made – more easily, now, with Cas9 – to genomic sequences to test functionality, it is much harder to conduct comparable experiments on the epigenome, especially in a targeted manner. Because of the inability to easily perturb features of the epigenome and observe the consequences, our understanding of it is limited to correlative associations. Distinct histone modifications are associated with active versus inactive genes, for example; but, how these modifications affect or are affected by gene expression changes remains unknown.

 

Taking advantage of the tight binding properties of dCas9, researchers have begun to use the CRISPR protein as a platform to recruit a variety of functionalities to a genomic region of interest. Thus far, this logic has most commonly been employed to activate and/or repress gene expression through recruitment of dCas9 fused to known transcriptional activator or repressor proteins. Using this technique, scientists have conducted high-throughput screens to study the role of individual – or groups of – genes in specific cellular phenotypes by manipulating the endogenous gene locus. And, through a clever extension of the gRNA to include a hairpin bound by known RNA-binding proteins, the targeted functionality has been successfully transferred from dCas9 to the gRNA, allowing for simultaneous activation and repression of independent genes in the same cell with a single dCas9 master regulator – the beginnings of a simple, yet powerful, synthetic gene circuit.

 

Though powerful in its ability to decipher gene networks, dCas9-based activation and repression screens are still gene-centric; can this recruitment technique help us better understand the epigenome? The first attempts at addressing this question used dCas9 to target the histone acetyltransferase p300 to specific loci, catalyzing the acetylation of lysine 27 on histone 3 (H3K27). The presence of acetylated H3K27 (H3K27ac) at gene regulatory regions is known to be strongly associated with active expression of the corresponding gene(s), but the direction of the histone modification-gene expression relationship remained in question. Here, Hilton et al. demonstrate that acetylation of regulatory regions distal to gene promoters strongly activates gene expression, establishing that the modification can be causal.

 

More recently, recruitment of a dCas9-KRAB repressor fusion to known regulatory regions catalyzed trimethylation of lysine 9 on histone 3 (H3K9) at the enhancer and associated promoters, effectively silencing enhancer activity. Though there have only been a few examples published, it will likely not be long until researchers employ this technique for the targeted analysis of additional epigenome modifiers. Already, targeted methylation, demethylation and genomic looping have been accomplished using the DNA binders Zinc Finger Nucleases and TALEs. With the greater simplicity of gRNA design, dCas9 is predicted to surpass these other proteins in its utility to link epigenome modifications with gene expression data.

 

Visualization of Genomic Loci

 

When you treat dCas9 as a bridge between DNA and an accessory protein, just as in the recruitment of activators, repressors and epigenome modifiers, there are few limits to what can be targeted to the genome. Drawing inspiration from the art of observation that serves as the foundation of scientific pursuit, researchers have begun to test whether dCas9 can be used to visualize genomic loci and observe their position, movements, and interactions simply by recruiting a fluorescent molecule to the locus of interest.

 

This idea, of course, is not entirely new. In situ hybridization techniques (ISH, and its fluorescent counterpart, FISH) have been successfully used to label locus position in fixed cells but cannot offer any information about the movement of chromosomes in living cells. Initial studies to conquer this much harder feat made use of long tracts of repetitive DNA sequence bound by its protein binding partner fused to fluorescing GFP; though surely an advance, this technique is limited because of the requirement to engineer the repetitive DNA motifs prior to imaging.

 

To circumvent this need, researchers have recently made use of TALEs and dCas9 (and here) carrying fluorescent tags to image unperturbed genomic loci in a variety of live cell cultures. The catch is that both TALEs and dCas9 perform much better when targeting repetitive regions, such that multiple copies of the fluorescent molecule are recruited, enhancing the intensity of the signal. Tiling of fluorescent dCas9 across a non-repetitive region using 30-70 neighboring gRNAs (a task made much more feasible with CRISPR versus TALEs) can similarly pinpoint targeted loci, albeit with much higher background. As is, the technique lacks the resolution desired for live imaging, but current advances in super-resolution microscopy and single-molecule tracking, as well as improvements in the brightness of fluorescent molecules available, will likely spur improvements in dCas9 imaging in the coming years.
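To give a sense of what tiling a region with gRNAs involves, here is a simplified sketch of the design step: scanning a sequence for SpCas9’s NGG PAM and taking the 20-nt protospacer immediately upstream of each. The sequence is invented, only the forward strand is scanned, and real design pipelines additionally score each candidate guide for specificity.

```python
# Sketch: naive tiling of gRNAs across a genomic region by scanning the forward
# strand for NGG PAMs. The sequence is invented; real pipelines also scan the
# reverse strand and score each guide for off-target potential.
import re

region = ("ATGCGGTTACCGGATCGGAGCTAGCTTAGGCCGATCGGTACCGGTTAGGCATCG"
          "GATTACAGGCTAGCGGATCCGGTAACGGCTAGGATCGGCATGCAGGTTACCAGG")

guides = []
for match in re.finditer(r"(?=([ACGT]GG))", region):
    pam_start = match.start(1)
    if pam_start >= 20:  # need a full 20-nt protospacer 5' of the PAM
        protospacer = region[pam_start - 20:pam_start]
        guides.append((protospacer, pam_start))

print(f"{len(guides)} candidate guides tile this {len(region)}-bp region")
for spacer, pos in guides[:5]:
    print(spacer, "PAM at position", pos)
```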

 

Finally, dCas9 is not only useful in live cells. CASFISH, an updated Cas9-mediated FISH protocol, has been successfully used to label genomic loci in fixed cells and tissue. This updated version holds many benefits over traditional FISH including a streamlined protocol; but, most notably, CASFISH does not require the denaturation of genomic DNA, a necessary step for the hybridization of FISH probes, eliminating positional artifacts due to harsh treatment of the cells. Unfortunately, as of now, CASFISH also suffers from a need for repetitive sequences or tiling of gRNAs to increase signal intensity at the locus of interest.

 

Targeting RNA with Cas9

 

From cutting to tagging to modifying, it is clear that Cas9 has superstar potential when teamed up with double-stranded DNA (dsDNA); however, recent data suggests that this potential may not be limited to DNA. Mitchell O’Connell and colleagues at Berkeley found that Cas9 could bind and cleave single-stranded RNA (ssRNA) when the RNA is annealed to a short DNA oligonucleotide supplying the necessary NGG sequence. In addition, the authors made use of dCas9 and biotin-tagged gRNA to capture and immobilize targeted messenger RNA from cell extract. Though it remains to be shown, this proof-of-principle binding of dCas9 suggests that it is plausible to recruit a variety of functionalities to RNA as has been done for dsDNA. Recruitment of RNA processing factors through Cas9 could potentially enhance translation, generate known RNA editing events (deamination, for example), regulate alternative splicing events, or even allow visualization of RNA localization with conjugated fluorescent molecules. Again, each of these processes requires no modification to the RNA sequence or fixation, both of which can disrupt normal cell physiology.

 

Improving CRISPR Technology

 

The development of CRISPR technology, particularly the applications discussed here, is still in its infancy. It will likely take years of research for Cas9 and dCas9 to reach their full potential, but advances are underway. These developments pertain not only to the applications discussed here, but also to genome engineering.

 

Specificity of Cas9

 

Cas9’s biggest flaw is its inability to stay focused. Off-target (OT) binding (and here) and DNA cutting by Cas9 have been reported, and both present problems. With particular relevance to dCas9-based applications, promiscuous binding of Cas9 to regions of the genome that contain substantial mismatches to the gRNA sequence raises concerns of non-specific activity of the targeted functionality. Efforts to reduce OT binding are needed to alleviate these concerns, but progress has been made with the finding that truncated gRNA sequences are less tolerant of mismatches, reducing off-target Cas9 activity, if not also binding.
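A toy illustration of the mismatch logic behind these concerns, with invented sequences: off-target sites are genomic sequences that differ from the gRNA spacer at only a few positions, and real off-target predictors additionally weight where, relative to the PAM, those mismatches fall.

```python
# Toy sketch: counting mismatches between a gRNA spacer and candidate genomic
# sites. Sequences are invented; real off-target predictors also weight where
# in the spacer a mismatch falls (PAM-proximal mismatches are less tolerated).
def mismatches(guide: str, site: str) -> int:
    return sum(g != s for g, s in zip(guide, site))

guide_20nt = "GACGTTACCGGATCGGAGCT"
candidate_sites = {
    "on-target":    "GACGTTACCGGATCGGAGCT",
    "off-target 1": "GACGTTACCGGATCGCAGCT",  # 1 mismatch
    "off-target 2": "GTCGTAACCGGATCGCAGCT",  # 3 mismatches
}

for name, site in candidate_sites.items():
    print(name, "->", mismatches(guide_20nt, site), "mismatch(es)")

# Truncated guides (17-18 nt) drop PAM-distal bases, where mismatches are most
# tolerated, leaving a pairing that is more sensitive to imperfect sites.
guide_17nt = guide_20nt[3:]
print("truncated guide:", guide_17nt, f"({len(guide_17nt)} nt)")
```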

 

Temporal Precision of Cas9

 

One of the most exciting developments in dCas9 genome targeting is the potential to manipulate the genome and epigenome in select cell populations within a whole animal to gain spatial resolution in our understanding of genome regulation; however, as we have learned over the years, gene expression patterns change not only with space, but also with time. A single cell, for example, will alter its transcriptome at different points during development or in response to external stimulus. The development of split versions of Cas9 (and dCas9), which require two halves of the protein to be expressed simultaneously for function, will not only improve the spatial specificity of Cas9 activity but also holds the potential to restrict its activity temporally. Drug-inducible and photoactivatable (!) versions of split Cas9 restrict function to time windows of drug treatment or light activation, respectively. In addition, a ligand-sensitive intein has been shown to temporally control Cas9 activity by releasing functional Cas9 through protein splicing only in the presence of ligand.

 

Expanding the CRISPR Protein Repertoire

 

Finally, CRISPR technology will likely benefit from taking all of the weight off of the shoulders of Cas9. Progress toward designing Cas9 molecules with altered PAM specificity, as well as the isolation of Cas9 from different species of bacteria, has helped expand the collection of genomic sites that can be targeted. It has also enabled multiplexing of orthogonal CRISPR proteins in a single cell to effect multiple functions simultaneously. More recently, the Zhang lab characterized an alternative class 2 CRISPR protein, Cpf1, isolated from Francisella novicida. Cas9’s new BFF is also able to cut genomic DNA (as shown in human cells), but in a slightly different fashion than Cas9, generating sticky overhangs rather than blunt ends. Cpf1 also naturally harbors an alternate PAM specificity; rather than targeting sequences upstream of NGG, it prefers T-rich signatures (TTN), further expanding the genomes and genomic sites that can be targeted.
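A rough way to see why an alternate PAM matters, using an invented AT-rich stretch of sequence: simply counting candidate NGG versus TTN motifs shows how a T-rich PAM opens up targets in regions where Cas9 has few footholds.

```python
# Sketch: how an alternate PAM expands the targetable sequence space. Counts of
# NGG (Cas9) versus TTN (Cpf1) motifs in an invented AT-rich region.
import re

at_rich_region = "TTATTCATTAGGATTTAATTCTTAGTTATTTACGTTAATTGATTATTTAAGG"

ngg_sites = len(re.findall(r"(?=[ACGT]GG)", at_rich_region))
ttn_sites = len(re.findall(r"(?=TT[ACGT])", at_rich_region))
print("NGG PAMs:", ngg_sites, "| TTN PAMs:", ttn_sites)
```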

 

CRISPR/Cas9 has already proven to be one of the most versatile tools in the biologist’s toolbox to manipulate the genomes of a variety of species, but its utility continues to grow beyond these applications. Targeting Cas9 to the mitochondria rather than the nucleus can specifically edit the mitochondrial genome, with implications for disease treatment. Cas9 has been used for in vitro cloning experiments when traditional restriction enzymes just won’t do. And, by directly borrowing the concept of Cas9 immunity from bacteria, researchers have enabled enhanced resistance to viruses in plants engineered with Cas9 and gRNAs. While we ponder what innovative technique will come next, it’s important to think about how this cutting-edge technology that promises to bolster both basic and clinical research came to be: this particular avenue of research was paved entirely by machinery provided by the not-so-lowly bacteria. That’s pretty amazing, if you ask me.


Taking Genome Editing out of the Lab: Cause for Concern?

By Rebecca Delker, PhD

Genome editing – the controlled introduction of modifications to the genome sequence – has existed for a number of years as a valuable tool to manipulate and study gene function in the lab; however, because of inefficiencies intrinsic to the methods used, the technique has, until now, been limited in scope. The advent of CRISPR/Cas9 genome editing technology, a versatile, efficient and affordable technique, not only revolutionized basic cell biology research but has opened the real possibility of the use of genome editing as a therapy in the clinical setting and as a defense against pests destructive to the environment and human health.

 

CRISPR – Clustered Regularly Interspaced Short Palindromic Repeats – when teamed up with the nuclease Cas9 to form CRISPR/Cas9, serves as a primitive immune system for bacteria and archaea, able to tailor a specific response to an invading virus. During viral invasion, fragments of the invader’s foreign genome are incorporated between the CRISPR repeats, forever encoding a memory of the attack in the bacterial genome. Upon future attack by the same virus, these memories can be called upon by transcribing the fragments to RNA, which, through Watson-Crick base-pairing, guide Cas9 to the viral genome, targeting it for destruction by induced double-strand breaks (DSBs).
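The targeting logic itself is simple enough to sketch in a few lines of code. The sequences below are invented, and the detail that the blunt cut falls roughly 3 bp upstream (5') of the PAM is specific to Cas9-type systems.

```python
# Sketch: how a CRISPR spacer "memory" locates its target in an invading genome.
# Sequences are invented; for Cas9 the blunt cut falls ~3 bp upstream (5') of
# the NGG PAM, within the protospacer.
viral_genome = "TTAGCCGATCGGTACCGGTTAGGCATCGGATGGACAGGCTAGCGGATCC"
spacer       = "GGTACCGGTTAGGCATCGGA"  # 20-nt memory stored in the CRISPR array

start = viral_genome.find(spacer)
if start != -1:
    pam = viral_genome[start + len(spacer): start + len(spacer) + 3]
    cut_site = start + len(spacer) - 3  # ~3 bp 5' of the PAM
    print("protospacer found at", start, "| PAM:", pam,
          "| cut between positions", cut_site - 1, "and", cut_site)
```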

 

While the system is an amazing and inspiring piece of biology in its own right, CRISPR/Cas9’s fame did not skyrocket until the discovery that this RNA/nuclease team could be programmed to target specific sequences and induce DSBs in the complex genomes of all species tested. Of course the coolness factor of CRISPR technology does not end with the induction of DSBs but rather with the use of these breaks to modify the genome. Taking advantage of a cell’s natural DNA repair machinery, CRISPR-induced breaks can be repaired by re-gluing the broken ends in a manner that results in the insertion or deletion of nucleotides – indels, for short – that disrupt gene function. More interesting for genome editing, though, DSBs can also serve as a portal for the insertion of man-made DNA fragments in a site-specific fashion, allowing foreign genes to be added or faulty genes to be replaced.
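And a companion sketch, again with an invented sequence, of why the small indels left by error-prone end-joining so often knock out a gene: any insertion or deletion whose length is not a multiple of three shifts the reading frame downstream of the cut.

```python
# Sketch: why NHEJ-derived indels disrupt genes. A deletion (or insertion) whose
# length is not a multiple of 3 shifts the reading frame downstream of the cut.
# The coding sequence and cut position are invented.
coding_seq = "ATGGCTGGTACCGGTCAGGCATCGTGA"   # toy open reading frame
cut_site = 12                                # where Cas9 left its double-strand break

for deletion in (1, 2, 3):
    edited = coding_seq[:cut_site] + coding_seq[cut_site + deletion:]
    frameshift = (deletion % 3) != 0
    print(f"{deletion}-bp deletion -> {'frameshift' if frameshift else 'in-frame'} "
          f"({len(edited)} bp remaining)")
```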

 

CRISPR/Cas9 is not the first technology developed to precisely edit genomes. The DNA-binding (and cutting) engineered proteins, TALENs and Zinc Finger Nucleases (ZFNs), came into focus first but, compared to the RNA-guided Cas9 nuclease, are just a bit clunky – more complex in design, with lower efficiency, and less affordable. Even prior to these techniques, the introduction of recombinant DNA technology in the 1970s allowed the introduction of foreign DNA into the genomes of cells and organisms. Mice could be made to glow green using a jellyfish gene before the use of nucleases – just less efficiently. Now, the efficiency of Cas9 and the general ease of use of the technology paired with the decreased costs of genome sequencing enable scientists to edit the genome of just about any species, calling to mind the plots of numerous sci-fi films.

 

While it is unlikely that we will find ourselves in a GATTACA-like situation anytime soon, the potential for the application of CRISPR genome editing to human genomes has sparked conversation in the scientific literature and popular press. Though genome modification of somatic cells (the non-reproductive cells of the body) is generally accepted as an enhanced version of gene therapy, editing of germline cells (carriers of hereditary information) has garnered more attention because of the inheritance of the engineered modifications by generations to come. Many people, including some scientists, view this as a line that should never be crossed and argue that there is a slippery slope between editing disease-causing mutations and creating designer babies. Attempts by a group at Sun Yat-sen University in China to test the use of CRISPR in human embryos were referred to by many as irresponsible and their paper was rejected from top journals including Nature and Science. It should be noted, however, that this uproar occurred despite the fact that the Chinese scientists were working with surplus, non-viable embryos from in vitro fertilization and with approval by the appropriate regulatory organizations.

 

Modifying human beings is unnatural; and, as such, seems to poke and prod at our sense of morality, eliciting the knee-jerk response of no. But, designer babies aside, how unethical is it to target genes to prevent disease – the ultimate preventative medicine, if you will? It is helpful to address this question in a broader context. All medical interventions – antibiotics, vaccinations, surgeries – are unnatural, but (generally) their ethics are not questioned because of their life-saving capabilities. If we look specifically at reproductive technology, there is precedent for controversial innovation. In the 1970s when the first baby was born by in vitro fertilization (IVF), people were skeptical of scientists making test-tube babies in labs. Now, it is a widely accepted technique and more than 5 million babies have been born through IVF.

 

Moving the fertilization process out of the body opened up the unique possibility of preventing the transmission of genetic diseases from parent to child. Pre-Implantation Genetic Diagnosis (PGD), the screening of eggs or embryos for genetic mutations, allows for the selection of embryos that are free of disease for implantation. More recently, the UK (although not the US) legalized mitochondrial replacement therapy – a technique that replaces the faulty mitochondria of the parental egg with those of a healthy donor either before or after fertilization. Referred to in the press as the creation of three-parent babies because genetic material is derived from three sources, this technique aims to prevent the transmission of debilitating mitochondrial diseases from mother to child. To draw clearer parallels to germline editing, mitochondria – energy-producing organelles that are the likely descendants of an endosymbiotic relationship between bacteria and eukaryotic cells – contain their own genome. Thus, although mitochondrial replacement is often treated as separate from germline editing because nuclear DNA is left untouched, the genomic content of the offspring is altered. There are, of course, naysayers who don’t think the technique should be used in humans, but largely this is not because of issues of morality; rather, their opposition is rooted in questions of safety.

 

Germline editing could be the next big development in assisted reproductive technology (ART), but, as with mitochondrial replacement and all other experimental therapies, safety is of utmost concern. Most notably, the high efficiency of CRISPR/Cas9 relative to earlier technologies comes at a cost. It has been demonstrated in a number of model systems, including the human embryos targeted by the Chinese group, that in addition to the desired insertion, CRISPR results in off-target mutations that could be potentially dangerous. Further, because our understanding of many genetic diseases is limited, there remains a risk of unintended consequences due to unknown gene-environmental interactions or the interplay of the targeted gene and other patient-specific genomic variants. The voluntary moratorium on clinical applications of germline editing in human embryos suggested by David Baltimore and colleagues is fueled by these unknowns. They stress the importance of initiating conversations between scientists, bioethicists, and government agencies to develop policies to regulate the use of genome editing in the clinical setting. Contrary to suggestions by others (and here), these discussions should not impede the progress of CRISPR research outside of the clinical setting. As a model to follow, a group of UK research organizations have publicly stated their support for the continuation of genome editing research in human embryos as approved by the Human Fertilisation and Embryology Authority (HFEA), the regulatory organization that oversees the ethics of such research. Already, a London-based researcher has requested permission to use CRISPR in human embryos not as a therapeutic but to provide insight into early human development.

 

Much of the ethics of taking genome editing out of the lab is, thus, intertwined with safety. It is unethical to experiment with human lives without taking every precaution to prevent harm and suffering. Genome editing technology is nowhere near the point at which it is safe to attempt germline modifications, although clinical trials are in progress testing the efficacy of ZFN-based editing of adult cells to reduce viral titers in patients with HIV. This is not to say that we will never be able to apply CRISPR editing to germline cells in a responsible and ethical manner, but it is imperative that it be subject to regulations to assure the safety of humans involved, as well as to prevent the misuse of the technology.

 

This thought process must also be extended to the application of CRISPR to non-human species, especially because it does not typically elicit the same knee-jerk response as editing human progeny. CRISPR has been used in yeast and fruit flies to improve the efficiency of so-called gene drives, which bias inheritance so heavily that an inserted gene spreads to nearly all offspring; and gene drives have been proposed for use in the eradication of malaria by targeting the carrier of the disease, the Anopheles mosquito. It is becoming increasingly important to consider the morality of our actions with regard to other species, as well as the planet, when developing technologies that benefit humanity. When thinking about the use of CRISPR-based gene drives to manipulate an entire species, it is of utmost importance to take into consideration unintended consequences to the ecosystem. Though the popular press has not focused much on these concerns, a handful of scientific publications have begun to address these questions, releasing suggested safety measures.
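
The reason a drive raises ecosystem-level concerns is easiest to see with numbers. Below is a minimal sketch, using my own toy model rather than anything from the cited studies, comparing how an ordinary Mendelian allele and a homing gene drive spread under random mating; the homing efficiency `e` is an assumed parameter.

```python
# Minimal toy model: allele-frequency dynamics of a homing gene drive.
# In heterozygotes the drive converts the wild-type allele with probability e,
# so the drive is transmitted with probability (1 + e) / 2 instead of 1/2.

def next_freq(p: float, e: float) -> float:
    """Drive allele frequency in the next generation under random mating."""
    return p * p + 2 * p * (1 - p) * (1 + e) / 2

def simulate(p0: float, e: float, generations: int):
    freqs, p = [p0], p0
    for _ in range(generations):
        p = next_freq(p, e)
        freqs.append(p)
    return freqs

if __name__ == "__main__":
    mendelian = simulate(p0=0.01, e=0.0, generations=12)  # ordinary allele: stays rare
    drive = simulate(p0=0.01, e=0.9, generations=12)      # 90% homing drive: sweeps
    for g, (m, d) in enumerate(zip(mendelian, drive)):
        print(f"gen {g:2d}  mendelian={m:.3f}  drive={d:.3f}")
```

With `e = 0` the allele frequency stays put, as Mendelian inheritance predicts; with high homing efficiency a rare allele sweeps toward fixation within a handful of generations, which is precisely why an accidental or ill-considered release is hard to undo.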

 

There is no doubt that CRISPR is a powerful technology and will become more powerful as our understanding of the system improves. As such, it is critical to discuss the social implications of using genome editing as a human therapeutic and an environmental agent. Such discussions have begun with the meeting in Napa attended by leading biomedical researchers and will likely continue with similar meetings in the future. This dialogue is necessary to ensure equal access to beneficial genome-editing therapies, to develop safeguards to prevent the misuse of the technology, and to make certain that the safety of humans and our planet is held in the highest regard. However, too much of the real estate in today's press regarding CRISPR technology has been fear-oriented (for example), and we run the risk of fueling the anti-science mentality that already plagues the nation. Thus, it is equally important to focus on the good CRISPR has done and will continue to do for biological and biomedical research.

 

We are rapidly entering a time when the genomes of individuals around the world – along with those of many other organisms on the planet – will be sequenced completely; however, sequence alone is just the tip of the iceberg, because the complex translation of a genome into life remains far from understood. For over a decade we have known the complete sequence of the lab mouse, yet our understanding of the cellular processes within this mouse is still growing every day. Thus, there is an important distinction to be made between knowing a DNA sequence and understanding it well enough to make meaningful (and safe) modifications. CRISPR genome editing technology, as it is applied in basic biology, is helping us make this leap from knowing to understanding in order to inform the creation of remedies for diseases that impact people, animals and our planet; and it is doing so with unprecedented precision and speed.

 

We must strike a balance that enables the celebration and use of the technology to advance knowledge, while ensuring that the proper regulations are in place to prevent premature use in humans and hasty release into the environment. Or, as CRISPR researcher George Church remarked: "We need to think big, but also think carefully."

 



Double Strand Breaks For The Win

 

By Rebecca Delker, PhD

The blueprint of an organism is its genome, the most fundamental code necessary for life. The carefully ordered – and structured – composition of As, Ts, Cs and Gs provides the manual that each cell uses to carry out its particular function. As such, unintended alterations to this code often produce devastating consequences, manifesting themselves in disease phenotypes. From mutations to insertions and deletions, changes in the sequence of nucleotides alter the cell's interpretation of the genome, like changing the order of words in a sentence. However, arguably one of the most threatening alterations is the double-strand break (DSB), a fracture in the backbone of the helical structure that splits a linear piece of DNA in two, as if cut by molecular scissors. While the cell has a complex set of machinery designed to repair the damage, this process can be erroneous, generating deletions or, even worse, translocations – permanently reordering the pages of the manual and ultimately transforming the cell. Given the central role translocations can play in oncogenic transformation, DSBs have understandably received a bad rap; but, as can be expected, not all is black and white, and it's worth asking whether there is an upside to DSBs.

 

One commendable use of the DSB serves to expand the capabilities of our genome. While it is true that the genome is the most basic code necessary for life, many of the processes within a cell actually require changes to that code. These can occur at all levels of the Central Dogma – modifications of proteins, RNA, and even DNA. B- and T-lymphocytes, cells that provide a good amount of heft to our immune system, are notable for their DNA editing skills. Tasked with protecting an organism from billions of potential pathogens, B- and T-cells must generate receptors specific for each unique attack. Rather than encoding each of these receptors in the genome – an impossibility due to size restrictions – B- and T-lymphocytes use DSBs to cut and paste small gene fragments to build a myriad of different receptor genes, each with a unique sequence and specificity (reviewed here). For immune cells, and for the survival of the organism, these DSBs are essential. Although tightly controlled, DNA rearrangements in immune cells are mechanistically similar to the sinister DSB-induced translocations that promote cancer formation; however, rather than causing disease, they help prevent it.
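
To get a feel for why cut-and-paste assembly beats encoding every receptor outright, a back-of-the-envelope calculation helps. The sketch below uses approximate, illustrative segment counts for the human antibody heavy and kappa light chain loci; exact numbers vary by annotation, and the junctional diversity added at each joint (not modeled here) contributes several more orders of magnitude.

```python
# Back-of-the-envelope combinatorics of V(D)J recombination.
# Segment counts are approximate, illustrative values only.

heavy = {"V": 40, "D": 25, "J": 6}   # rough functional segment counts, heavy chain
kappa = {"V": 40, "J": 5}            # rough functional segment counts, kappa light chain

heavy_combos = heavy["V"] * heavy["D"] * heavy["J"]
kappa_combos = kappa["V"] * kappa["J"]

print(f"heavy-chain combinations:  {heavy_combos:,}")
print(f"kappa light-chain combos:  {kappa_combos:,}")
print(f"paired receptor combos:    {heavy_combos * kappa_combos:,}")
```

Even this crude count yields over a million distinct receptors from a few dozen gene segments – a repertoire far too large to encode gene by gene.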

 

New research published this summer points to exciting, and even more unusual, uses of DSBs in the regulation of gene expression. In a quest to understand the molecular effects of DSBs that are causally linked to a variety of neurological disorders, Ram Madabhushi, Li-Huei Tsai and colleagues instead discovered a necessary role for DSBs in the response of neurons to external stimuli. To adapt to the environment and generate long-term memories, changes in the "morphology and connectivity of neural circuits" occur in response to neuronal activation. This synaptic plasticity relies on a rapid increase in the expression of a select set of early-response genes responsible for initiating the cascade of cellular changes needed for synaptogenic processes. In their paper published in Cell this summer, the authors reveal that the formation of DSBs in the promoters of early-response genes induces gene expression in response to neuron stimulation.

 

When they treated neuronal cells with etoposide, a drug that inhibits type-II topoisomerase enzymes (TopoII) and causes DSB formation, the researchers expected to find that DSBs interfere with transcription. Indeed, most genes found to be differentially expressed in drug-treated cells showed a decrease in expression; however, a small subset of genes, including the early-response genes, actually increased. Through a series of in vivo and ex vivo experiments, the researchers showed that even in the absence of drug treatment, DSB formation in the promoters of early-response genes is critical for their expression – beautifully weaving a connection between neuronal activation, DSB formation and the rapid initiation of transcription in this subset of genes.
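
For readers unfamiliar with this kind of experiment, the sketch below shows, in purely hypothetical form, how genes are commonly called up- or down-regulated from expression measurements using a log2 fold-change and an adjusted p-value cutoff. The gene names are commonly cited early-response genes, but every number here is invented, and this is not the authors' actual analysis.

```python
# Hypothetical differential-expression calls after drug treatment.
# (gene, mean_control, mean_treated, adjusted_p) -- toy values only.
import math

results = [
    ("Fos",   50.0, 210.0, 1e-4),   # early-response gene: goes up
    ("Npas4", 30.0, 150.0, 5e-4),   # early-response gene: goes up
    ("GeneA", 400.0, 90.0, 1e-3),   # typical affected gene: goes down
    ("GeneB", 120.0, 115.0, 0.6),   # no significant change
]

def classify(control, treated, padj, lfc_cut=1.0, p_cut=0.05):
    """Return ('up' | 'down' | 'unchanged', log2 fold-change)."""
    lfc = math.log2(treated / control)
    if padj >= p_cut or abs(lfc) < lfc_cut:
        return "unchanged", lfc
    return ("up" if lfc > 0 else "down"), lfc

for gene, ctrl, trt, p in results:
    call, lfc = classify(ctrl, trt, p)
    print(f"{gene:6s} log2FC={lfc:+.2f} -> {call}")
```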

 

The serendipitous discovery of the positive effect of etoposide on gene expression led the researchers to focus on the role of topoisomerases, the guardians of DNA torsion, in DSB formation. Because DNA is a helical structure composed of intertwined strands, nuclear processes like replication and transcription over- or under-twist the helix, leading the DNA molecule to twist around itself to relieve the torsional stress and form a supercoiled structure. Topoisomerases return DNA to its relaxed state by generating breaks in the DNA backbone – single-strand breaks by type I enzymes and DSBs by type II – untwisting the DNA and religating the ends. While etoposide can artificially force sustained DSBs, physiological TopoII-induced breaks are typically too transient to allow recognition by DNA repair proteins. The finding that TopoIIb-induced DSBs at the promoters of neuronal early-response genes are persistent and recognized by DNA repair machinery suggests a non-traditional role for TopoII enzymes, and DSBs, in transcription initiation and regulation.

 

In fact, the contribution of TopoII and DSBs to the regulation of neuronal genes may not be so niche. Another study published recently found a similar relationship between transcriptional activation and Topo-mediated DSB formation. Using the primordial germ cells of C. elegans as a model system, Melina Butuči, W. Matthew Michael and colleagues found that the abrupt increase in transcription that occurs as embryonic cells switch from dependence on maternally provided RNA and protein to activation of their own genome induces widespread DSB formation. Amazingly, TOP-2, the C. elegans ortholog of TopoII, is required for break formation; but, in contrast to neuronal activation, these DSBs occur in response to transcription rather than as a causative agent.

 

These recent studies build upon a growing recognition of a potentially intimate relationship between DSBs, torsion and transcription. DNA repair proteins, as well as topoisomerase enzymes, have been shown to physically interact with transcription factors and gene regulatory elements; topoisomerases I and II facilitate the transcription of long genes; and, as in neuronal cells, studies of hormone-induced gene expression in cell culture reveal an activation mechanism by which TopoIIb induces DSBs selectively in the promoters of hormone-sensitive genes. Thus, DSBs may constitute a much broader mechanism for the regulation of gene-specific transcription than previously thought.

 

Given the grave danger associated with creating breaks in the genome, it is curious that DSBs evolved to be an integral component of the regulation of transcription – an inescapable and ubiquitously employed process; however, as we expand our understanding of transcription to include the contribution of the higher-order structure of DNA, the utility of this particular evolutionary oddity comes into focus. Genomic DNA is not naked, but rather wrapped around histone proteins and packaged in the 3D space of the nucleus such that genomic interactions influence gene expression. Changes in the torsion and supercoiling of DNA have been associated with histone exchange, as well as with changes in the affinity of DNA-binding proteins for DNA. In addition, the requirement for topoisomerase during the transcription of long genes arises early, as RNA polymerase transitions from initiation to elongation, suggesting that the role of TopoI and TopoII is not to relieve transcription-induced torsion, but rather to resolve an inhibitory, likely 3D, genomic structure specific to genes of longer length. A similar mechanism may be involved at the neuronal early-response genes. In these cells, genomic sites of TopoIIb binding and DSB formation significantly overlap binding sites of CTCF – a crucial protein involved in genomic looping and higher-order chromatin structure – and, again, DNA breaks may function to collapse a structure constraining gene activation. Whatever the exact mechanisms at play here, these results inspire further inquiry into the relationship between DSBs, genome topology and transcription.
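
The overlap claim above rests on a routine genomics computation: intersecting two sets of genomic intervals. The toy sketch below, with invented coordinates, shows the idea; a real analysis would use genome-wide peak calls (e.g. from ChIP-seq) and test the overlap against a shuffled background for significance.

```python
# Toy interval-overlap count: how many TopoIIB/DSB peaks intersect a CTCF site?
# All coordinates below are invented for illustration.
from collections import defaultdict

def overlaps(a, b):
    """True if half-open intervals (start, end) overlap."""
    return a[0] < b[1] and b[0] < a[1]

def count_overlapping(peaks, sites):
    by_chrom = defaultdict(list)
    for chrom, start, end in sites:
        by_chrom[chrom].append((start, end))
    return sum(
        1 for chrom, start, end in peaks
        if any(overlaps((start, end), s) for s in by_chrom[chrom])
    )

topo_peaks = [("chr1", 1000, 1400), ("chr1", 5000, 5300), ("chr2", 800, 950)]
ctcf_sites = [("chr1", 1350, 1600), ("chr2", 2000, 2100)]

n = count_overlapping(topo_peaks, ctcf_sites)
print(f"{n} of {len(topo_peaks)} TopoIIB/DSB peaks overlap a CTCF site")
```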

 

A cell's unique interpretation of the genome via distinct gene expression programs is what generates cell diversity in multicellular organisms. Immune cells, like B- and T-lymphocytes, are different from neurons, which are different from skin cells, despite working from the same genomic manual. In B- and T-cells, DSBs are essential to piece together DNA fragments in a choose-your-own-adventure fashion, producing a reorganization of the manual necessary for cell function. And, as this growing body of research emphasizes, DSBs function along with a variety of other molecular mechanisms to highlight, underline, dog-ear, and otherwise mark up the genome in a cell-specific manner to facilitate the activation and repression of the correct genes at the correct time. Here, DSBs may not reorder the manual, but they nevertheless play an equally important role in promoting proper cell function.

 



Measuring the Value of Science: Keeping Bias out of NIH Grant Review

 

By Rebecca Delker, PhD

Measuring the value of science has always been – and will likely always remain – a challenge. This task has, however, become increasingly daunting with regard to federal funding via grants, as the number of biomedical researchers has grown substantially while the available funds have contracted. As a result of this imbalance, funding rates for NIH grants, most notably the R01, have dropped precipitously. The most troubling consequences of the current funding environment are (1) the concentration of government funds in the hands of older, established investigators at the cost of young researchers, (2) a shift in the focus of lab heads toward securing sufficient funds to conduct research, rather than the research itself, and (3) an expectation of substantial output, which increases the demand for preliminary experiments and discourages the proposal of high-risk, high-reward projects. The federal grant system has a direct impact on how science is conducted and, in its current form, restricts intellectual freedom and creativity, promoting instead guaranteed, but incremental, scientific progress.

 

History has taught us that hindsight is the only reliable means of judging the importance of science. It was sixteen years after the death of Gregor Mendel – and thirty-five years after his seminal publication – before researchers acknowledged his work on genetic inheritance. The rapid advance of HIV research in the 1980s was made possible by years of retroviral research conducted decades prior. Thus, to know the value of research before publication, or even a handful of years after, is extremely difficult, if not impossible. Nonetheless, science is an innately forward-thinking endeavor and, as a nation, we must do our best to fairly distribute available government funds to the most promising research, while ensuring that creativity is not stifled. At the heart of this task lies a much more fundamental question – what is the best way to predict the value of scientific research?

 

In a paper published last month in Cell, Ronald Germain joins the conversation on grant reform and tackles this question by proposing a new NIH funding system that shifts the focus from project-oriented to investigator-oriented grants. He builds his new system on the notion that the track record of a scientist is the best predictor of future success and research value. By switching to a granting mechanism similar to that of privately funded groups like the HHMI, he asserts, the government can distribute funds more evenly, as well as free up time and space for creativity in research. Under the new plan, funding for new investigators would be tied directly to securing a faculty position by providing universities "block grants" that are distributed to new hires. In parallel, the individual grants of established investigators would be merged into one (or a few) grant(s) covering a wider range of research avenues. For both new and established investigators, the funding cycle would be lengthened to 5-7 years and – the most significant departure from the current system – grant renewal would depend primarily on a retrospective analysis of the work completed during the prior cycle. The proposed granting system rests on the assumption that past performance, with regard to output, predicts future performance. As Germain remarks, most established lab heads trust a CV over a grant proposal when making funding decisions; but it is exactly this component of the proposal – and of our current academic culture – that warrants a more in-depth discussion.

 

Germain is not the first to call into question the reliability of current NIH peer review. As he points out, funding decisions for project-oriented grants are greatly influenced by the inclusion of considerable preliminary data, as well as by form and structure over content. Others go further and argue that the peer review process is only capable of weeding out bad proposals, but fails at accurately ranking the good. This conclusion is supported by studies that establish a correlation between prior publication record, not peer review score, and research outcome. (It should be noted that a recent study following the outcomes of more than 100,000 funded R01 grants found that peer review scores are predictive of grant outcome, even when controlling for the effects of institute and investigator. The contradictory results of these two studies cannot yet be explained, though anecdotal evidence falls heavily in support of the former conclusions.)

 

Publication decisions are not without biases. Journals are businesses and, as such, benefit from publishing headline-grabbing science, creating an unintended bias against less trendy, but high-quality, work. The more prestigious the journal – and the higher its impact factor – the more this pressure seems to come into play. Further, just as there is a necessary skill set associated with successful grant writing that goes beyond the scientific ideas, publication success depends on more factors than the research itself. An element of "storytelling" can make research much more appealing; and human perception of the work during peer review can easily be influenced by name recognition of the investigator and/or institute. I think it is time to ask ourselves whether past publication record is truly predictive of future potential, or whether it simply eases the way to additional papers.

 

In our modern academic culture, the quality of research and of scientists is often judged by quantitative measures that, at times, can mask true potential. Productivity, as measured by the number of papers published in a given period of time, has gained momentum in recent years as a stand-in for the quality of a scientist. As Germain states, a "highly competent investigator" is unlikely "to fail to produce enough … to warrant a 'passing grade'." The interchangeability of competence and output has been taken to such extremes that pioneering physicist and Nobel laureate Peter Higgs has publicly stated that he would be overlooked in today's academia because of the requirement to "keep churning out papers." The demand for rapid productivity and high impact factor has driven an increase in the publication of poorly validated findings, as well as in retraction rates due to scientific misconduct. The metrics currently used to value science are at least as dangerous to the progress of science as the restrictions placed on research by current funding mechanisms.
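
For concreteness, here is a tiny illustrative sketch of what two such metrics actually compute, using an invented publication record; the h-index, mentioned below, is simply the largest h such that h papers each have at least h citations. The point is how little these numbers reveal about the content of the work.

```python
# Two common quantitative metrics, computed from a made-up publication record.
citations = [3, 0, 45, 12, 7, 1, 0, 22]   # invented citation counts, one per paper

papers_per_year = len(citations) / 4       # "productivity" over an assumed 4-year window

def h_index(cites):
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(cites, reverse=True)
    return sum(1 for i, c in enumerate(ranked, start=1) if c >= i)

print(f"papers/year: {papers_per_year:.1f}")
print(f"h-index:     {h_index(citations)}")
```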

 

I certainly do not have a foolproof plan to fix the current funding problems; I don't think anyone does. But I do think that we need to look at grant reform in the context of the larger issues plaguing the biomedical sciences. As a group of people who have chosen a line of work founded on doing/discovering/inventing the impossible, we have taken the easy way out when asked to measure the value of research. Without the aid of hindsight, this task will never be objective, and assigning quantitative measures like impact factor, productivity, and the h-index has proven only to generate greater bias in the system. We must embrace the subjectivity inherent in our review of scientific ideas while remaining careful not to vandalize scientific progress with bias. Measures that bring greater anonymity to the grant review process, along with greater emphasis on qualitative and descriptive assessments of past work and future ideas, will help lessen the influence of human bias and make funding fairer. As our culture stands, a retrospective review process focused on output, as Germain proposes, runs the risk of importing into grant review our flawed, and highly politicized, methods of judging the quality of science. I urge that, in parallel with grant reform, we begin to change the metrics we use to measure the value of science.

 

Though NIH funding-related problems and the other systemic flaws of our culture seem to be at an all-time high right now, the number of publications addressing these issues has also increased, especially in recent years. Now, more than ever, scientists at all stages recognize the immediacy of the problems and are engaging in conversations both in person and online to brainstorm potential solutions. A new website serves as a forum for all who are interested to join the discussion and contribute reform ideas – grant-related or otherwise. With enough ideas, and pilot experiments from the NIH, we can ensure that the best science is funded and conducted. Onward and upward!