The Reading Revolution

For years, scientists have asserted that language is the one characteristic that sets humans apart from animals. The ability to speak and communicate is believed to have emerged around 50,000 years ago, alongside the development of tools and an increase in brain size. Scientists have identified Broca’s and Wernicke’s areas as regions associated with language, but the ability to read is a little more perplexing. Written language has not been around long enough to drive evolutionary change, and yet we know that something happens to our brains when we read.

A few years ago, Stanislas Dehaene, a cognitive neuroscientist at the Collège de France, teamed up with colleagues to conduct a study on 63 volunteers – 31 who had learned to read in childhood, 22 who had learned as adults, and 10 who were illiterate. The team found that those who could read demonstrated a stronger response to the written word in several brain areas, including regions of the left hemisphere associated with spoken language.

However, that’s not all. Recently it was found that reading doesn’t just activate regions like Broca’s and Wernicke’s areas, but also other areas associated with the content of what you are reading. A word such as ‘cinnamon’ activates areas of the brain associated with smell. Apparently, the brain doesn’t make much of a distinction between reading about something and actually experiencing it. Reading tricks the brain into thinking it is doing something it’s not – a phenomenon called embodied cognition. As we read more and more, the brain uses the experiences and stories we absorb to understand emotions and social situations, constructing a ‘map’ of others’ intentions known as ‘theory of mind.’

An even more recent study, published a few months ago, tested 19 subjects as they read the novel Pompeii by Robert Harris. fMRI scans conducted over the course of 19 days showed that, after the reading, there was an increase in connectivity between the left angular/supramarginal gyri and the right posterior temporal gyri, regions associated with perspective-taking and comprehension. These effects seemed to peak soon after the reading and faded with time, though more lasting changes were observed in the bilateral somatosensory cortex. Despite the study’s interesting implications, people were too quick to publicize its results.

It’s easy to claim that x changes the brain in y number of ways, but this is a simplification of a very complex system that we still do not completely understand. While it may be claimed that a certain part of the brain ‘lights up’ during a specific activity, most of the brain is already busy with activity – a scientist can really only observe additional activity in that area. Not only did the study have too small a sample size and neglect to include a control group, but it also failed to clearly distinguish between the situations the subjects were experiencing. How did the researchers know that it was the reading that was making these changes? It is possible that these changes occurred because of the testing environment the subjects shared for those 19 days.

Of course, I’m sure that the researchers had other methods to verify that the observed activity correlated with reading, but you can see why I’m not impressed by their conclusions. It’s been fairly obvious for a few years now that reading is a workout for the brain, with long-lasting benefits that go beyond basic language acquisition – there had to be a reason my parents always told me to turn off the TV and go read a book.

I don’t think we should be asking whether reading rewires the brain. Maybe, instead, we should be asking: does the way I read affect my brain? Does the way I process information change if I read a paperback rather than a digital version?

Before 1992, studies showed that people read more slowly on screens, but since then, results have been more inconclusive. At a surface level, screens can drain mental resources and make it harder for us to remember things after we’re done. People also approach technological media with a state of mind that is less conducive to learning, even if only subconsciously. Reading a paperback sometimes just feels more real – though we may understand writing and language as abstract phenomena, to our brains, reading is part of the physical world.

As mentioned before, the brain is not hardwired for reading – the written word was only invented around 4,000 BC, and since then, our brains have had to repurpose some of their regions to adapt. Some of these regions excel at object recognition, helping us understand how the line strokes, curves, and shapes of a letter correspond to a certain sound, or how these letters, joined together, create a word. Other regions, perhaps more importantly, construct a physical landscape of a text, in the same way that we construct mental representations of terrain, offices, or homes. When remembering certain information from a text, we often remember where in the text it appeared; in my copy of Pride and Prejudice, I can remember very clearly that Darcy professed his love for Elizabeth in the middle of a left-hand page.

In this context, paper books have a more obvious topography than virtual books. A paperback is a physical, three-dimensional object, whereas a virtual book is just that – virtual. A paperback has a left side and a right side, and eight corners with which readers can orient themselves. One can see where a book begins and where it ends. One can flip through the pages of a book and gauge its thickness. Even the process of turning pages creates a lilting rhythm, leaving a ‘footprint’ on the brain.

Screens and e-readers, on the other hand, interfere with the brain’s ability to construct such a mental landscape. Imagine using Google Maps, but only in street view, walking through each street one at a time, unable to zoom out and see the whole picture at once. This is similar to what we experience when trying to navigate virtual documents.

Additionally, e-readers interfere with two key components of comprehension: serendipity and a sense of control. Readers often feel that a specific sentence or section of a book reminds them of an earlier part, and they want to flip back and read that part again. They also like to be able to highlight, underline, and write notes in a book. Thus, while reading a paperback engages the tactile, auditory, visual, and olfactory senses, virtual text requires only one: vision.

Erik Wästlund, a researcher in experimental psychology, has conducted rigorous research on the differences between screen and paper reading. In one experiment, 72 volunteers completed the READ test, a 30-minute Swedish-language reading comprehension test. People who took the test on a computer scored lower, on average, and reported feeling more tired than those who took it on paper, suggesting that screens can also be more mentally and physically taxing.

The problem is that people take shortcuts when reading on a device, scanning the text and using the search tool to locate specific keywords instead of reading the document straight through. They are also less likely to engage in metacognitive learning regulation when reading on screens – strategies such as setting specific goals, rereading difficult sections, and testing oneself along the way on how well the material has been understood.

So far, e-readers have been trying to copy the paperback: e-readers reflect ambient light just like books do, there are ways to bookmark and highlight text, and some have even added a ‘depth’ feature that makes it seem as though piles of pages sit on the left and right sides of the screen. Even so, many will agree with me when I say – it’s just not the same.

There are certain advantages to virtual text and media presentation that have not been fully realized yet. A few apps on the market are trying to revolutionize the way we take in information, such as Rooster, which divides books into manageable 15-minute sections to read on the way to the office, or Spritz, which streams content one word at a time, positioned around the eye’s Optimal Recognition Point. And yet, content production still revolves around the same models that have been in circulation for hundreds of years.
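For a sense of how this works in practice, here is a toy sketch of Spritz-style word streaming – my own approximation, not Spritz’s actual algorithm; the fixation-letter heuristic (about a third of the way into each word) and the 250-words-per-minute default are assumptions of mine:

```python
# Stream text one word at a time, aligning a fixation letter in a fixed
# column -- a crude stand-in for Spritz's Optimal Recognition Point.
import time

def stream(text: str, wpm: int = 250) -> None:
    delay = 60.0 / wpm                      # seconds per word
    for word in text.split():
        orp = max(0, (len(word) - 1) // 3)  # fixation letter ~1/3 into the word
        # Pad so the bracketed fixation letter always lands in the same column.
        print(" " * (15 - orp) + word[:orp] + "[" + word[orp] + "]" + word[orp + 1:])
        time.sleep(delay)

stream("Reading one word at a time can feel surprisingly fast")
```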

Learning is most effective when it engages different regions of the brain and connects different topics. Some tools have the right idea – the popular Scale of the Universe feature uses the scroll bar to communicate an idea that could not have been conveyed as effectively on paper – but interactive media still hasn’t reached its full potential.

So maybe people have the wrong idea in trying to replicate the experience of a paperback. Maybe we should be heading in a completely different direction.


MIT Scientists Produce Hybrid Material

According to a paper published in Nature Materials, scientists at the Massachusetts Institute of Technology (MIT) have combined inorganic matter with living cells, using E. coli bacteria to develop a material that has properties of both living and non-living things.

Led by Timothy Lu, a professor of electrical engineering and biological engineering, the MIT researchers used E. coli bacteria to produce biofilms (communities of microorganisms whose cells stick to a surface through adhesion) bearing “curli fibers,” which are composed of protein subunits called CsgA. CsgA helps bacteria attach to surfaces, allowing them to grip non-living materials.

(E. coli bacteria, used to produce the biofilms that carry the curli fibers. Wikipedia)

MIT News revealed that the researchers manipulated these E. coli cells to create various curli fibers. Since CsgA is only produced under specific conditions, the researchers engineered a strain in which CsgA is made only in the presence of a signaling molecule called AHL. With AHL present, the cells secreted CsgA, and by controlling the amount of AHL, the researchers could control the production of curli fibers. In addition, the researchers wanted curli fibers that could specifically grab onto gold nanoparticles, so they produced a version of CsgA tagged with peptides containing clusters of the amino acid histidine. This particular CsgA is only produced in the presence of a second molecule, called aTc. By manipulating the amounts of AHL and aTc, the researchers ensured that a sufficient amount of the curli fibers could grab onto the gold nanoparticles.
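In control terms, the system is two independent chemical switches. Here is a minimal sketch of that logic – my own illustration, not the authors’ model, and the linear dose-response is an assumption:

```python
# Toy model of the two-inducer control described above: AHL switches on
# plain CsgA, while aTc switches on the histidine-tagged CsgA that grips
# gold nanoparticles. Inducer levels set the fiber composition.
def curli_composition(ahl: float, atc: float) -> dict[str, float]:
    plain = ahl     # plain CsgA output, assumed proportional to AHL
    tagged = atc    # CsgA-His output, assumed proportional to aTc
    total = plain + tagged
    if total == 0:
        return {"CsgA": 0.0, "CsgA-His": 0.0}
    return {"CsgA": plain / total, "CsgA-His": tagged / total}

# Equal inducer levels give a 50/50 mix of plain and gold-binding fibers.
print(curli_composition(ahl=1.0, atc=1.0))  # {'CsgA': 0.5, 'CsgA-His': 0.5}
```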

In the end, the scientists were able to produce a network of gold nanowires, which can conduct electricity.

The scientists also produced another type of curli fiber that attaches to a protein dubbed SpyCatcher. When the researchers added quantum dots (semiconductor nanocrystals), these curli fibers were able to grab onto them. By growing the different strains of E. coli together – each containing a slightly different curli fiber – the scientists created a material containing both gold nanoparticles and quantum dots.

“It’s a really simple system but what happens over time is you get curli that’s increasingly labeled by gold particles. It shows that indeed you can make cells that talk to each other and they can change the composition of the material over time. Ultimately, we hope to emulate how natural systems, like bone, form. No one tells bone what to do, but it generates a material in response to environmental signals,” Lu told MIT News.

The recent discovery of how to synthesize hybrid materials that integrate living cells with everyday applications gives hope for future developments such as computers, biosensors, and biomedical devices. This research suggests that a system – maybe a smartphone one day – could recognize its own defects and respond by repairing itself. Solar cells, which convert light into electricity, might also use the self-healing quality of living cells to produce solar panels that repair themselves.

Now the researchers want to explore this mechanism further by coating the biofilms with enzymes that catalyze the breakdown of cellulose, to see whether the system could convert agricultural waste into biofuels. Who knows what else is in store for these hybrid materials!

Curing the Silicon Addiction

Every few years, we demand that the next iteration of phones, computers, and tablets be faster than the last. What we fail to think about is that each new iteration requires a technological innovation: someone in a research lab has to create a new and better way of making CPUs. At its most basic level, this means packing more transistors onto the same chip. By Moore’s Law, the number of transistors packed onto one chip doubles every 2 years. Although this isn’t a law of nature, it has become a benchmark for the processor industry, and everyone expects that it will hold. It has worked for the past 30 years, but we are fast approaching the limit of how small traditional transistors on silicon can get. We can buy MOSFETs (the transistor architecture used in all modern processors) that are just 32 nm across. For perspective, that means 2.6 trillion transistors could fit in the palm of your hand. What happens now? Can transistors get much smaller than they are now? What will processors look like in 20 years? In short, the future can’t be silicon based.
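That palm-of-your-hand figure survives a back-of-the-envelope check – my own arithmetic, assuming each transistor occupies an idealized 32 nm × 32 nm square and a palm area of about 27 cm²:

```python
# Sanity-check the "2.6 trillion transistors in your palm" claim.
TRANSISTOR_WIDTH_M = 32e-9        # 32 nm feature size, from the text
PALM_AREA_M2 = 27e-4              # ~27 cm^2 palm area (my assumption)

area_per_transistor = TRANSISTOR_WIDTH_M ** 2   # idealized 32 nm square
count = PALM_AREA_M2 / area_per_transistor
print(f"{count:.1e} transistors")               # ~2.6e+12, i.e. 2.6 trillion
```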

Moore’s Law, showing transistor counts doubling every 2 years. (Wikipedia)

As transistors get smaller and smaller, many problems arise if we stick with traditional architectures on silicon. Firstly, there is a “leakage current” that flows when the “off” state isn’t completely off. This is a bigger problem with small MOSFETs, where the threshold voltage (the voltage at which the transistor switches from “on” to “off”) is very small and can barely be separated from random thermal noise. Even if the leakage current is a modest 100 nano-amperes (nA) per transistor, this adds up quickly; for a modern cell phone, it amounts to a current of 10 amperes, which would drain a cellphone battery within minutes. As transistors shrink, this leakage current becomes an even bigger problem as other effects, like quantum tunneling, come into play. The oxide layer between the gate and the channel can be so thin (down to ~2 nm) that electrons simply “tunnel” across the junction, increasing the leakage current. This current also produces heat, which processors have to dissipate. All in all, the more transistors you have and the smaller they are, the harder these effects are to deal with.
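The 10-ampere figure implies a chip with on the order of 10^8 transistors – a quick sanity check, with the transistor count and battery capacity as my own assumptions:

```python
# Total leakage current and the battery life it implies.
LEAKAGE_PER_TRANSISTOR_A = 100e-9   # 100 nA, from the text
N_TRANSISTORS = 1e8                 # assumed count for a phone processor
BATTERY_CAPACITY_AH = 2.0           # assumed ~2 Ah phone battery

total_leakage_a = LEAKAGE_PER_TRANSISTOR_A * N_TRANSISTORS
print(f"{total_leakage_a:.0f} A")   # 10 A
print(f"{BATTERY_CAPACITY_AH / total_leakage_a * 60:.0f} minutes")  # ~12 minutes
```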

The evolution of MOSFET architecture. (Nature)

To combat these problems, the next generation of processors uses complex geometries to minimize these effects. Intel has produced 3-D transistors with unusual-looking geometries that fix the problem, at least temporarily. But what happens when even these are not sufficient? What does the future hold?

In the future, we need to cure our addiction to silicon transistors. The more complex geometries have their own problems, and can’t be relied on down to arbitrarily small dimensions. If we want Moore’s Law to continue for years to come, we need new, exotic materials and topologies. Several emerging technologies hold promise.

One of the most promising is the Carbon Nanotube Field Effect Transistor (CNTFET). Carbon nanotubes would be placed on a silicon substrate and plated with metals to form a transistor. A nanotube is a much better conductor than copper, causing fewer heat dissipation problems. It also doesn’t suffer the same threshold voltage and leakage current problems, so it can be scaled much more easily than traditional silicon transistors. IBM has demonstrated a computer using 10,000 of these transistors, and researchers at Stanford and other schools continue to work on this new topology. The basic technology is here, but it needs to be scaled up and made reliable enough to meet consumer and commercial demand.

A depiction of a simple CNTFET. (Infineon)

Another solution to the silicon addiction lies in putting one of the problems themselves to work. The 1973 Nobel Prize in Physics was awarded to Leo Esaki for his invention of the tunneling diode and his discovery of the electron tunneling effect, but only in the last few years have we been able to make a transistor out of it. Tunneling is a very strange effect in which, if an energy barrier is small enough, an electron can simply pass through it. It can be thought of like this: in the classical picture, to roll a ball over a hill, you have to roll it up and then let it roll back down; the energy of the system ends up the same, but you needed extra energy to get over the hill. With quantum tunneling, the ball can pass straight through the hill if the hill is small enough. A few teams have made this approach work with materials such as aluminum gallium compounds and alloys of indium, gallium, and arsenic. These “TFETs” (tunneling field effect transistors) have few of the problems of traditional silicon transistors: they have little leakage current, their threshold voltage is stable, and they don’t heat up significantly.
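To see why barrier width matters so much, here is a rough estimate using the textbook rectangular-barrier approximation T ≈ e^(−2κL) – my own sketch, not from the article, and the 1 eV barrier height is an assumption:

```python
# WKB-style tunneling probability through a rectangular barrier.
import math

HBAR = 1.055e-34   # reduced Planck constant, J*s
M_E = 9.11e-31     # electron mass, kg
EV = 1.602e-19     # joules per electron-volt

def tunneling_probability(barrier_ev: float, width_m: float) -> float:
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR  # decay constant, 1/m
    return math.exp(-2 * kappa * width_m)

# Halving an (assumed) 1 eV barrier's width from 2 nm to 1 nm raises the
# tunneling probability by more than four orders of magnitude.
print(f"{tunneling_probability(1.0, 2e-9):.1e}")  # ~1.3e-09
print(f"{tunneling_probability(1.0, 1e-9):.1e}")  # ~3.6e-05
```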

A depiction of how quantum tunneling works. In the classical approach (top), the electron cannot pass over the energy barrier, but with the quantum approach (bottom), the electron has the ability to pass through. (IEEE)

These are only a few of the promising technologies in the field of transistor architecture. To solve the problems we have with our current transistors, we will need to kick the silicon addiction and adopt a novel technology. It remains to be seen which technology will be adopted, and what the next generations of computers will look like.

Reconstructing Memories

On September 11, 2001, when I was seven years old, I sat in an elementary school classroom, watching footage of a plane crashing into the Twin Towers on a small television screen. My mother tells me she also remembers exactly what she was doing when the world found out about Princess Diana’s death. Nearly everyone in the country who lived through the 9/11 attacks remembers what they were doing the moment the news broke. Most people have what Dr. Karim Nader calls “flashbulb memories” of what they were doing when something significant happened, but as vivid as these memories can be, psychologists find that they are often surprisingly inaccurate.

As it turns out, my memory of the 9/11 attacks is almost entirely wrong, because the footage of the plane crashing into the towers wasn’t even aired until the following day. But I’m not alone – a 2003 study of college students found that 73% held the same misconception that the footage was shown on the day of the attacks. The problem is that it is nearly impossible for humans to recall memories without altering them in some way. Momentous occasions like 9/11 are especially susceptible because we replay them over and over, in our minds and in conversation, mixing in other memories and the memories of the people we interact with.

Recording a memory, as Eric Kandel (Nobel Prize winner in Physiology or Medicine) found, requires adjusting the connections between neurons. Over five decades of work, Kandel took a reductionist approach, using animal models. Though others were skeptical, he argued that there was no fundamental difference between the neurons and synapses of a human and those of a fly, and studied nerve connections in a giant marine snail, Aplysia. He showed that memory storage has two phases: short-term and long-term. Long-term memory differs from short-term memory in that it requires the synthesis of new proteins and the growth of new synaptic connections. For a memory to last longer than a few hours, it must literally be built into the brain’s synapses. Kandel thought that once a memory was constructed, it was stable and couldn’t be undone.

Recent experiments, however, tell a very different story. Psychologist Alain Brunet tested patients suffering from PTSD who had each experienced a traumatic event about a decade earlier. In a therapy session, nine patients took a propranolol pill (a drug that inhibits the action of the neurotransmitter norepinephrine and is intended to interfere with memories), while ten took a placebo. Each patient read aloud from a script, based on a previous interview, describing their traumatic experience. A week later, the patients listened to recordings of themselves reading the scripts. Those who had taken propranolol were calmer – their heart rates rose less and they perspired less. The treatment didn’t actually erase the memory – it just changed the quality of that memory. This has important implications for the clinical treatment of nervous disorders.

So why are memories susceptible to change? Why is something that came about as an evolutionary advantage so unreliable? Dr. Karim Nader speculates that reconsolidation could be the brain’s mechanism for recasting old memories in a new light. Adjusting our memories is what keeps us from living in the past.

But that’s not all – new research from the Emory University School of Medicine shows that it is possible for some information to be biologically inherited through chemical changes in DNA. Not only do memories have the capacity to change and evolve, but they also seemingly have the ability to alter our genetic makeup.

In the study, mice were trained to fear the smell of a compound, acetophenone, by exposing them to the smell and then treating them with a mild electric shock (a previous study had already established that this kind of learning significantly changes the structure of olfactory neurons in mice). Ten days later, the mice mated, and neurological changes were observed in both the offspring and the grand-offspring, even when the generations were kept separate to rule out behavioral transmission. The offspring proved to be 200 times more sensitive than other mice to the smell of acetophenone. And when the researchers studied the sperm of the parents and the offspring, the section of DNA responsible for sensitivity to that particular scent was found to carry distinctive chemical marks.

It is unclear how this fear was passed down, or even how it came to be imprinted in the parent’s DNA in the first place. However, it is possible that these findings could explain irrational phobias. A fear of spiders may be an inherited defense mechanism present in a family’s genes by virtue of an ancestor’s frightening encounter with an arachnid.

It is still too early to tell what the greater implications of this breakthrough could be. What we do know is that the old notion that memory – and, by extension, learning – is static is entirely wrong. Not only are we constantly constructing and reconstructing memories through neuronal connections and synapses, but our memories are also changing our internal structure, influencing the way we think and the things we choose to remember. As our memories change, the way our bodies react changes, which in turn affects our memories again. It’s a spiraling cycle that never ends.


Are You Nothing More (Or Less) Than A Soft Machine?

‘Man as Industrial Palace’ (1926), Fritz Kahn

There is something tantalizingly romantic to me about the objectivity of science. There is something about how the structure of the tail of a twirling galaxy, and that of a hurricane whirling around its eye, is fundamentally the same. One could even say these entropic laws provide the crystal resolution of an inevitable architecture at any, and every, scale.

The more I let this sink into my ever-wavering understanding of what it means to exist in an empiric landscape, the more I become haunted by what it means – to be. If you choose the survival of genetic material as the final sum, you will find that every variable manifested in an organism’s behavior adds up to reach that sum.

How would we as humans fall outside this equation? Yet even as fields like neuroscience disprove the centuries-old Cartesian view that mind is separate from body, I still get the sense that most of us hold that our consciousness, our character, our souls are indeed some intangible entity separate from the physical world. But what if they weren’t?

‘Visual Field’ XKCD

What if we are simply machines programmed by physics and chemistry and biology, which churn out outputs like society, which in turn further programs us as individuals for the sole output of survival and reproduction? To illustrate the synthesis of external information into the fabric we call consciousness, take one of my favorite subjects: color. From physics we know that what we perceive as a color is the reflected wavelength of visible light hitting an object. From chemistry we know that the pigments determining which wavelengths are absorbed by the object do so because of the specific length, strength, and angle of their molecular bonds. From biology we can see how the reflected light progresses through the incredible optical structures of our eyes. From neuroscience we can explain how the resulting electrical signals travel along the optic nerve to the occipital lobe of the brain, where we finally perceive the color. We can even take it a step up to psychology and how we respond to certain colors – for instance, the allure of red. Red cheeks and red lips signify reproductive health, which leads to individuals with these qualities being considered attractive. Even sociologically, the color red carries significant meaning in everything from aboriginal face paint to marketing campaigns, thanks to its subliminally arousing effect. This was somewhat of a tangent, but I’d like to think of it as a micro example of the linear progression from matter to our sense of being.

Then there are other examples in evolutionary biology. We have internal clocks synchronized to the rotation of the planet, which ensure our bodies restore the energy they require to function. We fear heights, spiders, and enclosed spaces because they threatened our early ancestors. We’ve evolved to enjoy the taste of sweet and fatty things to better recognize the nutrition that will fuel our bodies. We’ve evolved memory in order to recall successful locations of food and shelter. We’ve developed “love” as a mechanism to reproduce and then raise offspring. There are even statistically significant trends of people resembling their romantic partners, because we have an innate attraction to those who are similar to us, due to their presumably similar genetic material. We develop the ability to empathize with others in order to live in a community, which allows for the specialization of skills and improves the overall chance of survival for the entire community. We devise political and economic systems to navigate those societies. We fight and argue to establish dominance over potential mates, territory, and resources (all survival tools). The list goes on. If you begin looking through the lens of an evolutionary biologist, you may find yourself unable to stop.

So maybe we are machines.

Yet with this logic, I have to wonder: why aren’t we perfect? Why aren’t we flawless adaptors to our changing environment? Why are there some behaviors that don’t seem to be at all productive for survival?

My favorite example is unrequited love.

In this mechanism that plagues a majority of the human population at some point in their lives, an individual sustains intense grief, loss, anger, and depression over an unsuccessful pursuit of a partner (i.e., a mate). If we were perfect machines, with a single output for all inputs, this behavior would be completely useless. In this (sometimes drawn-out) period of self-indulgent, self-induced mourning, we could instead be seeking out other, more suitable mates to continue the quest of reproduction.

It was here that I began to synthesize the hypothesis that these evolutionary imperfections are what distinguish us from machines. Sure, all the well-oiled, linearly advantageous actions could fit neatly into the “computer-istic” model. But we’d be omitting the data on unrequited loves, and on the occasional drives that surpass the limits of beneficial behavior – like how we wage wars that end up killing masses of our fellow species, or how we allow ourselves to follow a leader so far (say, in a cult) that we become brainwashed, or how, as a nation, we over-consume macronutrients to the point of medical catastrophe.

That is, our defects, our foibles, our weaknesses, our failures to conform to the deterministic direction of survival are what make us, dare I say, fundamentally human.


To know where we are with Geographic Information Systems is to understand who we are.

“A place is what it is because of its location. Where we are is who we are.”

Portuguese poet Fernando Pessoa did not take geography for granted. He understood that a place is a space with an identity. Throughout his work, Pessoa created multiple personalities to write his poetry – so much so that his literary genius was only recognized after his death, when it was discovered that he alone had been several of the country’s greatest poets.

All this musing is good, but what of it?

The same way Pessoa was many poets from a single place, geography is many things at once. Your experience on Earth is a lottery of geographic variables: the human development index of your place of birth can determine your overall quality of life; the zip code of your residence, the schools you’re eligible to attend; and your latitude, the type of weather you need to prepare for. Your geography is, in many ways, your future. Which leads to the topic at hand:

You’re permeated with spatial data.

Enter Geographic Information Systems (GIS): a set of tools and methodologies for making sophisticated decisions based on geography. Geospatial data is becoming increasingly reliable, voluminous, and accessible – and it is data that you’re both passively and actively creating. For example, the route you take to work may be a conscious choice, but the energy consumed by that transportation may not be. The spatial data you produce is multilayered and connected, and you actively use that information in your daily life. In other words, we intuitively use GIS to navigate our world.

What needs to be emphasized is the use of GIS technology and methodology to address the world’s greatest trials: rising seas, food waste, urban inequity, resource management, and so on. A greater awareness of the space we occupy can in itself produce innovative solutions. Better yet, GIS is an interdisciplinary tool: virtually all knowledge is bound to a spatial and temporal clause. Understanding those conditions enables us to become better decision makers, with greater foresight than previous generations.

As early as 1854, geospatial analysis was used to solve public health problems. You may have heard of Dr. John Snow, the father of modern epidemiology, and of his work on the cholera epidemic in London. Dr. Snow surveyed the infected district and found that cases of the outbreak clustered around a particular water pump. Because he mapped his fieldwork data, he was able to identify a spatial pattern and implement a solution that curbed the epidemic. While cholera and other waterborne diseases are still a threat in large parts of the world, GIS can now address those problems on much larger scales.
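The core of Snow’s method fits in a few lines: assign each case to its nearest pump and look for the cluster. The sketch below is my own toy illustration with made-up coordinates, not Snow’s actual data:

```python
# Assign each cholera case to its nearest pump and count cases per pump.
from collections import Counter
from math import dist

pumps = {"Broad St": (0.0, 0.0), "Rupert St": (1.2, 0.8), "Warwick St": (-0.9, 1.1)}
cases = [(0.1, -0.2), (0.2, 0.1), (-0.1, 0.0), (1.1, 0.9), (0.05, 0.15)]

counts = Counter(min(pumps, key=lambda name: dist(pumps[name], case)) for case in cases)
print(counts.most_common())  # [('Broad St', 4), ('Rupert St', 1)]
```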

The world is changing because we wish to shape it in our image, and we must prepare for the unintended consequences of our visions.

The beauty of geography is that it’s a discipline with no boundaries. City planners use historic traffic data to make cities more sustainable by redesigning traffic grids. Architects and engineers cooperate to choose the most suitable locations for buildings with solar grids. Disaster managers mitigate the worst of a bad situation before it happens by identifying areas out of reach of emergency services. The list goes on: climatologists analyze multispectral images to gauge the effects of CO2 on the Earth’s systems, archaeologists conduct transects to locate and map excavation areas, and educators teach their students about issues in their own communities.

Geographic Information Systems, and more broadly the Geographic Information Sciences, are essential to mapping out what the future holds for our planet. We live in paradoxical times, where the distances between places are narrowing through communication and trade, yet widening in terms of social equity and environmental degradation. Globalization, loaded word that it is, cannot be remotely understood until we understand where we are relative to other places.

For a place is an identity. Understanding how it’s changing now will also help us understand who, or what, we are becoming.


Reading Between the Atoms – Writing on the Nanoscale

“Why can’t we write the entire twenty-four volumes of the Encyclopedia Britannica on the head of a pin?”

Richard Feynman offered up this daunting challenge (with a rather paltry $1,000 prize) at his famous 1959 Caltech lecture “There’s Plenty of Room at the Bottom” – a seminal event in the history of nanotechnology. In 1985, Tom Newman, then a Stanford grad student, completed the challenge, reducing the first paragraph of A Tale of Two Cities to the necessary 1/25,000 of its size using a focused beam of electrons to carve out the lettering.

IBM logo written with 35 xenon atoms
The initials for Stanford University are written in electron waves

Since then, nanoscale writing has become something of a scientific competition (who can make it smaller?) as well as an interesting marker of progress in nanotechnology since Feynman’s vision over 50 years ago. In 1989, a team at IBM precisely arranged 35 xenon atoms to spell out the company logo. The current record holders are a Stanford team who, in 2009, manipulated subatomic features on the order of 0.3 nanometers to spell out the university’s initials. They used a scanning tunneling microscope to position single carbon monoxide molecules on a sliver of copper. Electrons in the copper bounce around, spreading out as waves and rebounding off the carefully placed carbon monoxide molecules, and the specific interference patterns of these electron waves encode the letters. Hari Manoharan, author of the paper, put the accomplishment in perspective: “One bit per atom is no longer the limit for information density. There’s a grand new horizon below that, in the subatomic regime. Indeed, there’s even more room at the bottom than we ever imagined.”

Yet Feynman’s full challenge – writing out an entire book – remains unmet. The techniques described above are laborious, costly, and above all painfully slow, because they manipulate single atoms one at a time. The young field of DNA nanotechnology offers a possible solution.

Most people are surprised to learn that DNA makes an effective building material at the nanoscale; yet nucleic acids are life’s information storage molecule of choice. The key to building small is encoding the assembly information into the molecules themselves rather than using external forces to arrange them. The main advantage of DNA is that the sequence of nucleotides in a strand can be precisely controlled and the pairing rules are well understood. Moreover, strands of DNA with complementary sites will spontaneously assemble into the most favorable structure.
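Those pairing rules are simple enough to state in code. A minimal illustration (my own, not a tool from the research): each strand spontaneously binds the reverse complement of its sequence.

```python
# Compute the strand a given DNA sequence will pair with (A-T, G-C).
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    return "".join(COMPLEMENT[base] for base in reversed(seq))

staple = "ATGCCGTA"                 # a hypothetical 'staple' sequence
print(reverse_complement(staple))   # TACGGCAT -- the site this strand binds
```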

Shapes constructed from coiled strands of DNA

In a 2006 Nature article, Paul Rothemund coined the fanciful term “DNA origami” to describe his successful manipulation of DNA strands into a variety of shapes. His revolutionary technique used a single long strand of genomic DNA from a bacteriophage, which was coiled, twisted, and stacked by small custom “staple” strands that bind to complementary sites. By winding the long strand back and forth with staple strands, Rothemund folded six different 2-D shapes, including squares, triangles, and stars – demonstrating that the technique applies to essentially any shape within a 100 nm diameter.

Rothemund’s most significant achievement was developing a method to pattern the 2-D sheets. He designed staple strands that stick up from the flat lattice, increasing the height of the nanostructure at desired locations. A world map, as well as the word “DNA,” was successfully created and visualized. Perhaps the most surprising part of the method is its simplicity: staple strands are mixed with the long strands, heated for two hours, and voilà! The staple strands can be designed with computer algorithms and synthesized relatively cheaply, which makes scaling up easy. How appropriate that one day humanity’s knowledge may be recorded in the same molecules that encode our lives.

Computer models and actual electron microscope images of 2D patterned DNA sheets

Evolution and Consciousness

Archaeopteryx, found in 1861, was the first transitional fossil discovered to suggest intermediate forms between feathered dinosaurs and modern birds. Unearthed just years after Darwin published “On the Origin of Species,” Archaeopteryx seemed to support Darwin’s theories about evolution. Since then, 28 other transitional species between birds and dinosaurs have been discovered, as well as countless other transitional forms, such as Tiktaalik (fish to amphibian), Eupodophis (lizard to snake), and Lycaenops (a mammal-like reptile), which only further confirm the theory of evolution. Why, then, has public acceptance of evolution not grown?

Overall acceptance that humans have evolved has remained at 60 percent over the past four years, while the divide between Republicans and Democrats has widened from a 10-percent margin in 2009 to a 24-percent margin. Although acceptance of evolution is significantly higher among young adults, a disturbingly large number of people dismiss evolution as “just a theory,” revealing perhaps the biggest flaw in the Theory of Evolution – its name. In everyday jargon, a “theory” refers to an unproven assumption or speculation. In scientific terms, however, a “theory” is a set of ideas intended to explain facts or events in detail, as in the Theory of Relativity.

The other reason evolution is dismissed so easily is that it is often portrayed misleadingly by uninformed opponents. As an incredibly egotistical species, we dislike the idea that we “evolved from apes” because it seems preposterous and hurts our pride. Yet this idea that evolution is linear, as shown in the picture below, is not entirely correct.

We did not evolve from the modern-day chimpanzee – we evolved from a common ancestor. A more accurate portrayal would be the one shown below: an evolutionary tree.

There are many other, perhaps more appealing, hypotheses about how we came to be, including creationism, intelligent design, punctuated equilibrium, and theistic evolution, but these speculations are not as grounded in experimental and observational practice as mainstream science. The reason these alternatives are perhaps so appealing, however, is that the Theory of Evolution cannot (yet) account for the existence of human consciousness. If evolution happened via natural processes, where did human consciousness come from? How are we aware that we exist?

Many scientists agree that self-awareness evolved because of the benefits it confers in understanding others and social situations, implying that self-awareness is intrinsically connected to other-awareness. This suggests there was an advantage for individuals in understanding others, and therefore that competition and cooperation played a pivotal role in how human evolution progressed. Consciousness, then, is an experience, and our capacity for mental construction and mental time travel allows us to compare current situations with past and future ones. Mental trial and error is much more efficient than actual trial and error, so this part of the decision-making process greatly reduces the chance of failure. This extends to our interactions with others – we use our own experiences to predict the behavior of others. Mirror neuron experiments in humans and monkeys favor this view.

Mirror neurons, or F5 neurons (found in the ventral premotor cortex of the monkey), fire both when the monkey performs a certain action and when it observes the same action being performed by another monkey. According to Gallese and Goldman (1998), this extensive mirror neuron research in humans and monkeys supports the simulation theory of mindreading, though the mirror neuron system (MNS) appears more primitive in monkeys. While the MNS alone does not produce action understanding, it helps create an action execution plan that is available for processing when needed.

The “why” of human consciousness has now been answered – among other advantages, it allows for social cognition and simulation of experiences. How this self-awareness evolved, though, is a much more complex question to deconstruct.

It is known that over evolutionary history the human brain swelled to enormous proportions, and at some point consciousness seemed simply to appear. Psychologists and linguists have hypothesized that the human brain is programmed to acquire and understand language – language (used here to describe a complex system of communication) is, after all, a defining characteristic that sets humans apart from all other species. Sometime during the development of language and communication, the need for greater cooperation arose, and thus we began to develop self- and other-awareness. The only flaw in this theory is that language only suggests cognition, whereas consciousness is slightly more complicated: it consists of individual experience – what it is like to be something, to feel something.

It may be a theory, but the idea that language contributed to the evolution of consciousness doesn’t tell the whole story. We still don’t have enough data to develop a comprehensive theory. Until we develop a way to measure consciousness universally and quantitatively, we will continue to formulate hypotheses that are difficult to test. Perhaps when we do find out, it will enhance our understanding of the natural world, and we will be able to uncover a theory that is both comprehensive and universally intelligible. Until then, however, the search for answers continues.


Genome editing just got a lot easier

This post is cross-posted with the PLOS Student Blog

If you’ve recently glanced at the front page of any major science news outlet, you are likely no stranger to an emerging genome-editing technology known as CRISPR/Cas9. With the help of RNA, Cas9 (a bacterial enzyme) can be programmed to target specific locations within the human genome, enabling scientists to delete, modify, or insert sequences in ways that may treat, or even cure, patients with genetic diseases. Although the CRISPR/Cas9 field is still in its early stages, major breakthroughs have been made recently, paving the road for a new line of gene therapy.

Just a couple weeks ago, researchers at Nanjing Medical University and the Yunnan Key Laboratory reported the successful use of Cas9 in monkeys, progressing toward the exciting possibility of editing human genomes. They injected Cas9 into monkey embryos and impregnated female monkeys with the resulting eggs; quite remarkably, the newborn monkeys had deletions in the targeted gene of interest. In a world of science where expected results sometimes never come to fruition, this marks an important stepping-stone for Cas9 technology. Targeting genes with this high degree of specificity could potentially lead to therapies that prevent individuals from developing genetic diseases.

This set of twins was treated with Cas9 as embryos and, as a result, born with targeted deletions in their DNA.

In the same week, a paper published in Nature revealed the mechanism by which Cas9 finds target DNA sequences tens of base pairs in size within a genome that contains three billion base pairs. Interestingly, the researchers showed that Cas9 searches for a short sequence known as the PAM – if a site doesn’t carry this DNA tag, the sequence is neither recognized nor cut. Samuel Sternberg, lead author on the paper, explains that the presence of PAM sequences “accelerates the rate at which the target can be located, and minimizes the time spent interrogating non-target DNA sites.”
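The search strategy is easy to illustrate. For the commonly used S. pyogenes Cas9, the PAM is the three-letter motif 5’-NGG-3’ (where N is any base); the sketch below – my own toy code, not the paper’s method – checks a guide sequence only at PAM-adjacent positions:

```python
# Find sites where `guide` sits immediately upstream of an NGG PAM.
import re

def find_target_sites(genome: str, guide: str) -> list[int]:
    sites = []
    for match in re.finditer(r"(?=[ACGT]GG)", genome):  # candidate PAM starts
        start = match.start() - len(guide)
        if start >= 0 and genome[start:match.start()] == guide:
            sites.append(start)
    return sites

# Made-up sequence; real guides are ~20 nt, shortened here for the demo.
genome = "AAGACGTTGGCC"
print(find_target_sites(genome, "GACGT"))  # [2]: "GACGT" abuts the "TGG" PAM
```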

Cas9 looks for PAM sequences (gold) and matching sequences before cutting the DNA. (Picture by KC Roeyer)

One of the most recent discoveries came out last week in Science: two structural biologists at UC Berkeley, Jennifer Doudna and Eva Nogales, published structures of Cas9 from two different organisms, providing key insights into how the RNA-guided enzyme recognizes and cleaves DNA. Fuguo Jiang, one of the lead authors on the paper, said, “Although the two Cas9s are from different organisms, and overall they look very different, when you superimpose the two structures, their functional domains are very similar. This suggests all the Cas9s have a similar mechanism to make cuts in DNA.” Additionally, the paper revealed that an important loop within Cas9 is responsible for recognizing the PAM. As for what this means for the future of CRISPR/Cas9 technology, Jiang elaborated: “The original Cas9 recognizes one particular PAM, but if you can engineer it to recognize a different PAM sequence through mutagenesis, that would be great. The genome sequence is usually fixed. But what you could do is change the PAM sequences that Cas9 recognizes and tailor it to target more genomic loci. In this way, we can expand our Cas9-based genome-editing toolbox.” This discovery, along with the physical mechanism by which Cas9 locates target sequences, may help improve the efficiency of targeted gene editing.

And that’s not all. Editas Medicine, a new company co-founded by some of the leading scientists studying Cas9, aims to translate this genome-engineering technology into a novel class of human therapeutics – therapies that could mean significant medical advances for people with genetic diseases including, but not limited to, Huntington’s disease, cystic fibrosis, and Alzheimer’s. Since modern sequencing technology has produced a massive number of human genome sequences, mapping diseases to specific genomic coordinates is becoming faster and easier. With this valuable sequence information, the CRISPR/Cas9 system can be engineered to make corrective changes in diseased DNA sequences and restore normal function.

Editas Medicine’s goal is to further develop CRISPR/Cas9 technology into a novel class of human therapeutics. Pictured from left: Jennifer Doudna, Feng Zhang, Keith Joung, and David Liu

Genome engineering earned researchers a Nobel Prize in 2007, but with Cas9 speeding ahead, I wouldn’t be surprised if one is awarded to a Cas9-er in the near future.

**While writing this article, a paper was published in Cell that reveals another structure of Cas9, but now bound to its target DNA. This structure provides more information about the molecular mechanisms by which Cas9 cuts its targets and will further aid researchers in improving genome-editing tools**

If you want to learn more about how Cas9 functions, check out this video produced by a student in Eric Greene’s lab at Columbia University:

Cas9: The Enzyme, The RNA, & The Virus (video by Myles Marshell)


The Art of Science and the Science of Art

Every year, the Townsend Center for the Humanities invites a cultural icon to campus as the Avenali Chair in the Humanities, and every year, the Avenali Chair in the Humanities delivers a lecture. It’s an amazing opportunity to hear from fascinating individuals, but I found out about it only because last year’s speaker was none other than Ursula K. Le Guin, queen of the universe (whose lecture is available here). This year, I went through the opposite process: I’d never heard of the lecturer, Lawrence Weschler, but I was immediately interested in the topic, Art and Science as Parallel and Divergent Ways of Knowing.
