HOW DOES YOUR BRAIN WORK?

Fantasy writers can make up all the details of their fictional world. Authors who write about real life can observe what goes on around them. For the rest of us, there’s research. The research I’m doing for the novel I’ve just begun to write (my fourth) is about consciousness, and how the brain works. And it’s enough to make my brain stop working.

Considering we’ve all got a brain (yes, even that @#$*&X driver in front of you), and we think we know how we think, there’s been an awful lot of ink spilled in attempts to explain it. The French philosopher Descartes is famous for saying, “I think, therefore I am.” And one of his main beliefs about how we think has become deeply ingrained in our collective knowledge, because it fits what we intuitively believe about the process of our thoughts. It seems evident that, for every bit of information taken in by our senses, there’s a specific moment when we become conscious of that information and the details surrounding it (“Oh, look, there’s a very solid baseball coming straight at my face at high speed!”). So each moment of awareness is kind of like an image projected onto a screen in our heads (what some philosophers call the Cartesian theatre). But that raises the question: who’s looking at the screen? It presupposes there’s an inner mind, a deeper you or me who sees the projections on the screen and then does something about them—presumably a central spot in the brain where consciousness happens and decisions are made.

Neuroscientists have never found such a place. Philosophers now discredit the idea of the Cartesian theatre and will fill a book with thought exercises to show you why it’s wrong (a book so thick it qualifies as physical exercise just to lift it). Prevailing theories suggest that consciousness is more like a stream of activity: information comes in constantly, gets processed, and either gains priority and triggers action or fails to gain priority and is discarded. If you think it’s hard to wrap your head around a concept like that, imagine trying to absorb and retain it, along with dozens of other facts about neuroanatomy, brain-scanning technology, cognitive evolution, and more, while you try to write a good old-fashioned yarn about average people and the trouble they get into.
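
If you happen to think in code, here’s one (very) loose way to picture that stream-and-priority idea, sketched in Python. To be clear, the threshold and the sample signals below are made up purely for illustration; no neuroscientist signed off on this.

    import heapq

    # Toy model of the "stream" view: incoming signals compete for attention.
    # Anything above the threshold wins and triggers action; the rest is discarded.
    def stream_of_consciousness(signals, threshold=0.7):
        queue = []
        for priority, description in signals:
            # heapq is a min-heap, so store negative priorities to pop the strongest first
            heapq.heappush(queue, (-priority, description))
        while queue:
            neg_priority, description = heapq.heappop(queue)
            if -neg_priority >= threshold:
                print(f"act on: {description}")
            else:
                print(f"discard: {description}")

    stream_of_consciousness([
        (0.95, "very solid baseball approaching my face at high speed"),
        (0.30, "vague itch on my left ankle"),
        (0.75, "someone just said my name"),
    ])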

I guess the real question is: was I conscious when I decided to make a living this way?

New Old Transistor

If you’ve been complaining that your brand-new laptop still doesn’t have the processing speed you need, some old science is coming to the rescue. As described in Scientific American, a transistor design first patented in 1925 might be the key to putting even more circuitry on computer chips.

Standard transistors, based on a design from the 1940s, allow current to flow along a sandwich-like semiconductor strip depending on whether an electron “gate” in the strip is open or closed: the “on” or “off” states computing depends on. But the dividers between the sandwich layers—the junctions—are becoming too hard to define at ever-smaller sizes. A design by Austrian physicist Julius Lilienfeld doesn’t require sandwiched strips: just a single nanorod with a gate in the middle, opened and closed by an electric field that deprives the gate section of its electrons to cut off current flow. Physicists at the Tyndall National Institute in Ireland have built one of these, and claim not only that it can be made with existing technology, but also that it requires less voltage, producing less heat and allowing faster processing speeds.
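
For anyone who likes the switching logic spelled out, here’s a toy Python sketch of the idea: the rod conducts by default, and a strong enough gate field squeezes out the charge carriers and shuts the current off. The threshold value and units are invented for illustration and have nothing to do with the actual device built at Tyndall.

    # Toy model of a junctionless, field-controlled switch (illustrative numbers only).
    def nanorod_conducts(gate_field, pinch_off_field=1.0):
        # True means current flows ("on"); False means the channel is pinched off ("off").
        return gate_field < pinch_off_field

    for field in (0.0, 0.5, 1.2):
        state = "on" if nanorod_conducts(field) else "off"
        print(f"gate field {field:.1f} (arbitrary units): {state}")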

Moore’s Law says that the number of transistors a computer chip can hold (translate that into computing power) doubles roughly every two years. Lately that rate of progress has been stalling. This new/old transistor design could get Moore’s Law back on track.
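
To see what that doubling rate means in practice, here’s a quick back-of-the-envelope calculation in Python. The starting count and time span are made-up round numbers, just to show the arithmetic.

    # Project transistor counts under Moore's Law: a doubling every two years.
    def moores_law(initial_transistors, years, doubling_period=2):
        return initial_transistors * 2 ** (years / doubling_period)

    start = 1_000_000_000  # a hypothetical one-billion-transistor chip today
    for year in range(0, 11, 2):
        print(f"year {year:2d}: ~{moores_law(start, year):,.0f} transistors")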

Molar-size supercomputers, anyone?