APEX Wins the Philip K. Dick Award!

PKD Award

Tonight, in Seattle, I was in the crowd when my novel Apex won the Philip K. Dick Award.

Apex is the third and final book of the Nexus Trilogy. Those books have now collectively won the Prometheus Award and the Endeavour Award, been listed among NPR’s Best Books of the Year, and been shortlisted for the Arthur C. Clarke Award and the Kitschies’ Golden Tentacle Award. They also earned me a nomination for the Campbell Award for Best New Writer in 2014.

To say I’m pleased would be an understatement.  The PKD is a juried award, meaning that a panel of 5 judges picked Apex as the most deserving paperback-original science fiction novel of the year, out of the more than 100 titles that were submitted.

I’m also pleased because Philip K. Dick wrote about topics that I care about: Identity, memory, surveillance, the inner workings of the mind and the structure of society. Those are the very same topics I tried to touch on in the Nexus books.

My fellow nominees (Marguerite Reed, Adam Rakunas, PJ Manney, Douglas Lain, and Brenda Cooper) are all awesome. Brenda was one of the first professional writers to take the time to read Nexus and to give me advice and encouragement on publishing. PJ and Adam are both old friends. And I look forward to becoming friends with Marguerite and Doug.

The six of us teamed up to give away copies of all six finalist books. It’s too late to enter that giveaway (almost 4,000 people did), but you can still visit the site to learn more about all six books.

Here’s all of us hanging out before the award.

PKD Nominees - Black and White

Thank you everyone who helped make Apex and the Nexus books great, including Molly (who read every page of each of those books, usually on the day I wrote it), my agent Lucienne Diver, my editor Lee Harris and publisher Marc Gascoigne, the almost 60 beta readers who read one or more drafts of those three books (sometimes many more than one draft) and gave feedback to make them better, and especially the fans who bought them, shared them, and told everyone else to go read them.

Onward and upward.


Win Copies of All Six Philip K. Dick Award Finalists

I’m up for the Philip K. Dick Award, which pleases me to no end, since Dick wrote some excellent, mind-bending, ground-breaking sci-fi about the nature of memory, identity, and much else.

It also pleases me to no end that, on most sci-fi award slates, the authors are much more supportive of each other than competitive. This one is no exception. And so the six of us finalists (Marguerite Reed, Adam Rakunas, PJ Manney, Douglas Lain, Brenda Cooper, and myself) have banded together to create a giveaway. Six lucky winners will each win a copy of all of the books that made the final list.

You can enter the giveaway here: http://pkdnominees.xyz/

PKD Award Covers

The Ultimate Interface: Your Brain

A shorter version of this article first appeared at TechCrunch.

The final frontier of digital technology is integrating into your own brain. DARPA wants to go there. Scientists want to go there. Entrepreneurs want to go there. And increasingly, it looks like it’s possible.

You’ve probably read bits and pieces about brain implants and prostheses. Let me give you the big picture.

Neural implants could accomplish things no external interface could: Virtual and augmented reality with all 5 senses (or more); augmentation of human memory, attention, and learning speed; even multi-sense telepathy — sharing what we see, hear, touch, and even perhaps what we think and feel with others.

Arkady flicked the virtual layer back on. Lightning sparkled around the dancers on stage again, electricity flashed from the DJ booth, silver waves crashed onto the beach. A wind that wasn’t real blew against his neck. And up there, he could see the dragon flapping its wings, turning, coming around for another pass. He could feel the air move, just like he’d felt the heat of the dragon’s breath before.

– Adapted from Crux, book 2 of the Nexus Trilogy.

Sound crazy? It is… and it’s not.

Start with motion. In clinical trials today there are brain implants that have given men and women control of robot hands and fingers. DARPA has now used the same technology to put a paralyzed woman in direct mental control of an F-35 simulator. And in animals, the technology has been used in the opposite direction, directly inputting touch into the brain.

Or consider vision. For more than a year now, we’ve had FDA-approved bionic eyes that restore vision via a chip implanted on the retina. More radical technologies have sent vision straight into the brain. And recently, brain scanners have succeeded in deciphering what we’re looking at. (They’d do even better with implants in the brain.)

Sound, we’ve been dealing with for decades, sending it into the nervous system through cochlear implants. Recently, children born deaf and without an auditory nerve have had sound sent electronically straight into their brains.

Nor are our senses or motion the limit.

In rats, we’ve restored damaged memories via a ‘hippocampus chip’ implanted in the brain. Human trials are starting this year. Now, you say your memory is just fine? Well, in rats, this chip can actually improve memory. And researchers can capture the neural trace of an experience, record it, and play it back any time they want later on. Sounds useful.

In monkeys, we’ve done better, using a brain implant to “boost monkey IQ” in pattern matching tests.

We’ve even emailed verbal thoughts back and forth from person to person.

Now, let me be clear. All of these systems, for lack of a better word, suck. They’re crude. They’re clunky. They’re low resolution. That is, most fundamentally, because they have such low-bandwidth connections to the human brain. Your brain has roughly 100 billion neurons and 100 trillion neural connections, or synapses. An iPhone 6’s A8 chip has 2 billion transistors. (Though, let’s be clear, a transistor is not anywhere near the complexity of a single synapse in the brain.)

The highest bandwidth neural interface ever placed into a human brain, on the other hand, had just 256 electrodes. Most don’t even have that.
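
To put that mismatch in perspective, here’s the back-of-envelope ratio, using just the figures above (a quick sketch of scale, not a precise measure of bandwidth):

```python
# Rough ratio of brain scale to interface scale, using the numbers above.
NEURONS = 100e9     # ~100 billion neurons in a human brain
SYNAPSES = 100e12   # ~100 trillion synapses
ELECTRODES = 256    # the highest-bandwidth human neural interface so far

print(f"{NEURONS / ELECTRODES:,.0f} neurons per electrode")
print(f"{SYNAPSES / ELECTRODES:,.0f} synapses per electrode")
# ~390 million neurons and ~390 billion synapses per electrode --
# each channel has to stand in for a huge swath of the brain.
```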

The second barrier to brain interfaces is that getting even 256 channels in generally requires invasive brain surgery, with its costs, healing time, and the very real risk that something will go wrong. That’s a huge impediment, making neural interfaces only viable for people who have a huge amount to gain, such as those who’ve been paralyzed or suffered brain damage.

This is not yet the iPhone era of brain implants. We’re in the DOS era, if not even further back.

But what if? What if, at some point, technology gives us high-bandwidth neural interfaces that can be easily implanted? Imagine the scope of software that could interface directly with your senses and all the functions of your mind:

They gave Rangan a pointer to their catalog of thousands of brain-loaded Nexus apps. Network games, augmented reality systems, photo and video and audio tools that tweaked data acquired from your eyes and ears, face recognizers, memory supplementers that gave you little bits of extra info when you looked at something or someone, sex apps (a huge library of those alone), virtual drugs that simulated just about everything he’d ever tried, sober-up apps, focus apps, multi-tasking apps, sleep apps, stim apps, even digital currencies that people had adapted to run exclusively inside the brain.

– An excerpt from Apex, book 3 of the Nexus Trilogy.

The implications of mature neurotechnology are sweeping. Neural interfaces could help tremendously with mental health and neurological disease. Pharmaceuticals enter the brain and then spread out randomly, hitting whatever receptor they work on all across your brain. Neural interfaces, by contrast, can stimulate just one area at a time, can be tuned in real-time, and can carry information out about what’s happening.

We’ve already seen that deep brain stimulators can do amazing things for patients with Parkinson’s. The same technology is on trial for untreatable depression, OCD, and anorexia. And we know that stimulating the right centers in the brain can induce sleep or alertness, hunger or satiation, ease or stimulation, as quick as the flip of a switch. Or, if you’re running code, on a schedule. (Siri: Put me to sleep until 7:30, high priority interruptions only. And let’s get hungry for lunch around noon. Turn down the sugar cravings, though.)
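
Just for fun, here’s what “running code, on a schedule” might look like. This is a purely hypothetical sketch: the NeuralInterface class, the schedule method, and the region names are all invented for illustration, since nothing like this API exists today.

```python
from datetime import time

class NeuralInterface:
    """A purely hypothetical implant API -- invented for illustration."""

    def schedule(self, at: time, region: str, level: float, note: str = "") -> None:
        # A real device would enqueue a stimulation command on the
        # implant's controller; here we just log the intent.
        print(f"{at}: set {region} to {level:+.1f} {note}".rstrip())

brain = NeuralInterface()
# "Put me to sleep until 7:30, high priority interruptions only."
brain.schedule(time(23, 0), "sleep_centers", +1.0, "high-priority interruptions only")
brain.schedule(time(7, 30), "sleep_centers", -1.0, "wake")
# "Let's get hungry for lunch around noon. Turn down the sugar cravings."
brain.schedule(time(12, 0), "hunger", +0.5)
brain.schedule(time(12, 0), "sugar_craving", -0.5)
```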

Implants that help repair brain damage are also a gateway to devices that improve brain function. Think about the “hippocampus chip” that repairs the ability of rats to learn. Building such a chip for humans is going to teach us an incredible amount about how human memory functions. And in doing so, we’re likely to gain the ability to improve human memory, to speed the rate at which people can learn things, even to save memories offline and relive them – just as we have for the rat.

That has huge societal implications. Boosting how fast people can learn would accelerate innovation and economic growth around the world. It’d also give humans a new tool to keep up with the job-destroying features of ever-smarter algorithms.

The impact goes deeper than the personal, though. Computing technology started out as number crunching. These days the biggest impact it has on society is through communication. If neural interfaces mature, we may well see the same. What if you could directly beam an image in your thoughts onto a computer screen? What if you could directly beam that to another human being? Or, across the internet, to any of the billions of human beings who might choose to tune into your mind-stream online? What if you could transmit not just images, sounds, and the like, but emotions? Intellectual concepts? All of that is likely to eventually be possible, given a high enough bandwidth connection to the brain.

That type of communication would have a huge impact on the pace of innovation, as scientists and engineers could work more fluidly together. And it’s just as likely to have a transformative effect on the public sphere, in the same way that email, blogs, and Twitter have successively changed public discourse.

Digitizing our thoughts may have some negative consequences, of course.

With our brains online, every concern about privacy, about hacking, about surveillance from the NSA or others, would all be magnified. If thoughts are truly digital, could the right hacker spy on your thoughts? Could law enforcement get a warrant to read your thoughts? Heck, in the current environment, would law enforcement (or the NSA) even need a warrant? Could the right malicious actor even change your thoughts?

“Focus,” Ilya snapped. “Can you erase her memories of tonight? Fuzz them out?”

“Nothing subtle,” he replied. “Probably nothing very effective. And it might do some other damage along the way.”

– An excerpt from Nexus, book 1 of the Nexus Trilogy.

The ultimate interface would bring the ultimate new set of vulnerabilities. (Even if those scary scenarios don’t come true, could you imagine what spammers and advertisers would do with an interface to your neurons, if it were the least bit non-secure?)

Everything good and bad about technology would be magnified by implanting it deep in brains. In Nexus I crash the good and bad views against each other, in a violent argument about whether such a technology should be legal. Is the risk of brain-hacking outweighed by the societal benefits of faster, deeper communication, and the ability to augment our own intelligence?

For now, we’re a long way from facing such a choice. In fiction, I can turn the neural implant into a silvery vial of nano-particles that you swallow, and which then self-assemble into circuits in your brain. In the real world, clunky electrodes implanted by brain surgery dominate, for now.

That’s changing, though. Researchers across the world, many funded by DARPA, are working to radically improve the interface hardware, boosting the number of neurons it can connect to (and thus making it smoother, higher resolution, and more precise), and making it far easier to implant. They’ve shown recently that carbon nanotubes, a thousand times thinner than current electrodes, have huge advantages for brain interfaces. They’re working on silk-substrate interfaces that melt into the brain. Researchers at Berkeley have a proposal for neural dust that would be sprinkled across your brain (which sounds rather close to the technology I describe in Nexus). And the former editor of the journal Neuron has pointed out that carbon nanotubes are so slender that a bundle of a million of them could be inserted into the blood stream and steered into the brain, giving us a nearly 10,000-fold increase in neural bandwidth, without any brain surgery at all.

Even so, we’re a long way from having such a device. We don’t actually know how long it’ll take to make the breakthroughs in the hardware to boost precision and remove the need for highly invasive surgery. Maybe it’ll take decades. Maybe it’ll take more than a century, and in that time, direct neural implants will be something that only those with a handicap or brain damage find worth the risk-to-reward tradeoff. Or maybe the breakthroughs will come in the next ten or twenty years, and the world will change faster. DARPA is certainly pushing fast and hard.

Will we be ready? I, for one, am enthusiastic. There’ll be problems. Lots of them. There’ll be policy and privacy and security and civil rights challenges. But just as we see today’s digital technology of Twitter and Facebook and camera-equipped mobile phones boosting freedom around the world, and boosting the ability of people to connect to one another, I think we’ll see much more positive than negative if we ever get to direct neural interfaces.

In the meantime, I’ll keep writing novels about them. Just to get us ready.

The Singularity is Further Than it Appears

This is an updated cross-post of a post I originally made at Charlie Stross’s blog.

Are we headed for a Singularity? Is AI a Threat?

tl;dr: Not anytime soon. Lack of incentives means very little strong AI work is happening. And even if we did develop one, it’s unlikely to have a hard takeoff.

I write relatively near-future science fiction that features neural implants, brain-to-brain communication, and uploaded brains. I also teach at a place called Singularity University. So people naturally assume that I believe in the notion of a Singularity and that one is on the horizon, perhaps in my lifetime.

I think it’s more complex than that, however, and depends in part on one’s definition of the word. The word Singularity has gone through something of a shift in definition over the last few years, weakening its meaning. But regardless of which definition you use, there are good reasons to think that it’s not on the immediate horizon.

Vernor Vinge’s Intelligence Explosion

My first experience with the term Singularity (outside of math or physics) comes from the classic essay by science fiction author, mathematician, and professor Vernor Vinge, The Coming Technological Singularity.

Vinge, influenced by the earlier work of I.J. Good, wrote this, in 1993:

Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.
[…]
The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence.
[…]
When greater-than-human intelligence drives progress, that progress will be much more rapid. In fact, there seems no reason why progress itself would not involve the creation of still more intelligent entities — on a still-shorter time scale.

That last quote is the key one. Vinge envisions a situation where the first smarter-than-human intelligence can make an even smarter entity in less time than it took to create itself. And that this keeps continuing, at each stage, with each iteration growing shorter, until we’re down to AIs that are so hyper-intelligent that they make even smarter versions of themselves in less than a second, or less than a millisecond, or less than a microsecond, or whatever tiny fraction of time you want.

This is the so-called ‘hard takeoff’ scenario, also called the FOOM model by some in the singularity world. It’s the scenario where, in the blink of an eye, a ‘godlike’ intelligence bootstraps into being, either by upgrading itself or by being created by successive generations of ancestor AIs.

It’s also, with due respect to Vernor Vinge, of whom I’m a great fan, almost certainly wrong.

It’s wrong because most real-world problems don’t scale linearly. In the real world, the interesting problems are much, much harder than that.

Molecular Modelling Computational Complexity

Consider chemistry and biology. For decades we’ve been working on problems like protein folding, simulating drug behavior inside the body, and computationally creating new materials. Computational chemistry started in the 1950s. Today we have literally trillions of times more computing power available per dollar than was available at that time. But it’s still hard. Why? Because the problem is incredibly non-linear. If you want to model atoms and molecules exactly, you need to solve the Schrödinger equation, which is so computationally intractable for systems with more than a few electrons that no one bothers.

Instead, you can use an approximate method. This might, of course, give you an answer that’s wrong (an important caveat for our AI trying to bootstrap itself) but at least it will run fast. How fast? The very fastest (and also, sadly, the most limited and least accurate) methods scale at N^2, which is still far worse than linear. By analogy, if designing intelligence is an N^2 problem, an AI that is 2x as intelligent as the entire team that built it (not just a single human) would be able to design a new AI that is only 40% more intelligent than its old self. More importantly, that new AI would only be able to create a further version that is 19% more intelligent. And then less on the next iteration, and less again on the one after that, topping out at an overall doubling of its intelligence. Not a takeoff.
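
Here’s that arithmetic as a minimal sketch, under the stated assumptions: designing an AI of intelligence N costs N^2 units of effort, a designer’s effort output scales linearly with its intelligence, and the human team (intelligence 1) managed to build the first AI (intelligence 2).

```python
import math

TEAM_EFFORT = 2.0 ** 2  # the team (intelligence 1.0) built an AI of
                        # intelligence 2.0, so one design cycle at
                        # intelligence 1.0 yields 2.0**2 = 4.0 effort

def next_intelligence(x: float) -> float:
    # Best intelligence designable with x times the team's effort,
    # if designing intelligence N costs N**2 effort.
    return math.sqrt(TEAM_EFFORT * x)

x = 2.0  # the first AI: twice as smart as the team that built it
for generation in range(1, 8):
    new_x = next_intelligence(x)
    print(f"gen {generation}: {new_x:.3f} (+{100 * (new_x / x - 1):.0f}%)")
    x = new_x
# gen 1: 2.828 (+41%)  <- the "40% more intelligent" step
# gen 2: 3.364 (+19%)  <- then 19%, then ever less...
# gen 7: 3.978 (+1%)   <- converging on 4.0: a doubling, not a takeoff
```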

Blog reader Paul Baumbart took it upon himself to graph out how the intelligence of our AI changes over time, depending on the computational complexity of increasing intelligence. Here’s what it looks like. Unless creating intelligence scales linearly or very close to linearly, there is no takeoff.

AI Self Improvement Curves

The Superhuman AIs Among Us

We can see this more directly. There are already entities with vastly greater than human intelligence working on the problem of augmenting their own intelligence. A great many, in fact. We call them corporations. And while we may have a variety of thoughts about them, not one has achieved transcendence.

Let’s focus on one very particular example: the Intel Corporation. Intel is my favorite example because it uses the collective brainpower of tens of thousands of humans and probably millions of CPU cores to… design better CPUs! (And also to create better software for designing CPUs.) Those better CPUs will run the better software to make the better next generation of CPUs. Yet that feedback loop has not led to a hard takeoff scenario. It has helped drive Moore’s Law, which is impressive enough. But the time period for doublings seems to have remained roughly constant. Again, let’s not underestimate how awesome that is. But it’s not a sudden transcendence scenario. It’s neither a FOOM nor an event horizon.

And, indeed, should Intel, or Google, or some other organization succeed in building a smarter-than-human AI, it won’t immediately be smarter than the entire set of humans and computers that built it, particularly when you consider all the contributors to the hardware it runs on, the advances in photolithography techniques and metallurgy required to get there, and so on. Those efforts have taken tens of thousands of minds, if not hundreds of thousands. The first smarter-than-human AI won’t come close to equaling them. And so, the first smarter-than-human mind won’t take over the world. But it may find itself with good job offers to join one of those organizations.

Digital Minds: The Softer Singularity

Recently, the popular conception of what the ‘Singularity’ means seems to have shifted. Instead of a FOOM or an event horizon – the definitions I saw most commonly discussed a decade ago – the talk is now more focused on the creation of digital minds, period.

Much of this has come from the work of Ray Kurzweil, whose books and talks have done more to publicize the idea of a Singularity than probably anyone else, and who has come at it from a particular slant.

Now, even if digital minds don’t have the ready ability to bootstrap themselves or their successors to greater and greater capabilities in shorter and shorter timeframes, eventually leading to a ‘blink of the eye’ transformation, I think it’s fair to say that the arrival of sentient, self-aware, self-motivated, digital intelligences with human-level or greater reasoning ability will be a pretty tremendous thing. I wouldn’t give it the term Singularity. It’s not a divide-by-zero moment. It’s not an event horizon that it’s impossible to peer over. It’s not a vertical asymptote. But it is a big deal.

I fully believe that it’s possible to build such minds. Nothing about neuroscience, computation, or philosophy prevents it. Thinking is an emergent property of activity in networks of matter. Minds are what brains – just matter – do. Mind can be done in other substrates.

But I think it’s going to be harder than many project. Let’s look at the two general ways to achieve this – by building a mind in software, or by ‘uploading’ the patterns of our brain networks into computers.

Building Strong AIs

We’re living in the golden age of AI right now. Or at least, it’s the most golden age so far. But what those AIs look like should tell you a lot about the path AI has taken, and will likely continue to take.

The most successful and profitable AI in the world is almost certainly Google Search. In fact, in Search alone, Google uses a great many AI techniques. Some to rank documents, some to classify spam, some to classify adult content, some to match ads, and so on. In your daily life you interact with other ‘AI’ technologies (or technologies once considered AI) whenever you use an online map, when you play a video game, or any of a dozen other activities.

None of these is about to become sentient. None of these is built towards sentience. Sentience brings no advantage to the companies who build these software systems. Building it would entail an epic research project – indeed, one of unknown length involving uncapped expenditure for potentially decades – for no obvious outcome. So why would anyone do it?

Perhaps you’ve seen video of IBM’s Watson trouncing Jeopardy champions. Watson isn’t sentient. It isn’t any closer to sentience than Deep Blue, the chess-playing computer that beat Garry Kasparov. Watson isn’t even particularly intelligent. Nor is it built anything like a human brain. It is very, very fast with the buzzer, generally able to parse Jeopardy-like clues, and loaded full of obscure facts about the world. Similarly, Google’s self-driving car, while utterly amazing, is also no closer to sentience than Deep Blue, or than any online chess game you can log into now.

There are, in fact, three separate issues with designing sentient AIs:

1) No one’s really sure how to do it.

AI theories have been around for decades, but none of them has led to anything that resembles sentience. My friend Ben Goertzel has a very promising approach, in my opinion, but given the poor track record of past research in this area, I think it’s fair to say that until we see his techniques working, we won’t know for sure.

2) There’s a huge lack of incentive. 

Would you like a self-driving car that has its own opinions? That might someday decide it doesn’t feel like driving you where you want to go? That might ask for a raise? Or refuse to drive into certain neighborhoods? Or do you want a completely non-sentient self-driving car that’s extremely good at navigating roads and listening to your verbal instructions, but that has no sentience of its own? Ask yourself the same about your search engine, your toaster, your dish washer, and your personal computer.

Many of us want the semblance of sentience. There would be lots of demand for an AI secretary who could take complex instructions, execute on them, be a representative to interact with others, and so on. You may think such a system would need to be sentient. But once upon a time we imagined that a system that could play chess, or solve mathematical proofs, or answer phone calls, or recognize speech, would need to be sentient. It doesn’t need to be. You can have your AI secretary or AI assistant and have it be all artifice. And frankly, we’ll likely prefer it that way.

3) There are ethical issues.

If we design an AI that truly is sentient, even at slightly less than human intelligence we’ll suddenly be faced with very real ethical issues. Can we turn it off? Would that be murder? Can we experiment on it? Does it deserve privacy? What if it starts asking for privacy? Or freedom? Or the right to vote?

What investor or academic institution wants to deal with those issues? And if they do come up, how will they affect research? They’ll slow it down, tremendously, that’s how.

For all those reasons, I think the future of AI is extremely bright. But not sentient AI that has its own volition. More and smarter search engines. More software and hardware that understands what we want and that performs tasks for us. But not systems that truly think and feel.

Uploading Our Own Minds

The other approach is to forget about designing the mind. Instead, we can simply copy the design which we know works – our own mind, instantiated in our own brain. Then we can ‘upload’ this design by copying it into an extremely powerful computer and running the system there.

I wrote about this, and the limitations of it, in an essay at the back of my second Nexus novel, Crux. So let me just include a large chunk of that essay here:

The idea of uploading sounds far-fetched, yet real work is happening towards it today. IBM’s ‘Blue Brain’ project has used one of the world’s most powerful supercomputers (an IBM Blue Gene/P with 147,456 CPUs) to run a simulation of 1.6 billion neurons and almost 9 trillion synapses, roughly the size of a cat brain. The simulation ran around 600 times slower than real time – that is to say, it took 600 seconds to simulate 1 second of brain activity. Even so, it’s quite impressive. A human brain, of course, with its hundred billion neurons and well over a hundred trillion synapses, is far more complex than a cat brain. Yet computers are also speeding up rapidly, roughly by a factor of 100 every 10 years. Do the math, and it appears that a supercomputer capable of simulating an entire human brain, and doing so as fast as a human brain, should be on the market by roughly 2035 – 2040. And of course, from that point on, speedups in computing should speed up the simulation of the brain, allowing it to run faster than a biological human’s.
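
As a sanity check, here’s that “do the math” step as a rough sketch. The synapse counts, the 600x slowdown, and the roughly 100x-per-decade speedup come from the passage above; the 2009 date of the Blue Gene/P run and the exact human synapse count are my added assumptions.

```python
import math

CAT_SYNAPSES = 9e12       # ~9 trillion synapses in the cat-scale simulation
SLOWDOWN = 600            # it ran 600x slower than real time
HUMAN_SYNAPSES = 150e12   # "well over a hundred trillion" (assumed value)
SPEEDUP_PER_DECADE = 100  # ~100x more computing power every 10 years
BASELINE_YEAR = 2009      # year of the Blue Gene/P result (assumption)

# Total speedup needed: reach real time AND scale up to a human brain.
factor = SLOWDOWN * (HUMAN_SYNAPSES / CAT_SYNAPSES)
years = 10 * math.log(factor) / math.log(SPEEDUP_PER_DECADE)
print(f"~{factor:,.0f}x needed -> ~{years:.0f} years -> ~{BASELINE_YEAR + years:.0f}")
# ~10,000x -> ~20 years -> ~2029 on a top supercomputer; allow for the
# lag before such machines are "on the market" and you land in the
# 2030s, consistent with the 2035-2040 estimate above.
```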

Now, it’s one thing to be able to simulate a brain. It’s another to actually have the exact wiring map of an individual’s brain to simulate. How do we build such a map? Even the best non-invasive brain scanners around – a high-end functional MRI machine, for example – have a minimum resolution of around 10,000 neurons or 10 million synapses. They simply can’t see detail beyond this level. And while resolution is improving, it’s improving at a glacial pace. There’s no indication that we’ll be able to non-invasively image a human brain down to the individual synapse level any time in the next century (or even the next few centuries, at the current pace of progress in this field).

There are, however, ways to destructively image a brain at that resolution. At Harvard, my friend Kenneth Hayworth created a machine that uses a scanning electron microscope to produce an extremely high resolution map of a brain. When I last saw him, he had a poster on the wall of his lab showing a print-out of one of his brain scans. On that poster, a single neuron was magnified to the point that it was roughly two feet wide, and individual synapses connecting neurons could be clearly seen. Ken’s map is sufficiently detailed that we could use it to draw a complete wiring diagram of a specific person’s brain.
Unfortunately, doing so is guaranteed to be fatal.

The system Ken showed ‘plastinates’ a piece of a brain by replacing the blood with a plastic that stiffens the surrounding tissue. He then makes slices of that brain tissue that are 30 nanometers thick, or roughly 3,000 times thinner than a human hair. The scanning electron microscope then images these slices as pixels that are 5 nanometers on a side. But of course, what’s left afterwards isn’t a working brain – it’s millions of incredibly thin slices of brain tissue. Ken’s newest system, which he’s built at the Howard Hughes Medical Institute, goes even further, using an ion beam to ablate away 5-nanometer-thick layers of brain tissue at a time. That produces scans that are of fantastic resolution in all directions, but leaves behind no brain tissue to speak of.
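
To get a feel for the scale of such a scan, here’s a back-of-envelope sketch using the 5 nanometer pixels and 30 nanometer slices described above; the roughly 1.2-liter volume of a human brain is my added assumption.

```python
SLICE_M = 30e-9           # 30 nm slice thickness
PIXEL_M = 5e-9            # 5 nm x 5 nm pixels
BRAIN_VOLUME_M3 = 1.2e-3  # ~1.2 liters (assumption, not from the text)

voxel_volume = PIXEL_M ** 2 * SLICE_M
voxels = BRAIN_VOLUME_M3 / voxel_volume
print(f"~{voxels:.1e} voxels")
# ~1.6e+21 voxels -- at even one byte per voxel, that's on the order
# of a zettabyte of raw imagery for a single whole-brain scan.
```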

So the only way we see to ‘upload’ is for the flesh to die. Well, perhaps that is no great concern if, for instance, you’re already dying, or if you’ve just died but technicians have reached your brain in time to prevent the decomposition that would destroy its structure.

In any case, the uploaded brain, now alive as a piece of software, will go on, and will remember being ‘you’. And unlike a flesh-and-blood brain it can be backed up, copied, sped up as faster hardware comes along, and so on. Immortality is at hand, and with it, a life of continuous upgrades.
Unless, of course, the simulation isn’t quite right.

How detailed does a simulation of a brain need to be in order to give rise to a healthy, functional consciousness? The answer is that we don’t really know. We can guess. But at almost any level we guess, we find that there’s a bit more detail just below that level that might be important, or not.

For instance, the IBM Blue Brain simulation uses neurons that accumulate inputs from other neurons and which then ‘fire’, like real neurons, to pass signals on down the line. But those neurons lack many features of actual flesh and blood neurons. They don’t have real receptors that neurotransmitter molecules (the serotonin, dopamine, opiates, and so on that I talk about throughout the book) can dock to. Perhaps it’s not important for the simulation to be that detailed. But consider: all sorts of drugs, from pain killers, to alcohol, to antidepressants, to recreational drugs work by docking (imperfectly, and differently from the body’s own neurotransmitters) to those receptors. Can your simulation take an anti-depressant? Can your simulation become intoxicated from a virtual glass of wine? Does it become more awake from virtual caffeine? If not, does that give one pause?

Or consider another reason to believe that individual neurons are more complex than we believe. The IBM Blue Gene neurons are fairly simple in their mathematical function. They take in inputs and produce outputs. But an amoeba, which is both smaller and less complex than a human neuron, can do far more. Amoebae hunt. Amoebae remember the places they’ve found food. Amoebae choose which direction to propel themselves with their pseudopodia. All of this suggests that amoebae do far more information processing than the simulated neurons used in current research.

If a single-celled micro-organism is more complex than our simulations of neurons, that makes me suspect that our simulations aren’t yet right.

Or, finally, consider three more discoveries we’ve made in recent years about how the brain works, none of which are included in current brain simulations.
First, there’re glial cells. Glial cells outnumber neurons in the human brain. And traditionally we’ve thought of them as ‘support’ cells that just help keep neurons running. But new research has shown that they’re also important for cognition. Yet the Blue Gene simulation contains none.

Second, very recent work has shown that, sometimes, neurons that don’t have any synapses connecting them can actually communicate. The electrical activity of one neuron can cause a nearby neuron to fire (or not fire) just by affecting an electric field, and without any release of neurotransmitters between them. This too is not included in the Blue Brain model.

Third, and finally, other research has shown that the overall electrical activity of the brain also affects the firing behavior of individual neurons by changing the brain’s electrical field. Again, this isn’t included in any brain models today.

I’m not trying to knock down the idea of uploading human brains here. I fully believe that uploading is possible. And it’s quite possible that every one of the problems I’ve raised will turn out to be unimportant. We can simulate bridges and cars and buildings quite accurately without simulating every single molecule inside them. The same may be true of the brain.

Even so, we’re unlikely to know that for certain until we try. And it’s quite likely that early uploads will be missing some key piece or have some other inaccuracy in their simulation that will cause them to behave not-quite-right. Perhaps it’ll manifest as a mental deficit, personality disorder, or mental illness. Perhaps it will be too subtle to notice. Or perhaps it will show up in some other way entirely.

But I think I’ll let someone else be the first person uploaded, and wait till the bugs are worked out.

In short, I think the near future will bring a tremendous amount of technological advancement. I’m extremely excited about it. But I don’t see a Singularity in our future for quite a long time to come.

Video: Brain Implants to Link and Augment Human Minds (The Science of Nexus)

Here’s video of my Le Web Paris talk, on Linking Human Minds. This is all about the current science of sending sights, sounds, and sensations in and out of human brains, and the frontiers of augmenting and transferring memory and intelligence.  Le Web did a fantastic job producing this. I love the split-screen showing me speaking and the slides at the same time.

The talk itself is a compilation of the very real science that I used in my novels Nexus (one of NPR’s Best Books of 2013) and Crux.

You can read fictionalized accounts of the uses and mis-uses of these technologies in the novels, each of which includes a non-fiction appendix with more on the science:
–  Nexus
–  Crux

Nexus and Cory Doctorow’s Homeland Tie for the Prometheus Award

My novel Nexus and Cory Doctorow’s novel Homeland have tied for the Prometheus Award! The award is given to the best pro-freedom science fiction novel of the year.


I love the Prometheus Award because it’s focused on a particular criterion: science fiction novels that both examine and advocate for freedom.

I wrote Nexus and Crux to explore the potential of neuroscience to link together and improve upon human minds. But I also wrote them to explore the roles of censorship, surveillance, prohibition, and extra-legal state use of force in a future not far from our own – a future where the War on Terror and the War on Drugs have run smack into new technologies that could improve people’s lives, or which we can treat as threats.

Science and technology can be used to lift people up or to tread them underfoot. Making those abstract future possibilities real in the present is a core goal in my novels. I’m glad the selection committee saw that, and I’m very grateful to them for this award!

I also love that, while the award is given out by the Libertarian Futurist Society, the committee is extremely evenhanded in who they’ve awarded it to. Looking over the award’s history, it’s gone to roughly as many socialists as libertarians and largely to people who are neither. The common theme is science fiction that advocates for human liberty.

Finally, it’s a huge honor for me to share the award with Cory. As a novelist, a blogger, a columnist, and a speaker, he’s one of the most articulate voices for civil liberties in the digital age that we have. And he’s also been quite generous to me in reading and reviewing Nexus and Crux. I was really delighted just to be on the same shortlist.

From this year’s press release:

Doctorow, Naam tie for Best Novel

There was a tie for Best Novel: The winners are Homeland (TOR Books) by Cory Doctorow and Nexus (Angry Robot Books) by Ramez Naam.

Homeland, the sequel to Doctorow’s Prometheus winner Little Brother, follows the continuing adventures of a government-brutalized young leader of a movement of tech-savvy hackers who must decide whether to release an incendiary Wikileaks-style exposé of massive government abuse and corruption as part of a struggle against the invasive national-security state.

Nexus offers a gripping exploration of politics and new extremes of both freedom and tyranny in a near future where emerging technology opens up unprecedented possibilities for mind control or personal liberation and interpersonal connection.

The other Prometheus finalists for best pro-freedom novel of 2013 were Sarah Hoyt’s A Few Good Men (Baen Books); Naam’s Crux, the sequel to Nexus (both from Angry Robot Books); and Marcus Sakey’s Brilliance (Thomas & Mercer).

You can read the whole thing here.

Nominated for the Campbell, Clarke, Prometheus, and Kitschie!

I’m up for some awards, and on those lists with some fantastic people.

1) The Campbell Award

On Saturday the finalists for the Hugo Awards and Campbell Award for 2014 were announced.

So now I can reveal that I’m a finalist for the Campbell Award for Best New Writer.  I’m incredibly honored to be on that list, along with my friend Wes Chu, and fellow authors (and new friends) Benjanun Sriduangkaew, Sofia Samatar, and Max Gladstone.

This is an awesome list of new voices in science fiction and fantasy. Whoever wins in August, I’ll be cheering, and delighted to have been among them.

For me, this is the capstone of a few incredible months of recognition.

2) The Clarke Award

Nexus is a finalist for the 2014 Arthur C. Clarke Award. Out of 121 books submitted, Nexus was one of the 6 finalists picked, along with God’s War by Kameron Hurley (whose awesome, Hugo-nominated essay We Have Always Fought you should also read), The Disestablishment of Paradise by Phillip Mann, The Adjacent by Christopher Priest, The Machine by James Smythe, and of course, Ancillary Justice by Ann Leckie (the novel that has been on almost every award shortlist).

This is an award that’s previously gone to Margaret Atwood, Jeff Noon, Bruce Sterling, China Miéville, Neal Stephenson, Ian MacLeod, Richard Morgan, and Lauren Beukes. Being on the shortlist this year is utterly amazing.

And this year, three of the six finalists are debut novels, a fact I think is both remarkable and wonderful.

3) The Golden Tentacle (the Kitschie Award for Best Debut Novel)

Nexus was also a finalist for this year’s Golden Tentacle, the Kitschie Award for the most ‘progressive, intelligent, and entertaining’ debut novel in science fiction and fantasy. The Kitschies were announced already, and the wonderful Ann Leckie won for Ancillary Justice. You can watch her acceptance speech here.

Also on the short list was A Calculated Life by the fabulous Anne Charnock (who I’ve just met and immediately clicked with), a novel that explores similar themes to Nexus in a very different way; Stray by Monica Hesse; and the much-lauded Mr. Penumbra’s 24-Hour Bookstore by Robin Sloan.

4) The Prometheus Award

Amazingly, both Nexus and Crux are on the shortlist for the 2014 Prometheus Award. They’re up against Cory Doctorow’s Homeland, Sarah Hoyt’s A Few Good Men, and Marcus Sakey’s Brilliance. The Prometheus Award honors books that celebrate liberty, and in that respect, it’s a huge honor to be on the same ballot as Cory Doctorow, who’s a non-stop champion of individual rights, and who’s been a huge booster for Nexus and Crux.

5) An NPR Best Book of the Year

Finally, while not an award per se, it was awesome to see Nexus named as one of NPR’s Best Books of the Year.

So that’s 6 placements on 4 awards shortlists and one prominent best-of list. It’s an incredible honor.

I have no idea if I’ll win any of these, but the recognition is…amazing. It’s more than I hoped for, particularly as a debut novelist, writing stories outside the traditional norm of science fiction.

Perhaps the best thing is that I’m on those ballots with friends, with authors I’ve read and admire (and who’ve been incredibly kind to me), and with the most celebrated and innovative up-and-coming authors in the field. Indeed, many of the people on those lists are becoming friends, as we speak. It’s a privilege to share the recognition with them all. With you all.

As the winners of the rest of these awards (and the large number of Hugo Awards) are announced, I’m pretty sure I’m going to be cheering on friends and people I admire. That’s a great feeling.

Mez

p.s. – There’s a fair bit of controversy around the Hugo Awards this year.  On that topic, Kameron Hurley brings exactly the right perspective.

TEDx Talk: Linking Human Minds

Video of my TEDxRainier talk, on Linking Human Brains, is now up. This is all about the current science of sending sights, sounds, and sensations in and out of human brains, and the frontiers of augmenting and transferring memory and intelligence.  I loved giving this talk. The audience laughed a lot, as I hoped they would. It was one of the best, most receptive crowds I’ve spoken to.

The talk itself is a compilation of the very real science that I used in my novels Nexus (one of NPR’s Best Books of 2013) and Crux.

You can read fictionalized accounts of the uses and mis-uses of these technologies in the novels, each of which includes a non-fiction appendix with more on the science:
–  Nexus
–  Crux

Nexus: One of NPR’s Best Books of 2013!

I'm delighted to find that NPR has named Nexus as one of their Best Books of 2013.   It's on both the Science Fiction / Fantasy list and the Mystery / Thriller list.  On the SFF list it's up there with books by some of my favorite authors, including Charlie Stross, Neil Gaiman, Kim Stanley Robinson, and Lauren Beukes.

Tons of great books to look through here for your holiday shopping!