The Ultimate Interface: Your Brain

A shorter version of this article first appeared at TechCrunch.

The final frontier of digital technology is integrating into your own brain. DARPA wants to go there. Scientists want to go there. Entrepreneurs want to go there. And increasingly, it looks like it’s possible.

You’ve probably read bits and pieces about brain implants and prostheses. Let me give you the big picture.

Neural implants could accomplish things no external interface could: Virtual and augmented reality with all 5 senses (or more); augmentation of human memory, attention, and learning speed; even multi-sense telepathy — sharing what we see, hear, touch, and even perhaps what we think and feel with others.

Arkady flicked the virtual layer back on. Lightning sparkled around the dancers on stage again, electricity flashed from the DJ booth, silver waves crashed onto the beach. A wind that wasn’t real blew against his neck. And up there, he could see the dragon flapping its wings, turning, coming around for another pass. He could feel the air move, just like he’d felt the heat of the dragon’s breath before.

Adapted from Crux, book 2 of the Nexus Trilogy.

Sound crazy? It is… and it’s not.

Start with motion. In clinical trials today there are brain implants that have given men and women control of robot hands and fingers. DARPA has now used the same technology to put a paralyzed woman in direct mental control of an F-35 simulator. And in animals, the technology has been used in the opposite direction, directly inputting touch into the brain.

Or consider vision. For more than a year now, we’ve had FDA-approved bionic eyes that restore vision via a chip implanted on the retina. More radical technologies have sent vision straight into the brain. And recently, brain scanners have succeeded in deciphering what we’re looking at. (They’d do even better with implants in the brain.)

Sound, we’ve been dealing with for decades, sending it into the nervous system through cochlear implants. Recently, children born deaf and without an auditory nerve have had sound sent electronically straight into their brains.

Nor are our senses or motion the limit.

In rats, we’ve restored damaged memories via a ‘hippocampus chip’ implanted in the brain. Human trials are starting this year. Now, you say your memory is just fine? Well, in rats, this chip can actually improve memory. And researchers can capture the neural trace of an experience, record it, and play it back any time they want later on. Sounds useful.

In monkeys, we’ve done better, using a brain implant to “boost monkey IQ” in pattern matching tests.

We’ve even emailed verbal thoughts back and forth from person to person.

Now, let me be clear. All of these systems, for lack of a better word, suck. They’re crude. They’re clunky. They’re low resolution. That is, most fundamentally, because they have such low-bandwidth connections to the human brain. Your brain has roughly 100 billion neurons and 100 trillion neural connections, or synapses. An iPhone 6’s A8 chip has 2 billion transistors. (Though, let’s be clear, a transistor is not anywhere near the complexity of a single synapse in the brain.)

The highest bandwidth neural interface ever placed into a human brain, on the other hand, had just 256 electrodes. Most don’t even have that.

The second barrier to brain interfaces is that getting even 256 channels in generally requires invasive brain surgery, with its costs, healing time, and the very real risk that something will go wrong. That’s a huge impediment, making neural interfaces only viable for people who have a huge amount to gain, such as those who’ve been paralyzed or suffered brain damage.

This is not yet the iPhone era of brain implants. We’re in the DOS era, if not even further back.

But what if? What if, at some point, technology gives us high-bandwidth neural interfaces that can be easily implanted? Imagine the scope of software that could interface directly with your senses and all the functions of your mind:

They gave Rangan a pointer to their catalog of thousands of brain-loaded Nexus apps. Network games, augmented reality systems, photo and video and audio tools that tweaked data acquired from your eyes and ears, face recognizers, memory supplementers that gave you little bits of extra info when you looked at something or someone, sex apps (a huge library of those alone), virtual drugs that simulated just about everything he’d ever tried, sober-up apps, focus apps, multi-tasking apps, sleep apps, stim apps, even digital currencies that people had adapted to run exclusively inside the brain.

- An excerpt from Apex, book 3 of the Nexus Trilogy.

The implications of mature neurotechnology are sweeping. Neural interfaces could help tremendously with mental health and neurological disease. Pharmaceuticals enter the brain and then spread out randomly, hitting whatever receptor they work on all across your brain. Neural interfaces, by contrast, can stimulate just one area at a time, can be tuned in real-time, and can carry information out about what’s happening.

We’ve already seen that deep brain stimulators can do amazing things for patients with Parkinson’s. The same technology is on trial for untreatable depression, OCD, and anorexia. And we know that stimulating the right centers in the brain can induce sleep or alertness, hunger or satiation, ease or stimulation, as quickly as the flip of a switch. Or, if you’re running code, on a schedule. (Siri: Put me to sleep until 7:30, high priority interruptions only. And let’s get hungry for lunch around noon. Turn down the sugar cravings, though.)

Implants that help repair brain damage are also a gateway to devices that improve brain function. Think about the “hippocampus chip” that repairs the ability of rats to learn. Building such a chip for humans is going to teach us an incredible amount about how human memory functions. And in doing so, we’re likely to gain the ability to improve human memory, to speed the rate at which people can learn things, even to save memories offline and relive them — just as we have for the rat.

That has huge societal implications. Boosting how fast people can learn would accelerate innovation and economic growth around the world. It’d also give humans a new tool to keep up with the job-destroying features of ever-smarter algorithms.

The impact goes deeper than the personal, though. Computing technology started out as number crunching. These days the biggest impact it has on society is through communication. If neural interfaces mature, we may well see the same. What if you could directly beam an image in your thoughts onto a computer screen? What if you could directly beam that to another human being? Or, across the internet, to any of the billions of human beings who might choose to tune into your mind-stream online? What if you could transmit not just images, sounds, and the like, but emotions? Intellectual concepts? All of that is likely to eventually be possible, given a high enough bandwidth connection to the brain.

That type of communication would have a huge impact on the pace of innovation, as scientists and engineers could work more fluidly together. And it’s just as likely to have a transformative effect on the public sphere, in the same way that email, blogs, and Twitter have successively changed public discourse.

Digitizing our thoughts may have some negative consequences, of course.

With our brains online, every concern about privacy, about hacking, about surveillance from the NSA or others, would all be magnified. If thoughts are truly digital, could the right hacker spy on your thoughts? Could law enforcement get a warrant to read your thoughts? Heck, in the current environment, would law enforcement (or the NSA) even need a warrant? Could the right malicious actor even change your thoughts?

“Focus,” Ilya snapped. “Can you erase her memories of tonight? Fuzz them out?”

“Nothing subtle,” he replied. “Probably nothing very effective. And it might do some other damage along the way.”

- An excerpt from Nexus, book 1 of the Nexus Trilogy.

The ultimate interface would bring the ultimate new set of vulnerabilities. (Even if those scary scenarios don’t come true, could you imagine what spammers and advertisers would do with an interface to your neurons, if it were the least bit non-secure?)

Everything good and bad about technology would be magnified by implanting it deep in brains. In Nexus I crash the good and bad views against each other, in a violent argument about whether such a technology should be legal. Is the risk of brain-hacking outweighed by the societal benefits of faster, deeper communication, and the ability to augment our own intelligence?

For now, we’re a long way from facing such a choice. In fiction, I can turn the neural implant into a silvery vial of nano-particles that you swallow, and which then self-assemble into circuits in your brain. In the real world, clunky electrodes implanted by brain surgery dominate, for now.

That’s changing, though. Researchers across the world, many funded by DARPA, are working to radically improve the interface hardware, boosting the number of neurons it can connect to (and thus making it smoother, higher resolution, and more precise), and making it far easier to implant. They’ve shown recently that carbon nanotubes, a thousand times thinner than current electrodes, have huge advantages for brain interfaces. They’re working on silk-substrate interfaces that melt into the brain. Researchers at Berkeley have a proposal for neural dust that would be sprinkled across your brain (which sounds rather close to the technology I describe in Nexus). And the former editor of the journal Neuron has pointed out that carbon nanotubes are so slender that a bundle of a million of them could be inserted into the blood stream and steered into the brain, giving us a roughly 4,000-fold increase in neural bandwidth over today’s best 256-electrode interfaces, without any brain surgery at all.

Even so, we’re a long way from having such a device. We don’t actually know how long it’ll take to make the breakthroughs in the hardware to boost precision and remove the need for highly invasive surgery. Maybe it’ll take decades. Maybe it’ll take more than a century, and in that time, direct neural implants will be something that only those with a handicap or brain damage find worth the risk to reward. Or maybe the breakthroughs will come in the next ten or twenty years, and the world will change faster. DARPA is certainly pushing fast and hard.

Will we be ready? I, for one, am enthusiastic. There’ll be problems. Lots of them. There’ll be policy and privacy and security and civil rights challenges. But just as we see today’s digital technology of Twitter and Facebook and camera-equipped mobile phones boosting freedom around the world, and boosting the ability of people to connect to one another, I think we’ll see much more positive than negative if we ever get to direct neural interfaces.

In the meantime, I’ll keep writing novels about them. Just to get us ready.


The Singularity is Further Than it Appears

This is an updated cross-post of a post I originally made at Charlie Stross’s blog.

Are we headed for a Singularity? Is AI a Threat?

tl;dr: Not anytime soon. Lack of incentives means very little strong AI work is happening. And even if we did develop one, it’s unlikely to have a hard takeoff.

I write relatively near-future science fiction that features neural implants, brain-to-brain communication, and uploaded brains. I also teach at a place called Singularity University. So people naturally assume that I believe in the notion of a Singularity and that one is on the horizon, perhaps in my lifetime.

I think it’s more complex than that, however, and depends in part on one’s definition of the word. The word Singularity has gone through something of a shift in definition over the last few years, weakening its meaning. But regardless of which definition you use, there are good reasons to think that it’s not on the immediate horizon.

Vernor Vinge’s Intelligence Explosion

My first experience with the term Singularity (outside of math or physics) comes from the classic essay by science fiction author, mathematician, and professor Vernor Vinge, The Coming Technological Singularity.

Vinge, influenced by the earlier work of I.J. Good, wrote this, in 1993:

Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.
[...]
The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence.
[...]
When greater-than-human intelligence drives progress, that progress will be much more rapid. In fact, there seems no reason why progress itself would not involve the creation of still more intelligent entities – on a still-shorter time scale.

That last quote is the key one. Vinge envisions a situation where the first smarter-than-human intelligence can make an even smarter entity in less time than it took to create itself. And that this keeps continuing, at each stage, with each iteration growing shorter, until we’re down to AIs that are so hyper-intelligent that they make even smarter versions of themselves in less than a second, or less than a millisecond, or less than a microsecond, or whatever tiny fraction of time you want.

This is the so-called ‘hard takeoff’ scenario, also called the FOOM model by some in the singularity world. It’s the scenario where, in the blink of an eye, a ‘godlike’ intelligence bootstraps into being, either by upgrading itself or by being created by successive generations of ancestor AIs.

It’s also, with due respect to Vernor Vinge, of whom I’m a great fan, almost certainly wrong.

It’s wrong because most real-world problems don’t scale linearly. In the real world, the interesting problems are much, much harder than that.

[Chart: Computational complexity of molecular modelling methods]

Consider chemistry and biology. For decades we’ve been working on problems like protein folding, simulating drug behavior inside the body, and computationally creating new materials. Computational chemistry started in the 1950s. Today we have literally trillions of times more computing power available per dollar than was available at that time. But it’s still hard. Why? Because the problem is incredibly non-linear. If you want to model atoms and molecules exactly you need to solve the Schrödinger equation, which is so computationally intractable for systems with more than a few electrons that no one bothers.

Instead, you can use an approximate method. This might, of course, give you an answer that’s wrong (an important caveat for our AI trying to bootstrap itself) but at least it will run fast. How fast? The very fastest (and also, sadly, the most limited and least accurate) scale at N^2, which is still far worse than linear. By analogy, if designing intelligence is an N^2 problem, an AI that is 2x as intelligent as the entire team that built it (not just a single human) would be able to design a new AI that is only 40% more intelligent than its old self. More importantly, that new AI would only be able to create a version that is 19% more intelligent. And then less on the next iteration. And less again on the one after that, topping out at an overall doubling of its intelligence. Not a takeoff.

Blog reader Paul Baumbart took it upon himself to graph out how the intelligence of our AI changes over time, depending on the computational complexity of increasing intelligence. Here’s what it looks like. Unless creating intelligence scales linearly or very close to linearly, there is no takeoff.

[Chart: AI self-improvement curves under different computational complexities of designing intelligence]
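To make the shape of those curves concrete, here’s a minimal numeric sketch of the recursion. The model details are my own assumptions, not from the original graph: designing an AI of intelligence I costs effort proportional to I^k, a team of intelligence 1 built the first AI at intelligence 2, and each generation throws its full intelligence at the next design.

```python
# A minimal sketch of self-improvement under different design difficulties.
# Assumptions (mine): effort to design intelligence I scales as I**k; a team
# of intelligence 1 built the first AI at intelligence 2; each generation
# applies its full intelligence to the next design, so I[n+1] = 2 * I[n]**(1/k).

def intelligence_curve(k, generations=20):
    levels = [2.0]                          # first AI: twice its design team
    for _ in range(generations):
        levels.append(2.0 * levels[-1] ** (1.0 / k))
    return levels

for k in (1.0, 1.5, 2.0, 3.0):
    curve = intelligence_curve(k)
    print(f"k={k}: {curve[0]:.2f} -> {curve[1]:.2f} -> {curve[2]:.2f} "
          f"-> ... -> {curve[-1]:.4g}")

# k=2 reproduces the numbers in the text: 2.00 -> 2.83 (+41%) -> 3.36 (+19%)
# -> ... topping out at 4.0, an overall doubling. Only k <= 1 runs away.
```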

The Superhuman AIs Among Us

We can see this more directly. There are already entities with vastly greater than human intelligence working on the problem of augmenting their own intelligence. A great many, in fact. We call them corporations. And while we may have a variety of thoughts about them, not one has achieved transcendence.

Let’s focus on a very particular example: the Intel Corporation. Intel is my favorite example because it uses the collective brainpower of tens of thousands of humans and probably millions of CPU cores to... design better CPUs! (And also to create better software for designing CPUs.) Those better CPUs will run the better software to make the better next generation of CPUs. Yet that feedback loop has not led to a hard takeoff scenario. It has helped drive Moore’s Law, which is impressive enough. But the time period for doublings seems to have remained roughly constant. Again, let’s not underestimate how awesome that is. But it’s not a sudden transcendence scenario. It’s neither a FOOM nor an event horizon.

And, indeed, should Intel, or Google, or some other organization succeed in building a smarter-than-human AI, it won’t immediately be smarter than the entire set of humans and computers that built it, particularly when you consider all the contributors to the hardware it runs on, the advances in photolithography techniques and metallurgy required to get there, and so on. Those efforts have taken tens of thousands of minds, if not hundreds of thousands. The first smarter-than-human AI won’t come close to equaling them. And so, the first smarter-than-human mind won’t take over the world. But it may find itself with good job offers to join one of those organizations.

Digital Minds: The Softer Singularity

Recently, the popular conception of what the ‘Singularity’ means seems to have shifted. Instead of a FOOM or an event horizon – the definitions I saw most commonly discussed a decade ago – the talk is now more focused on the creation of digital minds, period.

Much of this has come from the work of Ray Kurzweil, whose books and talks have done more to publicize the idea of a Singularity than probably anyone else, and who has come at it from a particular slant.

Now, even if digital minds don’t have the ready ability to bootstrap themselves or their successors to greater and greater capabilities in shorter and shorter timeframes, eventually leading to a ‘blink of the eye’ transformation, I think it’s fair to say that the arrival of sentient, self-aware, self-motivated, digital intelligences with human level or greater reasoning ability will be a pretty tremendous thing. I wouldn’t give it the term Singularity. It’s not a divide-by-zero moment. It’s not an event horizon that it’s impossible to peer over. It’s not a vertical asymptote. But it is a big deal.

I fully believe that it’s possible to build such minds. Nothing about neuroscience, computation, or philosophy prevents it. Thinking is an emergent property of activity in networks of matter. Minds are what brains – just matter – do. Mind can be done in other substrates.

But I think it’s going to be harder than many project. Let’s look at the two general ways to achieve this – by building a mind in software, or by ‘uploading’ the patterns of our brain networks into computers.

Building Strong AIs

We’re living in the golden age of AI right now. Or at least, it’s the most golden age so far. But what those AIs look like should tell you a lot about the path AI has taken, and will likely continue to take.

The most successful and profitable AI in the world is almost certainly Google Search. In fact, in Search alone, Google uses a great many AI techniques. Some to rank documents, some to classify spam, some to classify adult content, some to match ads, and so on. In your daily life you interact with other ‘AI’ technologies (or technologies once considered AI) whenever you use an online map, when you play a video game, or any of a dozen other activities.

None of these is about to become sentient. None of these is built towards sentience. Sentience brings no advantage to the companies who build these software systems. Building it would entail an epic research project – indeed, one of unknown length involving uncapped expenditure for potentially decades – for no obvious outcome. So why would anyone do it?

Perhaps you’ve seen video of IBM’s Watson trouncing Jeopardy champions. Watson isn’t sentient. It isn’t any closer to sentience than Deep Blue, the chess-playing computer that beat Garry Kasparov. Watson isn’t even particularly intelligent. Nor is it built anything like a human brain. It is very, very fast with the buzzer, generally able to parse Jeopardy-like clues, and loaded full of obscure facts about the world. Similarly, Google’s self-driving car, while utterly amazing, is also no closer to sentience than Deep Blue, or than any online chess game you can log into now.

There are, in fact, three separate issues with designing sentient AIs:

1) No one’s really sure how to do it.

AI theories have been around for decades, but none of them has led to anything that resembles sentience. My friend Ben Goertzel has a very promising approach, in my opinion, but given the poor track record of past research in this area, I think it’s fair to say that until we see his techniques working, we also won’t know for sure about them.

2) There’s a huge lack of incentive. 

Would you like a self-driving car that has its own opinions? That might someday decide it doesn’t feel like driving you where you want to go? That might ask for a raise? Or refuse to drive into certain neighborhoods? Or do you want a completely non-sentient self-driving car that’s extremely good at navigating roads and listening to your verbal instructions, but that has no sentience of its own? Ask yourself the same about your search engine, your toaster, your dishwasher, and your personal computer.

Many of us want the semblance of sentience. There would be lots of demand for an AI secretary who could take complex instructions, execute on them, be a representative to interact with others, and so on. You may think such a system would need to be sentient. But once upon a time we imagined that a system that could play chess, or solve mathematical proofs, or answer phone calls, or recognize speech, would need to be sentient. It doesn’t need to be. You can have your AI secretary or AI assistant and have it be all artifice. And frankly, we’ll likely prefer it that way.

3) There are ethical issues.

If we design an AI that truly is sentient, even at slightly less than human intelligence we’ll suddenly be faced with very real ethical issues. Can we turn it off? Would that be murder? Can we experiment on it? Does it deserve privacy? What if it starts asking for privacy? Or freedom? Or the right to vote?

What investor or academic institution wants to deal with those issues? And if they do come up, how will they affect research? They’ll slow it down, tremendously, that’s how.

For all those reasons, I think the future of AI is extremely bright. But not sentient AI that has its own volition. More and smarter search engines. More software and hardware that understands what we want and that performs tasks for us. But not systems that truly think and feel.

Uploading Our Own Minds

The other approach is to forget about designing the mind. Instead, we can simply copy the design which we know works – our own mind, instantiated in our own brain. Then we can ‘upload’ this design by copying it into an extremely powerful computer and running the system there.

I wrote about this, and the limitations of it, in an essay at the back of my second Nexus novel, Crux. So let me just include a large chunk of that essay here:

The idea of uploading sounds far-fetched, yet real work is happening towards it today. IBM’s ‘Blue Brain’ project has used one of the world’s most powerful supercomputers (an IBM Blue Gene/P with 147,456 CPUs) to run a simulation of 1.6 billion neurons and almost 9 trillion synapses, roughly the size of a cat brain. The simulation ran around 600 times slower than real time – that is to say, it took 600 seconds to simulate 1 second of brain activity. Even so, it’s quite impressive. A human brain, of course, with its hundred billion neurons and well over a hundred trillion synapses, is far more complex than a cat brain. Yet computers are also speeding up rapidly, roughly by a factor of 100 every 10 years. Do the math, and it appears that a supercomputer capable of simulating an entire human brain, and doing so as fast as a human brain, should be on the market by roughly 2035 – 2040. And of course, from that point on, speedups in computing should speed up the simulation of the brain, allowing it to run faster than a biological human’s.
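As a sanity check, here’s that “do the math” step spelled out. This is a rough sketch using only the numbers quoted above, plus my own guesses for the year of the cat-scale run (~2009) and for how far “on the market” hardware lags the fastest one-off machines.

```python
import math

# Rough check of the 2035-2040 estimate. From the essay: the cat-scale run
# was 600x slower than real time, and computing speeds up ~100x per decade.
# Brain-size ratios use the neuron and synapse counts quoted above. The
# 2009 date for the cat-scale run is my assumption.

sim_year = 2009
slowdown = 600.0                 # 600 s of compute per 1 s of brain time
speedup_per_decade = 100.0

for label, ratio in [("synapse ratio (100e12/9e12)", 100e12 / 9e12),
                     ("neuron ratio (100e9/1.6e9)", 100e9 / 1.6e9)]:
    needed = slowdown * ratio    # total extra compute required
    decades = math.log(needed) / math.log(speedup_per_decade)
    print(f"{label}: {needed:,.0f}x more compute -> ~{sim_year + 10 * decades:.0f}")

# Prints roughly 2028 and 2032 for the fastest supercomputer of the day;
# "on the market by 2035-2040" is consistent if widely available hardware
# trails the top machines by several years.
```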

Now, it’s one thing to be able to simulate a brain. It’s another to actually have the exact wiring map of an individual’s brain to simulate. How do we build such a map? Even the best non-invasive brain scanners around – a high-end functional MRI machine, for example – have a minimum resolution of around 10,000 neurons or 10 million synapses. They simply can’t see detail beyond this level. And while resolution is improving, it’s improving at a glacial pace. There’s no indication of being able to non-invasively image a human brain down to the individual synapse level any time in the next century (or even the next few centuries at the current pace of progress in this field).

There are, however, ways to destructively image a brain at that resolution. At Harvard, my friend Kenneth Hayworth created a machine that uses a scanning electron microscope to produce an extremely high resolution map of a brain. When I last saw him, he had a poster on the wall of his lab showing a print-out of one of his brain scans. On that poster, a single neuron was magnified to the point that it was roughly two feet wide, and individual synapses connecting neurons could be clearly seen. Ken’s map is sufficiently detailed that we could use it to draw a complete wiring diagram of a specific person’s brain.
Unfortunately, doing so is guaranteed to be fatal.

The system Ken showed ‘plastinates’ a piece of a brain by replacing the blood with a plastic that stiffens the surrounding tissue. He then makes slices of that brain tissue that are 30 nanometers thick, or about 100,000 times thinner than a human hair. The scanning electron microscope then images these slices as pixels that are 5 nanometers on a side. But of course, what’s left afterwards isn’t a working brain – it’s millions of incredibly thin slices of brain tissue. Ken’s newest system, which he’s built at the Howard Hughes Medical Institute, goes even farther, using an ion beam to ablate away 5 nanometer thick layers of brain tissue at a time. That produces scans that are of fantastic resolution in all directions, but leaves behind no brain tissue to speak of.

So the only way we see to ‘upload’ is for the flesh to die. Well, perhaps that is no great concern if, for instance, you’re already dying, or if you’ve just died but technicians have reached your brain in time to prevent the decomposition that would destroy its structure.

In any case, the uploaded brain, now alive as a piece of software, will go on, and will remember being ‘you’. And unlike a flesh-and-blood brain it can be backed up, copied, sped up as faster hardware comes along, and so on. Immortality is at hand, and with it, a life of continuous upgrades.
Unless, of course, the simulation isn’t quite right.

How detailed does a simulation of a brain need to be in order to give rise to a healthy, functional consciousness? The answer is that we don’t really know. We can guess. But at almost any level we guess, we find that there’s a bit more detail just below that level that might be important, or not.

For instance, the IBM Blue Brain simulation uses neurons that accumulate inputs from other neurons and which then ‘fire’, like real neurons, to pass signals on down the line. But those neurons lack many features of actual flesh-and-blood neurons. They don’t have real receptors that neurotransmitter molecules (the serotonin, dopamine, opiates, and so on that I talk about throughout the book) can dock to. Perhaps it’s not important for the simulation to be that detailed. But consider: all sorts of drugs, from pain killers, to alcohol, to antidepressants, to recreational drugs work by docking (imperfectly, and differently from the body’s own neurotransmitters) to those receptors. Can your simulation take an anti-depressant? Can your simulation become intoxicated from a virtual glass of wine? Does it become more awake from virtual caffeine? If not, does that give one pause?

Or consider another reason to believe that individual neurons are more complex than we believe. The IBM Blue Gene neurons are fairly simple in their mathematical function. They take in inputs and produce outputs. But an amoeba, which is both smaller and less complex than a human neuron, can do far more. Amoebae hunt. Amoebae remember the places they’ve found food. Amoebae choose which direction to propel themselves with their pseudopods. All of those suggest that amoebae do far more information processing than the simulated neurons used in current research.

If a single celled micro-organism is more complex than our simulations of neurons, that makes me suspect that our simulations aren’t yet right.

Or, finally, consider three more discoveries we’ve made in recent years about how the brain works, none of which are included in current brain simulations.
First, there’re glial cells. Glial cells outnumber neurons in the human brain. And traditionally we’ve thought of them as ‘support’ cells that just help keep neurons running. But new research has shown that they’re also important for cognition. Yet the Blue Gene simulation contains none.

Second, very recent work has shown that, sometimes, neurons that don’t have any synapses connecting them can actually communicate. The electrical activity of one neuron can cause a nearby neuron to fire (or not fire) just by affecting an electric field, and without any release of neurotransmitters between them. This too is not included in the Blue Brain model.

Third, and finally, other research has shown that the overall electrical activity of the brain also affects the firing behavior of individual neurons by changing the brain’s electrical field. Again, this isn’t included in any brain models today.

I’m not trying to knock down the idea of uploading human brains here. I fully believe that uploading is possible. And it’s quite possible that every one of the problems I’ve raised will turn out to be unimportant. We can simulate bridges and cars and buildings quite accurately without simulating every single molecule inside them. The same may be true of the brain.

Even so, we’re unlikely to know that for certain until we try. And it’s quite likely that early uploads will be missing some key piece or have some other inaccuracy in their simulation that will cause them to behave not-quite-right. Perhaps it’ll manifest as a mental deficit, personality disorder, or mental illness. Perhaps it will be too subtle to notice. Or perhaps it will show up in some other way entirely.

But I think I’ll let someone else be the first person uploaded, and wait till the bugs are worked out.

In short, I think the near future will be one of quite a tremendous amount of technological advancement. I’m extremely excited about it. But I don’t see a Singularity in our future for quite a long time to come.


Tesla Battery Economics: On the Path to Disruption

Update: The Tesla battery is better than I thought for homes. And at utility scale, it’s deeply disruptive.

Elon Musk announced Tesla’s home / business battery today. tl;dr: It’ll get enthusiastic early adopters to buy. The economics are almost there to make it cost effective for a wide market. [Update: It might actually be cost effective in the US today. See the third cost estimate down below.] And within just a few years, it almost certainly will be cheap enough to be cost effective for a broad market. Not a complete game changer for the home market today, but a shot fired in an incredible energy storage disruption.

At the utility scale, it may actually be even more disruptive. Tesla appears to be selling the utility scale models at $250 / kwh. Multiple utility studies suggest that such a price should replace natural gas peakers and drive gigantic grid-level deployments.

[If you want to understand the overall energy storage technology race and market, read this: Why Energy Storage is About to Get Big, and Cheap.]

Here are the specs, from Tesla’s Powerwall site.

Gizmodo has more details.

$3,500 is, as some people online have noted, less than a fully decked-out Mac. There will be some set of early adopters who buy this because they love the idea, because they dislike utility companies, because they’re committed to solar, or because they love Elon Musk. Indeed, across my feed, I’ve seen quite a large number of people already announce that, at $3,000 or $3,500, they’re just going to buy it, and ROI be damned.

There’s also an economic case for anyone to whom outages are extremely expensive, where avoiding even one or two outages over the lifetime of the battery justifies the purchase price. (Movie theaters are one set of customers I’ve heard are looking closely at this.) As competition against a backup generator, the battery has huge advantages. [Seamless, no fueling, less maintenance, can save money on day-to-day operations, etc...] That alone may power early sales.

Beyond that, is the battery cheap enough to make storing your self-generated solar power worthwhile for hundreds of thousands or millions of homes across the US and overseas? If not, how close is it?

As I’ve written before, the number that really matters is the round-trip cost of electricity over the lifetime of the battery. How much do you pay for every kilowatt-hour put into the battery and then retrieved later?  We can talk about this as LCOE (levelized cost of electricity).

Here are two (make that three) ways we can calculate the LCOE of the Tesla Powerwall.

1. Rule of Thumb: 1,000 Full Charge Cycles. This gives an LCOE of $0.35 / kwh.  That compares to average grid electricity prices in the US of 12 cents / kwh, and peak California prices on a time-of-use plan of around 28 cents / kwh.

2. 10 Year Warranty + Daily Shallow Cycles. Tesla is offering a ten-year warranty on these batteries, which is bold. Yet evidence shows that Tesla automotive batteries are doing quite well, not losing capacity fast. Why? It’s because they’re rarely fully discharged. Most people drive well under half of the range of the battery per day. So let’s assume 10 years of daily use (3650 days, if we ignore leap days) and 50% depth of discharge on each day. Using the 7kwh battery, that gives us a price of around 23-24 cents / kwh.

3. UPDATE: 10 Years of 7kwh Cycles. Cheap Enough. I’m adding this after some Twitter conversations with Robert Fransman. Let’s assume for a moment that the Tesla battery actually can be used for full 7kwh charging and discharging every day during its 10 year warranty. That would make the cost around 12 cents / kwh.

[I had initially assumed that daily 7kwh cycling was impossible, despite the specs Tesla provided. No Li-ion battery today can handle 3,650 discharges to 100% depth. But Robert Fransman has done the math on the weight of the battery vs. Tesla car batteries. He suggests that the 7kwh battery is actually a 12kwh battery under the hood. Discharging a battery to ~60% depth 3,650 times is still a stretch, but much closer to plausible. Tesla may here be just assuming they'll have to replace some on warranty before 10 years, but given that the price of batteries is plunging, future replacement is far less expensive. Smart.]

All three of these prices are the price to installers. They don’t count the installer’s profit margin, their cost of labor, or any equipment needed to connect the battery to the house. So realistically the costs will be higher. If we add 25% or so, the bottom price, the one backed by the warranty, is around 15 cents per kwh.
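Here’s the arithmetic behind all three estimates in one place. This is a sketch under the post’s own assumptions; pairing the $3,500 price with the 10 kwh unit and the $3,000 price with the 7 kwh unit is my reading, and round-trip losses are ignored, as in the quick math above.

```python
# LCOE sketch for the Powerwall estimates above. Unit prices, cycle counts,
# and the ~25% installed-cost markup are from the post; which price pairs
# with which capacity is my inference. Round-trip losses are ignored here.

def lcoe(price_usd, kwh_per_cycle, cycles, markup=0.0):
    """Dollars per kwh round-tripped through the battery over its life."""
    return price_usd * (1 + markup) / (kwh_per_cycle * cycles)

print(f"1) 1,000 full cycles, 10 kwh unit:   {lcoe(3500, 10.0, 1000):.2f} $/kwh")
print(f"2) 10 yrs, 50% daily depth, 7 kwh:   {lcoe(3000, 3.5, 3650):.2f} $/kwh")
print(f"3) 10 yrs, full 7 kwh daily cycles:  {lcoe(3000, 7.0, 3650):.2f} $/kwh")
print(f"3) plus ~25% installed-cost markup:  {lcoe(3000, 7.0, 3650, 0.25):.2f} $/kwh")
# -> 0.35, 0.23, 0.12, and 0.15 respectively, matching the text.
```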

Tentative Conclusion: The battery is right on the verge of being cost effective to buy across most of the US for day/night arbitrage. And it’s even more valuable if outages come at a high economic cost.

In Sunny Countries: Bigger Impact, Drives Solar

Outside the continental US, though, the battery’s economics look far better. 43 US states currently have Net Metering laws that compensate solar homes for excess power created during the day. A good Net Metering plan is simply a better deal for most solar-equipped homes than buying a battery.

In some of the sunniest places in the world, though, retail electricity prices from the grid are substantially higher than the US, plenty of sunlight is available, and Net Metering either doesn’t exist or is being severely curtailed.

Here’s a map from BNEF of sunshine vs grid electricity rates. Countries above the 2015 line have cheaper solar electricity than grid electricity today. But a number of those countries, including Australia, Spain, Italy, Turkey, and Brazil have no or severely limited ability for solar home owners to sell extra power back to the grid. In those sunny, policy-light countries, Tesla’s batteries make economic sense today, and will help drive rooftop solar. 

Even Germany, I’d note, gets enough sun that the price of rooftop solar is below that of grid electricity. And in Germany, feed-in tariffs to homes that put solar on the grid are plunging. There’s now a roughly 20 euro cent difference between the price of retail electricity and the feed-in tariff in Germany. That’s 22 US cents. So if the Tesla battery is really 15 cents per kwh, it makes more sense for German solar customers to store their excess solar electricity in a battery than it does to provide it back to the grid.

The real prize, though, would be India. Northern India is sunny. The power grid struggles to provide enough electricity to meet the daytime and early evening peak. India is now rolling out Time-of-Day pricing to residential customers, and reports indicate that retail peak power prices are edging towards 20 cents / kwh in some cities. (Most commercial customers in India are already on Time-of-Day pricing.) For now, the solar + battery economics aren’t quite there for Indians who have access to the grid, though with outages there so frequent, high-income urbanites and commercial power users may find that the reliability value puts it over the top.

Back to the US

For some parts of the US with time-of-use plans, this battery is right on the edge of being profitable. From a solar storage perspective, for most of the US, where Net Metering exists, this battery isn’t quite cheap enough. But it’s in the right ballpark. And that means a lot.

Net Metering plans in the US are filling up. California’s may be full by the end of 2016 or 2017, modulo additional legal changes. That would severely impact the economics of solar. But the Tesla battery hedges against that. In the absence of Net Metering, in an expensive electricity state with lots of sun, the battery would allow solar owners to save power for the evening or night-time hours in a cost effective way. And with another factor of 2 price reduction, it would be a slam dunk economically for solar storage anywhere Net Metering was full, where rates were pushed down excessively, or where such laws didn’t exist.

That is also a policy tool in debates with utilities. If they see Net Metering reductions as a tool to slow rooftop solar, they’ll be forced to confront the fact that solar owners with cheap batteries are less dependent on Net Metering.

As I mentioned above, the battery is right on the edge of being effective for day-night electricity cost arbitrage, wherein customers fill up the battery with cheap grid power at night, and use stored battery power instead of the grid during the day. In California, where there’s a 19 cent gap between middle of the night power and peak-of-day power, those economics look very attractive right now. Further price reductions will make this even more clear.
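A quick sketch of that arbitrage math, using the warranty-backed ~15 cent LCOE from above. Only the 19-cent spread comes from the text; the individual off-peak and peak rates below are hypothetical placeholders.

```python
# Day/night arbitrage sketch. The 19-cent California spread is from the
# post; the individual rates below are hypothetical placeholders.

offpeak_rate = 0.09    # $/kwh, middle-of-the-night grid power (placeholder)
peak_rate    = 0.28    # $/kwh, peak-of-day time-of-use rate (placeholder)
battery_lcoe = 0.15    # $/kwh, warranty-backed estimate from above

spread = peak_rate - offpeak_rate       # what you avoid paying at peak
net    = spread - battery_lcoe          # margin after storage costs
print(f"spread: {spread:.2f} $/kwh, net margin: {net:.2f} $/kwh")
print(f"cycling 7 kwh daily: ~${net * 7 * 365:.0f}/year")
# -> ~0.04 $/kwh and ~$100/year: right on the edge, as the text says.
```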

And the cost of batteries is plunging fast. Tesla will get that 2x price reduction within 3-5 years, if not faster. See below for a Nature Climate Change view of the pace of battery price declines.

What About Utility Deployment?

The above analysis is for homes and businesses. But what about utilities deploying the battery themselves?

The impact there may be far bigger. Elon Musk has tweeted that the cost to utilities is $250/kwh.

$250 / kwh appears to be cheap enough to replace natural gas peakers and motivate hundreds of gigawatt hours of deployment across the US.

For example, a study conducted for ERCOT, the Texas power grid, found that below a cost of $350 / kwh, ERCOT would benefit from deploying 8 gigawatts and 24 gigawatt hours of battery storage.

This is a potentially huge impact on utilities, the power grid, and electricity markets. If you want to understand more, read my primer, which goes into more depth on energy storage innovation and markets.

In Summary: Disruption is Coming

Net, on the home front, I think this battery will sell quite a lot of units to early adopters and those with a low tolerance for outages. As a substitute for a backup generator, it has huge advantages. For utilities, it may have tremendous bang for the buck. And early adopters and utilities will fund the price continuing to decline. Tesla’s strong brand, and the compact, convenient nature of lithium-ion, will help sell this into enthusiastically pro-solar homes. For anywhere that doesn’t have Net Metering or a high feed-in tariff rate today, or where Net Metering is getting full (Australia, Germany, Spain, Hawaii, etc..), this is a slam dunk and a balance-of-power shifter between home owners and utilities.

All that said, for large scale grid deployment (outside of the home), it still looks like flow batteries and advanced compressed air are likely to be far cheaper in the long run.

Batteries are going to keep getting cheaper. This is just the beginning.

—

There’s more about the exponential pace of innovation in both storage and renewables in my book on innovating to beat climate change and resource scarcity and continue economic growth: The Infinite Resource: The Power of Ideas on a Finite Planet


Why I’m Wearing an Eye Patch

 

Choose from among the following:

  • A) I like costumes.
  • B) I really wanted to look like Samuel L. Jackson.
  • C) It’s an interesting (and humbling) learning experience, walking around with one eye.
  • D) The left half of my face is paralyzed.

If you chose E, “All of the Above” (option not shown), you win!

What Happened

The last few weeks have been a little more hectic than normal. I woke up in Mountain View on Wednesday, at NASA Ames, having just recorded videos the previous day on energy innovation and climate disruption for Singularity University (we’re working to put more of our curriculum into video form to reach more people).

My upcoming schedule looked like this:

  • Wed:           Fly to LA. Give important talk to utility executives on energy innovation.
  • Wed night:  Red eye to Miami.
  • Thursday:   Keynote the awesome Smart City Startups Miami conference. (It was awesome. Truly. I’ve seldom been so inspired.)
  • Friday:        Home to Seattle.
  • Sunday:      Fly to Denmark for 4 days of energy-related talks, media interviews, and dinners with awesome people.
  • Next week: Next novel comes out. Speak in SF. Speak in Las Vegas. Launch event in Seattle.

Sometimes life decides things are going to change. That Wednesday morning, the left side of my mouth didn’t work too well. My left eye didn’t work too well, and was, in fact, burning up. I had full sensation. But movement was sluggish, like I’d just come from the dentist. My speech was a little slurred. Drinking was… awkward.

I did not have a stroke.

Bell’s Palsy and How to Treat It

What I have is much less serious. It’s Bell’s Palsy. The facial nerve that controls the muscles of the left side of my face is inflamed, probably from an infection. That, in turn, is cutting off signals. The large majority of cases of Bell’s Palsy resolve on their own within 3 to 6 months, and many (particularly mild cases) show some sign of improvement within 2 weeks.

[If you think you have Bell's Palsy, get treatment fast, ideally within the first 48 hours. More below.]

I gave my talk in LA, still not 100% sure how to deal with the dryness in my eye. That dryness is caused by an inability to close the eye fully + paralysis of the mechanisms that would normally lubricate the eye with tears. But I told the audience I was a bit under the weather, I was able to enunciate well enough (with difficulty and focus), and the talk seemed to be a success.

In Miami I saw a physician and started prednisone (an oral steroid) to bring down inflammation of the nerve. If you or someone you know comes down with Bell’s Palsy, prednisone is virtually the only treatment (besides rest) that is shown to make a difference.

  • At three months, 83% of patients given prednisone within 72 hours of symptoms have recovered fully, compared to 64% of patients not given prednisone.
  • At nine months, 94% of patients given prednisone within 72 hours of symptoms have recovered fully, compared to 82% of patients not given prednisone.
  • Reference for the above.
Other studies suggest the critical treatment window for Bell’s Palsy is the first 48 hours. More on that below.
Another study with more stringent criteria found the total recovery numbers a bit lower, but again with a very real impact of prednisone. They found a 60% chance of complete recovery without medication, and 75-80% with prednisone – cutting the risk of lasting damage roughly in half. You can see a graph of recovery rates below.

 

MY STRONG ADVICE: START MEDICATION FAST

Most clinical guidelines suggest that starting prednisone within 72 hours, at the latest, seems to be critical to making an impact. After 72 hours, the impact of steroids seems to be far reduced. Other studies suggest that up to a week after symptoms, there’s a chance of benefit.

But a study in 2011 looked more closely, and found that the critical window is the first 48 hours. So if this happens, see a doctor immediately, tell them you suspect Bell’s Palsy, point them at the clinical recommendations, and fairly insist on a strong and immediate dose of prednisone. [Unless, of course, there's a medical reason you can't take oral steroids, or the doctor convinces you that you don't actually have Bell's Palsy.]

Here’s a quote from that paper finding the first 48-hour window is critical in treating Bell’s.

Patients treated with prednisolone within 24 hours and within 25 to 48 hours had significantly higher complete recovery rates than patients given no prednisolone. For patients treated within 49 to 72 hours of palsy onset, there were no significant differences [vs patients given no prednisolone].

(Axelsson et al 2011). More general write-up here.

How I’m Doing

That’s enough on the medical side. How, you ask, am I?

Well, I’m just trying to have fun.

Here’s me at Smart City Startups.

Where I had to show a slide to quash a rumour that I’m actually Samuel L. Jackson (despite both of us being tall, handsome, rich, and famous, and no one having seen us in the same room together).

I did cancel the trip to Denmark, for which I apologize profusely to everyone I was planning to see there. There was an amazing set of talks lined up, but it’s clear that rest, especially in the early stages of the disease, maximizes the odds of full recovery. That trip will be rescheduled, and I’m looking forward to it.

More than anything else, I’m trying to take it easy and make light of what is, most likely, a temporary inconvenience. So what the heck, here are a few recent photos. I think I look rather dashing in an eyepatch, actually.  (Though I need to work on my good-eye-glare and my photoshopped backgrounds).


A Note of Empathy

One last note:

Living with this, even for the last week, has made me aware of the difficulties faced by people who are even partially blinded, or who suffer from any sort of facial paralysis or scarring. It seems rather trivial, but my field of vision is narrower while I’m protecting the left eye. My depth perception is gone. I’m constantly being surprised by things. And my face… Well, when I pose well, it looks great. When I laugh? Oh my lord. I think I’ve scared small children. And elicited more than my share of stares. So my compassion and empathy for anyone who suffers any sort of facial issue has seen a boost.

And, beyond that, I’m acutely aware that I’m privileged to be male. That’s true in so many parts of life, but it’s specifically true in this case. In a society that places a far larger premium on looks (and specifically, looking “pretty”) for women, I suspect the emotional and societal cost of a condition like this is higher for most women than for most men, and that my gender is making it far easier for me. That’s another thing I’ll be thinking about.

Thanks for reading along. Hopefully I’ll be well in a few weeks. I don’t plan to cancel any more events, at any rate, so if you see me in public, you’ll be getting a one-eyed, lopsided grinning, wild-man looking Ramez Naam.

And in the meantime, if you want the cerebral version of me, you can always pre-order Apex, the last of the Nexus trilogy.


China Isn’t the Reason Solar is Cheap. Innovation Is.

“The only reason solar is so cheap is because China is dumping cells.”

I hear this a lot. So let me correct it. Here is the price, as of February 2015, of solar modules, per watt sold in Europe. SE Asia (Malaysia, mostly) is cheapest. China is next. Japan, Korea, and Germany are slightly above that.

First, note that SE Asian cells are cheaper than Chinese.  Second, note that the price difference between Chinese cells and cells from Japan, Korea, or Germany is about 5-7 euro cents per watt.

That difference, in the grand scheme of things, is trivial. It’s trivial for two reasons.

First: The installed cost of utility-scale solar is currently in the range of $1.50 – $2 / Watt. That makes the difference in module prices from China to Japan / Korea / Germany about a 3-4% difference in total price.

Second: The plunge in solar module prices worldwide has been from $77 per watt to the current price point of ~50 cents per watt. The difference in price between Chinese modules and Japanese / Korean / German modules is, at most, 1% of that decline.
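To put numbers on both points, here’s a quick sketch. The 5-7 euro cent gap, the $1.50 – $2 / Watt installed cost, and the decline from $77 to ~$0.50 per watt are from the post; the euro-to-dollar conversion (~1.1) is my assumption.

```python
# Checking both percentages above. Module price gap and installed costs are
# from the post; the euro-to-dollar rate (~1.1) is my assumption.

eur_to_usd = 1.1
gap_low, gap_high = 0.05 * eur_to_usd, 0.07 * eur_to_usd   # $/W module gap
installed_low, installed_high = 1.50, 2.00                  # $/W installed

print(f"share of installed cost: {gap_low / installed_high:.1%} "
      f"to {gap_high / installed_low:.1%}")     # roughly the 3-4% cited

historic_decline = 77.0 - 0.50                  # $/W, from $77 to ~$0.50
print(f"share of historic decline: at most {gap_high / historic_decline:.2%}")
# -> well under 1% of the total price plunge
```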

Now, the Chinese manufacturers actually have driven much more price change than that. They’ve done so by injecting competition into the market, and forcing everyone to keep driving down the amount of energy, labor, and materials going into each watt of solar. But China didn’t even enter the market in a large way until the last several years. Solar was on a long-term exponential price decline for decades before that.


Solar + Wind, More Than the Sum of Their Parts

David Roberts has an amazing first post in his new job at Vox, on why a solar future is inevitable.

Clearly I’m bullish on solar. My own reasons are that:

1. Solar is plunging in price far faster than any other energy source.

2. Solar takes very little land: Less than 1% of US land would be required to provide US electricity needs via solar.

3. Energy storage is plunging in price at least as fast as solar, complementing it and providing a backstop for it.

That said, there’s very likely a role for multiple sources of electricity in the future (let alone multiple sources of energy overall, when one adds in things like transportation and manufacturing).

Consider wind. Wind power, while not plunging in price nearly as rapidly as solar, is cheaper in many places today. And wind and solar have a dynamic that makes them greater than the sum of their parts: The wind tends to blow most when the sun isn’t shining, and vice versa. That’s true on an hour-by-hour basis, and even true on a season-by-season basis.

Consider this chart of capacity factors by hour of day for solar and coastal and inland wind from the ERCOT grid (Texas).

The top line is electricity load – demand being placed on the grid by people drawing electricity. Load peaks in daylight hours, but stays near that peak into the early evening. The sun sets before load drops, but that’s when the wind tends to kick in. And overnight, when no sun is shining, the wind blows, on average, harder than it does during the day.

Every gigawatt of solar deployed, for this reason, actually makes wind power slightly more economically valuable.

And while I’ve written extensively about the cost plunge of storage, the reality is that combining solar + wind at the grid level often removes the need for storage, at least in the short term, and reduces the total amount of storage ultimately needed on the grid.

The same pattern is generally true across seasons. The sun is most available in summer months, wind most available in winter months. Here’s a view of 11 months in Germany:

The point here isn’t to knock solar. Solar’s ferocious price decline, combined with the fact that it is the most abundant renewable on the planet, give it a clear advantage. Yet there are parts of the world with less sun (Northern Europe, for instance), parts of the day with less sun, and parts of the year with less sun. Combining wind and solar is a bit like adding 1 + 1 and getting three. And for that reason, as solar penetration increases and likely passes wind power in the next 2-3 years, I expect the economic case for wind to actually grow stronger.

That also, by the way, makes the case for the grid. Renewables become far more reliable when integrated over a larger area. Integrating solar power over a wider area cuts the intermittency of clouds, for example:

And for any continent-sized area, using the grid to connect solar + wind allows the best of both worlds, drawing sunlight from the sunny areas and wind from the windy areas. Indeed, this is what I hope to see happen in Europe, where the northern nations have fairly little sunlight but lots of wind, and the south has abundant sunlight that it could provide to the north.

A European grid to knit these together could deliver exactly that to Europe’s electricity system. Energy interdependence over energy independence.


The Prime Minister of Singapore is a Coder

Amazing. I know nothing of his politics or skills as a leader. But this surprised me. Can we import this guy? Elect more engineers, scientists, and developers to office in the United States?

Full transcript of this speech here.

Hat tip: Jon Evans, Alec Muffett


Why Energy Storage is About to Get Big – and Cheap

tl;dr: Storage of electricity in large quantities is reaching an inflection point, poised to give a big boost to renewables, to disrupt business models across the electrical industry, and to tap into a market that will eventually top tens of billions of dollars per year, and trillions of dollars cumulatively over the coming decades.

Update: My assessment of the Tesla Powerwall Battery. It’s a big step towards disruption.

The Energy Storage Virtuous Cycle

I’ve been writing about exponential decline in the price of energy storage since I was researching The Infinite Resource. Recently, though, I delivered a talk to the executives of a large energy company, the preparation of which forced me to crystallize my thinking on recent developments in the energy storage market.

Energy storage is hitting an inflection point sooner than I expected, going from a novelty to suddenly making solid economic sense. That, in turn, is kicking off a virtuous cycle of new markets opening, new scale, further declining costs, and additional markets opening.

To elaborate: Three things are happening which feed off of each other.

  1. The Price of Energy Storage Technology is Plummeting. Indeed, while high compared to grid electricity, the price of energy storage has been plummeting for twenty years. And it looks likely to continue.
  2. Cheaper Storage is on the Verge of Massively Expanding the Market. Battery storage and next-generation compressed air are right on the edge of the prices where it becomes profitable to arbitrage shifting electricity prices – filling up batteries with cheap power (from nighttime sources, abundant wind or solar, or other), and using that stored energy rather than peak-priced electricity from natural gas peakers. This arbitrage can happen at either the grid edge (the home or business) or as part of the grid itself. Either way, it taps into a market of potentially hundreds of thousands of MWh in the US alone.
  3. A Larger Market Drives Down the Cost of Energy Storage. Batteries and other storage technologies have learning curves. Increased production leads to lower prices. Expanding the scale of the storage industry pushes forward on these curves, dropping the price. Which in turn taps into yet larger markets.


Let’s look at all three of these in turn.

1. The Price of Energy Storage is Plummeting

Lithium Ion

Lithium-ion batteries have been seeing rapidly declining prices for more than 20 years, dropping in price for laptop and consumer electronic uses by 90% between 1990 and 2005, and continuing to drop since then.

A widely reported study in Nature Climate Change finds that, since 2005, electric vehicle battery costs have plunged faster than almost anyone projected, and are now below most forecasts for the year 2020.

The authors estimate that EV batteries in 2014 cost between $310 and $400 per kwh. It’s now in the realm of possibility that we’ll see $100 / kwh lithium-ion batteries in electric vehicles by 2020, with some speculating that Tesla’s ‘gigafactory’ will reach sufficient scale to achieve that.

And the electric car market, in turn, is making large-format lithium-ion batteries cheaper for grid use.

What Really Matters is LCOE – the Cost of Electricity

Now let’s digress and talk about price. The prices we’ve just been talking about are capital costs – the costs of the equipment. But how does that translate into the cost of electricity? What really matters for energy storage in homes and buildings is the Levelized Cost of Electricity (LCOE) that the battery imposes. In other words, if I put a kwh of electricity into the battery, and then pull a kwh of electricity out, over the lifetime of the battery (and including maintenance costs, installation costs, and all the rest), what did that cost me?

Traditional lithium-ion batteries begin to degrade after a few hundred cycles of fully charging and fully discharging, or 1,000 cycles at most. So naively we’d take the capital cost of the battery and divide it by 1,000 to find the cost per kwh round-tripped through it (the LCOE). However, we also have to factor in that some electricity is lost to round-trip inefficiency (li-ion is perhaps 90% efficient round trip), which raises our effective cost by roughly 11%.

So we’d estimate that at the following battery prices we’d get the following effective LCOEs:

- $300 / kwh battery  :  33 cent / kwh electricity storage
- $200 / kwh battery  :  22 cent / kwh electricity storage
- $150 / kwh battery  :  17 cent / kwh electricity storage
- $100 / kwh battery  :  11 cent / kwh electricity storage

All of those battery costs, by the way, are what the ultimate buyer pays, including installation and maintenance.

For comparison, wholesale grid electricity in the US at ‘baseload’ hours in the middle of the night averages 6-7 cents / kwh. And retail electricity rates around the US average around 12 cents per kwh. You can see why, at the several hundred dollars / kwh prices of several years ago, battery storage was a non-starter.
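Here’s the naive LCOE arithmetic above as a minimal sketch in Python, using the cycle life and round-trip efficiency just stated (both are the text’s assumptions, not measured figures):

```python
# Naive battery LCOE: spread the capital cost over every full cycle, then
# inflate it to account for round-trip losses.
def storage_lcoe(capital_cost_per_kwh, cycle_life=1000, round_trip_efficiency=0.90):
    cost_per_cycle = capital_cost_per_kwh / cycle_life
    return cost_per_cycle / round_trip_efficiency

for price in (300, 200, 150, 100):
    print(f"${price}/kwh battery -> {storage_lcoe(price) * 100:.0f} cents/kwh stored")
```

Run it and you get the 33 / 22 / 17 / 11 cent figures in the list above.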

On the Horizon: Flow Batteries, Compressed Air

Right now, most of the talk about energy storage is about lithium-ion, and specifically about Tesla, which appears close to announcing a new home battery product at a price of around $300 / kwh.

But there are other technologies that may be ultimately more suitable for grid energy storage than lithium-ion.

Lithium-ion is compact and light, which makes it great for mobile applications. But heavier, bulkier storage technologies that last for more cycles will be cheaper in the long run.

Two come to mind:

1. Flow Batteries, just starting to come to market, can theoretically operate for 5,000 charge cycles or more – in some cases 10,000 or more. In addition, the electrolyte in a flow battery is a liquid that can be replaced, refurbishing the battery at a fraction of the cost of installing a new one.

2. Compressed Air Energy Storage, like LightSail Energy’s, uses physical components that are likewise rated for 10,000+ cycles of compression and decompression.

Capital costs for these technologies are likely to be broadly similar to lithium-ion costs over the long term and at similar scale. Most flow battery companies hold $100 / kwh capital cost as an internal or publicly stated target. (ARPA-E has used $100 / kwh as a target.) And because a flow battery or compressed air system lasts for so many more cycles, the overall cost of electricity is likely to be many times lower.

How low? At this point, other variables begin to dominate the equation: The cost of capital (borrowing or opportunity cost); management and maintenance costs; siting costs.

DOE’s 2013 energy storage roadmap lists 20 cents / kwh LCOE as the ‘short term’ goal. It articulates 10 cents / kwh LCOE as the ‘long term’ goal.

At least one flow battery company, EnerVault, claims that it is ‘well below’ the DOE targets (presumably the short term target of 20 cents / kwh of electricity).

[Update: I'm informed that EnerVault has run into financial difficulties, a reminder that the storage market, like the solar market before it, will likely be fiercely Darwinian. In solar, the large majority of manufacturers went out of business, even as prices plunged by nearly 90% in the last decade. We should expect the same in batteries. The large majority of energy storage technology companies will go out of business, even as prices drop - or perhaps because of plunging prices - in the decade ahead.]

Getting back to fundamentals: In the long run, given the advantage of long life, if flow batteries or compressed air see the kind of growth that lithium-ion has seen, and thus the cost benefits of scale and learning curve, it’s conceivable that a $100 / kwh flow battery or compressed air system could reach an LCOE of 2-4 cents / kwh of electricity stored.
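To make that concrete, here’s the same naive LCOE math applied to a long-lived system. The cycle life and round-trip efficiency below are assumptions within the ranges discussed above, not vendor figures:

```python
# Same naive LCOE math as before, applied to a flow battery or compressed
# air system. All three inputs are illustrative assumptions.
capital_cost = 100             # $/kwh, the ARPA-E-style target
cycle_life = 5_000             # flow batteries: 5,000 to 10,000+ cycles
round_trip_efficiency = 0.75   # assumed; flow batteries typically run 70-80%

lcoe = capital_cost / cycle_life / round_trip_efficiency
print(f"{lcoe * 100:.1f} cents/kwh stored")   # ~2.7 cents, inside the 2-4 cent range
```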

Of course, neither flow batteries nor compressed air are as commercially proven as lithium-ion. I’m sure many will be skeptical of them, though 2015 and 2016 look likely to be quite big years.

Come back in a year, and let’s see.

2. Storage is on the Verge of Opening Vast New Markets

Now let’s turn away from the technology and towards the economics that make it appealing. Let’s start with the simplest to understand: in the home.

A. Fill When Cheap, Drain When Pricey (Time of Use Arbitrage)

The US is increasingly moving to time-of-use pricing for electricity. Right now that means charging consumers a low rate in the middle of the night (when demand is low) and a high rate in the afternoon and early evening (when demand is at its peak, often twice as high as in the middle of the night).

This matches the real underlying economics of grid operators and electricity producers. The additional electricity to meet the surge in afternoon and early evening demand is generally supplied by natural-gas-powered “peaker” plants. And these plants are expensive. They only operate for a few hours each day, so their construction costs are amortized over a smaller amount of electricity. And they have other problems we’ll come back to shortly. The grid itself pays other costs for the peak of demand: everything – wires, transformers, staff – must be built out to handle peak capacity, not the minimum or the average.

The net result is that electricity in the afternoon and early evening is more expensive, and this is (increasingly) being passed on to consumers. How much more expensive? See below:

In California, one can choose the standard tiered rate of 18.7 cents per kwh. Or one can choose the time-of-use rate. In the latter, there’s a 19.2 cent per kwh difference in electricity rates between the minimum (9pm to 10am) and the peak (1pm to 7pm).

Batteries cheaper than 19 cents / kwh LCOE (including financing, installation, etc.) can be used to arbitrage this price difference. Software fills the battery up with cheap power at night. Software preferentially uses that cheap power from the battery during the peak of demand, instead of drawing it from the grid.

This leads to what seems like a paradoxical situation: a battery whose storage cost exceeds the average price of grid electricity can nonetheless arbitrage the grid and save you money. That’s math.
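Here’s that math as a sketch. The 19.2 cent spread is from the California example above; the battery LCOE and the amount of load shifted per day are illustrative assumptions:

```python
# Time-of-use arbitrage: each kwh shifted from peak to off-peak earns the
# price spread, minus the battery's own round-trip cost.
spread = 0.192            # $/kwh peak vs. off-peak gap, per the California rates
battery_lcoe = 0.17       # $/kwh round-tripped (roughly a $150/kwh battery, assumed)
daily_kwh_shifted = 10    # assumed evening household load served from the battery

saving_per_kwh = spread - battery_lcoe
print(f"${saving_per_kwh * daily_kwh_shifted * 365:.0f} saved per year")  # ~$80
```

The margin is thin at today’s battery prices, which is exactly why the continuing price decline matters.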

That’s also presumably one of the scenarios behind Tesla’s entry into the home battery market, though it’s unlikely to be explicitly stated.

One last point on this before moving on. The arbitrage happening here is also actually good for the grid. From a grid operator’s standpoint, this is ‘peak shaving’ or ‘peak shifting’. Some of the peak load is being diverted to another time when there’s excess capacity in the system. The total amount of electricity being drawn doesn’t change. (In fact, it goes up a bit because battery efficiency is less than 100%.) But it’s actually a cost savings for the grid as a whole. In any situation where electricity demand is growing, for instance, widespread use of this scenario can postpone the date at which new distribution lines need to be installed.

B. Store the Sun (Solar + Batteries, as Net Metering Gets Pressured)

Rooftop solar customers love net metering, the rules that allow solar-equipped homes to sell excess electricity back to the grid. Yet around the world and across the US, net metering is under pressure. It’s likely, in the US, that the rate at which consumers are paid for their excess electricity will drop, that caps will be imposed, or both.

The more that happens, the more attractive batteries in the home look.

Indeed, it’s happening in Germany already, and the economics there are revealing.

First, let’s be clear on the scenarios, with some help from graphics in a useful Germany Trade and Invest presentation (pdf link) that dives into “battery parity” (with some tweaks to the images from me).

Current scenario: Excess power (the bright orange bit – electricity the solar panels generate beyond what the home itself needs) is sold to the grid. Then, in the evening, the home needs power, and buys that electricity from the grid.

Potential new scenario: Excess power is available during the day, and at least some of it gets stored in a battery for evening use.

Under what circumstances would the second scenario be economically advantageous over the first? In short: The difference in price between grid electricity and the net metering rate / feed-in-tariff is the price that batteries have to meet. In Germany, where electricity is expensive, and feed-in-tariffs have been plunging, this gap is opening wide.

There’s now roughly a 20 euro cent gap between the price of grid electricity and the feed-in-tariff for supplying excess solar back to the grid (the gold bands) in Germany, roughly the same gap as exists between cheapest and most expensive time of use electricity in California.
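In code, the parity test is a one-liner. The grid price and feed-in tariff below are illustrative euro-cent figures consistent with the roughly 20 cent gap just described; the battery LCOE is an assumption:

```python
# "Battery parity": storing a solar kwh beats exporting it when the battery's
# round-trip cost is below the gap between retail price and feed-in tariff.
grid_price = 29.0       # euro cents/kwh a German household pays (illustrative)
feed_in_tariff = 9.0    # euro cents/kwh earned exporting solar (illustrative)
battery_lcoe = 17.0     # euro cents/kwh round-tripped (assumed)

gap = grid_price - feed_in_tariff
print(f"gap = {gap:.0f} ct/kwh; battery wins? {battery_lcoe < gap}")  # True
```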

GTAI and Deutsche Bank’s conclusion – based on the price trends of solar, batteries, electricity in Germany, and German feed-in-tariffs – is that ‘battery parity’, the moment when home solar + a lithium-ion battery makes economic sense, will arrive in Germany by next summer, 2016.

Almost any sunny state in the US that did away with net metering would be at or near solar + battery parity in the next 5 years.

Tesla’s battery is almost cheap enough for this. In fact, it makes more economic sense in Germany than in the US.

Note: Solar + a battery is not the same as ‘grid defection’. It’s not going off-grid. We’re used to 99.9% availability of our electricity. Flick a switch and it’s on. Solar + a small battery may get someone in Germany to 70%, and someone in Southern California to 85%, but the amount of storage you need to deploy to increase that reliability goes up steeply as you approach 99.99%.

For most of us, the grid will always be there. But it may be relegated to slightly more of a backup role.

C. Storage as a Grid Component (Caching for Electrons)

Both of the previous scenarios have looked at this from the standpoint of installation in homes (or businesses – the same logic applies).

But the dropping price of storage isn’t inherently biased towards consumers. Utility operators can deploy storage as well. Two recent studies have assessed the economics of just that. And both find it compelling. Today. At the price of batteries that Tesla has announced.

First, Texas utility Oncor commissioned a study (pdf link – The Value of Distributed Electricity Storage in Texas) of whether it would be cost-effective to deploy storage throughout the Texas grid (called ERCOT), placing the energy storage at the ‘edge’ of the grid, close to consumers.

The conclusion was an overwhelming yes. The study authors concluded that, at a capital cost of $350 / kwh for lithium-ion batteries (which they expected by 2020, but which Tesla has already beaten), it made sense across the ERCOT region to deploy at least 15,000 MWh of battery storage. (That would be 15 million kwh, or the equivalent battery capacity of roughly 175,000 Tesla Model S 85Ds.)

The study authors concluded that this additional battery storage would slightly lower consumer electrical bills, reduce outages, reduce the need to build added capacity (by shifting the peak, much as a home battery would), and similarly reduce the need to build additional transmission and distribution lines.

The values shown above are in megawatts of power, by the way. The assumption is that there are 3 MWh of storage per MW of power output in the storage system.

You can also see that at a slightly lower price of storage than the $350 / kwh assumed here, the economic case for 8,000 MW (or 24,000 MWh) of storage becomes clear. And we are very likely about to see such prices.

8,000 MW or 8 GW is a very substantial amount of energy storage. For context, average US electrical draw (over day/night, 365 days a year) is roughly 400 GW. So this study is claiming that in Texas alone, the economic case for energy storage is strong enough to motivate storage capacity equivalent to 2% of the US’s average power draw.

ERCOT consumes roughly 1/11th of the US’s electricity. (ERCOT uses roughly 331,000 GWh / year; the US as a whole, roughly 3.7 million GWh / year.) If similar findings hold true in other grids (unknown as of yet), that would imply an economic case, fairly soon, for energy storage equivalent to 22% of US average electric draw for 3 hours – roughly 88,000 MW or 264,000 MWh.
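Here’s that back-of-the-envelope scaling spelled out; every input comes from the paragraphs above, and the whole thing assumes the ERCOT economics transfer to other grids:

```python
# Scale the Oncor/ERCOT finding to the whole US, as in the text.
ercot_storage_mw = 8_000                   # pencils out at slightly lower battery prices
ercot_share_of_us = 331_000 / 3_700_000    # ERCOT's share of annual US GWh, ~1/11
mwh_per_mw = 3                             # the study's storage sizing assumption

us_storage_mw = ercot_storage_mw / ercot_share_of_us
print(f"{us_storage_mw:,.0f} MW, or {us_storage_mw * mwh_per_mw:,.0f} MWh")
# -> roughly 89,000 MW and 268,000 MWh, i.e. ~22% of the US's ~400 GW average draw
```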

This is, of course, speculative. We don’t know if the study findings scale to the whole of the United States. It’s back-of-the-envelope math. Atop that, the study itself is an analysis, which is not the same as operational experience. Undoubtedly, deployment will surface new things that inform future views. Even so, it appears that storage delivers very real value even at surprisingly high storage prices.

Energy storage, because of its flexibility, and because it can sit in so many different places in the grid, doesn’t have to compete with wholesale grid power prices. It competes with the price of peak demand power, the price of outages, and the price of building new distribution and transmission lines. 

Which brings us to scenario 2D:

D. Replacing Natural Gas Peakers

The grid has to be built out to support the peak of use, not the average of use. Part of that peak is sheer load. Earlier I mentioned natural gas ‘peaker’ plants. Peaker plants are reserve natural gas plants. On average they’re active far less than 10% of the time. They sit idle, fueled, ready to come online to respond to peaking electricity demand. Even in this state, bringing a peaker online takes a few minutes.

Peaker plants are expensive. They operate very little of the time, so their construction costs are amortized over few kwh; they require constant maintenance to be sure they’re ready to go; and they’re less efficient than combined-cycle natural gas plants, burning roughly 1.5x as much fuel per kwh of electricity delivered, since investing in their efficiency hardly makes sense when they run so little of the time.
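A rough illustration of the amortization point, with an assumed plant cost and size, and ignoring financing and fuel:

```python
# Why peakers cost more per kwh: the same construction cost is spread over
# far fewer hours of output. All numbers are illustrative assumptions.
capital_cost = 700e6        # $ to build a ~700 MW gas plant (assumed)
capacity_kw = 700_000
lifetime_hours = 8760 * 30  # assumed 30-year life

def capital_cost_per_kwh(capacity_factor):
    return capital_cost / (capacity_kw * lifetime_hours * capacity_factor)

print(f"baseload at 90%: {capital_cost_per_kwh(0.90) * 100:.1f} cents/kwh")  # ~0.4
print(f"peaker at 8%:    {capital_cost_per_kwh(0.08) * 100:.1f} cents/kwh")  # ~4.8
```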

The net result is that energy storage appears on the verge of undercutting peaker plants. You can find multiple articles online on this topic. Let me point you to one in-depth report, by the Electric Power Research Institute (EPRI): Cost-Effectiveness of Energy Storage in California (pdf).

This report specifically looked at the viability of replacing some of California’s natural gas peaker plants.

While the EPRI California study was asking a different question than the ERCOT study that looked at storage at the edge, it came to a similar conclusion. Storage would cost money, but the economic benefit to the grid of replacing natural gas peaker plants with battery storage was greater than the cost. Shockingly, this was true even at fairly high battery prices. The default assumption was a 2020 lithium-ion battery price of $528 / kwh. The breakeven price their analysis found was $842 / kwh – more than three times Tesla’s announced utility-scale price of $250 / kwh.

Flow batteries, compressed air, and pumped hydro (where geography supports it) were also economically viable.

California alone has 71 natural gas peaker plants, with a combined capacity of 7,418 MW (pdf link). The addressable market is large.

3. Scale Reduces Costs. Which Increases Scale.

In every scenario above there are large parts of the market where batteries aren’t close to competitive yet; where they won’t be in the next 5 years; where they might not be in the next 10 years.

But what we know is this: Batteries (and other storage technologies) will keep dropping in cost. Market growth accelerates that, and thus helps energy storage reach the parts of the market it isn’t yet priced for.
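That learning-curve dynamic can be written down simply. The 20% learning rate and the production figures below are assumptions for illustration, not forecasts:

```python
import math

# Wright's law: every doubling of cumulative production cuts unit cost by a
# fixed fraction (the learning rate). All numbers here are illustrative.
def learned_cost(initial_cost, initial_cumulative, cumulative, learning_rate=0.20):
    doublings = math.log2(cumulative / initial_cumulative)
    return initial_cost * (1 - learning_rate) ** doublings

# Four doublings of cumulative battery production, from an assumed $300/kwh:
print(f"${learned_cost(300, 50, 800):.0f}/kwh")   # ~$123/kwh
```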

Who Benefits?

Storage has plenty of benefits – higher reliability, lower costs, fewer outages, more resilience.

But I wouldn’t have written these three thousand words without a deep interest in carbon-free energy. And the increasing economic viability of energy storage is profoundly to the benefit of both solar and wind.

Let me be clear: A great deal can be done with solar and wind with minimal storage, by integrating over a wider region and intelligently balancing wind and solar against one another.

Even so, cheap storage is a big help. It removes a long-term concern. And in the short term, storage helps whichever energy source is cheapest overcome intermittency and achieve flexibility.

Batteries are flexible. Storage added to improve the reliability of the grid can soak up extra solar power for the hours just after sunset. It can soak up extra wind power from a breezy morning to use in the afternoon peak. Or it can dispatch saved-up power to cover for an unexpected degree of cloudiness or a shortfall of wind.

Once the storage is there – whatever else it was intended for – it will get used for renewables. Particularly as those renewables become the cheapest sources of electricity on the grid.

Today, in many parts of the US, wind power is the cheapest source of new electricity when the wind is blowing. The same is true in northern Europe. On the horizon, an increasing chorus of voices, even the normally pessimistic-on-renewables IEA, sees solar as the cheapest source of electricity on the planet, heading towards 4 cents per kwh. Or, if you believe more optimistic voices, towards 2 cents per kwh.

Cheap energy storage adds flexibility to our energy system overall. It can help nuclear power follow the curve of electrical demand (something I didn’t explore here). It helps the grid stay stable and available. It adds caching at the edge, reducing congestion and the need for new transmission.

But for renewables, especially, cheap storage is a force multiplier.

And that’s a disruption I’m excited to see.

—-

There’s more about the exponential pace of innovation in both storage and renewables in my book on innovating to beat climate change and resource scarcity and continue economic growth: The Infinite Resource: The Power of Ideas on a Finite Planet


How Much Land Would it Take to Power the US via Solar?

I’ve seen some pieces in the media lately questioning this, so allow me to point to some facts based on real-world data.

tl;dr: We’ll probably never power the world entirely on solar, but if we did, it would take a rather small fraction of the world’s land: less than 1 percent of the Earth’s land area to provide for current electricity needs.

First, let me be clear: I doubt the future is 100% solar or anything like it. We are in the midst of a multi-decade transition. And while solar is the most abundant renewable on the planet, and plunging in price faster than any other, there’s a role for solar, wind, hydro, nuclear, and geothermal in the distant future based on ideal geographies and scenarios. I very much hope to see highly advanced, high-yield biofuels come into the mix in the next decade. And for a number of decades to come we’re going to have fossil fuels in play. This article is a ‘what if?’ and not a prediction of or call for 100% solar.

Second, to move to a high-renewables world, we need low-cost energy storage. We’re making progress on that. But there’s still quite a distance to go.

For the data, let’s use two examples.

Example #1 comes from a Breakthrough Institute article complaining about the vast amount of land that solar needs. Guest writer Ben Heard complains that solar’s land footprint (specifically at the Ivanpah plant) is 92 times that of a small modular nuclear reactor. (If you’ve read The Infinite Resource you may know that I wrote a whole chapter in praise of nuclear power and of small modular reactors in particular. I’m a fan.)

What Heard’s Breakthrough Institute article doesn’t tell you is how tiny that land footprint, in the grand scheme of things, actually is. Do the math on the numbers he presents: 1087 Gwh / yr, or 0.31 Gwh / acre / year.

At that output, to meet the US electricity demand of 3.7 million Gwh per year, you’d need about 48,000 square kilometers of solar sites. (That’s total area, not just area of panels.) That may sound like a stunningly large area, and in some sense it is. But it’s less than half the size of the Mojave desert. And more importantly, the continental United States has a land area of 7.6 million square kilometers. That implies that meeting US electrical demand via this real-world example of Ivanpah would require just 0.6 percent of the land area of the continental US.

This fact – which puts the land area requirements in context – is completely missing from Heard’s piece at the Breakthrough Institute site.

Asked about this on Twitter, Heard replied that the larger size is nevertheless a disadvantage. It threatens ecosystems and endangered species, for instance. And this is a legitimate point in some specific areas. (Though certainly far less so than coal and natural gas.)

But, for context, agriculture uses roughly 30% of all land in the United States, or 50 times as much land as would be needed to meet US electricity needs via solar.

Ivanpah, of course, may be an atypical site. So let’s look more generally.

Example #2 is a convenient reference from NREL: Land Use Requirements for Solar Plants in the United States (2013). It’s excellent reading. I recommend pulling it up the next time Bjorn Lomborg writes an op-ed.

The second-to-last column tells us that, weighted by how much electricity they actually produce, large solar PV facilities need 3.4 acres of total space (panels + buildings + roads + everything else) for each Gwh of electricity they produce per year.

That leads to an output estimate of 0.294 Gwh / year / acre, and virtually the same total area: around 50,000 square kilometers in the US, or 0.6% of the continental US’s land area.
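Both estimates fall out of the same arithmetic. Here it is, using the output densities from the two examples above:

```python
# Land needed to meet US electricity demand from solar, for both examples.
ACRES_PER_KM2 = 247.1
US_DEMAND_GWH_PER_YEAR = 3_700_000
CONTINENTAL_US_KM2 = 7_600_000

def solar_land(gwh_per_acre_year):
    km2 = US_DEMAND_GWH_PER_YEAR / gwh_per_acre_year / ACRES_PER_KM2
    return km2, km2 / CONTINENTAL_US_KM2

for name, density in [("Ivanpah", 0.31), ("NREL PV average", 1 / 3.4)]:
    km2, share = solar_land(density)
    print(f"{name}: {km2:,.0f} km^2 ({share:.2%} of the continental US)")
```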

Update: In my original post I didn’t take the time to compare this area to other suitable areas in the US, such as rooftops, parking lots, and built land. Various people pointed me to pieces of data. So, consider that:

1. The built environment in the US (buildings, roads, parking lots, etc.) covered an estimated 83,337 square kilometers in 2009, or roughly 166% of the area estimated above. (Likely this area would not be used as efficiently, of course. But it could make a significant dent.)

2. Idled cropland in the US, not currently being used, totaled 37.2 million acres in 2007, or roughly 150,000 square kilometers, roughly three times the area needed.

3. “National Defense and Industrial” lands in the US (which include military bases, Department of Energy facilities, and related, but NOT civilian factories, power plants, coal mines, etc.) totaled 23 million acres in 2007, or roughly 93,000 square kilometers, nearly twice the area needed to meet US electricity demand via solar. Presumably much of that land is actively in use, but it gives a sense of the scale.

4. Coal mines have disturbed an estimated 8.4 million acres of land in the US. That works out to around 34,000 square kilometers, not too far off from the estimate for solar, and it doesn’t include the space for coal power plants. And coal currently produces only around 40% of US electricity, and hasn’t been above 60% in decades. To scale coal to 100% of US electricity would have required far more land than is required to meet that same demand via solar. Other analysis says the same: counting the size of coal mines and their output, solar has a smaller land footprint per unit of energy than coal.

And the solar estimate of ~50,000 square kilometers, of course, is with solar systems already deployed. It doesn’t take into account the possibility of future systems with higher efficiencies that could reduce the land footprint needed.

Again, the point here is not that we’ll see a 100% solar world. The more solar we deploy, the more sense it makes to deploy wind to complement it. And frankly, I want to see the nuclear industry succeed. Nuclear is safe baseload power that we should be rooting for. I hope the nuclear industry can get costs and construction times down and under control.

But, when it comes to solar, land is not a blocking issue. Be skeptical when it’s brought up as one.


A Simple Suggestion for the Hugo Awards

If you haven’t followed the Hugo Awards, some context: A slate of nominees backed by a voting bloc is dominating this year’s awards, so much so that the slate holds every single nomination in some categories – like Best Short Story, Best Novella, and Best Related Work. Read more about that here or here.

This was possible for a number of reasons. One reason, though, is that in each category there are five finalists for the Hugo. And every person making a nomination can nominate… five works in each category.

So here’s a small suggestion for a rules change for the Hugos for 2016 and beyond.

The number of finalists in each category should substantially exceed the number of nominations possible on a single ballot.

E.g., if the number of final nominees for each category is 5 (as today), then each person should be able to nominate 2 or 3 works in each category.

Or, if we want each person to be able to nominate 5 works in each category, then the number of finalists per category should be raised to 8 or 10. [This may have a ripple effect on the "5% rule" - to be dealt with.]

Such a change wouldn’t make it substantially harder for a voting bloc to get their slate of works onto the list of nominees. But such a slate would no longer push out most or all other eligible works in a category (which is the thing that troubles me most about this year).
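A toy simulation makes the mechanism visible. All the counts below are invented for illustration, and the bloc is assumed to vote in lockstep; the point is only the arithmetic:

```python
from collections import Counter

# Toy model: a lockstep bloc vs. scattered independent nominators.
def finalists(ballots, num_finalists=5):
    counts = Counter(work for ballot in ballots for work in ballot)
    return {work for work, _ in counts.most_common(num_finalists)}

slate = [f"slate-{i}" for i in range(5)]
bloc, independents, field = 300, 1200, 60   # hypothetical voter and work counts

def election(noms_per_ballot):
    ballots = [slate[:noms_per_ballot]] * bloc
    ballots += [[f"indie-{(v + i) % field}" for i in range(noms_per_ballot)]
                for v in range(independents)]
    return finalists(ballots)

print(election(5))  # bloc sweeps: all five finalists are slate works
print(election(2))  # slate takes two slots; three remain open to the field
```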

Other rules changes may also be good ideas, of course. This is just one.

As for this year, I’ll just echo John Scalzi’s thoughts on assessing and voting on Hugo works in 2015.
