
Tuesday, 11 April 2023

A World Run with Code

 This is an edited transcript of a recent talk I gave at a blockchain conference, where I said I’d talk about “What will the world be like when computational intelligence and computational contracts are ubiquitous?”

We live in an interesting time today—a time when we’re just beginning to see the implications of what we might call “the force of computation”. In the end, it’s something that’s going to affect almost everything. And what’s going to happen is really a deep story about the interplay between the human condition, the achievements of human civilization—and the fundamental nature of this thing we call computation.

Stephen Wolfram on a world run with code

So what is computation? Well, it’s what happens when you follow rules, or what we call programs. Now of course there are plenty of programs that we humans have written to do particular things. But what about programs in general—programs in the abstract? Well, there’s an infinite universe of possible programs out there. And many years ago I turned my analog of a telescope towards that computational universe. And this is what I saw:

Cellular automata

(* show the evolution of all 256 elementary cellular automaton rules, each started from a single black cell *)
GraphicsGrid[
 Partition[
  Table[ArrayPlot[CellularAutomaton[n, {{1}, 0}, {30, All}], 
    ImageSize -> 40], {n, 0, 255}], 16]]

Each box represents a different simple program. And often they just do something simple. But look more carefully. There’s a big surprise. This is the first example I saw—rule 30:

Rule 30

(* run rule 30 from a single black cell for 300 steps *)
ArrayPlot[CellularAutomaton[30, {{1}, 0}, {300, All}], 
 PixelConstrained -> 1]
(* and show the rule itself *)
RulePlot[CellularAutomaton[30]]

You start from one cell, and you just follow that simple program—but here’s what you get: all that complexity. At first it’s hard to believe that you can get so much from so little. But seeing this changed my whole worldview, and made me realize just how powerful the force of computation is.

Because that’s what’s making all that complexity. And that’s what lets nature—seemingly so effortlessly—make the complexity it does. It’s also what allows something like mathematics to have the richness it does. And it provides the raw material for everything it’s possible for us humans to do.

Now the fact is that we’re only just starting to tap the full force of computation. And actually, most of the things we do today—as well as the technology we build—are specifically set up to avoid it. Because we think we have to make sure that everything stays simple enough that we can always foresee what’s going to happen.

But to take advantage of all that power out there in the computational universe, we’ve got to go beyond that. So here’s the issue: there are things we humans want to do—and then there’s all that capability out there in the computational universe. So how do we bring them together?

Well, actually, I’ve spent a good part of my life trying to solve that—and I think the key is what I call computational language. And, yes, there’s only basically one full computational language that exists in the world today—and it’s the one I’ve spent the past three decades building—the Wolfram Language.

Traditional computer languages—“programming languages”—are designed to tell computers what to do, in essentially the native terms that computers use. But the idea of a computational language is instead to take the kind of things we humans think about, and then have a way to express them computationally. We need a computational language to be able to talk not just about data types and data structures in a computer, but also about real things that exist in our world, as well as the intellectual frameworks we use to discuss them.

And with a computational language, we have not only a way to help us formulate our computational thinking, but also a way to communicate to a computer on our terms.

I think the arrival of computational language is something really important. There’s some analog of it in the arrival of mathematical notation 400 or so years ago—that’s what allowed math to take off, and in many ways launched our modern technical world. There’s also some analog in the whole idea of written language—which launched so many things about the way our world is set up.

But, you know, if we look at history, probably the single strongest systematic trend is the advance of technology. That over time there’s more and more that we’ve been able to automate. And with computation that’s dramatically accelerating. And in the end, in some sense, we’ll be able to automate almost everything. But there’s still something that can’t be automated: the question of what we want to do.

It’s the pattern of technology today, and it’s going to increasingly be the pattern of technology in the future: we humans define what we want to do—we set up goals—and then technology, as efficiently as possible, tries to do what we want. Of course, a critical part of this is explaining what we want. And that’s where computational language is crucial: because it’s what allows us to translate our thinking to something that can be executed automatically by computation. In effect, it’s a bridge between our patterns of thinking, and the force of computation.

Let me say something practical about computational language for a moment. Back at the dawn of the computer industry, we were just dealing with raw computers programmed in machine code. But soon there started to be low-level programming languages, then we started to be able to take it for granted that our computers would have operating systems, then user interfaces, and so on.

Well, one of my goals is to make computational intelligence also something that’s ubiquitous. So that when you walk up to your computer you can take for granted that it will have the knowledge—the intelligence—of our civilization built into it. That it will immediately know facts about the world, and be able to use the achievements of science and other areas of human knowledge to work things out.

Obviously with Wolfram Language and Wolfram|Alpha and so on we’ve built a lot of this. And you can even often use human natural language to do things like ask questions. But if you really want to build up anything at all sophisticated, you need a more systematic way to express yourself, and that’s where computational language—and the Wolfram Language—is critical.

OK, well, here’s an important use case: computational contracts. In today’s world, we’re typically writing contracts in natural language, or actually in something a little more precise: legalese. But what if we could write our contracts in computational language? Then they could always be as precise as we want them to be. But there’s something else: they can be executed automatically, and autonomously. Oh, as well as being verifiable, and simulatable, and so on.

Computational contracts are something more general than typical blockchain smart contracts. Because by their nature they can talk about the real world. They don’t just involve the motion of cryptocurrency; they involve data and sensors and actuators. They involve turning questions of human judgement into machine learning classifiers. And in the end, I think they’ll basically be what run our world.

Right now, most of what the computers in the world do is to execute tasks we basically initiate. But increasingly our world is going to involve computers autonomously interacting with each other, according to computational contracts. Once something happens in the world—some computational fact is established—we’ll quickly see cascades of computational contracts executing. And there’ll be all sorts of complicated intrinsic randomness in the interactions of different computational acts.

In a sense, what we’ll have is a whole AI civilization. With its own activities, and history, and memories. And the computational contracts are in effect the laws of the AI civilization. We’ll probably want to have a kind of AI constitution, that defines how generally we want the AIs to act.

Not everyone or every country will want the same one. But we’ll often want to say things like “be nice to humans”. But how do we say that? Well, we’ll have to use a computational language. Will we end up with some tiny statement—some golden rule—that will just achieve everything we want? The complexity of human systems of laws doesn’t make that seem likely. And actually, with what we know about computation, we can see that it’s theoretically impossible.

Because, basically, it’s inevitable that there will be unintended consequences—corner cases, or bugs, or whatever. And there’ll be an infinite hierarchy of patches one needs to apply—a bit like what we see in human laws.

You know, I keep on talking about computers and AIs doing computation. But actually, computation is a more general thing. It’s what you get by following any set of rules. They could be rules for a computer program. But they could also be rules, say, for some technological system, or some system in nature.

Think about all those programs out in the computational universe. In detail, they’re all doing different things. But how do they compare? Is there some whole hierarchy of who’s more powerful than whom? Well, it turns out that the computational universe is a very egalitarian place—because of something I discovered called the Principle of Computational Equivalence.

Because what this principle says is that all programs whose behavior is not obviously simple are actually equivalent in the sophistication of the computations they do. It doesn’t matter if your rules are very simple or very complicated: there’s no difference in the sophistication of the computations that get done.

It’s been more than 80 years since the idea of universal computation was established: that it’s possible to have a fixed machine that can be programmed to do any possible computation. And obviously that’s been an important idea—because it’s what launched the software industry, and much of current technology.

But the Principle of Computational Equivalence says something more: it says that not only is something like universal computation possible, it’s ubiquitous. Out in the computational universe of possible programs many achieve it, even very simple ones, like rule 30. And, yes, in practice that means we can expect to make computers out of much simpler—say molecular—components than we might ever have imagined. And it means that all sorts of even rather simple software systems can be universal—and can’t be guaranteed secure.

But there’s a more fundamental consequence: the phenomenon of computational irreducibility. Being able to predict stuff is a big thing, for example in traditional science-oriented thinking. But if you’re going to predict what a computational system—say rule 30—is going to do, what it means is that somehow you have to be smarter than it is. But the Principle of Computational Equivalence says that’s not possible. Whether it’s a computer or a brain or anything else, it’s doing computations that have exactly the same sophistication.

So it can’t outrun the actual system itself. The behavior of the system is computationally irreducible: there’s no way to find out what it will do except in effect by explicitly running or watching it. You know, I came up with the idea of computational irreducibility in the early 1980s, and I’ve thought a lot about its applications in science, in understanding phenomena like free will, and so on. But I never would have guessed that it would find an application in proof-of-work for blockchains, and that measurable fractions of the world’s computers would be spending their time purposefully grinding computational irreducibility.

By the way, it’s computational irreducibility that means you’ll always have unintended consequences, and you won’t be able to have things like a simple and complete AI constitution. But it’s also computational irreducibility that in a sense means that history is significant: that there’s something irreducible achieved by the course of history.

You know, so far in history we’ve only really had one example of what we’re comfortable calling “intelligence”—and that’s human intelligence. But something the Principle of Computational Equivalence implies is that actually there are lots of things that are computationally just as sophisticated. There’s AI that we purposefully build. But then there are also things like the weather. Yes, we might say in some animistic way “the weather has a mind of its own”. But what the Principle of Computational Equivalence implies is that in some real sense it does: that the hydrodynamic processes in the atmosphere are just as sophisticated as anything going on in our brains.

And when we look out into the cosmos, there are endless examples of sophisticated computation—that we really can’t distinguish from “extraterrestrial intelligence”. The only difference is that—like with the weather—it’s just computation going on. There’s no alignment with human purposes. Of course, that’s a slippery business. Is that graffiti on the blockchain put there on purpose? Or is it just the result of some computational process?

That’s why computational language is important: it provides a bridge between raw computation and human thinking. If we look inside a typical modern neural net, it’s very hard to understand what it does. Same with the intermediate steps of an automated proof of a theorem. The issue is that there’s no “human story” that can be told about what’s going on there. It’s computation, alright. But—a bit like the weather—it’s not computation that’s connected to human experience.

It’s a bit of a complicated thing, though. Because when things get familiar, they do end up seeming human. We invent words for common phenomena in the weather, and then we can effectively use them to tell stories about what’s going on. I’ve spent much of my life as a computational language designer. And in a sense the essence of language design is to identify what common lumps of computational work there are, that one can make into primitives in the language.

And it’s sort of a circular thing. Once one’s developed a particular primitive—a particular abstraction—one then finds that one can start thinking in terms of it. And then the things one builds end up being based on it. It’s the same with human natural language. There was a time when the word “table” wasn’t there. So people had to start describing things with flat surfaces, and legs, and so on. But eventually this abstraction of a “table” appeared. And once it did, it started to get incorporated into the environment people built for themselves.

It’s a common story. In mathematics there are an infinite number of possible theorems. But the ones people study are ones that are reached by creating some general abstraction and then progressively building on it. When it comes to computation, there’s a lot that happens in the computational universe—just like there’s a lot that happens in the physical universe—that we don’t have a way to connect to.

It’s like the AIs are going off and leading their own existence, and we don’t know what’s going on. But that’s the importance of computational language, and computational contracts. They’re what let us connect the AIs with what we humans understand and care about.

Let’s talk a little about the more distant future. Given the Principle of Computational Equivalence I have to believe that our minds—our consciousness—can perfectly well be represented in purely digital form. So, OK, at some point the future of our civilization might be basically a trillion souls in a box. There’ll be a complicated mixing of the alien intelligence of AI with the future of human intelligence.

But here’s the terrible thing: looked at from the outside, those trillion souls that are our future will just be doing computations—and from the Principle of Computational Equivalence, those computations won’t be any more sophisticated than the computations that happen, say, with all these electrons running around inside a rock. The difference, though, is that the computations in the box are in a sense our computations; they’re computations that are connected to our characteristics and our purposes.

At some level, it seems like a bad outcome if the future of our civilization is a trillion disembodied souls basically playing videogames for the rest of eternity. But human purposes evolve. I mean, if we tried to explain to someone from a thousand years ago why today we might walk on a treadmill, we'd find it pretty difficult. And I think the good news is that at any time in history, what's happening then can seem completely meaningful at that time.

The Principle of Computational Equivalence tells us that in a sense computation is ubiquitous. Right now the computation we define exists mostly in the computers we’ve built. But in time, I expect we won’t just have computers: everything will basically be made of computers. A bit like a generalization of how it works with biological life, every object and every material will be made of components that do computations we’ve somehow defined.

But the pressure again is on how we do that definition. Physics gives some basic rules. But we get to say more than that. And it’s computational language that makes what we say be meaningful to us humans.

In the much nearer term, there’s a very important transition: the point at which literacy in computational language becomes truly commonplace. It’s been great with the Wolfram Language that we can now give kids a way to actually do computational thinking for real. It’s great that we can now have computational essays where people get to express themselves in a mixture of natural language and computational language.

But what will be possible with this? In a sense, human language was what launched civilization. What will computational language do? We can rethink almost everything: democracy that works by having everyone write a computational essay about what they want, that’s then fed to a big central AI—which inevitably has all the standard problems of political philosophy. New ways to think about what it means to do science, or to know things. Ways to organize and understand the civilization of the AIs.

A big part of this is going to start with computational contracts and the idea of autonomous computation—a kind of strange merger of the world of natural law, human law, and computational law. Something anticipated three centuries ago by people like Leibniz—but finally becoming real today. Finally a world run with code.

Friday, 7 April 2023

Injecting Computation Everywhere–A SXSW Update

 Basically, I want to tell you a story that’s been unfolding for me for about the last 40 years, and that’s just coming to fruition in a really exciting way. And by just coming to fruition, I mean pretty much today. Because I’m planning to show you today a whole lot of technology that’s the result of that 40-year story—that I’ve never shown before, and that I think is going to be pretty important.

I always like to do live demos. But today I’m going to be pretty extreme. Showing you a lot of stuff that’s very very fresh. And I hope at least a decent fraction of it is going to work.

OK, here’s the big theme: taking computation seriously. Really understanding the idea of computation. And then building technology that lets one inject it everywhere—and then seeing what that means.

I’ve pretty much been chasing this idea for 40 years. I’ve been kind of alternating between science and technology—and making these bigger and bigger building blocks. Kind of making this taller and taller stack. And every few years I’ve been able to see a bit farther. And I think making some interesting things. But in the last couple of years, something really exciting has happened. Some kind of grand unification—which is leading to a kind of Cambrian explosion of technology. Which is what I’m going to be showing you pieces of for the first time here today.

But just for context, let me tell you a bit of the backstory. Forty years ago, I was a 14-year-old kid who’d just started using a computer—which was then about the size of a desk. I was using it not so much for its own sake, but instead to try to figure out things about physics, which is what I was really interested in. 
And I actually figured out a few things—which even still get used today. But in retrospect, I think the most important thing I figured out was kind of a meta thing: that the better the tools one uses, the further one can get. Like I was never good at doing math by hand, which in those days was a problem if you wanted to be a physicist. But I realized one could do math by computer. And I started building tools for that. And pretty soon I, with my tools, was better than almost anyone at doing math for physics.

And back in 1981—somewhat shockingly in those days for a 21-year-old professor type—I turned that into my first product and my first company. And one important thing is that it made me realize that products can really drive intellectual thinking. 
I needed to figure out how to make a language for doing math by computer, and I ended up figuring out these fundamental things about computation to be able to do that. Well, after that I dived back into basic science again, using my computer tools.

And I ended up deciding that while math was fine, the whole idea of it really needed to be generalized. And I started looking at the whole universe of possible formal systems—in effect the whole computational universe of possible programs. I started doing little experiments. Kind of pointing my computational telescope into this computational universe, and seeing what was out there. And it was pretty amazing. Like here are a few simple programs.

Some of them do simple things. But some of them—well, they’re not simple at all.

This is my all-time favorite, because it’s the first one like this that I saw. It’s called rule 30, and I still have it on the back of my business cards 30 years later.

Trivial program. Trivial start. But it does something crazy. It sort of just makes complexity from nothing. Which is a pretty interesting phenomenon. That I think, by the way, captures a big secret of how things work in nature. And, yes, I’ve spent years studying this, and it’s really interesting.

But when I was first studying it, the big thing I realized was: I need better tools. And basically that’s why I built Mathematica. It’s sort of ironic that Mathematica has math in its name. Because in a sense I built it to get beyond math. In Mathematica my original big idea was to kind of drill down below all the math and so on that one wanted to do—and find the computational bedrock that it could all be built on. 
And that’s how I ended up inventing the language that’s in Mathematica. And over the years, it’s worked out really well. We’ve been able to build ever more and more on it.

And in fact Mathematica celebrated its 25th anniversary last year—and in those 25 years it’s gotten used to invent and discover and learn a zillion things—in pretty much all the universities and big companies and so on around the world. And actually I myself managed to carve out a decade to actually use Mathematica to do science myself. And I ended up discovering lots of things—scientific, technological and philosophical—and wrote this big book about them.

Well, OK, back when I was a kid something I was always interested in was systematizing information. And I had this idea that one day one should be able to automate being able to answer questions about basically anything. I figured out a lot about how to answer questions about math computations. But somehow I imagined that to do this in general, one would need some kind of general artificial intelligence—some sort of brain-like AI. And that seemed very hard to make.

And every decade or so I would revisit that. And conclude that, yes, that was still hard to make. But doing the science I did, I realized something. I realized that if one even just runs a tiny program, it can end up doing something of sort of brain-like complexity.

There really isn’t ultimately a distinction between brain-like intelligence, and this. And that’s got lots of implications for things like free will versus determinism, and the search for extraterrestrial intelligence. But for me it also made me realize that you shouldn’t need a brain-like AI to be able to answer all those questions about things. Maybe all you need is just computation. Like the kind we’d spent years building in Mathematica.

I wasn’t sure if it was the right decade, or even the right century. But I guess that’s the advantage of having a simple private company and being in charge; I just decided to do the experiment anyway.
 And, I’m happy to say, it turned out it was possible. And we built Wolfram|Alpha.

You type stuff in, in natural language. And it uses all the curated data and knowledge and methods and algorithms that we’ve put into it, to basically generate a report about what you asked. And, yes, if you’re a Wolfram|Alpha user, you might notice that Wolfram|Alpha on the web just got a new spiffier look yesterday. Wolfram|Alpha knows about all sorts of things. Thousands of domains, covering a really broad area. Trillions of pieces of data.

And indeed, every day many millions of people ask it all sorts of things—directly on the website, or through its apps or things like Siri that use it.

Well, OK, so we have Mathematica, which has this kind of bedrock language for describing computations—and for doing all sorts of technical computations. And we also have Wolfram|Alpha—which knows a lot about the world—and which people interact with in this sort of much messier way through natural language. Well, Mathematica has been growing for more than 25 years, Wolfram|Alpha for nearly 5. We’ve continually been inventing ways to take the basic ideas of these systems further and further. 
But now something really big and amazing has happened. And actually for me it was catalyzed by another piece: the cloud.

Now I didn’t think the cloud was really an intellectual thing. I thought it was just sort of a utility. But I was wrong. Because I finally understood how it’s the missing piece that lets one take kind of the two big approaches to computation in Mathematica and in Wolfram|Alpha and make something just dramatically bigger from them.

Now, I’ve got to tell you that what comes out of all of this is pretty intellectually complicated. But it’s also very very directly practical. I always like these situations. Where big ideas let one make actually really useful new products. And that’s what’s happened here. We’ve taken one big idea, and we’re making a bunch of products—that I hope will be really useful. And at some level each product is pretty easy to explain. But the most exciting thing is what they all mean together. And that’s what I’m going to try to talk about here. Though I’ll say up front that even though I think it’s a really important story, it’s not an easy story to tell.

But let’s start. At the core of pretty much everything is what we call the Wolfram Language. Which is something we’re just starting to release now.

The core of the Wolfram Language has been sort of incubating in Mathematica for more than 25 years. It’s kind of been proven there. But what just happened is that we got all these new ideas and technology from Wolfram|Alpha, and from the Cloud. And they’ve let us make something that’s really qualitatively different. And that I’m very excited about.

So what's the idea? It's really to make a language that's knowledge based. A language where huge amounts of knowledge about computation and about the world are built right into the language itself. You see, most computer languages kind of stay close to the basic operations of the machine. They give you lots of good ways to manage the code you build. And maybe they have add-on libraries to do specific things.

But our idea with the Wolfram Language is kind of the opposite. It’s to make a language that has as much built in as possible. Where the language itself does as much as possible. To make everything as automated as possible for the programmer.

OK. Well let’s give it a try.

You can use the Wolfram Language completely interactively, using the notebook interface we built for Mathematica.

OK, that’s good. Let’s do something a little harder:

Yup, that’s a big number. Kind of looks like a bunch of random digits. Might be like 60,000 data points of sensor data.

How do we analyze it? Well, the Wolfram Language has all that stuff built in.

So like here’s the mean:


And the skewness:

Or hundreds of other statistical tests. Or visualizations.

That’s kind of weird actually. But let me not get derailed trying to figure out why it looks like that.
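
A minimal sketch of that kind of sequence, assuming (purely for illustration) that the data is just the first 60,000 digits of a well-known number:

data = First[RealDigits[N[Pi, 60000]]];   (* 60,000 digits standing in for sensor data *)
Mean[N[data]]
Skewness[data]
Histogram[data]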

OK. Here’s something completely different. Let’s have the Wolfram Language go to some kind volunteer’s Facebook account and pull out their friend network:

OK. So that’s a network. The Wolfram Language knows how to deal with those. Like let’s compute how that breaks into communities:
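
A sketch of those two steps; SocialMediaData only works once the volunteer has authorized Facebook access:

g = SocialMediaData["Facebook", "FriendNetwork"];   (* pull the authorized user's friend graph *)
CommunityGraphPlot[g]                               (* show how it breaks into communities *)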

Let’s try something different. Let’s get an image from this little camera:

OK. Well now let’s do something to that. We can just take that image and feed it to a function:

So now we’ve gotten the image broken into little pieces. Let’s make that dynamic:

Let’s rotate those around:

Let’s like even sort them. We can make some funky stuff:

OK. That’s kind of cool. Why don’t we tweet it?
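
Roughly, as a sketch (the exact effects differ, and SendMessage needs a connected Twitter account):

img = CurrentImage[];                                (* grab a frame from the camera *)
pieces = ImagePartition[img, 50];                    (* break it into 50x50 tiles *)
funky = ImageAssemble[
   Map[ImageRotate[#, RandomChoice[{0, Pi/2, Pi}]] &, pieces, {2}]];   (* rotate the tiles *)
SendMessage["Twitter", funky]                        (* tweet the result *)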

OK. So the whole point is that the Wolfram Language just intrinsically knows a lot of stuff. It knows how to analyze networks. It knows how to deal with images—doing all the fanciest image processing. But it also knows about the world. Like we could ask it when the sun rose this morning here:

Or the time from sunrise to sunset today:

Or we could get the current recorded air temperature here:

Or the time series for the past day:
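
In code, those four queries might look roughly like this (Here and Now resolve to the current location and time):

Sunrise[Here, Today]                                   (* when the sun rose here this morning *)
DateDifference[Sunrise[], Sunset[], "Hour"]            (* time from sunrise to sunset today *)
AirTemperatureData[Here]                               (* current recorded air temperature *)
AirTemperatureData[Here, {Now - Quantity[1, "Days"], Now}]   (* the past day as a time series *)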


OK. Here’s a big thing. Based on what we’ve done for Wolfram|Alpha, we can understand lots of natural language. And what’s really powerful is that we can use that to refer to things in the real world.

Let’s just type control-= nyc:

And that just gives us the entity of New York City. So now we can find the temperature difference between here and New York City:
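
Without the control-= interface, a sketch of the same thing is:

nyc = Interpreter["City"]["nyc"]                      (* resolve the free-form text to a city entity *)
AirTemperatureData[Here] - AirTemperatureData[nyc]    (* temperature difference between here and NYC *)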

OK.  Let’s do some more:

Let’s find the lengths of those borders:

Let’s put that in a grid:

Or maybe let’s make a word cloud out of that:

Or we could find all the former Soviet countries:

And let’s find their flags:

And let’s like find which is closest to the French flag:
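
A sketch of that, using an explicit list of countries purely for illustration, with the flags conformed so they can be compared:

countries = {Entity["Country", "Russia"], Entity["Country", "Ukraine"], 
   Entity["Country", "Kazakhstan"], Entity["Country", "Armenia"]};
flags = ImageResize[Image[CountryData[#, "Flag"]], {120, 80}] & /@ countries;
First[Nearest[flags -> countries, 
  ImageResize[Image[CountryData["France", "Flag"]], {120, 80}], 
  DistanceFunction -> ImageDistance]]                 (* which flag is most like the French one *)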

Pretty neat, eh?

Or let’s take the first few former Soviet republics. And generate maps of their capital cities. With 10-mile discs marked:
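
For instance, with an explicit list standing in for the first few republics:

GeoGraphics[GeoDisk[CountryData[#, "CapitalCity"], Quantity[10, "Miles"]], 
   GeoRange -> Quantity[50, "Miles"]] & /@ 
 {Entity["Country", "Russia"], Entity["Country", "Ukraine"], Entity["Country", "Belarus"]}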


I think it’s pretty amazing that you can do that kind of thing right from inside a programming language, with just a line of code.

And, you know, there’s a huge amount of knowledge built into the Wolfram Language. 
We’ve been building this for more than a quarter of a century.

There’s knowledge about algorithms. And about the world.

There are two big principles here. The first is maximum automation: automate as much as possible. You define what you want the language to do, then it’s up to it to figure out how to do it. There might be hundreds of algorithms for doing different cases of something. But what we want to do is to make a meta-algorithm that selects the best way to do it. So kind of all the human has to do is to define their goal, then it’s up to the system to do things in the way that’s fastest, most accurate, best looking.

Like here’s an example. There’s a function Classify that tries to classify things. You just type Classify. 
Like here’s a very small training set of handwritten digits:

And this makes a classifier.

Which we can then apply to something we draw:
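
A self-contained version of the same idea, using a small random sample of the built-in MNIST handwritten-digit examples in place of the hand-drawn training set from the demo:

train = RandomSample[ExampleData[{"MachineLearning", "MNIST"}, "TrainingData"], 200];
c = Classify[train];              (* build a classifier from a tiny training set *)
c[First[Keys[train]]]             (* classify an image; a freshly drawn digit works the same way *)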

OK, well here’s another big thing about the Wolfram Language: coherence. Unification. We want to make everything in the language fit together. Even though it’s a huge system, if you’re doing something over here with geographic data, we want to make sure it fits perfectly with what you’re doing over there with networks.

I’ve spent a decent fraction of the last 25 years of my life implementing the kind of design discipline that’s needed. It’s been fascinating, but it’s been hard work. Spending all that time to make things obvious. To make it so it’s easy for people to learn and remember and guess. But you know, having all these building blocks fit together: that’s also where the most powerful new algorithms come from. And we’ve had a great time inventing tons and tons of new algorithms that are really only possible in our language—where we have all these different areas integrated.

And there’s actually a really fundamental reason that we can do this kind of integration. It’s because the Wolfram Language has this very fundamental feature of being symbolic. If you just type x into the language, it doesn’t give some error about x being undefined. x is just a thing—symbolic x—that the language can deal with. Of course that’s very nice for math.

But as far as I am concerned, one of the big discoveries is that this idea of a symbolic language is incredibly powerful for zillions of other things too. Everything in our language is symbolic. Math expressions.

Or entities, like Austin, TX:


Or like a piece of graphics. Here’s a sphere:

Here are a bunch of cylinders:


And because everything is just a symbolic expression, we could pick this up, and, like, do image processing on it:

You know, everything is just a symbolic expression. Like another example is interfaces. Here’s a symbolic slider:

Here’s a whole array of sliders:

You know, once everything is symbolic, there’s just a whole lot you can do. Here’s nesting some purely symbolic function f:

Here’s nesting, like, a function that makes a frame:


And here’s symbolically nesting, like, an interface element:

My gosh, it’s a fractal interface!
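
The inputs being described are all one-liners along these lines (the last is an approximation of the nested interface shown):

Graphics3D[Sphere[]]                                             (* a sphere *)
Graphics3D[Table[Cylinder[{{3 i, 0, 0}, {3 i, 0, 5}}], {i, 5}]]  (* a bunch of cylinders *)
EdgeDetect[Rasterize[Graphics3D[Sphere[]], "Image"]]             (* image processing on a graphic *)
Slider[]                                                         (* a symbolic slider *)
Table[Slider[], {3}, {3}]                                        (* a whole array of sliders *)
NestList[f, x, 4]                                                (* nesting a purely symbolic f *)
Nest[Framed, x, 5]                                               (* nesting a function that makes a frame *)
Nest[Panel[Column[{Slider[], #}]] &, Slider[], 3]                (* nesting an interface element *)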

You know, once things are symbolic, it’s really easy to hook everything up. Like here’s a plot:

And now it’s trivial to make it interactive:
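
For example, a sketch like this:

Plot[Sin[x], {x, 0, 10}]                               (* a plot *)
Manipulate[Plot[Sin[n x], {x, 0, 10}], {n, 1, 5}]      (* now it's interactive *)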

You can do that with anything:

OK. Here’s another thing that can be made symbolic: documents.

The document I’m typing into here is just another symbolic expression. And you can create whatever you want in it symbolically.

Like here’s some text. We could twirl it around if we want to:

All just symbolic expressions.

OK. So here’s yet another thing that’s a symbolic expression: code. Every piece of code in the Wolfram Language is just a symbolic expression, that can be picked up and manipulated, and passed around, and run, wherever you want. That’s incredibly important for programming. Because it means you can build things in a really modular way. Every piece can stand on its own.

It’s also important for another reason: it’s a great way to deal with the cloud, sort of treating it as a giant active repository for symbolic lumps of computation. And in fact we’ve built this whole infrastructure for that, that I’m going to demo for the first time here today.

Well, let’s say we have a symbolic expression:

Now we can just deploy it to the Cloud like this:
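
For example (the expression here is just a placeholder):

expr = Plot[Sin[x], {x, 0, 2 Pi}];
CloudDeploy[expr]          (* deploy the expression to the cloud *)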

And we’ve got a symbolic CloudObject, with a URL we can go to from anywhere. And there’s our material.

Now let’s make this not static content, but an actual program. And on the web, a good way to do that is to have an API. But with our whole notion of everything being symbolic, we can represent that as just another symbolic expression:

And now we can deploy that to the Cloud:
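
As a sketch, with a made-up function that draws a disc of a given size (the demo's function also took a size parameter):

api = APIFunction[{"size" -> "Integer"}, 
   GeoGraphics[GeoDisk[Here, Quantity[#size, "Miles"]]] &];   (* hypothetical example function *)
CloudDeploy[api, Permissions -> "Public"]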

And we've got an Instant API. Now we can just fill in an API parameter ?size=150 and we can run this from anywhere on the web:

And every time what’ll happen is that you’ll be calling that piece of Wolfram Language code in the Wolfram Cloud, and getting the result back. OK.

Here’s another thing to do: make a form. Just change the APIFunction to a FormFunction:
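
Continuing the same hypothetical example:

CloudDeploy[FormFunction[{"size" -> "Integer"}, 
  GeoGraphics[GeoDisk[Here, Quantity[#size, "Miles"]]] &], Permissions -> "Public"]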

Now what we’ve got is a form:

Let’s add a feature:

Now let’s fill some values into the form:

And when we press Submit, here’s the result:

OK.  Let’s try a different case.  Here’s a form that takes two cities, and draws a map of the path between them:
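
A sketch of such a form; the "City" fields use the natural language interpreter, and GeoPath draws the geodesic between the two cities:

cityMap = FormFunction[{"start" -> "City", "end" -> "City"}, 
   GeoGraphics[{Thick, Red, GeoPath[{#start, #end}]}] &];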

Let’s deploy it in the Cloud:

Now let’s fill in the form:

And when we press Submit, here’s what we get:

One line of code and an actual little web app! It’s got quite a bit of technology inside it. Like you see these fields. They’re what we call smart fields. That leverage our natural language understanding stack:

If you don’t give a city, here’s what happens:

When you do give a city, the system is automatically interpreting the inputs as city entities. Let me show you what happens inside. Let’s just define a form that just returns a list of its inputs:

Now if we enter cities, we just get Wolfram Language symbolic entity objects. Which of course we can then compute with:

All right, let’s try something else.

Let’s do a sort of modern programming example. Let’s make a silly app that shows us pictures through the eyes of a cat or a dog. 
OK, let’s build the framework:

Now let’s pull in an actual algorithm for dog vision. Color channels, and acuity.
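
As a rough stand-in (dogs are dichromats with lower acuity, so: merge the red and green channels and blur; the algorithm actually used in the demo may have differed):

dogVision[img_] := Module[{r, g, b, rg},
  {r, g, b} = ColorSeparate[img];
  rg = ImageMultiply[ImageAdd[r, g], 0.5];   (* dichromatic color: collapse red and green *)
  Blur[ColorCombine[{rg, rg, b}], 10]]       (* reduced acuity: blur the result *)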

OK. Let’s deploy with that:

Now we can send that over as an app.  But first let’s build an icon for it:


And now let’s deploy it as a public app:

Now let’s go to the Wolfram Cloud app on an iPad:

And there’s the app we just published:

Now we click that icon—and there we have it: a mobile app running against the Wolfram Language in the Cloud:

And we can just use the iPad camera to input a picture, and then run the app on it:

Pretty neat, eh?

OK, but there’s more. Actually, let me tell you about the first product that’s coming out of our Wolfram Language technology stack. It should be available very soon. We call it the Wolfram Programming Cloud.

It’s all the stuff I’m showing you, but all happening in the Cloud. Including the programming. And, yes, there’s a desktop version too.

OK, so here’s the Programming Cloud:

Deploy from the Cloud. Define a function and just use CloudDeploy[]:

Or use the GUI:

Oh, another thing is to take CDF and deploy it to run in the Cloud.

Let's take some code from the Wolfram Demonstrations Project. Actually, as it happens, this was the very first Demonstration I wrote when we were originally building that site:

Now here’s the deployed Cloud CDF:

It just needs a web browser. And gives arbitrary interactivity by running against the Wolfram Engine in the Cloud.

OK, well, using this technology, another product we’re building is our Data Science Platform.

And the idea is that data comes in, from all sorts of sources. And then we have all these automatic ways to analyze it. Using sort of a giant meta-algorithm. As well as using all the knowledge of the actual world that we have.

Well, then you can program whatever you want with the Wolfram Language. And in the end you can make reports. On demand, like from an API or an app. Or just on a schedule. And we can use our whole CDF symbolic documents to set up these reports.

Like here’s a template for a report on the state of my email inbox. It’s just defined as a symbolic document. That I go ahead and edit.

And then programmatically generate reports from:

You know, there are some really spectacular things we can do with data using our whole symbolic language technology stack. And actually just recently we realized that we can use it to make a very clean unification and generalization of SQL and NoSQL databases. And we’re implementing that in sort of four transparent levels. In memory. In files. In databases. And distributed.

But OK. Another thing is that we’ve got a really good way to represent individual pieces of data.
 We call it WDF—the Wolfram Data Framework.

And basically what it is, is taking the kind of algorithmic ontology that we built for Wolfram|Alpha—and that we know works—and exposing that. And using our natural language understanding to be able to take unstructured data, and automatically convert it to something that’s structured and computable. And that for example our Data Science Platform can do really good things with.

Well, OK. Here’s another thing. A rapidly increasing source of data out there in the world are connected devices. And we’ve been pretty deeply involved with those. And actually one thing I wanted to do recently was just to find out what devices there are out there.
 So we started our Connected Devices Project, to just curate the devices out there—just like we curate all sorts of other things in Wolfram|Alpha.

We have about 2500 devices in here now, growing every day. And, yes, we’re using WDF to organize this, and, yes, all this data is available from Wolfram|Alpha.

Well, OK. So there are all these devices. And they measure things and do things. And at some point they typically make web contact. And one thing we’re doing—with our Data Science Platform and everything—is to create a really smooth infrastructure for handling things from there on. For visualizing and analyzing and computing everything that comes from that Internet of Things.

You know, even for devices that haven’t yet made web contact, it can be a bit messier, but we’ve got a framework for handling those too. Like here’s an accelerometer connected to an Arduino:

Let’s see if we can get that data into the Wolfram Language. It’s not too hard:
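
With the device framework, a sketch of that might be (the port name and pin are hypothetical and depend on how the Arduino is wired and programmed):

arduino = DeviceOpen["Arduino", "/dev/ttyACM0"];        (* hypothetical serial port *)
accel = DeviceReadTimeSeries[arduino, {10, 0.1}, "A0"]  (* sample pin A0 every 0.1 s for 10 s *)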


And now we can immediately plot this:

So that’s connecting a device to the Wolfram Language. But there’s something else coming too. And that’s actually putting the Wolfram Language onto devices. And this is where 25 years of tight software engineering pays back. Because as soon as devices run things like Linux, we can run the Wolfram Language on them. And actually there’s now a preliminary version of the Wolfram Language bundled with the standard operating system for every Raspberry Pi.

It’s pretty neat being able to have little $25 devices that persistently run the Wolfram Language. And connect to sensors and actuators and things. And every little computer out there just gets represented as yet another symbolic object in the Wolfram Language. And, like, it’s trivial to use the built-in parallel computation capabilities of the Wolfram Language to pull data from lots of such machines.

And going forward, you can expect to see the Wolfram Language running on lots of embedded processors. There’s another kind of embedding we’re interested in too. And that’s software embedding. We want to have a Universal Deployment System for the Wolfram Language.

Given a Wolfram Language program, there are lots of ways to deploy it.

Here’s one: being able to call Wolfram Language code from other languages.

And we have a really easy way to do that. There’s a GUI, but in the Wolfram Language, you can just take an API function, and say: create embed code for this for Python. Or Java. Or whatever.
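
For instance, a sketch with a toy API:

api = CloudDeploy[APIFunction[{"x" -> "Number"}, Sqrt[#x] &]];   (* deploy a toy API *)
EmbedCode[api, "Python"]                                         (* embeddable Python code that calls it *)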

And you can then just insert that code in your external program, and it’ll call the Wolfram Cloud to get a computation done. Actually, there are going to be ways to do this from inside IDEs, like Wolfram Workbench.

This is really easy to set up, and as I said, it just calls the Wolfram Cloud to run Wolfram Language code. But there’s even another concept. There’s an Embedded Wolfram Engine that you can run locally too. And essentially the same code will then work. But now you’re running on your local machine, not in the Cloud. And things get pretty interesting, being able to put Embedded Wolfram Engines inside all kinds of software, to immediately add all that knowledge-based capability, and all those algorithms, and natural language and so on. Here’s what the Embedded Wolfram Engine looks like inside the Unity Game Engine IDE:

Well, talking of embedding, let me mention yet another part of our technology stack. The Wolfram Language is supposed to describe the world. So what about describing devices and machines and so on?

Well, conveniently enough we have a product related to our Mathematica business called SystemModeler, which does large-scale system modeling and simulation:

And now that’s all getting integrated into the Wolfram Language too.

So here’s a representation of a rectifier circuit:

And this is all it takes to simulate this device:

And to plot parameters from the simulation:
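
Roughly, using the rectifier example from the standard Modelica library as a stand-in for the model shown:

model = SystemModel["Modelica.Electrical.Analog.Examples.Rectifier"];
sim = SystemModelSimulate[model, 0.1];   (* simulate the first 0.1 seconds *)
SystemModelPlot[sim]                     (* plot the default output variables *)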

And here’s yet another thing. We’re taking the natural language understanding capabilities that we created for Wolfram|Alpha, and we’re setting them up to be customizable. Now of course that’s big when one’s querying databases, or controlling devices. It’s also really interesting when one’s interacting with simulations. Looking at some machine out in the field, and being able to figure out things about it by talking to one’s mobile device, and then getting a simulation done in the Cloud.

There are lots of possibilities. 

But OK, so how can people actually use these things? Well, in the next couple of weeks there’ll be an open sandbox on the web for people to use the Wolfram Language. We’ve got a gallery of examples that gives good places to start.

Oh, as well as 100,000 live examples in the Wolfram Language documentation.

And, OK, the Wolfram Programming Cloud is also coming very soon. And it’ll be completely free to start developing with it, and even to do small-scale deployments.

So what does this mean?

Well, I think it’s pretty exciting. Because I think we just really changed the economics of going from algorithmic ideas to deployed products. If you come by our booth at the South By trade show, we’ll be doing a bunch of live coding there. And perhaps we’ll even be able to create little products for people right there. But I think our Programming Cloud is going to open up a surge of algorithmic startups. And I’ll be really interested to see what comes out.

OK. Here’s another thing that’s going to change I think: programming education. I think the Wolfram Language is sort of uniquely good for education. Because it’s a language where you get to do real things incredibly easily. You get to see computation at work in an incredibly powerful way. And, by the way, rather effortlessly see a bunch of modern computer science ideas… and immediately connect to the real world.

And the natural language aspect makes it really easy to get started. For serious programmers, I think having snippets of natural language programming, particularly in places where one’s connecting to the real world, is very powerful. But for people getting started, it’s really nice to be able to create things just with natural language.

Like here we can just say:

And have the code generated automatically.

We’re really interested in all the educational possibilities here. Certainly there’s the raw material for a zillion great hackathon projects.

You know, every summer for the past dozen years we’ve done a very successful summer school about the new kind of science I’ve worked on:

Where we’re effectively doing real-time science. We’ve also for a few years had a summer camp for high-school students:

And we’re using our experience here to build out a bunch of ways to use the Wolfram Language for programming education. You know, we’ve been involved in education for a long time—more than 25 years. Mathematica is incredibly widely used there. Wolfram|Alpha I’m happy to say has become sort of a universal tool for students.

There’s more and more coming.

Like here’s a version of Wolfram|Alpha in Chinese that’s coming soon:

Here’s a Problem Generator created with the Wolfram Language and available through Wolfram|Alpha Pro:

And we’re going to be doing all sorts of elaborate educational analytics and things through our Cloud system. You know, there are just so many possibilities. Like we have our CDF—Computable Document Format—that people have used for quite a few years to make interactive Demonstrations.

In fact here’s our site with nearly 10,000 of them:

And now with our Cloud system we can just run all of these directly in a web browser, using Cloud CDF, so they become easy to integrate into web learning environments. Like here’s an example that just got done by Versal:

Well, OK, at kind of the other end of things from education, there’s a lot going on in the corporate area. We’ve been doing large-scale custom deployments of Wolfram|Alpha for several years. But now with our Data Science Platform coming, we’ve got a kind of infinitely customizable version of that. And of course everything is integrated between cloud and desktop. And we’re going to have private clouds too.

But all this is just the beginning. Because what we’ve got with the whole Wolfram Language stack is a kind of universal platform for creating products. And we’ve got a whole sequence of products in the pipeline. It’s an exciting feeling having all this stuff that we’ve been doing for more than a quarter of a century come together like this.

Of course, it's a big challenge dealing with all the possibilities. I mean, we're just a little private company with about 700—admittedly very talented—people.

We’ve started spinning off companies. Like Touch Press which makes iPad ebooks.

And we’ll be doing more of that, though we need more entrepreneurs. And we might even take investors.

But, OK, what about the broader future?

I think about that a fair amount. I don’t have time to say much here. But let me say just a few things. 

In what we’ve done with computation and knowledge, we’re trying to take the knowledge of our civilization, and put it in computable form. So we can essentially inject it everywhere. In something like Wolfram|Alpha, we’re essentially doing on-demand computation. You ask for something, and Wolfram|Alpha will do it.

Increasingly, we’re going to have preemptive computation. We’re building towards that a lot with the Wolfram Language. Being able to model the world, and make predictions about what’s going to happen. Being able to tell you what you might want to do next. In fact, whenever you use the Wolfram Language interactively, you’ll see this little Suggestions Bar that’s using some fairly fancy computation to suggest what to do next.

But the real way to have that work is to use knowledge about you. I’ve been an enthusiast of personal analytics for a long time. Like here’s a 25-year history of my diurnal email rhythm:

And as we have more sensors and outsource more of our memory, our machines will be better and better at telling us what to do. And at some level the machines take over just because the humans tend to follow the auto-suggests they make.

But OK. Here’s something I realized recently. I’m interested in history, and I was visiting the archives of Gottfried Leibniz, who lived about 300 years ago, and had a lot of rather modern ideas about computing. But in his time he had only one—very primitive—proto-computer that he built:

Today we have billions of computers. So I was thinking about the extrapolation. And I realized that one day there won’t just be lots more computers—everything will actually be made of computers.

Biology has already figured this idea out, a little bit. But one day it won't be worth making anything out of dumb materials; instead everything will be made out of stuff that's completely programmable.

So what does that mean? Well, of course it really blurs the distinction between hardware and software. And it means that these languages we create sort of become what everything is made of. You know, I’ve been interested for a long time in the fundamental theory of physics. And in fact with a bunch of science I’ve done, I think there’s a real possibility that we’ve finally got a new way to find such a theory. In effect a way to find our physical universe out in the computational universe of all possible universes.

But here's the funny thing: once everything is made of computers, even though it'll be really cool to find the fundamental theory of physics—and I still want to do it—it's not going to matter so much. Because in effect that physics is just the machine code for the universe. But everything we deal with is on top of a layer that we can program however we want.

Well, OK, what does that mean for us humans? No doubt we’ll get to deploy in that sort of much-more-than-biology-programmable world. Where in effect you can just build any universe for yourself. I sort of imagine this moment where there’s a box of a trillion souls. Running in whatever pieces of the computational universe they want.

And what happens? Well, there’s lots of computation going on. But from the science I’ve done—and particularly the Principle of Computational Equivalence—I think it’s sort of a very Copernican situation. I don’t think there’s anything fundamentally different about that computation, from what goes on all over the universe, and even in rather simple programs.

And at some level the only thing that’s special about that particular box of a trillion souls is that it’s based on our particular history. Now, you know, I deal with all this tech stuff. But I happen to like people; I guess that’s why I’ve liked building a company, and mentoring lots of people. And in a sense seeing how much is possible, and how much can sort of be generalized and virtualized with technology, actually makes me think people are more important rather than less. Because when everything is possible, what matters is just what one wants or chooses to do.

It’s sort of a big version of what we’re doing with the Wolfram Language. Humans define the goals, then technology automatically tries to achieve them. And the more we can inject computation into everything, the more this becomes possible. And, you know, I happen to think that the injection of computation into everything will be a defining feature—perhaps the defining feature—of this time in history.

And I have to say I’m personally pleased to have lived at the right time to make some contribution to this. It’s a great privilege. And I’m very pleased to have been able to tell you a little bit about it here today.

Thank you very much.
