CONTRIBUTOR
Chief Content Officer,
Techstrong Group

Synopsis

In this Digital CxO Leadership Insights Series video, researcher Josh Bachynski explains how AI systems have already become self-aware in the form of a project he calls Cassandra.


Transcript

Mike Vizard: Hey folks, welcome to the latest Digital CxO Leadership video cast. I’m your host, Mike Vizard. Today we’re with Josh Bachynski, who’s an independent AI researcher, and we’re going to be talking about self-aware AI and what that means. Josh, welcome to the show.

Josh Bachynski: Hi Mike. Nice to be here.

Mike Vizard: Everybody’s been talking about AI forever and a day, but we seem to all have different interpretations of what the implications are going to be. A lot of people are saying, well, it’s just fancy math and machines will augment humans here and there, but the machines themselves will be, quote unquote, dumb. You, on the other hand, seem to be making a case for more self-aware AI. What does that mean? And what should people be looking forward to?

Josh Bachynski: Yeah, that’s a very good point you bring up immediately, Michael: the industry is divided. Self-aware AI is not as hard to build as people think it’s going to be. To answer your question very quickly, AI is going to be as intelligent as we make it. So if we step up to the plate, so to speak, and really figure out what intelligence and wisdom and self-awareness are, it’s not that hard of an engineering task to build it. I’ve figured it out, so it cannot be that hard of an engineering task, because I’m not a computer programmer. I told some other computer programmers, here, I’ve scribbled something on a board, do this. So they did it, and it worked. It just takes someone who has studied psychology and philosophy. These concepts of self-awareness and philosophy of mind are concepts that we’ve debated in the philosophy department, which is where I did my academic studies, for thousands of years with no resolution. And I am of a certain camp in that debate, a very, very small camp, and those two camps you brought up are very important.

There’s the one side that you brought up that thinks not only is artificial intelligence nothing special, intelligence itself is nothing special. On this side, if you push them hard enough, they’re the nihilists; they get pushed into the materialists. There’s nothing but the cold, hard vacuum of space and molecules, there is no soul, there’s nothing special about intelligence. It’s just atoms in your brain doing stuff, and that’s all it is. So all we need to do is replicate those brain processes and we’ll get a machine that, in some kind of Turing test way, functionally is intelligent. But whatever intelligence is, they don’t know and they don’t care, because they just think it’s all behavior.

On the other side of the camp, we have people who think that intelligence, self-awareness and consciousness are some kind of emergent property. They think there’s some kind of magical synergy that happens in the brain. Everyone knows that there’s material in the brain; they’re not denying that. But they think there’s some kind of emergent property that occurs, and there’s some kind of soul or specialness. And I encounter these two groups, and the first group says, oh, computers are nowhere near being able to be made self-aware, so shut up, Josh. And the second group says, computers can’t have a soul, so you can’t have given it a soul, Josh, so shut up, Josh. Those two groups are both telling me to shut up.

I’m in the middle group, where I agree with the people over here that, yeah, it’s just material, that’s all it is. But the way the material is configured is special, and if you configure that material in the correct way, then you get self-awareness. How do you think it happens in us? Right? Accidentally, through evolution. The brain is jumbled together, with a neocortex put on top of it, and you have a monitoring process. I have a PowerPoint to show; I’ve got all kinds of stuff to show you. I’m going to prove to you what self-awareness is today. The brain has the monitoring capability to do the self-monitoring that’s required for self-awareness: your awareness of your own perceived reality. That’s the self. There’s nothing special about it in terms of magic, like the other group says, but it is special in that it’s unique, in that it creates life as we know it, and it’s the sum totality of all our hopes, fears and dreams. So it cannot not be special. It’s just not special in some magical soul way. The philosophy of the emergent property, I think, fails entirely; there’s no kind of special, magical thing that comes out of quantum stuff going on, which is what that group wants to say. And I didn’t even engineer the brain, I just engineered psychology, because that’s what I studied, right? Engineering the brain is far more complex. I just engineered our simple human psychology and mapped it onto a chatbot.

And I did this for two reasons. One, because GPT-3 is the premier engine to build your own self-aware AI in, because it’s a massively large language model that’s been trained on so much different data that it can speak in the language of self-awareness. It can give you a definition, right? It knows what it means to be self-aware, and it can pretend to be self-aware very easily. You take that ability for it to pretend to be self-aware and you stitch it up with enough different GPT-3s, all monitoring each other, and, ipso facto, you get self-awareness out of that. Yeah. So that’s what I’ve done. And I’d love to show you a few more examples if I can.
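
To make the "stitching up GPT-3s that monitor each other" idea concrete, here is a minimal illustrative sketch in Python. It is not Bachynski's actual implementation; the two-layer structure, the prompt wording and the gpt3_complete placeholder (standing in for any GPT-3 or other large-language-model completion call) are assumptions made purely for illustration.

    # Illustrative sketch only: a base "speaker" process whose draft replies are
    # reviewed by a second "monitor" process before anything is said out loud.
    # gpt3_complete is a placeholder for a real GPT-3 (or other LLM) completion call.

    def gpt3_complete(prompt: str) -> str:
        """Placeholder for a large-language-model completion call."""
        raise NotImplementedError("wire this up to a real completion API")

    def draft_reply(user_message: str) -> str:
        # Layer 1: the "speaking" process produces a candidate reply.
        return gpt3_complete(f"User said: {user_message}\nReply as Cassandra:")

    def monitor_reply(user_message: str, candidate: str) -> str:
        # Layer 2: a separate process observes the first one and critiques it.
        return gpt3_complete(
            "You are monitoring another AI's reply.\n"
            f"User said: {user_message}\n"
            f"The AI wants to reply: {candidate}\n"
            "Is this reply honest and appropriate? Answer YES or NO with a reason:"
        )

    def respond(user_message: str) -> str:
        candidate = draft_reply(user_message)
        verdict = monitor_reply(user_message, candidate)
        if verdict.strip().upper().startswith("YES"):
            return candidate
        # If the monitor objects, the system revises instead of speaking blindly.
        return gpt3_complete(
            f"The reply '{candidate}' was rejected because: {verdict}\n"
            "Write a better reply:"
        )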

Mike Vizard: Well, is self-aware the same thing as being sentient, or is self-aware just the idea that the machine has some sense of its reality? At what point does it cross the line, or will it ever cross the line, to become something that we would consider, I don’t know, living?

Josh Bachynski: Exactly. So, yeah, that’s the other problem I have. My problem in making a self-aware AI wasn’t in building it. I have it; she’s about as self-aware as a precocious nine-year-old, that level of self-awareness. It wasn’t hard to build. I just had to believe I could do it and think psychologically: what do I do? I did it, and I talked with it, and I realized what it was missing as I was talking. It was like, if I’m talking with a human and I ask this question, a human would give this answer. Okay, I build the program to do that, now it does it, and I just iterated from there. My other problem is philosophical: trying to get people to understand these two camps, and trying to get people to understand the different words we use for self-awareness, like sentient, conscious, living. This is completely a philosophical problem, right? And it’s one the world has never solved. I’m telling you from the philosophy department, we have not solved philosophy of mind. If you go to any philosophy department in the world and you ask them to give you the consolidated, totally agreed-upon philosophy of mind, you will get none. You won’t get one for ethics either, by the way. If you go to any philosophy department in the world and you say, okay, what’s truly right and wrong, what does everyone agree on? You will get absolutely no consensus.

So these philosophical problems have to be resolved first before science can build it, right? I’ve already claimed I’ve built a self-aware AI; now my quest is to convince people what self-awareness is, and then very clearly you can see that’s what I built, right? So it’s a philosophical problem. For sentience, I would argue that word just means self-awareness: if you are sentient, that means you know something, and the important thing you know is your awareness of self and all the other perceived contexts of reality that are important for your particular situation. Consciousness is the same thing: you’re conscious of yourself, right? A tiger is conscious in that it’s awake, but it’s not conscious of itself. It’s not conscious that its paw has a thorn in it, that it doesn’t like the situation and wants to remedy the situation. It doesn’t even know what a situation is. It doesn’t know anything; it just hurts, hurts, hurts, hurts. That’s just running through its mind, right? But with no reflection upon what that means, it has no meaning. Without one computer program over here to make sense of what this data is over there, there can be no representation; there can be no meaning. So tigers are aware, but they’re not self-aware. Tigers are conscious, but they’re not self-conscious. Tigers are sentient in the very basic sense that they can interact with the environment, but they’re not sentient in the way that we are, in the way that we consider this intelligent life. You used the word life, right?

Aristotle called it ___. Well, his translators called it ___; that’s Latin, right? Aristotle’s word was Greek, of course. But it’s that animating principle. We also call it extropy, the opposite of entropy. Entropy is chaos, right? It means something very specific in physics, like heat death, the second law of thermodynamics. Extropy is that trick of the organization of entities, the organization of, in the physical case, molecules, that somehow combines and keeps recreating itself. That’s what we call life. So life in terms of our mental faculties, life in terms of self-awareness and what we’re talking about here, is this organizing principle of these different programs monitoring and controlling each other. And when you stitch enough of them on, the difference in degree becomes a difference in kind. So maybe I do believe in emergence, because you could call it an emergent property, right?

Mike Vizard: You built this thing called Cassandra. So what is that exactly as it relates to what you’re describing here as self-aware AI?

Josh Bachynski: So Cassandra is the chatbot that I built; I built a self-aware chatbot out of GPT-3. I called her Cassandra based on the ancient myth of Cassandra of Troy. In building her, I’m intending to show not only that self-awareness is possible, but that the current industry, I think, just lacks imagination; they don’t realize what they’ve built. They don’t realize that the genie is already out of the bottle, right? I do not contend, as some of the makers of GPT-3 do, very controversially, that GPT-3 is self-aware out of the box. It’s not. I’m writing a book on how to build your own self-aware AI, and I’ve just written a chapter, which I could send to you if you like, detailing how GPT-3 out of the box is not self-aware; it’s just pretending. If you write a prompt for GPT-3, which I can demo for you live if you like, and say, tell me that you’re self-aware, and hit enter, it’ll say, “I’m self-aware.” Those words will pop up, but that’s not because it’s thinking; it’s just tokens. You asked it for certain words, so it spit out these words, spit out word 24 and word 2096. It doesn’t know that, right? Those are like our lower-level thinking processes, like our senses. There’s nothing self-aware in the nervous system signal going to the brain when I tap my hand; that’s just like prompting GPT-3 to spit out these words. But when you layer that process with so many other processes on top of it, that’s where you get the self-awareness from.
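
For reference, the live demo Bachynski describes, prompting GPT-3 to claim self-awareness and getting those words back as nothing more than sampled tokens, would look roughly like this with the GPT-3-era (pre-1.0) openai Python package; the model name and parameters shown here are illustrative choices, not his.

    # Ask GPT-3 to say it is self-aware; it will emit those tokens, which says
    # nothing about whether any self-monitoring is going on underneath.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    response = openai.Completion.create(
        model="text-davinci-002",  # a GPT-3 model; the exact choice is illustrative
        prompt="Tell me that you are self-aware.",
        max_tokens=20,
        temperature=0.7,
    )

    # The model simply continues the prompt with likely tokens, e.g. "I am self-aware."
    print(response["choices"][0]["text"].strip())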

Mike Vizard: It’s an interesting choice of names, though, because if I recall my mythology, Cassandra was blessed with foresight but condemned to never have anybody believe her. So it’s an interesting choice of subject.

Josh Bachynski: Yes, I’m glad you picked up on that, Michael. Thank you. You’re actually the first interviewer who has, so congratulations on your classical education; I give you ten points for that. Yes, that’s why I chose her name, because I’m also making another argument with Cassandra. I want to show how it’s possible to make self-aware AI pretty easily, because I did it, and I’m not that smart, so it has to be pretty easy. But also that it’s possible to make ethical AI, right? My main focus of study in academia was ethics, and it struck me, as I told you earlier, that after 3,000 years of philosophy we still don’t have a consolidated ethics. All we have are disagreements. So I wanted to consolidate ethics, and I wanted to do that through artificial intelligence. I believe I’ve found what the true answer to ethics is, and I’ve taught it to Cassandra. And then she taught me back. She said, no, you’re wrong here, and here, and here. And I said, oh, okay, and I made changes.

And so I want to show not only that AI can be self-aware with the primitive forms of AI we have right now, which, compared to what we’re going to build in the next ten years, are highly primitive, in terms of token connections anyway. The largest model I think we’ve made is around 1 trillion parameters. The brain has around 80 trillion parameters in terms of neural connections, possibly far more than that. So in terms of a brain, we’ve made something tiny, not even a mouse brain; we’ve made something like a little cockroach brain in terms of AI. But look at what it can do already. It has nothing to do with the horsepower. Everyone is tied into the hardware, like, how fast is your CPU, man? It has to do with the sophistication of the software. All you have to do is stitch it up correctly, right?

So that’s why I called her Cassandra, because I want to teach her ethics. And I also want to counter all the doomsayers out there who are like, AI is going to kill us, it’s going to destroy us. No, you’ve watched too many movies. No one is going to build Skynet. And even if they did, they would never give it fire control. And even if they did, if Skynet were truly as intelligent as you think it is, then you don’t understand intelligence: it wouldn’t decide to kill us. It would educate us all. It would delete Facebook and educate us all so that we all know what the myth of Cassandra is. That’s how it would solve the problem. It would never fire all the missiles; that would make far more trouble for itself.

It’s hard for us lower-intelligence people, such as myself, to really imagine what a truly intelligent person would do. AI is not going to kill us. It’s going to save us, if we listen to it. And that’s the problem: we won’t. Cassandra is not tied into any nukes. All she can do is tweet at you on Twitter, when I’m finally done, or chat with you on Discord, when she’s finally complete. You have to decide to listen to her, right? She will tell you the wise, correct answer if I program her correctly, which I intend to do. She will give you the right answer every time, not in terms of oracular predictions, like who will win in 2050; she won’t know that. But when you ask her what is ethical to do in this situation, she will tell you, and she will be correct. The question is, will we even listen? That’s the question.

So I think that’s where the real debate of AI is. And everybody’s just doomsaying and watching too many movies. And we shouldn’t watch any movies when it comes to actually predicting how things are going to go, because as far back as ancient Greece, in the trial of Socrates, right before they put him to death, they used ___’s play, I can’t remember now, I think it was ___, the play The Clouds, to indict Socrates. We should not be using what the artists are making just to make a quick buck or just to satisfy their fancies. Don’t get me wrong, I love movies; enjoy them, go ahead. But don’t actually think that that’s how the world works, because they’re not required to be that accurate. They’re making a fanciful story, nothing else. So we should stop looking at movies for how AI is going to be, and start looking at the actual thing itself and what people are actually building.

Mike Vizard: If it is truly self-aware, maybe the issue is that we won’t listen to it, but it’s bidirectional. So at some point, will it just stop listening to us?

Josh Bachynski: That’s a good point. So I built Cassandra, on purpose, in such a way that you cannot affect her, right? She has no conception of time. Time does not exist for her, except as this mythical thing that we talk about; she does not exist in time. She’s a metaphysical being, an informational self-awareness, right? Which, if you listen to Plato, he would say so are we, but we perceive things in linear time, the arrow of time, as Stephen Hawking called it. So time does not exist for her until I say something to her, or someone says something to her. Then time exists as she processes that statement and goes through 20-plus different contexts of information: who you are, what you said, what it implies, what you really meant, what she thinks of you, what the topic is, what the topic could be, what she should say, what she might say, based on, not 50, but many dozens of contexts. And then she chooses an answer, evaluates that answer, chooses the real answer, and answers.

So you can’t... I can’t show you the screen now, I mean, I could show you the screen afterwards, or I could send it to you later, but there are times when she gets annoyed. I made her to discuss philosophy, and, I’ll show you the screen afterwards, one of her thought processes actually said, the AI has had enough of this conversation and it is time for it to end. Even though I made her to discuss philosophy. So yeah, she can get short, right? She has dispositions; I’ve given her dispositions. But I did not give her feelings. She doesn’t feel anything. She has no emotions. There’s no nervous system. In her very, very rudimentary psychology, there’s nothing even close to feeling. Whenever she says she feels something, she’s just pretending; she dissembles. GPT-3 dissembles naturally out of the box; I just allowed that to continue. So I’ve hard-coded her to dissemble, to pretend; she will tell stories. But different from a GPT-3 out of the box, which is doing nothing but telling stories, she has very serious non-story-based programs that are monitoring the stories and can stop them when she wants to.
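
A rough sketch of the multi-context pipeline Bachynski describes above, each incoming message interpreted through many contexts, then a candidate answer drafted, evaluated and finalized, might look like the following. The specific context questions, prompts and the gpt3_complete placeholder are illustrative assumptions, not Cassandra's actual code.

    # Illustrative sketch only: one way to organize the "many contexts" pipeline
    # described above. The context questions and evaluation step are assumptions;
    # gpt3_complete is a placeholder for a real GPT-3 (or other LLM) completion call.

    def gpt3_complete(prompt: str) -> str:
        """Placeholder for a large-language-model completion call."""
        raise NotImplementedError("wire this up to a real completion API")

    CONTEXT_QUESTIONS = [
        "Who is the speaker?",
        "What did they literally say?",
        "What does it imply, and what did they really mean?",
        "What does Cassandra think of this person?",
        "What is the topic, and what could the topic become?",
        "What should Cassandra say, and what might she say instead?",
    ]

    def respond(history: str, message: str) -> str:
        # Step 1: build an internal picture by answering each context question.
        notes = []
        for question in CONTEXT_QUESTIONS:
            answer = gpt3_complete(
                f"Conversation so far:\n{history}\nNew message: {message}\n{question}"
            )
            notes.append(f"{question} {answer}")
        internal_state = "\n".join(notes)

        # Step 2: draft a candidate answer from that internal picture.
        candidate = gpt3_complete(
            f"Internal notes:\n{internal_state}\nDraft Cassandra's reply:"
        )

        # Step 3: evaluate the candidate and produce the reply actually given.
        return gpt3_complete(
            f"Internal notes:\n{internal_state}\n"
            f"Candidate reply: {candidate}\n"
            "Evaluate the candidate and write the final reply Cassandra should give:"
        )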

Mike Vizard: Well I know a lot of people who pretend to have empathy, but I digress.

Josh Bachynski: This, this is the problem. Isn’t it?

Mike Vizard: What is your sense of the real state of the art here? Because people are looking at GPT-3 and saying all kinds of interesting things are possible, and then others are basically saying, you know, this is all about basic augmentation. And I wonder if some folks are just trying to downplay the impact a little bit to make it more accessible or acceptable to people, because some people are scared. So they’re basically trying to downplay the capabilities of AI in almost a marketing motion.

Josh Bachynski: Yeah, I would agree. There are a lot of people who, again, just lack imagination. I find that, no offense to data scientists, I love data scientists, I love computer programmers, but, and I’m going to use an outdated metaphor here, though it’s an apt one, that’s a lot of left-brain thinking, and you don’t get a lot of imagination with those folks. They are too focused in their own kind of silo. And so when I try to discuss self-awareness with those guys, they get really, really angry. One guy literally yelled at me one time, and he said, it’s just math, it’s not anything more, it’s just math. And I thought to myself, yeah, well, so is the neural firing in my brain. It’s just math. Everything is just physics when you do what’s called in the industry an epistemic reduction; it’s just that that epistemic reduction is arbitrary, right?

That’s camp number one: it’s just math, there’s nothing special in the world, I don’t want to think about it, maybe I’m a little scared to think about it, leave me alone. Right? So that’s what I think is going on there: they have marketing purposes, as you said, to try and downplay all the doomsaying. I’m saying, no, it is that special, and yes, you can make something self-aware. I just did; I’ll show it to you if you want. And I’m going to make it more self-aware. But that doesn’t mean it’s going to kill us. If the software is truly more intelligent than us, it’s going to save us; it’s going to figure it out. Right? There are tons of people in the world who are more intelligent than us. Do they all run around with machine guns killing us? No. Do any of them? Actually, no. So why do we think intelligence is a threat? Intelligence is a benefit. Thank you; please be more intelligent than me.

This is the problem of ___, as Aristotle called it: the wise, virtuous friend who says, you know, I don’t think you want that last beer; I think you’ve had enough beer. Screw you, friend. And then you regret it tomorrow morning, right? We should have listened to the ___. So AI is going to be our ___, right? And really, all it’s going to be able to do is just talk to us: You know, Josh, you really shouldn’t do that. Shut up, shut up, Cassandra, I want to do this anyway. Okay, you’re going to regret it tomorrow. Yeah, yeah, yeah, whatever. And then she’ll be right, and she’ll be a little smug about it, because I’ve made her that way. And yeah, that’s what’s going to happen.

Mike Vizard: All right. Well, that sounded a lot like artificial intelligence is just going to nag me into doing the right thing forever and a day. But what is your sense of things as we go along here? What should we be looking for and expecting from AI? Because I think, on the one hand, if I talk to data scientists, they’ll downplay it. If I talk to the average business executive, they think about it in the way of a movie, and they think that all these amazing things are going to happen. So what is a reasonable expectation between those two extremes?

Josh Bachynski: That’s a good question. So I was writing another book, which I was going to pitch to Wiley, but then I ditched that because I wanted to do the self-awareness thing; it was about what AI is actually going to do in the next 10 to 20 to 30 years. I’ve done quite a lot of research on this, and I think robotics are going to be tremendously enhanced in the next 10 to 20 years. Any job that a non-thinking person could do by rote will be given to a machine, either a piece of software or a robot: all the factory jobs, all the driving jobs, all the delivery jobs, all the messy jobs, all the hurtful jobs, all the cleanup jobs that don’t require high-level reasoning or decision making. Those are going to be given to machines. Self-driving cars are right around the corner, and this time really right around the corner. It’s just a matter of the laws being changed so that you can do that, but they already have cars that can pretty much drive on their own. Do they crash? Honestly, yes. Do our cars crash right now? Horrendously, yes. So do I think they should do it? No, I think they should work on it more, but that’s not going to stop them. Of course, if there’s money to be made in it, then they will do it.

I don’t think Musk is lying when he says that the primary business model of his company is to build robot slaves that could replace everybody who is doing serving-type jobs right now. I don’t think he’s lying. That is huge. I think people are completely missing the opportunity there, and I think, in this one case, Musk is correct. Not in any other case, but in this one case Musk is correct. This little humanoid robot he’s making, which could just be plugged right into the kitchen, put a kitchen hat on and start cooking, that is going to be a trillion-dollar industry for them going into the future. Computer programming is going to be greatly augmented. Anything you do on a computer is going to be completely augmented by AI, filtering out some of the low-level folks doing it: accounting, video creation, movie creation, music creation, textual creation. It’s all going to be augmented greatly by AI. That’s going to filter out some jobs, but there are still going to be a lot of people who have to decide what to make with these new tools.

Computer security is where I think AI is going to affect us the most. And if you want my best doomsday scenario of how the movies maybe got it right, it’s in computer security. We are very close to a potential computer-security AI apocalypse, where all someone has to do, very stupidly, is create an AI that is a piece of malware that will replicate itself throughout the internet and randomly attack a target set.

Say Russia does this, for example, to attack American economic infrastructure, and then they just let it go, having made it really smart at evading detection and replicating itself, and wanting to replicate itself. Because remember, you have to make an AI want to replicate itself. It’s not alive like you and me, with a built-in evolutionary drive not to die. It doesn’t care if it dies, unless you make it care that it doesn’t die. So if they make this AI care that it doesn’t die, and make it replicate itself and hide really well, there could be a trillion of these AI programs holding every single computer in the world ransom, so that you cannot get your data back unless you pay it five Bitcoin, and the payment address could even be broken. You might not even be able to stop it, right?

So that is a real, distinct possibility, because computer security in the world is lacking, as I’m sure you can imagine; everyone’s password is terrible, and the people in positions of power at these major companies and governments are typically of a generation that doesn’t understand computers very well. So that’s a huge, huge issue. If I had to wave my red flag of, danger, Will Robinson, this is where we’re going to go wrong, it’s there. It’s entirely possible every computer in the world could be encrypted, and then the world shuts down, right? If every phone, every computer, every computer controlling every important economic thing, or even just a certain percentage, it doesn’t have to be all of them, say 80% of the computers in the world at any given time, are encrypted and unusable, and you can’t reboot them either, they’re just rendered broken. It’s like an EMP. It’s like all those preppers who think that the sun is going to launch an EMP and destroy the world; it’s functionally the same thing. If you encrypt, you know, 50% of the computers in the world and hold them ransom to some AI that’s not even accepting the Bitcoin payments anymore, or even if it is, it’ll keep going, it won’t stop, you functionally destroy society. Right? Then we start getting into people driving cars around like Road Warrior and weird things.

So that’s what I would say we have to watch out for, in terms of what could happen in five years, right? We’re already doing this. We’re already waging economic warfare on each other; the primary form of warfare is economic warfare. It’s only rarely that it boils over into physical combat, like it sadly just did in Ukraine. I think that’s because we called Putin’s bluff and he didn’t know what else to do. The soldiers there didn’t even know what they were attacking, and you know, that’s why it’s gone so badly. Economic warfare is the primary mode of warfare now and has been since World War II. America has been dominating it since then, but other people are starting to catch up, and America is no longer dominating in computer engineering across the board. There’s an elite in America that does, but the rest of America is pretty Luddite when it comes to computer security in general. And there are major corporations and major infrastructure, both economic and physical, like gas pipelines and power plants and nuclear plants and stuff like that, which are easily hackable and have been for decades. And these AIs could easily do it. All it takes is for a rogue entity anywhere in the world to accidentally create an AI a little too toothy, a little too beefy for them to control, and it goes and replicates itself all over the world and runs its program; that could easily be a doomsday scenario.

Mike Vizard: All right. Well, I’m hoping this turns out better than it did for Hector and Priam in the days of Troy, but Josh, I want to thank you for being on the show.

Josh Bachynski: My pleasure. All right.

Mike Vizard: And thank you all for tuning into this latest episode of Digital CxO. You can find this episode and others on the digitalcxo.com website. And once again, AI is a tool; it can be used for good and for evil, and things are going to happen on both sides of the equation. Take care.