In this Digital CxO Leadership Insights series video, Mike Vizard speaks with Marco Trombetti, CEO of Translated, about the singularity and a way to actually measure the pace at which we might achieve it.
Mike Vizard: Hello, and welcome to the latest edition of the Digital CxO Leadership Insights series video. I’m your host, Mike Vizard. Today we’re talking with Marco Trombetti about the singularity. He’s the CEO of Translated, and they have found a way to actually measure the pace at which we might achieve this goal. Marco, welcome to the show.
Marco Trombetti: Hello Mike, and thank you so much for the invite. Yes, we did some work in measuring the speed to singularity.
Mike Vizard: And what exactly is the singularity? And why should people care, not only about whether we achieve it, but about the rate at which we’re achieving it?
Marco Trombetti: Yeah. So singularity is a term that was coined, I think, by Ray Kurzweil, and it represents the moment in time at which machines will have a level of intelligence equal or superior to humans. Obviously, this is an epochal moment. As we move forward, we are realizing that these machines are improving their capacity every day, while we humans are quite stuck with our brain capacity. So the singularity is something that is coming, and the reason it is so important is that every change in humanity creates opportunities and challenges for people. By knowing when the singularity is coming, we are able to prepare and take the opportunity of this new change. Every technology can be good or bad; we want to make sure that we can predict when this is coming, so that we get the advantages and not the problems.
Mike Vizard: What is your sense of when we are going to achieve this goal? Because a lot of folks thought this was theoretical, and then ChatGPT came along, and now everybody’s kind of going – holy cow, what can these machines do?
Marco Trombetti: We humans have a lot of trouble with exponential growth; we are really not able to understand it very well. We’re good at linear growth. That’s why, from now on, things are happening faster and faster, and we’re surprised – surprised by seeing ChatGPT come along. But for someone who works in the AI industry, this is slow growth. It is accelerating, yes, but last year it was almost there, and if you have seen the small steps, you see that the progress is increasing. What we did is unusual. In the past, scientists tried to predict the singularity theoretically, saying, “Okay, we need human-level computation, a human-level amount of data, and a human-level capacity for learning from the data. When you achieve these things, machines will be better than humans.” But even if you have these things, it is not obvious that the singularity will come, because you have to put them all together; you have to make it work. So we made a different kind of study. Because our company, Translated, has been working for the last 20 years in symbiosis between AI and professional human translators, we have been able to measure the amount of time professional translators take to correct the errors of the machine. The better the machine gets, the less time you need to correct its mistakes. And because language is considered by many to be the hardest and most complex problem in AI, it is a great predictor of artificial general intelligence – the time when machines overall will be better than humans. We measured that over the last seven years we went from four seconds per word spent correcting to 2.2 seconds per word, and every single month this number is going down. So the progress is continuous.
Week by week, or month by month, you really don’t see the difference; you’re not able to see the progress of AI. But as soon as you zoom out to a period of years, you see a graph that is going straight to the point where translators will only take one second per word. Why is one second per word important? Because if you do a great translation, Mike – one that is perfect – and you give it to me, it will take me about one second per word to confirm that it is perfect. That is what we consider the singularity – the language singularity – the point at which I take the same amount of time to correct the machine as to correct a human. It takes one second per word to verify a human; we are now at two seconds per word, and we were at four point something before, so we are quickly approaching it, and this could happen this decade. We think that if we continue with this trend, by the end of this decade we will reach the language singularity – and immediately after, probably, artificial general intelligence.
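The extrapolation Trombetti describes can be sketched with a few lines of code. The figures (four seconds per word roughly seven years ago, 2.2 seconds per word now) come from the interview; the specific years and the assumption of a simple linear trend are illustrative, since Translated’s actual methodology is not detailed here.

```python
# Hedged illustration: extrapolate the time-to-edit trend described in the
# interview. Data points are from the conversation; the linear model and
# calendar years are assumptions for the sake of the sketch.

def years_until_target(t0, s0, t1, s1, target):
    """Fit a straight line through (t0, s0) and (t1, s1) seconds-per-word,
    then return the year at which the line reaches `target`."""
    slope = (s1 - s0) / (t1 - t0)          # change in seconds-per-word per year
    return t1 + (target - s1) / slope

# Roughly: 4.0 s/word seven years ago, 2.2 s/word now (say 2015 -> 2022),
# heading toward the 1.0 s/word "language singularity" threshold.
year_of_singularity = years_until_target(2015, 4.0, 2022, 2.2, 1.0)
print(round(year_of_singularity))  # -> 2027
```

Under this linear assumption the threshold lands in the mid-to-late 2020s, consistent with the “end of this decade” estimate given in the conversation.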
Mike Vizard: What do you think the implications are for the way we engage with each other if I can have translation at that level of quality and speed? And ultimately, might translation between languages cease to be a barrier at all?
Marco Trombetti: Well, you know, I think the world will look like a much, much better place. Because if we allow everyone to understand and be understood in their own language, we can finally, truly cooperate at a global level. There are challenges that cannot really be addressed by a single country – becoming an extra-planetary form of life, going to other planets, or solving climate change; these are not things a single country can do. If we bring understanding, and so cooperation, to a global level, all these things become possible. Language and translation are really a tool for humanity to grow. In fact, in history language was the most important factor in human evolution. We are different from all the other species: all other species have developed motor control, the capacity to move around, but only humans have developed complex language, and with complex language we were able to think about the future and cooperate for the future. So if we now enable that cooperation for everyone at a global level, we can unlock the next level of evolution for the human species. And the impact can be incredibly good.
Mike Vizard: What is your sense of what the relationship between people and machines will be like, then, if we achieve the singularity? And what exactly would be the role of humans in a world where machines are making analyses and decisions faster than we can even recognize that there’s an issue?
Marco Trombetti: I don’t have a clear answer for that. That’s a big problem, and I think we will see as we go, as we approach it. But what I know is that there are a couple of challenges we need to face first. This change is happening faster than our educational system can support. The previous transitions – the agricultural revolution and the industrial revolution – happened over a timescale that was compatible with training and educating the population for something new. This time it is happening so fast that we don’t have the time to learn about it. So that is one thing we have to watch with attention. The second big change, the one I’m really a little worried about and that we need to understand, is that every evolution in the past made humans more efficient. Throughout history, we converted time into wealth; basically, every past technology made our time more efficient, so with the same amount of time we created more wealth. Now, if we reach the singularity, machines will be able to produce wealth for us, and wealth becomes less about time and more about energy converted into wealth. Whoever controls energy may control the economy, and that is a transition we need to manage very well. That’s why we think that predicting and measuring the speed at which we’re approaching the singularity may help people understand how to organize themselves and how to take advantage of this opportunity.
Mike Vizard: So that can impact everything from whether we decide that we’re going to have universal basic income to how we educate our children. As I talked to one professor, he was basically talking about ChatGPT, and he laughed and said, “So what? We’re not going to have essays anymore – that’s fine. We’ll just have everybody take an exam live and call it even.” And frankly, he was like, “I don’t like grading essays anyway.” So you know, there were people looking at simple things like that and going, maybe we should just take a breath and say, “Okay, how do we want to think about the way we interact with knowledge and each other?”
Marco Trombetti: Yeah, I think the future will be different from history, but in the end every change has gone pretty well for us as humanity. I don’t regret not doing calculations by hand; I love my calculator. And I’m happy that my kids are not studying so deeply how to do logarithms and other things, because they can use a tool for doing that, and with that tool they can build something more complex, bigger and more impactful. It’s a natural evolution; every change brings challenges and fear, that’s for sure, but I’m pretty optimistic about the future. And about universal income, I think that will be quite obvious in the future. Because if wealth is created through energy, we are really born with some assets as humans – the air, the sun, and energy. And if there is a machine that is able to convert those assets into wealth, we won’t just receive assets when we’re born; we will probably also receive revenue coming from the work of the machines. And that’s absolutely fine. Probably this will help humanity move further in evolution. So it’s something inevitable.
Mike Vizard: We’ve already seen some convergence in how countries govern and whatnot, the U.S. probably being the best example. But do you think, as we go along here and we can all speak to one another in real time, that we might see some more interesting ways of organizing how we govern ourselves?
Marco Trombetti: Yeah, I guess that’s a tough question, and I really don’t have an answer for that. But global organization may be different if we understand each other deeply.
Mike Vizard: So what’s your sense of, you know, if I’m a business executive, and I’m looking at all of this, and I’m trying to plan for the long term – can I plan for the long term? Or what should I be thinking about?
Marco Trombetti: Well, I think the concept of long term is changing a little bit; maybe long term used to be 100 years. Now, I think long term means five to ten years. So that is where we need to adapt our way of thinking; we need to adapt to exponential growth, and that’s hard for humans. If 100 years was once long term, now long term is five.
Mike Vizard: I seem to remember back when we declared the decade of the PC, and how it was going to be a tremendous time, but I don’t think we recognized that we were living in that decade until some five years after the PC was invented. Is the same thing going on here in AI – that we’ve actually crossed some demarcation point, maybe last year or the year before, without realizing it?
Marco Trombetti: Yes. And I have to say that the future, when you think about change, is not only about time; it’s very often about space – about location. The world is not evolving at the same speed everywhere. There is a certain group of people who believe in the future, and certain others who do not, so no change spreads instantaneously across the planet. That creates a certain amount of time in which we adapt to things, which I think is also a positive thing, because it gives us time to see the errors some people make and try to avoid them, so that we’re able to evolve and take the good part of the technology.
Mike Vizard: Do we need to be worried about social upheaval as a result of all this? Because, to your point, change will be unevenly distributed, and some people may not welcome it as much as others.
Marco Trombetti: Well, I think every change in the short term has made the distribution of wealth less equal – wealth becomes more polarized, and then over time it gets better. In this specific case, as you mentioned, universal income may be one tool by which we can redistribute wealth in a better way. How this is going to happen at a global scale, when these kinds of things are controlled by local governments, I really don’t know, and I think it’s a significant problem. But I think that universal income will be the way; after the singularity we’ll be able to redistribute wealth.
Mike Vizard: Alright folks, you heard it here. Once again, there are probably more questions than answers, but the questions are getting a lot bigger. Hey, Marco, thanks for being on the show.
Marco Trombetti: Thank you so much Mike.
Mike Vizard: Alright, and thank you all for watching the latest edition of the Digital CxO Leadership Insights series videos. I’m your host, Mike Vizard. You can find this episode and others on the digitalcxo.com website. And once again, thanks for spending time with us.