Chief Content Officer,
Techstrong Group


In this Leadership Insights video interview, Mike Vizard speaks with Josh Mesout, chief innovation officer at Civo, about how to make AI and ML more accessible to the average company.



Mike Vizard: Hello and welcome to the latest edition of the Digital CxO Leadership Insights series. I’m your host, Mike Vizard. Today we’re with Josh Mesout, who’s the chief innovation officer for Civo. They are a cloud service provider, and we’re talking about how to make AI, and machine learning in particular, more accessible to the average company. Josh, welcome to the show.

Josh Mesout: Thank you very much for having me today Mike.

Mike Vizard: We have seen the tech giants do all these amazing things with AI as of late, but it seems like that’s a little out of reach for the average organization. So, how do you think that AI as a set of capabilities will become more accessible to these organizations, and what should the plan be?

Josh Mesout: That’s a really good question, Mike. I think one of the challenges that machine learning presents is actually a large amount of resource use, and that pairs with the unique value proposition of cloud. What we’ve seen over the past few years is that implementation has always been focused at scale, looking at large companies building large machine learning models with large training sets and data sets.

What we’ve been noticing at Civo, particularly with a strong relationship with the cloud native ecosystem and the open source ecosystem, is that the journey needs to start small and scale with the user. Instead of having entry points that kinda start in the thousands and thousands of pounds or dollars, actually, there are people who want to get hands-on with these technologies at a small scale, have their machine learning platform or cloud service provider grow with them, and start to be able to scale into those large-scale machine learning models.

Mike Vizard: Do I need a lot of data to build those models initially? It seems like a lot of progress has been made in terms of making it possible to build without as much data as used to be required. And is that part of what’s gonna make this a little more accessible to folks?

Josh Mesout: I think it’s a really interesting question, actually. We’re seeing more and more value behind things like generative data. And, actually, it all depends on the problem you’re starting to solve and how wide a problem set you’re trying to solve for. What we traditionally see is that it’s an experimentation process when you first start out in your machine learning journey. You might have quite a small data set, you might notice some parts of that data set are really useful for certain parts of your problem, and other parts of your data set aren’t as useful. And, I think what we really wanna encourage users to do is start with a smaller data set and take complexities like feature engineering and exploratory data analytics and find easy ways to build that data set in quite an articulate way. We don’t believe in the mentality that you can bring all your data to a single point and be able to make sense of all of it. We wanna make sure we put tools together that help people really work through that exploratory data analytics process and also understand their data.
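The exploratory step Josh describes can be sketched in a few lines. This is a hypothetical illustration, not anything from the interview or Civo’s tooling: it ranks the columns of a tiny, made-up data set by how strongly they correlate with the target, so you can see which parts of a small data set are “really useful” before investing further.

```python
# Hypothetical sketch: rank features of a small data set by absolute
# Pearson correlation with the target. The data and column names are
# invented purely for illustration.
from statistics import mean, stdev

def pearson(xs, ys):
    """Sample Pearson correlation between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Toy data set: two informative columns and one likely-irrelevant one.
data = {
    "usage_hours": [1, 2, 3, 4, 5, 6],
    "support_tickets": [6, 5, 4, 3, 2, 1],
    "signup_month": [3, 1, 4, 1, 5, 9],  # noise, probably not useful
}
target = [10, 20, 30, 40, 50, 60]

ranked = sorted(
    ((name, abs(pearson(col, target))) for name, col in data.items()),
    key=lambda kv: kv[1],
    reverse=True,
)
for name, score in ranked:
    print(f"{name}: {score:.2f}")
```

Running it puts the noise column last, which is the kind of quick signal-versus-noise check that lets a small team prune a small data set articulately before scaling up.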

Mike Vizard: Am I gonna go build all these models myself if I’m an organization, or am I gonna find somebody else has created a model and I’m gonna download it and tune it, maybe, for my own purposes? Is there gonna be a market for AI models that would then be deployed on a cloud platform?

Josh Mesout: That’s a really good question. OpenAI Whisper is a really good example of that, where, actually, you can get some billion-parameter models being given to you in an open source capacity. One of the things we think is a challenge in the current machine learning landscape is that there’s loads of tools to start from scratch, but actually, if you want to take some of the open source work that people have already done, implement that into an open platform, then start to build on top of that, that can be quite complex and difficult. One of the things we think machine learning can offer is allowing users to stand on each other’s shoulders and start to build that wide ecosystem.

And, if you look at some of the initiatives within the open source space, you’re starting to see that collaborative learning happening, where you might look at a decentralized data set or a kind of problem shared between multiple people. That’s a big culture shift from where you might typically see the problem solving of machine learning being done inside an enterprise, in a particular team or department. That, in itself, allows people to start to target problems which might be beneficial to more than just one user.

Mike Vizard: Do you think also there’ll be some sort of centralized service, so maybe I don’t even have to hire a data science team or the data engineers, because I’m gonna call a model through an API, essentially, and I’m gonna embed it in my application that way?

Josh Mesout: Yeah, I think that’s becoming a really interesting value proposition. There’s going to be, I think, two different types of machine learning use cases. One example, to match up to Whisper as I mentioned earlier, is, you know, speech to text, and that’s a use case that could be positioned in multiple companies for multiple problems. On the other hand, say, for example, you’re a pharmaceutical company doing drug discovery in a particular scientific area; it’s more likely that’s gonna be a bespoke type of research project.

I think that reusability of machine learning models ends up being a really big win-win situation, particularly when you look at situations where 85 percent of machine learning projects fail. If you can widen the use case and your success criteria, you might be more successful in those attempts. And, similar to how startup communities often pivot their companies and pivot the problem they’re trying to solve, could you take some of that agile mentality with machine learning and start to, kind of, widen the use case? Is the problem you’re trying to solve actually a wider problem that affects more people than just you?

Mike Vizard: What is your sense of the appetite among small to medium businesses for that matter, for AI? Are they kinda hungering to get started, or are they kinda just watching this with more of a sense of, “Let’s see how this whole thing evolves.” because maybe they’re a little intimidated by the whole thing?

Josh Mesout: Absolutely. I think if we look at six or seven years ago, machine learning or artificial intelligence was always something that had a big capital budget behind it, normally a large-scale initiative through something like a digital transformation program. What we’ve really noticed is that 2022 was actually the first time that small companies, under 250 employees, were the primary audience for machine learning platforms and tooling. I think that’s a really interesting transition, partly to do with the democratization of these machine learning tools being easier to get ahold of and access, and the big open source explosion around those machine learning tools has really helped drive that.

One of the other things we’re seeing as well is those large-scale companies starting to be slightly more hesitant about how they implement machine learning, ’cause they’re getting a wider understanding that a lot of these projects fail, they can often cost more than the value they add, and you’ve gotta take a really pragmatic approach towards these types of implementations. When you’re a smaller company you can fail fast and pivot often, and actually, that reduces the cost of ownership of machine learning.

Mike Vizard: What platform are people using to create these models? Because you mentioned cloud native early on, but it seems to me a lot of this AI activity goes hand in hand with Kubernetes clusters ’cause people are trying to use containers to make the models more manageable.

Josh Mesout: I think there’s a huge ecosystem of platforms. At our Navigate conference last month I actually shared a little bit of an infographic, and the outcome from it was that there is no single tool that’s leveraged for the whole end-to-end machine learning life cycle. The thing we’re really seeing is that users are starting to move away from the preference of having a one-size-fits-all machine learning platform and, actually, they’re really interested in autonomy and how they can start to patch together different systems and integrate different types of data and different types of machine learning models. One of the challenges you’ll have if you fall in, so to speak, with a machine learning platform is that you’re reliant on that platform’s roadmap continually moving.

What we’re really excited about, and what we think is one of the key value propositions, is not recreating another proprietary tool and not pushing another machine learning standard on the user base. What we really wanna do is expand those horizons and give people the open source tools that they really want in the first place. And, if you look at a lot of the machine learning communities, the demands for new features and functionality for machine learning platforms are often driven by the needs of the open source community. Someone puts something really incredible on GitHub and everyone tries it out, it gets a lot of news attention, gets moving, but then you’re months behind playing around with it or getting it into an ecosystem where you can really leverage it and assess the benefits to your business.

Mike Vizard: As you think all this through for a minute, we see the rise of MLOps as a discipline, and it’s kind of similar in some regards to DevOps, where we’re trying to define a set of best practices for software development and then the construction of the AI models. Do these two cultures need to come together? ’Cause it seems like part of the challenge is, how do we insert the AI model into something at the point where the application is being deployed?

Josh Mesout: Yeah, absolutely. I think MLOps is a really prominent trend right now; it fits in really well around productionization of machine learning. What you’ve seen historically with machine learning platforms is that they give you a really wide corpus of different tools to use and different technologies to evaluate, and that’s really effective when you’re in the early stages of solving a problem. You might need to pivot, change your machine learning framework, change your machine learning tooling. And, actually, having a very dynamic development environment can help you integrate quickly, make large-scale changes that give you increases in accuracy, and help you evaluate concepts.

What we’re seeing is the trend that, once you’ve got that exploratory, evaluative machine learning down and you want to drive to production, MLOps is a really efficient way of implementing standards and guardrails, and it helps keep the validation steps of your machine learning running really accurately. You can also implement things like model monitoring into those processes to make sure you’ve got a high bar of performance and make sure you’re always keeping yourself up to a consistent standard with machine learning. In my opinion, the winning approach is the combination of freedom in the exploratory stage, so you can really solve your problem rapidly, then being able to drive your experiment into something that’s really usable, clear to leverage value from, easy to measure that value, and also easy to say when it’s not working.
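The model-monitoring guardrail Josh mentions can be sketched minimally. This is an illustrative toy, not any real MLOps tool: it tracks a deployed model’s rolling accuracy against an agreed bar (the threshold and window size here are invented assumptions) so you can “say when it’s not working”.

```python
# Hypothetical guardrail sketch: rolling-accuracy monitor for a deployed
# model. Threshold and window size are illustrative assumptions.
from collections import deque

class AccuracyMonitor:
    def __init__(self, threshold=0.9, window=100):
        self.threshold = threshold            # agreed performance bar
        self.outcomes = deque(maxlen=window)  # rolling window of hit/miss

    def record(self, prediction, actual):
        """Log one prediction against its ground-truth outcome."""
        self.outcomes.append(prediction == actual)

    @property
    def accuracy(self):
        if not self.outcomes:
            return None
        return sum(self.outcomes) / len(self.outcomes)

    def healthy(self):
        # Treat the model as healthy until there is evidence either way.
        return self.accuracy is None or self.accuracy >= self.threshold

# Toy usage: five predictions, three of them correct.
monitor = AccuracyMonitor(threshold=0.8, window=10)
for pred, actual in [(1, 1), (0, 0), (1, 0), (1, 1), (0, 1)]:
    monitor.record(pred, actual)
print(monitor.accuracy)   # 3 correct out of 5 -> 0.6
print(monitor.healthy())  # False: below the 0.8 bar
```

In a real pipeline a check like this would sit behind the validation step and page someone, or roll the model back, when `healthy()` flips to false; the point is only that the guardrail is a small, explicit standard rather than an afterthought.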

Mike Vizard: Who’s taking the lead on these projects? ’Cause I think a lot of times people will hire a data scientist and they expect something magical will happen, and the data scientists wind up solving problems that they find interesting that may not be relevant to the business. So, how do we, kinda, make sure that everybody’s on the same AI page?

Josh Mesout: That’s, I think, a really big challenge right now that everyone’s facing. The recommendation I’d have, from productionizing a wide range of different machine learning projects in different disciplines, is to always colocate yourself with the domain expert. If you were building a machine learning algorithm that scanned X-rays and looked for broken bones, you would want to have a doctor there making sure that your training data and test data were accurate. There’s a mentality amongst machine learning where people often try to solve problems which include an additional domain, and I think that could be a really interesting research process, and there’s nothing to stop people from learning and expanding the ways that they analyze it. But, in my mindset, the most effective machine learning is always done with a machine learning expert who’s got a lot of passion for the domain, but then also a domain expert who can sit alongside them. And it’s that type of technology-driven, business-led interaction that I think machine learning does exceptionally well at. It also gives you a really easy way to hold yourself honest: is the technology solving the problem, or is the problem serving the technology?

Mike Vizard: Every time we have a new use case, of course, it’s a jump ball in terms of cloud platforms. What should folks be looking for from a cloud service provider as it relates specifically to ML workloads, and AI in general, that you might not see, per se, from the big guys?

Josh Mesout: Yeah, I think because we position ourselves differently at Civo, we start to see some of the feedback from the customer base around what’s really effective. What I think is actually the most attractive proposition that we offer is transparent pricing that’s significantly lower than the competition. Now, one of the reasons why machine learning projects fail, and I think it’s 85 percent of machine learning projects that fail, is that about 53 percent never make it into production. Part of the reason for that is that the cost of running your machine learning might be substantially more than the value it might return.

A good example of this might be a hedge fund building a trading algorithm. It spends $10,000.00 building the algorithm, buying the data, getting it put together, but actually, when it’s put into the market and it’s generating trades and returning results, it probably won’t return a profit over your base cost until late in its lifecycle. With every machine learning model, the unknown is when your model stops working; it’s all about performance management and monitoring.

What we think is, by reducing those base costs and also reducing the barrier-of-entry costs, we can start to allow people to explore these problems with a smaller risk vector. If you’re a startup and you wanna get involved in these spaces, but you don’t wanna put a huge amount of capital upfront, we’re trying to offer ways that you can actually break that down. An example might be, if you don’t need a whole GPU to train your machine learning model and, actually, your machine learning model only leverages about 25 percent of a GPU, we’re happy to rent you a fraction of a GPU, which brings your cost down.

The advantage of that is, if you’re only using 25 percent of the GPU, you don’t have to pay for the 75 percent you’re not needing. As you start to scale organically as a customer and you start to leverage the whole of that GPU, you have stage gates that make it more effective, and also, I think from my perspective, safer for you to start being more exploratory, take bigger risks, and almost start to look at the longer haul of how you can capture the value down the tail.
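The fractional-GPU arithmetic works out simply. The rates below are invented for illustration, not Civo pricing; the sketch just shows how paying for only the 25 percent you use changes the bill versus renting the whole card.

```python
# Illustrative cost comparison (hourly rate is an invented assumption):
# renting a whole GPU vs. a fraction sized to your actual utilization.
WHOLE_GPU_RATE = 2.00  # assumed $/hour for a full GPU

def monthly_cost(fraction_used, hours=730, rate=WHOLE_GPU_RATE):
    """Cost when billed only for the fraction of the GPU you reserve."""
    return fraction_used * rate * hours

full = monthly_cost(1.0)      # pay for the whole GPU regardless of use
quarter = monthly_cost(0.25)  # pay only for the 25% you actually need

print(f"whole GPU:   ${full:.2f}/month")
print(f"quarter GPU: ${quarter:.2f}/month")
print(f"savings:     ${full - quarter:.2f} ({1 - quarter / full:.0%})")
```

At these assumed rates the quarter-GPU tenant pays $365 instead of $1,460 a month, a 75 percent saving, which is exactly the "don't pay for the 75 percent you're not needing" point, and the stage gates Josh describes would just move `fraction_used` up as the workload grows.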

Mike Vizard: Of course, training is one thing, and then we have the inference engines that get deployed at the point where data is being consumed. Do you think that AI will ultimately drive some level of convergence between cloud and what we call edge computing today, and the whole thing will just, kinda, fold together?

Josh Mesout: Absolutely. One of the products we’re exploring as a future element of our roadmap is actually how you can start to leverage hybrid compute with machine learning, which builds again on that cost of running machine learning at scale. If you know you’re going to drive your hardware hot and you know you’re gonna allocate it, renting it is actually probably quite an expensive proposition for you to take on. The alternative to that is, could you start to leverage hardware that you have inside your office? Now, that obviously has an advantage, one, in cost and long-term value return, but then also, your machine learning engineers in the same office will get a much better return ratio on premise, maybe if your data’s colocated there. And in some situations, if you’re a bank or a hospital or a healthcare company, you might have to keep your data on premise; you might not be able to release it to the cloud in all situations.

So, we really see a strong probability with edge compute, both in being able to, kind of, move the compute toward the user’s undertaking, but then also, on the other side, being able to move the model towards the problem. An example of this we’ve experimented with before is actually some image recognition and computer vision technologies, which are very high bandwidth and, you know, take a huge amount of computational resources. Instead of having that call across the world to send your data, generate a result, and send it back, could we put the thinking of that machine learning model next to the camera?

Mike Vizard: All right. So, what ultimately is your best advice to folks? Should I just take everybody who’s involved in MLOps, the data scientists, and the DevOps teams, and the business leaders and throw ’em all in one room and lock the door and see what happens? Or is there some other way of thinking about this?

Josh Mesout: I think there’s lots of different effective techniques for different size organizations, different types of organizations. What I often think is a really good first step is getting someone who really understands your data, putting them in a room with someone who really understands what could be possible with that data, they don’t necessarily have to be someone from the business domain and machine learning domain but they often are. From there I think you can start to evaluate what would be your first machine learning project, start very small, I think also, start very humble. And then, as you’re building these machine learning models, generating value and, obviously, return on investment, start to transform your existing data or big data infrastructure into something that’s machine learning centric.

Now, the challenge with that is lots of platforms out there would currently have you do that in a big bang transformation. What we’re quite eager to do is help make that a smoother transition and a smoother learning curve for companies and customers. Part of that is things like the fractional GPUs that we can offer, you know, hybrid compute, but then also it’s about not building more proprietary tools, and helping build open source tools that are really easy to integrate and interoperate with the existing technologies you have. A lot of the current machine learning platforms pick and choose their integration partners based on the existing partnerships they have or, actually, other managed services they’d like you to adopt. We think giving you more choice and giving you more autonomy gives an easier pathway than a linear one.

Mike Vizard: All right folks, you heard it here, AI’s gonna be pervasive. The only question is, how’s it gonna get done? Josh, thanks for being on the show.

Josh Mesout: Thank you so much for your time Mike.

Mike Vizard: And thank you all for watching the latest episode of the Digital CxO Leadership Insights series. I’m your host, Mike Vizard, you can find this episode and others on the website. We invite you to check them all out and we’ll see you next time.