Founder and Chairman, Drishti

As a longtime student of innovation and the growth of technologies, I’ve noticed a pattern of alignment behind successful innovation and adoption. Success almost always requires the coming together of three facets: a business problem significant enough to get early movers going, underlying technical and economic infrastructure on the cusp of maturity, and social platforms (for the mass spread of ideas) primed to support rapid innovation.

Electric vehicles are a great example. The earliest electric cars were built in the 1850s. General Motors built the EV1, the first modern electric car, using lead-acid batteries in 1996, with limited success. It was only when lithium-ion batteries became available, and wealthy green advocates were willing to spend serious dollars, that Tesla was able to build the Roadster in 2006 with far greater success.

I have watched AI go through a similar journey for at least 35 years, starting at Stanford, where my lab mate attempted to characterize the quality of cutting tools by analyzing audio signals (chatter) using simple neural networks running on a Sun 3/150 computer. Over the last five years, I have seen tremendous maturation, with complex AI products like Drishti hitting the market. As exciting as this is, there is a parallel and unexpected development that excites me just as much: the newly created MLOps space. Just as DevOps arose to keep cloud infrastructure running, MLOps will keep the core ML running. MLOps offers a tremendous opportunity to contribute and to build a career.

So, what is MLOps?

Formally, MLOps is a set of practices that combines ML development and ML execution. It aims to make the ML development life cycle efficient while ensuring that ML continues to execute at the desired service-level agreements (SLAs). MLOps delivers productivity, improved quality of service, higher customer satisfaction, and cost savings.

Figure 1 below, created with my colleague Sujay Narumanchi, lays out the core elements of The MLOps House. At the foundation sit the data infrastructure, responsible for managing the data flowing through the system, and the ML models from which an ensemble is created to deliver the best results. With this ensemble selected, the two pillars of the house, ML and Ops, step in. The ML pillar ensures that the ML is well tuned and delivered, while the Ops pillar ensures that an active feedback loop is running and that the ML models remain tuned even as the physical world changes. The business application layer delivers the intelligence and workflows contextually, so users can focus on their jobs, not on the idiosyncrasies of the AI. Finally, and perhaps most importantly, the central function is problem solving: figuring out what is going wrong in a non-intuitive system and quickly resolving it so the systems run and meet the business’s committed SLAs.

Figure 1: The MLOps House (credit: The Toyota House)
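
To make the house concrete, here is a minimal Python sketch of how the layers might interact. Every name and body below is a hypothetical stand-in meant to illustrate the structure, not Drishti’s implementation:

    import random

    class DataInfrastructure:
        """Foundation: manages the data flowing through the system."""
        def latest_batch(self):
            return [random.random() for _ in range(8)]  # stand-in for real plant-floor data

    class ModelEnsemble:
        """Foundation: several models combined to deliver the best results."""
        def predict(self, batch):
            return [x > 0.5 for x in batch]  # stand-in inference

        def retrain(self, batch):
            print(f"retraining on {len(batch)} fresh samples")

    def estimate_quality(predictions):
        """Placeholder quality signal; real systems compare against labels or heuristics."""
        return 0.9

    def ops_pillar(ensemble, batch, predictions, quality_floor=0.95):
        """Ops pillar: the feedback loop that keeps models tuned as the world changes."""
        if estimate_quality(predictions) < quality_floor:
            ensemble.retrain(batch)

    # One turn of the loop: foundation -> ML pillar (inference) -> Ops pillar (feedback).
    infra, ensemble = DataInfrastructure(), ModelEnsemble()
    batch = infra.latest_batch()
    predictions = ensemble.predict(batch)  # ML pillar: deliver well-tuned inference
    ops_pillar(ensemble, batch, predictions)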

What is the career opportunity?

As one can imagine, with the growing number of AI implementations, including in manufacturing, MLOps could become a part of every major operation. Organizations will be hungry for people who can validate, deploy and maintain these AI solutions, as well as the data they create and/or consume.

The truth is, like the DevOps folks before us, we found it hard to deliver AI as fast as our customers wanted. We needed engineers with specific skills to problem solve and keep the system running so that the ML development engineers could focus on creating better AI. We’ve seen our MLOps engineers take our systems to higher levels of execution because they have the skills and domain understanding, and they recognize the criticality of their roles.

The MLOps engineer, unlike the specialist ML designer who is likely steeped in advanced mathematics, is a generalist and a problem solver: skilled in data structures and algorithms, able to build and manage data-intensive applications, and fluent in the latest cloud orchestration and deployment tools. Her challenge is to anticipate issues, build the scaffolding to identify root causes quickly, explore solutions around the current one, retrain the updated system with newer data and keep the machine going.
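
As a toy example of that scaffolding, the sketch below (in Python, with hypothetical names and thresholds of my choosing, not Drishti’s actual tooling) tracks a rolling window of inference confidence and raises an alert the moment the average slips below an agreed floor:

    from collections import deque
    import random

    class ConfidenceMonitor:
        """Rolling window of inference confidence; flags when the average slips."""
        def __init__(self, window=100, floor=0.80):
            self.scores = deque(maxlen=window)
            self.floor = floor

        def record(self, confidence):
            self.scores.append(confidence)
            if (len(self.scores) == self.scores.maxlen
                    and sum(self.scores) / len(self.scores) < self.floor):
                print("ALERT: rolling confidence below the SLA floor")  # hypothetical pager hook

    monitor = ConfidenceMonitor()
    for _ in range(200):
        monitor.record(random.uniform(0.85, 1.0))  # healthy period: no alerts
    for _ in range(200):
        monitor.record(random.uniform(0.40, 0.70))  # quality drops: alerts fire

In production this check would feed a pager or a dashboard rather than print, but the shape is the same: cheap, always-on instrumentation wrapped around the model.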

At Drishti, we had been running robustly and accurately at one of our customers’ sites for many months when, suddenly, the data quality dropped. Our MLOps team jumped in, reviewed the video feed coming from the plant floor, compared it to the earlier feed from when inference quality was high, and determined whether the situation was due to a change on the plant floor or in our neural networks. As it turned out, the plant, which had been issuing gray gloves forever, had changed vendors and bought black gloves. Our AI had never seen black gloves, which disrupted its function. The MLOps engineer has to understand the ML in use, have a sense of what could create the errors being seen, and be able to identify, prioritize and execute tests to solve the problem.
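
To illustrate the kind of test that settles a question like the glove change, here is a hedged sketch (Python with numpy; the 0.25 threshold is a common rule of thumb, and the synthetic “footage” is a stand-in) that compares pixel-intensity histograms of current video against a known-good baseline:

    import numpy as np

    def intensity_histogram(frames, bins=32):
        """Normalized pixel-intensity histogram over a batch of grayscale frames."""
        hist, _ = np.histogram(np.asarray(frames).ravel(), bins=bins, range=(0, 256))
        return hist / max(hist.sum(), 1)

    def psi(baseline, current, eps=1e-6):
        """Population stability index; values above ~0.25 usually signal a material shift."""
        b, c = baseline + eps, current + eps
        return float(np.sum((c - b) * np.log(c / b)))

    rng = np.random.default_rng(0)
    gray_gloves = rng.normal(128, 20, size=(100, 64, 64)).clip(0, 255)  # stand-in baseline footage
    black_gloves = rng.normal(60, 20, size=(100, 64, 64)).clip(0, 255)  # stand-in drifted footage

    baseline = intensity_histogram(gray_gloves)
    if psi(baseline, intensity_histogram(black_gloves)) > 0.25:
        print("input drift detected: investigate the plant floor before blaming the model")

A high score says the input distribution itself moved, pointing the investigation at the plant floor rather than at the neural networks.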

As the technology matures further, we will see more AI in both the infrastructure layer and in business solutions. Businesses are going to need more people capable of digging into the technical side of those solutions. The graph in Figure 2 shows both the exponential growth in Google search queries for “DevOps Engineer” and the minuscule volume of searches for “MLOps Engineer” today. As MLOps goes mainstream, the MLOps curve will follow the DevOps curve with a phase shift, pointing to exponential growth in opportunity.

Figure 2: The growth in MLOps roles will parallel the growth we have already seen in DevOps roles

If this sort of role excites you, I would urge you to borrow ideas from DevOps, polish up your math fundamentals, get familiar with ML approaches, and write to AI companies asking for an MLOps role. Remember: the early bird gets the best worms.