CONTRIBUTOR
General Manager and Editorial Director,
Techstrong Group

Synopsis

In this Digital CxO Leadership Insights series video, Mike Vizard speaks with Mike Potter, CTO for Qlik, a provider of a data analytics and integration platform.


Transcript

Mike Vizard: Hello and welcome to the latest edition of the Digital CxO video podcast. I’m your host, Mike Vizard. Today, we’re with Mike Potter, who is the CTO for Qlik. They’re a provider of a data analytics and integration platform. Mike, welcome to the show.

Mike Potter: Hi. Great to be here.

Mike Vizard: We see a lot of organizations that all want to make better fact-based decisions faster, but it seems like they struggle to get there in the first place. What is your sense of how much progress we are making on that front? A lot of times I talk to digital CXOs and they don’t always trust the data that’s presented to them in the first place. So how do we vet the data, so we can get to the point where we trust the analytics?

Mike Potter: I think we’ve learned a lot about data, particularly in the last two years. If you think about the role that data has played in the pandemic and how it has been used, misused, misunderstood, and driven a lot of governmental-level decisions, I think all it’s done is reinforce how important it is that we provide the right data to make those decisions.

To answer the question, I think we’re still on that journey. I think the challenge that a lot of organizations have is that they have very advanced data management strategies, but what they don’t necessarily have is advanced data value strategies. The idea being to take that data and actually understand its value to the organization, and its ability to answer the questions that the organization has. I think that getting that right is an essential part of creating that end-to-end, where the data they’re managing is actually meeting the needs of the business.

Mike Vizard: I think we have high hopes for machine learning algorithms and AI to help us kind of make sense of all this data, and maybe figure out what data matters more to us. What is the current real state of AI? Because some people are skeptical and other folks are swearing by it. So what’s the reality?

Mike Potter: I think AI/machine learning techniques are a tool. They add capabilities in addition to the existing tools that we use to be able to do that transformation of raw data into data that has value, analytic value to the organization. So what we like to do is think of it in terms of a larger cognitive strategy, where we use these techniques to augment the user’s ability to look at the data, understand the data, and make the right decisions.

So the algorithms continue to evolve. Most of them are in the public domain. So where the real differentiation comes is how do you effectively use those algorithms in a strategy to actually prepare the data better for users who need to consume it, guide them towards areas in which insights exist, and help them with predictions and scenarios? I think that the applied use of that is where we have the best chance of adding the value to the customers.

That journey continues I think on all fronts, because a lot of times people look at these algorithms as the definitive, “The answer is,” as opposed to, “The answer could be,” and now it needs a human’s participation to decide what to do with that information in conjunction with all the other tools that they use.

Mike Vizard: How do we get bias out of that system? When I was in high school, I vaguely remember somebody telling me that you start out with a hypothesis and work your way back to prove it. That’s bad science. So how do I get to the point where I’m actually making decisions on what’s valid data that reflects what’s really happening versus what I wish would happen?

Mike Potter: That’s great. I think a lot of times it starts with: what questions does the organization have, and do you have the data that responds to those questions? I think that in many cases you can fall into that trap of form-fitting the data to suit the narrative. But at the same time, what you need is to be able to provide the perspective of what you haven’t seen. What haven’t you considered?

So there’s this idea that we like to talk about, which is this peripheral vision on your data, where you can use the capabilities to get to the data you’re looking for, but really, the insights that surround that data that might take you to a different data set or incorporate a new consideration into the data is really where we want to go.

Mike Vizard: A lot of folks will sit there and say, “We built this AI model, but then we forgot to look at it or monitor it,” and there’s a lot of drift all of a sudden and there’s new data sources that come in or the assumptions have changed. Do we need to have a process or some way of thinking about how to manage the lifecycle of the models within the context of a platform, and what does that look like?

Mike Potter: The short answer is definitely. I think the hard part about these model-driven approaches is they’re very dependent on the training sets, the training lifecycle. And more importantly, what are the right models for the particular type of problem we are solving? There’s a real gap in understanding between the people who need to find out an answer to a question that they have, and being able to translate that into a strategy of what are the right algorithms, heuristics, and approaches to take to answer that question.

So I think where we need to get to is more into the aggregated strategies, where we have technologies that allow you to propose several different approaches, several different models that could answer the question, and then manage their lifecycle through training and keeping them up to date as the data changes. Where we are focused particularly at Qlik is really providing that capability.

Mike Vizard: Should I or is it worth my time and effort to create a whole data science team and go build these models? Or am I just gonna consume them through a platform like yours and I don’t really need to go build the models? You’ll build them, and then I’ll just tweak them and tailor them for my use case as we go along. What’s the right mix there?

Mike Potter: I think the answer is that you still will always need to have some level of data science capability, but I think the role of the data scientist will change. A lot of times, when you start being able to leverage a platform approach where the model creation itself becomes less onerous, less complicated, you really now can give the data scientist the power to take the question to the next level. Being able to leverage an aggregation strategy of models towards a particular problem, they may have the ability now to go after a different angle on the problem or a more complex aspect of the problem.

So what we want to be able to do is, first of all, make these models consumable by end users, who don’t understand the math behind them, and also then further enable the data scientist to become more productive because we’re basically helping with the rote tasks, the mundane part of it, so that they can focus on new trends.

Mike Vizard: Speaking of those end users, one of the issues they seem to have is you present them with these amazing platforms, and then they look at you and they go, “Well, that’s awesome, but I have no idea what questions to ask.” So how do we kind of educate the end user or the digital CXO to ask better questions to run the business? Or will the platform start surfacing up interesting questions?

Mike Potter: The approach that I think has the most effectiveness is one in which not only can you prime the system with some inputs that help drive insights and observations to inform that CXO, but you give them the ability to leverage these techniques to help provide an AI/ML-based [inaudible], and then complement that with what I like to call collective intelligence, the ability to understand what everyone else is doing around them. What types of questions are my peers asking, my leadership, the people within the organization? Those questions can create a closed-loop feedback system into those algorithms and models, to actually help refine for the next time a question is asked.

Mike Vizard: How automated do you think things will get? Because in theory, I could take a highly prescriptive approach, if the analytics platform comes up with some sort of recommendation. Do you think people will automatically implement that, or will they stop and vet that first and double-check before they implement it, or will it be more of a closed loop kind of system?

Mike Potter: I think it’s a closed-loop iterative system. I think the system has an ability to get primed, whether that be through a data science team or through a set of initial profiling of the data and the information that’s available, and then through the use of it, being able to leverage that collective intelligence in a closed loop to continue to refine and improve it, continually retraining based on what is being asked, what new data is being added, and what new techniques could be applied given the types of questions that are being asked.

Mike Vizard: So this is not a set it and forget it kind of thing. It’s continuous and I have to continually tweak this platform, one way or the other, to make it smarter as the world around me changes. Do you think people get that or do they think it’s some sort of magical thing, that I’m just pushing a button and awesome things will happen as a result?

Mike Potter: I think that’s more of a social commentary on the level of data literacy that exists out there. The one thing that I’ve always found very surprising is just how literal people assume data is, that it is black and white, when in fact data is very gray in terms of its ability to answer questions. A lot of it has to do with the inputs that go into that data.

The analogy I like to use is when we were at school we studied great works, whether it be Shakespeare or otherwise, and we would sit in our classes and discuss those great works. We would interpret them. Well, those great works are basically data. Imagine being given a dataset, and in a classroom environment being able to discuss that dataset with the same kind of ability to interpret it and understand that the data plays multiple roles, depending on how it’s leveraged in context.

I think that what we’re gonna find is that as people mature their level of data literacy, they’ll realize that the role of data and the role of the techniques that we are using are really just designed to help them get to their answers more quickly, but leveraging this idea that it is an augmented journey in which the human and the system work together.

Mike Vizard: What is your sense of the government these days? Are they starting to come around the corner from a compliance perspective, and asking people to show how these models work? And are we prepared to answer those kinds of questions?

Mike Potter: I think it’s inevitable. What’s clear, particularly with some of the big social media companies out there that are under a lot of scrutiny, is that algorithms require accountability: accountability for how they behave and how they’re being managed, first in terms of how they serve the users, but more importantly, how they serve the businesses in which they’re working. I think we are going to have to head down a road where we have some level of accountability and governance.

Mike Vizard: What is the right expectation that people should have? Sometimes I talk to digital CXOs and they think that AI is gonna automate everything and they’re gonna reduce the cost of labor, and they’ve got this massive idea of huge profits and things are gonna be awesome.

Then I talk to folks who have some level of AI and they’re like, “Wow. This is like hiring a new employee and it takes months for them to get trained. The good part of it is they don’t quit, they don’t get sick, and they don’t go on vacation. But the other side of the coin is that it takes a while for these things to learn and to come up to speed.” So between those two extremes, what should people expect?

Mike Potter: I think there are no free rides. You’re not going to get this idea of a push-button magic system that answers every question. I think that’s what we all strive for. But I think the practical realities are it requires governance. It requires management. It requires continuing to improve and adjust, and modify the system in order to meet the ongoing and changing needs of the business.

I think that what will change is really the types of activities that people perform. So a lot of data management activities will become automated. A lot of the data science activities will become automated. So what ends up happening is those users, those constituencies, they’re going to have to change the game to a different set of management problems.

So it’s basically progress. We will continue to progress towards that, but the types of tasks that are gonna have to be performed will evolve based on the level of automation that we provide.

Mike Vizard: What’s your best advice to folks about how to get started from where they are? A lot of folks are still running around with spreadsheets. Some folks have a BI app. Most probably aren’t anywhere near the level of sophistication that we’re talking about. So how do I walk before I run?

Mike Potter: I think the simplest way I can answer that question is when you look at your data strategy within your organization, I think you need to understand the data you have, and at the same time, you need to understand the questions and the data that’s needed by the business in order to fulfill its strategic objectives. I think you need to understand where those two things fit, where you have the data to meet those needs and where you don’t, and what strategy you need to build in order to fill the gaps.

I think if you think of it in those simple terms, that will serve to guide how you manage the data you have, the importance and relevancy it has, and what new types of mechanisms for creating data and derivative data you need to instill into the organization and its processes. Then finally, being able to manage the requirements that come through those questions, helping and guiding people towards how they can get access to that data in a fashion that’s accessible to all levels of data literacy in the organization.

Mike Vizard: All right, folks, you heard it here first. Not all data is of equal value, so maybe the first step is to figure out what data matters most, rank it, and then you can automate from there.

Hey, Mike, thanks for spending some time with us.

Mike Potter: My pleasure. Thank you very much.

Mike Vizard: All right. Guys, thank you all for spending some time with us and listening to our show. We’ll see you again next time.


Show Notes