In this Leadership Insights video series, Amanda Razani speaks with Aaron Mendes, CEO and co-founder of PrivacyHawk, about AI as it relates to consumer privacy.

Transcript Text

Amanda Razani: Hello, I’m Amanda Razani, with Digital CxO. And I’m excited to be here today with Aaron Mendes. He is the CEO and co-founder of PrivacyHawk. How are you doing?

Aaron Mendes: Great, how are you doing? Thanks for having me.

Amanda Razani: Thank you for being here. Can you share a little bit about your background? And then a little bit about PrivacyHawk and the services you provide?

Aaron Mendes: Yeah, sure. My background is that I've been in the tech world for the last 20 years, particularly as it relates to big data, artificial intelligence and consumer data. PrivacyHawk is a company we founded a couple of years ago; it helps consumers protect their personal data from falling into the wrong hands with an application we call Personal Data Management, which gives users transparency into where their personal data is being used. Most people have a massive digital footprint, with their data just sitting in hundreds of databases that can get breached, and those leaks cause fraud and other harms. So we've built technology that helps people identify where their data is and reduce their digital footprint.

Amanda Razani: Wonderful. I have a lot of questions for you today, so let's start with the first one: AI technology is rapidly expanding, and companies want to harness it to drive business and level up. What measures should businesses put in place to ensure compliance with data protection laws while utilizing AI for digital transformation and improving the online customer experience?

Aaron Mendes: Yeah, so the first step is that people need to comply with the existing regulations. It starts with making sure that everyone who's working with data, and particularly personal information, is aware of what they can and can't do. A lot of companies have teams working with data that have not been educated on the regulations, so it starts with education. The other thing I think is important is that businesses, when they're working with personal data, should sit down regularly and do some introspection: How could this go wrong? How do we protect people's data? What are some things that could go sideways with the way we're using data? That's instead of launching headlong into "let's just mine this data and do everything we can with it" without ever asking what the risks are to us and to our users.

Amanda Razani: Gotcha – very important. Advanced AI technology is rapidly expanding, and with it, businesses can now harness more real-time data and unstructured data than ever before. So how can businesses balance the benefits of using consumer data to train AI models with the need to protect consumer privacy?

Aaron Mendes: Yeah, so similar to my last answer, the first thing you need to do is comply with the law. Businesses need to make sure they're educated on the laws, and the laws are different in different places – in Europe, you can't just go gobbling up consumer data without consumers' consent, which is why OpenAI got in a bit of trouble there. In the United States, it's a little more fast and loose; it's more of an opt-out culture, and most people haven't opted out. The balance between harnessing data and protecting privacy is different for each business, but it starts with being thoughtful about the data that you're using and asking yourself, "How could this go sideways? What are some terrible things that could theoretically happen with the way that we're using consumer data?" Just having that brainstorm is very powerful, because you'll drum up ways that you're putting consumers – and your business's liability – at risk. Once you surface those what-ifs, it's pretty straightforward to put guardrails in place to avoid those risks. Now, one thing that has been top of mind in the news is ChatGPT gobbling up all this consumer data. People are saying, well, that model can't be untrained – what rights do people have? And OpenAI's response was essentially that it would be virtually impossible not to collect any personal information. But they could at least get it 99% of the way there, by filtering out personal information or coming up with ways to anonymize it. Those are the kinds of things businesses need to think about.
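
(The anonymization step Mendes mentions usually means scrubbing personally identifiable information from text before it is used for training. The following is a minimal Python sketch of that idea, assuming simple regex-based redaction of emails and phone numbers; it is an illustration, not how OpenAI or any particular company actually does it. Production systems typically rely on trained named-entity recognizers and much broader rule sets.)

```python
import re

# Regex patterns for two common identifier types. Real pipelines use
# NER models and far more patterns; this is only a sketch.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact_pii(text: str) -> str:
    """Replace obvious personal identifiers with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

if __name__ == "__main__":
    sample = "Contact Amanda at amanda@example.com or 555-123-4567."
    print(redact_pii(sample))
    # -> "Contact Amanda at [EMAIL] or [PHONE]."
    # Note: the name "Amanda" survives; catching names is what
    # NER-based PII detection is for.
```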

Amanda Razani: You mentioned that most people are not opting out, and that goes into my next question, which is: How can consumers limit AI models' access to their personal information without compromising the quality of the service that they receive?

Aaron Mendes: Yeah, so there are a couple of things. First, there's some low-hanging fruit, and that is to reduce your digital footprint. There are all these data broker databases out there that nobody wants to be in; you might as well get yourself out of those. And then there are hundreds of companies that you gave your data to in the past that don't need it anymore, and you now have the right to request that they delete that information. So you're at least limiting your footprint to things that you actually use and that need your data. The second thing is that most of these services offer a way to tighten your privacy without compromising the service; it just happens to be opt-out instead of opt-in. For example, unless you're trying to be famous, which most people aren't, you should not make your social media posts public – have your friends and family be the only people who can see them. Most social media profiles are public by default, and that information can be gathered by AI bots and other bad actors. So tighten that stuff up and reduce your digital footprint, and you aren't really going to be compromising the value of most services.
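
(The deletion rights Mendes refers to come from laws like the CCPA's right to delete and the GDPR's right to erasure. As a concrete illustration – a hypothetical sketch, not PrivacyHawk's actual implementation – here is a small Python script that drafts such a request. The contact address and wording are illustrative assumptions; real requests should follow each company's published privacy policy.)

```python
from email.message import EmailMessage

# Hypothetical privacy contact address, for illustration only.
PRIVACY_CONTACT = "privacy@example-broker.com"

def draft_deletion_request(your_name: str, your_email: str) -> EmailMessage:
    """Draft a right-to-delete request (CCPA/GDPR-style) as an email."""
    msg = EmailMessage()
    msg["To"] = PRIVACY_CONTACT
    msg["From"] = your_email
    msg["Subject"] = "Request to delete my personal data"
    msg.set_content(
        f"To whom it may concern,\n\n"
        f"I, {your_name}, request that you delete all personal data you "
        f"hold about me, as provided for under applicable privacy law "
        f"(e.g., CCPA Section 1798.105 / GDPR Article 17).\n\n"
        f"The email address associated with my records is {your_email}.\n"
    )
    return msg

print(draft_deletion_request("Jane Doe", "jane@example.com"))
```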

Amanda Razani: So what are some of the risks associated with using AI models that have been trained on personal data, and how can those risks be mitigated?

Aaron Mendes: So there are two types of risk. The first comes from bad actors that have access to the same technology as well-meaning companies – and I actually think that's the bigger risk. Now that the genie is out of the bottle, and we know that these models can be very creative and solve very complex problems, similar to the way a human would, you can feed a model all this information about people and then ask it to help you figure out ways to compromise them, whether it's government officials or private individuals. You could ask the model to identify the people who would be easiest to target for identity theft, things like that. It's very dangerous that this technology now exists, because anyone with a few engineers could build something like that. The other kind of risk comes from companies with good intentions. They're gobbling up all this information about us and feeding it into these models, and they don't fully understand the capabilities of the technology until they start using it. All this disparate information about you from social media, data brokers, websites and public records could be pieced together in ways that compromise your personal security and privacy. Someone could go to one of these bots and ask, you know, what information do you know about Amanda Razani? And it would tell them a whole bunch of things it figured out from all that disparate information – things that maybe you don't really want people knowing about.

Amanda Razani: Oh, yeah, that's a little scary, you know, the information that can get out there with this technology. So what are some of the legal implications for businesses that violate consumer privacy laws in their use of AI models, whether intentionally or unintentionally?

Aaron Mendes: Yeah, so violating privacy laws is very expensive. In Europe, it’s a percentage of revenue, which can add up to billions of dollars. In California, it’s thousands of dollars per violation. So if you violate the privacy of a million people, you’re risking billion-dollar fines. And then there’s the liability of being sued and class action lawsuits and things like that. And it can significantly damage the reputation of a business. So it’s very expensive to run afoul of privacy laws.
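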

Amanda Razani: Potentially something that could shut a company down if it were bad enough. So how can government and regulators work together with businesses to strike a balance between using AI to enhance business operations and protecting consumer privacy?

Aaron Mendes: Yeah, so there are already some generally accepted rules encoded into law that are pretty good. These are things like the right to delete, or what they call the right to be forgotten, the right to not have your data sold and the right to non-discrimination. What's missing is something that gives users the ability to be removed from the raw data that's used to train AI models. And this is very tricky. It's kind of like compiling code: Once it's compiled, you can't reverse it. Once the model is trained, if someone wants to be out of that model, it could be enormously expensive to pull them out, because you'd have to retrain the whole model. So a bunch of smart people from various backgrounds need to get together and brainstorm how to tackle this: How do we make it so that this data is either anonymized before it's used in these models, or make it opt-in, not opt-out, to be used in these models? It's a difficult problem, and it's something we need to talk about and come up with a solution for.

Amanda Razani: Definitely. What are some other emerging technologies that you recommend the C-suite should be looking into, to assist with their digital transformation efforts?

Aaron Mendes: Yeah, I mean, each business is so different; they have different problems to tackle. But it's hard to imagine a business that wouldn't somehow benefit from this giant leap in generative artificial intelligence. It's going to be different for each business, but businesses need to be looking into what problems they have that can now be solved by these generative AIs. Another thing is that this leap in AI is also going to create a lot of cybersecurity vulnerabilities, so business leaders need to constantly be thinking about technology that can be used to reduce, in particular, social engineering hacks. A company's weakest link is its people. It's not that the people are doing anything wrong on purpose, but there are very clever hackers out there running phishing attacks and social engineering hacks that can lead employees to compromise systems accidentally. So you need to be thinking about what kinds of vulnerabilities are out there, and how you can leverage technology to find and fix them quickly.

Amanda Razani: Much to think about as businesses try to leverage technology while staying within the guidelines and protecting consumer privacy. I want to thank you for coming on the show today. Is there anything else you'd like to share with us?

Aaron Mendes: No, I'm good. I'm happy to have been here – a very interesting conversation.

Amanda Razani: Thank you. Well, have a good day, and I look forward to speaking with you again soon.

Aaron Mendes: Yeah. Thanks, Amanda. Great to meet you.