In this Leadership Insights video interview, Amanda Razani speaks with Torsten Grabs, senior director of product management at Snowflake, about key announcements from Snowflake Summit 2023 and the important issues enterprises are focused on today.

Transcript Text

Amanda Razani: Hello, I’m Amanda Razani with Techstrong Group, and I’m excited to be here today with Torsten Grabs. He is the senior director of product management at Snowflake. How are you doing today?

Torsten Grabs: I’m doing fantastic, Amanda, thanks for having me on.

Amanda Razani: Wonderful, glad to have you here. And the big topic of the day, of course, is Snowflake Summit 2023. You’ve had some big announcements so far. Let’s start out by sharing with our audience a little bit about Snowflake and the services you provide, and then go into some of those announcements.

Torsten Grabs: Yeah, so at Snowflake, our vision is to remove silos and to mobilize everyone’s data. AI, obviously, is a big topic for us these days, including at our Summit conference right now. It’s mobilizing everyone at warp speed, and it’s accelerating our vision here at Snowflake. Among the key announcements we made at Summit, for me the most exciting are in the compute space with Snowpark Container Services, which expands Snowflake’s compute infrastructure dramatically. It allows you to run a variety of different workloads natively on Snowflake compute. The workloads we now enable through Snowpark Container Services include full-stack applications; the secure hosting of large language models for generative AI; and, more broadly, a lot of advancements for data science and machine learning practitioners. We are also joined in that announcement by a number of partners that we’re very excited about – for example, Alteryx, but also Astronomer, Dataiku, Hex, NVIDIA and SAS – and they are all using Snowpark Container Services already to provide customers with more secure, easy and governed access to the enterprise data that customers have in Snowflake.
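
To make the announcement concrete: with Snowpark Container Services, you describe a pool of compute and then deploy a containerized service onto it. The sketch below shows roughly what that flow looks like from Snowpark for Python. The feature was announced in preview at Summit, so treat the statement syntax, pool name, instance family and image path as illustrative placeholders, not a final API.

```python
from snowflake.snowpark import Session

# Placeholder connection details; fill in your own account parameters.
session = Session.builder.configs({"account": "<account>", "user": "<user>",
                                   "password": "<password>"}).create()

# Provision a small compute pool to host containers (names/sizes illustrative).
session.sql("""
    CREATE COMPUTE POOL IF NOT EXISTS demo_pool
      MIN_NODES = 1
      MAX_NODES = 1
      INSTANCE_FAMILY = CPU_X64_XS
""").collect()

# Deploy a containerized service from an image in an image repository.
session.sql("""
    CREATE SERVICE IF NOT EXISTS demo_service
      IN COMPUTE POOL demo_pool
      FROM SPECIFICATION $$
        spec:
          containers:
          - name: app
            image: /demo_db/demo_schema/demo_repo/demo_image:latest
      $$
""").collect()
```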

Amanda Razani: Fantastic. Can you share a little bit more about Snowpark and some of those partnerships?

Torsten Grabs: Some of those partnerships? Yeah. So for instance, what you have the ability to do with Snowpark Container Services is to literally run these full-stack applications from those partners within the security context of your Snowflake account. Picking just one example: with Hex, you now have a state-of-the-art notebook environment that runs completely end to end in your Snowflake account. All compute that you do, either in the notebook UI itself or through Snowpark DataFrame operations on your Snowflake warehouse, stays within your organization’s security perimeter. You don’t have to worry about – hey, when I do this data processing, is my data leaving Snowflake, and how should I think about that from a security and governance perspective? There are also performance benefits of bringing all the compute that sits in the notebook exactly to where you work with your data in Snowflake. So those are some of the other benefits, right?
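
For readers who haven’t used Snowpark, here is a minimal sketch of the DataFrame pushdown Grabs describes: the Python code only builds a query plan, and the actual computation runs on your Snowflake warehouse when the result is requested. The table and column names here are hypothetical.

```python
from snowflake.snowpark import Session
from snowflake.snowpark.functions import avg, col

# Placeholder connection details.
session = Session.builder.configs({"account": "<account>", "user": "<user>",
                                   "password": "<password>",
                                   "warehouse": "<warehouse>",
                                   "database": "<database>",
                                   "schema": "<schema>"}).create()

# Lazy DataFrame: nothing executes yet, and no data leaves Snowflake.
orders = session.table("ORDERS")
summary = (orders
           .filter(col("STATUS") == "OPEN")
           .group_by("REGION")
           .agg(avg("AMOUNT").alias("AVG_AMOUNT")))

# show() compiles the plan to SQL and runs it on the warehouse,
# inside your account's security perimeter.
summary.show()
```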

Amanda Razani: What are some of the major impacts you think this is going to have on the enterprise in the next couple of years?

Torsten Grabs: In the next couple of years, I think it’s going to tremendously accelerate and enable generative AI and large language model use cases. One of the other partnerships we are very excited about is with NVIDIA, which is providing the GPU compute backbone in Snowpark Container Services for a lot of these generative AI and large language model use cases that typically run on GPU-backed machines. In addition to that, one of the key concerns around generative AI and large language models is about sensitive data in the enterprise. How do you orchestrate that with secure processing in the generative AI and large language model space? How do you make sure that none of that sensitive information is disclosed to someone who shouldn’t have access to it? With Snowpark Container Services, you can really bring that generative AI and large language model compute to where you have your data and where you govern your data. Then you can rest assured that even if you’re interacting with a large language model, none of that sensitive data is getting disclosed in ways you don’t want. I think that’s going to enable a lot of really interesting use cases for us, for our customers and for the industry more broadly. Maybe the more obvious ones are all the productivity enhancements that we’re seeing from generative AI and large language models right now. Think about the coding companions, like Copilot or, on the AWS side, CodeWhisperer, that can mean a substantial productivity gain for builders and developers – all of that is enabled by generative AI technology. Now these same productivity enhancements will show up in Snowflake as well, enabled through the processing infrastructure that Snowpark Container Services provides. Besides these productivity gains, there’s also an ability for us to become much, much smarter about the data that we have, or the data that an enterprise owns. By enabling large language models and generative AI on that data, you can build up a much deeper, much better understanding of your organization’s data. Obviously, you can only do that if you actually feel secure and confident about privacy for the processing that you want to do with generative AI and large language models. And I think that’s going to be the other big aspect that drives innovation and additional insights for enterprises: unlocking the power of generative AI and large language models over enterprise data.
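
The “bring the model to the data” pattern Grabs describes can be pictured as follows: once a language model is deployed as a service inside your account, you can invoke it directly from a query, so sensitive rows never leave Snowflake. This is a sketch only – the service function name ASK_LLM and the SUPPORT_TICKETS table are hypothetical, chosen to illustrate the shape of the call.

```python
from snowflake.snowpark import Session

# Placeholder connection details.
session = Session.builder.configs({"account": "<account>", "user": "<user>",
                                   "password": "<password>"}).create()

# Hypothetical: assumes an LLM deployed in-account via Snowpark Container
# Services and exposed as a service function named ASK_LLM.
rows = session.sql("""
    SELECT ticket_id,
           ASK_LLM('Summarize this support ticket: ' || ticket_text) AS summary
    FROM support_tickets
    LIMIT 10
""").collect()

for row in rows:
    # The ticket text itself never left the governed Snowflake environment.
    print(row["TICKET_ID"], row["SUMMARY"])
```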

Amanda Razani: Generative AI is the big topic of the year, and it seems to be advancing really quickly. I know there are some concerns around generative AI and getting some regulations on it. What is your view on enterprises as they try to harness this AI technology? How do they make sure to keep it safe and secure, and deal with the ethical standards of AI?

Torsten Grabs: Yeah, that’s a great question, and I think it’s top of mind for everyone. We talked about security and governance already a little bit here. What I would suggest organizations do on that front: Everybody can already orchestrate with cloud-hosted generative AI and LLM services today, but I would encourage organizations to really scrutinize the security situation when they do that with enterprise data. To orchestrate with an external service from Snowflake, you want to understand what data you can actually send over, given your privacy obligations. For some data – and there is a lot of such data – that may actually mean you don’t want to send it into those places, but would rather keep it in Snowflake and then use the LLM capabilities from our partners to do the processing in Snowflake, where you can maintain your privacy posture. So that’s one aspect. The other one that is talked about quite a bit: Is it going to displace employees completely? Will part of the workforce be out of jobs because of generative AI and large language models? There are certainly some aspects of that with all of the productivity gains that generative AI and large language models will unlock. But generally speaking, one of the big problems is that generative AI can still lead to wrong results, or hallucinations, as they are called. So I typically encourage leaders that I speak with to keep humans in the loop, so that you have oversight of generative AI and large language models and someone who actually vets the results being produced by these automated systems. By keeping the human in the loop, you can have the confidence, as an organization, that you are not blindly trusting a potentially wrong result. Maybe that’s another key aspect to keep in mind. Obviously, generative AI and large language models are getting better, but it will take some time until we can blindly trust them, and I would caution organizations not to do that for the foreseeable future. In addition to that, there’s also a legal situation evolving in different jurisdictions about what use of generative AI and large language models is permissible, and we obviously want to enable customers to make well-informed choices there. So, for instance, if something is not allowed in a particular jurisdiction, we’ll make sure the product capabilities give them the option to opt out of those capabilities.
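
The human-in-the-loop pattern Grabs recommends can be wired into any LLM workflow by gating every model output behind an explicit approval step before it is acted on. A minimal sketch, with generate() standing in for whatever model call you actually use:

```python
from dataclasses import dataclass

@dataclass
class Draft:
    prompt: str
    output: str

def generate(prompt: str) -> Draft:
    # Stand-in for any LLM call (hosted in-account or external).
    return Draft(prompt=prompt, output="<model completion>")

def human_review(draft: Draft) -> bool:
    # A reviewer vets the result before it is used anywhere.
    print(f"Prompt: {draft.prompt}")
    print(f"Model output: {draft.output}")
    return input("Approve this output? [y/N] ").strip().lower() == "y"

draft = generate("Summarize the Q2 revenue drivers.")
if human_review(draft):
    print("Approved - safe to publish or act on.")
else:
    print("Rejected - route back for manual handling.")
```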

Amanda Razani: Absolutely. And that key point you raised was the human element. As I speak with thought leaders across the space, they all say keep the humans in the loop – that is key. It is simply an enhancing tool at this moment in time. So as you are working with these different partners and different businesses, what are the key issues or struggles you’re seeing them have as they try to harness this AI technology? And as they try to better analyze their data for digital transformation purposes?

Torsten Grabs: One of the issues that we already touched on is: How can I make sure that I have access to the most promising data that I want to process? A lot of that data is actually enterprise data. The key challenge here is that we have made a lot of advances with large language models trained over publicly available data, and the results from that are very promising. But I expect even better results when these large language models can actually access the data from the enterprise, and when we can create specific model instances that are optimized, or even fine-tuned, on specific data sets for specific organizations and specific use cases. I think that will unlock the next level of value from generative AI, when we can do that securely. These large models are very, very broad, and because of that they can answer to a lot of use cases. But that may not necessarily be required for an organization that is just interested in solving one very specific use case. Then it might actually be better, from a result-quality perspective, to highly optimize a large language model on the specific data set from that organization. It may also give them cost benefits, because a narrower model will not require as many compute resources. So if you have a highly optimized but narrow model for a specific use case, you may be able to run it with very high accuracy at much, much lower cost. Those are some of the use cases that we are enabling through our partners. NVIDIA, for example, is very focused on that particular use case, where you start with a set of base models that you then train and optimize over enterprise data for a very specific use case, using frameworks from NVIDIA.
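
To make the fine-tuning idea concrete, here is a generic sketch of adapting a small open base model to domain text. It uses the Hugging Face transformers library purely as an illustration – the interview refers to NVIDIA’s frameworks, whose APIs differ – and the two-line “enterprise data set” is obviously a placeholder.

```python
# Generic illustration of fine-tuning a small base model on domain text.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Stand-in for proprietary enterprise text.
texts = ["Example internal document text ...", "Another internal record ..."]

def tokenize(batch):
    out = tokenizer(batch["text"], truncation=True,
                    padding="max_length", max_length=128)
    out["labels"] = out["input_ids"].copy()  # causal LM: predict the input
    return out

ds = Dataset.from_dict({"text": texts}).map(
    tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=ds,
)
trainer.train()
trainer.save_model("tuned-model")  # the narrow, domain-adapted model
```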

Amanda Razani: Digital transformation is, of course, another big topic, and most companies are digitally transforming in one way or another. What is your advice to them if they’re behind the mark on this and just getting into the digital transformation phase? Where do they start? What is step one?

Torsten Grabs: I think the word of encouragement here is that almost everyone is still just starting on the generative AI and large language model wave, at least in terms of enterprises embracing it for their sensitive data. We’re still in a starting position here, and that levels the playing field to a degree. So I would encourage organizations to get started now. Now is the time to wrap your head around it, and that may also put you in a position where you feel better about how you compare to other players in your industry, because everybody is now starting from more or less the same place. Also, I think it’s going to be interesting to see the impact of generative AI and large language models on, let’s say, data science and machine learning in the enterprise more broadly. Through the ability of generative AI to engage with less technical users through natural language and a more conversational structure, you have the ability to provide value for different personas in the organization without having to go through, let’s say, a data science team or machine learning engineering team, which in many organizations have been a bottleneck over the last few years. By leaning on something that is more self-service for less technical people, I think generative AI and large language models are a big opportunity for less technical organizations.
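
One way to picture the self-service pattern Grabs describes: an LLM translates a business user’s natural-language question into SQL, a human confirms it, and the query runs on the warehouse. Everything below is a sketch – complete() stands in for whatever model endpoint you use, and the schema hint is hypothetical.

```python
def complete(prompt: str) -> str:
    # Hypothetical stand-in for an LLM call; wire in your model of choice.
    raise NotImplementedError

SCHEMA_HINT = "Table SALES(region STRING, amount NUMBER, sold_on DATE)"

def answer(session, question: str):
    # session is a snowflake.snowpark.Session, as in the earlier sketches.
    sql = complete(
        f"Given this schema: {SCHEMA_HINT}\n"
        f"Write a single Snowflake SQL query that answers: {question}\n"
        "Return only the SQL."
    )
    # Keep a human in the loop: show the generated SQL before executing it.
    print("Generated SQL:\n", sql)
    if input("Run this query? [y/N] ").strip().lower() == "y":
        return session.sql(sql).collect()
    return None
```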

Amanda Razani: It will be interesting to watch this technology unfold as it relates to business. So do you have any other key announcements or thoughts that you want to share about Snowflake Summit?

Torsten Grabs: Yeah, I’m super excited to be here. I mean, the buzz is great, and it’s great to meet all the people. I’ve already started talking with some of the customers and picking their brains about what’s going on in the industry. So I’m very much looking forward to meeting more customers, getting a better understanding of how they’re thinking about the space, getting a better sense of how all these announcements are resonating with them, and hearing what use cases they would put on Snowflake with these new capabilities. Very much looking forward to that.

Amanda Razani: Great. Well, thank you for taking time to speak with me. I’m excited to hear more news as the summit unfolds. It’s been really great so far.

Torsten Grabs: Thank you so much.