SAP is betting on both Microsoft and Google to help it infuse more advanced artificial intelligence (AI) capabilities across its application portfolio.
At the SAP Sapphire 2023 conference this week, SAP revealed it will collaborate with Microsoft to first integrate SAP SuccessFactors with Microsoft 365 Copilot and with Copilot in Viva Learning, part of Microsoft's Viva employee experience platform. SAP will also use the Microsoft Azure OpenAI Service to access the large language models (LLMs) that will enable it to bring generative AI to its application portfolio.
At the same time, SAP has committed to making SAP data available alongside other data sources on the Google Cloud Platform. That pairing promises to make it easier to train AI models on data from both SAP and other sources using an instance of SAP DataSphere running on the Google Cloud Platform.
SAP CEO Christian Klein said the fastest way to bring the benefits of generative AI to users of SAP applications is to partner with entities that have already made massive investments in LLMs.
Specifically, SAP will leverage the Azure OpenAI Service API and data from SAP SuccessFactors to create highly targeted job descriptions that can be fine-tuned using the Copilot generative AI capabilities that Microsoft is embedding within Microsoft Word. SAP will also leverage the Azure OpenAI Service API to offer prompts to interviewers within Microsoft Teams, with suggested questions based on a candidate's resume, the job description and similar jobs.
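To make the pattern concrete, the sketch below shows what calling the Azure OpenAI Service to draft a job description might look like. This is an illustration only, not SAP's actual integration: the SuccessFactors field names (`title`, `location`, `skills`), the resource and deployment names, and the prompt wording are all assumptions; only the Azure OpenAI REST endpoint shape is taken from Microsoft's published API.

```python
"""Hypothetical sketch of generating a job description via the Azure
OpenAI Service REST API. Field names and deployment names are
placeholders, not documented SAP or Microsoft interfaces."""
import json
import urllib.request

AZURE_ENDPOINT = "https://YOUR-RESOURCE.openai.azure.com"  # placeholder
DEPLOYMENT = "gpt-35-turbo"                                # assumed deployment name
API_VERSION = "2023-05-15"

def build_prompt(role: dict) -> list:
    """Turn a (hypothetical) SuccessFactors job record into chat messages."""
    return [
        {"role": "system",
         "content": "You write concise, inclusive job descriptions."},
        {"role": "user",
         "content": (f"Draft a job description for the title "
                     f"'{role['title']}' based in {role['location']}. "
                     f"Required skills: {', '.join(role['skills'])}.")},
    ]

def generate_description(role: dict, api_key: str) -> str:
    """POST the prompt to the Azure OpenAI chat completions endpoint."""
    url = (f"{AZURE_ENDPOINT}/openai/deployments/{DEPLOYMENT}"
           f"/chat/completions?api-version={API_VERSION}")
    body = json.dumps({"messages": build_prompt(role)}).encode()
    req = urllib.request.Request(
        url, data=body,
        headers={"api-key": api_key, "Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The draft returned by the model would then be handed to a recruiter for fine-tuning in Word via Copilot, as described above.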
Integration between SAP SuccessFactors solutions and Microsoft Viva Learning will also enable employees to use Copilot in Viva Learning to run natural language queries that create personalized learning recommendations based on data and learning courses in SAP SuccessFactors. As courses are completed, SAP SuccessFactors will automatically update the skills records it enables organizations to track.
The major AI challenge organizations will need to resolve is how to effectively blend their own data with the data that was used to pre-train an AI model. Most LLMs have been trained on public data that is not always accurate. Organizations planning to use generative AI to drive digital business transformation initiatives will need to determine which subset of public data, combined with the data they have collected, yields the best results.
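One common way to blend enterprise data with a pre-trained model without retraining it is to retrieve the most relevant internal records at query time and pass them to the LLM as grounding context. The sketch below illustrates that retrieval step only; the keyword-overlap scoring is deliberately naive (a production system would typically use embeddings), and the record contents are invented for illustration.

```python
"""Minimal retrieval-augmented prompting sketch. Scoring by keyword
overlap is an illustration only, not a production-grade retriever."""

def score(query: str, doc: str) -> int:
    # Count how many distinct query words also appear in the record.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def select_context(query: str, docs: list, k: int = 2) -> list:
    # Keep the k internal records most relevant to the question.
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_grounded_prompt(query: str, docs: list) -> str:
    # Inject the selected records ahead of the question so the model
    # answers from enterprise data rather than its public training data.
    context = "\n".join(select_context(query, docs))
    return (f"Answer using only the records below.\n"
            f"Records:\n{context}\n\nQuestion: {query}")
```

The same selection logic is where the data-governance question above bites: only records an organization trusts, and is permitted to expose to the model, should ever enter the candidate pool.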
In effect, generative AI will require organizations to considerably improve the way they have thus far managed data. Once that data is better governed, they will then need to decide to what degree to rely on a service to access an LLM versus building their own domain-specific LLMs. Those LLMs are becoming easier to train on smaller data sets, so it may be feasible for larger enterprises that have invested in data science teams to construct them.
One way or another, generative AI will be applied to enterprise data in a more controlled setting. The challenge now is finding a way to achieve that goal as quickly as possible, before faster rivals attain a competitive digital business advantage that only becomes more difficult to surmount as time goes on.