Amazon Web Services (AWS) has launched a new service called Bedrock, which provides an API offering access to foundation models (FMs) for text and images from AI21 Labs, Anthropic, Stability AI, and Amazon's own Titan FMs.

Bedrock gives customers the ability to build and scale generative AI applications using FMs through a fully managed AWS service.

The cloud service uses AI models from Amazon as well as several AI startups, and lets users build chatbots and generate text or images. Currently in a limited preview stage, Bedrock offers a serverless experience, allowing customers to customize FMs with their own data and deploy them into their applications without managing any infrastructure.
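As a rough sketch of what calling such an API could look like, the snippet below builds a text-generation request for a Titan-style model and invokes it through boto3. Because Bedrock is in limited preview, the `bedrock-runtime` service name, model ID, and request-body field names shown here are illustrative assumptions rather than confirmed details of the service.

```python
import json


def build_titan_request(prompt, max_tokens=256, temperature=0.5):
    """Build a JSON request body for a Titan-style text-generation call.

    The field names here are illustrative assumptions, not a documented
    contract of the (limited-preview) Bedrock API.
    """
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {
            "maxTokenCount": max_tokens,
            "temperature": temperature,
        },
    })


def invoke_model(prompt, model_id="amazon.titan-text-express-v1"):
    """Send the request to Bedrock; requires AWS credentials and model access."""
    import boto3  # optional dependency, imported lazily

    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId=model_id,
        body=build_titan_request(prompt),
        contentType="application/json",
        accept="application/json",
    )
    return json.loads(response["body"].read())
```

The point of the serverless design is visible in the sketch: the caller never provisions inference infrastructure, and the same invoke pattern would apply to a third-party model by swapping in a different model ID and body schema.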

Customers will have the flexibility to choose how they want to build with generative AI: train their own FM on purpose-built ML infrastructure, leverage pre-trained FMs as base models for their applications, or use services with built-in generative AI that require no specific FM expertise.

Enterprises are likely to consider several approaches – prompt engineering, prompt tuning and model fine-tuning – to achieve the outcomes they desire from the base AI models.
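Of those approaches, prompt engineering is the lightest-weight: the base model is left untouched and the desired behavior is encoded entirely in the request text, whereas fine-tuning updates the model's weights with customer data. A minimal, model-agnostic sketch of a few-shot prompt builder (the template wording and example data are made up for illustration):

```python
def build_prompt(task, context, examples):
    """Assemble a few-shot prompt: task instructions, worked examples,
    then the new input. The base FM itself is never modified."""
    shot_lines = [f"Input: {inp}\nOutput: {out}" for inp, out in examples]
    return "\n\n".join([task, *shot_lines, f"Input: {context}\nOutput:"])


prompt = build_prompt(
    task="Classify the sentiment of each review as positive or negative.",
    context="The checkout flow kept crashing.",
    examples=[("Loved the fast shipping!", "positive")],
)
```

Prompt tuning sits between the two: it learns a small set of soft-prompt parameters while keeping the base model frozen, which is why platforms that automate this experimentation loop matter.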

A platform that enables rapid experimentation and iteration, with embedded automation, is important for removing the friction of operationalizing these generative AI applications.

Bratin Saha, vice president of AI and ML at AWS, says many customers have told AWS they want to take pre-trained FMs as base models and use them to build their own applications, without the burden of collecting large volumes of training data or spending millions of dollars on model training.

“For generative AI to become as pervasive as we believe it will be, it needs to be much easier for customers to do this, and customers have told us there are a few big things missing from the equation today,” he says.

For one, customers need to be able to find high-performing FMs that deliver outstanding results and are best suited to their purposes, with integration that is seamless and does not require managing huge infrastructure clusters or incurring large costs.

“Finally, customers want it to be easy to take the base FM and build differentiated apps using their own data – a little data or a lot,” Saha notes.

He adds one of the most important capabilities of Bedrock is how easy it is to customize a model.

“We are providing choice for customers by working with leading FM providers, making it easy for them to leverage existing models from Amazon SageMaker,” he notes.

He argues that by providing a fully managed experience, easy customization, protection and privacy, access to state-of-the-art FMs, and the choice to leverage third-party FM technologies, Amazon Bedrock is positioned to further democratize machine learning and make the use of FMs faster and more cost-effective, results that will positively impact digital transformation efforts.

Gartner analyst Arun Chandrasekaran explains Amazon has been aggressive in courting third-party providers alongside releasing its own models, and notes Amazon seems to be taking an approach of releasing smaller-scale, fine-tuned models rather than general-purpose models.

“Given the supply chain shortages, Amazon’s silicon capabilities may help to entice start-ups looking for predictable scaling and cost-effective IaaS to train and operate foundation models,” he says.

The challenges businesses face when building and scaling generative AI applications with foundation models include reducing hallucination risks, fine-tuning models with domain-specific or organization-specific data, legal risks associated with improper training processes, and protecting the organization's intellectual property, to name a few.

“I’d also say that the demand is fairly high for generative AI services relative to the provider’s ability to meet it, particularly for NLP use cases,” Chandrasekaran adds.

He points out the competitive stakes are really high in generative AI and the top technology companies and cloud providers are accelerating their research and product development in this space.

“The cloud providers see a multi-pronged opportunity with generative AI – providing a platform for customizing and operationalizing AI foundation models, accelerating launch of generative AI APIs that enterprises can embed into an existing application or workflow, and embedding generative AI into the business applications that they offer to enterprises,” Chandrasekaran explains.

The cumulative effect of this is more consumption of their cloud IaaS and PaaS, new revenue streams, opportunities to upsell generative AI within existing business applications, and benefits from the platform effect.

“If a startup building a generative AI application on your platform becomes successful, it creates a network effect of accelerating adoption with other start-ups and enterprises,” Chandrasekaran says.

The generative AI space, though quite nascent, is characterized by rapid innovation and interesting realignments and partnerships.

Speed to market, along with the ability to deliver highly efficient, cost-effective, and scalable environments, is key to attracting start-ups and enterprise clients to build on a provider’s platform.

“Amazon is betting on customer demand for fine-tuned models that are directly aligned with enterprise use cases in areas of text generation and code generation,” Chandrasekaran says. “The rise in more generative AI services along with more fine-tuned and open source models is a positive step for clients as it can reduce the time to market and provide more deployment flexibility.”