Does Your GPT Model Come With an Einstein Trust Layer?
Salesforce offers generative AI models with guardrails to protect data.
Who do you trust?
That’s the marketing pitch Salesforce is embracing as it enters the rapidly unfolding contest to become the AI cloud platform of choice for training and hosting large language models (LLMs) for commercial generative AI applications.
In fact, AI Cloud is what the San Francisco-based tech giant is calling its new platform for serving text-generating LLMs.
Salesforce’s plans for AI Cloud mirror Amazon’s strategy for its recently launched Bedrock, which offers a family of LLMs trained in-house by Amazon Web Services (AWS) and also serves as a platform for pre-trained models from startups that are looking for a partner to help them scale up.
AI Cloud will be offering LLMs trained by a range of partners, including AWS, Anthropic, Cohere and GPT-4 pioneer OpenAI, as well as “first-party” models trained by Salesforce’s research division to drive capabilities including code generation and business process automation.
Microsoft is calling its suite of industry-specific GPT-derivative LLMs “copilots”; one of them, a digital twin copilot developed in partnership with Willow, is bringing generative AI to proptech.
Salesforce hopes to gain a competitive advantage in what the firm calls “enterprise ready” generative AI by guaranteeing customers who bring their own custom-trained models to AI Cloud that their sensitive data can remain hosted on their own infrastructure.
To use a brain metaphor, the LLM will reside in Salesforce’s frontal lobe, the customer’s data will remain in the customer’s cerebellum, and the two will interact in a kind of digital osmosis, not unlike Mr. Spock’s Vulcan mind meld.
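To make that split concrete, here is a minimal Python sketch of the pattern. Everything in it is an illustrative assumption rather than Salesforce’s actual API: the endpoint URL, the payload fields, the zero-retention flag and the local database schema are all hypothetical.

```python
import sqlite3
import requests

# Illustrative sketch only: the endpoint URL, payload fields, "retain" flag
# and database schema below are hypothetical, not Salesforce's actual API.
HOSTED_MODEL_URL = "https://llm.example.invalid/v1/generate"

def draft_followup_email(case_id: int) -> str:
    # Customer data stays in a store on the customer's own infrastructure.
    conn = sqlite3.connect("local_crm.db")
    name, issue = conn.execute(
        "SELECT customer_name, last_issue FROM cases WHERE id = ?",
        (case_id,),
    ).fetchone()
    conn.close()

    # Only the minimal context for this one request crosses the wire, and a
    # (hypothetical) zero-retention flag asks the host not to store the prompt.
    payload = {
        "prompt": f"Draft a brief follow-up email to {name} about: {issue}",
        "retain": False,
    }
    resp = requests.post(HOSTED_MODEL_URL, json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()["text"]
```

The point of the design is that the hosted model only ever sees the few fields needed to answer a single request, and nothing persists on the host afterward.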
Salesforce calls this arrangement the Einstein Trust Layer. In an interview with TechCrunch, Adam Caplan, the firm’s SVP of emerging technology, explained that a successful generative AI platform must add its own guardrails.
“It’s really about bringing generative AI in a trusted fashion to the enterprise,” Caplan told TechCrunch. “The number one question from every customer is around trust and security and how we can enable them as an enterprise to approach these new technologies—this new world—in a safe fashion.”
The Einstein Trust Layer is a new AI moderation and redaction service that intercepts prompts and attempts (their word) to prevent a text-generating LLM from retaining sensitive data, such as customer purchase orders and phone numbers.
The Einstein Trust Layer is similar to AI chipmaker Nvidia’s NeMo Guardrails, a product aimed at companies with strict compliance and governance requirements.
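For a rough sense of what a redaction pass can look like, here is a minimal Python sketch; the regular expressions and placeholder tokens are illustrative assumptions, not the Einstein Trust Layer’s actual rules.

```python
import re

# Illustrative patterns and placeholder tokens; not the Einstein Trust
# Layer's actual redaction rules.
PATTERNS = {
    "PHONE": re.compile(r"\b(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "PURCHASE_ORDER": re.compile(r"\bPO[-#]?\d{4,10}\b", re.IGNORECASE),
}

def redact(prompt: str) -> str:
    """Mask sensitive substrings with typed placeholders before the LLM call."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Customer at 415-555-0199 is asking about PO-483920."))
# Customer at [PHONE] is asking about [PURCHASE_ORDER].
```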
According to TechCrunch, a growing list of firms has banned or restricted the use of generative AI tools like GPT-4, citing privacy risks. The list includes Goldman Sachs, Verizon and Amazon, the parent of AWS, which is promising to extend LLM development to businesses large and small.
Salesforce has introduced nine GPT-branded products built on its flagship offerings, including Sales GPT, Service GPT, Marketing GPT, Commerce GPT, Slack GPT, Tableau GPT, Flow GPT and Apex GPT.
According to the company’s website, Sales GPT can quickly auto-craft personalized emails, while Service GPT can create service briefings, case summaries and work orders based on case data and customer history.
Marketing GPT and Commerce GPT can generate audience segments for targeting and tailor product descriptions to each buyer based on customer data; the bots can also recommend ways to increase average order value.
Slack GPT and Flow GPT let users build no-code workflows that embed AI actions. Tableau GPT can generate visualizations based on natural language prompts and surface data insights. Apex GPT can scan for code vulnerabilities and suggest inline code for Apex, Salesforce’s proprietary programming language.
According to Caplan, the company is taking an “ecosystem” approach to generative AI, “working with the best model for the best use case.”