- Nvidia, Adobe, IBM, Salesforce, and Palantir are among the companies joining voluntary White House standards on artificial intelligence (AI) development.
- The additions bring to 15 the number of companies signing onto the standards, joining Alphabet, Meta Platforms, Microsoft, and OpenAI, the maker of ChatGPT.
- The agreement includes commitments to disclose AI-generated content, share information on vulnerabilities, and submit products to external testing before release.
Chipmaker Nvidia (NVDA) headlined a group of eight companies that agreed to a White House-led set of artificial intelligence (AI) standards, committing to voluntary disclosure, safety, and security requirements for the AI tools and services they are developing.
The new additions to the standards will join Amazon (AMZN), Anthropic, Alphabet (GOOGL), Inflection, Meta Platforms (META), Microsoft (MSFT), and ChatGPT-maker OpenAI, all of which signed on to the White House AI plan in July.
The announcement comes as the White House is promoting its engagement with industry on AI development, including meetings with top executives and technology leaders, and as Congress and regulators consider what rules might be needed as AI becomes more widely used in society.
How AI Businesses Tie Into White House Standards
Among the group joining the commitment is Adobe (ADBE), which has marketed new AI tools through its Photoshop image software. Another is Stability AI, which generates AI images through its “Stable Diffusion XL” service. One of the key planks of the agreed-upon standards is that AI-generated content must be clearly labeled, such as with a watermark.
Also joining was government data mining service provider Palantir (PLTR), which has credited AI for its outperformance in recent earnings reports. Another key provision of the agreement is that information will be shared across the industry and with government agencies, academics, and organizations that manage the risks AI could pose.
Other companies focused on generative AI development joined as well, including Cohere, which develops large language models, and Scale AI, which provides data for training AI tools. Under the standards, companies developing AI systems are required to report those systems' capabilities and limitations, as well as their appropriate and inappropriate uses.
IBM (IBM) and Salesforce (CRM) also joined the agreement, both Dow Jones Industrial Average components that are developing their own AI platforms. The voluntary commitment requires companies to prioritize research into minimizing harm that AI tools can create, including addressing security challenges, rooting out harmful biases, and protecting privacy.
The agreement, which the White House said would go into effect right away, requires all the companies to perform internal and external security testing of their AI systems before release. It also has the companies focusing on safety and security, including targeting insider threats, as well as facilitating third-party discovery and reporting of AI vulnerabilities.
The White House also has promoted other steps it’s taken regarding AI safety and security, including a “Blueprint for an AI Bill of Rights” designed to protect Americans’ rights, and an upcoming policy from the Office of Management and Budget (OMB) that will set rules for how government workers can use AI.