The Commitments Of The Big Seven In AI


The rapid progression of AI recently moved the White House to call a meeting of the seven leading AI companies: Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI. The agenda for the knights of this round table was to encourage ethical AI practices moving forward.

The 7 Leading AI Companies

As reported by The Verge, the aforementioned AI service providers have voluntarily agreed to a range of requests intended to mitigate the fears surrounding AI development until government regulations come into effect. Though voluntary assurances may sound like kids promising their parents they’ll do their homework before playing with their toys when left alone, there is a sense of responsibility here: OpenAI has listed the voluntary commitments on its website.

The AI Commitments

The commitments rest on three major aspects: safety, security, and trust.

In commitment to safety:

1. Commit to internal and external red-teaming of models or systems in areas including misuse, societal risks, and national security concerns, such as bio, cyber, and other safety areas.

2. Work toward information sharing among companies and governments regarding trust and safety risks, dangerous or emergent capabilities, and attempts to circumvent safeguards.

In commitment to security:

3. Invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights.

4. Incent third-party discovery and reporting of issues and vulnerabilities.

In commitment to trust:

5. Develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated, including robust provenance, watermarking, or both, for AI-generated audio or visual content.

6. Publicly report model or system capabilities, limitations, and domains of appropriate and inappropriate use, including discussion of societal risks, such as effects on fairness and bias.

7. Prioritize research on societal risks posed by AI systems, including on avoiding harmful bias and discrimination, and protecting privacy.

8. Develop and deploy frontier AI systems to help address society’s greatest challenges.

A Step in the Right Direction for AI

Even if these are just voluntary promises, they are a step in the right direction for the AI industry. Since the mass adoption of AI, and especially of generative models, there have been debates about the need for governance. Recent instances of copyright infringement, AI hallucinations, and the inability to distinguish between human- and AI-generated output have overshadowed the benefits of AI. However, with the big AI service providers now prioritizing ethical development, these liabilities may be alleviated in the future.

It certainly provides a better platform for humans to work with generative AI models. Reporting model limitations and disclosing whether a given output was generated by AI should encourage people to use AI as a collaborative tool rather than have it do the work for them. You can imagine the relief for educators who could now tell whether homework was actually written by their students or by an AI.
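To make that kind of disclosure concrete, here is a minimal, purely illustrative sketch of how content provenance could work in principle: the generator attaches a signed manifest marking output as AI-generated, and a verifier later checks that the manifest and content have not been tampered with. None of this reflects any company’s announced implementation; the key, functions, and manifest fields below are hypothetical, and real provenance schemes use asymmetric signatures rather than a shared secret.

import hmac
import hashlib
import json

# Hypothetical shared secret held by the content generator; a real
# provenance scheme would use an asymmetric signing key instead.
SECRET_KEY = b"example-provenance-key"

def attach_provenance(content: bytes, generator: str) -> dict:
    # Wrap AI-generated content with a signed provenance manifest (illustrative only).
    manifest = {"generator": generator, "ai_generated": True}
    payload = json.dumps(manifest, sort_keys=True).encode() + content
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"content": content.decode(), "manifest": manifest, "signature": signature}

def verify_provenance(record: dict) -> bool:
    # Check that the manifest and content have not been altered since signing.
    payload = json.dumps(record["manifest"], sort_keys=True).encode() + record["content"].encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

record = attach_provenance(b"Essay text produced by a model.", generator="example-model")
print(verify_provenance(record))  # True: the content is flagged as AI-generated and untampered

The point of the sketch is simply that provenance is a verifiable label attached at generation time, not an after-the-fact guess about whether text "looks" machine-written.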

A new governance framework set by the leading AI providers could become an example of how to regulate this technology without dampening its progress. Having them prioritize ways to deal with bias, discrimination, and privacy protection will benefit everyone in the industry. Companies would also feel safer integrating AI models into their infrastructure given the increased investment in cybersecurity and in red-team testing.

Keeping the technology’s development transparent will allow other AI service providers and start-ups to learn from this research and follow suit when building their own AI infrastructures. They would feel compelled to adhere to the same set of standards, whether out of a sense of responsibility or through societal pressure. Either way, the industry benefits as it progresses toward bigger things.

This includes future artificial general intelligence (AGI) models, which OpenAI has a declared interest in. If AGI is to be pursued at all, itself a matter of debate among AI experts, it should be pursued with regulation and boundaries. We could all probably sleep more soundly at night knowing sentient machines are being developed with caution. It is an exciting prospect, because if done right, AGI could become the most significant milestone of technological development in history.

Regardless of what comes of this, it is great to see large players in the AI industry coming together and competing on corporate conscience for societal benefit, though it remains to be seen whether they will adhere to their voluntary promises. Governments may need to make commitment a two-way street by promising incentives and contracts to those that stay committed until formal legislation arrives, which could still be a while away. These commitments of the big seven in AI just may be what shapes the future of the AI industry.
