Microsoft’s next AI partnership wants AI to not harm people
The newest AI partnership promises safer ways to develop AI.
- The partnership will assemble a board of advisors and will plan a strategy in the following months.
- It will focus exclusively on frontier AI models, which are defined as being superior to the existing ones.
- If you are an organization that develops such AI models, you can join the partnership.
One week ago, Microsoft announced Llama 2, its AI partnership with Meta, at Microsoft Inspire 2023. Llama 2 is an open-source large language model, that you can use to build and train your own AI. This LLM is also rumored to be the first clue into achieving AGI, which is, ultimately, one of AI’s central goals.
Well, in a week from the announcement, a lot has happened. There are also rumors that OpenAI, the company behind ChatGPT, is releasing its own open-source LLM, codenamed G3PO. There isn’t any release date for it yet, but it’s going to happen in 2023 or in 2024.
And in a turn of events, it seems Microsoft has partnered up with Anthropic, Google, and Open AI to the Frontier Model Forum. The partnership is an industry body focused on ensuring the safe and responsible development of frontier AI models, according to the press release.
Today, Anthropic, Google, Microsoft and OpenAI are announcing the formation of the Frontier Model Forum, a new industry body focused on ensuring safe and responsible development of frontier AI models. The Frontier Model Forum will draw on the technical and operational expertise of its member companies to benefit the entire AI ecosystem, such as through advancing technical evaluations and benchmarks, and developing a public library of solutions to support industry best practices and standards.
Frontier Model Forum
Basically, the Frontier Model Forum wants to build AIs that are not a risk to humans. If you remember, one of the partners, Anthropic, just released Claude 2 AI, and the model is renowned for the way it safely interacts with people. So, we are going to have a lot of AIs similar to Claude 2 AI, and probably even better. Either way, it’s excellent news when it comes to this industry.
What will the Frontier Model Forum do when it comes to AI
The partnership has established a set of core goals and objectives, and it will work according to them. They are:
- Advancing AI safety research to promote responsible development of frontier models, minimize risks, and enable independent, standardized evaluations of capabilities and safety.
- Identifying best practices for the responsible development and deployment of frontier models, helping the public understand the nature, capabilities, limitations, and impact of the technology.
- Collaborating with policymakers, academics, civil society, and companies to share knowledge about trust and safety risks.
- Supporting efforts to develop applications that can help meet society’s greatest challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats.
The partnership is also open to collaboration with organizations
If you are an organization that develops frontier model AI, you can submit to join and collaborate with the Frontier Model Forum.
Expert tip:
SPONSORED
Some PC issues are hard to tackle, especially when it comes to missing or corrupted system files and repositories of your Windows.
Be sure to use a dedicated tool, such as Fortect, which will scan and replace your broken files with their fresh versions from its repository.
According to the Forum, a frontier model AI is a large-scale machine-learning model that exceeds the capabilities currently present in the most advanced existing models. And it should also be able to perform a wide variety of tasks.
To join the partnership, you, as an organization, need to meet the following criteria:
- You already develop and deploy frontier models (as defined by the Forum).
- The organization is able to demonstrate a strong commitment to frontier model safety, including through technical and institutional approaches.
- You, as an organization, are willing to contribute to advancing the Forum’s efforts including by participating in joint initiatives and supporting the development and functioning of the initiative.
This is what the Forum will do when it comes to AI
The Frontier Model Forum wants to support a safe and responsible AI development process, and it will focus on 3 key areas over 2023:
- Identifying best practices: Promote knowledge sharing and best practices with a focus on safety standards and safety practices to mitigate a wide range of potential risks.
- Advancing AI safety research: Support the AI safety ecosystem by identifying the most important open research questions on AI safety. The Forum will coordinate research to progress these efforts in areas such as adversarial robustness, mechanistic interpretability, scalable oversight, independent research access, emergent behaviors, and anomaly detection. There will be a strong focus initially on developing and sharing a public library of technical evaluations and benchmarks for frontier AI models.
- Facilitating information sharing among companies and governments: Establish trusted, secure mechanisms for sharing information among companies, governments, and relevant stakeholders regarding AI safety and risks. The Forum will follow best practices in responsible disclosure from areas such as cybersecurity.
Over the course of 2023, the Frontier Model forum will work on assembling a board, then build a strategy and establish priorities. But the organization is already looking to collaborate with as many institutions as possible, private or public. This also includes civil societies and governments, as well as other institutions that are interested in AI.
What do you think about this new partnership? Are you interested in joining? Or are you curious about frontier AI models? Let us know in the comments section below.