Meta-Microsoft AI partnership brings Llama 2, with models of up to 70B parameters, the largest Llama yet
The next step in AI technology is here.
- Llama 2 will let you build your own generative AI-powered tools and experiences.
- You will be able to fine-tune and deploy the 7B, 13B, and 70B-parameter Llama 2 models.
- And if you're a developer who uses Windows, you're in for a treat.
At the Microsoft Inspire 2023 Conference, something exciting happened: Microsoft joined forces with Meta in an AI partnership centered on Llama 2.
Today, at Microsoft Inspire, Meta and Microsoft announced support for the Llama 2 family of large language models (LLMs) on Azure and Windows. Llama 2 is designed to enable developers and organizations to build generative AI-powered tools and experiences. Meta and Microsoft share a commitment to democratizing AI and its benefits and we are excited that Meta is taking an open approach with Llama 2. We offer developers choice in the types of models they build on, supporting open and frontier models and are thrilled to be Meta’s preferred partner as they release their new version of Llama 2 to commercial customers for the first time.
Microsoft
It makes sense, as the first day of the conference was all about AI. Beyond the Meta partnership, Microsoft announced plenty of other AI-related news. Azure OpenAI Service is now available in Asia, after recently launching in North America and Europe.
Bing Chat Enterprise was also announced; it's a version of Bing AI tailored for work, available now in preview. And the Microsoft Cloud Partner Program is getting an AI update: you can now use AI to develop your business plan or marketing strategy and to build a portfolio.
So when it comes to AI, Microsoft has been at the forefront right from the start. And now, the Redmond-based tech giant's partnership with Meta could be the next big thing in the AI world.
Meet Llama 2 AI – Microsoft x Meta partnership
As mentioned earlier, Llama 2 makes it possible for developers and organizations to build generative AI tools and experiences.
You will be able to fine-tune and deploy the 7B, 13B, and 70B-parameter Llama 2 models easily and more safely on Azure.
Plus, Llama 2 will be optimized to run locally on Windows. If you're a developer who works on Windows, you'll be able to use Llama 2 by targeting the DirectML execution provider through the ONNX Runtime.
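In practice, targeting DirectML with ONNX Runtime means constructing an inference session with `DmlExecutionProvider` listed ahead of the CPU fallback. Here is a minimal Python sketch of that provider-selection logic; the `pick_providers` helper, the `onnxruntime-directml` package requirement, and the `llama2.onnx` filename are all illustrative assumptions, not official artifacts:

```python
def pick_providers(available):
    """Return an ONNX Runtime provider list that prefers DirectML,
    falling back to plain CPU execution when DirectML is absent."""
    order = ["DmlExecutionProvider", "CPUExecutionProvider"]
    chosen = [p for p in order if p in available]
    return chosen or ["CPUExecutionProvider"]

# On a Windows machine with the onnxruntime-directml package installed,
# you would wire this into a session roughly like so:
#   import onnxruntime as ort
#   providers = pick_providers(ort.get_available_providers())
#   session = ort.InferenceSession("llama2.onnx", providers=providers)
# ("llama2.onnx" is a placeholder for your own exported model file.)

print(pick_providers(["DmlExecutionProvider", "CPUExecutionProvider"]))
```

Listing the CPU provider last keeps the same script usable on machines without DirectML support, since ONNX Runtime tries providers in the order given.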
Windows developers will also be able to build new experiences using Llama 2, which can be accessed via its GitHub repo. With the Windows Subsystem for Linux and a highly capable GPU, you can fine-tune LLMs to meet your specific needs right on your Windows PC.
What do you think about this partnership? Let us know in the comments section below.