There has been plenty of good news in the field of AI this week. Google and Meta each released major updates: Google's Gemini models got a significant performance boost along with a price cut, while Meta's Llama 3.2 added eye-catching vision capabilities. Meanwhile, Google DeepMind's AlphaChip project has made breakthrough progress, accelerating chip design and open-sourcing a related model, injecting fresh momentum into the industry. The editor of Downcodes walks through these exciting AI developments in detail.
The AI industry has been busy over the past week, with both Google and Meta launching new versions of their AI models and attracting plenty of attention. First, Google announced an update to its Gemini series on Tuesday, launching two new production-ready models: Gemini-1.5-Pro-002 and Gemini-1.5-Flash-002.

This update improves overall model quality, especially on mathematics, long-context processing, and vision tasks. Google claims roughly a 7% gain on the MMLU-Pro benchmark and about a 20% improvement on math-related tasks. Benchmarks are of limited significance on their own, but these numbers are still encouraging.
Beyond the performance gains, Google has also sharply reduced the cost of using Gemini 1.5 Pro, with input and output token prices falling by 64% and 52% respectively. This makes Gemini considerably more cost-effective for developers.
In addition, the rate limits for Gemini 1.5 Flash and Pro have been raised after the update: Flash now supports 2,000 requests per minute and Pro supports 1,000 requests per minute. These improvements will make it easier for developers to build applications on the models.
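For developers who want to try the new models, the snippet below is a minimal sketch of calling Gemini-1.5-Pro-002 through Google's `google-generativeai` Python SDK. It assumes the package is installed and a `GOOGLE_API_KEY` environment variable is set; the prompt text is purely illustrative.

```python
# Minimal sketch: calling the new Gemini-1.5-Pro-002 model via the
# google-generativeai Python SDK (pip install google-generativeai).
# Assumes GOOGLE_API_KEY is set in the environment.
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# The -002 suffix selects the updated production model announced this week.
model = genai.GenerativeModel("gemini-1.5-pro-002")

# Illustrative prompt; math and long-context tasks are where Google claims
# the largest gains.
response = model.generate_content(
    "Summarize the key ideas of reinforcement learning in three sentences."
)
print(response.text)
```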

Meta was not idle either: on Wednesday it launched Llama 3.2, a major update to its open-weight AI models. The release includes vision-capable large language models at 11 billion and 90 billion parameters, along with lightweight text-only models at 1 billion and 3 billion parameters designed for mobile and edge devices.
Meta claims these vision models are competitive with market-leading closed-source models on image recognition and visual understanding, and early tests by AI researchers suggest the small text models also perform well on many text tasks.
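As a quick illustration of how the lightweight text models might be tried locally, here is a minimal sketch using the Hugging Face `transformers` library. The model ID `meta-llama/Llama-3.2-1B-Instruct` and the prompt are assumptions for illustration; Llama weights on Hugging Face are gated, so you must request access and accept Meta's license first.

```python
# Minimal sketch: running a lightweight Llama 3.2 text model locally with
# Hugging Face transformers (pip install transformers torch).
# The model ID is an assumption for illustration; Llama weights are gated,
# so log in with `huggingface-cli login` after access is granted.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-1B-Instruct",
)

# Illustrative prompt for a small, on-device-class model.
output = generator(
    "Write a one-sentence summary of this week's AI news.",
    max_new_tokens=60,
)
print(output[0]["generated_text"])
```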

Then on Thursday, Google DeepMind officially announced a major project: AlphaChip. Building on research first published in 2020, the project uses reinforcement learning to design chip layouts. Google says AlphaChip has generated "superhuman chip layouts" for the last three generations of its Tensor Processing Units (TPUs), cutting layout work that takes human engineers weeks or even months down to hours.
Even more notable, Google has shared a pre-trained AlphaChip checkpoint publicly on GitHub so that other chip design companies can use the technology, and companies such as MediaTek have already begun to adopt it.
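To give a flavor of what "designing chip layouts through reinforcement learning" means, the toy sketch below trains a simple REINFORCE policy to place a few connected blocks on a small grid so that total wire length shrinks. It is a deliberately tiny illustration in plain Python, not AlphaChip's actual method (which uses graph neural networks and far richer objectives), and every name and number in it is made up for the example.

```python
# Toy illustration (not AlphaChip): REINFORCE-style placement of a few
# connected blocks on a small grid, minimizing total Manhattan wire length.
import math
import random

GRID = 4                          # 4x4 grid of candidate cell positions
CELLS = GRID * GRID
MACROS = 3                        # three blocks to place
NETS = [(0, 1), (1, 2), (0, 2)]   # which blocks are wired together

# One softmax policy per block: logits over the 16 candidate cells.
logits = [[0.0] * CELLS for _ in range(MACROS)]

def softmax(xs, mask):
    # Softmax restricted to cells that are still free.
    exps = [math.exp(x) if m else 0.0 for x, m in zip(xs, mask)]
    total = sum(exps)
    return [e / total for e in exps]

def wire_length(place):
    # Sum of Manhattan distances between connected blocks.
    total = 0
    for a, b in NETS:
        ax, ay = divmod(place[a], GRID)
        bx, by = divmod(place[b], GRID)
        total += abs(ax - bx) + abs(ay - by)
    return total

lr, baseline = 0.1, 0.0
for step in range(2000):
    free = [True] * CELLS
    placement, trace = [], []
    for m in range(MACROS):
        probs = softmax(logits[m], free)
        cell = random.choices(range(CELLS), weights=probs)[0]
        placement.append(cell)
        trace.append((m, cell, probs))
        free[cell] = False                      # no two blocks share a cell
    reward = -wire_length(placement)            # shorter wiring => higher reward
    baseline += 0.05 * (reward - baseline)      # running-average baseline
    advantage = reward - baseline
    for m, cell, probs in trace:                # REINFORCE update per block
        for j in range(CELLS):
            indicator = 1.0 if j == cell else 0.0
            logits[m][j] += lr * advantage * (indicator - probs[j])

print("final placement:", placement, "wire length:", wire_length(placement))
```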
Highlights:
**Google releases new Gemini models with improved overall performance and significantly lower prices.**
**Meta launches Llama 3.2, adding vision-capable models and small text models that perform well.**
**Google's AlphaChip accelerates chip design, greatly improving efficiency, and the technology has been shared publicly.**
All in all, this week's innovations in the field of AI are exciting. These advances will push artificial intelligence into more fields and deserve continued attention. The editor of Downcodes will continue to bring you more cutting-edge technology news.