Google I/O 2024: we’ve put together all the big announcements made at the event, which put artificial intelligence front and center.
At Google I/O 2024, we got a look at the interesting new technologies the internet giant plans to introduce in the near future. Unsurprisingly, the event was almost entirely focused on artificial intelligence. Google, already very strong on the AI side with its Gemini language models, made clear that it intends to stand even more firmly against OpenAI, and, as expected, kept hardware in the background at the developer conference, where Android 15 also got a mention. So what exactly did Google announce today? The highlights of Google I/O 2024 are as follows:
-The event kicked off with an unusual performance built around MusicFX DJ, Google’s generative AI-based music creation tool, delivered live by musician Marc Rebillet.
Starting today, AI Overviews will begin rolling out to everyone in the US, with more countries coming soon. #GoogleIO pic.twitter.com/KlMmmEKqI2
— Google (@Google) May 14, 2024
-Google’s generative AI search feature, previously known as “Search Generative Experience,” is now called “AI Overviews.” It rolls out to everyone in the US this week, with other countries to follow.
Ask Photos, a new feature coming to @GooglePhotos, makes it easier to search across your photos and videos with the help of Gemini models. It goes beyond simple search to understand context and answer more complex questions. #GoogleIO pic.twitter.com/OsYXZLo5S1
— Google (@Google) May 14, 2024
-A generative AI-based search system called “Ask Photos” is coming to Google Photos. It can find photos by understanding the questions you ask, and can assemble albums on request.
-Gemini 1.5 Pro is now available to all developers globally, and is also being released specifically for Workspace Labs.
-The Gemini language model can now build a custom voice assistant from written data/information you provide, able to talk about any subject you want. You communicate with this assistant by speaking, and the system sounds quite natural.
-Google aims to make people’s lives easier with “AI Agents,” artificial intelligence assistants that will be able to gather information by accessing multiple Google services.
Today, we’re excited to introduce a new Gemini model: 1.5 Flash. ⚡
It’s a lighter weight model compared to 1.5 Pro and optimized for tasks where low latency and cost matter – like chat applications, extracting data from long documents and more. #GoogleIO pic.twitter.com/WP26QVUHC7
— Google DeepMind (@GoogleDeepMind) May 14, 2024
-The new Gemini 1.5 Flash large language model was introduced. Said to be faster and more efficient, it is a smaller-capacity model than 1.5 Pro.
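Since both 1.5 Pro and 1.5 Flash are being opened up to developers, here is a minimal sketch of what a call to Gemini 1.5 Flash looks like over Google’s public Generative Language REST API. The endpoint and payload shape follow the v1beta `generateContent` method; the prompt text and the key handling are illustrative assumptions, and no request is actually sent here.

```python
import json
import os

# Model name as announced at I/O 2024; the REST endpoint below is the
# public Generative Language API's generateContent method (v1beta).
MODEL = "gemini-1.5-flash"
URL = f"https://generativelanguage.googleapis.com/v1beta/models/{MODEL}:generateContent"

# Minimal request body: a single user turn with one text part.
# The prompt is an illustrative placeholder, not from Google's docs.
payload = {
    "contents": [
        {"parts": [{"text": "Summarize the key announcements from Google I/O 2024."}]}
    ]
}


def build_request(api_key: str) -> tuple[str, str]:
    """Return the full URL (API key as query parameter) and the JSON body."""
    return f"{URL}?key={api_key}", json.dumps(payload)


if __name__ == "__main__":
    # Dry run only: show the request that would be sent with a real key.
    url, body = build_request(os.environ.get("GOOGLE_API_KEY", "YOUR_KEY"))
    print(url.split("?")[0])
```

Sending the body with any HTTP client (plus a valid API key) returns a JSON response whose generated text sits under `candidates`; Google also ships official SDKs that wrap this same endpoint.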
We’re sharing Project Astra: our new project focused on building a future AI assistant that can be truly helpful in everyday life. 🤝
Watch it in action, with two parts – each was captured in a single take, in real time. ↓ #GoogleIO pic.twitter.com/x40OOVODdv
— Google DeepMind (@GoogleDeepMind) May 14, 2024
-Project Astra was announced. Astra, a generative AI-based digital assistant, can identify almost anything it sees through the phone’s camera and explain what it is. The system answers questions very fluently and can even pick up markings drawn on the phone screen or on a table and respond accordingly.
We’re introducing Imagen 3: our highest quality text-to-image generation model yet. 🎨
It produces visuals with incredible detail, realistic lighting and fewer distracting artifacts.
From quick sketches to very high-res imagery, here’s a look at what it can create. 👀 #GoogleIO pic.twitter.com/XMrQYGeSiO
— Google DeepMind (@GoogleDeepMind) May 14, 2024
-The Imagen 3 text-to-image model was announced. Generating visuals from written prompts, it delivers very realistic results in the examples shown.
Together with @YouTube, we’ve been building Music AI Sandbox, a suite of AI tools to transform how music can be created. 🎵
To help us design and test them, we’ve been working closely with musicians, songwriters and producers. ↓ #GoogleIO pic.twitter.com/pMLa3aCveu
— Google DeepMind (@GoogleDeepMind) May 14, 2024
-Music AI Sandbox offers musicians new AI tools that can produce new songs/music and transform existing music into other styles in seconds. Artists can create music simply by writing out what they want.
🎥Introducing Veo, our new generative video model from @GoogleDeepMind.
With just a text, image or video prompt, you can create and edit HQ videos over 60 seconds in different visual styles. Join the waitlist in Labs to try it out in our new experimental tool, VideoFX #GoogleIO pic.twitter.com/RnMsWu9s1q
— Google (@Google) May 14, 2024
-An OpenAI Sora rival, “Veo,” was announced. The system outputs 1080p video and, like Sora, creates videos from text. It can generate clips longer than a minute and produces realistic results, according to the examples Google showed.
-Google announced Trillium, its new high-performance cloud TPU (Tensor Processing Unit), and Axion, an Arm-based cloud-native processor.
Updating….