Google today opened its developer conference, the aptly named Google IO, with a strong nod to the ongoing COVID-19 pandemic from Alphabet CEO Sundar Pichai.
“In some places, people are beginning to live their lives again as cases decline. Other places, like Brazil and my home country of India, are going through their most difficult moments. We are thinking of you and hoping for better days ahead,” Pichai said, speaking outdoors at the Chocolate Factory's Mountain View campus.
Last year, the coronavirus outbreak prompted Google to cancel its IO show entirely.
Pichai detailed new collaboration features for Google Workspace and numerous advances in AI software and hardware, including a promising conversational technology called Language Model for Dialogue Applications (LaMDA).
Google uses the term Smart Canvas to refer to a dozen enhancements added to Workspace that aim to improve collaboration and connect distinct apps like Docs, Sheets, Slides, and Meet.
“With Smart Canvas, we're combining the content and connections that transform collaboration into a richer, better experience,” explained Javier Soltero, VP and general manager of Google Workspace. “For more than a decade, we've been pushing documents away from being just digital pieces of paper and toward collaborative, linked content inspired by the web. Smart Canvas is our next big step.”
As an example, Soltero described a scenario where a team is collaborating on a shared Doc and the assisted-writing feature suggests replacing the word “Chairman” with “Chairperson” to avoid a gendered term.
A related effort discussed toward the end of the opening keynote was Google's work to revise its digital image-processing algorithms to better capture diverse skin tones in the Android Camera app and elsewhere.
Other enhancements to Smart Canvas include: @-mentions of team members in Docs and (soon) Sheets, which surface additional information such as job title, location, and contact details; table templates in Docs; the ability to present Docs, Sheets, and Slides content in Meet calls; and a pageless format in Docs for better viewing across a range of screen sizes, among others.
Better AI for conversation
Pichai reviewed Google's advances in AI over the past 22 years, focusing on progress in language translation and image recognition. He described how advances in natural language processing, such as the Transformer neural network architecture in 2017 and BERT in 2019, have made computers more capable of understanding natural language queries.
“Today I am excited to share our latest breakthrough in natural language understanding: LaMDA, a language model for dialogue applications,” he explained. “And it's open domain, which means it's designed to converse on any topic.”
Pichai demonstrated LaMDA's conversational skills by narrating a dialogue about Pluto between a human and LaMDA, with the AI model responding as if it were the dwarf planet. Missing from the sample dialogue were the nonsensical statements or misunderstandings that anyone who has conversed with an AI will inevitably encounter, though LaMDA is still capable of confusion.
“It’s really great to see how LaMDA can continue a conversation about any topic,” Pichai said. “It’s surprising how sensible and interesting the conversation is. Research is still early, so not everything is going well. Sometimes it can give nonsensical responses.”
Pichai said additional work is being done to ensure LaMDA, which builds on research described in a paper published in 2020, meets Google's standards for fairness, accuracy, safety, and privacy. Clearly, Google is keen to avoid a Microsoft Tay-grade fiasco when it gets around to integrating LaMDA into its own services, such as Search and Assistant.
Pichai also unveiled updated AI hardware: Google's Tensor Processing Unit (TPU) v4. More than twice as fast as TPU v3, TPU v4 chips can be connected into supercomputers called pods, each consisting of 4,096 chips and capable of delivering an exaflop, or 10^18 floating-point operations per second.
“Think of it this way: if 10 million people were on their laptops right now, then all of those laptops put together would almost match the computing power of one exaflop,” Pichai said.
“It's the fastest system we've ever deployed at Google, and a historic milestone for us. Previously, to get an exaflop, you needed to build a custom supercomputer, but we already have many of these deployed today.”
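Pichai's laptop analogy can be sanity-checked with some back-of-the-envelope arithmetic. The per-laptop and per-chip throughput figures below are inferences from the numbers quoted in the keynote, not figures Google stated:

```python
# Back-of-the-envelope check of the exaflop figures quoted above.
EXAFLOP = 10**18             # floating-point operations per second
LAPTOPS = 10_000_000         # "10 million people ... on their laptops"
TPU_V4_CHIPS_PER_POD = 4096  # chips in one TPU v4 pod

# Sustained throughput each laptop would need for the analogy to hold
per_laptop = EXAFLOP / LAPTOPS               # 1e11 FLOPS = 100 GFLOPS
# Implied throughput per TPU v4 chip in a one-exaflop pod
per_chip = EXAFLOP / TPU_V4_CHIPS_PER_POD    # ~2.44e14 FLOPS ~= 244 TFLOPS

print(f"per laptop: {per_laptop / 1e9:.0f} GFLOPS")
print(f"per chip:   {per_chip / 1e12:.0f} TFLOPS")
```

Roughly 100 GFLOPS sustained is plausible for a modern laptop CPU, so the analogy holds up; each TPU v4 chip would then be worth a couple of thousand laptops.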
Pichai said there will soon be dozens of TPU v4 pods in Google's data centers, and they will be available to Google Cloud customers later this year.
Google is also opening a Quantum AI campus in Santa Barbara, California, incorporating the company's first quantum data center, quantum hardware research labs, and a quantum processor chip fabrication facility.
Before handing the stage over to more esoteric, developer-specific presentations, Pichai also previewed a novel 3D video conferencing system called Project Starline.
“Using high-resolution cameras and custom-built depth sensors, we capture your shape and appearance from multiple perspectives, and then fuse them together to create an extremely detailed, real-time 3D model,” explained Pichai, who noted that the company has developed novel compression and streaming technology to reduce the massive amounts of data involved, send it over the network, and render it on a novel light-field display that makes it look like you're talking to a real person.
Pichai said Google plans to expand access to Project Starline to healthcare and media partners. ®