The Keyword | Google

https://www.blog.google/

Discover all the latest about our products, technology, and Google culture on our official blog.

Smart strategies for growing your app business with ads

Every year Google I/O showcases the delight that technology can bring to our lives. Mobile apps have extended that delight to billions of people around the world, paving the way for app developers to unlock new business opportunities. Today we're sharing a few ways to help scale your business using Google's growth and monetization solutions.

Find the right app users

Smart user acquisition starts with reaching the people who will be most engaged with your app and help you generate the most revenue. With Google App campaigns, you can choose a bidding option that best supports your growth goals. Target CPA bidding, for example, makes it easy for you to find new users who install your app and take an in-app action.

To grow profitably, it's also important to consider how much revenue you generate relative to the cost of driving those installs and actions. That's why you'll soon be able to bid on a target return on ad spend (tROAS), so you can automatically pay more for users likely to spend more, and pay less for users likely to spend less. If you're looking for users who will spend twice as much as they cost to acquire, you can set that multiplier for your tROAS bid, and it will find you the right users accordingly. tROAS will be available next month for Google App campaigns on iOS and Android globally. Learn more.

Bidding is a great lever to reach the customers you want. The next step is to win and keep these customers' attention. That's why we're giving you new ways to develop and manage your creatives, making it easier for you to show your customers more relevant ads in more places.

YouTube: you automatically qualify to promote your app in two new YouTube placements when you have at least one landscape image and one video. The first placement is on the YouTube homepage feed, and the second is on in-stream video.

Ad groups: starting later this month, you can set up multiple ad groups in the same campaign and tailor the assets in each ad group around a different "theme" or message for different customers. Learn more.

Agency partnerships: we're teaming up with eight trusted agencies, including Vidmob, Consumer Acquisition, Bamboo, Apptamin, Webpals, Creadits, Kaizen Ad and Kuaizi, to help you manage creatives end-to-end, from design to reporting.

Monetize more easily

The second piece to building a profitable apps business is creating a sustainable revenue stream. In other words, you need to keep users engaged with your app while still monetizing it effectively, which can be tough to balance. That's why AdMob is investing in automated solutions to help you earn more from your app while delivering a great user experience.

Last year we announced a new monetization model called Open Bidding that helps you maximize the value of every impression automatically. Since then, dozens of developers have joined the beta and are seeing meaningful revenue lift, including Korea-based game developer Sticky Hands.

"We're really excited about Open Bidding. In one month, revenue and ARPDAU have grown by 14% and 15% respectively, and we expect them to keep climbing as more demand sources come online. What's even better is that we're spending almost no time managing it." - Minu Kim, CEO of Sticky Hands

In addition to the revenue lift, Open Bidding offers simplicity and time savings compared to traditional mediation: fewer SDKs means less time spent on integrations and more stability for your app. Stay tuned, as we'll be expanding the program to all publishers later this year.

In the meantime, here are a few more ways AdMob can help you grow your overall app revenue and protect user experience more easily:

Image search is a robust new search tool that helps identify and remove bad ads across every size, campaign, and rotation, using just a screenshot of the ad. Learn more.

Maximum ad content rating can prevent inappropriate ads from being shown to young users. Learn more.

User metrics, such as daily active users and average session time, will be available soon in a new dashboard card, so you can quickly see how changes to your monetization strategy (e.g. adding a rewarded ad) impact key indicators of user engagement. These insights can help you optimize the lifetime value of your users across all your revenue sources: ads, in-app purchases, and commerce. Learn more.

To learn more about how these solutions will help you save time while growing your business, join our ads keynote at 10:30am PDT, Wednesday May 8th at Stage 1 of Google I/O, or watch the livestream.

Also, stay tuned for more app advertising news at Google Marketing Live, kicking off next week at 9am PDT, Tuesday May 14th. Sign up for the livestream here.
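To make the tROAS multiplier mentioned above concrete, here is a minimal sketch in Python with made-up numbers. It only illustrates the arithmetic of a return-on-ad-spend target (predicted revenue divided by the target ratio caps the acquisition cost); it is not how Google Ads actually sets bids.

```python
# Toy illustration of a target-ROAS multiplier: with a target of 2.0, a user predicted
# to spend $10 is worth bidding at most $5 to acquire. All numbers are hypothetical.
target_roas = 2.0  # want $2 of revenue for every $1 of ad spend

def max_bid(predicted_revenue: float, target_roas: float) -> float:
    """Highest acquisition cost that still meets the target return on ad spend."""
    return predicted_revenue / target_roas

for revenue in (1.0, 4.0, 10.0):
    print(f"predicted revenue ${revenue:.2f} -> bid up to ${max_bid(revenue, target_roas):.2f}")
```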


How DIVA makes Google Assistant more accessible

My 21-year-old brother Giovanni loves to listen to music and movies. But because he was born with congenital cataracts, Down syndrome and West syndrome, he is non-verbal. This means he relies on our parents and friends to start or stop music or a movie.

Over the years, Giovanni has used everything from DVDs to tablets to YouTube to Chromecast to fill his entertainment needs. But as new voice-driven technologies started to emerge, they also came with a different set of challenges that required him to be able to use his voice or a touchscreen. That's when I decided to find a way to let my brother control access to his music and movies on voice-driven devices without any help. It was a way for me to give him some independence and autonomy.

Working alongside my colleagues in the Milan Google office, I set up Project DIVA, which stands for DIVersely Assisted. The goal was to create a way to let people like Giovanni trigger commands to the Google Assistant without using their voice. We looked at many different scenarios and methodologies that people could use to trigger commands, like pressing a big button with their chin or their foot, or with a bite. For several months we brainstormed different approaches and presented them at different accessibility and tech events to get feedback.

We had a bunch of ideas on paper that looked promising. But in order to turn those ideas into something real, we took part in an Alphabet-wide accessibility innovation challenge and built a prototype, which went on to win the competition. We identified that many assistive buttons available on the market come with a 3.5mm jack, which is the kind many people have on their wired headphones. For our prototype, we created a box to connect those buttons and convert the signal coming from the button into a command sent to the Google Assistant.

To move from a prototype to reality, we started working with the team behind Google Assistant Connect, and today we are announcing DIVA at Google I/O 2019.

The real test, however, was giving this to Giovanni to try out. By touching the button with his hand, the signal is converted into a command sent to the Assistant. Now he can listen to music on the same devices and services our family and all his friends use, and his smile tells the best story.

Getting this to work for Giovanni was just the start for Project DIVA. We started with single-purpose buttons, but this could be extended to more flexible and configurable scenarios. Now we are investigating attaching RFID tags to objects and associating a command with each tag. That way, a person might have a cartoon puppet trigger a cartoon on the TV, or a physical CD trigger the music on their speaker.

Learn more about the idea behind the DIVA project at our publication site, and learn how to build your own device at our technical site.
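As a rough illustration of the tag-to-command idea described in this post, here is a minimal Python sketch. The tag IDs, command strings and the send_to_assistant helper are hypothetical placeholders, not the actual DIVA hardware or the Google Assistant Connect API.

```python
# Hypothetical sketch: map physical triggers (assistive buttons or RFID tags)
# to Assistant commands. All names and IDs below are made up for illustration.
TRIGGER_TO_COMMAND = {
    "tag:cartoon-puppet": "play cartoons on the living room TV",
    "tag:favorite-cd": "play my favorite album on the kitchen speaker",
    "button:jack-1": "play music",
}

def send_to_assistant(command: str) -> None:
    """Placeholder for whatever transport a real device would use to issue the command."""
    print(f"[assistant] {command}")

def handle_trigger(trigger_id: str) -> None:
    """Convert a physical trigger event into an Assistant command."""
    command = TRIGGER_TO_COMMAND.get(trigger_id)
    if command is None:
        print(f"unknown trigger: {trigger_id}")
        return
    send_to_assistant(command)

if __name__ == "__main__":
    # Simulate a button press followed by an RFID tag scan.
    handle_trigger("button:jack-1")
    handle_trigger("tag:cartoon-puppet")
```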


Raising the bar on transparency, choice and control in digital advertising

Advertising has made possible open access to quality information and communication on the web—it's changed the way people learn, play and earn, and it's made the internet open for everyone.

But the ad-supported internet is at risk if digital advertising practices don't evolve to reflect people's changing expectations around how data is collected and used. Our experience shows that people prefer ads that are personalized to their needs and interests—but only if those ads offer transparency, choice and control. However, the digital advertising ecosystem can be complex and opaque, and many people don't feel they have enough visibility into, or control over, their web experience.

New protections and controls in Chrome

As you may have seen, today Chrome announced its plans to improve cookie controls. To better protect user privacy and choice on the web, Chrome intends to make it easier for users to block or clear cookies used in a third-party context, with minimal disruption to cookies used in a first-party context. While Chrome has long enabled users to block cookies, these changes will let users continue to allow their online banking site, for example, to remember their login preferences—a function that first-party cookies enable.

Chrome also announced that it will more aggressively restrict fingerprinting across the web. When a user opts out of third-party tracking, that choice is not an invitation for companies to work around this preference using methods like fingerprinting, which is an opaque tracking technique. Google doesn't use fingerprinting for ads personalization because it doesn't allow reasonable user control and transparency. Nor do we let others bring fingerprinting data into our advertising products.

The changes in Chrome will empower users to make informed decisions about how to control the use of their data for personalized advertising. They will also ensure users are able to continue accessing a broad range of quality ad-supported content, with confidence that their privacy and choices will be respected.

A new level of ads transparency

As the Chrome announcements demonstrate, transparency, choice and control form the foundation of Google's commitment to users—and advertising is no different. With tools like My Activity, Ad Settings, Why this Ad and Mute this Ad, we make it easy for people to see how Google tailors ads for them, switch off individual factors we use to tailor ads, stop seeing ads from a specific company or simply opt out of personalized ads entirely.

But all of this is not enough. We believe you should also know what data is used for ads personalization and by whom. That's why today we're committing to a new level of ads transparency. We want to give users more visibility into the data used to personalize ads and the companies involved in the process.

As a first step, for the ads that Google shows on our own properties and those of our publishing partners, we will disclose new information through an open-source browser extension that will work across different browsers. The new information will include the names of other companies that we know were involved in the process that resulted in an ad—for example, ad tech companies that acted as intermediaries between the advertiser and publisher, and companies with ad trackers present in an ad. The browser extension will also surface the factors used to tailor an ad to a user, which we provide today.

The extension will display information for each ad we show a user, and will present an aggregated snapshot for all the ads Google has shown a user recently. In the future, we will look for additional ways to make it even easier for people to access this information.

In addition, we want to offer a simple means for others in the advertising industry to surface this kind of information. To that end, we will build APIs that enable other advertising companies, should they choose, to disclose this same type of information to users through the extension. We expect to begin rolling out both the browser extension and APIs in the coming months.

While offering more information privately to individual users is important, we also believe that making this type of information available publicly will help increase transparency at the ecosystem level. That's why we plan to build tools that allow researchers and others to view and analyze aggregated and anonymized data from Google and other providers that elect to use these new APIs.

As we introduce these enhanced ads transparency measures, we're eager to receive feedback from users, partners and other stakeholders so that, together, we can identify industry-wide best practices around data transparency and ads personalization, including ways that people can take action to shape their experiences.

All of the changes announced today represent an important step in ensuring that the ad-supported web provides people with access to high-quality content, while protecting their privacy. We will continue to explore opportunities to evolve our tools and practices in ways that enhance user transparency, choice and control.


At I/O '19: Building a more helpful Google for everyone

Today, we welcomed thousands of people to I/O, our annual developer conference. It's one of my favorite events of the year because it gives us a chance to show how we're bringing Google's mission to life through new technological breakthroughs and products.

Our mission to make information universally accessible and useful hasn't changed over the past 21 years, but our approach has evolved over time. Google is no longer a company that just helps you find answers. Today, Google products also help you get stuff done, whether it's finding the right words with Smart Compose in Gmail, or the fastest way home with Maps.

Simply put, our vision is to build a more helpful Google for everyone, no matter who you are, where you live, or what you're hoping to accomplish. When we say helpful, we mean giving you the tools to increase your knowledge, success, health, and happiness. I'm excited to share some of the products and features we announced today that are bringing us closer to that goal.

Helping you get better answers to your questions

People turn to Google to ask billions of questions every day. But there's still more we can do to help you find the information you need. Today, we announced that we'll bring the popular Full Coverage feature from Google News to Search. Using machine learning, we'll identify different points of a story—from a timeline of events to the key people involved—and surface a breadth of content including articles, tweets and even podcasts.

Sometimes the best way to understand new information is to see it. New features in Google Search and Google Lens use the camera, computer vision and augmented reality (AR) to provide visual answers to visual questions. And now we're bringing AR directly into Search. If you're searching for new shoes online, you can see shoes up close from different angles and even see how they go with your current wardrobe. You can also use Google Lens to get more information about what you're seeing in the real world. So if you're at a restaurant and point your camera at the menu, Google Lens will highlight which dishes are popular and show you pictures and reviews from people who have been there before. In Google Go, a search app for first-time smartphone users, Google Lens will read out loud the words you see, helping the millions of adults around the world who struggle to read everyday things like street signs or ATM instructions.

Google Lens: Urmila's Story

Helping to make your day easier

Last year at I/O we introduced our Duplex technology, which can make a restaurant reservation through the Google Assistant by placing a phone call on your behalf. Now, we're expanding Duplex beyond voice to help you get things done on the web. To start, we're focusing on two specific tasks: booking rental cars and movie tickets. Using "Duplex on the Web," the Assistant will automatically enter information, navigate a booking flow, and complete a purchase on your behalf. And with massive advances in deep learning, it's now possible to bring much more accurate speech and natural language understanding to mobile devices—enabling the Google Assistant to work faster for you.

We continue to believe that the biggest breakthroughs happen at the intersection of AI, software and hardware, and today we announced two Made by Google products: the new Pixel 3a (and 3a XL), and the Google Nest Hub Max. With Pixel 3a, we're giving people the same features they love on more affordable hardware. Google Nest Hub Max brings the helpfulness of the Assistant to any room in your house, and much more.

Building for everyone

Building a more helpful Google is important, but it's equally important to us that we are doing this for everyone. From our earliest days, Search has worked the same, whether you're a professor at Stanford or a student in rural Indonesia. We extend this approach to developing technology responsibly, securely, and in a way that benefits all.

This is especially important in the development of AI. Through a new research approach called TCAV—or testing with concept activation vectors—we're working to address bias in machine learning and make models more interpretable. For example, TCAV could reveal if a model trained to detect images of "doctors" mistakenly assumed that being male was an important characteristic of being a doctor because there were more images of male doctors in the training data. We've open-sourced TCAV so everyone can make their AI systems fairer and more interpretable, and we'll be releasing more tools and open datasets soon.

Another way we're building responsibly for everyone is by ensuring that our products are safe and private. We're making a set of privacy improvements so that people have clear choices around their data. Google Account, which provides a single view of your privacy control settings, will now be easily accessible in more products with one tap. Incognito mode is coming to Maps, which means you can search and navigate without linking this activity with your Google account, and new auto-delete controls let you choose how long to save your data. We're also making several security improvements in Android Q, and we're building the protection of a security key right into the phone for two-step verification.

As we look ahead, we're challenging the notion that products need more data to be more helpful. A new technique called federated learning allows us to train AI models and make products smarter without raw data ever leaving your device. With federated learning, Gboard can learn new words like "zoodles" or "Targaryen" after thousands of people start using them, without us knowing what you're typing. In the future, AI advancements will provide even more ways to make products more helpful with less data.

Building for everyone also means ensuring that everyone can access and enjoy our products, including people with disabilities. Today we introduced several products with new tools and accessibility features, including Live Caption, which can caption a conversation in a video, a podcast or one that's happening in your home. In the future, Live Relay and Euphonia will help people who have trouble communicating verbally, whether because of a speech disorder or hearing loss.

Project Euphonia: Helping everyone be better understood

Developing products for people with disabilities often leads to advances that improve products for all of our users. This is exactly what we mean when we say we want to build a more helpful Google for everyone. We also want to empower other organizations who are using technology to improve people's lives. Today, we recognized the winners of the Google AI Impact Challenge, 20 organizations using AI to solve the world's biggest problems—from creating better air quality monitoring systems to speeding up emergency responses.

Our vision to build a more helpful Google for everyone can't be realized without our amazing global developer community. Together, we're working to give everyone the tools to increase their knowledge, success, health and happiness. There's a lot happening, so make sure to keep up with all the I/O-related news.
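Since TCAV comes up in the keynote above, here is a simplified, self-contained sketch of the underlying idea in NumPy. It is not the open-sourced tcav library's API: the "activations", the toy doctor classifier, and the use of a difference of means in place of a fitted linear classifier are all illustrative assumptions.

```python
# Conceptual TCAV sketch: measure how sensitive a classifier's output is to moving
# its internal activations along a "concept" direction (the CAV).
import numpy as np

rng = np.random.default_rng(0)

# Toy intermediate-layer activations for concept examples ("male") vs. random examples.
concept_acts = rng.normal(loc=[2.0, 0.0], scale=0.5, size=(100, 2))
random_acts = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))

# 1. Concept Activation Vector: a direction in activation space pointing toward the concept.
#    (TCAV proper fits a linear classifier; a difference of means stands in here.)
cav = concept_acts.mean(axis=0) - random_acts.mean(axis=0)
cav /= np.linalg.norm(cav)

# 2. Toy "doctor" classifier: logit(a) = w.a + 0.3 * a_0**2, so its gradient varies per example.
w_doctor = np.array([1.5, 0.3])

def doctor_logit_grad(acts: np.ndarray) -> np.ndarray:
    grad = np.tile(w_doctor, (len(acts), 1)).astype(float)
    grad[:, 0] += 0.6 * acts[:, 0]
    return grad

# 3. TCAV score: fraction of examples whose "doctor" logit increases when activations
#    move in the concept direction (positive directional derivative along the CAV).
doctor_acts = rng.normal(loc=[1.0, 0.5], scale=0.7, size=(200, 2))
directional_derivs = doctor_logit_grad(doctor_acts) @ cav
tcav_score = float(np.mean(directional_derivs > 0))
print(f"TCAV score for concept 'male' on class 'doctor': {tcav_score:.2f}")
# A score near 1.0 (or 0.0) suggests the concept systematically pushes predictions up
# (or down); near 0.5 suggests little sensitivity to the concept.
```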


Here are the grantees of the Google AI Impact Challenge

As part of Google's AI for Social Good program, we launched the Google AI Impact Challenge, based on our strong belief that emerging technologies will help us address big social, humanitarian and environmental problems. We were blown away by the number of thoughtful proposals we received: 2,602 applications from 119 countries, nearly two thirds of the world's countries.

Forty percent of the applications came from organizations with no previous experience with artificial intelligence, which is still a developing concept in the social impact field. Our job, as we thoroughly vetted the applications, was to choose the best projects based on feasibility, potential for impact, scalability and the responsible use of AI.

Today, at I/O, we are announcing 20 organizations that will share $25 million in grants from Google.org, credit and consulting from Google Cloud, mentoring from Google AI experts and the opportunity to join a customized accelerator program from Google Developers Launchpad. The selected projects address issues in the areas of health, economic opportunity and empowerment, environmental protection and conservation, education, misinformation, and crisis and emergency response. Here's the full list of grantees:

American University of Beirut (Lebanon): Applying machine learning to weather and agricultural data to improve irrigation for resource-strapped farmers in Africa and the Middle East.

Colegio Mayor de Nuestra Señora del Rosario (Colombia): Using satellite imagery to detect illegal mines, enabling communities and the government to protect people and natural resources.

Crisis Text Line, Inc. (USA): Using natural language processing to optimize assignment of texters in crisis to counselors, reducing wait times and maintaining effective communication.

Eastern Health (Australia): Analyzing clinical records from ambulances to uncover trends and potential points of intervention to inform policy and public health responses around suicide.

Fondation MSF (France): Detecting patterns in antimicrobial imagery to help medical staff in low-resource areas prescribe the right antibiotics for bacterial infections.

Full Fact (UK): Developing trend monitoring and clustering tools to aid fact checkers' analysis, so they can help contextualize the news and enable informed decisions.

Gringgo Indonesia Foundation (Indonesia): Building an image recognition tool to improve plastic recycling rates, reduce ocean plastic pollution and strengthen waste management in under-resourced communities.

Hand Talk (Brazil): Using AI to translate Portuguese into Brazilian Sign Language through a digital avatar, enabling digital communication for Brazilians who are deaf and hard of hearing.

HURIDOCS (Switzerland): Using natural language processing and ML to extract and connect relevant information in case-related documents, allowing human rights lawyers to effectively research and defend their cases.

Makerere University (Uganda): Tracking and predicting air pollution patterns via low-cost sensors in Kampala, Uganda, improving air quality forecasting and intervention.

New York University (USA): Partnering with the New York City Fire Department's analytics team to optimize response to its yearly 1.7 million emergencies, accounting for factors like weather, traffic and location.

Nexleaf Analytics (USA): Building data models to predict vaccine viability throughout the cold vaccine supply chain and ensure effective delivery.

The Pennsylvania State University (USA): Using deep learning tools to better predict locations and times at risk for landslides, creating a warning system to minimize the impact of natural disasters.

Quill.org (USA): Using deep learning to provide low-income students with immediate feedback on their writing, enabling students to revise their work and quickly improve their skills.

Rainforest Connection (USA): Using deep learning for bioacoustic monitoring and commonplace mobile technology to track rainforest health and detect threats.

Skilllab BV (Netherlands): Helping refugees translate their skills to the European labor market and recommending relevant career pathways to explore.

TalkingPoints (USA): Using AI to enable two-way translated parent/teacher engagement and coaching when language represents a barrier to communication.

The Trevor Project (USA): Using natural language processing and sentiment analysis to determine an LGBTQ youth's suicide risk level to better tailor services for individuals seeking help.

Wadhwani AI (India): Using image recognition to track and analyze pest control efforts, enabling timely and localized intervention to stabilize crop production and reduce pesticide usage.

WattTime Corporation (USA): Using image processing algorithms and satellite networks to replace on-site power plant emissions monitors with open-source monitoring platforms.

Next week, the grantees will converge in San Francisco for the kickoff of the Google AI Impact Challenge Accelerator, the six-month program run by Google Developers Launchpad. We look forward to working with these organizations, and to seeing the impact of their projects on such a wide variety of issues around the world.


Pixel 3a: the helpful (and more affordable) phone by Google

These days, you expect a lot from a smartphone. You want a premium camera that can take vivid, share-worthy photos wherever you go. You need a tool that connects you to the world with all your favorite apps and also helps out during the day. And you want a phone with a battery that's reliable for long stretches, while it stays secure and up to date with the latest software. You also don't want it to break the bank. The new Pixel 3a and Pixel 3a XL are all of those things and more, for half the price of premium phones.

Pixel 3a is designed to fit nicely in your hand, and includes an OLED display for crisp images and bright colors. It comes in three colors—Just Black, Clearly White and Purple-ish—and two sizes, with prices in the U.S. starting at $399 for the 5.6-inch display and $479 for the 6-inch model.

High-end features: camera, Google Assistant, battery life and security

Google Pixel 3a delivers what you'd expect from a premium device. Starting with the award-winning camera, Pixel 3a lets you take stunning photos using Google's HDR+ technology, with features like Portrait Mode, Super Res Zoom and Night Sight to capture clear shots in low light. Google Photos is built in, so you can save all your high-quality photos and videos with free, unlimited storage. And it comes with an 18-watt charger, so you get up to seven hours of battery life on a 15-minute charge and up to 30 hours on a full charge.1

Squeeze Pixel 3a for the Google Assistant to send texts, get directions and set reminders—simply using your voice. Plus, the Google Assistant's Call Screen feature (available in English in the U.S. and Canada) gives you information about the caller before you pick up, and shields you from those annoying robocalls. We'll make sure your Pixel 3a is protected against new threats by providing three years of security and operating system updates. In a recent industry report, Pixel was rated the highest for built-in security among all smartphones. It also comes with the custom-built Titan M chip to help protect your most sensitive data.

New features at a more accessible price

Pixel makes it easy to use Google apps like YouTube, Google Photos and Gmail. And you'll get access to new features first. Pixel 3a and the entire Pixel portfolio will get a preview of AR in Google Maps—the next time you're getting around town, you can see walking directions overlaid on the world itself, rather than looking at a blue dot on a map. This helps you know precisely where you are, and exactly which way to start walking (in areas covered on Street View where there's a good data connection and good lighting).

Time lapse is coming to Google Pixel 3a, so you can capture an entire sunset in just a few seconds of video—great for posting on social media or messaging to your friends.

Buy it from more places, use it on more networks

Pixel 3a and Pixel 3 are now available through more carriers, including Verizon, T-Mobile, Sprint, US Cellular, Spectrum Mobile (Charter), C Spire and Google Fi, as well as being supported on AT&T. If you're new to Pixel, you can transfer photos, music and media quickly with the included Quick Switch Adapter. If you need a little extra help, 24/7 support from Google is just a tap away in the tips and support link in the settings menu. You can even share your screen for guided assistance.

Look for Pixel 3a in the Google Store in countries where Pixel is sold beginning today, and through our partners beginning tomorrow.
1 Approximate battery life based on a mix of talk, data, standby, mobile hotspot and use of other features, with always on display off. An active display or data usage will decrease battery life. Charging rates are based upon use of the included charger. Charging time performance statistics are approximate. Actual results may vary.


Privacy that works for everyone

Whether it's delivering search results in the correct language or recommending the quickest route home, data can make Google products more helpful to you. And you should be able to understand and manage your data—and make privacy choices that are right for you. That's why easy-to-use privacy features and controls have always been built into our products. At I/O, we announced a number of additional privacy and security tools across our products and platforms.

Making it easier to control your data

One-tap access to your Google Account from all our major products

Privacy controls should be easy to find and use. A few years ago, we introduced Google Account to provide a comprehensive view of the information you've shared and saved with Google, and one place to access your privacy and security settings. Simple on/off controls let you decide which activity you want to save to your account to make Google products more helpful. You can also choose which activities or categories of information you want to delete.

As the number of Google products has grown, we're making it even easier to find these controls. Today you'll see your Google Account profile picture appear in the top right corner across products like Gmail, Drive, Contacts and Pay. To quickly access your privacy controls, just tap on your picture and follow the link to your Google Account. The prominent placement of your profile picture also makes it easier to know when you're signed into your Google Account. We're bringing this one-tap access to more products this month, including Search, Maps, YouTube, Chrome, the Assistant and News.

Easily manage your data in Search, Maps and the Assistant

Last year, we made it easier for you to make decisions about your data directly within Search. Without leaving Search, you can review and delete your recent Search activity, get quick access to the most relevant privacy controls in your Google Account, and learn more about how Search works with your data. Now we're making it easier to manage your data in Maps, the Assistant and YouTube (coming soon). For example, you'll be able to review and delete your location activity data directly in Google Maps, and then quickly get back to your directions.

Auto-delete now available for Web & App Activity, coming soon to Location History

Last week we announced a new control that lets you choose a time limit for the amount of time your Location History and Web & App Activity data will be saved—3 or 18 months. Any data older than that will be automatically and continuously deleted from your account if you choose. This new control is available today for Web & App Activity and coming next month to Location History.

Bringing Incognito mode to Google apps

Since launching more than a decade ago, Incognito mode in Chrome has given you the choice to browse the internet without your activity being saved to your browser or device. As our phones become the primary way we access the internet, we thought it was important to build Incognito mode for our most popular apps. It's available in YouTube and coming soon to Maps and Search. Tap from your profile picture to easily turn it on or off. When you turn on Incognito mode in Maps, your activity—like the places you search or get directions to—won't be saved to your Google Account.

Building stronger privacy controls into our platforms

We also made announcements today about privacy across our platforms and products. Android Q is bringing privacy to the forefront of Settings and creating more transparency and control around location. Chrome announced plans to more aggressively restrict fingerprinting across the web and improve cookie controls. Finally, we announced plans to give users more visibility into the data used to personalize ads and the companies involved in the process, for the ads that Google shows on our own properties and those of our publishing partners.

Doing more for users with less data

Federated learning makes products more helpful while keeping data on your device

Advances in machine learning are making our privacy protections stronger. One example is federated learning, a new approach to machine learning. It allows developers to train AI models and make products smarter—for you and everyone else—without your data ever leaving your device. These new AI techniques allow us to do more with less data. Gboard, Google's keyboard, now uses federated learning to improve predictive typing as well as emoji prediction across tens of millions of devices. Previously, Gboard would learn to suggest new words for you, like "zoodles" or "Targaryen", only if you typed them several times. Now, with federated learning, Gboard can also learn new words after thousands of people start using them, without Google ever seeing what you're typing.

We've also invested in differential privacy protections, which enable us to train machine learning models without memorizing information that could reveal specific details about a user. We published early research on this topic in 2014, and since then we've used it in Chrome, in Gmail with Smart Compose, and in Google Maps to show you how busy a restaurant is. And with the release of the TensorFlow Privacy open-source project, ML developers can now more easily use differential privacy technology.

The strongest security across our products and platforms

Your data is not private if it's not secure. We've always invested in systems to keep our users safe—from our Safe Browsing protection that protects nearly 4 billion devices every day to blocking more than 100 million spam and phishing attempts in Gmail every day. Security keys provide the strongest form of 2-Step Verification against phishing attacks, and now they are built into phones running Android 7.0 and above, making this protection available on over one billion compatible devices.

And beginning this summer, anyone with a Nest Account will have the option to migrate their Nest Account to a Google Account, which comes with the added benefits of tools and automatic security protections, like 2-Step Verification, notifications that proactively alert you about unusual account activity and access to Security Checkup.

We strongly believe that privacy and security are for everyone. We'll continue to ensure our products are safe, invest in technologies that allow us to do more for users with less data, and empower everyone with clear, meaningful choices around their data.
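To make the federated learning idea above more concrete, here is a minimal federated-averaging sketch with a toy linear model and synthetic "on-device" data. It is purely illustrative of the pattern (local training on each device, only model updates leave the device, the server averages them); it is not Gboard's training code or any Google API.

```python
# Minimal federated averaging (FedAvg) sketch: each simulated device trains the shared
# model on its own private data and returns only updated weights; the server averages them.
import numpy as np

rng = np.random.default_rng(42)
true_w = np.array([2.0, -1.0])  # the pattern the fleet of devices collectively learns

def make_device_data(n=32):
    """Synthetic private data that never leaves the (simulated) device."""
    x = rng.normal(size=(n, 2))
    y = x @ true_w + rng.normal(scale=0.1, size=n)
    return x, y

def local_update(w, x, y, lr=0.1, steps=5):
    """A few steps of gradient descent on this device's data; only weights are returned."""
    w = w.copy()
    for _ in range(steps):
        grad = 2 * x.T @ (x @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

devices = [make_device_data() for _ in range(10)]
global_w = np.zeros(2)
for _ in range(20):
    local_ws = [local_update(global_w, x, y) for x, y in devices]
    global_w = np.mean(local_ws, axis=0)       # server sees weights, never the raw data

print("learned weights:", np.round(global_w, 2), "target:", true_w)
```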


More help is on the way with Nest Hub Max

Today we're bringing the Home products under the Nest brand. It's a natural next step, since our products work together to help you stay informed, feel more comfortable and safe, keep an eye on home when you're away, and connect to friends and family. Now we're taking our first step on a journey to create a more helpful home.

We're introducing Nest Hub Max, the first product from our newly formed team. Nest Hub Max has all the things you love about Nest Hub (formerly Google Home Hub). It has a digital photo frame powered by Google Photos and the home view dashboard, which gives you full control of your connected devices. With our new display, you'll get a bigger 10-inch HD screen and a smart camera that helps you keep an eye on your home and keep in touch with family and friends. Nest Hub Max is specifically designed for those shared places in the home where your family and friends gather.

The new kitchen TV

The big screen makes Nest Hub Max the kitchen TV you've always wanted. With a subscription, Hub Max can stream your favorite live shows and sports on YouTube TV. Tell it what you want to watch, or if you need help deciding, just ask the Assistant. But unlike your kitchen TV, it can also teach you how to cook, play your music, and show you who's at the front door. And you're getting full stereo sound, with a powerful rear-facing woofer.

Smart camera

Nest Hub Max has a Nest Cam to help you keep an eye on things at home: you can turn it on when you're away and check on things right from the Nest app on your phone. Just like with your Nest Cam, it's easy to see your event history, enable Home/Away Assist and get a notification if the camera detects motion or doesn't recognize someone in your home.

The camera on Hub Max also helps you stay connected to your family and friends, and video calling is easy with Google Duo. The camera has a wide-angle lens, and it automatically adjusts to keep you centered in the frame. You can chat with loved ones on any iOS or Android device, or on a web browser. You can also use Duo to leave video messages for other members of your household.

And now when the volume's up, instead of yelling to turn it down or pause the game, you can use Quick Gestures. Just look at the device and raise your hand, and Nest Hub Max will pause your media, thanks to the camera's on-device gesture recognition technology.

Help just for you

Hub Max is designed to be used by multiple people in your home, and to provide everyone with the help they need in a personalized way. With Nest Hub, we offered you the option to enable Voice Match, so the Assistant can recognize your voice and respond specifically to you. Today with Nest Hub Max, we're extending your options for personalized help with a feature called Face Match. For each person in your family who chooses to turn it on, the Assistant guides you through the process of creating a face model, which is encrypted and stored on the device. Face Match's facial recognition is processed locally with on-device machine learning, so the camera data never leaves the device.

Whenever you walk in front of the camera, Nest Hub Max recognizes you and shows just your information, not anyone else's. So in the morning, when you walk into the kitchen, the Assistant knows to greet you with your calendar, commuting details, the weather, and other information you need to start your day. And when you get home from work, Hub Max welcomes you home with reminders and messages that have been waiting for you. The Assistant offers personalized recommendations for music and TV shows, and you can even see who left you a video message.

Per our privacy commitments, there's a green light on the front of Hub Max that indicates when the camera is streaming, and nothing is streamed or recorded unless you explicitly enable it. In addition, you have multiple controls to disable camera features like Nest Cam, including a hardware switch that lets you physically disable the microphone and camera.

When, where, and how much

Later this summer, Nest Hub Max will be available in the U.S. for $229 on the Google Store and at Best Buy, Target, Home Depot and more. It'll also be available in the UK for £219 and in Australia for AUS$349.

We're also bringing Nest Hub to 12 new countries—Canada, Denmark, France, Germany, India, Italy, Japan, the Netherlands, Norway, Singapore, Spain and Sweden. And Nest Hub will now be available in the US for $129. Finally, we have updated pricing for our speakers: starting today, Google Home is $99 and Google Home Max is $299.

We're excited to make the helpful home more real for more people.


Google Nest: welcome to the helpful home

Since Nest joined Google's hardware team last year, we've been working to make the smart home a little less complicated and, well, more helpful. It's a home where products work together to help you feel comfortable and safe, keep an eye on things when you're away, and connect you to friends and family. Today, we're committing to that goal by bringing the Home products under the Nest brand. Our first step as Google Nest is to go beyond the idea of a "smart home," and to focus instead on creating a "helpful home."

As part of that, we're adding to our lineup of Nest devices, privacy commitments, accounts and platforms.

Our commitment to privacy in the home

To give you a better understanding of how our connected home devices and services will work in your home, we've outlined a set of privacy commitments that apply when these devices and services are used with Google Accounts:

We'll explain our sensors and how they work. The technical specifications for our connected home devices will list all audio, video, and environmental and activity sensors—whether enabled or not. And you can find the types of data these sensors collect, and how that data is used in various features, in our dedicated help center page.

We'll explain how your video footage, audio recordings, and home environment sensor readings are used to offer helpful features and services, and our commitment for how we'll keep this data separate from advertising and ad personalization.

We'll explain how you can control and manage your data, such as providing you with the ability to access, review, and delete audio and video stored with your Google Account at any time.

Our goal is simple: earn and keep your trust by clearly explaining how our products work and how we'll uphold our commitment to respect your privacy. To learn more, please check out our commitment to privacy in the home.

One secure account

Another step we're taking to help keep your Nest devices secure is making Google Accounts available to anyone using existing Nest devices and services. Google has a long history of providing billions of people with industry-leading automatic security protections when accessing products like Gmail, Photos, Maps and Drive with their Google Account.

Beginning this summer, you'll have the option to migrate your Nest Account to a Google Account, which comes with the added benefits of tools and automatic security protections:

Suspicious activity detection sends you notifications whenever we detect unusual or potentially dangerous activity, such as suspicious sign-ins to your account.

Security Checkup provides personalized guidance to help you secure your account and manage your online security.

2-Step Verification strengthens your account security by adding an advanced second verification step whenever you sign in, including additional features like a prompt from a trusted device or the use of a physical security key.

If you already have a Google Account for Gmail and other Google products, you can migrate your Nest Account to that account. New Nest users will automatically start with a Google Account at the same time that existing users are invited to migrate to Google Accounts. You'll be able to use your Google Account across both the Nest app and the Google Home app, and with all of Google's other products. Having a single account will also enable our devices and services to work better together—for example, Nest Hub shows the live video from Nest Hello, so you can see who's at the front door, without any additional setup.

One home developer platform

And finally, we're unifying our efforts around third-party connected home devices under a single platform for developers to build features and apps for a more helpful home. To accomplish this, we're winding down the Works with Nest developer program on August 31, 2019, and delivering a single consumer and developer experience through the Works with Google Assistant program—which works with more than 30,000 devices from over 3,500 home automation brands. For more details about all the changes, check out the What's Happening with Google Nest FAQ.

As technology becomes a bigger part of our lives—especially when we're at home—privacy and security are more important than ever. We recognize that we're a guest in your home, and we respect and appreciate that invitation—and these updates are among the many ways we hope to continue to earn your trust.


Sharing what's new in Android Q

This year, Android is reaching version 10 and operating on over 2.5 billion active devices. A lot has changed since version 1.0, back when smartphones were just an early idea. Now, they're an integral tool in our lives—helping us stay in touch, organize our days or find a restaurant in a new place.

Looking ahead, we're continuing to focus on working with partners to shape the future of mobile and make smartphones even more helpful. As people carry their phones constantly and trust them with lots of personal information, we want to make sure they're always in control of their data and how it's shared. And as people spend more time on their devices, building tools to help them find balance with technology continues to be our priority. That's why we're focusing on three key areas for our next release, Android Q: innovation, security and privacy, and digital wellbeing.

New mobile experiences

Together with over 180 device makers, Android has been at the forefront of new mobile technologies. Many of them—like the first OLED displays, predictive typing, and high-density, large screens with edge-to-edge glass—have come to Android first. This year, new industry trends like foldable phone displays and 5G are pushing the boundaries of what smartphones can do. Android Q is designed to support the potential of foldable devices—from multi-tasking to adapting to different screen dimensions as you unfold the phone. And as the first operating system to support 5G, Android Q offers app developers tools to build for faster connectivity, enhancing experiences like gaming and augmented reality.

See how Asphalt 9 adapts screen dimensions as you unfold the phone, a feature Android Q was built to support.

We're also seeing many firsts in software driven by on-device machine learning. One of these features is Live Caption. For 466 million deaf and hard of hearing people around the world, captions are more than a convenience—they make content more accessible. We worked closely with the Deaf community to develop a feature that would improve access to digital media. With a single tap, Live Caption will automatically caption media that's playing audio on your phone. Live Caption works with videos, podcasts and audio messages, across any app—even stuff you record yourself. As soon as speech is detected, captions will appear, without ever needing Wi-Fi or cell phone data, and without any audio or captions leaving your phone.

A video that gives background on the history of captioning: in 2009, Google added automatic captions to videos on YouTube, taking a step towards making videos universally accessible. With Live Caption, we're bringing these captions to media on phones.

On-device machine learning also powers Smart Reply, which is now built into the notification system in Android, allowing any messaging app to suggest replies in notifications. Smart Reply will now also intelligently predict your next action—for example, if someone sends you an address, you can just tap to open that address in Maps.

Security and privacy as a central focus

Over the years, Android has built out many industry-first security and privacy protections, like file-based encryption, SSL by default and work profile. Android has the most widely deployed security and anti-malware service of any operating system today thanks to Google Play Protect, which scans over 50 billion apps every day. We're doing even more in Android Q, with almost 50 new features and changes focused on security and privacy. For example, we created a dedicated Privacy section under Settings, where you'll find important controls in one place. Under Settings, you'll also find a new Location section that gives you more transparency and granular control over the location data you share with apps. You can now choose to share location data with apps only while they're in use. Plus, you'll receive reminders when an app has your location in the background, so you can decide whether or not to continue sharing. Android Q also provides protections for other sensitive device information, like serial numbers.

Finally, we're introducing a way for you to get the latest security and privacy updates, faster. With Android Q, we'll update important OS components in the background, similar to the way we update apps. This means that you can get the latest security fixes, privacy enhancements and consistency improvements as soon as they're available, without having to reboot your phone.

Helping you find balance

Since creating our set of Digital Wellbeing tools last year, we've heard that they've helped you take better control of your phone usage. In fact, app timers helped people stick to their goals over 90 percent of the time, and people who use Wind Down had a 27 percent drop in nightly phone usage.

This year, we're going even further with new features like Focus mode, which is designed to help you focus without distraction. You can select the apps you find distracting—such as email or the news—and silence them until you come out of Focus mode. And to help children and families find a better balance with technology, we're making Family Link part of every device that has Digital Wellbeing (starting with Android Q), plus adding top-requested features like bonus time and the ability to set app-specific time limits.

Available in Beta today

Android Q brings many more new features to your smartphone, from new gesture-based navigation to Dark Theme (you asked, we listened!) to streaming media to hearing aids using Bluetooth LE. You can find some of these features today in Android Q Beta, and thanks to Project Treble and our partners' commitment to enabling faster platform updates, Beta is available for 21 devices from 13 brands, including all Pixel phones.


Bringing you the next-generation Google Assistant

For the past three years, the Google Assistant has been helping people around the world get things done. The Assistant is now on over one billion devices, available in over 30 languages across 80 countries, and works with over 30,000 unique connected devices for the home from more than 3,500 brands globally. We’ve been working to make your Assistant the fastest, most natural way to get things done, and today at Google I/O we’re sharing our vision for the future.The next generation AssistantTo power the Google Assistant, we rely on the full computing power of our data centers to support speech transcription and language understanding models. We challenged ourselves to re-invent these models, making them light enough to run on a phone. Today, we’ve reached a new milestone. Building upon advancements in recurrent neural networks, we developed completely new speech recognition and language understanding models, bringing 100GB of models in the cloud down to less than half a gigabyte. With these new models, the AI that powers the Assistant can now run locally on your phone. This breakthrough enabled us to create a next generation Assistant that processes speech on-device at nearly zero latency, with transcription that happens in real-time, even when you have no network connection.Running on-device, the next generation Assistant can process and understand your requests as you make them, and deliver the answers up to 10 times faster. You can multitask across apps—so creating a calendar invite, finding and sharing a photo with your friends, or dictating an email is faster than ever before. And with Continued Conversation, you can make several requests in a row without having to say “Hey Google” each time. The next generation Assistant is coming to new Pixel phones later this year, and we can’t wait for you to try it out. Running on-device, the next generation Assistant can process and understand your requests as you make themRunning on-device, the next generation Assistant can process and understand your requests as you make them, and deliver the answers up to 10 times faster.Bringing Duplex to the webLast year, we showed you how the Assistant can book restaurant reservations over the phone using Duplex technology. Since then, we’ve brought this feature to the Assistant on Android and iOS devices in the U.S., and are hearing positive feedback from both the people who’ve used it and local businesses.Today we’re extending Duplex to the web, previewing how the Assistant can also help you complete a task online. Often when you book things online, you have to navigate a number of pages, pinching and zooming to fill out all the forms. With the Assistant powered by Duplex on the web, you can complete these tasks much faster since it fills out complex forms for you.Just ask the Assistant, “Book a car with national for my next trip,” and it will figure out the rest. The Assistant will navigate the site and input your information, like trip details saved in your Gmail or payment information saved in Chrome. Duplex on the web will be available later this year in English in the U.S. and U.K. on Android phones with the Assistant for rental car bookings and movie tickets.Book a car with National for an upcoming tripWith the Assistant powered by Duplex on the web, you can complete online tasks much faster.A more personal AssistantFor a digital assistant to be helpful, it needs to understand the people, places and events that are important to you. 
In the coming months, the Assistant will be able to better understand references to all of these through Personal References. You’ll be able to ask for things more naturally like, “What’s the weather like at mom’s house this weekend?” or, “Remind me to order flowers a week before my sister’s birthday.” You always have control over your personal information, and can add, edit or remove details from the “You” tab in Assistant settings at any time.As the Assistant understands you better, it can also offer more useful suggestions. Later this summer on Smart Displays, a new feature called “Picks for you” will provide personalized suggestions starting with recipes, events and podcasts. So if you’ve searched for Mediterranean recipes in the past, the Assistant may show you Mediterranean dishes when you ask for dinner recommendations. The Assistant also takes contextual cues, like the time of day, into account when you’re asking for help, giving you breakfast recipes in the morning and dinner at night.“Picks for you” will provide personal suggestions starting with recipes, events and podcasts.Introducing driving modeIn the car, the Assistant offers a hands-free way to get things done while you’re on the road. Earlier this year we brought the Assistant to navigation in Google Maps, and in the next few weeks, you’ll be able to get help with the Assistant using your voice when you’re driving with Waze.Today we’re previewing the next evolution of our mobile driving experience with the Assistant’s new driving mode. We want to make sure drivers are able to do everything they need with just voice, so we’ve designed a voice-forward dashboard that brings your most relevant activities—like navigation, messaging, calling and media—front and center. It includes suggestions tailored to you, so if you have a dinner reservation on your calendar, you’ll see directions to the restaurant. Or if you started a podcast at home, you can resume right where you left off from your car. If a call comes in, the Assistant will tell you who’s calling and ask if you want to answer, so you can pick up or decline with just your voice. Assistant’s driving mode will launch automatically when your phone is connected to your car’s bluetooth or just say, “Hey Google, let’s drive,” to get started. Driving mode will be available this summer on Android phones with the Google Assistant.Preview of the next evolution of our mobile driving experience with the Assistant’s new driving modeThe Assistant’s new driving mode features a voice-forward dashboard that brings your most relevant activities—like navigation, messaging, calling and media—front and center.Streamline your drive with remote vehicle controlsWe’re also making it easier to use the Assistant to control your car remotely, so you can adjust your car’s temperature before you leave the house, check your fuel level or make sure your doors are locked. Now the Assistant can do these things with just one or two commands—for example, “Hey Google, turn on the car A/C to 70 degrees.” You can also incorporate these vehicle controls into your morning routine to kickstart your commute. This new experience will be available in the coming months to existing car models that work with Hyundai’s “Blue Link” and Mercedes-Benz’s “Mercedes me connect.”Just say “stop” to turn off your timer or alarmSometimes, you want help from your Assistant without having to say “Hey Google” every time. 
Starting today, you can turn off a timer or alarm by saying, “stop.” This feature runs completely on-device and is activated by the word “stop” when an alarm or timer is going off. This has been one of our top feature requests, and is available on Google Home speakers and all Smart Displays in English-speaking countries globally.

With faster responses using new on-device processing, a better understanding of you and your world, and more help in the car, the Assistant is continuing to get better at helping you get things done. And today, we announced two new devices where you get help from the Assistant: Google Nest Hub Max and our new line of Pixel phones.


Easier phone calls without voice or hearing

Last year, I read a social media post from a young woman in Israel. She shared a story about her partner, who is deaf, struggling to fix the internet connection at their home. The internet service provider’s tech support had no way to communicate with him via text, email or chat, even though they knew he was deaf. She wrote about how important it was for him to feel independent and be empowered.

This got me thinking: How can we help people make and receive phone calls without having to speak or hear? This led to the creation of our research project, Live Relay.

Live Relay uses on-device speech recognition and text-to-speech conversion to allow the phone to listen and speak on the user’s behalf while they type. By offering instant responses and predictive writing suggestions, Smart Reply and Smart Compose help make typing fast enough to hold a synchronous phone call. Live Relay runs entirely on the device, keeping calls private. And because Live Relay interacts with the other side via a regular phone call (no data required), the other side can even be a landline.

Of course, Live Relay would be helpful to anyone who can’t speak or hear during a call, and it may be particularly helpful to deaf and hard-of-hearing users, complementing existing solutions. In the U.S., for example, there are relay and real-time text (RTT) services available for the deaf and hard of hearing. These offer advantages in some situations, and our goal isn’t to replace those systems. Rather, we mean to complement them with Live Relay as an additional option for the contexts where it can help most, like handling an incoming call, or when the user prefers a fully automated system for privacy considerations.

We’re even more excited for Live Relay in the long term because we believe it can help all of our users. How many times have you gotten an important call but been unable to step out and chat? With Live Relay, you would be able to take that call anywhere, anytime, with the option to type instead of talk. We are also exploring the integration of real-time translation capability, so that you could potentially call anyone in the world and communicate regardless of language barriers. This is the power of designing for accessibility first.

Live Relay is still in the research phase, but we look forward to the day it can give our users more and better ways to communicate—especially those who may be underserved by the options available today. Follow @googleaccess for continued updates, and contact the Disability Support team (g.co/disabilitysupport) with any feedback.
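Live Relay is still a research project and its models aren’t public, but the basic loop described above (transcribe the incoming audio on the device, let the user type a reply, then speak that reply back into the call) can be sketched with off-the-shelf tools. The sketch below is only an illustration of that pattern under stated assumptions, not Google’s implementation: it assumes the open-source SpeechRecognition (with PocketSphinx for offline recognition) and pyttsx3 Python packages, and it uses the microphone as a stand-in for the phone call’s audio stream.

import speech_recognition as sr   # third-party package "SpeechRecognition"
import pyttsx3                    # offline text-to-speech engine

recognizer = sr.Recognizer()
tts = pyttsx3.init()

def transcribe_caller(source):
    # Capture one utterance and transcribe it locally with PocketSphinx,
    # so no audio leaves the device.
    audio = recognizer.listen(source)
    try:
        return recognizer.recognize_sphinx(audio)
    except sr.UnknownValueError:
        return "[could not understand the caller]"

def speak_for_user(reply):
    # Read the user's typed reply aloud in place of their voice.
    tts.say(reply)
    tts.runAndWait()

with sr.Microphone() as call_audio:   # stand-in for real call audio
    while True:
        print("Caller said:", transcribe_caller(call_audio))
        reply = input("Type your reply (leave blank to hang up): ")
        if not reply:
            break
        speak_for_user(reply)

In a real relay feature the transcription and synthesized speech would be wired into the telephony stack rather than the microphone and speakers, and suggestions like Smart Reply would speed up the typing step; the loop structure, though, stays the same.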


How AI can improve products for people with impaired speech

Most aspects of life involve communicating with others—and being understood by those people as well. Many of us take this understanding for granted, but you can imagine the extreme difficulty and frustration you’d feel if people couldn’t easily understand the way you talk or express yourself. That’s the reality for millions of people living with speech impairments caused by neurologic conditions such as stroke, ALS, multiple sclerosis, traumatic brain injuries and Parkinson’s.

To help solve this problem, the Project Euphonia team—part of our AI for Social Good program—is using AI to improve computers’ ability to understand diverse speech patterns, such as impaired speech. We’ve partnered with the non-profit organizations ALS Therapy Development Institute (ALS TDI) and ALS Residence Initiative (ALSRI) to record the voices of people who have ALS, a neurodegenerative condition that can result in the inability to speak and move. We collaborated closely with these groups to learn about the communication needs of people with ALS, and worked toward optimizing AI-based algorithms so that mobile phones and computers can more reliably transcribe words spoken by people with these kinds of speech difficulties. To learn more about how our partnership with ALS TDI started, read this article from Senior Director of Clinical Operations Maeve McNally and ALS TDI Chief Scientific Officer Fernando Vieira.

Example of phrases that we ask participants to read

To do this, Google software turns the recorded voice samples into a spectrogram, or a visual representation of the sound. The computer then uses common transcribed spectrograms to “train” the system to better recognize this less common type of speech. Our AI algorithms currently aim to accommodate individuals who speak English and have impairments typically associated with ALS, but we believe that our research can be applied to larger groups of people and to different speech impairments.

In addition to improving speech recognition, we are also training personalized AI algorithms to detect sounds or gestures, and then take actions such as generating spoken commands to Google Home or sending text messages. This may be particularly helpful to people who are severely disabled and cannot speak.

The video below features Dimitri Kanevsky, a speech researcher at Google who learned English after he became deaf as a young child in Russia. Dimitri is using Live Transcribe with a customized model trained uniquely to recognize his voice. The video also features collaborators who have ALS, like Steve Saling—diagnosed with ALS 13 years ago—who use non-speech sounds to trigger smart home devices and facial gestures to cheer during a sports game.

Project Euphonia: Helping everyone be better understood

We’re excited to see where this can take us, and we need your help. These improvements to speech recognition are only possible if we have many speech samples to train the system. If you have slurred or hard-to-understand speech, fill out this short form to volunteer and record a set of phrases. Anyone can also donate to or volunteer with our partners, ALS TDI and the ALS Residence Initiative. The more speech samples our system hears, the more potential we have to make progress and apply these tools to better support everyone, no matter how they communicate.
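Project Euphonia’s training pipeline isn’t shared in this post, but the spectrogram step it describes is a standard speech-processing technique, and a small sketch makes it concrete. The example below is only an illustration under stated assumptions, not Google’s code: it assumes the open-source librosa, numpy and matplotlib Python packages and a local recording named sample.wav, and it converts that audio into the kind of visual representation a recognizer can be trained on.

import librosa
import librosa.display
import matplotlib.pyplot as plt
import numpy as np

# Load one recorded phrase; 16 kHz is a common sampling rate for speech models.
audio, sample_rate = librosa.load("sample.wav", sr=16000)

# Turn the waveform into a mel spectrogram: time on one axis,
# frequency bands on the other, loudness shown as intensity.
mel = librosa.feature.melspectrogram(y=audio, sr=sample_rate, n_mels=80)
mel_db = librosa.power_to_db(mel, ref=np.max)

# Plot it; a speech recognizer would consume arrays like mel_db as input features.
librosa.display.specshow(mel_db, sr=sample_rate, x_axis="time", y_axis="mel")
plt.colorbar(format="%+2.0f dB")
plt.title("Mel spectrogram of one speech sample")
plt.tight_layout()
plt.show()

Training then amounts to showing a model many spectrogram-and-transcript pairs; recordings from people with impaired speech simply add examples the model would otherwise rarely see.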


Helpful new visual features in Search and Lens

Sometimes, the easiest way to wrap your head around new information is to see it. Today at I/O, we announced features in Google Search and Google Lens that use the camera, computer vision and augmented reality (AR) to overlay information and content onto your physical surroundings, to help you get things done throughout your day.

AR in Google Search

With new AR features in Search rolling out later this month, you can view and interact with 3D objects right from Search and place them directly into your own space, giving you a sense of scale and detail. For example, it’s one thing to read that a great white shark can be 18 feet long. It’s another to see it up close in relation to the things around you. So when you search for select animals, you’ll get an option right in the Knowledge Panel to view them in 3D and AR.

Bring the great white shark from Search to your own surroundings.

We’re also working with partners like NASA, New Balance, Samsung, Target, Visible Body, Volvo, Wayfair and more to surface their own content in Search. So whether you’re studying human anatomy in school or shopping for a pair of sneakers, you’ll be able to interact with 3D models and put them into the real world, right from Search.

Search for “muscle flexion” and see an animated model from Visible Body.

New features in Google Lens

People have already asked Google Lens more than a billion questions about things they see. Lens taps into machine learning (ML), computer vision and tens of billions of facts in the Knowledge Graph to answer these questions. Now, we’re evolving Lens to provide more visual answers to visual questions.

Say you’re at a restaurant, figuring out what to order. Lens can automatically highlight which dishes are popular, right on the physical menu. When you tap on a dish, you can see what it actually looks like and what people are saying about it, thanks to photos and reviews from Google Maps.

Google Lens helps you decide what to order

To pull this off, Lens first has to identify all the dishes on the menu, looking for things like the font, style, size and color to differentiate dishes from descriptions. Next, it matches the dish names with the relevant photos and reviews for that restaurant in Google Maps.

Lens can be particularly helpful when you’re in an unfamiliar place and you don’t know the language. Now, you can point your camera at text and Lens will automatically detect the language and overlay the translation right on top of the original words, in more than 100 languages.

Google Lens translates the text and puts it right on top of the original words

We’re also working on other ways to connect helpful digital information to things in the physical world. For example, beginning next month at the de Young Museum in San Francisco, you can use Lens to see hidden stories about the paintings, directly from the museum’s curators. Or if you see a dish you’d like to cook in an upcoming issue of Bon Appetit magazine, you’ll be able to point your camera at a recipe and have the page come to life and show you exactly how to make it.

See a recipe in Bon Appetit come to life with Google Lens

Bringing Lens to Google Go

More than 800 million adults worldwide struggle to read things like bus schedules or bank forms. So we asked ourselves: “What if we used the camera to help people who struggle with reading?” When you point your camera at text, Lens can now read it out loud to you. It highlights the words as they are spoken, so you can follow along and understand the full context of what you see.
You can also tap on a specific word to search for it and learn its definition. This feature is launching first in Google Go, our Search app for first-time smartphone users. Lens in Google Go is just over 100KB and works on phones that cost less than $50.

All these features in Google Search and Google Lens provide visual information to help you explore the world and get things done throughout your day by putting information and answers where they are most helpful—right on the world in front of you.
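Lens’s read-aloud feature relies on Google’s own on-device models, but the underlying flow (recognize the text in a photo, then synthesize speech from it) can be approximated with common open-source tools. The sketch below is just that approximation under stated assumptions, not how Lens is built: it assumes the pytesseract, Pillow and pyttsx3 Python packages, a local Tesseract OCR install, and a photo named sign.jpg.

import pytesseract        # Python wrapper around the Tesseract OCR engine
import pyttsx3            # offline text-to-speech
from PIL import Image

# Step 1: extract whatever text appears in the photo.
photo = Image.open("sign.jpg")
text = pytesseract.image_to_string(photo)
print("Recognized text:")
print(text)

# Step 2: read the recognized text out loud.
engine = pyttsx3.init()
engine.setProperty("rate", 150)   # slightly slower speech for clarity
engine.say(text)
engine.runAndWait()

A production feature would add the pieces this sketch skips, such as highlighting each word as it is spoken and detecting the language before choosing a voice.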




The evolution of Family Link parental controls

It’s been two years since we built Family Link to help parents introduce their kids to technology. And while the devices we use every day open the door for families to explore, learn and play together online, they can also bring a new set of worries for parents. Over the past two years, we’ve helped families across the globe set digital ground rules with the Family Link app: a tool that offers parents a way to create and supervise Google Accounts for their kids, manage the content they see online and limit the amount of time they spend on their devices.

Available on every Android device

Today at Google I/O, we announced we’ll be making Family Link part of every Android device, starting with Android Q. This means that Family Link will be accessible from device settings, making setup even smoother for families. Look for it under the setting “Digital Wellbeing and parental controls” in Android Q devices rolling out later this summer.

App-specific time limits and bonus screen time

Since not all screen time is created equal, parents will soon be able to set app-specific time limits to help kids make better choices about how they’re spending time on their device. And while parents love that they can set a bedtime or daily screen time limit, sometimes kids just need a few more minutes to finish up what they’re doing on their devices. Soon, parents will be able to give kids bonus screen time directly from their own device.

A foundation for healthy digital habits

Since 2017, we’ve heard your feedback loud and clear: 67 percent of parents are worried about the amount of time their kids are spending on devices. In addition to today’s updates, we’ve been focused on making sure the time your family spends on technology is the best it can possibly be. Here are a few ways we’ve done that:

While Family Link was originally designed for kids under 13, we heard from parents that the app was still useful as their kids became teenagers. Last year, we rolled out the ability for parents around the world to use Family Link to supervise their teen’s existing Google Account.

Beyond mobile phones, we added better Chromebook support so parents and children can use Family Link across different Google platforms.

When apps come with a stamp of approval from a teacher, parents can feel more at ease knowing their children are engaging with healthy, educational content online. That’s why we have teacher recommendations: a collection of educational Google Play apps recommended by teachers that are a good fit for children of specific ages.

Families can learn, play and imagine together with the Assistant on Google Home, other smart speakers and eligible phones, with over 50 games, activities and stories designed for families with kids.

Be sure to check out the latest updates, and if you want to share your ideas with us, just open the Family Link app, click the menu in the top left corner and tap “Help and feedback.”


From puzzles to poster-making: 2019’s Google Play Award winners

To kick off this year’s Google I/O, we hosted our fourth annual Google Play Award ceremony to recognize the most innovative developers behind the top apps and games on Google Play over the past year. These apps and games faced stiff competition across nine categories, including new additions like Most Inventive, Best Living Room Experience and Most Beautiful Game. We’re sharing the winners that rose to the top for providing the best experiences for fans, making an impact on their communities and raising the bar for quality content on Google Play.

Standout Well-Being App
Apps empowering people to live the best version of their lives, while demonstrating responsible design and engagement strategies.
Winner: Woebot: Your Self-Care Expert by Woebot Labs

Best Accessibility Experience
Apps and games enabling device interaction in an innovative way that serves people with disabilities or special needs.
Winner: Envision AI by Envision Technologies BV

Best Social Impact
Apps and games that create a positive impact in communities around the world (focusing on health, education, crisis response, refugees and literacy).
Winner: Wisdo by Wisdo LTD.

Most Beautiful Game
Games that exemplify artistry or unique visual effects, whether through creative imagery or by utilizing advanced graphics API features.
Winner: SHADOWGUN LEGENDS by MADFINGER Games

Best Living Room Experience
Apps that create, enhance or enable a great living room experience that brings people together.
Winner: Neverthink: Handpicked videos by Neverthink

Most Inventive
Apps and games that display a groundbreaking new use case, such as utilizing new technologies, catering to a unique audience or demonstrating an innovative application of mobile technology for users.
Winner: Tick Tock: A Tale of Two by Other Tales Interactive

Standout Build for Billions Experience
Apps and games with optimized performance, localization and culturalization for emerging markets.
Winner: Canva: Graphic Design & Logo, Flyer, Poster maker by Canva

Best Breakthrough App
New apps with excellent overall design, user experience, engagement, retention and strong growth.
Winner: SLOWLY by Slowly Communications Ltd.

Best Breakthrough Game
New games with excellent overall design, user experience, engagement, retention and strong growth.
Winner: MARVEL Strike Force by FoxNext Games

To check out this year’s winners, head over to play.google.com/gpa2019.


Hit the road with Android Auto’s new look

When you’re in the driver’s seat, there’s a lot to think about: from getting directions to your next destination to staying connected on the go. That’s why we created Android Auto—to help make your driving experience easier and safer. Since we started five years ago, Android Auto has expanded to support more than 500 car models from 50 different brands, and we aren’t pumping the brakes there!

Today, we’re introducing a new design that will roll out to all Android Auto compatible cars later this summer. The new interface is built to help you get on the road faster, show more useful information at a glance and simplify common tasks while driving. Buckle up as we walk you through Android Auto’s new look.

See turn-by-turn directions while controlling other apps on the same screen.
The new launcher introduces a familiar way to easily discover and start apps compatible with Android Auto.
A new dark theme blends in with modern automotive interiors, while incorporating the best of Google’s Material Design.
Notifications for incoming phone calls and messages make it easy to stay connected.
The new notification center shows recent messages and calls in a place that’s familiar and easy to access.

Get on the road faster: As soon as you start your car, Android Auto will continue playing your media and show your navigation app of choice. Simply tap on a suggested location or say “Hey Google” to navigate to a new place.

Stay on top of your apps: With the new navigation bar, you’ll be able to see your turn-by-turn directions and control your apps and phone on the same screen.

Do more with fewer taps: You’ll be able to easily control your apps with one tap. Get turn-by-turn directions, rewind your podcast or take an incoming call, all on the same screen.

Easily manage communications: The new notification center shows recent calls, messages and alerts, so you can choose to view, listen and respond at a time that’s convenient and safe for you.

A color palette that’s easy on the eyes: We’re evolving Android Auto’s design to fit in better with your car’s interior. A dark theme, coupled with colorful accents and easier-to-read fonts, also helps improve visibility.

A screen fit for more cars: If you have a car with a wider screen, Android Auto now maximizes your display to show you more information, like next-turn directions, playback controls and ongoing calls.

Get ready to hit the road later this summer with all these new features! If you’re joining us at I/O this week, check out these updates at the Android for Cars Sandbox. We’ll also be sharing details at the “What’s New with Android for Cars” session on May 7 from 4:00 to 5:00 p.m. PST.


Meet the Googler in charge of all things I/O

From May 7 through May 9, more than 7,000 developers will head to Shoreline Amphitheatre in Mountain View for I/O, Google’s annual conference—and take part in talks and events in an area that’s usually a parking lot. In charge of turning that blank space into a festival-like atmosphere is Amanda Matuk, who has been part of the team running the conference for the past 10 years.

Amanda, who is the event’s executive producer, has been in charge of I/O for the past four years. Planning takes six to nine months every year, and ends with three hectic days on site. For this installment of The She Word, I asked Amanda exactly how she gets it done—and the songs she blasts in her car to get her pumped up for the big day.

How do you describe your job at a dinner party?

I build things: teams, processes and ideas. My role at Google is split. As the Head of Hardware Experiences, I manage all our hardware activities that take place in real life, from press moments to consumer installations where folks can get hands-on with our products. As the internal executive producer of I/O, I look after an 80+ person team, taking I/O from an idea on paper in November to a three-day live experience in May.

Attendees at Shoreline Amphitheatre in 2018.

You were on the team that moved I/O from San Francisco to Mountain View. How did that change the event?

The change of location came at a core moment for the company. It was late 2015 when we decided to make the move, as Sundar Pichai had just stepped up as CEO. We wanted to connect back to our roots with the developer community, who are based in Silicon Valley. We physically connected back with those roots, and celebrated the developer community in a venue typically reserved for concerts. In doing so, we challenged the standard conference format, and also put developers—our core users on many of our platforms—at the center of the conversation.

Sundar Pichai delivers last year’s keynote at I/O.

Your schedule must be jam-packed, especially the week of I/O. How do you stay calm throughout the madness?

I operate under the principle that if you can do something now, do it now. Procrastination is a really natural thing I think we all do, but especially on site, when there are a thousand tiny micro-decisions that come up in a given day, it’s important to do what you can in the moment.

Also, it’s super cheesy, but I make a playlist that I listen to on the drive in on I/O days. Last year’s playlist included “Unstoppable” by Sia, “Run the World (Girls)” by Beyoncé, and “I’m Every Woman” by Whitney Houston. There’s nothing like starting the day with a bit of musical female empowerment. (Told you I’m cheesy!)

What’s your schedule like the week of I/O?

Once we get to the week of I/O, my job is to support the team. Nobody builds a conference of this scale and level of creative detail alone. My only true solo moment is on the first day. I like to arrive at 6 a.m. and walk the grounds before we open the gates. I started this ritual on the first I/O at Shoreline to remind myself that what once was a parking lot is now effectively a city layout, ready for thousands of developers to occupy for the following three days.

A typical day is spent checking in with teammates, managing the various production teams who operate on a rolling schedule, and monitoring potential challenges like the ever-present lunch rush. I average 28,000 steps a day during I/O.
After Dark, our nighttime setup, at I/O 2018.

What’s one moment you’ll remember from your years on the team?

Something I’ll remember for years to come is the opening moment in 2016. To have Sundar, a former product manager, stand on the stage as the CEO and open what felt like a rock concert of a conference was something really special. We had our new leader speaking to the developer world, making them feel celebrated in a very real and genuine way, and we ushered in a new style of conference.

Did you always want to run big events like this? What advice would you have for women starting out in their careers?

I started my career thinking I was going to be a lawyer. I was working in a law firm, studying for the LSAT, but I wasn’t energized by the work. I took a hard left turn and got into tech, starting in sales and eventually moving into marketing on the events and experiences team. My main advice is something I have to remind myself every day: the path’s not linear. Just because you’re on a certain path now does not mean it is “the path.” When you’re starting out in your career, keep your eyes open to possibility, really listen to your intuition, and if an opportunity speaks to you, it’s probably worth a listen. Your career is not necessarily going to be a straight line, but it can absolutely be a fun journey.


Why you should thank a teacher this week, and always

Editor’s note: Happy Teacher Appreciation Week! We’re honored to have the 2019 National Teacher of the Year, Rodney Robinson, as today’s guest author (and Doodler), who shares more about his journey and all the ways we’re celebrating teachers this week and beyond.

I went into teaching to honor my first teacher: my mother, Sylvia Robinson. Growing up in rural Virginia, she dreamed of becoming an educator but was denied the chance due to poverty and segregation; instead, she ran an in-home daycare center for all the neighborhood children, where she made each of us feel like we were the most important person on earth.

My mother always said, “every child deserves the proper amount of love to get what they need to be successful in life.” My sister, who had cerebral palsy, often needed more of my mother’s love and care than my other siblings and I did. Through her parenting, I learned what it meant to create a culture of equity—where every person gets the right amount of support they need to be successful—which has proven critical in my own teaching journey.

Today I teach social studies in a juvenile detention facility in Virginia, where I work to create a positive school culture and empower my students to become civically minded social advocates. When I was selected as Virginia’s Teacher of the Year, and then National Teacher of the Year, I was elated—mostly for my students. Their stories don’t often fit into the typical educational story in America, but they represent the power and possibility of second chances. They deserve a great education to take advantage of that second chance, and I’m eager to advocate for what they—along with other students from underprivileged backgrounds—need to be successful. That’s also why I’m so happy that Google is showing up this Teacher Appreciation Week, including with a new $5 million grant to DonorsChoose.org, to make it easier for us to build classrooms that reflect the diversity of our students.

Today’s Doodle was co-designed by the 57 2019 Teachers of the Year, representing each U.S. state, extra-state territories, the District of Columbia and the Department of Defense Education Activity.

Google’s homepage today is a tribute to teachers, and I feel proud to see the contribution I made—alongside my 56 fellow State Teachers of the Year—up there for everyone to see. Since Google is a sponsor of the Council of Chief State School Officers’ (CCSSO) National Teacher of the Year program, we had the opportunity to spend a few days at Google’s Bay Area headquarters, where I learned a lot about technology and about using storytelling, advocacy and leadership in my practice. I am glad to see companies like Google have teachers’ backs. While at Google, I got to engage in meaningful discussions with my fellow 2019 Teachers of the Year about how together we can advocate for solutions to some of the biggest issues in education.

A $5 million investment to bring teachers’ ideas to life

Today Google is making one of its largest teacher-focused grants to date: a $5 million Google.org grant that will unlock over $10 million for teachers through DonorsChoose.org, a nonprofit organization that helps teachers get funding for classroom resources and projects.
From 8:00 AM EST on Monday, May 6 until 3:00 AM EST on Tuesday, May 7, for every dollar you donate to a teacher’s classroom on DonorsChoose.org, Google will put in an extra fifty cents to help teachers get funding, up to $1.5 million total. Later this month, the remaining $3.5 million of this grant will go toward supporting underrepresented teachers and those looking to create more inclusive classrooms. Representation means so much to my students, which is why it’s so important to have teachers who value their cultures and look like them.

Free resources and trainings for educators, by educators

Google is also launching free online and in-person resources and trainings. In the Teacher Center, you’ll find a new section with teacher guides and lesson plans—created for teachers, by teachers—made to help create classrooms that best reflect our students. And throughout the week, you can attend free in-person trainings for educators in the new Google Learning Center in New York City, led by teachers like me(!) and 2015 National Teacher of the Year Shanna Peeples, as well as teacher-focused organizations like TED-Ed. I’ll also be doing an Education On Air session later this week, so you can even tune in virtually.

Making it easier for teachers to learn from one another

As teachers, we often learn from each other. That’s why all of the 2019 State Teachers of the Year have recorded words of insight and encouragement to share with our fellow educators as part of CCSSO and Google’s “Lessons from Teachers of the Year” YouTube series. As part of our work with Google, we also received early access to TED Masterclass, a new TED-Ed professional learning program that Google sponsored, which supports educators in sharing their ideas in the form of TED-style talks. You can now check out several of my fellow educators’ TED Talks on the newly launched TED-Ed Educator Talks YouTube channel. More than 5,000 educators, including Google Certified Innovative Educators, are busy developing their Talks.

I hope you’ll join us in celebrating teachers everywhere who go the extra mile to help every student succeed. You can start exploring classroom projects eligible for today’s match on DonorsChoose.org, and of course, remember to #thankateacher—because we deserve it.