Glass Enterprise Edition has helped workers in a variety of industries—from logistics, to manufacturing, to field services—do their jobs more efficiently by providing hands-free access to the information and tools they need to complete their work. Workers can use Glass to access checklists, view instructions or send inspection photos or videos, and our enterprise customers have reported faster production times, improved quality, and reduced costs after using Glass.

Glass Enterprise Edition 2 helps businesses further improve the efficiency of their employees. As our customers have adopted Glass, we’ve received valuable feedback that directly informed the improvements in Glass Enterprise Edition 2.

Glass Enterprise Edition 2 with safety frames by Smith Optics.

Glass is a small, lightweight wearable computer with a transparent display for hands-free work. Glass Enterprise Edition 2 is built on the Qualcomm Snapdragon XR1 platform, which features a significantly more powerful multicore CPU (central processing unit) and a new artificial intelligence engine. This enables significant power savings, enhanced performance and support for computer vision and advanced machine learning capabilities. We’ve also partnered with Smith Optics to make Glass-compatible safety frames for different types of demanding work environments, like manufacturing floors and maintenance facilities.

Additionally, Glass Enterprise Edition 2 features improved camera performance and quality, which builds on Glass’s existing first-person video streaming and collaboration features. We’ve also added a USB-C port that supports faster charging, and increased overall battery life so customers can use Glass longer between charges.

Finally, Glass Enterprise Edition 2 is easier to develop for and deploy. It’s built on Android, making it easier for customers to integrate the services and APIs (application programming interfaces) they already use. 
And in order to support scaled deployments, Glass Enterprise Edition 2 now supports Android Enterprise Mobile Device Management.

Over the past two years at X, Alphabet’s moonshot factory, we’ve collaborated with our partners to provide solutions that improve workplace productivity for a growing number of customers—including AGCO, Deutsche Post DHL Group, Sutter Health, and H.B. Fuller. We’ve been inspired by the ways businesses like these have been using Glass Enterprise Edition. X, which is designed to be a protected space for long-term thinking and experimentation, has been a great environment in which to learn and refine the Glass product. Now, in order to meet the demands of the growing market for wearables in the workplace and to better scale our enterprise efforts, the Glass team has moved from X to Google.

We’re committed to providing enterprises with the helpful tools they need to work better, smarter and faster. Enterprise businesses interested in using Glass Enterprise Edition 2 can contact our sales team or our network of Glass Enterprise solution partners starting today. We’re excited to see how our partners and customers will continue to use Glass to shape the future of work.
What if this was your day? At 10 a.m., explore the impact of cybersecurity on society. Over lunch, chat with a famous YouTuber. Wrap up the day with a tour of the Google X offices. Then, head home to work on a machine intelligence group project.

Sound out of the ordinary? For the 65 students participating in Google’s Tech Exchange program, this has been their reality over the last nine months.

Tech Exchange, a student exchange program between Google and 10 Historically Black Colleges and Universities (HBCUs) and Hispanic-Serving Institutions (HSIs), hosts students at Google’s Mountain View campus and engages them in a variety of applied computer science courses. The curriculum includes machine learning, product management, computational theory and database systems, all co-taught by HBCU/HSI faculty and Google engineers.

Tech Exchange is one way Google makes long-term investments in education in order to increase pathways to tech for underrepresented groups. We caught up with four students to learn about their experiences, hear about their summer plans and understand what they’ll bring back to their home university campuses.

Taylor Roper, Howard University

Summer plans: BOLD Internship with the Research and Machine Intelligence team at Google

What I loved most: “If I could take any of my Tech Exchange classes back to Howard, it would be Product Management. This was such an amazing class and a great introduction into what it takes to be a product manager. The main instructors were Googlers who are currently product managers. Throughout the semester, we learned how design, engineering and all other fields interpret the role of a product manager. 
Being able to ask experts questions was very insightful and helpful.”

Vensan Cabardo, New Mexico State University

Summer plans: Google’s Engineering Practicum Program

Finding confidence and comrades: “As much as I love my friends back home, none of them are computer science majors, and any discussion on my part about computer science would fall on deaf ears. That changed when I came to Tech Exchange. I found people who love computing and talking about computing as much as I do. As you do these things and as you travel through life, there may be a voice in your head telling you that you made it this far on sheer luck alone, that you don’t belong here, or that your accomplishments aren’t that great. That’s the imposter syndrome talking. That voice is wrong. Internalize your success, internalize your achievements, and recognize that they are the result of your hard work, not just good luck.”

Pedro Luis Rivera Gómez, University of Puerto Rico at Mayagüez

Summer plans: Software Engineering Internship at Google

The value of a network: “A lesson from the Tech Exchange program that has helped me a lot is to establish a network and promote peer collaboration. We all have our strengths and weaknesses, and when you are working on a project without much experience, you can get stuck on a particular task. Having a network increases the productivity of the whole group. When one member gets stuck, they can ask a peer for advice.”

Garrett Tolbert, Florida A&M University

Summer plans: Applying to be a GEM Fellow

Ask all the questions: “One thing I will never forget from Tech Exchange is that asking questions goes beyond the classroom. Everyone in this program has been so accessible and helpful with accommodating me for things I never thought were possible. Being in this program has shown me that if you don’t know, just ask! Research the different paths you can take within tech, and see which paths interest you. Then, find people who are in those fields and connect with them.”
Over the past three years, teams at Google have been applying AI to problems in healthcare—from diagnosing eye disease to predicting patient outcomes from medical records. Today we’re sharing new research showing how AI can predict lung cancer in ways that could boost the chances of survival for many people at risk around the world.

Lung cancer results in over 1.7 million deaths per year, making it the deadliest of all cancers worldwide—more than breast, prostate, and colorectal cancers combined—and the sixth most common cause of death globally, according to the World Health Organization. While lung cancer has one of the worst survival rates among all cancers, interventions are much more successful when the cancer is caught early. Unfortunately, the statistics are sobering because the overwhelming majority of cancers are not caught until later stages.

Over the last three decades, doctors have explored ways to screen people at high risk for lung cancer. Though low-dose CT screening has been proven to reduce mortality, challenges remain that lead to unclear diagnoses, subsequent unnecessary procedures, financial costs, and more.

Our latest research

In late 2017, we began exploring how we could address some of these challenges using AI. Using advances in 3D volumetric modeling alongside datasets from our partners (including Northwestern University), we’ve made progress in modeling lung cancer prediction as well as laying the groundwork for future clinical testing. Today we’re publishing our promising findings in “Nature Medicine.”

Radiologists typically look through hundreds of 2D images within a single CT scan, and cancer can be minuscule and hard to spot. We created a model that can not only generate the overall lung cancer malignancy prediction (viewed in 3D volume) but also identify subtle malignant tissue in the lungs (lung nodules). 
The model can also factor in information from previous scans, which is useful in predicting lung cancer risk because the growth rate of suspicious lung nodules can be indicative of malignancy.

This is a high-level modeling framework. For each patient, the AI uses the current CT scan and, if available, a previous CT scan as input. The model outputs an overall malignancy prediction.

In our research, we leveraged 45,856 de-identified chest CT screening cases (some in which cancer was found) from NIH’s research dataset from the National Lung Screening Trial study and Northwestern University. We validated the results with a second dataset and also compared our results against six U.S. board-certified radiologists.

When using a single CT scan for diagnosis, our model performed on par with or better than the six radiologists. We detected five percent more cancer cases while reducing false-positive exams by more than 11 percent compared to unassisted radiologists in our study. Our approach achieved an AUC of 94.4 percent (AUC is a common metric used in machine learning that provides an aggregate measure of classification performance).

For an asymptomatic patient with no history of cancer, the AI system reviewed and detected potential lung cancer that had been previously called normal.

Next steps

Despite the value of lung cancer screenings, only 2-4 percent of eligible patients in the U.S. are screened today. This work demonstrates the potential for AI to increase both accuracy and consistency, which could help accelerate adoption of lung cancer screening worldwide.

These initial results are encouraging, but further studies will assess the impact and utility in clinical practice. We’re collaborating with the Google Cloud Healthcare and Life Sciences team to serve this model through the Cloud Healthcare API, and we’re in early conversations with partners around the world to continue additional clinical validation research and deployment. 
If you’re a research institution or hospital system that is interested in collaborating in future research, please fill out this form.
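For readers unfamiliar with the AUC metric cited above, it has a simple interpretation: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. The sketch below illustrates this with a handful of made-up malignancy scores; it is not the study's code, and the numbers are purely hypothetical.

```python
def auc(labels, scores):
    """AUC = probability that a randomly chosen positive case scores
    higher than a randomly chosen negative case (ties count as 0.5)."""
    pos = [s for label, s in zip(labels, scores) if label == 1]
    neg = [s for label, s in zip(labels, scores) if label == 0]
    pairs = [(p, n) for p in pos for n in neg]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs)
    return wins / len(pairs)

# Hypothetical malignancy scores for six scans (1 = cancer, 0 = no cancer).
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
print(auc(labels, scores))  # 8 of the 9 positive/negative pairs are ranked correctly
```

A perfect classifier ranks every positive above every negative (AUC of 100 percent), while random guessing yields 50 percent; the study's 94.4 percent sits near the top of that range.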
Since 2015, the Family Independence Initiative (FII) has used over $2.5 million in Google.org grants to empower families to escape poverty. Their technology platform, UpTogether, helps low-income families access small cash investments, connect with each other and share solutions—like how to find childcare or strategies to pay off debt. With last year’s grants, FII improved their technology platform and expanded to more cities, including Austin and Chicago.

This year, the Family Independence Initiative is embarking on a mission of collaborative research to shift what’s possible for low-income families. And today, we’re expanding our investment in FII with a $1 million grant to support a pilot project called the Trust and Invest Collaborative, which aims to guide policy decisions that will increase economic mobility for low-income families and their children. The grant will help FII, the City of Boston and the Department of Transitional Assistance examine learnings and successes from FII and replicate them in future government services offered to low-income families.

In addition to our original grants to FII, we offered Google’s technical expertise. Over the last six months, six Google.org Fellows have been working full-time with FII, using their engineering and user experience expertise to help improve UpTogether. They used machine learning and natural language processing to make UpTogether’s data more useful in determining what leads to family success and to make it easier for families to share their own solutions with each other. These improvements in data quality will support the research for the pilot in Boston and Cambridge and help FII continue to share learnings from families’ own voices with future collaborators.
When you have a lot of people working in a Google Doc, it can look like a zoo, with anonymous animals popping into your document to write (or howl, bark or moo) their feedback. Today, 13 new animals—like the African wild dog, grey reef shark and cheetah—are joining the pack. Though they may be excellent collaborators, they also need our help.

It’s Endangered Species Day, and we’re teaming up with World Wildlife Fund (WWF) and Netflix's “Our Planet” to raise awareness around animals that are at risk.

According to WWF, wildlife populations have dwindled by 60 percent in less than five decades. And with nearly 50 species threatened with extinction today, technology has a role to play in preventing endangerment.

With artificial intelligence (AI), advanced analytics and apps that speed up collaboration, Google is helping organizations like WWF in their work to save our planet’s precious species. Here are some of the ways.

Curating wildlife data quickly. A big part of increasing conservation efforts is having access to reliable data about the animals that are threatened. To help, WWF and Google have joined a number of other partners to create the Wildlife Insights platform, a way for people to share wildlife camera trap images. Using AI, the species are automatically identified, so that conservationists can act more quickly to help recover global wildlife populations.

Predicting wildlife trade trends. Using Google search queries and known web page content, Google can help organizations like WWF predict wildlife trade trends, similar to how we can help see flu outbreaks coming. This way, we can help prevent a wildlife trafficking crisis more quickly.

Collaborating globally with people who can help. 
Using G Suite, which includes productivity and collaboration apps like Docs and Slides, Google Cloud, WWF and Netflix partnered to draft materials and share information quickly to help raise awareness for Endangered Species Day (not to mention, cut back on paper).

What you can do to help

Conservation can seem like a big, hairy problem that’s best left to the experts to solve. But there are small changes we can make right now in our everyday lives. When we all collaborate to make these changes, they can make a big difference.

Check out this Slides presentation to find out more about how, together, we can help our friends. You can also take direct action to help protect our planet on the “Our Planet” website.
I’m legally blind, so from the moment I pop out of bed each morning, I use technology to help me go about my day. When I wake up, I ask my Google Assistant for my custom-made morning Routine, which turns on my lights, reads my calendar and plays the news. I use other products as well, like screen readers and a refreshable braille display, to help me be as productive as possible.

I bring my understanding of what it's like to have a disability to work with me, where I lead accessibility for Google Search, Google News and the Google Assistant. I work with cross-functional teams to help fulfill Google’s mission of building products for everyone—including those of us in the disabled community.

The Assistant can be particularly useful for helping people with disabilities get things done. So today, on Global Accessibility Awareness Day, we’re releasing a series of how-to videos with visual and audible directions, designed to help the accessibility community set up and get the most out of their Assistant-enabled smart devices.

You can find step-by-step tutorials to learn how to interact with your Assistant, from setting up your Assistant-enabled device to using your voice to control your home appliances, in our YouTube playlist, which we’ll continue to update throughout the year. This playlist came out of conversations within the team about how we can use our products to make life a little easier. Many of us have some form of disability, or have a friend, co-worker or family member who does. For example, Stephanie Wilson, an engineer on the Google Home team, helped set up her parents’ smart home after her dad was diagnosed with Parkinson’s disease.

In addition to our own teammates, we're always listening to suggestions from the broader community on how we can make our products more accessible. 
Last week at I/O, we showed how we’re making the Google Assistant more accessible, using AI to improve products for people with speech impairments, and we added Live Caption in Android Q to give the Deaf community automatic captions for media that’s playing audio on your phone. All these changes were made after receiving feedback from people like you.

Head over to our Accessibility website to learn more, and if you have questions or feedback on accessibility within Google products, please share your feedback with us via our dedicated Disability Support team.
Many years ago, I took a dance lesson in Budapest to learn the csárdás, a Hungarian folk dance. The instructor shouted directions to me in enthusiastic Hungarian, a language I didn't understand, yet I still learned the dance by mimicking the instructor and the expert students. Now, I do love clear directions in a lesson—I am a technical writer, after all—but it’s remarkable what a person can learn by emulating the experts.

In fact, you can learn a lot about machine learning by emulating the experts. That’s why we’ve teamed with ML experts to create online courses to help researchers, developers, and students. Here are three new courses:

Clustering: Introduces clustering techniques, which help find patterns and related groups in complex data. This course focuses on k-means, which is the most popular clustering algorithm. Although k-means is relatively easy to understand, defining similarity measures for k-means is challenging and fascinating.

Recommendation Systems: Teaches you how to create ML models that suggest relevant content to users, leveraging the experiences of Google's recommendation system experts. You'll discover both content-based and collaborative filtering, and uncover the mathematical alchemy of matrix factorization. To get the most out of this course, you'll need at least a little background in linear algebra.

Testing and Debugging: Explains the tricks that Google's ML experts use to test and debug ML models. Google's ML experts have spent thousands of hours deciphering the signals that faulty ML models emit. Learn from their mistakes.

These new courses are engaging, practical, and helpful. They build on a series of courses we released last year, starting with Machine Learning Crash Course (MLCC), which teaches the fundamentals of ML. If you enjoyed MLCC, you're ready for these new courses. They will push you to think differently about the way you approach your work. Take these courses to copy the moves of the world's best ML experts.
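To give a flavor of what the Clustering course covers, here is a minimal k-means sketch in plain Python. This is an illustrative toy, not course material: assign each point to its nearest centroid, recompute each centroid as the mean of its cluster, and repeat.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on tuples of floats: alternate between assigning
    points to their nearest centroid and recomputing cluster means."""
    rng = random.Random(seed)
    centroids = list(rng.sample(points, k))  # initialize from the data
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centroids[j])),
            )
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster.
        new_centroids = []
        for j, cluster in enumerate(clusters):
            if cluster:
                new_centroids.append(tuple(sum(dim) / len(cluster) for dim in zip(*cluster)))
            else:
                new_centroids.append(centroids[j])  # keep an empty cluster's centroid
        centroids = new_centroids
    return centroids, clusters

# Two obvious groups in 2D.
points = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9), (4.9, 5.2)]
centroids, clusters = kmeans(points, k=2)
print(sorted(len(c) for c in clusters))  # → [3, 3]
```

As the course notes, the hard part in practice is not this loop but choosing the similarity measure: squared Euclidean distance works for these toy 2D points, but real features usually need scaling or an embedding first.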
Everyone deserves access to a quality education—no matter your background, where you live, or your abilities. We’re recognizing this on Global Accessibility Awareness Day, an effort to promote digital accessibility and inclusion for people with disabilities, by sharing new features, training, and partners, along with the many new products announced at Google I/O.

Since everyone learns in different ways, we design technology that can adapt to a broad range of needs and learning styles. For example, you can now add captions in Slides and turn on live captions in Hangouts Meet, and we’ve improved discoverability in the G Suite toolbar. By making these features available—with even more in the works—teachers can help students learn in ways that work best for them.

Working with our partners to expand access

We’re not the only ones trying to make learning more accessible, so we’ve partnered with companies who are building apps to make it easier for teachers to communicate with all students.

One of our partners, Crick Software, just launched Clicker Communicator, a child-friendly communication tool for the classroom. It bridges the gap between needs and wants and curriculum access, empowers non-verbal students with the tools to initiate and lead conversations, and enables proactive participation in the classroom. It’s one of the first augmentative and alternative communication (AAC) apps specifically created for Chromebook users.

Learn more about Clicker Communicator, an AAC app for Chromebooks.

Assessing with accessibility in mind

Teachers use locked mode, available only on managed Chromebooks, when giving quizzes in Google Forms to eliminate distractions. Locked mode is now used millions of times per month, and many students use additional apps for accommodations when taking quizzes. We’ve been working with many developers to make sure their tools work with locked mode. One of those developers is our partner Texthelp®. 
Coming soon, when you enable locked mode in quizzes in Google Forms, your students will be able to access the Read&Write for Google Chrome and EquatIO® for Google tools that they rely on daily.

Another partner, Don Johnston, supports students with apps including Co:Writer for word prediction, translation, and speech recognition, and Snap&Read for read-aloud, highlighting, and note-taking. Students signed into these extensions can use them on a quiz—even in locked mode. This integration will be rolling out over the next couple of weeks.

Learn more about the accessibility features available in locked mode, including ChromeVox, select-to-speak, and visual aids such as high-contrast mode and magnifiers.

Locked mode with Texthelp extensions: Texthelp extension Read&Write in locked mode in quizzes in Google Forms.

Locked mode with Don Johnston extensions: Don Johnston extension Co:Writer in locked mode in quizzes in Google Forms.

Locked mode in quizzes in Google Forms: available only on managed Chromebooks.

Tools, training, and more resources

Assistive technology has the power to transform learning for more students, but educators need training, support, and tutorials to help their students get the most from the technology.

The new Accessibility section on our Google for Education website has information on Chromebooks and G Suite for Education, a module on our Teacher Center with printable flashcards, and EDU in 90 YouTube videos on G Suite and Chromebook accessibility features. Check out our accessibility tools and find training on how to use them to create more engaging, accessible learning experiences.

Watch the EDU in 90 episode on Chrome accessibility features.

We love hearing stories of how technology is making learning more accessible for more learners, so please share how you're using accessibility tools to support all types of learners, along with requests for how we can continue to improve to meet the needs of more learners.
Editor’s note: In this post, Kristina Joye Lyles from DonorsChoose.org shares how her organization is teaming up with Google.org to launch the #ISeeMe campaign.

I joined DonorsChoose.org in 2013 and have long worked with organizations like Google.org that share our belief in the power of teachers. To date, Google.org has provided over $25 million to support classrooms on DonorsChoose.org, and last week, they committed an additional $5 million to teachers, with a focus on supporting diverse and inclusive classrooms. Together, we’re kicking off #ISeeMe, a new effort to celebrate the identities of teachers and students in their classrooms.

As a military brat, I attended many public schools across the U.S. but had only two teachers of color from kindergarten through twelfth grade. My teachers and professors of color had a particularly strong impact on me as mentors and role models; I was encouraged to see them as leaders in our school community, and their presence alone showed me that diversity and representation matter.

My story is like those of so many others. Research shows that students benefit from seeing themselves in their teachers and learning resources. For example, black students who have just one black teacher between third and fifth grade are 33 percent more likely to stay in school. Girls who attend high schools with a higher proportion of female STEM teachers are 19 percent more likely to graduate from college with a science or math major.

With this support from Google.org, teachers who are underrepresented in today’s public school classrooms—like teachers of color and female math and science teachers—as well as all teachers looking to create more inclusive classrooms will get the support they need and deserve. Teachers from all backgrounds can take steps toward creating classrooms that reflect their students, whether they’re selecting novels with diverse characters to discuss or taking trainings to meet the needs of culturally diverse students. 
And we’re eager to help them bring their ideas to life so that more students can see themselves reflected in their classrooms.

I’m thrilled that many teachers on DonorsChoose.org are already coming up with inspiring ways to foster classroom environments where every student can feel important and included. Mr. Yung sees the power of food to bring his students together across different cultural backgrounds. Ms. McLeod is determined to bring her students from Lumberton, North Carolina, to the National Museum of African-American History and Culture in Washington, D.C. Mrs. Toro-May aspires to bring her bilingual students books with culturally relevant heroes and heroines.

We hope you’ll join us and the philanthropists of various backgrounds who have lit the torch for #ISeeMe today. If you are a public school teacher, you can set up an #ISeeMe classroom project right now at DonorsChoose.org. You can also access free inclusive classroom resources and ideas created for educators, by educators, at any time in Google’s Teacher Center. And for those of you who have been inspired by a teacher, we invite you to explore classroom projects that are eligible for Google.org’s #ISeeMe donation matching—we would love to have your support for these teachers and classrooms.
Last week we announced that we would stop supporting the Works with Nest (WWN) program on August 31, 2019 and transition to the Works with Google Assistant (WWGA) platform. The decision to retire WWN was made to unify our efforts around third-party connected home devices under a single platform for developers to build features for a more helpful home. The goal is to simplify the experience for developers and to give you more control over how your data is shared. Since the announcement, we’ve received a lot of questions about this transition. Today we wanted to share our updated plan and clarify our approach.

First, we’re committed to supporting the integrations you value and minimizing disruptions during this transition, so here’s our updated plan for retiring WWN:

Your existing devices and integrations will continue working with your Nest Account; however, you won’t have access to new features that will be available with a Google Account. If we make changes to the existing WWN connections available with your Nest Account, we will make sure to keep you informed.

We’ll stop accepting new WWN connections on August 31, 2019. Once your WWN functionality is available on the WWGA platform, you can migrate with minimal disruption from a Nest Account to a Google Account.

Second, we want to clarify how this migration will work for you. Moving forward, we’ll deliver a single consumer and developer experience through the Google Assistant. WWGA already works with over 3,500 partners and 30,000 devices, and integrates seamlessly with Assistant Routines. Routines allow anyone to quickly customize how their smart devices work together based on simple triggers—whether you’re leaving home or going to bed.

One of the most popular WWN features is automatically triggering routines based on Home/Away status. Later this year, we'll bring that same functionality to the Google Assistant and provide more device options for you to choose from. 
For example, you’ll be able to have your smart light bulbs automatically turn off when you leave your home. Routines can be created from the Google Home or Assistant apps, using the hardware you already own. Plus, we’re making lots of improvements to setting up and managing Routines to make them even easier to use.

We recognize you may want your Nest devices to work with other connected ecosystems. We’re working with Amazon to migrate the Nest skill that lets you control your Nest thermostat and view your Nest camera livestream via Amazon Alexa. Additionally, we’re working with other partners to offer connected experiences that deliver more custom integrations.

For these custom integrations, partners will undergo security audits, and we’ll control what data is shared and how it can be used. You’ll also have more control over which devices these partners see by choosing the specific devices you want to share. For example, you’ll be able to share your outdoor cameras, but not the camera in your nursery, with a security partner.

We know we can't build a one-size-fits-all solution, so we're moving quickly to work with our most popular developers to create and support helpful interactions that give you the best of Google Nest. Our goal remains to give you the tools you need to make your home, and those of other Nest users, helpful in the ways that matter most to you.
Smartphones are key to helping all of us get through our days, from getting directions to translating a word. But for people with disabilities, phones have the potential to do even more to connect people to information and help them perform everyday tasks. We want Android to work for all users, no matter their abilities. And on Global Accessibility Awareness Day, we’re taking another step toward this aim with updates to Live Transcribe, coming next month.

Available on 1.8 billion Android devices, Live Transcribe helps bridge the gap between the deaf and the hearing via real-time, real-world transcriptions of everyday conversations. With this update, we’re building on our machine learning and speech recognition technology to add new capabilities.

First, Live Transcribe will now show you sound events in addition to transcribing speech. You can see, for example, when a dog is barking or when someone is knocking on your door. Seeing sound events allows you to be more immersed in the non-conversation realm of audio and helps you understand what is happening in the world. This is important for those who may not be able to hear non-speech audio cues such as clapping, laughter, music, applause, or the sound of a speeding vehicle whizzing by.

Second, you’ll now be able to copy and save transcripts, stored locally on your device for three days. This is useful not only for those with deafness or hearing loss—it also helps those who might be using real-time transcriptions in other ways, such as people learning a language for the first time or even, secondarily, journalists capturing interviews or students taking lecture notes. 
We’ve also made the audio visualization indicator bigger, so that users can more easily see the background audio around them.Caption: See sound events, like whistling or a dog barking, in the bottom left corner of the updated Live Transcribe.With billions of active devices powered by Android, we’re humbled by the opportunity to build helpful tools that make the world’s information more accessible in the palm of everyone’s hand. As long as there are barriers for some people, we still have work to do. We’ll continue to release more features to enrich the lives of our accessibility community and the people around them.
The quality of the air we breathe has a major impact on our health. Even in Amsterdam, a city where bikes make up 36 percent of the traffic, the average life span is cut short by a year as a result of polluted air. Information about air quality at the street level can help pinpoint areas where the quality is poor, which is useful for all types of people—whether you’re a bicyclist on your daily commute, a parent taking your children to a local park, or an urban planner designing new communities.

A Street View car in Amsterdam.

Project Air View

Building on efforts in London and Copenhagen, Google and the municipality of Amsterdam are now working together to gain insight into the city’s air quality at the street level. Amsterdam already measures air quality at several points around the city. Information from two of our Street View cars in Project Air View will augment the measurements from these fixed locations, to yield a more detailed street-by-street picture of the city’s air quality.

To take the measurements, the Street View cars will be equipped with air sensors to measure nitric oxide, nitrogen dioxide, ultra-fine dust and soot (extremely small particles that are hardly ever measured). Scientists from Utrecht University are installing the air sensors in the vehicles, working with the municipality and Google to plan the driving routes, and leading the data validation and analysis. Once the data validation and analysis is complete, we’ll share helpful insights with the public, so that everyone—citizens, scientists, authorities and organizations—can make more informed decisions.

This research can spread awareness about air pollution and help people take action. For example, if the research shows differences in air quality between certain areas in the city, people could adjust their bike route or choose another time to exercise. Our hope is that small changes like this can help improve overall quality of life.
For more information about Project Air View, visit g.co/earth/airquality.
Highway Inn is an Oahu-based restaurant founded by Hawaii-born Japanese-American Seiichi Toguchi. At the start of World War II, Seiichi was taken from his home to an internment camp in California and assigned to work in the mess halls. There, Japanese-American chefs from around the country taught him how to cook, eventually inspiring him to open Highway Inn to share the foods he loved growing up. Seiichi passed the restaurant down to his son Bobby Toguchi, who has since passed it to his daughter, Monica Toguchi Ryan. Their family has been proudly serving authentic Hawaiian food for over 70 years.

As the third-generation owner, Monica was determined not just to honor her family traditions and legacy, but also to share with younger generations the kinds of food that keep them connected to Hawaiian and local food culture. When her grandfather started the restaurant, he relied on word of mouth to reach new customers. Now, Monica uses Google Ads and their Business Profile on Google to connect with customers, helping them to grow from one location to three across Oahu. She and her family hope to continue preserving the beauty and tradition of Hawaiian food for generations to come.

This Asian American and Pacific Islander Heritage Month, we're telling this and other stories, like Kruti Dance Academy from Atlanta, Georgia. They are two of the many Asian American and Pacific Islander-owned small businesses having an impact on their local communities.
Human behavior has always intrigued me—that's the reason I studied psychology as an undergraduate. At the time, I wondered how those learnings could one day apply to life in the “real world.” As it turns out, an understanding of people and human behavior is an invaluable asset when it comes to cultivating influence—especially in design.

In my role as VP of User Experience (UX) Design at Google, I’m constantly tasked with influencing others. I lead a team of designers, researchers, writers and engineers who are behind products like Google’s Shopping, Trips, Payment and Ads. To create great experiences for people, we must first convince people building these products that design is elemental to delivering not just user value, but also business value. Over the years I've seen how the ability to build influence is essential to designing the best experiences.

User empathy is a fast track to influence

As UX professionals (designers, writers, researchers and front-end engineers), it’s our job to fully grasp the needs of people using our products and be the spokesperson for them. It’s easy to fall into the trap of believing that we understand our users without witnessing them actually using our products. Or to believe that our personal experiences reflect those of people everywhere. Yet every time I go out into the real world and spend time with people actually using our products, I come back with an unexpected insight that changes how I initially thought about a problem.

In 2017, I took a trip to Jakarta to research the challenges of using smartphones in a region where service is relatively expensive and bandwidth is not readily available. It wasn’t until I was on the ground that I realized how degraded the experience was from what I’d pictured. Similarly, during a recent trip to Tel Aviv, I learned how difficult it is to get funding and grow a business.
Developing this kind of understanding, which can only come from experience, helps motivate you to fix a problem from a different angle.

Ideally, we’d bring all of our team members into the field to have these first-hand experiences, but that approach doesn’t scale. What does scale is empathy. We can share our personal experiences, research and user stories to build greater understanding. Once we’ve built a foundation of shared understanding, we can have better influence over decisions that affect users.

Understanding people's experiences and stories helps build better products.

Inspire action with compelling stories

Research can provide the data and anecdotes that help others understand why your design meets a specific need, but how you present that data is equally important.

Creating rich stories full of photos and video clips helps expose others to how people use products and the challenges they encounter. On multiple occasions, I’ve been in a room where research clips of people interacting with a product or prototype are shared with executives and partners. Without fail, observing real people use products gets everyone animated and excited. Watching someone fumble through a task creates a sense of urgency to solve a problem that can’t be generated through data.

One way to do this is with prototyping software or animated slides that show a product flow or tell a narrative that helps people understand the pain points of a product or the ease of its well-designed experience. An interactive prototype lets people experience the full possibilities. If you’re lucky enough to work with a UX engineer, prototypes are probably already a part of your influence repertoire. There’s nothing better than prototyping and sharing a bold idea and hearing: “We need that! Let’s make it happen!”

Listen first

User experience is highly focused on empathy for users, yet we’re often so focused on people using our products that we don’t take the time to develop empathy for our colleagues.
Making sure others feel seen, heard, and understood is a significant step toward influence. Similar to how we can mistakenly make assumptions about our users, we can fall into the same trap with our peers.

Too often people equate influence with asserting their perspective. Instead, influence starts with understanding the goals, motivations and frustrations of others. It’s easy to draw incorrect conclusions, so instead of rushing to make a point, start out by listening to your colleagues. Showing the courtesy of listening often begets reciprocity, and makes others more receptive to your perspective.

Our discipline is founded on exploring human connections and motivations through empathy and listening. Now you can use those tools to build influence, whether you work in UX or not.
This March, we put out the call for super sleuths to help us track down Carmen Sandiego in Google Earth. And we were blown away by the enthusiasm and speed with which people found the reformed VILE operative—who is now an ACME agent—by traveling from city to city around the globe. You not only solved the caper, but also shared stories and memories of playing the original games, watching the shows (both old and new) and sharing the experience with friends, family and kids.

Today, we’ve teamed up with Carmen Sandiego once again—this time to help her recover Tutankhamun’s Mask. Le Chevre, a master climber and classmate of Carmen Sandiego at VILE Academy, has stolen the priceless artifact. We’re counting on gumshoes everywhere to help Carmen find him and recover the loot.

To get your assignment, look for the special edition Pegman icon in Google Earth for Chrome, Android and iOS. Good luck, detectives!
A year ago, we launched .app, the first open top-level domain (TLD) with built-in security through HSTS preloading. Since then, hundreds of thousands of people have registered .app domains, and we want to take a moment to celebrate them.

People are making more websites and apps than ever before. A recent survey we conducted with The Harris Poll found that nearly half (48%) of U.S. respondents plan to create a website in the near future. And a lot of people, especially students, are already building on the web. Over a third (34%) of 16-24 year olds who’ve already created a website did so for a class project. Having a meaningful domain name helps students turn their projects into reality. Take Ludwik Trammer, creator of shrew.app, who said: “The site started as a project for my graduate Educational Technology class at Georgia Tech. Getting the perfect domain gave me the initial push to turn it into the real deal (instead of making a prototype, publishing a scientific paper on it, and forgetting it).”

Helping creators launch their sites securely

With so many new creators, it’s essential that everyone does their part to make the internet safer. That’s why Google Registry designed .app to be secure by default, meaning every website on .app requires an HTTPS connection to ensure a secure connection to the internet.

HTTPS helps keep you and your website visitors safe from bad actors, who may exploit connections that aren’t secure by:

- intercepting or altering the site’s content
- misdirecting traffic
- spying on open Wi-Fi networks
- injecting ad malware or tracking

“As a social application, data protection is paramount. As cyber attacks increase, the security benefits a .app domain brings were a key factor for us. We also believe that a .app domain is significantly more descriptive than a .com domain.”
Daneh Westropp, Founder, pickle.app

There's still work to be done. One out of two people don’t know the difference between HTTP and HTTPS.
Many major browsers (like Chrome) warn users in the URL bar when content is “not secure,” but every website creator still has a shared responsibility to keep their users safe.

.app is one year in, and we’re happy to see so many people using it to build secure websites and connect with the world. You can read more stories from .app owners here and get your own .app name at get.app. If you’re one of the millions of people planning to build a website, we hope you’ll join us in making the internet safer and take the steps to securely launch your website.
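For the curious, the “secure by default” behavior comes from HSTS: .app ships on browsers’ built-in HSTS preload lists, so an HTTPS-only policy applies before a single request is made. As a rough, illustrative sketch (not Google Registry’s or any browser’s actual code), here is how a client might parse a site’s Strict-Transport-Security response header into a policy:

```python
def parse_hsts(header_value):
    """Parse a Strict-Transport-Security header into a policy dict.

    Illustrative sketch only: real browsers also consult a built-in
    preload list, which is what covers every .app domain before any
    header is ever seen on the wire.
    """
    policy = {"max_age": 0, "include_subdomains": False, "preload": False}
    for directive in header_value.split(";"):
        directive = directive.strip().lower()
        if directive.startswith("max-age="):
            policy["max_age"] = int(directive.split("=", 1)[1])
        elif directive == "includesubdomains":
            policy["include_subdomains"] = True
        elif directive == "preload":
            policy["preload"] = True
    return policy


# A typical header for a preload-eligible site: one year max-age,
# subdomains covered, and the preload opt-in directive.
policy = parse_hsts("max-age=31536000; includeSubDomains; preload")
print(policy)
```

Once a policy like this is cached (or preloaded), the browser rewrites any http:// navigation to https:// before connecting, which is what closes the interception and spoofing gaps listed above.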
Whether it's their favorite YouTube channel, the latest news or that hot show everyone is talking about, connected TV lets people watch what they want anytime. This creates new opportunities for brands to engage with their audience, while bringing together the reach and familiarity of TV with the precision and measurability of digital video. In the last year, the number of connected TV ad slots available to advertisers using Display & Video 360 has increased 8X, which has helped more advertisers reach their audiences using this channel. In fact, the number of advertisers running connected TV campaigns on Display & Video 360 has increased 137 percent in the past year.

To help marketers build on this success, we’re continuing to invest in new TV capabilities for Display & Video 360. First, we’re implementing privacy-forward standards for connected TV that ensure responsible audience management practices by giving users transparency and choice over their ad settings. Second, we’re launching new linear TV capabilities so you can extend your reach to traditional TV viewers by purchasing ads on national broadcast networks and local TV stations. And finally, to help you deliver consistent TV strategies across delivery methods, we’re creating a consolidated TV workflow to buy connected TV together with linear TV.

Putting people first with connected TV

In today's environment, privacy is top of mind for users, advertisers, and content providers—and connected TV is no exception. As people gain more control over what content they watch, they also want to be in control of how their data is used to inform the ads they see. It’s critical that the industry adopts user-first practices to help build a sustainable connected TV advertising ecosystem.

This is why we teamed up with the IAB Tech Lab’s OTT Technical Working Group to design shared industry guidelines for creating high-quality and privacy-safe connected TV advertising experiences.
These guidelines formalize the Identifier for Advertising (IFA), which allows advertisers to reach audiences and measure their campaigns in a way that gives users more transparency and control than alternatives, such as solutions that use the IP address of a viewer's device.

IFA was designed to protect user privacy. It gives users the ability to choose how their data is used. This means they can reset the ID associated with their data or fully opt out of experiences like interest-based advertising. They can also be assured that their choices will remain unchanged if they move their device to a new location.

Display & Video 360 now supports the IAB Tech Lab guidelines, allowing marketers to manage frequency, measure reach, and begin developing responsible audience segmentation strategies while protecting user privacy. We’re also adding support for these guidelines in our publisher platform, Google Ad Manager.

“These guidelines will direct stakeholders down the path of best practices to allow connected TV to grow and evolve as a significant advertising platform.”
Dennis Buchheim, Senior Vice President and GM, IAB Tech Lab

We’re seeing momentum across the industry with key players adopting the IAB Tech Lab guidelines. TV and device manufacturers like Roku and measurement vendors such as Nielsen are adopting IFA. Nielsen, for example, has built IFA-based audiences into Digital Ad Ratings to bring greater understanding of reach on mobile and connected TV audiences.

Adding linear TV to the picture

Connected TV is growing quickly, but traditional linear TV is still an important way people watch. There are some moments, such as sporting events or the big season finale, that just have to be enjoyed live. And many of us still tune in to watch programs from our favorite ABC, NBC, CBS and FOX local network stations.
To help you reach people in those moments and on those channels, Display & Video 360 provides access to buy ad slots on national broadcast and cable networks and thousands of local TV stations. Today, U.S. network affiliates are accessible through a beta integration with WideOrbit. And soon, premium national broadcast and cable channels will be available through our partnership with clypd.

To help you easily execute these campaigns and give you more control over your linear TV investments, we’re introducing new buying functionality in Display & Video 360. For example, you can now set up detailed campaign parameters such as geography, daypart, program genre or TV network directly. And throughout the campaign, you can quickly manage budget allocation and optimize reach by adjusting these parameters.

Up next: Consolidated TV buying

At many brands, we’re already seeing stronger collaboration between TV and digital media buying teams to deliver better coordinated strategies across linear and connected TV—and in the future we believe this will be the norm. To help with this shift, we’re consolidating both workflows under a single TV insertion order (IO) in Display & Video 360. This new streamlined buying experience will allow brands and agencies to further simplify the execution of their TV buys by providing insights and setups designed specifically for TV campaigns. We’ll start rolling out this new workflow this fall.

Upcoming consolidated TV IO in Display & Video 360

With Display & Video 360, you can now expand your digital strategy to reach connected TV audiences in a privacy-safe way. You can also connect with viewers as they sink back on the couch and see what’s on network television thanks to self-service linear buying workflows. Stay tuned as we roll out new features and bring on new partners throughout the year.
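The IFA user controls described earlier (resetting the ID and fully opting out) amount to a small state machine. The sketch below is a purely illustrative model to make those guarantees concrete; `AdvertisingId` is an invented name, not an actual device, IAB Tech Lab, or Display & Video 360 API:

```python
import uuid


class AdvertisingId:
    """Illustrative model of a resettable, opt-out-able ad identifier,
    loosely patterned on the user controls the IFA guidelines describe.
    Not a real platform API."""

    def __init__(self):
        self._ifa = str(uuid.uuid4())
        self._opted_out = False

    @property
    def value(self):
        # After opt-out, no usable identifier is exposed to advertisers.
        return None if self._opted_out else self._ifa

    def reset(self):
        # A reset severs the link to data collected under the old ID
        # by generating a fresh, unrelated identifier.
        self._ifa = str(uuid.uuid4())

    def opt_out(self):
        self._opted_out = True


ad_id = AdvertisingId()
previous = ad_id.value
ad_id.reset()      # new identity; old data can no longer be joined to it
ad_id.opt_out()    # ad_id.value is now None
```

The contrast with IP-address-based approaches falls out of the model: an IP address changes when the device moves and cannot be reset or withheld by the user, whereas an identifier like this is stable across locations yet entirely under the user's control.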
Google I/O is always exciting for me. It’s a great moment when we get to tell the world about a wide range of new products and features that can help everyone do more with technology. Because of how intertwined tech is with every aspect of our lives, how we balance its use with our wellbeing has to be front and center. So, as we did last year, we made time to discuss how our users can find a balance by using technology more intentionally (and that might mean using it less).

Last year, we announced our commitment to digital wellbeing, a company-wide effort to help our users balance their technology use in a way that feels right for them. The idea has taken hold. A recent survey we commissioned found that 1 in 3 Americans have taken steps to improve their digital wellbeing in the last year, and more than 80 percent of them said this had a positive impact on their overall sense of wellbeing.

It’s still early, but we’re already seeing that some of our initial Digital Wellbeing features have helped people take control of their tech use. For instance, app timers have helped people stick to their goals over 90 percent of the time, according to our internal data from March of this year, and people who use Wind Down had a 27 percent drop in nightly usage on average.

Coming to Android Q

We know there’s much more we can be doing, which is why we were excited to announce a number of new tools and features at I/O last week. We’re making several improvements to existing features, such as giving you more visibility into the status of your app timers, and allowing Wind Down to be scheduled by day of the week. And, building on the success of app timers, we’re extending that functionality to Chrome on Android, which will let you set time limits on specific websites.

Our devices should help support our intentions throughout the day. Whether it’s work, school or family and friends that we want to focus on, our devices shouldn’t get in the way.
Notifications are an important part of keeping you informed, but not all of them are urgent enough to divert your attention. Now you can choose to make some notifications ‘Gentle’. Gentle notifications won’t make noise, vibrate or appear on the lock screen, but are always available if you want to browse.

And we created Focus mode, which allows you to temporarily pause distracting apps with a single tap from Quick Settings. Finally, because many people want more positive encouragement, we’re adding the ability to set a screen time goal with helpful nudges to stay on track.

New features for families

For parents, screen time is often a unique challenge; in fact, according to a recent study commissioned by Google, 67 percent of parents are concerned about the amount of time their kids spend on devices. People with kids tell us they love that Family Link lets them set daily screen time limits, but we know that nothing about parenting is black and white. We announced last week that Family Link will roll out new features that enable parents to fine-tune these boundaries by setting app-specific time limits and awarding bonus screen time directly from their own device. (We hope this will also help provide a little balance to family dynamics.)

But tools and features are just part of the solution; for families in particular, communication is key. So on wellbeing.google, we now offer tips and advice from experts, including a conversation guide to help parents talk to their kids about technology use.

We believe technology should improve life, not distract from it, so we’ve made a company-wide commitment to prioritize our users’ satisfaction over the amount of time they spend with our products, and our teams are designing with digital wellbeing as a core principle. We’re focused on improving lives—today and in the future—and digital wellbeing is one of the most important ways we’re working to make that happen.
Google I/O didn’t just have developers in attendance. Rock stars, astronauts and Turing Award winners took the stage for more than a dozen Inspiration Sessions, in which attendees learned about how technology is shaping the future, from music to art to creativity. Here are just a few lessons learned from these talks:

The key to creativity is thinking like a child.

Academy Award-winning animator Glen Keane created beloved characters like Ariel, The Beast and Pocahontas. He told attendees that no matter your line of work, it’s important to stay in touch with your inner child. “We all had that six-year-old kid that had something to do with who you are today and what you are doing,” he said. “Don’t forget that part of the adventure you had as a child.”

Caption: Glen Keane does a live drawing of The Beast from “Beauty and the Beast.”

If technology is the answer, what is the question?

That’s the thesis behind the work of artist and researcher Sougwen Chung, who has programmed robots to collaborate with her to create artwork. She spoke alongside Cedric Kiefer, co-founder and creative lead of the art studio Onformative, and Kenric McDowell, co-leader of the Artist + Machine Intelligence program at Google Arts & Culture. The three talked about the relationship between artists and AI, and whether AI could fully replace artists. “It’s the question of how you actually use technology in your art, in your practice,” Cedric said. “Do you just write a little bit of code and press ‘art, art, art, more art?’ That’s not exactly how it is.”

Be audacious, and think bigger.

Astronaut Mae Jemison, the first woman of color to go into space, leads 100 Year Starship, an initiative to make sure humans can travel to another star in the next century. “When you look at space exploration, the audacity of it makes a difference,” she said.
“I don’t think Mars pushes us hard enough.” She was joined in the talk by Sheperd Doeleman, a Harvard astrophysicist who helped construct the first-ever photo of a black hole—an audacious project in itself.

Caption: Astronaut Mae Jemison.

Technology can push you to create new things.

Claire Evans, the singer of the band YACHT, incorporated machine learning into the creation of their new album with help from Google’s Magenta, a research project that explores the role of ML in art. She used Magenta to create new melodies based on YACHT’s back catalog. “It forcibly pushed us outside of our comfort zone, and forced us to play differently and think differently about how we work,” she said. She was joined by Googlers Adam Roberts and Jesse Engel, who work on Magenta, as well as a surprise guest, Wayne Coyne of The Flaming Lips, who discussed how his band used Magenta for their I/O performance.

Caption: Wayne Coyne of The Flaming Lips.

AI can be used to fight climate change.

Mustafa Suleyman, co-founder at DeepMind, is responsible for integrating its AI systems into Google products. He talked about how his team has made the energy from Google’s wind farms 20 percent more valuable, and has reduced energy use for Android phones. “Energy consumption is one of the largest contributors to climate change,” he said. “[We thought,] How could we as a team start to focus significant amounts of our effort on this really important problem?”

Space is full of wonder—and mystery, too.

Famed theoretical physicist Michio Kaku spoke with inventor and entrepreneur Taylor Wilson about a wide range of topics, from string theory to multiverses to why he’s determined to complete Einstein’s theory of everything. He also weighed in on the recent first image of a black hole: “A black hole is a cosmic roach motel. Everything checks in, nothing checks out.
But then the question is, where does all that stuff go?”

A feature may be a huge deal, even if you don’t use it.

Hiroshi Lockheimer, Google’s senior vice president of Platforms & Ecosystems, sat down with writer and podcaster Florence Ion to share insights about the latest from Android, Chrome, Chrome OS and Google Play. And he weighed in on one of Android Q’s newest features: Dark Mode. “I will say, I’m personally not a huge Dark Mode person,” Hiroshi admitted. “I’m an outlier. But your feedback was heard.”

Different is the new normal.

Elise Roy, an inclusive design strategist, struggled to prove she was “normal” after becoming deaf at age 10. Eventually she realized that small design changes can make a huge difference in her life. Something as small as a bright red hearing aid “created this huge shift in my life,” she said. “It allowed me to celebrate my difference and allowed others to join in.” Two Googlers, Michael Brenner and Irene Alvarado, also took the stage to discuss another inclusive project: Euphonia, an effort to help computers understand diverse speech patterns.

Caption: Elise Roy and an audience member demonstrate the “momentary disabilities” everyone faces from time to time.

The most brilliant AI is inspired by how the human brain functions.

Geoffrey Hinton, Google Fellow and Turing Award winner, spoke with Nicholas Thompson, editor in chief of “Wired,” about why he kept working on neural nets when the rest of the AI community started to back away from the concept in the 90s: “You have two options. You can program it, or you can learn. This had to be the right way to go.” Although he says, “We are neural nets—anything we can do they can do,” he emphasized that he's not trying to recreate the brain, but instead “looking at the brain and saying, this thing works.
If we want to make something else that works, let’s look at it for inspiration.”

Engineer for lasting innovation.

Astro Teller, Captain of Moonshots at X, talked about the concept of lasting innovation and the role ethics and diversity of perspectives play. “The real issue is whether in the long-term society is happy with the thing that you put into society.” Astro emphasized that lasting innovation holds itself accountable to the communities in which it operates and society at large.
Travel planning is complicated. The number of tools and amount of information you need to sift through when deciding where to go, where to stay and what flight to take can be time consuming and overwhelming. That’s why today, we’re simplifying the way we help travelers plan trips with Google across devices.

When you’re planning a trip—whether you’re on desktop or mobile—we want to help you find the information you need, fast. Last year, we simplified trip planning by making navigation between Google Flights, Hotels, and Trips easier on smartphones. We’re now rolling this out on desktop as well. You can either go to google.com/travel or search for something like “hotels in Tokyo” or “Vancouver” to find travel information from a variety of sources in one place.

As you plan a trip, your research and reservations will be organized for you in Trips. As we continue to evolve Google Trips, we’re making this information more accessible at google.com/travel, and in Google Search and Google Maps. We’re also adding a few new features to make planning and organizing your trips easier.

One place for all of your trip details

Last year, we started adding your trip reservations for things like hotels and flights to a trip timeline for your upcoming trips, when you’re signed into your Google account and you’ve received a confirmation in Gmail. When you go to google.com/travel, you can now make edits directly to your trips timeline, and in a few weeks you’ll be able to manually add new reservations as well.

Whether you’re packing your bags or finalizing your travel dates, the weather is an important part of every trip. You’ll now see the weather for any upcoming or potential trips at google.com/travel—so you can make sure you’re prepared, rain or shine.

Keep track of research and keep planning

It often takes days or weeks to plan a trip. When you need to pick up planning again, we’ll keep track of your trip research across Google.
Recent searches, saved places and flights you’re tracking are added automatically to your trips when you’re signed into your Google Account. Soon, we’ll add viewed things to do and saved and viewed hotels to your trips. When you want to continue planning, all of your research will be waiting for you at google.com/travel. If you don’t want to see private results, you can opt out by adjusting your results and web & app activity settings.

When you’re ready to continue researching other parts of your trip, scroll down to see travel articles and find out more about a destination, like suggested day plans, popular restaurants around your hotel and events happening during your dates.

Continue planning on-the-go

When you’re on-the-go or visiting a new place, we’ll do more to highlight things to do, restaurants and more with Google Maps. For instance, last year we made it easier to find the best places to see and things to do when using Google Maps to explore a new place. Now, we’ll also help you get the lay of the land when you’re traveling by pointing out popular neighborhoods nearby and what they’re known for.

And in the next few months, your trips—including reservations for things like hotels and restaurants—will be accessible in Google Maps, too.

Our goal is to simplify trip planning by helping you quickly find the most useful information and pick up where you left off on any device. We’ll continue to make planning and taking trips easier with Google Maps, Google Search and google.com/travel—so you can get out and enjoy the world.