Posted by Erica Hanson, Program Manager
ARUA, UGANDA - Samuel Mugisha is a 23-year-old university student with a laugh that echoes off every wall and a mind determined to make change. Recently he heard from a healthcare worker that many children at a local clinic were missing vaccinations, so he decided to take a walk. He toured his community, neighbor to neighbor, and asked one simple question: “Can I see your vaccination card?”
In response he was given dirt-stained, wrinkled, torn pieces of paper holding life-or-death information - all written in scribble.
He squinted, held the cards to the light, rubbed them on his pant leg, but to no avail. They were impossible to read. As Samuel put it, “They were broken.”
From the few cards he could read, Samuel noted children who had missed several vaccinations - they were unknowingly playing the odds, waiting to see if disease would find them.
Without hesitation, Samuel got right to work, determined to fix the healthcare system with technology.
He first brought together his closest friends from Developer Student Clubs (DSC), a program supporting students impacting their communities through tech. He asked them: “Why can’t technology solve our problem?”
This newly formed team, including Samuel, Joshwa Benkya and Norman Acidri, came up with a twofold plan:
The idea came together right as Developer Student Clubs launched its first Solution Challenge, an open call for all members to submit projects they recently imagined. These young developers had to give it a shot. They created a model, filled out an application, and pitched the idea. After waiting a month, they heard back - their team won the competition! Their idea was selected from a pool of 170 applicants across India, Africa, and Indonesia. In other words, everything was about to change.
In a country where talent can go unnoticed and problems often go unsolved, this new team had pushed through the odds. Developer Student Clubs is a platform for these types of bold thinkers. Students who view the issues of their region not simply as obstacles to overcome, but chances to mend their home, build a better life for themselves, and transform the experiences of their people.
The goal of the Solution Challenge, and all other DSC programs, is to educate young developers early and equip them with the right skills to make an impact in their community.
In this case, office space in Uganda was expensive and hard to find. Samuel’s team previously had few chances to all work under the same roof. After winning the challenge, Developer Student Clubs helped them find a physical space of their own to come together and collaborate - a simple tool, but one that led to a turning point. As Samuel described it,
With this new space to work, DSC then brought some of Africa’s best Google Developer Group Leads directly to the young developers. In these meetings, the students were given high-level insights on how to best leverage Android, Firebase, and Presto to propel their product forward. As Samuel put it:
As a result, the team realized that with the scarcity of internet in Uganda, Firebase was the perfect technology to build with - allowing healthcare workers to use the app offline but “check in” and receive updates when they were able to find internet.
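The offline pattern the team leaned on can be sketched without any Firebase specifics: queue writes locally while offline, then replay them when connectivity returns. The class and method names below are our own invention for illustration; Firebase's offline persistence handles this automatically:

```python
# Sketch of the offline-first pattern (hypothetical names for
# illustration; Firebase's offline persistence does the queueing
# and syncing automatically).

class OfflineFirstStore:
    def __init__(self):
        self.pending = []   # writes made while offline
        self.server = {}    # stands in for the remote database
        self.online = False

    def record_vaccination(self, child_id, vaccine):
        # Health workers keep recording doses even with no connectivity.
        if self.online:
            self.server[child_id] = vaccine
        else:
            self.pending.append((child_id, vaccine))

    def go_online(self):
        # "Check in": replay the queued writes, then clear the queue.
        self.online = True
        for child_id, vaccine in self.pending:
            self.server[child_id] = vaccine
        self.pending.clear()
```

A worker could record visits all day in the field and sync once back in range of the clinic's connection.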
Although the app has made impressive strides since winning the competition, this young team knows they can make it even better. They want to improve its usability by implementing more visuals and are working to create a version for parents, so families can track the status of their child’s vaccination on their own.
While there is plenty of work ahead, with these gifted students and Developer Student Clubs taking each step forward together, any challenge seems solvable.
What has the team been up to recently? From August 5th-9th they attended the Startup Africa Roadtrip, an intensive training week on how best to refine a startup business model.
Today we want to walk through some updated analysis of the load-time benefit that prerendering can provide. AMP is designed to reduce page load time, and one of the most important ways Google Search reduces page load time is through privacy-preserving prerendering of AMP documents before a link is clicked.
The AMP framework has been designed to understand the layout of all page content and the loading status of all resources, so it can determine the time when all "above the fold" content has loaded. It also knows when the document is prerendered and when it is displayed. Thus, the AMP framework can compute the time from click until the above the fold content is displayed. AMP measures page load speed with a custom metric called First Viewport Ready (FVR). This is defined as the point in time "when non-ad resources above the fold fired their load event measured from the time the user clicks (So takes pre-rendering into account)". If an AMP document is fully prerendered this metric will be 0. If prerendering was not complete at the time of click or if the document was not prerendered at all, then the metric will be greater than 0.
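The arithmetic behind FVR is simple enough to sketch. The following is our own simplification, not the AMP runtime's code, with hypothetical timestamp names measured from a common clock:

```python
def first_viewport_ready(click_time, above_fold_loaded_time):
    """Time from the user's click until above-the-fold content has
    loaded. If prerendering finished before the click, the content
    was already there, so FVR is 0."""
    return max(0.0, above_fold_loaded_time - click_time)
```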
Google Search prerenders some AMP documents and not others so we are able to see the impact that prerendering has on FVR. The chart below shows percentiles for FVR with and without prerendering. FVR is 0 when the AMP framework successfully completes prerendering before the document is displayed.
First Contentful Paint (FCP) is a page load speed metric measured by the browser, and it is available for all documents, not just AMP documents. FCP is the point in time when the first bit of content from the DOM is rendered. A high FCP indicates that a page is definitely slow, but a low FCP does not necessarily mean that a page loads quickly, since the first bit rendered may not be important content. FCP is still useful, but because AMP better understands which content is visible, FVR gives a clearer picture of when meaningful content becomes visible.
FCP is not aware of prerendering, so AMP computes a prerender-aware derived metric, Prerender-adjusted First Contentful Paint (PFCP), that subtracts out the time before the click. When not prerendered, PFCP = FCP. FCP also decreases with prerendering, but the difference is less dramatic than for FVR.
It may be surprising that median prerendered PFCP is higher than median prerendered FVR. This happens because the browser has to draw the content to the screen after the click. PFCP includes that time, while FVR does not.
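A small sketch (our own variable names, not AMP's implementation) makes the relationship concrete:

```python
def prerender_adjusted_fcp(fcp_since_navigation, click_since_navigation):
    """PFCP: the browser's FCP with the time before the click
    subtracted out. Without prerendering, navigation starts at the
    click, so click_since_navigation is 0 and PFCP == FCP."""
    return max(0.0, fcp_since_navigation - click_since_navigation)
```

Unlike FVR, PFCP still includes the post-click paint work, which is why a prerendered PFCP can exceed a prerendered FVR of 0.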
Prerendering AMP documents leads to substantial improvements in page load times. Page load time can be measured in different ways, but they consistently show that prerendering lets users see the content they want faster. For now, only AMP can provide the privacy preserving prerendering needed for this speed benefit. In the future, new web platform features, such as Signed Exchanges, will bring privacy-preserving instant loading to non-AMP documents too.
Posted by Vikram Tank (Product Manager), Coral Team
Coral’s had a busy summer working with customers, expanding distribution, and building new features — and of course taking some time for R&R. We’re excited to share updates, early work, and new models for our platform for local AI with you.
The compiler has been updated to version 2.0, adding support for models built using post-training quantization (full integer quantization only; previously, we required quantization-aware training) and fixing a few bugs. As the TensorFlow team mentions in their Medium post, “post-training integer quantization enables users to take an already-trained floating-point model and fully quantize it to only use 8-bit signed integers (i.e. `int8`).” In addition to reducing the model size, models quantized with this method can now be accelerated by the Edge TPU found in Coral products.
We've also updated the Edge TPU Python library to version 2.11.1 to include new APIs for transfer learning on Coral products. The new on-device back-propagation API allows you to perform transfer learning on the last layer of an image classification model. The last layer of a model is removed before compilation and implemented on-device to run on the CPU. It allows for near-real-time transfer learning and doesn’t require you to recompile the model. Our previously released imprinting API has been updated to allow you to quickly retrain existing classes or add new ones while leaving other classes alone. You can now even keep the classes from the pre-trained base model. Learn more about both options for on-device transfer learning.
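To give a feel for the imprinting idea (this is a toy sketch in plain Python, not the Edge TPU library's API): weight imprinting sets each class's last-layer weights to the normalized mean of that class's embeddings, which is why classes can be retrained or added independently of one another:

```python
import math

def _normalize(v):
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

class ImprintedClassifier:
    """Toy last-layer classifier illustrating weight imprinting."""

    def __init__(self):
        self.weights = {}  # class label -> imprinted weight vector

    def imprint(self, label, embeddings):
        # Average the example embeddings and normalize; each class's
        # weights are set independently of the others.
        dim = len(embeddings[0])
        mean = [sum(e[i] for e in embeddings) / len(embeddings)
                for i in range(dim)]
        self.weights[label] = _normalize(mean)

    def classify(self, embedding):
        # Predict the class whose imprinted weights best align with
        # the input embedding (cosine similarity via dot product).
        e = _normalize(embedding)
        return max(self.weights,
                   key=lambda lbl: sum(w * x
                                       for w, x in zip(self.weights[lbl], e)))
```

In the real library the embeddings come from the frozen base model running on the Edge TPU; only this tiny last layer changes.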
Until now, accelerating your model with the Edge TPU required that you write code using either our Edge TPU Python API or in C++. But now you can accelerate your model on the Edge TPU when using the TensorFlow Lite interpreter API, because we've released a TensorFlow Lite delegate for the Edge TPU. The TensorFlow Lite Delegate API is an experimental feature in TensorFlow Lite that allows for the TensorFlow Lite interpreter to delegate part or all of graph execution to another executor—in this case, the other executor is the Edge TPU. Learn more about the TensorFlow Lite delegate for Edge TPU.
Coral has also been working with Edge TPU and AutoML teams to release EfficientNet-EdgeTPU: a family of image classification models customized to run efficiently on the Edge TPU. The models are based upon the EfficientNet architecture to achieve the image classification accuracy of a server-side model in a compact size that's optimized for low latency on the Edge TPU. You can read more about the models’ development and performance on the Google AI Blog, and download trained and compiled versions on the Coral Models page.
And, as summer comes to an end, we also want to share that Arrow offers a student and teacher discount for those looking to experiment with the boards in class or the lab this year.
We're excited to keep evolving the Coral platform. Please keep sending us feedback at coral-support@google.com.
We have invested heavily in our API and service infrastructure to improve performance and security and to add features developers need to build world-class APIs. As we make changes we must address features that are no longer compatible with the latest architecture and business requirements.
The JSON-RPC protocol (http://www.jsonrpc.org/specification) and Global HTTP Batch are two such features. Our support for these features was based on an architecture using a single shared proxy to receive requests for all APIs. As we move toward a more distributed, high-performance architecture where requests go directly to the appropriate API server, we can no longer support these global endpoints.
We had originally planned to decommission these features by March 25, 2019. However, it came to our attention that a few highly impacted customers might not have received the earlier notification.
As a result, we are extending the deprecation timeline to Aug 12, 2020, when we will discontinue support for both these features.
Starting February 2020 and running through August 2020, we will periodically inject errors for short windows of time. Closer to February 2020, we will provide exact details and schedule of these error injection windows.
We know that these changes have customer impact and have worked to make the transition steps as clear as possible. Please see the guidance below which will help ease the transition.
To identify whether you use JSON-RPC, you can check whether you send requests to "https://www.googleapis.com/rpc" or "https://content.googleapis.com/rpc". If you do, you should migrate.
A batch request is homogeneous if the inner requests are addressed to the same API, even if addressed to different methods of that API. Homogeneous batching will still be supported, but through API-specific batch endpoints. If you are currently forming homogeneous batch requests, whether using Google API client libraries, non-Google client libraries, or no client library at all (i.e. making raw HTTP requests), you should migrate.
A batch request is heterogeneous if the inner requests go to different APIs. Heterogeneous batching will not be supported after the turndown of the Global HTTP batch endpoint. If you are currently forming heterogeneous batch requests, change your client code to send only homogeneous batch requests; i.e. you should migrate.
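The required migration amounts to grouping inner requests by the API they target. A language-neutral sketch (the tuple-based request shape here is hypothetical, not an actual client-library type):

```python
from collections import defaultdict

def split_into_homogeneous_batches(requests):
    """Group inner requests by the API they target.

    Each request is modeled as an (api, method) tuple. One
    heterogeneous batch becomes one homogeneous batch per API, and
    each can then be sent to that API's own batch endpoint.
    """
    batches = defaultdict(list)
    for api, method in requests:
        batches[api].append((api, method))
    return dict(batches)
```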
Clients will need to make the changes outlined below to migrate.
If you are using JSON-RPC client libraries (either the Google published libraries or other libraries), switch to REST client libraries and modify your application to work with REST client libraries.
Example code for JavaScript

Before:
// JSON-RPC request for the list method
gapi.client.rpcRequest('zoo.animals.list', 'v2', {name: 'giraffe'}).execute(x => console.log(x));

After:
// REST request for the list method
gapi.client.zoo.animals.list({name: 'giraffe'}).then(x => console.log(x));
If you are not using client libraries (i.e. making raw HTTP requests):
Example code

Before:
// Request URL (JSON-RPC)
POST https://content.googleapis.com/rpc?alt=json&key=xxx
// Request Body (JSON-RPC)
[{
  "jsonrpc": "2.0",
  "id": "gapiRpc",
  "method": "zoo.animals.list",
  "apiVersion": "v2",
  "params": {"name": "giraffe"}
}]

After:
// Request URL (JSON-REST)
GET https://content.googleapis.com/zoo/v2/animals?name=giraffe&key=xxx
If you are currently forming heterogeneous batch requests, change your client code to send only homogenous batch requests.
Example code

The example demonstrates how to split a heterogeneous batch request for two APIs (urlshortener and zoo) into two homogeneous batch requests.

Before:
// Heterogeneous batch request example.
// Notice that the outer batch request contains inner API requests
// for two different APIs.

// Request to urlshortener API
request1 = gapi.client.urlshortener.url.get({"shortUrl": "http://goo.gl/fbsS"});
// Request to zoo API
request2 = gapi.client.zoo.animals.list();
// Request to urlshortener API
request3 = gapi.client.urlshortener.url.get({"shortUrl": "https://goo.gl/XYFuPH"});
// Request to zoo API
request4 = gapi.client.zoo.animals.get({"name": "giraffe"});

// Create a single heterogeneous batch request object
heterogeneousBatchRequest = gapi.client.newBatch();
// Add the four inner requests
heterogeneousBatchRequest.add(request1);
heterogeneousBatchRequest.add(request2);
heterogeneousBatchRequest.add(request3);
heterogeneousBatchRequest.add(request4);
// Print the heterogeneous batch response
heterogeneousBatchRequest.then(x => console.log(x));

After:
// Split the heterogeneous batch request into two homogeneous batch requests.

// Request to urlshortener API
request1 = gapi.client.urlshortener.url.get({"shortUrl": "http://goo.gl/fbsS"});
// Request to zoo API
request2 = gapi.client.zoo.animals.list();
// Request to urlshortener API
request3 = gapi.client.urlshortener.url.get({"shortUrl": "https://goo.gl/XYFuPH"});
// Request to zoo API
request4 = gapi.client.zoo.animals.get({"name": "giraffe"});

// Create a homogeneous batch request object for urlshortener
homogeneousBatchUrlshortener = gapi.client.newBatch();
// Add the two urlshortener requests
homogeneousBatchUrlshortener.add(request1);
homogeneousBatchUrlshortener.add(request3);

// Create a homogeneous batch request object for zoo
homogeneousBatchZoo = gapi.client.newBatch();
// Add the two zoo requests
homogeneousBatchZoo.add(request2);
homogeneousBatchZoo.add(request4);

// Print the two homogeneous batch responses
Promise.all([homogeneousBatchUrlshortener, homogeneousBatchZoo])
    .then(x => console.log(x));
If you are using Google API Client Libraries, these libraries have been regenerated to no longer make requests to the global HTTP batch endpoint. We recommend that clients using these libraries upgrade to the latest version if possible. Please see the language-specific guidance below for the minimum Google API Client Library version to upgrade to.
Update all Google API Service packages (`com.google.apis`) to a version where the supporting library version is 1.23.1 or higher. For example, upgrade `com.google.apis:google-api-services-drive` from version `v3-rev159-1.22.0` to `v3-rev20190620-1.30.1`.
Code that uses `com.google.api.client.googleapis.batch.BatchRequest` does not need to change. For example:

Drive client = Drive.builder(transport, jsonFactory, credential)
    .setApplicationName("BatchExample/1.0")
    .build();
BatchRequest batch = client.batch();

Rebuild against the updated packages and batch requests will be sent to an API-specific endpoint such as `/batch/library/v1` instead of the global `/batch` endpoint.
Example code

Before:
HttpRequestBatch batch(service->transport());

After:
HttpRequestBatch batch(service->transport(), service->batch_url());
For help on migration, consult the API documentation or tag Stack Overflow questions with the 'google-api' tag.
Once a year, we invite community organizers and influencers from developer groups that support diversity and inclusion in their local tech ecosystem to the Women Techmakers Summit Europe. The Women Techmakers Summit is designed to provide training opportunities, share best practices, show success stories and build meaningful relationships. The fourth edition of the WTM Summit in Europe took place in Warsaw, one of Europe’s most innovative tech and startup ecosystems.
The Women Techmakers Summit hosted 120 people, women and men who are leading tech communities across Europe. With more than half of the sessions delivered by community influencers, the group came together to share best practices, learn from each other, and discuss all things related to diversity & inclusion. “A fantastic opportunity to meet other community organizers across Europe and learn from each other.”
We also invited role models to draw inspiration and motivation from. Head of Google for Startups, Agnieszka Hryniewicz-Bieniek, and Cloud Engineer, Ewa Maciaś, demonstrated that stepping out of our comfort zone is something we should do more and more. No one has the right answers from the start but by trying out new ways, we can carve our individual paths. Fear of failure is real. It should not keep us from experimenting, though.
Google’s Natalie Villalobos, head of the Women Techmakers program, and Emma Haruka Iwao, record breaker for calculating the most accurate value of Pi with Google Cloud, gave a glimpse into their personal stories. Their insight? Sometimes we need to go through hard times. They equipped attendees with the right mindset to push through, take charge, and succeed.
This left the attendees with the right motivation to get back to their communities: “This was my first WTM Summit, and it was an incredible experience. I met some amazing ladies and role models, and will be happy to share the inspiration I got with my local community.”
“Being at the WTM Summit felt like being inside a family. I felt really included like at no conference before." To make everyone feel welcome, a code of conduct was visible to all attendees, and prayer and parent spaces were provided. The summit itself was meant to inspire community organizers and influencers to carry the learnings back to their communities.
One of the core elements of Women Techmakers is creating and providing community for women in tech. Women Techmakers Ambassadors drive diversity and inclusion initiatives in their local tech communities to help bring more women into the industry. In Europe, more than 150 WTM Ambassadors from 25 countries support their local tech communities to close the gap between the number of women and men in the industry. Meetup organizers and community advocates who want to achieve parity can join the Women Techmakers program. As members, they are given the tools and opportunities to change the narrative.
If you are interested in joining the WTM Ambassadors Program, reach out to WTM-Europe@google.com
In celebration of International Women’s Day, Women Techmakers hosted its sixth annual summit series to acknowledge and celebrate women in the tech industry, and to create a space for attendees to build community, hear from industry leaders, and learn new skills. The series featured 19 summits and 305 meetups across 87 countries.
This year, Women Techmakers partnered with the Actions on Google team to host technical workshops at these events so attendees could learn the fundamental concepts to develop Actions for the Google Assistant. Together, we created hundreds of new Actions for the Assistant. Check out some of the highlights of this year’s summit in the video below:
If you couldn’t attend any of our meetups this past year, we’ll cover our technical workshops now so you can start building for the Assistant from home. The technical workshop kicked off by introducing Actions on Google — the platform that enables developers to build Actions for the Google Assistant. Participants got hands-on experience building their first Action with the following features:
During Codelab level 1, participants learned how to parse the user’s input using Dialogflow, a tool that uses machine learning to act as a natural language processor (NLP). Dialogflow processes what the user says and extracts the important information from that input to identify how to fulfill the user’s request. Participants configured Dialogflow and connected it to their code’s back end using Dialogflow’s inline editor. In the editor, participants added their code and tested their Action in the Action Simulator.
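To illustrate the concept only (Dialogflow uses trained ML models, not keyword lists like this toy), intent matching plus parameter extraction can be caricatured as:

```python
# Toy intent matcher: a crude stand-in for what Dialogflow's models do.
# All intent names, phrases, and entities here are invented examples.
INTENTS = {
    "favorite_animal": ["favorite animal", "like best"],
    "animal_facts": ["tell me about", "facts"],
}

KNOWN_ANIMALS = {"giraffe", "lion", "elephant"}

def parse(utterance):
    """Return (intent, parameters) extracted from the user's input."""
    text = utterance.lower()
    intent = next((name for name, phrases in INTENTS.items()
                   if any(p in text for p in phrases)), "fallback")
    params = {w for w in text.replace("?", "").split() if w in KNOWN_ANIMALS}
    return intent, params
```

The fulfillment code then only has to handle a small set of intents and typed parameters rather than raw text.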
In Codelab level 2, participants continued building on their Action, adding features such as:
Instead of using Dialogflow’s inline editor, participants set up Cloud Functions for Firebase as their server.
You can learn more about developing your own Actions here. To support developers’ efforts in building great Actions for the Google Assistant, the team also has a developer community program.
Alex Eremia, a workshop attendee, reflected, “I think voice applications will have a huge impact on society both today and in the future. It will become a natural way we interact with the items around us.”
From keynotes and fireside chats to interactive workshops, the Women Techmakers summit attendees enjoyed a mixture of technical and inspirational content. If you’re interested in learning more and getting involved, follow WTM on Twitter, check out our website, and sign up to become a member.
To learn more about Actions on Google and how to build for the Google Assistant, be sure to follow us on Twitter, and join our Reddit community!
Recently at Google I/O, we gave you a sneak peek at our new Local Home SDK, a suite of local technologies to enhance your smart home integrations. Today, the SDK is live as a developer preview. We've been working hard testing the platform with our partners, including GE, LIFX, Philips Hue, TP-Link, and Wemo, and are excited to bring you these additional technologies for connecting smart devices to the Google Assistant.
Figure 1: The local execution path
This SDK enables developers to more deeply integrate their smart devices into the Assistant by building upon the existing Smart Home platform to create a local execution path via Google Home smart speakers and Nest smart displays. Developers can now run their business logic to control new and existing smart devices in JavaScript that executes on the smart speakers and displays, benefitting users with reduced latency and higher reliability.
The SDK introduces two new intents, IDENTIFY and REACHABLE_DEVICES. The local home platform scans the user's home network via mDNS, UDP, or UPnP to discover any smart devices connected to the Assistant, and triggers IDENTIFY to verify that the device IDs match those returned from the familiar Smart Home API SYNC intent. If the detected device is a hub or bridge, REACHABLE_DEVICES is triggered and treats the hub as the proxy device for communicating locally. Once the local execution path from Google Home to a device is established, the device properties are updated in Home Graph.
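The ID-matching step of discovery can be sketched as follows; the data shapes are our own simplification of what an IDENTIFY handler works with, not the SDK's actual types:

```python
def identify_local_devices(scan_results, synced_device_ids):
    """Match locally scanned devices against device IDs returned by
    SYNC; only devices the Assistant already knows about are given a
    local execution path."""
    return [d for d in scan_results if d["device_id"] in synced_device_ids]
```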
Figure 2: The intents used for each execution path
When a user triggers a smart home Action that has a local execution path, the Assistant sends the EXECUTE intent to the Google Nest device rather than the developer's cloud fulfillment. The developer's JavaScript app is invoked, which then triggers the Local Home SDK to send control commands to the smart device over TCP, UDP socket, or HTTP/HTTPS requests. By defaulting to local execution rather than the cloud, users experience faster fulfillment of their requests. The execution requests can still be sent to the cloud path in case local execution fails. This redundancy minimizes the possibility of a failed request, and improves the overall user experience.
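The local-first-with-cloud-fallback behavior described above reduces to a simple try/except pattern. This is a schematic sketch with invented function names, not the Local Home SDK API:

```python
def execute_command(command, local_execute, cloud_execute):
    """Prefer the local execution path; fall back to the cloud on failure."""
    try:
        return local_execute(command), "local"
    except Exception:
        # Local path failed (device unreachable, timeout, ...):
        # the same EXECUTE request is retried via cloud fulfillment.
        return cloud_execute(command), "cloud"
```

This redundancy is what keeps a flaky local network from turning into a failed user request.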
Additional features of the Local Home platform include:
Figure 3: Local Home configuration tool in the Actions console
JavaScript apps can be tested on-device, allowing developers to employ familiar tools like Chrome Developer Console for debugging. Because the Local Home SDK works with the existing smart home framework, you can self-certify new apps through the Test suite for smart home as well.
To learn more about the Local Home platform, check out the API reference, and get started adding local execution with the developer guide and samples. For general information covering how you can connect smart devices to the Google Assistant, visit the Smart Home documentation, or check out the Local Technologies for the Smart Home talk from Google I/O this year.
You can send us any feedback you have through the bug tracker, or engage with the community at /r/GoogleAssistantDev. You can tag your posts with the flair local-home-sdk to help organize discussion.
We’re thrilled to announce we’ve expanded our collaboration with PayPal to make payments easy and seamless no matter how or where your customers like to shop. Now, you’ll be able to accept PayPal with Google Pay on your app or website in all 24 countries where your customers can link their PayPal account to Google Pay.
Here are 5 ways this integration can add value to your business:
Hundreds of millions of users already have their payment methods saved to their Google Account. And as of 2018, customers who use their PayPal account to make a purchase on a Google app or service like Google Play and YouTube can automatically choose that PayPal account when they pay with Google Pay—no new setup required. When you enable PayPal as a payment method on your Google Pay integration, all of these customers will be able to seamlessly check out on your website or app.
Users will be able to choose PayPal—or any other payment method—right from the Google Pay payment sheet.
Once users link their PayPal account, they won’t need to sign in to PayPal when they use it with Google Pay. This means they’ll enjoy fewer steps at checkout, which often leads to higher conversion rates. In addition, your customers will get all the advantages that come with their PayPal account—like Purchase Protection and Return Shipping—along with Google Pay’s fast, simple checkout experience and increased security.
Google Pay lets customers keep all of their payment methods in one place. They’ll easily be able to switch between debit cards, credit cards, their PayPal account, and more just by choosing Google Pay at checkout.
PayPal merchants who enable the acceptance of PayPal through Google Pay can continue to get the PayPal benefits they already enjoy. This includes the ability to receive payments directly to their PayPal Business Account within minutes, no minimum processing requirements, and seller protection on eligible transactions.
If you’ve already implemented Google Pay, enabling PayPal is as easy as adding it to your list of allowed payment methods in the body of your requests:
const payPalPaymentMethod = {
  type: "PAYPAL",
  parameters: {
    purchase_context: {
      purchase_units: [{
        payee: {
          merchant_id: "<YOUR_PAYPAL_ACCOUNT_ID>"
        }
      }]
    }
  },
  tokenizationSpecification: {
    type: "DIRECT"
  }
};

paymentRequest.allowedPaymentMethods = [payPalPaymentMethod, cardPaymentMethod];
Once you’ve done that, you’ll receive a token you can send to your servers as soon as your customers confirm their transaction. You’ll use this token to issue a call against PayPal’s payment service—see PayPal’s documentation for more details and best practices.
If you haven’t implemented Google Pay yet, check out our online API introduction video or our step-by-step guided codelabs for Android and Web to learn more about it. If you prefer to explore on your own, read our documentation.
We’re excited to offer developers the best of both worlds with Google Pay and PayPal, all while making payments simpler for customers and businesses around the world. Stay tuned for more updates.
Students and working professionals use Google Docs every day to help enhance their productivity and collaboration. The ability to easily share a document and simultaneously edit it together are some of our users' favorite product features. However, many small businesses, corporations, and educational institutions often find themselves needing to automatically generate a wide variety of documents, ranging from form letters to customer invoices, legal paperwork, news feeds, data processing error logs, and internally-generated documents for the corporate CMS (content management system).
Mail merge is the process of taking a master template document along with a data source and "merging" them together. This process makes multiple copies of the master template file and customizes each copy with corresponding data of distinct records from the source. These copies can then be "mailed," whether by postal service or electronically. Using mail merge to produce these copies at volume without human labor has long been a killer app since word processors and databases were invented, and now, you can do it in the cloud with G Suite APIs!
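At its core, the merge step is templating: one customized copy of the master document per data record. A minimal sketch using Python's standard-library Template (the Docs API itself performs the equivalent text replacement via batchUpdate requests against a copy of the template document):

```python
from string import Template

def mail_merge(template_text, records):
    """Produce one customized copy of the template per data record."""
    template = Template(template_text)
    return [template.substitute(record) for record in records]
```

In the full G Suite version, the records would come from a Sheet, the copies would be made with the Drive API, and the results sent with the Gmail API.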
While the Document Service in Google Apps Script has enabled the creation of Google Docs scripts and Docs Add-ons like GFormit (for Google Forms automation), use of Document Service requires developers to operate within the Apps Script ecosystem, possibly a non-starter for more custom development environments. Programmatic access to Google Docs via an HTTP-based REST API wasn't possible until the launch of the Google Docs API earlier this year. This release has now made building custom mail merge applications easier than ever!
Today's technical overview video walks developers through the concept and flow of mail merge operations using the Docs, Sheets, Drive, and Gmail APIs. Armed with this knowledge, developers can dig deeper and access a fully-working sample application (Python), or just skip it and go straight to its open source repo. We invite you to check out the Docs API documentation as well as the API overview page for more information including Quickstart samples in a variety of languages. We hope these resources enable you to develop your own custom mail merge solution in no time!
Posted by Stephen McDonald, Google Developers Engineer and Jose Ugia, Google Developers Engineer
At Google I/O 2019, we shared some of the new features we’re adding to Google Pay and discussed how you can use them to add value to your customers—whether you accept payments on your app or website or engage with customers beyond payments through loyalty cards, offers, event tickets, and boarding passes.
Read on for a summary of what we covered during the event. If you want to hear the full story, check out the recordings of our sessions: Building Powerful Checkout Experiences with Google Pay and Engaging Customers Beyond Payments: Tickets, Transit, and Boarding Passes.
Better checkout experiences are more likely to increase your conversions. Here’s a look at some of the ways Google Pay can help you improve your checkout process from start to finish.
In an effort to bring customers more detail and transparency, we’ve made some changes to the Google Pay API. Going forward, the Google Pay payment sheet will display pricing information, so customers can double-check their order before they confirm their purchase. We’re also adding modifiers based on transaction conditions (like shipping options), so customers can see all relevant purchase details quickly, without going back to the merchant site, leading to a faster checkout experience.
Users paying online can see the price of the order dynamically before they initiate the transaction.
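The pricing detail shown on the payment sheet is driven by the `transactionInfo` object you pass to the Google Pay API. A minimal sketch, with made-up item labels and prices, and the exact set of `type` values treated as illustrative:

```python
# Assemble a transactionInfo object with per-item display lines and a
# computed total, mirroring the JSON the Google Pay web API expects.
def build_transaction_info(items, currency_code="USD"):
    total = sum(float(item["price"]) for item in items)
    return {
        "displayItems": [
            {
                "label": item["label"],
                "type": item.get("type", "LINE_ITEM"),
                "price": item["price"],
            }
            for item in items
        ],
        "currencyCode": currency_code,
        "totalPriceStatus": "FINAL",
        "totalPrice": "%.2f" % total,
    }

info = build_transaction_info([
    {"label": "T-shirt", "price": "25.00"},
    {"label": "Shipping", "price": "4.99"},
])
```

Modifiers for transaction conditions such as shipping options amount to recomputing this object as the customer's selections change, so the sheet always reflects the final price.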
Along with these improvements to the payment sheet, we’re offering creative new button and onboarding options to encourage customers to choose Google Pay for faster checkout. To start, we launched the createButton API for web developers. This enables a dynamic purchase button that uses the right styling and colors and is localized to your user’s device or browser settings. We’ve also been experimenting with personalized buttons that display important information before users enter the checkout flow. For instance, we can show customers exactly what card they’ll be paying with or let them know if they need to sign in or set up Google Pay – and this information is displayed right on the button. As the button is hosted and rendered by Google Pay, all of this happens without you having to make any changes.
The createButton API allows you to display card information directly on the checkout button.
The Google Pay API for Passes lets you connect your business to millions of Android users by linking your loyalty programs, gift cards, offers, boarding passes, and event tickets to their Google Accounts. This year, we’re launching new capabilities and integrations that will help you engage customers at more times and places.
Your passengers can add their boarding pass to Google Pay for a seamless check-in experience. Google Pay sends passengers a high priority notification with their boarding pass a few hours before their flight so they can easily access it when needed. They’ll also receive notifications with important dynamic information like gate changes or flight delays. These notifications are high priority and will stay prominent on passengers’ phones until they are dismissed or the flight takes off.
Google’s ecosystem can help create complete user journeys across multiple touchpoints. Earlier this year, we announced the ability to check in to flights directly from the Google Assistant. Once a flight is ready for check-in, your passenger will receive a notification that takes them directly to the Assistant to complete the process. At the end of this flow, the user is issued a boarding pass that can be accessed from the Assistant or from Google Pay. This is built on top of the Passes API, which means that, as an airline, if you already support boarding passes, you can simply add the Assistant check-in integration on top.
From left to right: new high priority notifications, integration of Myki card inside of Google Maps, new transit tickets and automatic Gmail import.
We’re excited to announce that we’re making transit an open API. This means if you’re a transit provider and currently offer barcode tickets for your transportation services, you can now use the Passes API to get your tickets digitized in Google Pay. We’ll also be enhancing this API to support dynamic barcodes: the barcodes on customers’ transit tickets or passes will update every few seconds, even if their device is offline. This increases security, since QR codes that change constantly are much harder to duplicate.
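Saving a ticket through the Passes API revolves around a signed "Save to Google Pay" JWT. The sketch below shows one plausible shape of the claim set for a transit ticket; the issuer email, class ID, object ID, and state value are placeholders, and the real JWT must be signed with your service-account key before a user can save it.

```python
import time

# Build the (unsigned) claim set for a Save to Google Pay JWT carrying
# one transit ticket object. IDs and issuer are illustrative.
def build_save_to_pay_claims(issuer_email, object_id, class_id):
    return {
        "iss": issuer_email,           # service-account email address
        "aud": "google",
        "typ": "savetoandroidpay",     # claim type used by the Passes API
        "iat": int(time.time()),
        "payload": {
            "transitObjects": [
                {"id": object_id, "classId": class_id, "state": "ACTIVE"}
            ]
        },
    }

claims = build_save_to_pay_claims(
    "issuer@example-project.iam.gserviceaccount.com",
    "1234.ticket-001",
    "1234.transit-class",
)
```

With dynamic barcodes, the barcode payload inside the object is what rotates every few seconds on the device; the surrounding save flow stays the same.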
Now you can also give customers the opportunity to import your loyalty cards to Google Pay right from Gmail—just by adding some markup to your emails. When customers open the Google Pay app, they’ll be shown any loyalty cards from Gmail they haven’t added to Google Pay. With just a tap, they can add them all automatically so they can access them at any time. This feature is currently only available with loyalty programs, but we’ll be expanding to other types of passes in the future.
We’re working on making Passes available to your users on Google even if they haven’t installed the Google Pay app. We are starting with boarding passes and transit tickets, then plan to extend the same functionality to the other Passes. Stay tuned for more.
To learn more about Google Pay, visit our developer resources:
Posted by Erica Hanson, Google Developer Relations
This spring, Google and Developer Student Clubs are looking for new passionate student leaders from universities across the globe!
Developer Student Clubs is a program with Google Developers. Through in-person meetups, university students are empowered to learn together and use technology to solve real life problems with local businesses and start-ups.
Less than two years ago, DSC launched in parts of Asia and Africa. Since then, 90,000+ students have been trained on Google technologies, 500+ solutions have been built for 200+ local startups and organizations, and 170+ clubs participated in our first Solution Challenge!
Bridging the gap between theory and practical application, Google aims to provide student developers with the resources, opportunities and the experience necessary to be more industry ready.
You may be wondering what the benefits of being a Developer Student Club Lead are. Well, here are a few reasons:
Apply to be a Developer Student Club Lead at g.co/dev/dsc.
The deadline to submit applications has been extended to June 15th.
We’re committed to evolving Coral to make it even easier to build systems with on-device AI. Our team is constantly working on new product features, and content that helps ML practitioners, engineers, and prototypers create the next generation of hardware.
To improve our toolchain, we're making the Edge TPU Compiler available to users as a downloadable binary. The binary works on Debian-based Linux systems, allowing for better integration into custom workflows. Instructions on downloading and using the binary are on the Coral site.
We’re also adding a new section to the Coral site that showcases example projects you can build with your Coral board. For instance, Teachable Machine is a project that guides you through building a machine that can quickly learn to recognize new objects by re-training a vision classification model directly on your device. Minigo shows you how to create an implementation of AlphaGo Zero and run it on the Coral Dev Board or USB Accelerator.
Our distributor network is growing as well: Arrow will soon sell Coral products.
Posted by Franziska Hauck, DevRel Ecosystem Regional Lead DACH
When we look at the community landscape in programming in 2019, we find people of all backgrounds and with expertise as varied as the people themselves. There are developer groups for every imaginable interest out there. What becomes apparent, though, is that the distribution is not as balanced as it might be. In Europe, we observe that more women programmers work in front-end development and are active in the associated groups.
But what about cloud? Recently, Global Knowledge published a ranking that showed that Google Cloud certification is the most coveted achievement in the labor market. We knew that the interest was there. How could we capture it and get more women and people from diverse backgrounds involved? Indeed, we had seen women succeed, and in this very field at that. It was time to contribute to seeing more success stories come our way.
Immediately the Cloud Study Jam came to mind. This campaign is a self-study, highly individualized study jam for Google Developer Groups (GDGs) and other tech meetups. Organizers get access to study materials to help them prepare for their event, register it on the global map and conduct the activity with their attendees in any location they choose. Attendees receive free Qwiklabs credits to complete a number of courses of their choice. The platform even offers a complete Google Cloud environment - the best training ground for aspiring and advanced programmers!
GDGs form one pillar of our community programs. One of the other cornerstones is the Women Techmakers program, with which we engage and involve organizers interested in increasing diversity worldwide. Cloud Study Jams in the local groups, with dedicated Women Techmakers, seemed like the most natural fit for us. And, as we soon realized, the organizers thought so too.
For us - Almo, Abdallah and Franziska - that was the start of a great initiative and an even bigger road trip. Together with local volunteers from Google and the groups, we held 11 Cloud Study Jams all over Europe in March and April.
Over 450 attendees, 80% of them women, learned about Cloud technologies.
This was some of their feedback:
“This made me aim for the Cloud Certificate exam as my next goal in my career!”
“I found everything useful! The labs are interesting... and I would like to have more meetups like this.”
“The labs are interesting, at least both that we did. I would like to have more meetups like this!”
As surmised, many attendees were indeed front-end developers. It was amazing to see that, with the courses, they “converted” to Cloud and are now going forward as ambassadors. We also saw quite a big number of data scientists and back-end developers. All in all, it was a great mix of enthusiastic participants.
Cloud Study Jams are a great way to engage group members through guided materials. The way they are designed makes it easy for organizers to focus on the participants. Since attendees follow their chosen courses on their own, organizers act as facilitators and need only jump in when organizational questions arise.
If you would like to hold a Cloud Study Jam with your group or organization, you will find more information here. Register your event via the link to get access to the free Qwiklabs credits for your attendees.
We are very much looking forward to supporting you!
Almo, Abdallah, Franziska & the European DevRel Ecosystem
Posted by Ben Fried, VP, CIO, & Chief Domains Enthusiast
Celebrating 100 of our favorite .app websites. See the list here.
A year ago, we launched .app, the first open top-level domain (TLD) with built-in security through HSTS preloading. Since then, hundreds of thousands of people have registered .app domains, and we want to take a moment to celebrate them.
People are making more websites and apps than ever before. A recent survey we conducted with The Harris Poll found that nearly half (48%) of U.S. respondents plan to create a website in the near future. And a lot of people, especially students, are already building on the web. Over a third (34%) of 16-24 year olds who’ve already created a website did so for a class project.
Having a meaningful domain name helps students turn their projects into reality. Take Ludwik Trammer, creator of shrew.app, who said: “The site started as a project for my graduate Educational Technology class at Georgia Tech. Getting the perfect domain gave me the initial push to turn it into the real deal (instead of making a prototype, publishing a scientific paper on it, and forgetting it).”
Helping creators launch their sites securely
With so many new creators, it’s essential that everyone does their part to make the internet safer. That’s why Google Registry designed .app to be secure by default, meaning every website on .app requires an HTTPS connection.
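Secure-by-default here means every .app site serves a Strict-Transport-Security policy, since the whole TLD is HSTS-preloaded. As a small stdlib-only illustration, here is a helper that parses such a header; the sample header value is typical but illustrative:

```python
# Parse a Strict-Transport-Security header into a dict of directives.
# Value-less directives (includeSubDomains, preload) map to True.
def parse_hsts(header):
    directives = {}
    for part in header.split(";"):
        part = part.strip()
        if not part:
            continue
        name, _, value = part.partition("=")
        directives[name.lower()] = value if value else True
    return directives

policy = parse_hsts("max-age=31536000; includeSubDomains; preload")
# policy["max-age"] is "31536000"; "preload" and "includesubdomains" are True
```

Because the preload applies to the entire TLD, browsers enforce this policy for a .app domain even before the first request is made.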
HTTPS helps keep you and your website visitors safe from bad actors who may exploit connections that aren’t secure.
“As a social application, data protection is paramount. As cyber attacks increase, the security benefits a .app domain brings was a key factor for us. We also believe that a .app domain is significantly more descriptive than a .com domain, meaning users can find us more easily! All in all it was a no brainer for us switching to .app.” -Daneh Westropp, Founder, pickle.app
There's still work to be done. One out of two people doesn’t know the difference between HTTP and HTTPS. Many major browsers (like Chrome) warn users in the URL bar when content is "not secure," but every website creator still has a shared responsibility to keep their users safe.
One year in, we’re happy to see so many people using .app to build secure websites and connect with the world. You can read more stories from .app owners here and get your own .app name at get.app. If you’re one of the millions of people planning to build a website, we hope you’ll join us in making the internet safer by taking the steps to securely launch your website.
Posted by Brahim Elbouchikhi, Director of Product Management and Matej Pfajfar, Engineering Director
We launched ML Kit at I/O last year with the mission to simplify Machine Learning for everyone. We couldn’t be happier about the experiences that ML Kit has enabled thousands of developers to create. And more importantly, user engagement with features powered by ML Kit is growing more than 60% per month. Below is a small sample of apps we have been working with.
But there is a lot more. At I/O this year, we are excited to introduce four new features.
The Object Detection and Tracking API lets you identify the prominent object in an image and then track it in real-time. You can pair this API with a cloud solution (e.g. Google Cloud’s Product Search API) to create a real-time visual search experience.
When you pass an image or video stream to the API, it will return the coordinates of the primary object as well as a coarse classification. The API then provides a handle for tracking this object's coordinates over time.
A number of partners have built experiences that are powered by this API already. For example, Adidas built a visual search experience right into their app.
The On-device Translation API allows you to use the same offline models that support Google Translate to provide fast, dynamic translation of text in your app into 58 languages. This API operates entirely on-device so the context of the translated text never leaves the device.
You can use this API to enable users to communicate with others who don't understand their language or translate user-generated content.
To the right, we demonstrate the use of ML Kit’s text recognition, language detection, and translation APIs in one experience.
We also collaborated with the Material Design team to produce a set of design patterns for integrating ML into your apps. We are open sourcing implementations of these patterns and hope that they will further accelerate your adoption of ML Kit and AI more broadly.
Our design patterns for machine learning powered features will be available on the Material.io site.
With AutoML Vision Edge, you can easily create custom image classification models tailored to your needs. For example, you may want your app to be able to identify different types of food, or distinguish between species of animals. Whatever your need, just upload your training data to the Firebase console and you can use Google’s AutoML technology to build a custom TensorFlow Lite model for you to run locally on your user's device. And if you find that collecting training datasets is hard, you can use our open source app which makes the process simpler and more collaborative.
Wrapping up
We are excited by this first year and really hope that our progress will inspire you to get started with Machine Learning. Please head over to g.co/mlkit to learn more or visit Firebase to get started right away.
Posted by the Flutter and Chrome OS teams
Chrome OS is the fast, simple, and secure operating system that powers Chromebooks, including the Google Pixelbook and millions of devices used by consumers and students every day. The latest Flutter release adds support for building beautiful, tailored Chrome OS applications, including rich support for keyboard and mouse, and tooling to ensure that your app runs well on a Chromebook. Furthermore, Chrome OS is a great developer workstation for building general-purpose Flutter apps, thanks to its support for developing and running Flutter apps locally on the same device.
Since its inception, Flutter has shared many of the same principles as Chrome OS: productive, fast, and beautiful experiences. Flutter allows developers to build beautiful, fast UIs, while also providing a high degree of developer productivity, and a completely open-source engine, framework and tools. In short, it’s the ideal modern toolkit for building multi-platform apps, including apps for Chrome OS.
Flutter initially focused on providing a UI toolkit for building apps for mobile devices, which typically feature touch input and small screens. However, we’ve been building keyboard and mouse support into Flutter since before our 1.0 release last December. And today, we’re pleased to announce that Flutter for Chrome OS is now stronger with scroll wheel support, hover management, and better keyboard event support. In addition, Flutter has always been great at allowing you to build apps that run at any size (large screen or small), with seamless resizing, as shown here in the Chrome OS Best Practices Sample:
The Chrome OS best practices sample in action
The Chrome OS Hello World sample is an app built with Flutter that is optimized for Chrome OS. This includes a responsive UI to showcase how to reposition items and have layouts that respond well to changes in size from mobile to desktop.
Because Chrome OS runs Android apps, targeting Android is the way to build Chrome OS apps. However, while building Chrome OS apps on Android has always been possible, as described in these guidelines, it’s often difficult to know whether your Android app is going to run well on Chrome OS. To help with that problem, today we are adding a new set of lint rules to the Flutter tooling to catch violations of the most important of the Chrome OS best practice guidelines:
The Flutter Chrome OS lint rules in action
With these Chrome OS lint rules in place, you’ll quickly see any problems in your Android app that would hamper it when running on Chrome OS. To learn how to take advantage of these rules, see the linting docs for Flutter Chrome OS.
But all of that is just the beginning -- the Flutter tools allow you to develop and test your apps directly on Chrome OS as well.
No matter what platform you're targeting, Flutter has support for rich IDEs and programming tools like Android Studio and Visual Studio Code. Over the last year, Chrome OS has been building support for running the Linux version of these tools with the beta of Linux on Chrome OS (aka Crostini). And, because Chrome OS also supports Android natively, you can configure the Flutter tooling to run your Android apps directly without an emulator involved.
The Flutter development tools running on Chrome OS
All of the great productivity of Flutter is available, including Stateful Hot Reload, seamless resizing, keyboard and mouse support, and so on. Recent improvements in Crostini, such as high DPI support, Crostini file system integration, easier adb, and so on, have made this experience even better! Of course, you don’t have to test against the Android container running on Chrome OS; you can also test against Android devices attached to your Chrome OS box. In short, Chrome OS is the ideal environment in which to develop and test your Flutter apps, especially when you’re targeting Chrome OS itself.
With its unique combination of simplicity, security, and capability, Chrome OS is an increasingly popular platform for enterprise applications. These apps often work with large quantities of data, whether it’s a chart, or a graph for visualization, or lists and forms for data entry. The support in Flutter for high quality graphics, large screen layout, and input features (like text selection, tab order and mousewheel), make it an ideal way to port mobile applications for the enterprise. One purveyor of such apps is AppTree, who use Flutter and Chrome OS to solve problems for their enterprise customers.
“Creating a Chrome OS version of our app took very little effort. In 10 minutes we tweaked a few values and now our users have access to our app on a whole new class of devices. This is a huge deal for our enterprise customers who have been wanting access to our app on Desktop devices.”
By using Flutter to target Chrome OS, AppTree was able to start with their existing Flutter mobile app and easily adapt it to take advantage of the capabilities of Chrome OS.
If you’d like to target Chrome OS with Flutter, you can do so today simply by installing the latest version of Flutter. If you’d like to run the Flutter development tools on Chrome OS, you can follow these instructions to get started fast. To see a real-world app built with Flutter that has been optimized for Chrome OS, check out the Developer Quest sample that the Flutter DevRel team launched at the 2019 Google I/O conference. And finally, don’t forget to try out the Flutter Chrome OS linting rules to make sure that your Chrome OS apps are following the most important practices.
Flutter and Chrome OS go great together. What are you going to build?