Two years ago, we launched ARCore, our developer platform for building augmented reality (AR) experiences. Since then, we’ve seen developers create thousands of AR apps across Android and iOS that transform the way people play, shop, learn and create together. To enable even more shared cross-platform AR experiences, we’re announcing new updates to ARCore’s Augmented Faces and Cloud Anchors APIs.
Augmented Faces on iOS
Earlier this year, we announced our Augmented Faces API, which offers a high-quality, 468-point 3D mesh that lets users attach fun effects to their faces — all without a depth sensor on their smartphone. With the addition of iOS support rolling out today, developers can now create effects for more than a billion users. We’ve also made the creation process easier for both iOS and Android developers with a new face effects template.
Improvements to Cloud Anchors
Last year, we introduced the Cloud Anchors API, which lets developers create shared AR experiences across Android and iOS. Cloud Anchors let devices create a 3D feature map from visual data onto which anchors can be placed. The anchors are hosted in the cloud so multiple people can use them to enable shared real world experiences. Cloud Anchors power a wide variety of cross-platform apps, like Just a Line, PHAROS AR and Spacecraft AR.
In our latest ARCore update, we’ve made some improvements to the Cloud Anchors API that make hosting and resolving anchors more efficient and robust. This is due to improved anchor creation and visual processing in the cloud. Now, when creating an anchor, more angles across larger areas in the scene can be captured for a more robust 3D feature map. Once the map is created, the visual data used to create the map is deleted and only anchor IDs are shared with other devices to be resolved. Moreover, multiple anchors in the scene can now be resolved simultaneously, reducing the time needed to start a shared AR experience.
These updates to Cloud Anchors are available for developers today.
Persistent Cloud Anchors and Call for Collaborators
As we look to the future, we’re taking steps to expand the scale and timeline of shared AR experiences with persistent Cloud Anchors. We see this as enabling a “save button” for AR, so that digital information overlaid on top of the real world can be experienced at any time.
Imagine working together on a redesign of your home throughout the year, leaving AR notes for your friends around an amusement park, or hiding AR objects at specific places around the world to be discovered by others.
Persistent Cloud Anchors are powering Mark AR, a social app being developed by Sybo and iDreamSky that lets people create, discover, and share their AR art with friends and followers in real-world locations. With persistent Cloud Anchors, users can keep returning to their pieces as they create and collaborate over time.
Mark AR is an app that lets people create and discover AR art in real-world locations.
Reliably anchoring AR content for every use case—regardless of surface, distance, and time—pushes the limits of computation and computer vision because the real world is diverse and always changing. By enabling a “save button” for AR, we’re taking an important step toward bridging the digital and physical worlds to expand the ways AR can be useful in our day-to-day lives.
We’re currently looking for more developers to help us explore and test persistent Cloud Anchors in real world apps at scale, before making the feature broadly available. If you’re interested in early access, you can apply here.
This week is a big one for Flutter! Today, at Google Developer Days, our flagship conference for Chinese developers, we used the keynote to announce our latest stable release: Flutter 1.9. This release is our biggest update yet with more than 1,500 PRs from more than 100 contributors. The new features and updates span a wide range, from support for macOS Catalina and iOS 13 to improved tooling support, as well as new Dart language features and new Material widgets.
At the keynote, we also announced a major milestone for Flutter’s web support, which has now been successfully integrated into the main Flutter repository, allowing developers to write for mobile, desktop, and web with the same codebase. And we showcased Tencent, one of the largest internet brands worldwide, which is using Flutter in a growing number of its mobile apps.
Let’s take a deeper look at this week’s news, starting with what’s new in Flutter 1.9.
As Apple prepares to release Catalina, the latest version of macOS, we’ve worked hard to make sure that Flutter is ready for you to upgrade. We’ve updated the end-to-end tooling experience to ensure it works well on Catalina and with Xcode 11. This includes adding support for the new Xcode build system, enabling 64-bit support throughout the toolchain, and simplifying platform dependencies.
With iOS 13 on the way, we’ve also been working to ensure your Flutter apps look great on the latest iPhone release. Flutter 1.9 includes an implementation of the iOS 13 draggable toolbar, with both long-press and drag-from-right, and supports vibration feedback. Work on iOS dark mode is also well underway with a number of pull requests already merged.
Finally, in the latest development builds, you can now turn on experimental support for Bitcode, which is Apple’s platform-independent intermediate representation of a compiled program. Submitting your app as Bitcode allows Apple to optimize your binary in the future without resubmission, and opens the door to Flutter potentially supporting platforms like watchOS and tvOS that require Bitcode for app submission.
The Material components and features also get an upgrade in Flutter 1.9. Material is one of the world’s leading open-source design systems, providing a comprehensive, flexible set of building blocks for implementing interactive user experiences across many platforms.
In this release, we provide several new widgets including ToggleButtons (left) and ColorFiltered (right).
The ToggleButtons widget bundles a row of ToggleButton widgets together, often composed of a set of Icon and Text widgets, to form a set of buttons with fully customizable look and behavior. Do you want single selection or multi-select? Do you want to require at least one selection or allow none? Do you want square or rounded edges, thick or thin borders, icons or text, etc? You can see some of these options above on the left and see how they’re implemented in the ToggleButtons sample.
As shown in the image above on the right, the ColorFiltered widget allows you to recolor a tree of child widgets just like you can recolor an image using one of several different algorithms (some of which are shown in the example screenshot above). This has many uses, for example, handling color blindness accessibility issues for your users. To see this in action, check out the ColorFiltered sample.
We’ve also added support for 24 new languages, from Afrikaans to Zulu.
The end-to-end developer experience depends not just on the features of Flutter but also on the underlying language itself. As part of the Flutter 1.9 release, we are also releasing Dart 2.5. Dart 2.5 includes a pre-release of Foreign Function Interface (FFI) support, which lets Dart call directly into code written in C, and it introduces machine learning-powered code completions for the IDE. You can learn about both of these and more in the Dart 2.5 announcement.
With this release, new projects default to Swift instead of Objective-C and Kotlin instead of Java for iOS and Android projects respectively. Since many packages are written in Swift, making it the default language removes manual work for adding those packages to an app created with the default options. Swift 5 is ABI stable, and thanks to app thinning work Apple has done in recent releases, the Swift dynamic libraries no longer need to be included in the distribution package for iOS 12.2 or greater, reducing the size of Swift applications compared to previous releases.
And as Kotlin is now the default language for new projects in Android Studio, it seems natural to make the language switch for Android also. These options are now the default for both the flutter CLI tool and the IntelliJ/Android Studio and VS Code plugins for Flutter, but you can always switch back to Objective-C or Java if you prefer.
Additionally, we’ve been working to improve Flutter’s error messages by making them more readable, more concise and more actionable.
The Flutter User Experience team has led the charge on this project; you can read the details in a separate blog post covering the work on structured error display. We’ve just started to apply these new patterns, and you can expect more error messages to take advantage of this work in coming releases.
And finally, we are very happy to announce that the flutter_web repository is deprecated now that web support has been merged into the main flutter repository! What this means is that if you have the latest builds of Flutter from the master or dev channel, you can target the web with the latest experimental version of Flutter by running flutter run -d chrome.
When you create a project, Flutter now creates a web runner via a minimal web/index.html file that bootstraps your web-compiled Flutter code. With that file in place, you can use the Flutter CLI tool or the IDE plugins to edit and run Flutter apps on the web.
Above is a screenshot of VS Code with web support enabled for Flutter. Notice the web/index.html file, along with the dropdown list allowing you to choose Chrome as your target development device. Support for web output with Flutter is still at an early phase, but this release represents a major step toward enabling production support for web development with Flutter.
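For reference, the generated bootstrap file is only a few lines long. A minimal sketch (the project name and exact markup may differ slightly between Flutter versions) looks roughly like this:

<!DOCTYPE html>
<html>
<head>
  <meta charset="UTF-8">
  <title>my_app</title>
</head>
<body>
  <!-- Loads the web-compiled Flutter application -->
  <script src="main.dart.js" type="application/javascript"></script>
</body>
</html>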
At the end of July, we announced an early adopter program designed to get a group of select Flutter applications deployed to production on the web over the next six to twelve months. We received over 1,000 submissions to the program. Unfortunately, we don’t have the capacity to support everyone who applied to join the program, but now that web support is merged into the main Flutter repository, we’re excited that everyone can experiment with this capability.
Some community experiments have already showcased Flutter’s web output:
Flutter Widget Livebook (left) is built with Flutter for web and shows Flutter widgets running live in your browser. Panache (right) is a tool for creating themes for Flutter which you can then download and drop directly into your code.
Please give this updated experimental support for Flutter on the web a try and let us know if you have any feedback.
We’re thrilled to see continuing fast growth and adoption of Flutter. Here at Google, hundreds of developers are working on more than twenty projects using Flutter, including some that are released and many that are still in development. At GDD China this week, we highlighted how Tencent, one of the largest internet brands, is using Flutter pervasively for a wide variety of projects:
Switching gears to something just for fun, if you have Google Assistant on your phone or one of the Google Nest Hub devices, try saying “OK Google. Talk to Flutter Widget Quiz.” We loved seeing this community-powered quiz that tests your knowledge of Flutter.
We love the support we’ve received from the developer community, whether in the form of blogs and articles, published apps or issues and code contributions. For more details on upgrading to Flutter 1.9, including details on how to fix any breaking changes that you might experience as you migrate your code, check out the detailed Flutter 1.9 release notes.
There’s a ton for you to try with this release, from trying out the new dart:ffi or ML-based code completion features to experimenting with Flutter for web; from the new support for Catalina and iOS 13 and new ToggleButtons and ColorFiltered widgets to testing yourself on your Flutter widget knowledge.
Now that you’ve got Flutter 1.9 in your hands, we’re excited to see what you will build with it!
Whether you're a city planner, a small business owner, or a software developer, gaining useful insights from data can help make services work better and answer important questions. But, without strong privacy protections, you risk losing the trust of your citizens, customers, and users.
Differentially-private data analysis is a principled approach that enables organizations to learn from the majority of their data while simultaneously ensuring that those results do not allow any individual's data to be distinguished or re-identified. This type of analysis can be implemented in a wide variety of ways and for many different purposes. For example, if you are a health researcher, you may want to compare the average amount of time patients remain admitted across various hospitals in order to determine if there are differences in care. Differential privacy is a high-assurance, analytic means of ensuring that use cases like this are addressed in a privacy-preserving manner.
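To make the idea concrete, here is a minimal, illustrative sketch of one classic differentially private technique, the Laplace mechanism, applied to a bounded mean such as the hospital-stay example above. This is not the API of the library described below; it simply assumes the values are clamped to a known range and that the number of records is public.

// Illustrative sketch only: an epsilon-differentially-private mean using the
// Laplace mechanism. This is not the Google differential privacy library's API.

// Sample Laplace noise with the given scale (inverse-CDF sampling).
function laplaceNoise(scale) {
  const u = Math.random() - 0.5;
  return -scale * Math.sign(u) * Math.log(1 - 2 * Math.abs(u));
}

// Private mean of `values`, assuming each value is clamped to [lo, hi]
// and the number of records is publicly known.
function privateMean(values, lo, hi, epsilon) {
  const clamped = values.map(v => Math.min(hi, Math.max(lo, v)));
  const mean = clamped.reduce((a, b) => a + b, 0) / clamped.length;
  const sensitivity = (hi - lo) / clamped.length; // max effect of one record
  return mean + laplaceNoise(sensitivity / epsilon);
}

// Example: average length of stay in days, clamped to [0, 30], epsilon = 1.
console.log(privateMean([3, 5, 2, 8, 4, 6], 0, 30, 1.0));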
Today, we’re rolling out the open-source version of the differential privacy library that helps power some of Google’s core products. To make the library easy for developers to use, we’re focusing on features that can be particularly difficult to execute from scratch, like automatically calculating bounds on user contributions. It is now freely available to any organization or developer that wants to use it.
A deeper look at the technology
Our open source library was designed to meet the needs of developers. In addition to being freely accessible, we wanted it to be easy to deploy and useful.
Here are some of the key features of the library:
Investing in new privacy technologies
We have driven the research and development of practical, differentially-private techniques since we released RAPPOR to help improve Chrome in 2014, and continue to spearhead their real-world application.
We’ve used differentially private methods to create helpful features in our products, like showing how busy a business is over the course of a day or how popular a particular restaurant’s dish is in Google Maps, and to improve Google Fi.
This year, we’ve announced several open-source, privacy technologies—Tensorflow Privacy, Tensorflow Federated, Private Join and Compute—and today’s launch adds to this growing list. We're excited to make this library broadly available and hope developers will consider leveraging it as they build out their comprehensive data privacy strategies. From medicine, to government, to business, and beyond, it’s our hope that these open-source tools will help produce insights that benefit everyone.
Acknowledgements
Software Engineers: Alain Forget, Bryant Gipson, Celia Zhang, Damien Desfontaines, Daniel Simmons-Marengo, Ian Pudney, Jin Fu, Michael Daub, Priyanka Sehgal, Royce Wilson, William Lam
Posted by Erica Hanson, Program Manager
ARUA, UGANDA - Samuel Mugisha is a 23 year old university student with a laugh that echoes off every wall and a mind determined to make change. Recently he heard from a healthcare worker that many children at a local clinic were missing vaccinations, so he decided to take a walk. He toured his community, neighbor to neighbor, and asked one simple question: “Can I see your vaccination card?”
In response, he was given dirt-stained, wrinkled, torn pieces of paper holding life-or-death information, all written in scribble.
He squinted, held the cards to the light, and rubbed them on his pant leg, but it was no use. They were impossible to read. As Samuel put it, “They were broken.”
From the few cards he could read, Samuel noted children who had missed several vaccinations - they were unknowingly playing the odds, waiting to see if disease would find them.
Without hesitation, Samuel got right to work, determined to fix the healthcare system with technology.
He first brought together his closest friends from Developer Student Clubs (DSC), a program supporting students impacting their communities through tech. He asked them: “Why can’t technology solve our problem?”
This newly formed team, including Samuel, Joshwa Benkya and Norman Acidri, came up with a twofold plan:
The idea came together right as Developer Student Clubs launched its first Solution Challenge, an open call for all members to submit projects they recently imagined. These young developers had to give it a shot. They created a model, filled out an application, and pitched the idea. After waiting a month, they heard back - their team won the competition! Their idea was selected from a pool of 170 applicants across India, Africa, and Indonesia. In other words, everything was about to change.
In a country where talent can go unnoticed and problems often go unsolved, this new team had pushed through the odds. Developer Student Clubs is a platform for these types of bold thinkers. Students who view the issues of their region not simply as obstacles to overcome, but chances to mend their home, build a better life for themselves, and transform the experiences of their people.
The goal of the Solution Challenge, and all other DSC programs, is to educate young developers early and equip them with the right skills to make an impact in their community.
In this case, office space in Uganda was expensive and hard to find. Samuel’s team previously had few chances to all work under the same roof. After winning the challenge, Developer Student Clubs helped them find a physical space of their own to come together and collaborate - a simple tool, but one that led to a turning point. As Samuel described it,
With this new space to work, DSC then brought some of Africa’s best Google Developer Group Leads directly to the young developers. In these meetings, the students were given high-level insights on how to best leverage Android, Firebase, and Presto to propel their product forward. As Samuel put it:
As a result, the team realized that with the scarcity of internet in Uganda, Firebase was the perfect technology to build with - allowing healthcare workers to use the app offline but “check in” and receive updates when they were able to find internet.
Although the app has made impressive strides since winning the competition, this young team knows they can make it even better. They want to improve its usability by implementing more visuals and are working to create a version for parents, so families can track the status of their child’s vaccination on their own.
While there is plenty of work ahead, with these gifted students and Developer Student Clubs taking each step forward together, any challenge seems solvable.
What has the team been up to recently? From August 5th-9th they attended the Startup Africa Roadtrip, an intensive training week on how best to refine a startup business model.
Today we want to walk through some updated analysis of the benefit that prerendering can provide for load times. AMP is designed to reduce page load time, and one of the most important ways Google Search reduces page load time is through privacy-preserving prerendering of AMP documents before a link is clicked.
The AMP framework has been designed to understand the layout of all page content and the loading status of all resources, so it can determine the time when all "above the fold" content has loaded. It also knows when the document is prerendered and when it is displayed. Thus, the AMP framework can compute the time from click until the above the fold content is displayed. AMP measures page load speed with a custom metric called First Viewport Ready (FVR). This is defined as the point in time "when non-ad resources above the fold fired their load event measured from the time the user clicks (So takes pre-rendering into account)". If an AMP document is fully prerendered this metric will be 0. If prerendering was not complete at the time of click or if the document was not prerendered at all, then the metric will be greater than 0.
Google Search prerenders some AMP documents and not others so we are able to see the impact that prerendering has on FVR. The chart below shows percentiles for FVR with and without prerendering. FVR is 0 when the AMP framework successfully completes prerendering before the document is displayed.
First Contentful Paint (FCP) is a page load speed metric that is measured by the browser. It is available for all documents, not just AMP documents. FCP is the point in time when the first bit of content from the DOM is rendered. A high value for FCP indicates that a page is definitely slow, but a low value for FCP does not necessarily mean that a page loads quickly, since the first bit of content may not be important content. FCP is useful, but since AMP has a better understanding of which content is visible, FVR gives a clearer picture of when that content actually becomes visible.
FCP is not aware of prerendering so AMP computes a prerender sensitive derivative metric, Prerender-adjusted First Contentful Paint (PFCP), that subtracts out the time before click. When not prerendered, PFCP = FCP. FCP also decreases with prerendering, but the difference is less dramatic than FVR.
It may be surprising that median prerendered PFCP is higher than median prerendered FVR. This happens because the browser has to draw the content to the screen after the click. PFCP includes that time, while FVR does not.
Prerendering AMP documents leads to substantial improvements in page load times. Page load time can be measured in different ways, but they consistently show that prerendering lets users see the content they want faster. For now, only AMP can provide the privacy preserving prerendering needed for this speed benefit. In the future, new web platform features, such as Signed Exchanges, will bring privacy-preserving instant loading to non-AMP documents too.
Posted by Vikram Tank (Product Manager), Coral Team
Coral’s had a busy summer working with customers, expanding distribution, and building new features — and of course taking some time for R&R. We’re excited to share updates, early work, and new models for our local AI platform with you.
The compiler has been updated to version 2.0, adding support for models built using post-training quantization—only when using full integer quantization (previously, we required quantization-aware training)—and fixing a few bugs. As the TensorFlow team mentions in their Medium post, “post-training integer quantization enables users to take an already-trained floating-point model and fully quantize it to only use 8-bit signed integers (i.e. `int8`).” In addition to reducing the model size, models that are quantized with this method can now be accelerated by the Edge TPU found in Coral products.
We've also updated the Edge TPU Python library to version 2.11.1 to include new APIs for transfer learning on Coral products. The new on-device back-propagation API allows you to perform transfer learning on the last layer of an image classification model. The last layer of a model is removed before compilation and implemented on-device to run on the CPU. It allows for near-real-time transfer learning and doesn’t require you to recompile the model. Our previously released imprinting API has been updated to allow you to quickly retrain existing classes or add new ones while leaving other classes alone. You can now even keep the classes from the pre-trained base model. Learn more about both options for on-device transfer learning.
Until now, accelerating your model with the Edge TPU required that you write code using either our Edge TPU Python API or in C++. But now you can accelerate your model on the Edge TPU when using the TensorFlow Lite interpreter API, because we've released a TensorFlow Lite delegate for the Edge TPU. The TensorFlow Lite Delegate API is an experimental feature in TensorFlow Lite that allows for the TensorFlow Lite interpreter to delegate part or all of graph execution to another executor—in this case, the other executor is the Edge TPU. Learn more about the TensorFlow Lite delegate for Edge TPU.
Coral has also been working with Edge TPU and AutoML teams to release EfficientNet-EdgeTPU: a family of image classification models customized to run efficiently on the Edge TPU. The models are based upon the EfficientNet architecture to achieve the image classification accuracy of a server-side model in a compact size that's optimized for low latency on the Edge TPU. You can read more about the models’ development and performance on the Google AI Blog, and download trained and compiled versions on the Coral Models page.
And, as summer comes to an end, we also want to share that Arrow offers a student and teacher discount for those looking to experiment with the boards in class or in the lab this year.
We're excited to keep evolving the Coral platform. Please keep sending us feedback at coral-support@google.com.
We have invested heavily in our API and service infrastructure to improve performance and security and to add features developers need to build world-class APIs. As we make changes we must address features that are no longer compatible with the latest architecture and business requirements.
The JSON-RPC protocol (http://www.jsonrpc.org/specification) and Global HTTP Batch (example) are two such features. Our support for these features was based on an architecture using a single shared proxy to receive requests for all APIs. As we move towards a more distributed, high-performance architecture where requests go directly to the appropriate API server, we can no longer support these global endpoints.
We had originally planned to decommission these features by Mar 25, 2019. However, it came to our attention that a few highly impacted customers might not have received the earlier notification.
As a result, we are extending the deprecation timeline to Aug 12, 2020, when we will discontinue support for both these features.
Starting February 2020 and running through August 2020, we will periodically inject errors for short windows of time. Closer to February 2020, we will provide exact details and schedule of these error injection windows.
We know that these changes have customer impact and have worked to make the transition steps as clear as possible. Please see the guidance below which will help ease the transition.
To identify whether you use JSON-RPC, you can check whether you send requests to "https://www.googleapis.com/rpc" or "https://content.googleapis.com/rpc". If you do, you should migrate.
A batch request is homogeneous if the inner requests are addressed to the same API, even if they are addressed to different methods of that API. Homogeneous batching will still be supported, but only through API-specific batch endpoints. If you are currently forming homogeneous batch requests, whether with Google API client libraries, with non-Google client libraries, or with no client library at all (i.e., making raw HTTP requests), you should migrate.
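As an illustration of what the API-specific endpoints look like (using the Drive API here purely as an example, with placeholder file IDs), a raw homogeneous batch request is sent to that API's own batch URL, with each inner request wrapped as a multipart/mixed part:

// Request URL (homogeneous batch, API-specific endpoint)
POST https://www.googleapis.com/batch/drive/v3
Content-Type: multipart/mixed; boundary=batch_boundary

// Request Body
--batch_boundary
Content-Type: application/http

GET /drive/v3/files/FILE_ID_1

--batch_boundary
Content-Type: application/http

GET /drive/v3/files/FILE_ID_2

--batch_boundary--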
A batch request is heterogeneous if the inner requests go to different APIs. Heterogeneous batching will not be supported after the Global HTTP Batch endpoint is turned down. If you are currently forming heterogeneous batch requests, you should migrate: change your client code to send only homogeneous batch requests.
Clients will need to make the changes outlined below to migrate.
If you are using JSON-RPC client libraries (either the Google-published libraries or other libraries), switch to REST client libraries and modify your application to work with them.
Example code for JavaScript

Before:

// json-rpc request for the list method
gapi.client.rpcRequest('zoo.animals.list', 'v2', {name:'giraffe'}).execute(x=>console.log(x))

After:

// json-rest request for the list method
gapi.client.zoo.animals.list({name:'giraffe'}).then(x=>console.log(x))
If you are not using client libraries (i.e. making raw HTTP requests):
Example code

Before:

// Request URL (JSON-RPC)
POST https://content.googleapis.com/rpc?alt=json&key=xxx

// Request Body (JSON-RPC)
[{
  "jsonrpc": "2.0",
  "id": "gapiRpc",
  "method": "zoo.animals.list",
  "apiVersion": "v2",
  "params": {"name": "giraffe"}
}]

After:

// Request URL (JSON-REST)
GET https://content.googleapis.com/zoo/v2/animals?name=giraffe&key=xxx
If you are currently forming heterogeneous batch requests, change your client code to send only homogeneous batch requests.
Example code

The example below demonstrates how to split a heterogeneous batch request for two APIs (urlshortener and zoo) into two homogeneous batch requests.
// heterogeneous batch request example.
// Notice that the outer batch request contains inner API requests
// for two different APIs.

// Request to urlshortener API
request1 = gapi.client.urlshortener.url.get({"shortUrl": "http://goo.gl/fbsS"});
// Request to zoo API
request2 = gapi.client.zoo.animals.list();
// Request to urlshortener API
request3 = gapi.client.urlshortener.url.get({"shortUrl": "https://goo.gl/XYFuPH"});
// Request to zoo API
request4 = gapi.client.zoo.animal().get({"name": "giraffe"});

// Creating single heterogeneous batch request object
heterogeneousBatchRequest = gapi.client.newBatch();
// adding the 4 batch requests
heterogeneousBatchRequest.add(request1);
heterogeneousBatchRequest.add(request2);
heterogeneousBatchRequest.add(request3);
heterogeneousBatchRequest.add(request4);
// print the heterogeneous batch request
heterogeneousBatchRequest.then(x=>console.log(x));
// Split heterogeneous batch request into two homogeneous batch requests

// Request to urlshortener API
request1 = gapi.client.urlshortener.url.get({"shortUrl": "http://goo.gl/fbsS"});
// Request to zoo API
request2 = gapi.client.zoo.animals.list();
// Request to urlshortener API
request3 = gapi.client.urlshortener.url.get({"shortUrl": "http://goo.gl/fbsS"});
// Request to zoo API
request4 = gapi.client.zoo.animals.list();

// Creating homogeneous batch request object for urlshortener
homogenousBatchUrlshortener = gapi.client.newBatch();
// adding the 2 batch requests for urlshortener
homogenousBatchUrlshortener.add(request1);
homogenousBatchUrlshortener.add(request3);

// Creating homogeneous batch request object for zoo
homogenousBatchZoo = gapi.client.newBatch();
// adding the 2 batch requests for zoo
homogenousBatchZoo.add(request2);
homogenousBatchZoo.add(request4);

// print the 2 homogeneous batch requests
Promise.all([homogenousBatchUrlshortener, homogenousBatchZoo])
  .then(x=>console.log(x));
If you are using Google API client libraries, these libraries have been regenerated to no longer make requests to the global HTTP batch endpoint. We recommend that clients using these libraries upgrade to the latest version where possible. Please see the language-specific guidance below for the minimum library version to upgrade to.
If you are using the Google API Client Library for Java, update all Google API Service packages (`com.google.apis`) to a version where the supporting library version is 1.23.1 or higher. For example, upgrade `com.google.apis:google-api-services-drive` from version `v3-rev159-1.22.0` to `v3-rev20190620-1.30.1`.

Code that creates batch requests through the generated service's batch() method (which uses the `com.google.api.client.googleapis.batch.BatchRequest` class) continues to work. For example:

Drive client = Drive.builder(transport, jsonFactory, credential)
    .setApplicationName("BatchExample/1.0")
    .build();
BatchRequest batch = client.batch();

After you update the dependency and rebuild, `BatchRequest batch = client.batch();` sends requests to the API-specific batch endpoint (for example, "/batch/library/v1") instead of the global "/batch" endpoint.
If you are using the C++ client library, pass the API-specific batch URL when constructing the batch request.

Example code

Before:

HttpRequestBatch batch(service->transport());

After:

HttpRequestBatch batch(service->transport(), service->batch_url());
For help on migration, consult the API documentation or tag Stack Overflow questions with the 'google-api' tag.
Once a year, we invite community organizers and influencers from developer groups that support diversity and inclusion in their local tech ecosystem to the Women Techmakers Summit Europe. The Women Techmakers Summit is designed to provide training opportunities, share best practices, show success stories and build meaningful relationships. The fourth edition of the WTM Summit in Europe took place in Warsaw, one of Europe’s most innovative tech and startup ecosystems.
The Women Techmakers Summit hosted 120 people, women and men who lead tech communities across Europe. With more than half of the sessions delivered by community influencers, the group came together to share best practices, learn from each other, and discuss all things related to diversity and inclusion.

“A fantastic opportunity to meet other community organizers across Europe and learn from each other.”
We also invited role models to draw inspiration and motivation from. Head of Google for Startups Agnieszka Hryniewicz-Bieniek and Cloud Engineer Ewa Maciaś demonstrated that stepping out of our comfort zone is something we should do more often. No one has the right answers from the start, but by trying new approaches we can carve our own paths. Fear of failure is real. It should not keep us from experimenting, though.
Google’s Natalie Villalobos, head of the Women Techmakers program, and Emma Haruka Iwao, who broke the world record for calculating the most digits of Pi using Google Cloud, gave a glimpse into their personal stories. Their insights? Sometimes we need to go through hard times. They equipped us with the right mindset to push through, become our own bosses, and succeed.
This left the attendees with the right motivation to get back to their communities: “This was my first WTM Summit, and it was an incredible experience. I met some amazing ladies and role models, and will be happy to share the inspiration I got with my local community.”
“Being at the WTM Summit felt like being inside a family. I felt really included like at no conference before." To make everyone feel welcome, a code of conduct was visible to all attendees, and prayer and parent spaces were provided. The event itself was meant to inspire community organizers and influencers to carry the learnings back to their communities.
One of the core elements of Women Techmakers is creating and providing community for women in tech. Women Techmakers Ambassadors drive diversity and inclusion initiatives in their local tech communities to help bring more women into the industry. In Europe, more than 150 WTM Ambassadors from 25 countries support their local tech communities to close the gap between the number of women and men in the industry. Meetup organizers and community advocates who want to achieve parity can join the Women Techmakers program. As members, they are given the tools and opportunities to change the narrative.
If you are interested in joining the WTM Ambassadors Program, reach out to WTM-Europe@google.com
In celebration of International Women’s Day, Women Techmakers hosted its sixth annual summit series to acknowledge and celebrate women in the tech industry, and to create a space for attendees to build community, hear from industry leaders, and learn new skills. The series featured 19 summits and 305 meetups across 87 countries.
This year, Women Techmakers partnered with the Actions on Google team to host technical workshops at these events so attendees could learn the fundamental concepts of developing Actions for the Google Assistant. Together, we created hundreds of new Actions for the Assistant. Check out some of the highlights of this year’s summit in the video below:
If you couldn’t attend any of our meetups this past year, we’ll cover our technical workshops now so you can start building for the Assistant from home. The technical workshop kicked off by introducing Actions on Google — the platform that enables developers to build Actions for the Google Assistant. Participants got hands-on experience building their first Action with the following features:
During Codelab level 1, participants learned how to parse the user’s input by using Dialogflow, a tool that uses Machine Learning and acted as their Natural Language Processor (NLP). Dialogflow processes what the user says and extracts important information from that input to identify how to fulfill the user’s request. Participants configured Dialogflow and connected it to their code’s back-end using Dialogflow’s inline editor. In the editor, participants added their code and tested their Action in the Action Simulator.
In Codelab level 2, participants continued building on their Action, adding features such as:
Instead of using Dialogflow’s inline editor, participants set up Cloud Functions for Firebase as their server.
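As a rough sketch of what such a fulfillment looks like (the intent names and responses here are placeholders rather than the exact codelab code), a Dialogflow webhook deployed on Cloud Functions for Firebase with the actions-on-google client library follows this general shape:

// index.js: Dialogflow fulfillment on Cloud Functions for Firebase (Node.js).
// Intent names and responses below are illustrative placeholders.
'use strict';

const {dialogflow} = require('actions-on-google');
const functions = require('firebase-functions');

const app = dialogflow({debug: true});

// Handle the welcome intent defined in the Dialogflow agent.
app.intent('Default Welcome Intent', (conv) => {
  conv.ask('Hi! Ask me about the workshop.');
});

// Handle an intent with a parameter that Dialogflow extracted from the user's input.
app.intent('favorite color', (conv, {color}) => {
  conv.close(`Nice, ${color} is a great color!`);
});

// Expose the Dialogflow app as an HTTPS Cloud Function.
exports.dialogflowFirebaseFulfillment = functions.https.onRequest(app);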
You can learn more about developing your own Actions here. To support developers’ efforts in building great Actions for the Google Assistant, the team also has a developer community program.
Alex Eremia, a workshop attendee, reflected, “I think voice applications will have a huge impact on society both today and in the future. It will become a natural way we interact with the items around us.”
From keynotes and fireside chats to interactive workshops, Women Techmakers summit attendees enjoyed a mixture of technical and inspirational content. If you’re interested in learning more and getting involved, follow WTM on Twitter, check out our website, and sign up to become a member.
To learn more about Actions on Google and how to build for the Google Assistant, be sure to follow us on Twitter and join our Reddit community!
Recently at Google I/O, we gave you a sneak peek at our new Local Home SDK, a suite of local technologies to enhance your smart home integrations. Today, the SDK is live as a developer preview. We've been working hard testing the platform with our partners, including GE, LIFX, Philips Hue, TP-Link, and Wemo, and are excited to bring you these additional technologies for connecting smart devices to the Google Assistant.
Figure 1: The local execution path
This SDK enables developers to more deeply integrate their smart devices into the Assistant by building upon the existing Smart Home platform to create a local execution path via Google Home smart speakers and Nest smart displays. Developers can now run their business logic to control new and existing smart devices in JavaScript that executes on the smart speakers and displays, benefitting users with reduced latency and higher reliability.
The SDK introduces two new intents, IDENTIFY and REACHABLE_DEVICES. The local home platform scans the user's home network via mDNS, UDP, or UPnP to discover any smart devices connected to the Assistant, and triggers IDENTIFY to verify that the device IDs match those returned from the familiar Smart Home API SYNC intent. If the detected device is a hub or bridge, REACHABLE_DEVICES is triggered and treats the hub as the proxy device for communicating locally. Once the local execution path from Google Home to a device is established, the device properties are updated in Home Graph.
Figure 2: The intents used for each execution path
When a user triggers a smart home Action that has a local execution path, the Assistant sends the EXECUTE intent to the Google Nest device rather than the developer's cloud fulfillment. The developer's JavaScript app is invoked, which then triggers the Local Home SDK to send control commands to the smart device over TCP, UDP socket, or HTTP/HTTPS requests. By defaulting to local execution rather than the cloud, users experience faster fulfillment of their requests. The execution requests can still be sent to the cloud path in case local execution fails. This redundancy minimizes the possibility of a failed request, and improves the overall user experience.
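To give a feel for the shape of such an app, here is a heavily simplified sketch using the Local Home SDK's smarthome namespace. The device IDs, scan-data handling, and command payloads are illustrative placeholders, not a complete working integration:

// Simplified sketch of a Local Home SDK app that runs on the Google Nest device.
// IDs and payloads below are placeholders; real apps inspect the scan data and
// send device commands via the SDK's DataFlow objects and deviceManager.send().
const app = new smarthome.App('1.0.0');

app
  .onIdentify((request) => {
    // Report which local device was found; verificationId must match an ID
    // that the cloud integration returned in its SYNC response.
    return {
      requestId: request.requestId,
      intent: smarthome.Intents.IDENTIFY,
      payload: {
        device: {
          id: 'local-device-id',             // placeholder
          verificationId: 'local-device-id', // placeholder
        },
      },
    };
  })
  .onExecute((request) => {
    // Translate the EXECUTE intent into a local TCP/UDP/HTTP command here,
    // then report the result back to the Assistant.
    return new smarthome.Execute.Response.Builder()
        .setRequestId(request.requestId)
        .build();
  })
  .listen()
  .then(() => console.log('Local Home app is ready'));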
Additional features of the Local Home platform include:
Figure 3: Local Home configuration tool in the Actions console
JavaScript apps can be tested on-device, allowing developers to employ familiar tools like Chrome Developer Console for debugging. Because the Local Home SDK works with the existing smart home framework, you can self-certify new apps through the Test suite for smart home as well.
To learn more about the Local Home platform, check out the API reference, and get started adding local execution with the developer guide and samples. For general information covering how you can connect smart devices to the Google Assistant, visit the Smart Home documentation, or check out the Local Technologies for the Smart Home talk from Google I/O this year.
You can send us any feedback you have through the bug tracker, or engage with the community at /r/GoogleAssistantDev. You can tag your posts with the flair local-home-sdk to help organize discussion.