Today, we introduce eager execution for TensorFlow.
Eager execution is an imperative, define-by-run interface where operations are executed immediately as they are called from Python. This makes it easier to get started with TensorFlow, and can make research and development more intuitive.
The benefits of eager execution include an intuitive interface, easier debugging with immediate error reporting, and natural Python control flow for dynamic models.
Eager execution is available now as an experimental feature, so we're looking for feedback from the community to guide our direction.
To understand this all better, let's look at some code. This gets pretty technical; familiarity with TensorFlow will help.
When you enable eager execution, operations execute immediately and return their values to Python without requiring a Session.run(). For example, to multiply two matrices together, we write this:
import tensorflow as tf
import tensorflow.contrib.eager as tfe

tfe.enable_eager_execution()

x = [[2.]]
m = tf.matmul(x, x)
It's straightforward to inspect intermediate results with print or the Python debugger.
print(m) # The 1x1 matrix [[4.]]
Dynamic models can be built with Python flow control. Here's an example of the Collatz conjecture using TensorFlow's arithmetic operations:
a = tf.constant(12)
counter = 0
while not tf.equal(a, 1):
  if tf.equal(a % 2, 0):
    a = a / 2
  else:
    a = 3 * a + 1
  print(a)
Here, the use of the tf.constant(12) Tensor object promotes all math operations to tensor operations, so all return values will be tensors.
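For reference, here is the same Collatz iteration in plain Python integers (a sketch without tensors, using integer division since the values stay whole), showing the sequence the define-by-run loop computes for a = 12:

```python
# Plain-Python Collatz iteration for a = 12 (no tensors).
a = 12
while a != 1:
    if a % 2 == 0:
        a = a // 2   # integer division; the values stay whole
    else:
        a = 3 * a + 1
    print(a)
# prints 6, 3, 10, 5, 16, 8, 4, 2, 1
```

With eager execution, the TensorFlow version behaves the same way: Python's while and if decide at run time which operations execute.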
Most TensorFlow users are interested in automatic differentiation. Because different operations can occur during each call, we record all forward operations to a tape, which is then played backwards when computing gradients. After we've computed the gradients, we discard the tape.
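As a sketch of that idea (a minimal, hypothetical implementation, not TensorFlow's actual tape), here is reverse-mode differentiation in plain Python: each forward op appends a backward function to a tape, and gradients are computed by playing the tape backwards:

```python
class Var:
    """A value that participates in differentiation."""
    def __init__(self, value):
        self.value = value
        self.grad = 0.0

tape = []  # backward functions, recorded in forward-execution order

def mul(a, b):
    out = Var(a.value * b.value)
    def backward():
        # chain rule: d(out)/da = b, d(out)/db = a
        a.grad += out.grad * b.value
        b.grad += out.grad * a.value
    tape.append(backward)  # record the forward op on the tape
    return out

def gradient(output, inputs):
    output.grad = 1.0
    for backward in reversed(tape):  # play the tape backwards
        backward()
    return [v.grad for v in inputs]

x = Var(3.0)
y = mul(x, x)            # forward pass records onto the tape
print(gradient(y, [x]))  # [6.0]
```

Because the tape is rebuilt on every call, a different sequence of operations (say, from a data-dependent branch) simply produces a different tape, which is why control flow and gradients mix freely.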
If you're familiar with the autograd package, the API is very similar. For example:
def square(x):
  return tf.multiply(x, x)

grad = tfe.gradients_function(square)

print(square(3.))  # [9.]
print(grad(3.))    # [6.]
The gradients_function call takes a Python function square() as an argument and returns a Python callable that computes the partial derivatives of square() with respect to its inputs. So, to get the derivative of square() at 3.0, invoke grad(3.0), which returns 6.0.
The same gradients_function call can be used to get the second derivative of square:
gradgrad = tfe.gradients_function(lambda x: grad(x)[0])
print(gradgrad(3.))  # [2.]
As we noted, control flow can cause different operations to run, such as in this example.
def abs(x):
  return x if x > 0. else -x

grad = tfe.gradients_function(abs)

print(grad(2.0))   # [1.]
print(grad(-2.0))  # [-1.]
Users may want to define custom gradients for an operation, or for a function. This may be useful for multiple reasons, including providing a more efficient or more numerically stable gradient for a sequence of operations.
Here is an example that illustrates the use of custom gradients. Let's start by looking at the function log(1 + e^x), which commonly occurs in the computation of cross entropy and log likelihoods.
def log1pexp(x):
  return tf.log(1 + tf.exp(x))
grad_log1pexp = tfe.gradients_function(log1pexp)

# The gradient computation works fine at x = 0.
print(grad_log1pexp(0.))    # [0.5]
# However, it returns a `nan` at x = 100 due to numerical instability.
print(grad_log1pexp(100.))  # [nan]
We can use a custom gradient for the above function that analytically simplifies the gradient expression. Notice how the gradient function implementation below reuses an expression (tf.exp(x)) that was computed during the forward pass, making the gradient computation more efficient by avoiding redundant computation.
@tfe.custom_gradient
def log1pexp(x):
  e = tf.exp(x)
  def grad(dy):
    return dy * (1 - 1 / (1 + e))
  return tf.log(1 + e), grad
grad_log1pexp = tfe.gradients_function(log1pexp)

# Gradient at x = 0 works as before.
print(grad_log1pexp(0.))    # [0.5]
# And now gradient computation at x = 100 works as well.
print(grad_log1pexp(100.))  # [1.0]
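The same instability can be reproduced with plain Python floats (a sketch, not TensorFlow code; float64 overflows at larger inputs than float32, so we probe x = 1000): the naive derivative e^x / (1 + e^x) overflows, while the algebraically equal form 1 / (1 + e^-x), i.e. sigmoid(x), stays finite:

```python
import math

def naive_grad(x):
    # d/dx log(1 + e^x) written directly; exp(x) overflows for large x.
    return math.exp(x) / (1.0 + math.exp(x))

def stable_grad(x):
    # Same function rewritten as sigmoid(x); exp(-x) underflows harmlessly.
    return 1.0 / (1.0 + math.exp(-x))

print(naive_grad(0.0), stable_grad(0.0))  # 0.5 0.5
print(stable_grad(1000.0))                # 1.0
try:
    naive_grad(1000.0)
except OverflowError:
    print("naive form overflows at x = 1000")
```

This is the same trick the custom gradient uses: algebraically rearranging the expression so no intermediate value overflows.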
Models can be organized in classes. Here's a model class that creates a (simple) two-layer network that can classify the standard MNIST handwritten digits.
class MNISTModel(tfe.Network):
  def __init__(self):
    super(MNISTModel, self).__init__()
    self.layer1 = self.track_layer(tf.layers.Dense(units=10))
    self.layer2 = self.track_layer(tf.layers.Dense(units=10))

  def call(self, input):
    """Actually runs the model."""
    result = self.layer1(input)
    result = self.layer2(result)
    return result
We recommend using the classes (not the functions) in tf.layers since they create and contain model parameters (variables). Variable lifetimes are tied to the lifetime of the layer objects, so be sure to keep track of them.
Why are we using tfe.Network? A Network is a container for layers and is a tf.layers.Layer itself, allowing Network objects to be embedded in other Network objects. It also contains utilities to assist with inspection, saving, and restoring.
Even without training the model, we can imperatively call it and inspect the output:
# Let's make up a blank input image.
model = MNISTModel()
batch = tf.zeros([1, 1, 784])
print(batch.shape)  # (1, 1, 784)
result = model(batch)
print(result)
# tf.Tensor([[[0., 0., ..., 0.]]], shape=(1, 1, 10), dtype=float32)
Note that we do not need any placeholders or sessions. The first time we pass in the input, the sizes of the layers' parameters are set.
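To illustrate that deferred sizing, here is a hypothetical pure-Python Dense layer (not TensorFlow's implementation) that only allocates its weight matrix on the first call, once the input width is known:

```python
import random

class Dense:
    """Toy dense layer that defers weight creation until first use."""
    def __init__(self, units):
        self.units = units
        self.weights = None  # shape unknown until we see an input

    def __call__(self, inputs):
        if self.weights is None:
            in_dim = len(inputs)
            # Build an in_dim x units weight matrix on the first call.
            self.weights = [[random.gauss(0, 0.1) for _ in range(self.units)]
                            for _ in range(in_dim)]
        # Matrix-vector product: one dot product per output unit.
        return [sum(x * w for x, w in zip(inputs, col))
                for col in zip(*self.weights)]

layer = Dense(units=10)
out = layer([0.0] * 784)  # first call fixes the 784x10 weight shape
print(len(layer.weights), len(out))  # 784 10
```

Because shapes are discovered from real inputs rather than declared in placeholders, there is nothing to specify up front.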
To train any model, we define a loss function to optimize, calculate gradients, and use an optimizer to update the variables. First, here's a loss function:
def loss_function(model, x, y):
  y_ = model(x)
  return tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=y_)
And then, our training loop:
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
for (x, y) in tfe.Iterator(dataset):
  grads = tfe.implicit_gradients(loss_function)(model, x, y)
  optimizer.apply_gradients(grads)
implicit_gradients() calculates the derivatives of loss_function with respect to all the TensorFlow variables used during its computation.
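The loop's structure can be sketched in plain Python with a toy one-parameter model (hypothetical names, with an analytic gradient standing in for implicit_gradients): compute the gradient of the loss with respect to the variable, then apply a descent step:

```python
# Toy model: fit w so that w * x approximates y = 2 * x.
def loss(w, x, y):
    return (w * x - y) ** 2

def grad_w(w, x, y):
    # Analytic d(loss)/dw, standing in for implicit_gradients.
    return 2 * (w * x - y) * x

w, lr = 0.0, 0.05                  # variable and learning rate
data = [(x, 2.0 * x) for x in (1.0, 2.0, 3.0)]
for _ in range(100):               # training loop over the dataset
    for x, y in data:
        w -= lr * grad_w(w, x, y)  # the "apply_gradients" step
print(round(w, 3))  # 2.0
```

The TensorFlow version follows exactly this shape, with implicit_gradients doing the differentiation and the optimizer doing the update.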
We can move computation to a GPU the same way we've always done with TensorFlow:
with tf.device("/gpu:0"):
  for (x, y) in tfe.Iterator(dataset):
    optimizer.minimize(lambda: loss_function(model, x, y))
(Note: we're taking a shortcut by not storing the loss and calling optimizer.minimize directly, but you could also use the apply_gradients() method above; they are equivalent.)
Eager execution makes development and debugging far more interactive, but TensorFlow graphs have a lot of advantages with respect to distributed training, performance optimizations, and production deployment.
The same code that executes operations when eager execution is enabled will construct a graph describing the computation when it is not. To convert your models to graphs, simply run the same code in a new Python session where eager execution hasn't been enabled, as seen, for example, in the MNIST example. The value of model variables can be saved and restored from checkpoints, allowing us to move between eager (imperative) and graph (declarative) programming easily. With this, models developed with eager execution enabled can be easily exported for production deployment.
In the near future, we will provide utilities to selectively convert portions of your model to graphs. In this way, you can fuse parts of your computation (such as the internals of a custom RNN cell) for high performance, while keeping the flexibility and readability of eager execution.
Using eager execution should be intuitive to current TensorFlow users. There are only a handful of eager-specific APIs; most of the existing APIs and operations work with eager enabled. Some notes to keep in mind:
- If you haven't yet switched from queues to tf.data for input processing, we recommend doing so; it's faster and easier to use with eager execution.
- Use object-oriented layers, such as tf.layers.Conv2D() or Keras layers, since they have explicit storage for variables.
- Most models work the same whether or not eager execution is enabled, so the same code can serve both modes.
- Once eager execution is enabled with tfe.enable_eager_execution(), it cannot be turned off in the same Python session.
This is still a preview release, so you may hit some rough edges. To get started today, install the latest nightly build of TensorFlow and check out the eager execution user guide and example code in the TensorFlow repository.
There's a lot more to talk about with eager execution and we're excited… or, rather, we're eager for you to try it today! Feedback is absolutely welcome.
Email remains at the heart of how companies operate. That's why earlier this year, we previewed Gmail Add-ons—a way to help businesses speed up workflows. Since then, we've seen partners build awesome applications, and beginning today, we're extending the Gmail add-on preview to include all developers. Now anyone can start building a Gmail add-on.
Gmail Add-ons let you integrate your app into Gmail and extend Gmail to handle quick actions.
They are built using native UI context cards that can include simple text dialogs, images, links, buttons and forms. The add-on appears when relevant, and the user is just a click away from your app's rich and integrated functionality.
Gmail Add-ons are easy to create. You only have to write code once for your add-on to work on both web and mobile, and you can choose from a rich palette of widgets to craft a custom UI. Create an add-on that contextually surfaces cards based on the content of a message. Check out this video to see how we created an add-on to collate email receipts and expedite expense reporting.
Per the video, you can see that there are three components to the app's core functionality. The first component is getContextualAddOn()—this is the entry point for all Gmail Add-ons, where data is compiled to build the card and render it within the Gmail UI. Since the add-on is processing expense reports from email receipts in your inbox, createExpensesCard() parses the relevant data from the message and presents it in a form so your users can confirm or update values before submitting. Finally, submitForm() takes the data and writes a new row in an "expenses" spreadsheet in Google Sheets, which you can edit, tweak, and submit for approval to your boss.
Check out the documentation to get started with Gmail Add-ons, or if you want to see what it's like to build an add-on, go to the codelab to build ExpenseIt step-by-step. While you can't publish your add-on just yet, you can fill out this form to get notified when publishing is opened. We can't wait to see what Gmail Add-ons you build!
We recently partnered with Awwwards, an awards platform for web development and web design, to launch a Mobile Excellence Badge on awwwards.com and a Mobile Excellence Award to recognize great mobile web experiences.
Starting this month, every agency and digital professional that submits their website to Awwwards can be eligible for a Mobile Excellence Badge, a guarantee of the performance of their mobile version. The mobile website's performance will be evaluated by a group of experts and measured against specific criteria based on Google's mobile principles on speed and usability. When a site achieves a minimum score, it will be recognized with the new Mobile Excellence Badge. All criteria are listed at the Mobile Guidelines.
The highest scoring sites with the Mobile Excellence Badge will be nominated for Mobile Site of the Week. One of them will then go on to win Mobile Site of the Month.
All Mobile Sites of the Month will be candidates for Mobile Site of the Year, with the winner receiving a physical award at the Awwwards Conference in Berlin, 8-9 February 2018.
In a time where mobile is playing a dominant role in how people access the web, it is necessary that web developers and web designers build websites that meet users' expectations. Today, 53% of mobile site visits are abandoned if pages take longer than 3 seconds to load [1], yet despite the explosion of mobile usage, the performance and usability of existing mobile sites remain poor and far from meeting those expectations. At the moment, the average page load time is 22s globally [2], which represents a massive missed opportunity for many companies, given the impact of speed on conversion and bounce rates [3].
If you created a great mobile web experience and want it to receive a Mobile Excellence Badge and compete for the Mobile Excellence Award, submit your request here.
[1] Google Data: aggregated, anonymized Google Analytics data from a sample of mWeb sites opted into sharing benchmark data, n=3.7K, global, March 2016.
[2] Google Research, Webpagetest.org: global sample of more than 900,000 mWeb sites across Fortune 1000 and Small Medium Businesses. Testing was performed using Chrome, emulating a Nexus 5 device on a globally representative 3G connection (1.6Mbps download speed, 300ms round-trip time), on EC2 m3.medium instances, similar in performance to high-end smartphones, Jan. 2017.
[3] Akamai.com, Online Retail Experience Report, 2017.
When we started API.AI, our goal was to provide developers like you with an API to add natural language processing capabilities to your applications, services and devices. We've worked hard towards that goal and accomplished a lot partnering with all of you. But as we've taken a look at our work over the past year and where we're heading, from new features like our Analytics tool to the 33 prebuilt agents, we realized that we were doing so much more than just providing an API. So with that, we'd like to introduce Dialogflow – the new name for API.AI.
Our new name doesn't change the work we're doing with you or our mission. Dialogflow remains your end-to-end platform for building great conversational experiences, and our team will help you share what you've built with millions of users. In fact, we've just launched two new features to help you build those great experiences.
Thanks for being a part of API.AI – we can't wait to see what we do together with Dialogflow. Head over to your developer console and give these new features a try. And, as always, contact us if you have any questions.
As you may have seen, it's a big day for the Google Assistant with new features, new devices and new languages coming soon. But it's also a big day for developers like you, as Actions on Google is also coming to new devices and new languages, and getting better for building and sharing apps.
Actions on Google is already available in English in the US, UK and Australia and today, we're adding new languages to the mix—German (de-DE), French (fr-FR), Japanese (ja-JP), Korean (ko-KR), and both French and English in Canada (en-CA, fr-CA). Starting this week, you can build apps for the Google Assistant in these new languages and soon, they'll be available via the Assistant! Users will soon be able to talk to apps like Zalando, Namatata and Drop the Puck, with more apps on the way. We can't wait to see what you build!
Along with the new Pixelbook come apps for the Assistant. As soon as the Pixelbook hits shelves later this year, your apps will just work, with no extra steps from you! With that said, as with every new surface, especially one with a screen, it's good to make sure that your app is in tip top shape, including using high quality images or adding images to make your conversations more visual.
With apps on Pixelbook, you'll be able to reach a whole new audience and give users the chance to explore your app on a bigger screen, while they get things done throughout their day.
And, in case you missed it, we also recently introduced apps on headphones optimized for the Google Assistant and with the Assistant on Android TV.
Today we shared how the Assistant is great for families—giving people the chance to connect, explore, learn and have fun with the Assistant. And from trivia to storytelling, you can now build Apps for Families and get a dedicated badge via the Assistant on your phone, letting people know your app is family friendly! Soon, users will be able to say "Ok Google, what's my Justice League superhero?" or "Ok Google, play Sports Illustrated Kids Trivia" if you're looking for a game. Or "Ok Google, let's learn" for some educational fun.
To participate, you first need to make sure your app complies with the program policies and, after that, simply submit it for review. Once approved, it will be live for anyone to try! You can learn more about that here. Apps for Families will only be available in US English at the start.
It's easier than ever to make your first (or fifth!) app. With new templates, you can create your own trivia game, flash card app or personality quiz for the Google Assistant without writing code or doing any complex server configuration. All you have to do is add some questions and answers via a Google Sheet. Within minutes, voilà, you can try it out on your Google Assistant and publish it! And if you want to try one today, just say "Ok Google, Play Planet Quiz."
We even provide pre-defined personalities when you create an app from the templates, offering a voice, tone and natural conversational feel for your app's users, without any additional work on your end.
If you prefer to code your own apps, we put a fresh coat of paint on our Actions Console UI to make it easier to create apps with tools like API.AI.
In May we announced that you could start building transactional apps for the Google Assistant on phones and starting this week in the US, you can submit your apps for review! To get a first look at how transactions will work, you'll soon be able to try out 1-800-Flowers, Applebee's, Panera and Ticketmaster.
Ready to give it a try for yourself? You can build and test transactional apps that include payments, status updates and follow-on actions here.
With transactional apps, users can not only pay, but also see their order history, get status updates, and more.
And, last up, to support your efforts in building apps for the Google Assistant and celebrate your accomplishments, we created a new developer community program. Starting with up to $200 in monthly Google Cloud credit and an Assistant t-shirt when you publish your first app, the perks and opportunities available to you will grow as you hit milestone after milestone including your chance to earn a Google Home. And if you've already created an app, don't fret! Your perks are on the way!
Thanks for everything you do to make the Assistant more helpful, fun and interactive! It's been an exciting 10 months to see the platform expand to new languages and devices and to see what you've all created.
Today we're excited to launch Cloud Firestore, a fully-managed NoSQL document database for mobile and web app development. It's designed to easily store and sync app data at global scale, and it's now available in beta.
Key features of Cloud Firestore include:
- Documents and collections with powerful querying
- iOS, Android, and web SDKs with offline data access
- Real-time data synchronization
- Automatic, multi-region data replication with strong consistency
- Node.js, Python, Go, and Java server SDKs
And of course, we've aimed for the simplicity and ease-of-use that is always top priority for Firebase, while still making sure that Cloud Firestore can scale to power even the largest apps.
Managing app data is still hard; you have to scale servers, handle intermittent connectivity, and deliver data with low latency.
We've optimized Cloud Firestore for app development, so you can focus on delivering value to your users and shipping better apps, faster.
As you may have guessed from the name, Cloud Firestore was built in close collaboration with the Google Cloud Platform team.
This means it's a fully managed product, built from the ground up to automatically scale. Cloud Firestore is a multi-region replicated database that ensures once data is committed, it's durable even in the face of unexpected disasters. Not only that, but despite being a distributed database, it's also strongly consistent, removing tricky edge cases to make building apps easier regardless of scale.
It also means that delivering a great server-side experience for backend developers is a top priority. We're launching SDKs for Java, Go, Python, and Node.js today, with more languages coming in the future.
Over the last 3 years Firebase has grown to become Google's app development platform; it now has 16 products to build and grow your app. If you've used Firebase before, you know we already offer a database, the Firebase Realtime Database, which helps with some of the challenges listed above.
The Firebase Realtime Database, with its client SDKs and real-time capabilities, is all about making app development faster and easier. Since its launch, it has been adopted by hundreds of thousands of developers, and as its adoption grew, so did usage patterns. Developers began using the Realtime Database for more complex data and to build bigger apps, pushing the limits of the JSON data model and the performance of the database at scale. Cloud Firestore is inspired by what developers love most about the Firebase Realtime Database while also addressing its key limitations like data structuring, querying, and scaling.
So, if you're a Firebase Realtime Database user today, we think you'll love Cloud Firestore. However, this does not mean that Cloud Firestore is a drop-in replacement for the Firebase Realtime Database. For some use cases, it may make sense to use the Realtime Database to optimize for cost and latency, and it's also easy to use both databases together. You can read a more in-depth comparison between the two databases here.
We're continuing development on both databases and they'll both be available in our console and documentation.
Cloud Firestore enters public beta starting today. If you're comfortable using a beta product, you should give it a spin on your next project! Companies and startups are already building with Cloud Firestore.
Get started by visiting the database tab in your Firebase console. For more details, see the documentation, pricing, code samples, performance limitations during beta, and view our open source iOS and JavaScript SDKs on GitHub.
We can't wait to see what you build and hear what you think of Cloud Firestore!