This page describes production limits for Cloud Spanner.
These values are subject to change.
Checking your quotas
To check the current quotas for resources in your project, go to the Quotas page in the Google Cloud Platform Console.
Increasing your quotas
As your use of Cloud Spanner grows over time, you can request higher quotas. If you expect a notable increase in usage, submit your request a few days in advance so that your quotas are adequately sized when you need them.
1. On the Quotas page, select Cloud Spanner API in the Service dropdown list. If you do not see Cloud Spanner API, the Cloud Spanner API has not been enabled.
2. Select the quotas you want to change.
3. Click Edit Quotas.
4. Fill in your name, email, and phone number, then click Next.
5. Fill in your quota request and click Submit request.
You will receive a response from the Cloud Spanner team within 48 hours of your request.
Instance limits
| Value | Limit |
|---|---|
| Instance ID length | 2 to 64 characters |
Database limits
| Value | Limit |
|---|---|
| Databases per instance | 100 |
| Database ID length | 2 to 30 characters |
| Storage size per node | 2 TB¹ |
Schema limits
Schemas
| Value | Limit |
|---|---|
| Schema size | 10 MB |
| Schema change size | 10 MB |
Tables
| Value | Limit |
|---|---|
| Tables per database | 2,048 |
| Table name length | 1 to 128 characters |
| Columns per table | 1,024 |
| Column name length | 1 to 128 characters |
| Size of data per column | 10 MB |
| Number of columns in a table key | 16 (includes key columns shared with any parent table) |
| Table interleaving depth | 6 (a top-level table with child tables has depth 1; a top-level table with grandchild tables has depth 2, and so on) |
| Total size of a table or index key | 8 KB (includes the size of all columns that make up the key) |
Indexes
| Value | Limit |
|---|---|
| Indexes per database | 4,096 |
| Indexes per table | 32 |
| Index name length | 1 to 128 characters |
| Number of columns in an index key | 16 (the number of indexed columns, excluding STORING columns, plus the number of primary key columns in the base table) |
Query limits
| Value | Limit |
|---|---|
| Columns in a GROUP BY clause | 1,000 |
| Function calls | 1,000 |
| Joins | 15 |
| Nested function calls | 75 |
| Nested GROUP BY clauses | 35 |
| Nested subquery expressions | 25 |
| Nested subselect statements | 60 |
| Parameters | 950 |
| Query statement length | 1 million characters |
| STRUCT fields | 1,000 |
| Subquery expression children | 40 |
| Unions in a query | 200 |
Limits for creating, reading, updating, and deleting data
| Value | Limit |
|---|---|
| Commit size (including indexes) | 100 MB |
| Concurrent reads per session | 100 |
| Mutations per commit (including indexes)² | 20,000 |
| Concurrent Partitioned DML statements per database | 20,000 |
Administrative limits
| Value | Limit |
|---|---|
| Administrative actions request size³ | 1 MB |
| Rate for administrative actions | 5 per second per project (averaged over 100 seconds) |
Node limits
| Value | Limit |
|---|---|
| Nodes per project per instance configuration | 25 |
Request limits
| Value | Limit |
|---|---|
| Request size other than for commits⁴ | 10 MB |
Notes
1. To provide high availability and low latency for accessing a database, Cloud Spanner requires one node for every 2 TB of data in the database. For example, a database of 3.5 TB needs at least 2 nodes, which is sufficient until the database grows to 4 TB. Once it reaches 4 TB, you must add another node to allow the database to grow; otherwise, writes to the database will fail. For a smooth growth experience, add nodes before your database reaches this limit.
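The node requirement above is simple ceiling arithmetic. As a minimal sketch (not part of any official Cloud Spanner API; the function name is illustrative):

```python
import math

# Each Cloud Spanner node serves up to 2 TB of stored data (note 1).
TB_PER_NODE = 2

def min_nodes_for_storage(size_tb: float) -> int:
    """Minimum node count needed to stay under the 2 TB/node storage limit."""
    return max(1, math.ceil(size_tb / TB_PER_NODE))

print(min_nodes_for_storage(3.5))  # -> 2 (the example above)
print(min_nodes_for_storage(4.1))  # -> 3 (past 4 TB, a third node is needed)
```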
2. Insert and update operations count once for each column they affect. For example, inserting values into one key column and four non-key columns counts as five mutations. Delete and delete range operations count as one mutation regardless of the number of columns affected.
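The counting rule in note 2 can be sketched as follows; these helpers are illustrative only and are not part of the Cloud Spanner client library:

```python
def insert_mutation_count(key_columns: int, non_key_columns: int) -> int:
    """An insert or update counts one mutation per affected column (note 2)."""
    return key_columns + non_key_columns

def delete_mutation_count() -> int:
    """A delete or delete-range counts as a single mutation,
    regardless of how many columns the table has."""
    return 1

# One key column plus four non-key columns: five mutations toward
# the 20,000 mutations-per-commit limit.
print(insert_mutation_count(1, 4))  # -> 5
print(delete_mutation_count())      # -> 1
```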
3. The limit for an administrative action request excludes commits, requests listed in note 4, and schema changes.
4. This includes requests for creating a database, updating a database, reading, streaming reads, executing SQL queries, and executing streaming SQL queries.