A batched write can contain up to 500 operations. Each operation in the batch counts separately towards your Cloud Firestore usage. Within a write operation, field transforms like serverTimestamp, arrayUnion, and increment each count as an additional operation.
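As an illustration (using the Python Admin SDK, which re-exports the google-cloud-firestore transform sentinels; the collection and field names here are made up), the single update below counts as three operations: one for the write itself plus one per field transform:

```python
import firebase_admin
from firebase_admin import firestore

firebase_admin.initialize_app()
db = firestore.client()

batch = db.batch()
# One write + two field transforms = 3 of the 500 allowed operations.
batch.update(db.collection("stats").document("homepage"), {
    "updated_at": firestore.SERVER_TIMESTAMP,  # field transform (+1)
    "visits": firestore.Increment(1),          # field transform (+1)
})
batch.commit()
```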
Solution 1: Auto commit when limit reached
Create a wrapper class for WriteBatch
```python
class FirestoreAutoWriteBatch:
    """Wraps a WriteBatch and commits automatically once `limit` writes accumulate.

    The default limit of 100 leaves ample headroom under the 500-operation
    cap, since field transforms count as extra operations.
    """

    def __init__(self, batch, limit=100, auto_commit=True):
        self._batch = batch
        self._limit = limit
        self._auto_commit = auto_commit
        self._count = 0

    def create(self, *args, **kwargs):
        self._batch.create(*args, **kwargs)
        self._count += 1
        if self._auto_commit:
            self.commit_if_limit()

    def set(self, *args, **kwargs):
        self._batch.set(*args, **kwargs)
        self._count += 1
        if self._auto_commit:
            self.commit_if_limit()

    def update(self, *args, **kwargs):
        self._batch.update(*args, **kwargs)
        self._count += 1
        if self._auto_commit:
            self.commit_if_limit()

    def delete(self, *args, **kwargs):
        self._batch.delete(*args, **kwargs)
        self._count += 1
        if self._auto_commit:
            self.commit_if_limit()

    def commit(self, *args, **kwargs):
        self._batch.commit(*args, **kwargs)
        self._count = 0

    def commit_if_limit(self):
        # Commit and reset the counter once the limit is reached; the
        # underlying WriteBatch is reusable after commit().
        if self._count >= self._limit:
            self._batch.commit()
            self._count = 0
```
Usage
```python
import firebase_admin
from firebase_admin import firestore

firebase_admin.initialize_app()

db = firestore.client()
batch = FirestoreAutoWriteBatch(db.batch())
```
NOTE: You can use the batch exactly as you would a plain WriteBatch; no other code changes are needed.
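For example, a bulk delete might look like this (the "logs" collection is just an illustration):

```python
# Writes commit automatically every 100 operations.
for doc in db.collection("logs").stream():
    batch.delete(doc.reference)

# Flush whatever is left below the limit.
batch.commit()
```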
Solution 2: Create multiple batches and commit at the end
You can rewrite FirestoreAutoWriteBatch so that, when the limit is reached, it appends the full batch to a list and starts a new one instead of committing. At the end, the commit method loops through all the stored batches and commits each one.
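A minimal sketch of that variant, assuming the db client is passed to the constructor; only set is shown, and create, update, and delete would follow the same pattern (the class name is just a placeholder):

```python
class FirestoreMultiWriteBatch:
    def __init__(self, db, limit=100):
        self._db = db
        self._limit = limit
        self._batches = [db.batch()]
        self._count = 0

    def set(self, *args, **kwargs):
        self._batches[-1].set(*args, **kwargs)
        self._count += 1
        if self._count >= self._limit:
            # Start a fresh batch instead of committing right away.
            self._batches.append(self._db.batch())
            self._count = 0

    def commit(self):
        # Commit every stored batch in order, then reset.
        for batch in self._batches:
            batch.commit()
        self._batches = [self._db.batch()]
        self._count = 0
```

Note that each stored batch still commits atomically on its own; deferring the commits does not make the whole set of batches atomic.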