
Transactions with MongoDB 4.0 and .Net

Multi-Document Transactions and MongoDB

TL;DR - MongoDB 4.0 supports multi-document transactions!

When you have two mutations (write operations), each affecting one document, Mongo used to apply each write as an independent commit. Consequently, there was a point in time at which mutation 1 was applied but mutation 2 was not. If 2 failed for whatever reason, 1 was still applied.

This behavior caused some pain when attempting to manage all-or-nothing style operations that affect multiple documents. For example, if you had to lend a book to a person you might have wanted the book marked as lent-out, and the library visitor to have the book appended to their lent-out list.

If marking the book succeeded but marking the visitor failed, then the book is lent to no one. If the visitor was marked first but marking the book failed, the book could be double-lent. Further, there is a point in time in which the visitor “has” the book, but the book is not yet “held”.

Now, there are potential structural solutions to this problem in a document-oriented world, but let’s go with transactions. The newly available transaction support lets you wrap a transaction around the 2 operations. I like thinking of a Mongo transaction in terms of visibility:

  • Before the “begin-transaction”, other readers see data as it was.
  • During the transaction’s lifetime, other readers see data as it was. Mutations happening during the transaction are not visible to readers (there are some choices here; more on that in a bit).
  • After the transaction commits, other readers see the results of all mutations.
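
To make these visibility rules concrete, here is a minimal sketch (the Book collection, bookId, and the client/db setup are hypothetical stand-ins, not from the lending demo below) of what a reader outside the transaction observes:

using (var session = client.StartSession())
{
    session.StartTransaction();

    var books = db.GetCollection<BsonDocument>("Book");
    books.UpdateOne(session,
        Builders<BsonDocument>.Filter.Eq("_id", bookId),
        Builders<BsonDocument>.Update.Set("lentOut", true));

    // A concurrent read WITHOUT the session handle still sees the
    // pre-transaction state of the document at this point.
    var outsideView = books.Find(Builders<BsonDocument>.Filter.Eq("_id", bookId)).First();

    session.CommitTransaction(); // only now does the mutation become visible to other readers
}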

Speaking of visibility, one of the core changes that needed to occur in the Mongo engine is marking OpLog entries with a global logical cluster time. Why is that important? Because transactions are really about controlling the visibility of written items across the Replica Set. The implementation of point-in-time reads is another key piece of the puzzle. This feature provides a view for reading such that the document states visible to the reader are of the same version they were when the read operation started. Modifications occurring during a long-running operation would not be exposed to the reader, so a consistent view is ensured.

To scope several operations into a transaction, Mongo relies on the already available session implementation. Sessions existed in 3.6, so the leap is smaller. A session groups a sequence of commands, and is tracked throughout the cluster. It is therefore already suitable for the notion of a transaction. All that was added is a syntax for a developer to pass along a session into the commands themselves. All method signatures that mutate data now accept a session handle in the latest official drivers. From a client perspective (using a supported driver version), creating a transaction looks like:

  1. Create a session.
  2. Issue some CRUD operations with the session handle.
  3. Commit the session.

Mongo will control the visibility of the constituent operations according to the session settings.

Consider this session initiation C# code:

Demo code based on the .NET driver, MongoDB.Driver version 2.7.0

using (var session = client.StartSession())
{
    session.StartTransaction(new TransactionOptions(
        readConcern: ReadConcern.Snapshot,
        writeConcern: WriteConcern.WMajority));

    // ...
}

A few things are readily apparent from this small code structure.

A session is a disposable object, so proper disposal is guaranteed by a using clause.

A session by itself is not a transaction. We explicitly start a transaction by calling the StartTransaction() method. Within a session, only one transaction may be “live” at a given time. Since we are within a using scope, this code has a low risk of breaking that rule.

TransactionOptions describes two key parts of the transaction treatment: read and write concerns. The write-concern describes the durability expectation of the mutations. Just like any Replica Set write, it lets us control the risk of roll-back of individual writes in case of primary transitions or other adverse events.

The read-concern describes the visibility mode of the mutations during the transaction: the time between the start and the would-be commit or abort command. As mentioned earlier, what happens during the transaction’s lifetime before it is finished - successfully or not - is really what transactions are about.

The particular setting of ReadConcern.Snapshot, when paired with a write-concern of WriteConcern.WMajority, guarantees that any reads occurring as part of a transaction view data that is majority-committed. Those reads are “stable” and should not roll back, since a majority of nodes has already applied that data. You might be tempted to use a weaker read-concern such as ReadConcern.Local or ReadConcern.Majority for the sake of speed. That choice may not be treated as you expect. For one, Mongo might “upgrade” the concern to a higher one such as snapshot. Further, Mongo does not guarantee that the writes won’t be rolled back in the face of adverse cluster events. In case of a rollback, your whole transaction might be rolled back, so what’s the point really?…
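
For illustration, opting for a weaker read-concern is just a different TransactionOptions value - though, as noted, the server may effectively treat it as a stronger one, and the rollback caveat remains:

session.StartTransaction(new TransactionOptions(
    readConcern: ReadConcern.Local,      // weaker; may be "upgraded" by the server
    writeConcern: WriteConcern.WMajority));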

Snapshot

Snapshot is a read-concern relating to read-your-own-writes and causal consistency. Causal consistency describes a relationship between operations where one causes the other: a read operation returning the value of the field count right after a write operation setting count = 123 expects the count to be 123. The write preceding the read caused the mutation, and the reader expects the value it sees to be “the one caused by the preceding operation”. An implied order is what this is about. As mentioned before, one of the underpinnings of transactions is a global timestamp, allowing a strict order of operations. Within a causally consistent session, pure read operations following a write are guaranteed to see the results of that write. It may seem trivial - desirable, certainly - but keep in mind that other concurrent writes may occur during your sequence, and they may affect the state of a document. Causal consistency assures that the state of a read document following a write is seen as that writer’s result.

In the diagram below, a session with causal consistency ensures the reader sees the results of its preceding write. A session without causal consistency does not ensure that, and depending on timing may result in Client One reading a document modified by Client Two rather than the result of Client One’s own write.

With and without causal relationship
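
As a sketch of the read-your-own-write case described above (the Counter collection, counterId, and setup are hypothetical), within a causally consistent session the read is guaranteed to observe the preceding write:

using (var session = client.StartSession(new ClientSessionOptions { CausalConsistency = true }))
{
    var counters = db.GetCollection<BsonDocument>("Counter");

    // Write: set count = 123, passing the session handle.
    counters.UpdateOne(session,
        Builders<BsonDocument>.Filter.Eq("_id", counterId),
        Builders<BsonDocument>.Update.Set("count", 123));

    // Read in the same session: causal consistency guarantees this
    // returns count = 123, even with other writers running concurrently.
    var doc = counters.Find(session, Builders<BsonDocument>.Filter.Eq("_id", counterId)).First();
}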

The default for creating a session is to create it with causal consistency. The code below creates a session with either the default value or the explicit option; both result in the same behavior.

// setting CausalConsistency explicitly
using (var session = client.StartSession(new ClientSessionOptions { CausalConsistency = true }))
{ ...

// setting CausalConsistency implicitly
using (var session = client.StartSession())
{ ...

Now we can state this: a transaction in a causally consistent session, with a read-concern of “snapshot” and a write-concern of “majority”, will view documents committed to a majority of the nodes. This guarantee extends to reads within the transaction: not only will the transaction writes succeed if majority-acknowledged, but the reads within the transaction will rely only on majority-committed documents as of the snapshot’s time. This shuts down the concern of having a transaction rely on document state which might be rolled back - once majority-committed, it won’t be.

Code it Up

The theory above gives us the background necessary to understand what’s going on. The code below implements a multi-document transaction touching 3 documents across 3 different collections.

The scenario is that we have some Tool which can be borrowed by some Person and is then logged in the LendingLedger. We start by creating a new session. We then perform the sequence of operations inside the transaction:

  1. Mark the tool as held by the person.
  2. Check that the tool was indeed found and marked.
  3. If the tool was already held, was not found at all, or the update failed, we throw an exception, which is then caught and aborts the transaction.
  4. Add a ledger entry detailing the tool, person, and time the tool was lent out.
  5. Increase the number of tools the person has by 1.

Under the cover of a transaction, performing this sequence gives us assurance that all three entities will be manipulated to satisfaction, or rolled back completely. Further, other operations running concurrently will not interfere with the operations happening inside this transaction.

For a more complete demonstration please see my GitHub repo.

using (var session = client.StartSession(new ClientSessionOptions { CausalConsistency = true }))
{
    session.StartTransaction(new TransactionOptions(
        readConcern: ReadConcern.Snapshot,
        writeConcern: WriteConcern.WMajority));

    try
    {
        var personCollection = db.GetCollection<Person>(nameof(Person));
        var toolCollection = db.GetCollection<Tool>(nameof(Tool));
        var lendLogCollection = db.GetCollection<LendingLedger>(nameof(LendingLedger));

        // Mark the tool as held by the person, but only if nobody holds it already.
        var holdTool = toolCollection.UpdateOne(session,
            Builders<Tool>.Filter.Eq(t => t.Id, toolId) & Builders<Tool>.Filter.Eq(t => t.HeldBy, null),
            Builders<Tool>.Update.Set(t => t.HeldBy, personId));

        if (holdTool.ModifiedCount != 1)
        {
            throw new InvalidOperationException($"Failed updating hold on tool {toolId}. It might be held or non-existent");
        }

        // Log the lending event in the ledger.
        lendLogCollection.InsertOne(session, new LendingLedger
        {
            ToolId = toolId,
            PersonId = personId,
            CheckOutTime = DateTime.UtcNow
        });

        // Bump the borrower's tool count.
        var toolCount = personCollection.UpdateOne(
            session,
            Builders<Person>.Filter.Eq(p => p.Id, personId),
            Builders<Person>.Update.Inc(p => p.ToolCount, 1)
        );

        if (toolCount.ModifiedCount != 1)
        {
            throw new InvalidOperationException($"Failed updating tool count on person {personId}");
        }
    }
    catch (Exception exception)
    {
        Logger.Error($"Caught exception during transaction, aborting: {exception.Message}.");
        session.AbortTransaction();
        throw;
    }

    session.CommitTransaction();
}
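
One practical note on the abort path: the 4.0-era drivers label errors where retrying the whole transaction is reasonable with “TransientTransactionError”. A common pattern - sketched here with RunLendingTransaction as a hypothetical wrapper around the code above - is to retry the entire transaction body on that label:

while (true)
{
    try
    {
        RunLendingTransaction(client, db, toolId, personId); // hypothetical wrapper around the code above
        break;
    }
    catch (MongoException e) when (e.HasErrorLabel("TransientTransactionError"))
    {
        // Transient failures (e.g. a primary election) are safe to retry
        // from the top: the aborted transaction left no partial state behind.
    }
}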

Epilog

Transactions have been long awaited by some. Others see transactions as a performance and scalability hindrance, placing an undue burden on the core engine. There are performance implications to transactions. Measuring those is tricky, because the effect depends on concurrency, velocity, and size of data. Transactions also introduce more controls on timing, with defaults favoring quick transactions and opting to abort rather than consume precious resources (the server-side knob is sketched after the quote below). How much overhead will a transaction introduce? I don’t know - better measure it. The documentation currently states only a vague warning:

…In most cases, multi-document transaction incurs a greater performance cost over single document writes, and the availability of multi-document transaction should not be a replacement for effective schema design …
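
On the timing controls mentioned above: by default, MongoDB 4.0 aborts transactions that run longer than the server parameter transactionLifetimeLimitSeconds (60 seconds by default). As a sketch, the knob can be adjusted with a setParameter command - shortened here to 30 seconds purely for illustration:

var admin = client.GetDatabase("admin");
admin.RunCommand<BsonDocument>(new BsonDocument
{
    { "setParameter", 1 },
    { "transactionLifetimeLimitSeconds", 30 } // default is 60; keep transactions short
});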

That warning is certainly something to consider, and I for one definitely model entities with the mindset of embedding where appropriate. After all - if you want completely “independent” entities with cross-references, there’s an app for that… We chose a document-oriented database for its document-oriented nature - let’s leverage that. A good rule of thumb is that if your RDBMS schema was translated into one-collection-per-table in MongoDB - try again.
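
To illustrate that embedding mindset with a hypothetical shape (not the demo’s schema): rather than translating a person table, a book table, and a join table into three collections, the lent-out detail can live inside the document that owns it:

public class Person
{
    public string Id { get; set; }
    public string Name { get; set; }

    // Embedded directly in the Person document - no join collection
    // needed when the list is only ever read alongside its person.
    public List<LentBook> LentOut { get; set; }
}

public class LentBook
{
    public string BookId { get; set; }
    public DateTime CheckOutTime { get; set; }
}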

Lastly, I should mention that the v4.0 release includes multi-document transactions on replica sets but not on sharded collections. Support for transactions on sharded collections is slated for v4.2.

Happy transacting!
