Concurrency and integrity with validates_uniqueness_of

Here’s another tidbit that I hadn’t noticed in the Rails docs before. I was looking at validations for uniqueness and saw this:

Using this validation method in conjunction with ActiveRecord::Base#save does not guarantee the absence of duplicate record insertions, because uniqueness checks on the application level are inherently prone to race conditions.
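To make the race concrete: the validation does a SELECT first and the save does an INSERT second, and nothing stops two requests from interleaving between the two steps. Here’s a minimal plain-Ruby sketch of that check-then-insert pattern (the names and the in-memory “table” are my own, not Rails API):

```ruby
# A toy in-memory "table" standing in for the users table.
emails = []

# The application-level uniqueness check, as validates_uniqueness_of
# performs it: look for an existing row before inserting.
unique_check = ->(list, value) { !list.include?(value) }

# Two concurrent requests both validate BEFORE either one saves:
a_valid = unique_check.call(emails, "bob@example.com")  # request A: passes
b_valid = unique_check.call(emails, "bob@example.com")  # request B: passes

emails << "bob@example.com" if a_valid  # request A saves
emails << "bob@example.com" if b_valid  # request B saves -- duplicate!

puts emails.count("bob@example.com")  # => 2
```

Both requests saw an empty table at validation time, so both inserts went through. Only the database itself can close that window.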

And the docs also offer some solutions:

This could even happen if you use transactions with the ‘serializable’ isolation level. There are several ways to get around this problem:

- By locking the database table before validating, and unlocking it after saving. However, table locking is very expensive, and thus not recommended.

- By locking a lock file before validating, and unlocking it after saving. This does not work if you’ve scaled your Rails application across multiple web servers (because they cannot share lock files, or cannot do that efficiently), and thus not recommended.

- Creating a unique index on the field, by using ActiveRecord::ConnectionAdapters::SchemaStatements#add_index. In the rare case that a race condition occurs, the database will guarantee the field’s uniqueness.
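In practice that last option is a migration plus a rescue for the database error. Here’s a hedged sketch (the table and column names are my own examples, and the exact exception class depends on your Rails version and database adapter):

```ruby
# Migration adding a unique index so the database, not the application,
# enforces uniqueness. Table/column names are illustrative.
class AddUniqueIndexToUsersEmail < ActiveRecord::Migration
  def self.up
    add_index :users, :email, :unique => true
  end

  def self.down
    remove_index :users, :email
  end
end

# If the race does occur, the losing INSERT violates the index and the
# adapter raises (ActiveRecord::StatementInvalid, or the more specific
# ActiveRecord::RecordNotUnique on newer Rails), which you can rescue and
# treat like a validation failure.
begin
  user.save!
rescue ActiveRecord::StatementInvalid
  user.errors.add(:email, "has already been taken")
end
```

You’d typically keep validates_uniqueness_of as well, since it gives friendly error messages in the common, non-racy case; the index is the backstop.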

This typically isn’t something you need to worry about until you’re handling real traffic at scale. So don’t sweat it too much until you get there, but be aware of the problem. Read the docs for more details.