Thursday, 24 January 2013

Be aware of exception caching when using System.Lazy<T> to initialise shared objects

System.Lazy<T> gives us a nice declarative way of lazily creating object instances. The class is useful for a couple of reasons.

Firstly, the wrapped object is only initialised when the Value property is first accessed, allowing work to be deferred until it is actually needed, which can improve an application's start-up time. Quoting from this Martin Fowler article is mandatory here:
As many people know, if you're lazy about doing things you'll win when it turns out you don't need to do them at all.
In addition, Lazy<T> can be configured to control initialisation of the value when the Value property is accessed concurrently by multiple threads. This prevents nasty bugs from arising due to different threads getting different copies of an object, or multiple threads running non-thread-safe code concurrently.

Lazy<T>'s thread-safe behaviour makes it ideal for managing the initialisation of an object that is costly to create. A trivial example is shown below:
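
Something along these lines, where ExpensiveReport is a hypothetical class whose constructor does the slow work:

```csharp
using System;

public class ExpensiveReport
{
    public ExpensiveReport()
    {
        // Imagine this hitting the database or crunching a lot of data.
        Console.WriteLine("Building the report...");
    }
}

public class ReportingService
{
    // Nothing is built until Value is accessed for the first time;
    // by default the Lazy<T> is also safe to share between threads.
    private static readonly Lazy<ExpensiveReport> report =
        new Lazy<ExpensiveReport>(() => new ExpensiveReport());

    public ExpensiveReport Report
    {
        get { return report.Value; }
    }
}
```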


Here are some sort-of-real-world example scenarios where we might consider using a thread-safe Lazy<T> to initialise a shared object within our application:

  • Loading some information from a database when the application starts and storing it for use throughout the application
  • Working with a third party library or framework that requires you to maintain a single application-wide instance of an object. For example, it is intended that single NHibernate Configuration and ISessionFactory instances are shared by all threads within an application.

If you are doing something similar to the above and are thinking of using Lazy<T>, then you need to be aware of its thread safety modes and exception caching mechanism.

First, let's look at thread safety in more detail. To illustrate the thread safety options, here are some examples of how you can create instances of Lazy<T>:
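
A sketch of the most common overloads (Settings and CreateSettings are placeholders for whatever you are initialising):

```csharp
using System;
using System.Threading;

public class Settings { }

public static class LazyExamples
{
    private static Settings CreateSettings()
    {
        return new Settings();
    }

    public static void Demo()
    {
        // Uses LazyThreadSafetyMode.ExecutionAndPublication by default.
        var lazy1 = new Lazy<Settings>(() => CreateSettings());

        // isThreadSafe: true means ExecutionAndPublication, false means None.
        var lazy2 = new Lazy<Settings>(() => CreateSettings(), isThreadSafe: false);

        // Or specify the thread safety mode explicitly.
        var lazy3 = new Lazy<Settings>(() => CreateSettings(), LazyThreadSafetyMode.None);
        var lazy4 = new Lazy<Settings>(() => CreateSettings(), LazyThreadSafetyMode.ExecutionAndPublication);
        var lazy5 = new Lazy<Settings>(() => CreateSettings(), LazyThreadSafetyMode.PublicationOnly);
    }
}
```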

There are three LazyThreadSafetyModes that you can choose from:

  • None - The Lazy<T> instance is not thread-safe and multiple threads can initialise the value concurrently. Returning to the example scenarios above, this could result in multiple threads running the database-loading code, or in several NHibernate Configuration and ISessionFactory instances being created if a web application were under load.
  • ExecutionAndPublication - Locking is used to ensure that only a single thread initialises the value. Any other concurrent threads accessing the Value property will wait and receive the value initialised by the chosen thread. 
  • PublicationOnly - This one sounds quite fun... If the value has not yet been initialised, all threads that access the Value property concurrently get to have a go at initialising the value. The first to complete is the "winner" and all other threads accessing the Lazy<T>'s Value property will get the value initialised by the winning thread. 

ExecutionAndPublication is the most useful mode if you want to ensure that only a single thread is allowed to execute some costly initialisation functionality. As we saw in the Lazy<T> constructor examples above, Lazy<T> uses this thread safety mode by default, unless you specify another mode explicitly.
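
A rough sketch of this behaviour: the throwaway console program below hammers a default (ExecutionAndPublication) Lazy<int> from several threads, and the factory delegate only ever runs once:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        var initialisations = 0;

        // ExecutionAndPublication is the default mode.
        var lazy = new Lazy<int>(() =>
        {
            Thread.Sleep(100); // simulate slow initialisation
            return Interlocked.Increment(ref initialisations);
        });

        Parallel.For(0, 10, i => Console.WriteLine(lazy.Value));

        // Prints "1" ten times: one thread ran the factory, the others
        // blocked and then received the value it produced.
    }
}
```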

So far, so good. But what happens if something goes wrong during initialisation of a lazy value and an exception is thrown? The way that exceptions are dealt with varies according to thread safety mode. This is the full description of ExecutionAndPublication from MSDN (with my emphasis on exception caching):
Locks are used to ensure that only a single thread can initialize a Lazy<T> instance in a thread-safe manner. If the initialization method (or the default constructor, if there is no initialization method) uses locks internally, deadlocks can occur. If you use a Lazy<T> constructor that specifies an initialization method (valueFactory parameter), and if that initialization method throws an exception (or fails to handle an exception) the first time you call the Lazy<T>.Value property, then the exception is cached and thrown again on subsequent calls to the Lazy<T>.Value property. If you use a Lazy<T> constructor that does not specify an initialization method, exceptions that are thrown by the default constructor for T are not cached. In that case, a subsequent call to the Lazy<T>.Value property might successfully initialize the Lazy<T> instance. If the initialization method recursively accesses the Value property of the Lazy<T> instance, an InvalidOperationException is thrown.
This is quite subtle and I'm glad I RTFMed this.

To clarify:

  • If you create your Lazy<T> using an initialisation method (i.e., a "valueFactory" parameter), e.g. var lazy = new Lazy<T>(() => Initialise()), and that delegate throws an exception, no further attempts will be made to initialise the value. The original exception will be thrown every time the Value property is accessed (see the sketch after this list). 
  • If you don't use a delegate (i.e., you are specifying that Lazy<T> will initialise an instance of target type T using its default constructor), an exception thrown during initialisation will not be cached and further attempts will be made to create the value until initialisation succeeds.
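
Here is a sketch of the first case. The initialisation method below always throws; note that it only actually runs once, and the cached exception is re-thrown on every subsequent access to Value:

```csharp
using System;

class Program
{
    static void Main()
    {
        var attempts = 0;

        // Default mode: ExecutionAndPublication, so the exception is cached.
        var lazy = new Lazy<string>(() =>
        {
            attempts++;
            throw new InvalidOperationException("Initialisation failed (attempt " + attempts + ")");
        });

        for (var i = 0; i < 3; i++)
        {
            try
            {
                var value = lazy.Value;
            }
            catch (InvalidOperationException ex)
            {
                Console.WriteLine(ex.Message);
            }
        }

        // Prints "Initialisation failed (attempt 1)" three times: the delegate
        // ran once and the same exception was re-thrown on later accesses.
    }
}
```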

Let's consider this in the context of our sort-of-real-world example scenarios.

Firstly, imagine a web application that uses a Lazy<T> with an initialisation method that loads data from a database to initialise an object. If an exception were thrown by the initialisation method, perhaps due to a temporary connectivity issue, the same exception would be thrown every time the lazy value was accessed. Every request that required access to the lazy value would result in a server error.

The second scenario, using Lazy<T> to initialise an NHibernate Configuration or ISessionFactory, could also cripple the application for similar reasons. An exception would be thrown if the database specified in the configuration was not available when Configuration.BuildSessionFactory was called, and no further attempts would be made to build the session factory, even once the database became available.
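
To make that concrete, a shared session factory held in a Lazy<ISessionFactory> might look something like the sketch below (the configuration details are assumed):

```csharp
using System;
using NHibernate;
using NHibernate.Cfg;

public static class SessionFactoryHolder
{
    // ExecutionAndPublication by default: only one thread builds the factory,
    // but if BuildSessionFactory throws (e.g. the database is down), that
    // exception is cached and re-thrown for the lifetime of this Lazy<T>.
    private static readonly Lazy<ISessionFactory> factory =
        new Lazy<ISessionFactory>(() => new Configuration().Configure().BuildSessionFactory());

    public static ISessionFactory Factory
    {
        get { return factory.Value; }
    }
}
```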

So, let's conclude with ... a top tip: You must not use a thread-safe (ExecutionAndPublication) Lazy<T> with an initialisation method ("valueFactory" parameter) if there is any risk of the method failing. If it fails, it won't get the chance to run again, which could bring down your application.

If this helps just one or two developers out there I'll be happy. Let me know if this has helped you!

Some alternatives and work-arounds will be explored in a future post.

Thursday, 10 January 2013

Generating a script in SQL Server to delete data from your database

In my previous post, I described how a simple SQL script could be an effective way of resetting your database to a known state between tests in your database integration test suite.

Writing a SQL reset script manually for a large database is tedious though. Imagine having to go through 100 tables and work out what order to delete records in... Fortunately, most database engines expose data about tables and the relationships between them in system tables. You can use this information to derive the dependencies between your tables and automatically generate a sequence of delete statements in the correct order. In Isolating database data in integration tests, Jimmy Bogard demonstrates some C# code that uses the sysobjects and sysforeignkeys tables in SQL Server to do this.
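
The general shape of that kind of C# solution is sketched below. This is not Jimmy's code, and it uses the newer sys.tables / sys.foreign_keys catalog views rather than sysobjects and sysforeignkeys, but the idea is the same: read the foreign key relationships, then repeatedly emit deletes for tables that are no longer referenced by any remaining table.

```csharp
using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;

public static class DeleteScriptGenerator
{
    // Returns "delete from ..." statements in an order that respects
    // foreign key constraints (self-referencing keys are ignored).
    public static List<string> GetDeleteStatements(string connectionString)
    {
        var tables = new List<string>();
        var foreignKeys = new List<Tuple<string, string>>(); // (referencing, referenced)

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();

            using (var command = new SqlCommand(
                "select schema_name(schema_id) + '.' + name from sys.tables", connection))
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read()) tables.Add(reader.GetString(0));
            }

            using (var command = new SqlCommand(
                @"select schema_name(p.schema_id) + '.' + p.name,
                         schema_name(r.schema_id) + '.' + r.name
                  from sys.foreign_keys fk
                  join sys.tables p on p.object_id = fk.parent_object_id
                  join sys.tables r on r.object_id = fk.referenced_object_id
                  where fk.parent_object_id <> fk.referenced_object_id", connection))
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                    foreignKeys.Add(Tuple.Create(reader.GetString(0), reader.GetString(1)));
            }
        }

        var statements = new List<string>();
        var remaining = new HashSet<string>(tables);

        while (remaining.Count > 0)
        {
            // A table can be deleted once nothing that still holds data references it.
            var deletable = remaining
                .Where(t => !foreignKeys.Any(fk => fk.Item2 == t && remaining.Contains(fk.Item1)))
                .ToList();

            if (deletable.Count == 0)
                throw new InvalidOperationException(
                    "Circular foreign key references prevent an automatic delete order.");

            foreach (var table in deletable)
            {
                statements.Add("delete from " + table);
                remaining.Remove(table);
            }
        }

        return statements;
    }
}
```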

In some cases, you might prefer to have a SQL script that does a similar thing. No code to compile or connection string to configure, just a script that you can copy and paste into SQL Server Management Studio (or your client tool of choice) and execute against your application's database.

The following script analyses the tables and relationships in the current database and generates a sequence of “delete from [table]” statements in the correct order. If it can’t resolve the delete order for all tables, it reports the foreign key constraints that are preventing it from doing so.

Here it is:


Some database schemas may need you to hand-code parts of your SQL reset script. For example, on one project we had to modify the generated script to accommodate a self-referencing constraint:


The number of customisations required will vary from project to project. Hopefully the script will do 90% or more of the work for you.

Why not run the script against your application database and see whether it could help you? I’d be keen to hear from you if you have any comments or ideas.

Wednesday, 9 January 2013

Using a SQL script to reset data in database integration tests

Automated database tests need to be isolated and repeatable. It’s a lot simpler if each test begins with the database in a known state. The simplest way to do this is to delete unwanted data from the database before each test. As one of my colleagues likes to say: “We need to flatten the database”.

On a recent project, I went through a number of options for resetting the database, similar to those outlined by Jimmy Bogard in Isolating database data in integration tests. I also spent some time attempting to query and delete test objects via features of our ORM before realising that it wasn’t really set up to help me delete a complex graph of inter-related objects within a single unit of work.

Based on this experience, I am pleased to share a top tip: Direct SQL is the fastest and simplest mechanism for resetting your database and is probably the one that you should try first.

So how do we generate and run our SQL script? Jimmy describes a way of calculating the dependencies between tables at run time in some C# test infrastructure code and executing a bunch of “delete from xxx” statements. Nice. An automated solution like this will even accommodate changes to the database schema, er, automatically. However, there are some drawbacks. Diagnostics / analysis options are limited. If a schema / mapping change broke the assumptions in the code and made it impossible to generate the delete scripts in the correct order, we’d probably need to fire up the debugger and step through the code to work out which constraints were preventing the dependency order from being established.

Another option is to develop a SQL script yourself and execute it when preparing to run each test. It’s very simple and works as follows:
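
For example, a test base class along these lines runs a hand-written reset script before each test (the table names, connection string and use of NUnit are all just illustrative assumptions):

```csharp
using System.Data.SqlClient;
using NUnit.Framework;

public abstract class DatabaseTestBase
{
    private const string ConnectionString =
        "Server=.;Database=MyAppTests;Integrated Security=true";

    // Child tables are deleted before their parents so that foreign key
    // constraints are not violated.
    private const string ResetScript = @"
        delete from OrderLines;
        delete from Orders;
        delete from Customers;";

    [SetUp]
    public void ResetDatabase()
    {
        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand(ResetScript, connection))
        {
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}
```

Individual test fixtures then inherit from DatabaseTestBase and arrange whatever data they need on top of the freshly flattened database.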


This approach has a couple of advantages:
  • It is clear at a glance how the database is being reset
  • Sometimes you need to go beyond a series of “delete from [table]” statements and write some bespoke logic. Maybe you need to preserve a couple of records in a certain table. Perhaps there is one table in your 100-table system where you need to drop constraints, delete the data and re-create the constraints. A raw SQL script allows for this kind of manual intervention.

The main drawback, of course, is that you will need to update the script as the structure of your application's database changes, but you probably won't find that too annoying in practice.

Note that we don't have to hand-code our SQL from the ground up. See the next post for details of a T-SQL script that you can run directly in SQL Server to help generate your reset script.