
SQL FAQ – How does in-memory computing affect SQL Server recovery?

One of the most anticipated features of Microsoft SQL Server 2014 was the introduction of a new in-memory OLTP engine, code-named Hekaton. Hekaton is designed to speed up OLTP transactions by holding entire database tables in RAM, rather than having to read and write data from disk.

The speed advantage of RAM over hard disk drives is well known – it even outperforms solid state disks (SSDs). However, this speed comes at a cost: any data held only in RAM is lost when the server reboots or crashes. Obviously this is unacceptable for mission-critical applications, or those that need to comply with strict industry regulations. So how do you build a database system that balances this speed with the need for SQL Server recovery if something goes wrong?

Built-in provisions

Hekaton allows DBAs to define high-priority tables and objects as memory-optimised, so they are held in RAM for super-fast querying. However, the underlying architecture has also been engineered to keep a durable, on-disk copy of that data, using the mechanisms described below.
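As a rough sketch, this is how a DBA might mark a table as memory-optimised and fully durable in SQL Server 2014; the database, filegroup, path and table names are placeholders rather than recommendations:

    -- A memory-optimised filegroup must exist before any in-memory table can be created
    ALTER DATABASE InMemoryDemo
        ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;
    ALTER DATABASE InMemoryDemo
        ADD FILE (NAME = 'imoltp_container', FILENAME = 'C:\Data\imoltp_container')
        TO FILEGROUP imoltp_fg;

    -- DURABILITY = SCHEMA_AND_DATA tells Hekaton to log changes and to write
    -- Data and Delta checkpoint files so the table's contents survive a restart
    CREATE TABLE dbo.ShoppingCart
    (
        CartId  INT       NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        UserId  INT       NOT NULL,
        Created DATETIME2 NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

Tables created with DURABILITY = SCHEMA_ONLY, by contrast, keep only their structure after a restart and lose their data, so they are only suitable for transient or staging workloads.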

Transaction log

Just like a standard disk-based database, Hekaton relies on the transaction log to track operations and assist with SQL Server recovery. Memory-optimised tables share the same transaction log as disk-based tables, and Hekaton groups the changes made by a transaction into fewer, larger log records, so that writes to disk are efficiently batched.
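By default, every commit against a memory-optimised table is fully durable, but SQL Server 2014 also offers delayed durability, one way of deliberately batching log flushes. A minimal sketch, again using the placeholder names from above:

    -- Allow memory-optimised tables to be used in explicit transactions without hints,
    -- and allow individual commits to opt in to delayed (batched) log flushes
    ALTER DATABASE InMemoryDemo SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT = ON;
    ALTER DATABASE InMemoryDemo SET DELAYED_DURABILITY = ALLOWED;

    BEGIN TRANSACTION;
        INSERT INTO dbo.ShoppingCart (CartId, UserId, Created)
        VALUES (1, 42, SYSUTCDATETIME());
    -- The commit returns before its log records are hardened to disk;
    -- the log buffer is flushed to disk in batches in the background
    COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);

The trade-off is a small window in which a committed transaction could be lost if the server fails before the batch is flushed, which is why this is an opt-in setting rather than the default.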

Understanding the Data File

Rather than the traditional page structures used for data rows, Hekaton uses pairs of sequential files – Data and Delta files. The Data file contains free-form data rows that are linked together using in-memory indexes. Every time a row is inserted, or a new version of a row is created by an UPDATE, the new row is appended to the end of the Data file.

To maintain speed of operation and an accurate audit trail, when a row in an in-memory table is deleted or updated, the existing row is not removed from, or changed in, the Data file.

Understanding the Delta File

Every Data file is paired with a Delta file. The Delta file stores information about every row that is updated in or deleted from the corresponding Data file. Every time a row is deleted, or superseded by an update, a reference to that row is added to the end of the Delta file (remember, the row itself is not removed from the Data file, to maintain performance).
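To make the relationship concrete, here is what happens behind the scenes when rows in the hypothetical table created earlier are changed; this is a sketch of the behaviour described above, not output from a real system:

    -- The UPDATE creates a new row version; the checkpoint process appends it to a
    -- Data file and records the superseded version in the matching Delta file
    UPDATE dbo.ShoppingCart
    SET    UserId = 43
    WHERE  CartId = 1;

    -- The DELETE does not touch the Data file; a reference to the deleted row
    -- is simply added to the corresponding Delta file
    DELETE FROM dbo.ShoppingCart
    WHERE  CartId = 1;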

Both Data and Delta files are written to disk by a background checkpoint thread. New file pairs are created automatically as system-defined size limits are reached. Over time, adjacent pairs are merged, discarding rows marked as deleted, which reclaims storage, keeps recovery times short and ensures that performance levels are maintained.
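If you want to watch these files at work, SQL Server 2014 exposes them through a dynamic management view. A hedged example – the column names below are as documented for SQL Server 2014 and may differ in later versions:

    -- One row per checkpoint file; file_type_desc distinguishes DATA from DELTA files,
    -- and state_desc shows the lifecycle stage (PRECREATED, UNDER CONSTRUCTION,
    -- ACTIVE, MERGE TARGET, and so on)
    SELECT file_type_desc,
           state_desc,
           file_size_in_bytes,
           relative_file_path
    FROM   sys.dm_db_xtp_checkpoint_files
    ORDER  BY file_type_desc, state_desc;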

What happens when Microsoft SQL Server crashes?

The key question for DBAs considering the use of Hekaton and in-memory OLTP is ‘what happens when the server crashes?’ Fortunately, Hekaton can cope, using the Data and Delta files to perform an automated SQL Server recovery of sorts.

Using the transaction log and the Data and Delta files, Hekaton can automatically rebuild the in-memory tables, significantly reducing downtime. Outages are further shortened because Data and Delta files are loaded and processed in parallel. Finally, the transaction log is applied to bring the data back to the point at which the server crashed.

Because of this approach, Microsoft claim that no additional backup provisions are required to protect against data loss. This also means that SQL Server recovery should be relatively straightforward, even for Hekaton databases.
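In practice, that means the familiar tools still apply. A simple sketch, with placeholder names and paths, of backing up and restoring a database that contains memory-optimised tables:

    -- A standard full backup includes the log records and checkpoint file contents
    -- needed to rebuild memory-optimised tables
    BACKUP DATABASE InMemoryDemo
        TO DISK = 'C:\Backups\InMemoryDemo.bak'
        WITH CHECKSUM, INIT;

    -- On restore (as after a crash), the Data and Delta files are streamed back
    -- into memory and the log is replayed to complete recovery
    RESTORE DATABASE InMemoryDemo
        FROM DISK = 'C:\Backups\InMemoryDemo.bak'
        WITH REPLACE, RECOVERY;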

Other SQL Server Recovery options

As it stands, there are as yet no third-party solutions designed specifically for use with the in-memory features of Microsoft SQL Server 2014. Similar in-memory database solutions from Oracle and SAP HANA use other frameworks that appear to be less resilient than SQL Server's and, as a result, rely on more third-party “pipes” to capture changes to data as they flow in and out of the database engine.

The good news for DBAs and systems architects is that Microsoft SQL Server 2014 has made in-memory processing more affordable, accessible and resilient.
