Essentials of setting up a well-performing production SQL Server instance

When you set up a SQL Server instance in production, it is better to get it right the first time. Obviously you can't plan for the best performance when you don't know exactly how your data will grow, but there are a few key points that won't change regardless of your data growth pattern, and you'd better get them right from day one. Two components of a typical SQL Server system need special attention: the first is the log file, and the second is tempdb.

SQL Server writes every transaction to the log file as it happens, and applies the changes to data pages in memory; those pages might not be flushed to disk for a while. If the server crashes before a page is written, you can recover from the log file. Tempdb is also very important because SQL Server does much of its internal work in tempdb: index or table spooling, row versioning, etc. all happen there. Below is a list of precautions you can take to architect a well-performing mid-size SQL Server database (and by mid-size I mean anything above 1GB; for larger databases, say above 20GB, you also have to think about partitioning, filegroups, etc. as a minimum).

1- Keep the log file on a separate, dedicated, and fast drive. If you are using a virtual host, make sure to get a separate physical drive for ONLY your log file.

2- Run the SQL Server service under a specifically created account, not the generic service or network accounts. Make sure no one (especially service accounts) has any read or write access to the drive holding the log file. This ensures the disk head does not wander around, so SQL Server can write its log at the disk's advertised sequential write speed.

3- Use log shipping and log-based replication with caution: they read from the log file, so they can make the disk head dance instead of run.

4- Put tempdb on a very fast drive, ideally an SSD; at the very least make sure its drive is separate from the database file, the log file, and even the operating system's drive.

5- Make sure you have enough RAM. RAM is cheap, but its effect on performance is bigger than anything else. Run a task that periodically monitors the "Page Life Expectancy" counter (under the "SQLServer:Buffer Manager" object) and add more RAM as the value starts heading south (one way to read the counter is sketched below).
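
A minimal sketch of such a watcher, assuming a default (non-named) SQL Server instance and that the code runs somewhere it can see the instance's performance counters; the 300-second threshold and five-minute interval are illustrative values only, not recommendations:

using System;
using System.Diagnostics;
using System.Threading;

class PageLifeExpectancyMonitor
{
    static void Main()
    {
        // Counter names below are for a default instance; a named instance
        // exposes them under "MSSQL$<InstanceName>:Buffer Manager" instead.
        var ple = new PerformanceCounter("SQLServer:Buffer Manager", "Page life expectancy");

        while (true)
        {
            var seconds = ple.NextValue();
            Console.WriteLine("Page Life Expectancy: {0} seconds", seconds);

            // Illustrative threshold only: a consistently low value means pages are
            // being evicted from the buffer pool quickly, i.e. memory pressure.
            if (seconds < 300)
                Console.WriteLine("Consider adding RAM or investigating memory pressure.");

            Thread.Sleep(TimeSpan.FromMinutes(5));
        }
    }
}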

Obviously, the above considerations will not make your slow, dead database rock and roll, but they will ensure that the environment's effect on performance is minimal. Then you can use your smart SQL techniques to make the database fast.

Automatic Mid-tier Cache!

I have been thinking about this for a while. I started working on a Silverlight project last year, a nifty little business app: one of those apps that just sits there and works, you know, a web server, an average DB with a few million records, and a Silverlight client with a bunch of forms, grids, and menus.

Everything was straightforward and brainless: just follow a pattern and do the job. However, I was deeply dissatisfied with one aspect of the project: the caching. What I could not accept was that in the 21st century, when you have Entity Framework and IQueryable, you should still manually cache your data where appropriate and do all the pointless work of cache invalidation, loading, and so on. Apart from the pain of working on something that should have been automated, I wouldn't trust a programmer to decide which parts of the data should be cached and which shouldn't. Not that I don't believe they could do a good job of it; they simply don't have enough information (at development time) to make that decision.

Caching strategy should be based on user behaviour, and it changes over time. For example, at some stage lots of QLD pharmacies are queried, but next week the NSW users decide to get ready for their conferences and start hammering the system for NSW pharmacies.

Let me be clear about my expectations of a caching system. It should have the following characteristics:

  1. It should know what users are going to query a lot and cache that (and only that) part of the database.
  2. It should be able to re-use cached results. For example, if I say 1. "Give me all QLD pharmacies", and the next user says 2. "Give me all QLD Chemist Warehouses", the cache manager should be smart enough to run the new query 2. over the results of query 1., which were retrieved a few minutes ago (see the sketch after this list).
  3. It should optimize its indexes for performance based on the user queries.
  4. It should change the cache when user behaviour changes.
  5. It should call back to the database only when there is absolutely no way of answering the query from the cache.
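
As a rough sketch of point 2 (Pharmacy, NaiveQueryCache, and LoadQldPharmaciesFromDb are names I made up for this illustration, and there is no expression-tree analysis yet), the idea is simply that a narrower query can be answered by filtering a cached superset instead of hitting the database again:

using System;
using System.Collections.Generic;
using System.Linq;

class Pharmacy
{
    public string Name;
    public string State;
    public int PostCode;
}

class NaiveQueryCache
{
    private readonly Dictionary<string, List<Pharmacy>> _cache = new Dictionary<string, List<Pharmacy>>();

    // Returns the cached result set for a key, loading it from the database only once.
    public List<Pharmacy> GetOrLoad(string key, Func<List<Pharmacy>> loadFromDb)
    {
        List<Pharmacy> rows;
        if (!_cache.TryGetValue(key, out rows))
        {
            rows = loadFromDb();        // the only database round trip
            _cache[key] = rows;
        }
        return rows;
    }
}

class CacheReuseDemo
{
    static void Main()
    {
        var cache = new NaiveQueryCache();

        // Query 1: "all QLD pharmacies" goes to the database once.
        var qld = cache.GetOrLoad("State=QLD", LoadQldPharmaciesFromDb);

        // Query 2: a narrower query is answered from the cached result of query 1.
        var innerBrisbane = qld.Where(p => p.PostCode > 4000 && p.PostCode < 4079).ToList();
        Console.WriteLine(innerBrisbane.Count);
    }

    // Stand-in for the real data access layer.
    static List<Pharmacy> LoadQldPharmaciesFromDb()
    {
        return new List<Pharmacy>
        {
            new Pharmacy { Name = "New Farm Pharmacy", State = "QLD", PostCode = 4005 },
            new Pharmacy { Name = "Surfers Paradise Pharmacy", State = "QLD", PostCode = 4217 }
        };
    }
}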

The above requests may seem like a lot, but not really in 2011. All of these techniques are possible; in fact, DBMSs have been doing this kind of thing for ages. We also have IQueryable, which makes it even easier to build a decent caching system.

So let me write a few examples:

Q1: Pharmacies.Join( … Address …).Join( … State …).Where( s => s.State == "QLD").Select(…)

Q2: Pharmacies.Join( … Address …).Join( … State …).Where( a => a.State == "QLD" && a.PostCode > 4000 && a.PostCode < 4079).Select(…)

Q3: Pharmacies.Join( … Address …).Join( … State …).Where( s => s.State == "QLD").GroupBy(…).Where( pg => pg.Count() > 4).Select(…)

Q4: PharmacyStaff.Where( ps => ps.Position == "Manager").Select(…)

Q5: Pharmacies.Join( … Address …).Join( … State …).Join( … PharmacyStaff …).Where( s => s.State == "QLD" && s.Position == "Manager").Select(…)

Users log in to our system and do things that cause the above queries to be issued. Normally they would all be issued against the database, but that means our caching strategy is as dumb as a donkey. What I would really expect is that only Q1 and Q4 are run against the database. Q2, Q3, and Q5 can all be derived from the results of Q1 (and Q4), so if we already have those results it is such a waste to run the new queries against the database. Why not look at the expression tree and figure out that Q2 forms a query which is a subset of Q1? Then rewrite the queries as below:

Q1: not changed…

Q2: Q1.Where( a => a.PostCode > 4000 && a.PostCode < 4079).Select(…)

Q3: Q1.GroupBy(…).Where(…)

Q4: not changed…

Q5: Q1.Join(…Q4…).Select(…)

Check out the rewritten queries. Aren't they much better? We shouldn't expect the user or the programmer to waste time translating those queries by hand; the caching system should do it. It should be an IQueryable that reads the expression tree and translates it into a new expression tree that uses the data already in the cache whenever there is no need to query the database.
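
As a first, minimal step in that direction (plain System.Linq.Expressions here, nothing from my project, and WherePredicateCollector is a made-up name for this sketch), an ExpressionVisitor can pull the Where predicates out of a query's expression tree so they can later be compared with the predicates of queries that have already been answered:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

// Walks a query's expression tree and collects every Where predicate it finds.
class WherePredicateCollector : ExpressionVisitor
{
    public readonly List<LambdaExpression> Predicates = new List<LambdaExpression>();

    protected override Expression VisitMethodCall(MethodCallExpression node)
    {
        // Queryable.Where(source, predicate): the predicate arrives as a quoted lambda.
        if (node.Method.Name == "Where" && node.Arguments.Count == 2)
        {
            var quote = node.Arguments[1] as UnaryExpression;
            if (quote != null)
                Predicates.Add((LambdaExpression)quote.Operand);
        }
        return base.VisitMethodCall(node);
    }
}

class ExpressionDemo
{
    static void Main()
    {
        // A stand-in for the shape of Q2 above (anonymous objects instead of EF entities).
        var pharmacies = new[]
        {
            new { State = "QLD", PostCode = 4005 },
            new { State = "NSW", PostCode = 2000 }
        }.AsQueryable();

        var q2 = pharmacies.Where(a => a.State == "QLD" && a.PostCode > 4000 && a.PostCode < 4079);

        var collector = new WherePredicateCollector();
        collector.Visit(q2.Expression);

        Console.WriteLine(collector.Predicates.Count);   // 1
        Console.WriteLine(collector.Predicates[0]);      // the predicate lambda itself
    }
}

Comparing a collected predicate against the predicates of previously cached queries is exactly the view-matching problem described below.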

This especially makes sense in the cloud, where you have to pay for every query you run against SQL Azure.

Enough talking about dreams; let's get realistic! I did a bit of research and, as I expected, no such cache manager exists (if you know of one, tell me and save my hair). So I decided to do it myself. Check out the Auto-mid-tier-cache project, which I have already started. I haven't gone far with it yet. It is just a proof of concept and it implements no IQueryable. It uses a set of objects I defined myself for the relational algebra operators. It does very basic view-matching to find which query is a subset of which other, and it is able to translate queries to run against either the database or the cache.

I ran it and it worked fine, and a bunch of benchmarks demonstrated its effectiveness. What is left now is to complete the view-matching and write an IQueryable on top of it. A lot of work, but it is worth it.

I forgot to say that you can limit the cache size by setting a cost upper bound. The next issue is that the cache does not keep itself up to date, but that is really another story.

Column Store Databases

Most common RDBMSs, e.g. SQL Server, Oracle, etc., store data in rows. A bunch of rows forms a page, and a page is essentially the unit block that gets loaded into memory. This model is helpful when a large amount of data must be stored on disk: because disks are slow, you want to keep related information physically as close together as possible.

This model also has a dark side: it introduces redundancy into the data. Most of the data we use comes from a limited domain, for example lists of cities, post codes, countries, names, area codes, etc. We usually don't care much about the space, since storage is the cheapest thing to buy these days, yet the redundancy also comes at the price of performance.

If you want to count the number of distinct countries in your database of 10,000,000 records, you need to pass through the whole 10M rows. And if you then try to run expensive categorization, ontology extraction, etc. over multiple columns, you have to deal with tons of duplicate values, which slow you down.

Also, if you want to change the schema of a data set dynamically, you have to deal with extra complexity. That is where column store databases come to the rescue with dramatic performance improvements, although there are certain tasks that will be very slow on such databases.

Column store databases like MonetDB store each column's domain of values separately and maintain the relationships as pointers into that data. This is much like the traditional way of storing data in memory, where the actual data objects live somewhere and you organise a List<T> as a list of pointers to them: you keep one copy of the month names and everything else just points to that copy.
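
As a toy illustration of that pointer idea (EncodedColumn is my own made-up type, not MonetDB's actual storage format), dictionary-encoding a column stores each distinct value exactly once, so a query such as a distinct count touches only the tiny dictionary rather than every row:

using System;
using System.Collections.Generic;

// Toy dictionary-encoded column: each distinct value is stored once,
// and rows hold only small integer references into the dictionary.
class EncodedColumn
{
    private readonly List<string> _dictionary = new List<string>();                 // distinct values
    private readonly Dictionary<string, int> _ids = new Dictionary<string, int>();  // value -> id
    private readonly List<int> _rows = new List<int>();                             // one entry per row

    public void Add(string value)
    {
        int id;
        if (!_ids.TryGetValue(value, out id))
        {
            id = _dictionary.Count;
            _dictionary.Add(value);
            _ids[value] = id;
        }
        _rows.Add(id);
    }

    // Counting distinct countries only touches the dictionary,
    // not the 10,000,000 rows mentioned above.
    public int DistinctCount { get { return _dictionary.Count; } }

    public string this[int row] { get { return _dictionary[_rows[row]]; } }
}

class ColumnDemo
{
    static void Main()
    {
        var country = new EncodedColumn();
        for (var i = 0; i < 1000000; i++)
            country.Add(i % 2 == 0 ? "Australia" : "New Zealand");

        Console.WriteLine(country.DistinctCount);   // 2
        Console.WriteLine(country[12345]);          // "New Zealand"
    }
}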

UPDATE: MonetDB has an interesting property in that it is designed to break columns into chunks that fit into the CPU cache. The "memory wall" is a big problem for most modern DBMSs, because random access to large pages of data that do not fit in the CPU cache significantly reduces the performance of data processing. Column store databases also work very well with bulk operations and an optimised set of relational algebra operators called BQA (Bulk Query Algebra). Boncz's group at CWI, Amsterdam, who developed MonetDB, have an interesting little paper that describes the history and future of their work; I suggest you take a look (you can find it on Google Scholar).

Although column storage provides flexibility and good performance in memory, the story turns around on disk. Indeed, reading a full relation performs as if the relation were joined to another table for every single one of its columns.

I think column storage is a very good candidate for caching data. I have been playing with this concept in the auto-midtier-cache project a bit; you can take a look there as well.

Sampling data quality in distributed SQL Servers (Part 1)

Before going forward with this post, I feel I should say why sampling is so important, and why sample data quality matters. First of all, sampling is heavily used in various query optimization techniques. The key thing a query optimizer needs to know is the selectivity of a query: roughly, how many rows come back when the query is run against a table. This is extremely important for deciding the join order, and a wrong selectivity estimate can change the actual runtime of the query massively. For example, when joining three huge tables AxBxC together, where B is the many-to-many relation table, the query can be planned as (AxB)xC or Ax(BxC). You may ask, what is the difference? But imagine the query returns only one row from table C and 1 million rows from A. Which order do you use? Of course Ax(BxC), which saves a million lookups.

Sampling is an expensive way to estimate selectivity, because the query has to actually run (albeit over a much smaller set), but it can be the simplest, and sometimes the only, option. Other approaches include complex statistical modelling and collecting various statistics about the data, such as histograms.
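
As a minimal sketch of that sampling idea (plain in-memory C# with made-up names, not what a real optimizer does internally), the estimator runs the predicate over a small random sample and scales the hit rate up to the table size:

using System;
using System.Collections.Generic;
using System.Linq;

static class SelectivityEstimator
{
    // Estimated fraction of rows in 'table' that satisfy 'predicate',
    // computed from a random sample of 'sampleSize' rows (with replacement).
    public static double Estimate<T>(IList<T> table, Func<T, bool> predicate, int sampleSize)
    {
        var rnd = new Random();
        var hits = 0;
        for (var i = 0; i < sampleSize; i++)
        {
            if (predicate(table[rnd.Next(table.Count)]))
                hits++;
        }
        return (double)hits / sampleSize;
    }
}

class SamplingDemo
{
    static void Main()
    {
        // 1,000,000 fake rows; roughly 10% are "QLD".
        var states = Enumerable.Range(0, 1000000)
                               .Select(i => i % 10 == 0 ? "QLD" : "NSW")
                               .ToList();

        var selectivity = SelectivityEstimator.Estimate(states, s => s == "QLD", 1000);
        Console.WriteLine("Estimated matching rows: {0:N0}", selectivity * states.Count);
    }
}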

Although in the context of a single database histograms seem more appealing, in distributed databases they are not good, for several reasons that are out of the scope of this post.

I am personally more interested in distributed databases, not as they are discussed in the literature (federated databases) but in the sense of a more practical, collaborative enterprise system.

How to simulate bag access in Windows Azure table storage? (Part 1)

It is nice to hear that Microsoft is providing table storage; hopefully we can get it for non-Azure platforms as well. The idea is fast and scalable access to persisted objects without the limitations of the tabular world. No doubt relational databases are amazing and allow super complex queries and transactions, but the downside is their complexity of design and usage. It tends to be extremely hard to make relational data truly scalable while still satisfying service level agreements on response time, availability, etc.

Efforts to develop non-relational, non-schema-bound data stores are as old as databases themselves, and in the cloud era they make a lot of sense. For example, Mnesia is a lovely database designed to work with Erlang, with a LINQ-like query language. Suffice it to say it was developed in the 80s, is easy to scale, and provides 100% uptime (you get a mechanism for hot patching). I also read about RavenDB a few days ago, which is built on a similar motivation.

One important thing to remember when working with non-relational databases is that they are not relational. Thus, you don't run SQL scripts against them, and there are no joins, no views, no foreign keys, and no primary keys; those terms make sense for tabular data. Databases like table storage are semi-structured data stores. Structured data is tabular, and relational databases store it. Semi-structured data is XML, JSON, or any other form of persisted object. Unstructured data is the web, free-form text, etc.

Mnesia (as a pioneer of table-storage-like databases) stores data in sets and bags. A set is a table in which each record has a unique key. Fair enough, we are used to working with tables with primary keys, which is the same thing. But a bag is a table in which many records can share a key, so there may be no way to address a single row, because it does not have a unique key. (You may say, WTF? What happened to my candidate keys and primary keys? And my answer is, wait a minute: we are not in the relational world, so none of those terms exist here.)
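
To make the set versus bag distinction concrete (plain C# dictionaries here, not the Azure SDK), note how the bag gives you no handle to an individual row, only to the whole group sharing a key:

using System;
using System.Collections.Generic;

class SetVersusBagDemo
{
    static void Main()
    {
        // Set: exactly one record per key, so a single row can be addressed directly.
        var set = new Dictionary<string, string>();
        set["pharmacy-42"] = "New Farm Pharmacy";
        Console.WriteLine(set["pharmacy-42"]);

        // Bag: many records share a key, so there is no handle to one particular row.
        var bag = new Dictionary<string, List<string>>();
        AddToBag(bag, "QLD", "New Farm Pharmacy");
        AddToBag(bag, "QLD", "Surfers Paradise Pharmacy");
        Console.WriteLine(bag["QLD"].Count);   // you can only fetch the whole group
    }

    static void AddToBag(Dictionary<string, List<string>> bag, string key, string value)
    {
        List<string> rows;
        if (!bag.TryGetValue(key, out rows))
        {
            rows = new List<string>();
            bag[key] = rows;
        }
        rows.Add(value);
    }
}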

So what is the value of having a row in a table that we cannot access directly? It does have value. Bearing in mind again that table storage is not relational, a good design paradigm is to NEVER query anything except the key (and, for table storage, the partition key). Any other query (one not bounded by the partition key) is similar to a full table scan in your SQL Server database, and a full table (or index) scan is THE killer: you can never become scalable if you have a single operation that does a full table scan over your growing data.

to be continued…

A framework for in memory LINQ

I had a blog post comparing the performance of cache versus database. Unfortunately, LINQ to Objects takes a naive, straightforward approach to every query: a brute-force scan over everything. This should change at some point, and it annoys me a lot. Someone should step up and start fixing it. I will catch up on it someday after I have mowed my yard; I am too busy right now and my weeds are close to 2 meters tall. Here is my wish list: when I write this code


var indexedCollection = myMemColl.AddIndex( i => new { i.FName, i.LName }, IndexOptions.CreateStatistics );

var firstTen = indexedCollection
    .Where( i => i.FName > "c" && i.FName < "d" )
    .OrderBy( i => i.FName )
    .AsIndexedQueryable()
    .Take(10);

I want the indexed collection to smartly utilize my indexes and take the top 10 items for me without scanning the whole collection. Is that really such a big request in the 21st century?
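
As a rough sketch of what such an index might do under the hood (Person and SortedIndex are stand-in types of my own, not an existing library), keeping the items sorted by the indexed key lets a range query binary-search to its starting point and yield lazily, so Take(10) never walks the whole collection:

using System;
using System.Collections.Generic;
using System.Linq;

class Person
{
    public string FName;
    public string LName;
}

class SortedIndex
{
    private readonly List<Person> _byFName;

    public SortedIndex(IEnumerable<Person> source)
    {
        // Built once; kept sorted on the indexed key.
        _byFName = source.OrderBy(p => p.FName, StringComparer.Ordinal).ToList();
    }

    // Everyone with fromExclusive < FName < toExclusive, in FName order, yielded lazily.
    public IEnumerable<Person> Range(string fromExclusive, string toExclusive)
    {
        for (var i = LowerBound(fromExclusive); i < _byFName.Count; i++)
        {
            if (string.CompareOrdinal(_byFName[i].FName, toExclusive) >= 0)
                yield break;
            yield return _byFName[i];
        }
    }

    // Index of the first item whose FName is strictly greater than key (binary search).
    private int LowerBound(string key)
    {
        int lo = 0, hi = _byFName.Count;
        while (lo < hi)
        {
            var mid = (lo + hi) / 2;
            if (string.CompareOrdinal(_byFName[mid].FName, key) <= 0) lo = mid + 1;
            else hi = mid;
        }
        return lo;
    }
}

class IndexDemo
{
    static void Main()
    {
        var people = Enumerable.Range(0, 1000000)
                               .Select(i => new Person { FName = (char)('a' + i % 26) + i.ToString(), LName = "x" })
                               .ToList();

        var index = new SortedIndex(people);

        // Ten names between "c" and "d" without scanning the whole collection.
        var firstTen = index.Range("c", "d").Take(10).ToList();
        Console.WriteLine(firstTen.Count);   // 10
    }
}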

To Cache or Not To Cache? This is the Question!

How many times, while developing a business app, have you wondered whether you should cache data in your web server to reduce the number of database queries and improve performance?

Assuming you have unlimited memory, should you cache the whole table in memory (e.g. using ASP.NET output caching) or should you rely on the database and use LINQ to the database directly?

I guess the answer to this question is pretty obvious. We know the limitations of both methods: LINQ to Objects, so far, has no support for indexes and performs a memory scan for every query, while LINQ to the database has all the overhead of connections, query parsing, etc.

So let's do a little test to see what the actual limits of these methods are. We will set up a super simple DB with a table that contains two integers, then hammer the DB with queries and the cache with lookups for different cache sizes and compare the results.

Run the following query to set up our DB:

create table Smash( Id int primary key, Val int )
go

declare @i int = 0
while @i<1000000
begin
	insert into Smash (Id, Val)
	values (@i, @i)
	set @i = @i + 1
end
go

create nonclustered index [IX_val] on [dbo].[Smash]
(
	[Val] asc,
	[Id] asc
)
GO

Now that we have our DB ready, let's write the benchmark code.


        const int Cnt = 100;

        private void button1_Click(object sender, EventArgs e)
        {
            // Double the cache size each round and time both access paths.
            for (var cacheSize = 100; cacheSize < 1000000; cacheSize *= 2)
            {
                var cache = GetCacheAsBigAs(cacheSize);

                // Look up a value that is guaranteed to be inside the cached range.
                var i = cacheSize / 2;

                var time1 = QueryDBMultipleTimes(Cnt, i);
                WriteResult(time1, "DB", cacheSize);

                var time2 = QueryMemoryMultipleTimes(Cnt, i, cache);
                WriteResult(time2, "MEM", cacheSize);
            }
        }

And the query methods:


        private List<Smash> GetCacheAsBigAs(int cacheSize)
        {
            using (var db = new TempDataContext())
            {
                // Materialise the first cacheSize rows as our in-memory "cache".
                return db.Smashes.Take(cacheSize).ToList();
            }
        }

        private TimeSpan QueryMemoryMultipleTimes(int count, int lookup, IEnumerable<Smash> cache)
        {
            var t = DateTime.Now;
            for (var i = 0; i < count; i++)
            {
                // LINQ to Objects: a linear scan over the cached list.
                var v = cache.Where(s => s.Val == lookup).First().Val.Value;
            }
            return DateTime.Now - t;
        }

        private TimeSpan QueryDBMultipleTimes(int count, int lookup)
        {
            using (var db = new TempDataContext())
            {
                var t = DateTime.Now;
                for (var i = 0; i < count; i++)
                {
                    // LINQ to SQL: each iteration is a round trip that hits IX_val.
                    var v = db.Smashes.Where(s => s.Val == lookup).First().Val.Value;
                }
                return DateTime.Now - t;
            }
        }

Now comes the interesting part. I ran the above code and graphed the result:

The blue line represents cache access and the red one DB access. It is clear that the cache beats the DB hands down until the cache size reaches about 20% of the table size. It can also be seen that DB access has a steady response time; remember that we don't change the DB size at all during the test. However, this graph carries no new message: it is the same old story of linear versus tree access to data.

But don't be fooled by this experiment. There are a few things to note. First, our data structure is super simple; on a more complex query, with lookups and joins involved, the DB would drastically outperform memory thanks to all the query optimization effort that has gone into DBMS cores over the last 30 years. The graph also doesn't show the overhead of populating and re-populating the cache. Still, if your data is reasonably small (up to a few thousand records) and your query is fairly simple, cache it in the web server.