One of my biggest pet peeves in .NET programming is that inefficiency is coddled, if not encouraged. This isn’t completely Microsoft’s fault – sometimes they provide great tools, but the tools aren’t used.

A few weeks ago, I was handed a C# program with the complaint that it was taking around five seconds to load. I quickly found the biggest problem: the program had several hundred options that were loaded from an options table in the database. These options were read into a List<> and then copied into member variables.

So the code looked something like this:

List<program_setting> Settings = new List<program_setting>();
using (SqlConnection conn = new SqlConnection(strConn))
{
   conn.Open();
   using (SqlCommand cmd = conn.CreateCommand())
   {
      cmd.CommandText =
         "SELECT property_name, property_value FROM program_settings";
      cmd.CommandType = CommandType.Text;
      using (SqlDataReader rs = cmd.ExecuteReader())
      {
         while (rs.Read())
            Settings.Add(new program_setting(((String)rs["property_name"]).Trim(),
                                             ((String)rs["property_value"]).Trim()));
      }
   }
}

Properties were then referenced using a function FindProperty:

String FindProperty(String Property)
{
   foreach (program_setting setting in Settings)
   {
      if (setting.SettingName == Property)
         return setting.SettingValue;
   }
   return "";
}

With a List<>, the time required to locate a single item is proportional to the number of items in the list. In asymptotic analysis, we refer to this as O(n).

There are at least three improvements that can be made on this data structure. The first, most basic improvement is that we could sort the list. Sorting is itself a lengthy operation, which would eat up any gain we hope to make, but fortunately all of these values are coming from a database, which, one would hope, will have an index on property_name, so we can change our query to get a pre-sorted list:

SELECT property_name, property_value
FROM program_settings
ORDER BY property_name

Now a simple binary search of our list reduces our lookup time to one based on log2 n, or O(log n). Fortunately, Microsoft gives us a class built for exactly this need: SortedList. In a SortedList, elements are stored as a sorted array and searched using a binary search.
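To make this concrete, here is a small sketch of the same settings loaded into a generic SortedList<TKey, TValue>. The database plumbing is omitted, and the setting names and values are made up for illustration:

```csharp
using System;
using System.Collections.Generic;

class SortedListDemo
{
    static void Main()
    {
        // In the real program these pairs come from the program_settings
        // table; the values here are invented sample data.
        var settings = new SortedList<string, string>
        {
            { "smtp_server", "mail.example.com" },
            { "page_size", "25" },
            { "company_name", "Acme" }
        };

        // Keys are kept in a sorted array internally, so each lookup
        // is a binary search: O(log n) instead of List<>'s O(n) scan.
        Console.WriteLine(settings["page_size"]); // prints "25"

        // TryGetValue handles a missing key without throwing.
        string value;
        Console.WriteLine(settings.TryGetValue("no_such_key", out value)
            ? value : "(not set)");
    }
}
```

Note that SortedList sorts on insert, so it benefits most when, as here, the data already arrives in roughly sorted order.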

A second potential improvement is to store elements in a tree. In our particular example, where we can pre-sort the list elements, this isn't particularly useful. But when your elements not only arrive unsorted but are also being added and removed frequently, a tree gives you a substantial performance improvement. To meet this need, Microsoft offers SortedDictionary, which stores its elements in a balanced binary search tree. The search tree, like the search array, has a search time of O(log n), but it makes insertion and removal faster when data is not presorted.
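A quick sketch of that scenario: keys arrive in no particular order and one is later removed, yet the tree stays sorted the whole time with no re-sort step. The setting names and values are again made up:

```csharp
using System;
using System.Collections.Generic;

class SortedDictionaryDemo
{
    static void Main()
    {
        // Insertions in arbitrary order; the tree rebalances as we go,
        // so each insert is O(log n) with no separate sort pass.
        var settings = new SortedDictionary<string, string>();
        settings["theme"] = "classic";
        settings["page_size"] = "25";
        settings["company_name"] = "Acme"; // invented sample data

        settings.Remove("theme"); // removal is also O(log n)

        // Enumeration comes back in key order automatically.
        foreach (KeyValuePair<string, string> kv in settings)
            Console.WriteLine(kv.Key + " = " + kv.Value);
    }
}
```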

A third potential improvement is to use what is called a hash table. Imagine if we had a separate bucket for each letter of the alphabet. Settings that begin with the letter "A" go in one list, settings that begin with the letter "B" go in another list, and so forth. However long it would have taken to look up our list element, we have, on average, just divided it by 26. Add a second letter and we have divided it by 676. But suppose that instead of using the first two letters, we did some mathematical operation on the property name that gives us a (relatively) unique value. This operation is called a "hash function," and it lets a search run in near-constant time, with only a negligible increase as the number of items grows. For a hash search, Microsoft provides the Dictionary class.
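Putting it together, the original FindProperty could be rewritten over a Dictionary. This is a sketch, with invented sample settings standing in for the database rows:

```csharp
using System;
using System.Collections.Generic;

class DictionaryDemo
{
    // Stand-in for the original Settings list; values are made up.
    static Dictionary<string, string> Settings = new Dictionary<string, string>
    {
        { "smtp_server", "mail.example.com" },
        { "page_size", "25" }
    };

    // Same contract as the original FindProperty (empty string when the
    // property is absent), but the hash lookup is near-constant time
    // instead of a linear scan.
    static String FindProperty(String property)
    {
        string value;
        return Settings.TryGetValue(property, out value) ? value : "";
    }

    static void Main()
    {
        Console.WriteLine(FindProperty("page_size"));    // prints "25"
        Console.WriteLine(FindProperty("missing").Length); // prints "0"
    }
}
```

Since String.GetHashCode does the hashing for us, no code beyond the lookup itself is needed.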

Do you have a process that is running slowly? Talk to our eBusiness Solutions team and let’s make sure your website is properly designed and running efficiently.

Photo credit: Sese Ingolstadt
