C#: Implicit type cast and class conflict

Before jumping into the topic, be clear on 3 things:

  1. This post is about implicit operator overloading.
  2. This is a C# only language feature.
  3. This post is gonna focus on how to resolve class conflicts from multiple web references. If you are only wondering what the implicit operator is, head to this intro page.

If you haven’t lost your way, then let the party begin 😀

The Problem

It’s perfectly fine if you have multiple classes using the same name in different namespaces. People usually don’t do that deliberately, but it can easily happen when adding multiple web references from the same provider.

One good example is the Bing Maps SOAP services. There are 4 SOAP services that you can reference from Bing Maps: Geocode, Imagery, Route, and Search. When you add one of these services to your project as a web reference, you need to give it a namespace, and Visual Studio auto-generates the necessary proxy classes to be used with the web reference. If you add a second one, you need to give it a different namespace. So far it’s pretty straightforward.

Now, the problem is that on the Bing Maps server side, all of the services could be sharing some common classes in their back-end. These common classes are usually located under a common namespace, and all of the services just import this namespace and use the same common classes. However, when you add the web references, you give a different namespace to each one of them, and the generated classes all live under the namespace you’ve given to the respective reference. Therefore, if you add all 4 Bing Maps web references, you will notice that there are some duplicate classes in each namespace. The ExecutionOptions class, for example, exists in all 4 namespaces.

When you have created an ExecutionOptions object under the Geocode service namespace and you want to use it with the other services, the compiler complains that the object you’ve created can’t be used with them, because it’s not in the correct namespace.

The normal way of getting over this problem is to re-create one ExecutionOptions object for each namespace:

using MyApp.BingMaps.Geocode;
using MyApp.BingMaps.Imagery;
using MyApp.BingMaps.Route;
using MyApp.BingMaps.Search;


var exOptGeo = new BingMaps.Geocode.ExecutionOptions();
exOptGeo.SuppressFaults = true;
var exOptImg = new BingMaps.Imagery.ExecutionOptions();
exOptImg.SuppressFaults = true;
var exOptRte = new BingMaps.Route.ExecutionOptions();
exOptRte.SuppressFaults = true;
var exOptSrh = new BingMaps.Search.ExecutionOptions();
exOptSrh.SuppressFaults = true;

var geoReq = new GeocodeRequest { ExecutionOptions = exOptGeo };
var imgReq = new ImageryRequest { ExecutionOptions = exOptImg };
var rteReq = new RouteRequest { ExecutionOptions = exOptRte };
var srhReq = new SearchRequest { ExecutionOptions = exOptSrh };

The code looks very repetitive this way, and it gets much more tedious if the object to re-create is complex.

Since the request objects are all different, we can’t really re-use them, but we can definitely re-use the ExecutionOptions object, which is 100% the same in all 4 namespaces.

Implicit Operator Overloading

Our solution is implicit operator overloading. The implicit operator is mainly used to eliminate unnecessary casts and improve source code readability: it does not require the programmer to cast one type to another explicitly. In our case, it will be used to eliminate the re-creation code for each ExecutionOptions object.

Conveniently, all of the generated proxy classes are partial classes. Without modifying the generated code, we can create our own class file that extends any of the proxy classes like this:

namespace MyApp.BingMaps.Imagery
{
    public partial class ExecutionOptions
    {
        public int NewProperty { get; set; }
    }
}

By extending the ExecutionOptions class like above, you can directly use NewProperty like any other generated property:

var exOpt = new BingMaps.Imagery.ExecutionOptions();
exOpt.NewProperty = 1;

Knowing this, we can overload the implicit operator in this partial class as below:

namespace MyApp.BingMaps.Imagery
{
    public partial class ExecutionOptions
    {
        public static implicit operator ExecutionOptions(MyApp.BingMaps.Geocode.ExecutionOptions value)
        {
            return new ExecutionOptions { SuppressFaults = value.SuppressFaults };
        }
    }
}

The implicit operator is a shortcut that re-creates an object of the target type from another type. In our case, we overload the implicit operator in the ExecutionOptions partial class under the BingMaps.Imagery namespace. The operator automatically handles the type cast from a BingMaps.Geocode.ExecutionOptions object by re-creating a local ExecutionOptions instance inside the class itself.

Now, we can do an implicit type cast without even noticing it:

var exOpt = new BingMaps.Geocode.ExecutionOptions { SuppressFaults = true };
var imgReq = new ImageryRequest { ExecutionOptions = exOpt };

As you can see, we can assign a BingMaps.Geocode.ExecutionOptions to a BingMaps.Imagery.ExecutionOptions without even casting it. If we overload the implicit operator for the ExecutionOptions class in each of the other 3 namespaces, we can simplify our code to something like this:

var exOpt = new BingMaps.Geocode.ExecutionOptions { SuppressFaults = true };

var geoReq = new GeocodeRequest { ExecutionOptions = exOpt };
var imgReq = new ImageryRequest { ExecutionOptions = exOpt };
var rteReq = new RouteRequest { ExecutionOptions = exOpt };
var srhReq = new SearchRequest { ExecutionOptions = exOpt };
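Putting the pieces together, here is a minimal, self-contained sketch of the technique. The classes below are simplified stand-ins for the generated proxy classes (only SuppressFaults is modelled), not the real Bing Maps proxies:

```csharp
using System;

namespace MyApp.BingMaps.Geocode
{
    // Simplified stand-in for the generated proxy class.
    public partial class ExecutionOptions
    {
        public bool SuppressFaults { get; set; }
    }
}

namespace MyApp.BingMaps.Imagery
{
    // Simplified stand-in for the generated proxy class, extended in
    // a separate partial with the implicit operator.
    public partial class ExecutionOptions
    {
        public bool SuppressFaults { get; set; }

        // Re-creates an equivalent Imagery object from the Geocode variant,
        // so the compiler performs the conversion automatically.
        public static implicit operator ExecutionOptions(MyApp.BingMaps.Geocode.ExecutionOptions value)
        {
            return new ExecutionOptions { SuppressFaults = value.SuppressFaults };
        }
    }
}

namespace MyApp
{
    public static class Demo
    {
        public static void Main()
        {
            var exOpt = new BingMaps.Geocode.ExecutionOptions { SuppressFaults = true };

            // No explicit cast needed: the implicit operator kicks in here.
            BingMaps.Imagery.ExecutionOptions imgOpt = exOpt;

            Console.WriteLine(imgOpt.SuppressFaults); // prints "True"
        }
    }
}
```

Note that the operator lives in the *target* type's partial class, which is why the generated code never needs to be touched.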

Update: “var” is used in the code samples to shorten the code for better readability. Thanks for the tip Jeff 😉

SQL: Recompile after DB restore

I have recently come to realise an issue with database restoring in SQL Server 2005.

Depending on how the database is restored from another database, there is a chance that some stored procedures stop working (AFAIK this can be random), because their execution plan is no longer relevant. The typical symptom is that when you run a stored proc, it simply sticks in the running state forever. This requires recompiling the stored proc.

Recompiling stored procs is quite simple, and it can be set up as an automatic SQL job after each restore to prevent broken stored procs.

Recompiling all stored procs that access a key table

EXEC sp_recompile 'table_name';

Otherwise we can also target a specific stored proc

EXEC sp_recompile 'sp_name';

If identifying these stored procs is not realistic (due to its randomness), we can also consider recompiling all DB objects by using sp_MSforeachtable

EXEC sp_MSforeachtable @command1="EXEC sp_recompile '?'";

What sp_recompile does is simply mark the target object for recompilation; the object is then recompiled the next time it is accessed. Even so, the last option could take a long time, because it loops through every table in the DB and marks every object that depends on each table for recompilation.

Source: http://www.sqlservercurry.com/2009/06/recompiling-stored-procedures-in-sql.html

WP7: Toolkit Feb 2011 Release

Silverlight Toolkit

Head over to CodePlex to grab the just-released Feb 2011 Silverlight for Windows Phone Toolkit.

In this version the toolkit brings you a default tilt effect on most common controls and also includes the PerformanceProgressBar by Jeff Wilcox.

Definitely not something huge, but why not just grab it and recompile your project while it’s completely free?

Update: I’ve also noticed that page transition animations are now using better easing functions than the previous version.

Brilliant comment of the day :D

Today, while going through some very old server code, I found the following comment:

    (skipping 200+ lines of code here)
Catch ex As Exception
    '' DB server probably dropped her shit again.
    '' TODO: now since you've reached this far, might as well log the error and send an email to dbadmin@m*****al.com.
End Try

As if this is not enough to entertain your day, look here and don’t get busted by your boss :D.

RX: A reactive approach to event handling

Microsoft released RX (Reactive Extensions) for .NET in November 2009, but I bet only 1% of developers are really using it so far. Apart from the fact that Microsoft is not really pushing it, the real reason, IMO, could be that it is only suitable for some very specific development cases, and that its idea is quite hard to get your head around from scratch, because it uses a reversed logic to process objects.

Let’s put it simply:

  1. RX is the library to support Reactive Programming.
  2. Reactive Programming means your code “reacts” to something when it becomes available, instead of actively “acting” on it. So it’s more of a passive approach.
  3. RX aims to convert the objects that you want to interact with into a list, and to invoke an action each time an item is added to this list. The idea is similar to listening to an ObservableCollection.

Why do I reckon RX is only suitable for some very specific cases? Let’s take a look at this very simple example:

Simple example: Processing a list

Let’s say we have a list filled with objects, and we only want to print out the string objects in this list:

List<object> list = new List<object> { "string 1", 1, "string 2", 2 };

What we normally do is to use a loop to process this list:

foreach (object obj in list)
{
    if (obj is string) Console.Write(obj.ToString());
}

In the normal way, we actively go through each object in the list and process it. The RX model takes this in a passive way, asynchronously:

list.ToObservable()
    .Where(obj => obj is string)
    .Subscribe(obj => Console.Write(obj.ToString()));

In the RX model, you convert the normal list to a list that can be observed by calling list.ToObservable(). This returns an IObservable<object>, which is an RX-supported list able to use all of the magical RX functions and extensions. The Where function takes a delegate that determines which objects in the list will be selected, and returns the filtered list. The Subscribe function can take an Action<object>, which will be called against each object in the list as soon as it becomes available. In our case, because the list is pre-filled, the objects are already available, so when you execute Subscribe, it performs similarly to our foreach loop, processing each of the already-available objects in the list. The real difference here is that:

  1. The monitoring and processing happens asynchronously
  2. If you add more items to the list after the RX statement, the newly added items will also be processed as soon as they are added, because you have “subscribed” to the list and are monitoring it.

This could be useful when you are not sure when items will be added to the list, and you don’t really want to use an ObservableCollection. However, RX runs slower than the traditional model and requires more memory, as it does a whole bunch of conversion and event handling behind the scenes. So far, RX has been mainly used in Silverlight, where user interaction with the UI can become quite complicated and asynchronous processing is the only option in most cases. Let us walk through the following example to understand why RX could be critical to a rich user interface.
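To make the “react instead of act” idea concrete without pulling in the actual RX library, here is a toy sketch of the subscription pattern. ReactiveList and its members are made up for illustration only; they are not part of RX, which is far more capable:

```csharp
using System;
using System.Collections.Generic;

// A made-up, minimal "observable list": a subscriber is invoked once for
// every item already present, and again for each item added later.
public class ReactiveList<T>
{
    private readonly List<T> _items = new List<T>();
    private readonly List<Action<T>> _subscribers = new List<Action<T>>();

    public void Subscribe(Action<T> onItem)
    {
        _subscribers.Add(onItem);
        // Replay the items that are already available.
        foreach (var item in _items) onItem(item);
    }

    public void Add(T item)
    {
        _items.Add(item);
        // Push the new item to every subscriber as soon as it arrives.
        foreach (var subscriber in _subscribers) subscriber(item);
    }
}

public static class ReactiveDemo
{
    public static void Main()
    {
        var list = new ReactiveList<object>();
        list.Add("string 1");
        list.Add(1);

        // React to string items only, like the Where/Subscribe chain above.
        list.Subscribe(obj => { if (obj is string) Console.WriteLine(obj); });

        list.Add("string 2"); // processed as soon as it is added
        // prints "string 1" then "string 2"
    }
}
```

The code that processes the items never loops over the list itself; it just declares what should happen whenever an item shows up, which is the essence of the passive approach.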

Real life example: Detecting mouse button press and hold event

Take a minute and think about how you would implement the detection of a mouse-button press-and-hold event in Silverlight. Let’s say we want to do something when the left mouse button is held down for 2 seconds without the mouse moving.

The traditional model

You definitely need a timer to time this 2 second interval:

DispatcherTimer myTimer = new DispatcherTimer { Interval = TimeSpan.FromSeconds(2) };
myTimer.Tick += (s, e) =>
{
    // Do something when the timer ticks.
};

To start the timer, we will need to listen to the MouseLeftButtonDown event:

this.MouseLeftButtonDown += (s, e) => myTimer.Start();

We also need to detect mouse-up and mouse-move events in order to cancel the timer, so that the Tick event doesn’t misfire when the user doesn’t really mean to hold down the mouse button:

this.MouseLeftButtonUp += (s, e) => myTimer.Stop();
this.MouseMove += (s, e) => myTimer.Stop();

This could be all if you are working on a normal Silverlight app, but it gets messier for a WP7 app, where multi-touch needs to be considered: you want to cancel the timer as well when a second finger is pressed on the screen. To do this, you need to modify the MouseLeftButtonDown event handler above:

this.MouseLeftButtonDown += (s, e) =>
{
    myTimer.Start();
    // Register a 2nd event handler to detect a 2nd finger press.
    this.MouseLeftButtonDown += SecondMouseDown;
    // Because we are attaching a new handler each time the mouse left button is down,
    // it is important to detach this event handler when it is invoked,
    // so that the same handler doesn't get invoked multiple times.
};

public void SecondMouseDown(object sender, MouseButtonEventArgs e)
{
    // A 2nd finger is down: cancel the timer.
    myTimer.Stop();
    // Detach this event handler to prevent it being invoked multiple times.
    this.MouseLeftButtonDown -= SecondMouseDown;
}

This traditional event-handling model works fine, except that it requires creating a timer plus 4 event handlers, and the code is scattered, making it difficult to read and maintain.

The RX model

I’m gonna give you the complete working RX code to do this, then explain through it:

public partial class MainPage : PhoneApplicationPage
{
    // Constructor
    public MainPage()
    {
        Observable.Throttle(Observable.FromEvent(this, "MouseLeftButtonDown"), TimeSpan.FromSeconds(2))
                  .TakeUntil(Observable.FromEvent(this, "MouseLeftButtonUp"))
                  .TakeUntil(Observable.FromEvent(this, "MouseMove"))
                  .ObserveOnDispatcher()
                  .Subscribe(e =>
                  {
                      // Do something here...
                  });
    }
}

This is it! One single RX statement with no event listeners at all. First, we use the Throttle extension to specify that, whatever we are monitoring, we don’t care about it for an initial period of time. This extension takes 2 parameters: the first tells it which observable list to monitor, and the 2nd says how long the initial period is. Then we specify, with the 2 TakeUntil calls, that we want to stop monitoring on the MouseLeftButtonUp or MouseMove event. We also need to use the ObserveOnDispatcher extension to specify that when we perform any actions on the list, they are performed on the current thread (i.e. the UI thread); this prevents any invalid cross-thread access exceptions from being thrown. Next, we use the Subscribe function to start monitoring this list. When the MouseLeftButtonDown event happens, and after the initial 2-second period, the Action we specified for Subscribe is executed.

WP7: Activating before Deactivated, a problem of Tombstoning

Tombstoning is the process of saving your app’s “state”, usually to Isolated Storage, when the user leaves the app, so that your app can reload the state when it is reactivated via the Back button.

Before going into the problem, you should have a good understanding of this concept. Shawn Wildermuth has a very good walkthrough of tombstoning, and James Ashley also talked about why deactivating the application is not the same as tombstoning.

Assuming you save your app state in the Application_Deactivated event handler, there should be no problem at all, except for this boundary case: what if the app is reactivated even before the Application_Deactivated handler finishes its execution?

It’s quite simple to reproduce this situation. You can:

  • either put a break point in debug mode in the Application_Deactivated event handler, then press Home then Back buttons, then F5 your app to resume it;
  • or simply put Thread.Sleep(5000) in the event handler and press Home then Back buttons within 5 seconds.

If you observe this process carefully in debug mode, the behavior is quite interesting: the Application_Activated event handler will still be invoked after Application_Deactivated, but your app instance is not destroyed and recreated. This means that when the app is reactivated, none of your objects’ constructors will be called, as if there had been no deactivation at all. To put it short: tombstoning is not needed in this case!

Having this in mind, we should carefully code our logic so that we don’t do unnecessary tombstoning operations that may cause unwanted side effects.

For example, if you were saving and loading your state like this:

private ObservableCollection<string> _searchResults;
public ObservableCollection<string> SearchResults
{
    get { return _searchResults; }
    set { _searchResults = value; RaisePropertyChanged("SearchResults"); }
}

private void Application_Activated(object sender, ActivatedEventArgs e)
{
    SearchResults = LoadSearchResultsFromIsoStore();
}

private void Application_Deactivated(object sender, DeactivatedEventArgs e)
{
    // Save the state to Isolated Storage. The helper name below is assumed
    // for illustration, as the counterpart of LoadSearchResultsFromIsoStore.
    SaveSearchResultsToIsoStore(SearchResults);
}

The above code could trigger an unnecessary write and read of IsolatedStorage in our special case, and if SearchResults is large and bound to a UI list with a complicated item template, there will be many more side effects than just the performance hit of reloading it unnecessarily.

Unfortunately, it’s most likely that we won’t be able to avoid the potentially redundant write to IsolatedStorage in Application_Deactivated, because there is no way to predict if the app will be reactivated too soon. But we can always be smart in Application_Activated. For example:

private void Application_Activated(object sender, ActivatedEventArgs e)
    if (SearchResults == null) SearchResults = LoadSearchResultsFromIsoStore();

In this example, we can easily tell whether the app instance has been destroyed by checking whether SearchResults is null. If the app instance was never recreated, SearchResults will still be pointing at something other than null. Depending on how you organise your tombstoning logic, there could be a thousand different ways to be smart here. Just keep this case in mind when you code.