Seattle Aquarium


I promised Adi that we would go to the aquarium, and for some reason we kept postponing it over and over again. But it finally happened this weekend. I love fish tanks, but this one is in a different league. The dome is quite an attraction.

If you do get to go, I’m quite sure you’ll see the otter rubbing his cheeks. It’s quite tempting to say that they are so human, but then again it could be the other way around, you know.

Implementing an IAsyncResult


  • Overview
  • Defining IAsyncResult
  • The BeginXXX/EndXXX Contract
  • Principles for BeginXXX/EndXXX implementation
  • Purpose for our AsyncResult
  • AsyncResult Members
  • AsyncState
  • AsyncWaitHandle Implementation
  • TAsyncResult End<TAsyncResult>(IAsyncResult result)
  • void Complete(bool completedSynchronously) and void Complete(bool completedSynchronously, Exception ex)
  • Exception Handling
  • Handling Exceptions on the Callback thread
  • Using and Building your AsyncResult
  • Guidelines for building our AsyncResult
  • Sequence Diagram
  • Using the AsyncResult
  • Advanced Patterns
  • Chaining AsyncResults
  • Sequence of Execution
  • When to avoid Chaining
  • AsyncResult without the Begin/End Semantics
  • References
  • Source – AsyncResult.cs



Visual Studio 2010 and .Net Framework 4.0 Beta1

With a whole new workflow runtime based on the rock-solid WCF implementation, this is one little monster of a release. Visual Studio is getting quite a nice facelift, and the framework comes with a ton of cool features like side-by-side execution and lots more.

The new flowcharts and the WF designers, with the mini map and the argument and variable docks, should make for quite an interesting experience. I am really excited to see WF usage increase for applications, given its boost in performance and improved usability.

You can find a lot more screenshots here.

MSDN subscribers can download here.
For the rest of the world it will be available tomorrow.

How to use an AsyncResult?

I have a much longer article in mind for this and will be publishing it soon. But to answer quickly, let us use an implementation from the framework itself, using a delegate. Please note that this is a sample of the usage pattern only, since all a delegate does is put the work on the CLR’s thread pool, which means more latency for non-blocking work. Let’s create a driver for our async operation which will define our little piece of work. Here we just return a string.

class AsyncOperation
{
    // Your delegate to simulate an IAsyncResult implementation
    static Func<string> operation = () => "Hello operation.";

    internal static IAsyncResult BeginOperation(AsyncCallback callback, object state) ...
    internal static string EndOperation(IAsyncResult result) ...
}

This is what is expected on the two paths on which the code can complete.

  • Synchronous path
    • IAsyncResult result = BeginOperation()
    • Is result.CompletedSynchronously == true?
      • Yes
        • Handle the completion, e.g. call End.
        • Handle exceptions.
      • No
        • Do nothing and return.
  • Callback
    • Is result.CompletedSynchronously == true?
      • Yes
        • Return, as the sync path will take care of the rest.
      • No
        • Handle the completion.
        • Handle exceptions, if any.

So here is the end-to-end implementation.

namespace Microsoft.Samples.SimpleDelegateAsyncResult
{
    using System;
    using System.Threading;

    class Program
    {
        static AsyncCallback onCallback = new AsyncCallback(Callback);
        ManualResetEvent waitHandle = new ManualResetEvent(false);

        static void Main(string[] args)
        {
            Program state = new Program();
            try
            {
                IAsyncResult result = AsyncOperation.BeginOperation(onCallback, state);
                if (result.CompletedSynchronously)
                {
                    state.HandleCompletion(result);
                }
            }
            catch (Exception exception)
            {
                Console.WriteLine(" Exception : " + exception.Message);
                // Do the same exception handling here.
                state.waitHandle.Set();
            }

            state.waitHandle.WaitOne();
            state.waitHandle.Close();
            Console.WriteLine("Completed");
        }

        static void Callback(IAsyncResult result)
        {
            if (result.CompletedSynchronously)
                return;

            Program thisPtr = result.AsyncState as Program;
            Exception completionException = null;
            try
            {
                thisPtr.HandleCompletion(result);
            }
            catch (Exception ex)
            {
                // Don't throw, as you cannot bubble up exceptions on a callback thread.
                completionException = ex;
            }
            finally
            {
                if (completionException != null)
                {
                    // You need to handle the exception here as well.
                    Console.WriteLine("Callback Exception : " + completionException.Message);
                }

                // Release the main thread.
                thisPtr.waitHandle.Set();
            }
        }

        void HandleCompletion(IAsyncResult result)
        {
            Console.WriteLine(AsyncOperation.EndOperation(result));
            // throw new InvalidOperationException("test exception");
            Thread.Sleep(100);
        }
    }

    class AsyncOperation
    {
        // Your delegate to simulate an IAsyncResult implementation
        static Func<string> operation = () => "Hello operation.";

        internal static IAsyncResult BeginOperation(AsyncCallback callback, object state)
        {
            return operation.BeginInvoke(callback, state);
        }

        internal static string EndOperation(IAsyncResult result)
        {
            return operation.EndInvoke(result);
        }
    }
}

Lock Free iterator

Most common data structures are usually meant for single-threaded access, and queues are no exception. When there are multiple producers writing to the queue, we need to make sure that each writer is given a unique index into which it can put its element. If our operations are already synchronized and thread safe, we can do something like this:


public void Enqueue(T item)
{
    ...
    tail = (tail + 1) % size;
    items[tail] = item;
}


On the other hand, if two threads simultaneously reached the line that updates tail, they could read the value of tail at the same time and overwrite each other. Now what are the issues we face?

  1. The read might be stale.
  2. One thread might act on an old value and corrupt state.
  3. The value of tail ends up corrupted and does not actually get incremented once per thread.
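To make the race concrete, here is a small sketch (the class and names are mine, not from the post) in which two threads perform the same unsynchronized read-modify-write on a shared tail counter. Updates get lost, so the final count usually falls short of the number of increments:

```csharp
using System;
using System.Threading;

class TailRaceDemo
{
    public static int Tail;

    public static void Run(int perThread)
    {
        Tail = 0;
        ThreadStart work = () =>
        {
            for (int i = 0; i < perThread; i++)
            {
                Tail = Tail + 1; // non-atomic: read, add, write back
            }
        };

        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();
    }

    static void Main()
    {
        Run(1000000);
        // Expected 2000000; lost updates typically leave it lower.
        Console.WriteLine("Tail = " + Tail);
    }
}
```

The same lost update corrupts the queue in the original snippet: two writers that read the same tail end up writing to the same slot.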

We could consider using Interlocked.Increment and get away with something like this:


public void Enqueue(T item)
{
    int next = Interlocked.Increment(ref tail) % size;
    ...
    items[next] = item;
}


Hmm, this doesn’t seem right. The issue is the wrap-around behavior of Interlocked.Increment. Once it reaches Int32.MaxValue it wraps back to Int32.MinValue, and from then on it is pretty much useless, as the next count would read something like (Int32.MinValue % size), which is quite an odd value. For example, for size 100 we could end up with indexes like this:

… 46, 47, -48, -47 …
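A quick standalone check of that arithmetic (a sketch of mine, not from the post): Interlocked.Increment wraps past Int32.MaxValue rather than throwing, and C#’s % operator keeps the sign of the dividend, which is where the negative indexes come from:

```csharp
using System;
using System.Threading;

class WrapDemo
{
    static void Main()
    {
        int tail = int.MaxValue - 1;

        // Interlocked.Increment wraps on overflow instead of throwing.
        Console.WriteLine(Interlocked.Increment(ref tail) % 100); // 47
        Console.WriteLine(Interlocked.Increment(ref tail) % 100); // -48 (tail wrapped to Int32.MinValue)
        Console.WriteLine(Interlocked.Increment(ref tail) % 100); // -47
    }
}
```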

Let’s look at another approach: Interlocked.CompareExchange. Some fundamentals about Interlocked.CompareExchange:

  1. It returns the latest value at the location.
  2. It atomically exchanges the value if it matches the comparand.
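Those two properties can be seen in a tiny standalone sketch (names are mine): the return value is always the old contents of the location, and the write only happens when the comparand matched:

```csharp
using System;
using System.Threading;

class CompareExchangeDemo
{
    static void Main()
    {
        int location = 5;

        // Comparand matches: the swap succeeds and the old value (5) is returned.
        int old = Interlocked.CompareExchange(ref location, 6, 5);
        Console.WriteLine(old + " -> " + location); // 5 -> 6

        // Comparand is stale (still 5): the swap fails, location is untouched,
        // and the return value (6) tells us what to retry with.
        old = Interlocked.CompareExchange(ref location, 7, 5);
        Console.WriteLine(old + " -> " + location); // 6 -> 6
    }
}
```

The failed call returning the current value is what makes the retry loop below cheap: no extra read is needed before the next attempt.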

From this we can deduce that if a thread tried to increment a value and failed, then someone else already updated it. This also means that we might not succeed in a single try, and hence we need to retry until we successfully get a slot that we own. The fact that we were able to increment it also means that no other thread will get this slot. So now let’s look at the implementation.


class LockFreeCircularIterator
{
    int index = 0;
    readonly int limit;

    public LockFreeCircularIterator(int limit)
    {
        this.limit = limit;
    }

    public int GetNext()
    {
        int slot = this.index;
        while (true)
        {
            if (slot == (slot = Interlocked.CompareExchange(ref this.index, (slot + 1) % limit, slot)))
            {
                return slot;
            }
        }
    }
}


Here we see that we loop until the index has been updated with the incremented value. Each time the exchange fails, the new value of the index is read and the attempt is made again. This is one possible implementation of the iterator; it allows multiple threads to write to a circular buffer. I’m omitting the usage, as there are quite a few places this could come in handy.
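For completeness, here is one possible usage sketch (my own, since the post omits it): a handful of producer threads claim slots in a shared circular buffer, and because GetNext hands out each index exactly once per wrap, no two producers write to the same slot.

```csharp
using System;
using System.Threading;

class LockFreeCircularIterator
{
    int index = 0;
    readonly int limit;

    public LockFreeCircularIterator(int limit)
    {
        this.limit = limit;
    }

    public int GetNext()
    {
        int slot = this.index;
        while (true)
        {
            if (slot == (slot = Interlocked.CompareExchange(ref this.index, (slot + 1) % limit, slot)))
            {
                return slot;
            }
        }
    }
}

class Producers
{
    static void Main()
    {
        const int Size = 8;
        var iterator = new LockFreeCircularIterator(Size);
        string[] buffer = new string[Size];

        // Four producers each claim two slots: exactly Size claims in total,
        // so every index in [0, Size) is handed out exactly once.
        Thread[] producers = new Thread[4];
        for (int p = 0; p < producers.Length; p++)
        {
            int id = p;
            producers[p] = new Thread(() =>
            {
                for (int i = 0; i < 2; i++)
                {
                    int slot = iterator.GetNext();
                    buffer[slot] = "producer " + id;
                }
            });
            producers[p].Start();
        }
        foreach (var t in producers) t.Join();

        for (int i = 0; i < Size; i++)
        {
            Console.WriteLine(i + ": " + buffer[i]);
        }
    }
}
```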