Tag Archives: .net

Debugging Assembly loading

Does a referenced assembly get loaded if the types in the assembly are "not used"? The term "used" is very subjective. For a developer it would probably mean that you never created an instance of a type or called a method on it, but that does not cover the whole story. You should instead consider what the reasons for an assembly load are. Suzanne's blog on assembly loading failures will give you a good understanding of failures, if that is what you are interested in. This post focuses on how to identify what exactly is causing an assembly to load. We in the WCF team are very cautious about introducing assembly dependencies and about how our code paths can cause assembly loads, since this impacts the reference set of your process. Images that get loaded during a WCF call can become the cause of slow startup, since every assembly is a potential disk lookup, and the larger the number, the higher the impact on startup. As guidance for quick app startup: if you refactor types properly, you can eliminate a lot of unnecessary assemblies from being loaded. Continue reading
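One simple way to identify what is causing a load is to hook the AppDomain.AssemblyLoad event and log the stack at the moment of each load. A minimal sketch (the LoadWatcher name is just for illustration):

[code:c#]

using System;

class LoadWatcher
{
    static void Main()
    {
        // Log every assembly load, plus the managed stack that triggered it.
        AppDomain.CurrentDomain.AssemblyLoad += delegate(object sender, AssemblyLoadEventArgs e)
        {
            Console.WriteLine("Loaded: " + e.LoadedAssembly.FullName);
            Console.WriteLine(Environment.StackTrace);
        };

        // ... exercise the code paths you are interested in ...
    }
}

[/code]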

Implementing an IAsyncResult

Contents

  • Overview
  • Defining IAsyncResult
  • The BeginXXX/EndXXX Contract
  • Principles for BeginXXX/EndXXX implementation
  • Purpose for our AsyncResult
  • AsyncResult Members
  • AsyncState
  • AsyncWaitHandle Implementation
  • TAsyncResult End<TAsyncResult>(IAsyncResult result)
  • void Complete(bool completedSynchronously) and void Complete(bool completedSynchronously, Exception ex)
  • Exception Handling
  • Handling Exceptions on the Callback thread
  • Using and Building your AsyncResult
  • Guidelines for building our AsyncResult
  • Sequence Diagram
  • Using the AsyncResult
  • Advanced Patterns
  • Chaining AsyncResults
  • Sequence of Execution
  • When to avoid Chaining
  • AsyncResult without the Begin/End Semantics
  • References
  • Source – AsyncResult.cs

Overview

Continue reading

How to use an AsyncResult?

I have a much longer article in mind for this and will be publishing it soon. But to answer quickly, let us use an implementation from the framework itself, using a delegate. Please note that this is a usage-pattern sample only, since all that a delegate does is put the work on the CLR's thread pool, which means more latency for non-blocking work. Let's create a driver for our async operation which will define our little piece of work. Here we just return a string.

class AsyncOperation
{
    // Your delegate to simulate an IAsyncResult implementation
    static Func<string> operation = () => "Hello operation.";

    internal static IAsyncResult BeginOperation(AsyncCallback callback, object state) { ... }
    internal static string EndOperation(IAsyncResult result) { ... }
}

Here is what is expected on the two paths on which the call can complete.

  • Synchronous path
    • IAsyncResult result = BeginOperation()
    • Is result.CompletedSynchronously == true?
      • Yes
        • Handle the completion, e.g. call End.
        • Handle exceptions.
      • No
        • Do nothing and return.
  • Callback
    • Is result.CompletedSynchronously == true?
      • Yes
        • Return, as the sync path will take care of the rest.
      • No
        • Handle the completion.
        • Handle exceptions, if any.

So here is the end-to-end implementation.

namespace Microsoft.Samples.SimpleDelegateAsyncResult
{
    using System;
    using System.Threading;

    class Program
    {
        static AsyncCallback onCallback = new AsyncCallback(Callback);
        ManualResetEvent waitHandle = new ManualResetEvent(false);

        static void Main(string[] args)
        {
            Program state = new Program();
            try
            {
                IAsyncResult result = AsyncOperation.BeginOperation(onCallback, state);
                if (result.CompletedSynchronously)
                {
                    state.HandleCompletion(result);
                    // Release the main thread on the synchronous path as well.
                    state.waitHandle.Set();
                }
            }
            catch (Exception exception)
            {
                Console.WriteLine(" Exception : " + exception.Message);
                // Do the same exception handling here.
                state.waitHandle.Set();
            }

            state.waitHandle.WaitOne();
            state.waitHandle.Close();
            Console.WriteLine("Completed");
        }

        static void Callback(IAsyncResult result)
        {
            if (result.CompletedSynchronously)
                return;

            Program thisPtr = result.AsyncState as Program;
            Exception completionException = null;
            try
            {
                thisPtr.HandleCompletion(result);
            }
            catch (Exception ex)
            {
                // Don't throw; you cannot bubble up exceptions on a callback thread.
                completionException = ex;
            }
            finally
            {
                if (completionException != null)
                {
                    // You need to handle the exception here as well.
                    Console.WriteLine("Callback Exception : " + completionException.Message);
                }
                // Release the main thread.
                thisPtr.waitHandle.Set();
            }
        }

        void HandleCompletion(IAsyncResult result)
        {
            Console.WriteLine(AsyncOperation.EndOperation(result));
            //throw new InvalidOperationException("test exception");
            Thread.Sleep(100);
        }
    }

    class AsyncOperation
    {
        // Your delegate to simulate an IAsyncResult implementation.
        static Func<string> operation = () => "Hello operation.";

        internal static IAsyncResult BeginOperation(AsyncCallback callback, object state)
        {
            return operation.BeginInvoke(callback, state);
        }

        internal static string EndOperation(IAsyncResult result)
        {
            return operation.EndInvoke(result);
        }
    }
}
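A delegate's BeginInvoke always completes on a thread pool thread (CompletedSynchronously is false), so the callback path does the work here, and a normal run should print something like:

    Hello operation.
    Completed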

Lock Free iterator

Most common data structures are usually meant for single-threaded access, and queues are no exception. When there are multiple producers writing to the queue, we need to make sure that each writer is given a unique index into which it can put its element. If our operations are already synchronized and thread safe, we can do something like this:

[code:c#]

public void Enqueue(T item)
{
    ...
    tail = (tail + 1) % size;
    items[tail] = item;
}

[/code]

On the other hand, if two threads simultaneously reach the line that advances tail, they could read the same value of tail and overwrite each other's element. What are the issues we face?

  1. The read might be stale.
  2. One thread might get an old value and corrupt state.
  3. The value of tail becomes corrupted and does not actually get incremented once for each thread.
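A quick way to see updates being lost with a plain read-modify-write (a minimal sketch to drop into a method; requires using System and System.Threading, and counts will vary per run):

[code:c#]

int counter = 0;
ThreadStart worker = delegate
{
    for (int i = 0; i < 1000000; i++)
    {
        counter = counter + 1; // read-modify-write, not atomic
    }
};

Thread a = new Thread(worker);
Thread b = new Thread(worker);
a.Start(); b.Start();
a.Join(); b.Join();

// Expected 2000000 if nothing was lost; this typically prints less.
Console.WriteLine(counter);

[/code]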

We could consider using Interlocked.Increment and get away with something like this:

[code:c#]

public void Enqueue(T item)
{
    int next = Interlocked.Increment(ref tail) % size;
    ...
    items[next] = item;
}

[/code]

Hmm, this doesn't seem right either. The issue is the wrap-around behavior of Interlocked.Increment. Once the counter reaches Int32.MaxValue it wraps to Int32.MinValue, and from then on it is pretty much useless, as the next index would read something like (Int32.MinValue % size), which is quite an odd value. For example, for size 100 we could end up with indexes like this:

… 46, 47, -48, -47, …
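You can see the wrap-around with a small sketch (Interlocked.Increment wraps on overflow rather than throwing):

[code:c#]

int tail = int.MaxValue - 2;
for (int i = 0; i < 4; i++)
{
    Console.WriteLine(Interlocked.Increment(ref tail) % 100);
}
// prints: 46, 47, -48, -47

[/code]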

Let's look at another approach, which is CompareExchange. Some fundamentals about Interlocked.CompareExchange:

  1. It returns the latest value for the location.
  2. It will atomically exchange the value if it matches the comparand.
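To make those semantics concrete (a tiny sketch):

[code:c#]

int location = 5;

// Returns the original value (5) and, because it matched the comparand, stores 10.
int original = Interlocked.CompareExchange(ref location, 10, 5);

// Returns 10 and leaves the location unchanged, because 5 no longer matches.
int stale = Interlocked.CompareExchange(ref location, 42, 5);

[/code]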

From this we can deduce that if a thread tried to exchange a value and failed, then someone else already updated it. This also means that we might not succeed in a single try and hence would need to retry until we successfully claim a slot that we own. The fact that we were able to increment it also means that no other thread will get this slot. So now let's look at the implementation.

[code:c#]

// Requires: using System.Threading;
class LockFreeCircularIterator
{
    int index = 0;
    readonly int limit;

    public LockFreeCircularIterator(int limit)
    {
        this.limit = limit;
    }

    public int GetNext()
    {
        int slot = this.index;
        while (true)
        {
            // If the exchange succeeded, the old value comes back unchanged and the slot is ours.
            if (slot == (slot = Interlocked.CompareExchange(ref this.index, (slot + 1) % limit, slot)))
            {
                return slot;
            }
        }
    }
}

[/code]

Here we iterate until the index is updated with the incremented value; each time the exchange fails, the new value of the index is read and the attempt is retried. This allows multiple threads to write to a circular buffer. There are quite a few places this could come in handy; a minimal usage sketch follows.
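For illustration, here is one possible (hypothetical) usage, with several producers claiming unique slots in a shared circular buffer:

[code:c#]

string[] buffer = new string[100];
LockFreeCircularIterator iterator = new LockFreeCircularIterator(buffer.Length);

ThreadStart producer = delegate
{
    for (int i = 0; i < 250; i++)
    {
        // No two concurrent callers get the same slot (though slots do wrap around).
        int slot = iterator.GetNext();
        buffer[slot] = "item";
    }
};

Thread[] threads = new Thread[4];
for (int t = 0; t < threads.Length; t++)
{
    threads[t] = new Thread(producer);
    threads[t].Start();
}
foreach (Thread t in threads)
{
    t.Join();
}

[/code]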

Preprocessor vs Conditional Compilation Attribute

Here is a little teaser program:

using System.Diagnostics;

class Program
{
#if DEBUG
    static int test = 0;
#endif

    static void Main(string[] args)
    {
        TestMethod(test);
    }

    [Conditional("DEBUG")]
    static void TestMethod(int t) { }
}

Like all good teasers, the question is: should this compile or not? The thing to note here is the difference in behavior between the conditional compilation attribute and a preprocessor directive. The answer is in the documentation of the ConditionalAttribute class:

Applying ConditionalAttribute to a method indicates to compilers that a call to the method should not be compiled into Microsoft intermediate language (MSIL) unless the conditional compilation symbol that is associated with ConditionalAttribute is defined. Applying ConditionalAttribute to an attribute indicates that the attribute should not be emitted to metadata unless the conditional compilation symbol is defined. Any arguments passed to the method or attribute are still type-checked by the compiler.


UPDATE: Eric Lippert answered my question on the community DL.

>> So this, in my understanding, means that unlike a preprocessor directive, the removal happens only late in translation, and that means parsing and type checking do happen even though the method call is conditional in a sense and removed only at a later stage.

Correct. If you think about it a moment you’ll see why that has to work that way. Ask yourself a few questions about how this works:

Q: How does the compiler know to remove the method? 

A: Because the method called has the conditional attribute on it.

Q: How does the compiler know that the method called has the conditional attribute on it?

A: Because overload resolution chose that method, and the metadata associated with that method has the attribute.

Q: How does the compiler know to choose that method?

A: By examining its arguments.

Therefore the arguments must be well-defined at the point of the call, even if the call is going to be removed. In fact, the call CANNOT be removed unless the arguments are there!

>> Could someone point me to which parts of the C# spec define this behavior?

FYI, the specification begins with a handy table of contents, which is very useful for answering such questions. The table of contents of the specification states:

2.5.1 Conditional compilation symbols

17.4.2 The Conditional attribute

So my advice would be for you to look at sections 2.5.1 and 17.4.2 if you want the specification for conditional compilation symbols vs the conditional attribute.

Cheers,

Eric
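So, to close the teaser: the program above compiles only when DEBUG is defined. Without it, test does not exist, and as Eric explains, the argument to TestMethod is still type-checked even though the call would be removed. A preprocessor directive behaves differently, as in this sketch, which compiles whether or not DEBUG is defined because excluded text is skipped before it is ever parsed or type-checked:

[code:c#]

using System.Diagnostics;

class Program
{
#if DEBUG
    static int test = 0;
#endif

    static void Main(string[] args)
    {
#if DEBUG
        TestMethod(test); // stripped by the preprocessor when DEBUG is undefined
#endif
    }

    [Conditional("DEBUG")]
    static void TestMethod(int t) { }
}

[/code]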

The EventLog and Message limits

Sometimes I have to fix certain issues in BizTalk, and during one of these exercises I had to write a large message into the event log. OUCH! OK, now who does that? Well, I'm not at fault here, but then again we did require this for some convoluted reason, and there was some wacky boundary-condition code around it. So, to get a quick value, I just decided to flood my event log and see how much it can store.

        // Assumes: using System; using System.ComponentModel; using SD = System.Diagnostics;
        static void Main(string[] args)
        {
            int limit = 31000;
            int source = 10; // 212 is the limit here.
            while (true)
            {
                try
                {
                    SD.EventLog.WriteEntry(new string('s', source), new string('v', limit), System.Diagnostics.EventLogEntryType.Information, 0, 0);
                    limit++;
                }
                catch (Win32Exception)
                {
                    Console.WriteLine(--limit);
                    break;
                }
            }

            SD.EventLog.WriteEntry(new string('s', source), new string('v', limit), System.Diagnostics.EventLogEntryType.Information, 0, 0);
            Console.WriteLine("Total = " + (source + limit));
        }

And guess what: it's a magic number, 31884, and not 32K. System.Diagnostics.EventLog actually uses advapi32!ReportEvent, which seems to have a 32K limit. Anyway, the short answer is that it's not "exactly" 32K.

Disabling JIT Optimizations

INI files are not dead yet :)

Now, if you want to disable JIT optimizations, place this in your application directory as an <appName>.ini file:

[.NET Framework Debugging Control]
GenerateTrackingInfo=1
AllowOptimize=0

Why would you want to do this?
Mostly because, when debugging in release mode, you might not get the correct stack dump. For example, the JIT might inline method bodies, and you could then not see the method in which the exception is being raised. So when you are debugging in a production environment, try to disable JIT optimizations before getting the stack dump.
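To illustrate (a minimal sketch; whether a frame actually disappears depends on the JIT version and settings), a trivial forwarding method is a classic inlining candidate, and its frame may be missing from an optimized stack trace:

[code:c#]

using System;

class InliningDemo
{
    static void Inner()
    {
        throw new InvalidOperationException("boom");
    }

    static void Outer()
    {
        Inner(); // trivial forwarder: an inlining candidate under an optimizing JIT
    }

    static void Main()
    {
        try
        {
            Outer();
        }
        catch (Exception e)
        {
            // With optimizations on, Outer's frame may be missing here;
            // with AllowOptimize=0 in the .ini file, it should show up.
            Console.WriteLine(e.StackTrace);
        }
    }
}

[/code]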