Date: 2013-05-29

IOCP Thread Pooling in C#
By William Kennedy
Continuum Technology Center

Introduction

When building server-based applications in C#, it is important to be able to create thread pools.  A thread pool lets our server queue and perform work in the most efficient and scalable way possible.  Without thread pooling we are left with two options: perform all of the work on a single thread, or spawn a new thread every time a piece of work arrives.  For this article, work is defined as an event that requires code to be executed.  Work may or may not carry data with it, and our job is to process all of the work the server receives as quickly and efficiently as possible.

As a general rule, if you can accomplish all of the required work with a single thread, use only a single thread.  Having multiple threads performing work at the same time does not necessarily mean our application is getting more work done, or getting it done faster.  This is true for several reasons.

For example, if you spawn multiple threads that all try to access a resource guarded by a synchronization object, such as a monitor, those threads serialize: each one falls in line waiting for the resource to become available.  A thread that tries to access the resource may block until the thread that currently owns it releases it.  While blocked, these threads are asleep and getting no work done.  In fact, they have created extra work for the operating system, which must now schedule another thread to run and, once the resource is released, decide which of the waiting threads may acquire it next.  If the threads that are supposed to be doing work spend their time sleeping on a contended resource, we have actually created a performance problem.  In this case it would be more efficient to queue up the work and have a single thread process the queue.

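The single-consumer queue described above can be sketched roughly as follows.  This is an illustrative sketch, not code from the article; the `WorkQueue` name and the `Monitor.Wait`/`Monitor.Pulse` wiring are one reasonable way to build it:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Producers enqueue work items; one dedicated thread drains the queue,
// so the shared resource is only ever touched by a single thread and
// no worker ever sleeps on a contended lock.
class WorkQueue
{
    private readonly Queue<Action> queue = new Queue<Action>();
    private readonly Thread worker;
    private bool shutdown;

    public WorkQueue()
    {
        worker = new Thread(Consume);
        worker.Start();
    }

    public void Enqueue(Action work)
    {
        lock (queue)
        {
            queue.Enqueue(work);
            Monitor.Pulse(queue);       // wake the consumer if it is waiting
        }
    }

    public void Shutdown()
    {
        lock (queue)
        {
            shutdown = true;
            Monitor.Pulse(queue);
        }
        worker.Join();                  // wait for remaining items to drain
    }

    private void Consume()
    {
        while (true)
        {
            Action work;
            lock (queue)
            {
                while (queue.Count == 0 && !shutdown)
                    Monitor.Wait(queue);   // sleep until work arrives
                if (queue.Count == 0)
                    return;                // shut down and fully drained
                work = queue.Dequeue();
            }
            work();                        // run outside the lock
        }
    }
}
```

Note that the consumer invokes each work item outside the lock, so producers can keep enqueuing while work is being processed.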
Threads that start waiting for a resource before other threads are not guaranteed to be given the resource first.  In diagram A, thread 1 requests access to the resource before thread 2, and thread 2 requests access before thread 3.  The operating system nevertheless decides to give the resource to thread 1 first, then thread 3, and then thread 2.  This scenario causes work to be performed in an undetermined order.  The possible issues are endless when dealing with multi-threaded applications.

[Diagram A: three threads waiting on a single synchronized resource]

If pieces of work can be performed independently of one another, we could spawn a thread to process each one.  The problem here is that an operating system like Windows suffers severe performance problems when a large number of threads are created or running at the same time, all waiting for access to the CPU.  Windows must manage every one of these threads, and compared to the UNIX operating system, it just doesn't hold up.  If a large amount of work is issued to the server, this model will most likely overload the Windows operating system, and system performance will degrade drastically.

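The thread-per-work-item model the paragraph warns against looks something like this (`ThreadPerWorkItem` and `RunAll` are hypothetical names used only for illustration):

```csharp
using System;
using System.Threading;

// Naive model: spawn one OS thread per work item. It works for small
// counts, but every item costs a kernel thread the scheduler must
// create, manage, and tear down, so the approach falls over as the
// volume of work grows.
class ThreadPerWorkItem
{
    public static int RunAll(int workItems)
    {
        int processed = 0;
        var threads = new Thread[workItems];

        for (int i = 0; i < workItems; i++)
        {
            threads[i] = new Thread(() => Interlocked.Increment(ref processed));
            threads[i].Start();    // one OS thread per item -- does not scale
        }

        foreach (var t in threads)
            t.Join();              // wait for every spawned thread

        return processed;
    }
}
```

With a hundred items this runs fine; imagine the same loop fed by a busy server receiving tens of thousands of requests.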
This case study compares thread performance between Windows NT and Solaris:

http://www.usenix.org/publications/library/proceedings/usenix-nt98/full_papers/zabatta/zabatta_html/zabatta.html

In the .NET framework, the “System.Threading” namespace has a ThreadPool class.  Unfortunately, it is a static class, and therefore our server can only have one thread pool per process.
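A minimal sketch of queuing work onto that shared pool with ThreadPool.QueueUserWorkItem (the `PoolDemo` class and `RunAll` method are illustrative names, not part of the framework):

```csharp
using System;
using System.Threading;

// Queue a batch of items onto the process-wide pool and wait for all
// of them to finish. Because ThreadPool is static, every component in
// the process draws from this same set of worker threads.
class PoolDemo
{
    public static int RunAll(int workItems)
    {
        int pending = workItems;
        int processed = 0;
        var allDone = new ManualResetEvent(false);

        for (int i = 0; i < workItems; i++)
        {
            ThreadPool.QueueUserWorkItem(delegate(object state)
            {
                Interlocked.Increment(ref processed);     // the "work"
                if (Interlocked.Decrement(ref pending) == 0)
                    allDone.Set();                        // last item signals
            });
        }

        allDone.WaitOne();   // block until every queued item has run
        return processed;
    }
}
```

QueueUserWorkItem hands the delegate to whichever pooled thread becomes free, so the caller never creates or destroys threads itself.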