Friday, June 22, 2012

IIS Worker Process Quits, Service Unavailable

Ran into one of those issues yesterday that I had come across almost two years back. My WCF service was hosted on IIS, and we were doing a few load tests; after a few rounds of requests, every request failed with an error.
Tried browsing the WCF SVC file from IIS and got the message "Service Unavailable". Went to the application pool section in IIS and noticed that the app pool running the WCF service had stopped.
In a situation like this, I never try to just fix the issue, but to see why the problem came about in the first place.
Took a look at the event viewer and I could see a number of errors from ASP.NET indicating that the worker process had quit.
Took a look at the code, and there was a piece of code that runs on a separate thread, and that thread throws an error due to a missing stored procedure in the database.

So why does the worker process stop when there is an error? By default, from .NET 2.0 onwards, if there is an unhandled exception, the worker process quits, but only if the exception is thrown from a thread that is not servicing the request. If an unhandled exception is thrown from the thread that services the request, it is surfaced to the user as an error response; if it is thrown from any other thread, it will take down the worker process.

Anyway, the reason as per Microsoft for this behavior: "We do not recommend that you change the default behavior. If you ignore exceptions, the application may leak resources and abandon locks."

So watch out: if you are spawning threads from an ASP.NET request, make sure to handle exceptions within the thread and log them.
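A minimal sketch of the idea (the helper class and its name are mine, not from the original code): the work delegate runs on a pool thread, and any exception is caught inside that thread instead of being allowed to escape and kill the worker process.

```csharp
using System;
using System.Threading;

public static class SafeWorker
{
    // Runs 'work' on a thread-pool thread and captures any exception message
    // instead of letting it propagate (which, under ASP.NET, would terminate
    // the worker process). Returns null when the work completes cleanly.
    public static string RunAndCaptureError(Action work)
    {
        string error = null;
        var done = new ManualResetEvent(false);
        ThreadPool.QueueUserWorkItem(_ =>
        {
            try
            {
                work();
            }
            catch (Exception ex)
            {
                error = ex.Message; // in real code: log it properly
            }
            finally
            {
                done.Set();
            }
        });
        done.WaitOne();
        return error;
    }
}
```

In a real service you would hand the exception to your logging framework rather than just capturing the message.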


Wednesday, May 16, 2012

Hosting WCF services on IIS or Windows Services?

One of those questions came from the client: whether to use IIS 7 hosting or Windows service hosting for WCF services. I tried recollecting a few points and thought of writing them down.

WCF applications can be hosted in 2 main ways

- In a Windows service

- On IIS 7 and above

When WCF was first released, IIS 6 did not support hosting WCF applications that use non-HTTP communication like Net.TCP or Net.MSMQ, and developers had to rely on hosting these services in Windows services.

With the release of IIS 7, it became possible to deploy these non-HTTP based applications on IIS as well. Following are the benefits of using IIS 7 to host WCF applications.


Less development effort


Hosting in a Windows service mandates creating a Windows service installer project and writing code to instantiate the service host, whereas the service can simply be hosted on IIS by creating an application in IIS; no further development is needed beyond the service implementation itself. Hence, IIS becomes the natural option for hosting WCF services with ease. You might have to create an .svc file and rename your app.config to web.config if you used the service library project template, but this can be circumvented: either create your WCF service through the WCF application template, go for file-less activation, or use the publishing feature of Visual Studio to generate an .svc and a web.config file.
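For reference, file-less (configuration-based) activation looks roughly like this in web.config; the relative address and service type below are placeholders, not from this post:

```xml
<system.serviceModel>
  <serviceHostingEnvironment>
    <serviceActivations>
      <!-- Maps a virtual .svc address to a service type, so no physical .svc file is needed -->
      <add relativeAddress="MyService.svc" service="MyNamespace.MyService" />
    </serviceActivations>
  </serviceHostingEnvironment>
</system.serviceModel>
```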


Monitoring


You can monitor the health of your application using AppFabric hosting if you use IIS to host your WCF application; this way you can spot bottlenecks in your service and adjust settings accordingly. The information ranges from the number of errors that were thrown to the number of throttled hits to the service, the duration each call took, and the number of calls that hit the service. With AppFabric hosting you also get the added benefit of configuring your endpoint behaviors and service throttling values; you can even increase service quotas, like the maximum number of bytes that can be sent, through the UI itself without the pain of editing configuration files.

Currently there is no equivalent monitoring support for Windows services.


Process management


IIS takes care of process management automatically. It monitors the worker process and automatically spins up a new one if the current process is deadlocked or has consumed too much memory, shutting down the one that has faulted; this accounts for a great deal of the reliability that IIS has to offer. It also shuts down worker processes if there are no active requests for a configured amount of idle time, recovering system resources. Additional implementation has to be done to achieve the same thing with Windows services.

Starting from IIS 7.5, you can also configure application pools not to recycle, mimicking the "always on" capability of Windows services. However, this is not required for a stateless middle-layer service, since it does not need the worker process to run when there are no active requests.


IIS Modules


Hosting your WCF applications on IIS allows them to take advantage of optional IIS modules. For example, you can use the request tracing module or the logging module to log requests that come into IIS, and the application initialization module to warm up your service. These functionalities are not available to Windows services and would need to be implemented separately. You can also use the connection strings feature to manage your connection strings through the IIS Manager UI rather than digging into configuration files.


Web Farms


You can centrally manage a farm of WCF services hosted on IIS in a clustered environment. This is much easier than a clustered environment that uses Windows services to host WCF applications, where each service has to be managed individually.


Other benefits


Sometimes you may need to make use of the ASP.NET shared model by running the service in ASP.NET compatibility mode. This allows WCF to access ASP.NET session state as well as use the IIS authentication mechanisms.
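As a quick sketch of what that looks like, compatibility mode is switched on in web.config:

```xml
<system.serviceModel>
  <!-- Routes WCF requests through the ASP.NET pipeline -->
  <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
</system.serviceModel>
```

and the service class then opts in with the `[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Required)]` attribute from System.ServiceModel.Activation.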


In summary, if you have licensing constraints where you cannot buy servers that run Windows Server 2008 or above and are still running Windows Server 2003, then your only option for hosting non-HTTP WCF services is Windows services; otherwise IIS would be the best option.


Sunday, April 29, 2012

MEF (Managed Extensibility Framework), .NET 4, Dependency Injection and Plug-in Development

Soon after .NET 4 was released, I remember reading about MEF (Managed Extensibility Framework), a framework that lets developers compose their applications out of required dependencies. At first this looks like the Unity container used for dependency injection, but MEF is much more than a dependency container; that said, there is nothing stopping you from using MEF as a dependency injector.

I remember, around five years back, being in a project that created a framework that allows developers to plug in their modules as WinForms screens. The developer would create a set of screens with the intended functionality, drop the component into the bin folder of the framework, and then do some fairly painful configuration for the framework to pick up and display the screens. Typically, the configuration would also contain metadata about the screens.

Although in this model plugging in a component is easy, there is of course the pain of creating and testing the framework itself. This is where MEF comes in: MEF allows developers to create and consume components (known as parts) through an attributed model, and what's more, MEF comes as part of .NET 4.0. Let me take a simple scenario: say your application needs to call third-party services like a ProductService or a CustomerService based on dynamic requests. Let's say you have created adapters for the ProductService and the CustomerService, and a ServiceManager class is responsible for mediating the service calls.

MEF works with the concept of imports and exports. If you have a property that you want MEF to inject, in our example the set of adapters, that property becomes an import, whereas the adapters themselves become the exports.

The adapters would look like this…

[Export(typeof(IAdapter))]
public class ProductServiceAdapter : IAdapter
{
    public object Invoke(object data)
    {
        return "From Product";
    }
}

And

[Export(typeof(IAdapter))]
public class CustomerServiceAdapter : IAdapter
{
    public object Invoke(object data)
    {
        return "From Customer";
    }
}

In the code above, both our adapters implement the contract IAdapter. The Export attribute on top of each class tells MEF that the class can be used to satisfy an import for the contract IAdapter.

The ServiceManager class would be like this…

public class ServiceManager
{
    [ImportMany]
    private IEnumerable<IAdapter> adapters;
}

The ServiceManager class contains a reference to an IEnumerable&lt;IAdapter&gt;; this is the reference we want MEF to fill for us. Note that this reference has been marked with the ImportMany attribute. By default, ImportMany infers the contract from the type of the property it decorates, in this case IAdapter; otherwise you would need to explicitly pass the type of the contract you want to import into the constructor of ImportMany.

Now let’s wire up the adapters instance variable with the CustomerServiceAdapter and the ProductServiceAdapter. Assuming that both adapters reside in the same assembly as the ServiceManager, I can do this…

static CompositionContainer container;

private static void SetupPart(ServiceManager manager)
{
    if (container == null)
    {
        container =
            new CompositionContainer(new AssemblyCatalog(typeof(ServiceManager).Assembly));
    }
    container.ComposeParts(manager);
}

First we create a CompositionContainer; this is the container that will manage the lifetime of the components that it imports. An AssemblyCatalog is passed into the container; the catalog tells MEF where to find your exports. For example, the adapters may have been dropped into a separate directory, and we can tell MEF to search a particular directory by passing in a DirectoryCatalog. In this case we pass in the assembly where the adapters can be found. You can search both places by adding both catalogs into an AggregateCatalog and passing that into the container.

Next, the ComposeParts method takes the instance whose imports we want to wire up; in our case we want to fill in the adapters inside the ServiceManager class, so we pass the ServiceManager instance into ComposeParts. This method searches the assembly that we added to the catalog and tries to find any class that exposes itself as an export for the contract IAdapter. One catch with the ComposeParts method is that it is an extension method residing in the namespace System.ComponentModel.Composition. Now if you access the adapters instance variable inside the ServiceManager instance, you will see that it contains the ProductServiceAdapter and the CustomerServiceAdapter.

Ways of importing

In our example above, our adapters instance variable is an IEnumerable&lt;IAdapter&gt;. If you want to import only one instance of a component, you can do this…

[Import]
private IAdapter adapter;

In this example, if MEF finds more than one export for IAdapter, it will throw an error. It will also throw an error if no export is found at all. You can allow default values for imports like this…

[Import(AllowDefault = true)]
private IAdapter adapter;

The AllowDefault parameter specifies that MEF should not throw an exception if it does not find a matching export, but should instead set the value to null.

You can also import the adapters as lazy components, like this…

[ImportMany]
private IEnumerable<Lazy<IAdapter>> adapters;

In this case, an adapter is instantiated only when the Value property of its Lazy instance is accessed.

There might be cases where the exported adapter itself has a dependency on another class, say a CommonConfigProvider. You can specify this as an import in the exported component itself, in our case the ProductServiceAdapter. Our code would look like this…

[Export(typeof(IAdapter))]
public class ProductServiceAdapter : IAdapter
{
    public object Invoke(object data)
    {
        return "From Product";
    }

    [Import]
    public CommonConfigProvider Provider { get; set; }
}

Now, as long as there is an export matching the type CommonConfigProvider, the import for the Provider property will be filled when the ProductServiceAdapter is exported into the ServiceManager class. If CommonConfigProvider has an import declaration within it (another dependency), that will also be filled; in other words, MEF imports are resolved recursively.

Metadata

Most of the time when imports are loaded, you will want to use only a specific component, chosen according to certain metadata attached to the component. For example, when a request for the ProductService comes in, you want to send the request to the correct component, i.e. to the ProductServiceAdapter. MEF allows you to attach metadata to your exports; for example, I can attach metadata to the ProductServiceAdapter like this.

[Export(typeof(IAdapter))]
[ExportMetadata("ServiceName", "ProductService")]
public class ProductServiceAdapter : IAdapter
{
    public object Invoke(object data)
    {
        return "From Product";
    }
}

ExportMetadata takes a key-value pair that defines the metadata; in our example, we define a key called ServiceName and give it the value "ProductService".

To get the metadata, you need to use Lazy for the import, so you would define your import like this…

[ImportMany]
private IEnumerable<Lazy<IAdapter, IAdapterMetadata>> adapters;

In the above code snippet, IAdapterMetadata is an interface that defines a property to read the service name.

public interface IAdapterMetadata
{
    string ServiceName { get; }
}

When MEF imports the components, it dynamically generates a class that implements the IAdapterMetadata interface and sets the values given in the ExportMetadata attributes as the property values of this dynamically generated class. Hence, you can now route the request to the ProductService like this…

IAdapter adapter = adapters
    .Where(a => a.Metadata.ServiceName == "ProductService")
    .Single().Value;

Life time management

The ComposeParts() method composes all imports for an instance. By default, composed parts are shared and container-managed; if you want to change this to a new instance per import, you can add a PartCreationPolicy attribute…

[Export(typeof(IAdapter))]
[PartCreationPolicy(CreationPolicy.NonShared)]
public class CustomerServiceAdapter : IAdapter
{
    public object Invoke(object data)
    {
        return "From Customer";
    }
}

You can remove parts from the container through the container's ReleaseExport method. You can also compose an instance without attaching it to the container, like this…

 container.SatisfyImportsOnce(manager);

MEF can be used as a dependency injection container; however, MEF provides more functionality than Unity, and there are good reasons to use it as it continues to evolve. .NET 4.5 extends MEF with convention-based registration in addition to the attribute-based model. That's all I have time for this weekend.

Thursday, April 12, 2012

Task based Asynchronous pattern, Async & Await and .NET 4.5

One of the key features in .NET 4.5 is making it much easier to write asynchronous programs. If I were to write asynchronous programs in .NET 2.0/3.5, I would follow either the event-based model or the callback-based model. For example, a synchronous method that does intensive work (say DoWork()) can be made asynchronous using the following patterns:

1) Implementing the IAsyncResult pattern. In this implementation, two methods are exposed for the synchronous DoWork() method: BeginDoWork() and EndDoWork(). The user calls BeginDoWork(), passing in the required parameters and a callback of the delegate type AsyncCallback. BeginDoWork() spawns a new thread and returns control to the caller. Once work is completed on the spawned thread, as a last step it informs the IAsyncResult implementation, which in turn invokes the callback that was passed into BeginDoWork(); the callback then calls EndDoWork() to retrieve the result.

2) Implementing the event-based pattern. Here a DoWorkAsync() method is implemented and an event is exposed so that the user can subscribe to it. The event is raised by the implementation once the work completes.
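A minimal sketch of the event-based pattern (the class and member names are illustrative, not from any real API): the caller subscribes to the event, kicks off the work, and is notified when it is done.

```csharp
using System;
using System.Threading;

// Event-based asynchronous pattern: subscribe, start, get notified.
public class Worker
{
    public event Action<int> DoWorkCompleted;

    public void DoWorkAsync()
    {
        ThreadPool.QueueUserWorkItem(_ =>
        {
            int result = 100; // stand-in for intensive work
            var handler = DoWorkCompleted;
            if (handler != null)
                handler(result); // raised on the worker thread
        });
    }
}
```

Note that the event fires on the worker thread; UI code would need to marshal back to its own thread.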

.NET 4.0 introduced Tasks. The Task object exposes a continuation feature that makes it much easier to write asynchronous programs: using it, I can assign some work to a task to be executed on a separate thread and then subscribe a continuation to be executed once the primary task is done.
Task<int> task = new Task<int>(() =>
    { /* Do intensive work here */ return 100; });
task.ContinueWith(t => { Console.WriteLine("Result is -- " + t.Result); });
task.Start();
In .Net 4.5 this becomes much easier with the async and await keywords. So the same program can be written like this...
private async static void InvokeAsync()
{
    int val = await WorkAsync();
    // This is your continuation code...
    Console.WriteLine("Result is -- " + val);
}
The async keyword tells the compiler that this is an asynchronous method. When the compiler sees the await expression, it subscribes the rest of the method, i.e. the Console.WriteLine call (in this case) and any other logic that follows, as a continuation on the Task object returned by WorkAsync(), and control is returned to the calling thread.
The WorkAsync method creates a Task object that encapsulates the work that needs to be done and returns it; the await keyword operates on that Task object.

The WorkAsync() method would be like this...
private static Task<int> WorkAsync()
{
    // Do intensive work here...
    Task<int> task = new Task<int>(() =>
        { /* Do stuff here */ return 1000; });
    task.Start();
    return task;
}
Essentially, the await keyword is used at the point where you are doing some work on a separate thread, usually through a Task, and the rest of the method after the await call acts as the callback. It is interesting to note that you can use multiple await calls inside a single async method.
Practically, you would rarely need to implement a WorkAsync() method yourself, as most APIs in .NET 4.5 also expose a Task-based XXXAsync() version that can be used directly with the await keyword. For example, WCF proxies now expose Task-based asynchronous methods so they can work with the await keyword.
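With the Task-based APIs in .NET 4.5, the hand-written WorkAsync() above also shrinks to a Task.Run call. A small sketch (the class and method names are mine):

```csharp
using System;
using System.Threading.Tasks;

public static class AsyncDemo
{
    // .NET 4.5 idiom: Task.Run queues the work and returns an already-started
    // Task, replacing the manual new Task(...) / Start() dance.
    public static Task<int> WorkAsync()
    {
        return Task.Run(() => 1000);
    }

    // The code after 'await' runs as a continuation once WorkAsync completes.
    public static async Task<int> InvokeAsync()
    {
        int val = await WorkAsync();
        return val + 1;
    }
}
```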

Saturday, March 31, 2012

REST Services with ASP.NET Web API

Some time back, ASP.NET MVC 4 beta was released; this comes with quite a lot of capabilities, from creating mobile web applications with HTML5 to new features in Razor.
One of the milestones of this release is the ASP.NET Web API, which allows developers to implement REST services. In .NET 3.5/4.0, WCF provided support for creating REST APIs using the webHttpBinding; however, most of the features required to run such a service need the ASP.NET compatibility mode, which basically means a request to the REST service first goes through the ASP.NET pipeline before being handed over to WCF. Then there was the WCF Web API, where Microsoft tried to redefine how REST services are created, but they opted to move REST support into ASP.NET instead; hence the ASP.NET Web API became the de facto technology for creating REST services. The WCF webHttpBinding still exists, but the recommendation is to use the Web API.

Implementing a Web API is simple; the steps involved are:
1) Create an MVC 4 project
2) Define a route for the Web API
3) Create a controller that inherits from ApiController instead of Controller
4) Start writing those GET, POST, PUT and DELETE methods....

Once you have installed ASP.NET MVC 4, you can create a controller like this...

public class ProjectsController : ApiController
{
    public IList<Project> GetProjects()
    {
        return GetProjectsFromDB();
    }

    public void PostProject(Project project)
    {
        SaveProjectIntoDB(project);
    }
}


In your Global.asax file, you can create a route like this...

routes.MapHttpRoute(
    name: "DefaultApi",
    routeTemplate: "api/{controller}/{id}",
    defaults: new { id = RouteParameter.Optional }
);

That's it. You can open up a web browser or Fiddler and make a request; to call the GetProjects() method, your URL would be like this: http://localhost:8080/api/projects. In Fiddler you would see a JSON response similar to this....

[{"EndDate":"\/Date(1333251468665+0530)\/","ProjectID":1,"ProjectName":"XYZ 12","ProjectType":"T&M","StartDate":"\/Date(1333251468665+0530)\/"},{"EndDate":"\/Date(1333251468665+0530)\/","ProjectID":2,"ProjectName":"JJ 777","ProjectType":"FB","StartDate":"\/Date(1333251468665+0530)\/"}]

Now, if you know MVC, you would ask me: you did not refer to the action in the URL or in the route table, so how does ASP.NET know to invoke GetProjects()? The answer is that the controller checks the HTTP method the request arrived on; if it is a GET, the controller will try to match any controller method whose name starts with the word Get. The same convention applies for POST. For example, the PostProject() method can be invoked by a POST request to a URL of the format http://localhost:8080/api/projects. As long as you make the request with the POST method, the PostProject() method will be called. This convention allows the concept of a resource being governed by one controller through the HTTP verbs GET, POST, DELETE and PUT.

With the default convention discussed above, if you want a method whose name does not start with Get, or multiple methods with the same signature starting with Get, you have to specify a route for this and explicitly include the action part in the URL. You also need to attribute the methods with HttpGet, HttpPost, etc...

[HttpGet]
public IList<Project> ActiveProjects()
{
    return GetProjectsFromDB(true);
}

Your route settings in the global file would be...

routes.MapHttpRoute(
    name: "DefaultApiWithAction",
    routeTemplate: "api/{controller}/{action}/{id}",
    defaults: new { id = RouteParameter.Optional }
);


The URL format you would use to invoke this method would be

http://localhost:51499/api/projects/activeprojects

Unlike a WCF REST service, where you need to attribute your method to return the result in a specific format (WebMessageFormat), the Web API allows the client to specify the format it accepts, and the Web API will format the response accordingly. This is achieved by the client specifying the format in the Accept header of the HTTP request. For example, if you want the data returned as XML, you would set the Accept header in Fiddler like this: Accept: application/xml.
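From code, the same thing can be done with the HttpClient types that ship alongside the Web API. A sketch (the URL is the sample one from this post; nothing is actually sent here, we only build the request):

```csharp
using System.Net.Http;
using System.Net.Http.Headers;

public static class AcceptHeaderDemo
{
    // Builds a GET request that asks the Web API for XML instead of the
    // default JSON by setting the Accept header.
    public static HttpRequestMessage BuildXmlRequest()
    {
        var request = new HttpRequestMessage(HttpMethod.Get,
            "http://localhost:51499/api/projects/activeprojects");
        request.Headers.Accept.Add(
            new MediaTypeWithQualityHeaderValue("application/xml"));
        return request;
    }
}
```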

Another cool thing about Web APIs is that they allow an OData-like syntax to query the API itself. For example, if I want to get only the top two projects, my URL would be like this:

http://localhost:51499/api/projects/activeprojects?$top=2

However, for this to work, you need to change your return type to IQueryable&lt;Project&gt;, like this...

public IQueryable<Project> ActiveProjects()
{
    return GetProjectsFromDB(true).AsQueryable();
}

The URL format to return the top 2 projects ordered by ProjectType would be like this..

http://localhost:51499/api/projects/activeprojects?$top=2&$orderby=ProjectType
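Roughly speaking, those two query options are applied to the IQueryable you return as an OrderBy followed by a Take. A sketch over in-memory data (the sample projects are made up for illustration):

```csharp
using System.Linq;

public static class ODataDemo
{
    // Simulates what $orderby=ProjectType&$top=2 does to the returned
    // IQueryable, using an in-memory source. Returns the matching IDs.
    public static int[] TopTwoByType()
    {
        var projects = new[]
        {
            new { ProjectID = 1, ProjectType = "T&M" },
            new { ProjectID = 2, ProjectType = "FB"  },
            new { ProjectID = 3, ProjectType = "FB"  },
        }.AsQueryable();

        return projects
            .OrderBy(p => p.ProjectType)   // $orderby=ProjectType
            .Take(2)                       // $top=2
            .Select(p => p.ProjectID)
            .ToArray();
    }
}
```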

Web APIs can also be self-hosted in a console application and used from ASP.NET Web Forms applications.

That's all I have time for today...

Friday, March 30, 2012

Simple factory, Dependency Injection and Unity

Someone asked me whether a simple factory can be implemented using a dependency injection container. The example he brought up is one where a simple factory takes in a string and switches on this string to return the correct instance of the object needed....I am talking about something like this.

internal IWaterMarkProvider GetProvider(string fileExtension)
{
    IWaterMarkProvider provider = null;

    switch (fileExtension.ToLower())
    {
        case "pdf":
            provider = new PDFProvider();
            break;
        case "docx":
            provider = new WordProvider();
            break;
        case "pptx":
            provider = new PPTProvider();
            break;
        case "xlsx":
            provider = new ExcelProvider();
            break;
    }

    return provider;
}

The answer was yes. So, if you are using Microsoft Unity (a dependency injection container), you would have a configuration file like this...

<configuration>
  <configSections>
    <section name="unity" type="Microsoft.Practices.Unity.Configuration.UnityConfigurationSection, Microsoft.Practices.Unity.Configuration"/>
  </configSections>
  <unity xmlns="http://schemas.microsoft.com/practices/2010/unity">
    <container name="providerContainer">
      <register type="TestLibrary.IWaterMarkProvider,TestLibrary" name="docx" mapTo="TestLibrary.WordProvider,TestLibrary" />
      <register type="TestLibrary.IWaterMarkProvider,TestLibrary" name="pdf" mapTo="TestLibrary.PDFProvider,TestLibrary" />
      <register type="TestLibrary.IWaterMarkProvider,TestLibrary" name="pptx" mapTo="TestLibrary.PPTProvider,TestLibrary" />
      <register type="TestLibrary.IWaterMarkProvider,TestLibrary" name="xlsx" mapTo="TestLibrary.ExcelProvider,TestLibrary" />
    </container>
  </unity>
</configuration>

We are basically configuring a Unity container with "named" registrations for the IWaterMarkProvider interface. So, for example, the named registration "docx" is mapped to the WordProvider; hence, when we resolve an instance of IWaterMarkProvider passing the file extension "docx", the Unity container will create an instance of the WordProvider class.
The code below shows how to do this....

IUnityContainer container = new UnityContainer().LoadConfiguration("providerContainer");
IWaterMarkProvider provider = container.Resolve<IWaterMarkProvider>(fileType);
provider.WaterMark();


Note that "providerContainer" is the name of the container specified in the configuration file.
The "fileType" is a parameter that holds the type of the file e.g. "docx"
What is the advantage of this approach over the one we initially put forward, i.e. the simple factory we implemented in the first code listing?
First, the creation of these classes is "outsourced" and becomes the responsibility of the container.
Secondly, the providers can be changed without recompiling the source; this is true even when we add a new file type, as we can just configure it in the configuration file.
Thirdly, if these providers depend on other types, the container will take care of injecting those types into the provider.
Another advantage (unrelated to this example) of using Unity is that it promotes loose coupling. This makes mocking really easy while unit testing, as you can just point Unity at your mocks rather than the real dependencies.

Also note that the lifetime of the instances created through the container can be controlled. By default, each call to the container will create a new instance; if you want the WordProvider to be a singleton, you can do this...

<register type="TestLibrary.IWaterMarkProvider,TestLibrary" name="docx" mapTo="TestLibrary.WordProvider,TestLibrary">
  <lifetime type="singleton"/>
</register>

Wednesday, March 28, 2012

System.Runtime.Caching (.Net 4.0)

.NET 4.0 introduced the System.Runtime.Caching namespace, which lets developers use caching functionality independent of the cache found in the System.Web DLL.
System.Runtime.Caching provides an in-memory cache but also allows developers to plug in a different provider. For example, you could have an implementation that holds items in memory but also persists them to disk to guard against cache eviction.

Adding an item into the cache is done this way...

// Get the default cache
MemoryCache cache = MemoryCache.Default;

// Add the item
cache.Add("MyKey",
    "Nairooz Nilafdeen",
    DateTimeOffset.Now.AddMinutes(19));

// Get the item from the cache
if (cache.Contains("MyKey"))
{
    Console.WriteLine(cache.Get("MyKey"));
}


The caching namespace also allows you to attach ChangeMonitors to the cache policy; for example, you can specify that a cache entry should expire once a file changes or the result of a database query changes.
You can watch a file and expire the cache entry after a specified time period (absolute or sliding) or when the file content changes...the code below demonstrates this.

string fileName = @"C:\ERR_LOG.log";

CacheItemPolicy policy = new CacheItemPolicy();
policy.AbsoluteExpiration = DateTimeOffset.Now.AddSeconds(40);
policy.ChangeMonitors.Add(new HostFileChangeMonitor(
    new List<string> { fileName }));

cache.Add("FileKey", "Nairooz Nilafdeen", policy);


You can also attach a SqlDependency with the SqlChangeMonitor to expire the cache when the result of a watched query changes...

SqlDependency dependency = new SqlDependency();

// initialize the SqlDependency here...

policy.ChangeMonitors.Add(new SqlChangeMonitor(dependency));