Sunday, November 14, 2010

WCF 4 - File Less Activation, Default Endpoints and Default Bindings

Started going through WCF 4 a few months back; however, it's only today that I got a chance to write something about it.

So let me talk about something that was really interesting when I first read about it: file-less activation of services.

In WCF 3.5, when you wanted to host a service in IIS, you had to go through adding endpoints and bindings, and also create a physical .svc file so that IIS could pick the request up.

In WCF 4 this has been simplified so much that you can get a service up and running in no time.
Let's take an example: I created a service Service1 in a namespace DefaultEndpointSVC, and this is my web.config file....

<serviceHostingEnvironment>
  <serviceActivations>
    <add service="DefaultEndpointSVC.Service1" relativeAddress="myService.svc"/>
  </serviceActivations>
</serviceHostingEnvironment>


Now, if I deploy my service in IIS inside a virtual directory /DefaultEndpointSVC, I can access my service like this...

Note that I did not add any physical file called myService.svc; instead I configured it in the serviceActivations element. The relativeAddress attribute specifies the relative .svc address, and the service attribute specifies the service to activate when IIS gets a request for "myService.svc"; in our case, Service1 will get activated.

Now, are we missing something here? Where is the endpoint tag? Wasn't WCF all about the ABC (address, binding and contract)?

WCF 4 introduces the concept of default endpoints: if you don't configure an endpoint, WCF will add a default endpoint for you. So now the question is, how does it do this?

WCF does this by looking at the addressing scheme. In our case we are accessing our service through http, and because we have not defined an endpoint in our config file, it will add a basicHttpBinding endpoint.
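The same mechanism is visible outside IIS too. Below is a minimal self-hosting sketch (the contract, service and base address are made up for illustration); with no endpoints configured anywhere, ServiceHost.AddDefaultEndpoints() (which Open() calls implicitly when the endpoint list is empty) adds a basicHttpBinding endpoint for the http base address:

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IService1
{
    [OperationContract]
    string Echo(string input);
}

public class Service1 : IService1
{
    public string Echo(string input) { return input; }
}

class Program
{
    static void Main()
    {
        // No endpoints are configured anywhere; Open() would add the
        // defaults implicitly, but calling AddDefaultEndpoints() lets
        // us inspect them without actually opening the host.
        var host = new ServiceHost(typeof(Service1),
            new Uri("http://localhost:8080/Service1"));
        host.AddDefaultEndpoints();

        foreach (var ep in host.Description.Endpoints)
            Console.WriteLine("{0} -> {1}", ep.Address, ep.Binding.Name);
    }
}
```

Running this prints a single endpoint whose binding is BasicHttpBinding, confirming the http-to-basicHttpBinding default mapping discussed below.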

WCF has a default binding for each transport protocol: for http the default binding is basicHttpBinding, and for net.tcp it uses netTcpBinding. You can get the full list of default bindings that WCF uses from the machine.config.comments.config file found in the folder
C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\Config\, within the protocolMapping section.

In WCF 4.0 you can put your WCF configuration directly in the machine.config file, so that it affects all the services hosted on that machine. So, if all your services use wsHttpBinding, all you have to do is change the default protocol mapping to choose wsHttpBinding instead of the default basicHttpBinding, like this...

<add scheme="http" binding="wsHttpBinding" bindingConfiguration="" />

Now, what if a new service that you are creating needs to be exposed over basicHttpBinding? You put the corresponding mapping line in your web or app config, and this overrides the machine-level (default) protocol mapping.
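An application-level override mapping http back to basicHttpBinding would look like this in the web.config (a sketch; the surrounding system.serviceModel element is assumed):

```xml
<system.serviceModel>
  <protocolMapping>
    <add scheme="http" binding="basicHttpBinding" bindingConfiguration="" />
  </protocolMapping>
</system.serviceModel>
```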

If you have noticed, there is no binding configuration either. For each binding there is an associated default configuration; if you want to change it at the application level, you just need to add that binding in the config file, and you no longer need to associate the binding configuration name on the endpoint (see the config section below). If you place a netTcpBinding binding configuration, any net.tcp endpoint defined for that application will pick up that binding configuration, but you can still use named configurations for your endpoints.

The advantage of no longer needing to associate your binding configuration with your endpoint is powerful: now you can just add a binding configuration with the standard values for your applications into your machine.config like this...

<bindings>
  <netTcpBinding>
    <!-- values are only an example; note the binding has no name attribute -->
    <binding maxReceivedMessageSize="1048576" listenBacklog="100" maxConnections="100" />
  </netTcpBinding>
</bindings>

Notice that there is no name attribute; hence this binding configuration will be used by any service hosted on this machine that uses a net.tcp endpoint without a named binding configuration.

The same concept exists for service behaviors; you can put a default service behavior in your machine.config file and it will be used globally by all the services hosted.

You can also name this behavior, and if you specify the same behavior name in your application web.config, the settings for the behavior in the machine.config will be inherited.

Guess that's all I have time for now...need to go and finish up the movie I started...

Sunday, November 7, 2010

Running .NET code on a 64 bit Machine

Needed some information on how to develop .NET applications for 64 bit machines, so I was doing the usual, reading a little bit about it; before I forget, I thought of posting it here for future reference.

The main advantage of using a 64 bit machine over a 32 bit machine is that memory is not constrained to 4GB; on a 32 bit machine the highest address the CPU understands is around 4GB, but on a 64 bit machine the address space becomes much larger, on the order of 2^64.

Now let's talk about running .NET code. Code that is 100% managed has no problem when it executes on a 64 bit machine; this is due to the fact that the CLR takes care of it.

Going back to basics: when you compile a piece of C# or VB.NET code, the compiler produces MSIL (Microsoft Intermediate Language) code that contains enough information for the CLR (Common Language Runtime) to start execution, like metadata, the types used and so on...

When you start a program, the Windows loader peeks at the assembly to see if the application needs the CLR to be loaded, which version of the CLR to use, and the platform (x64 or x86) the code was meant to run on.
Then the CLR is loaded and the MSIL is executed, but this code first has to be compiled (JITted) into native code according to the current machine's instruction set so that the CPU can understand and execute the program.

So, building your managed code on a 64 bit or 32 bit machine produces the same MSIL code; generally, for managed code, it's only when this code gets JITted that you need to worry about the machine architecture.

When you install the .NET Framework on a 64 bit machine running a 64 bit OS, the installer installs both versions of the CLR. Yes, there are 2 versions of the CLR (and 2 versions of the JIT as well), one for the 32 bit and the other catering for the 64 bit architecture. The Windows loader is responsible for choosing which CLR to use, based on information the developer sets on the assembly at compile time; when you build your .NET application in Visual Studio (2005 and higher), you can specify which platform you are building your code for. Possible values include x64, x86, Itanium and Any CPU.

When you specify x64 for your build platform, the loader will execute your executable on the x64 CLR, meaning that your MSIL code will be JITted into x64 native code. The same applies when you specify x86 for your build target.
By setting the above you are forcibly telling the loader which version of the CLR/JIT to use; this becomes very useful when you are loading some 32 bit DLLs into your application as well, which we will discuss in a few seconds...
When you specify Any CPU for your build target, the loader will select the CLR/JIT according to the machine your code is running on, i.e. on a 32 bit machine the 32 bit CLR will be used, and on a 64 bit machine the 64 bit CLR will be used.
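A quick way to see which CLR your process actually ended up on is to check the pointer size at runtime; a minimal sketch:

```csharp
using System;

class BitnessCheck
{
    static void Main()
    {
        // IntPtr.Size is 8 when the MSIL was JITted to x64 native code,
        // and 4 on x86 (including 32 bit processes running under WOW64).
        Console.WriteLine("Process is {0} bit", IntPtr.Size * 8);

        // .NET 4.0 also exposes these helpers directly.
        Console.WriteLine("64 bit process: {0}", Environment.Is64BitProcess);
        Console.WriteLine("64 bit OS:      {0}", Environment.Is64BitOperatingSystem);
    }
}
```

Building the same code as Any CPU and running it on 32 bit and 64 bit machines prints different results, even though the MSIL is identical.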

Now lets discuss some important rules...
You cannot load 32 bit DLLs into a 64 bit process, or otherwise mix 64 bit and 32 bit assemblies in the same process.

The 64 bit Windows OS was designed in such a way that 32 bit applications can still run on it. The 64 bit version of Windows comes with an emulation layer called WOW64, and all 32 bit applications run on this layer; as per MSDN there is not much of a performance implication, more on this here....
Something interesting to note is that if you install Visual Studio on a 64 bit machine, it will install both versions of the CLR; however, Visual Studio itself is a 32 bit application and hence runs on WOW64.

So, if you are developing a pure, 100% managed application you don't need to worry about porting your code to 64 bit; an xcopy would work just fine.
However, you might need to review your application if it is...
1) using third-party DLLs that are built for 32 bit machines
2) using COM objects
3) using unmanaged code
4) serializing objects; this becomes a problem when object state serialized on a 64 bit machine is consumed on a 32 bit machine, and it can be overcome to an extent by using XML serialization.

When porting .NET code from 32 bit to 64 bit, you need to review the above, do necessary changes, deploy and then test it out.

This is an old article that can be used as a guide.

Wednesday, October 20, 2010

Nullable Value vs Casting.

Was talking to a team member half an hour ago, and he pointed out that he needed to change a line of code like this (isLocked is a nullable bool)
bool isEnabled = (bool)isLocked;

to

bool isEnabled = isLocked.Value;

I asked him why we needed to do it that way….and his answer was NO IDEA…
So I thought of disassembling the IL to see what was really happening. This is the decompiled code I got from Reflector..

public struct Nullable<T> where T : struct
{
    private bool hasValue;
    internal T value;

    public Nullable(T value) { this.value = value; this.hasValue = true; }

    public bool HasValue { get { return this.hasValue; } }

    public T Value
    {
        get
        {
            if (!this.HasValue)
                ThrowHelper.ThrowInvalidOperationException(ExceptionResource.InvalidOperation_NoValue);
            return this.value;
        }
    }

    public T GetValueOrDefault() { return this.value; }
}

Digging a little bit more: casting a nullable generates the same IL as using the .Value property, so there is no difference between casting a nullable type and using .Value.

But it is interesting to note from the decompiled code that the GetValueOrDefault() method is more efficient than Value, as the HasValue check does not happen; i.e. you should prefer GetValueOrDefault() over Value if you just want the default value when the type does not have a value.
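A quick sketch of the three access paths; the behavior follows directly from the decompiled code above:

```csharp
using System;

class NullableDemo
{
    static void Main()
    {
        bool? isLocked = null;

        // GetValueOrDefault() skips the HasValue check and just returns
        // the backing field, which is default(bool) == false here.
        Console.WriteLine(isLocked.GetValueOrDefault()); // False

        try
        {
            // Both the cast and .Value go through the HasValue check
            // and throw when there is no value.
            bool b = (bool)isLocked;
        }
        catch (InvalidOperationException)
        {
            Console.WriteLine("cast threw, exactly as .Value would");
        }
    }
}
```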

Monday, July 26, 2010

Parallel LINQ (PLINQ) - Intro

.Net 4.0 supports Parallel LINQ, or PLINQ; PLINQ is a parallel implementation of LINQ.
PLINQ has the same characteristics as LINQ, in that it executes queries in a deferred manner.
However, the main difference is that with PLINQ your data source gets partitioned and each chunk is processed by different worker threads (taking into account the number of processor cores that you have), making your query execute much faster on certain occasions.

Running a query in parallel is just a matter of calling the AsParallel() method on the data source; this returns a ParallelQuery<T> and your query will execute in parallel.

Let's look at a code sample...

var query = from num in source.AsParallel()
where num % 3 == 0
select ProcessNumber(num);

Now, when this query is iterated over in a foreach loop, or when you call ToList() etc., the query will run on different worker threads.

Although you have parallelized your query execution, if you want to do something with the result within a loop, that processing will happen serially even though the query executed in parallel.
You can parallelize that part as well by running the loop using Parallel.ForEach(), or you can use the ForAll() method like this....

var query = from num in source.AsParallel()
            where num % 3 == 0
            select ProcessNumber(num);

query.ForAll( x => { /*Do Something*/ } );

In the code above the query runs in parallel, and the result is processed in parallel as well.

Running LINQ queries in parallel does not always give you the best performance; this is basically due to the fact that the cost of initialization and partitioning can outweigh the benefit of actually running the query in parallel.
Hence it's necessary for you to compare which option is best, LINQ or PLINQ.
MSDN documents that PLINQ will first see if the query can be run in parallel, then weigh the cost of running it in parallel vs sequentially; if the cost of running it in parallel is higher, the runtime will run the query sequentially.
I tried it out, but could not actually see the difference :)

Another good option you might want to use, if you are thinking of consuming the query with the ForAll() method, is running the parallel query with a ParallelMergeOptions value.
By default, although the query executes in parallel, the runtime has to merge the results from the different worker threads into one single result if you are running the query through a foreach loop or calling ToList(); this sometimes causes partial buffering.

However, if you are iterating the query with ForAll(), you can take the benefit of not buffering the results, processing each item as soon as it returns from a worker thread. Here is a code sample on how to do this...

var query = from num in source.AsParallel().WithMergeOptions(ParallelMergeOptions.NotBuffered)
            where num % 3 == 0
            select ProcessNumber(num);

query.ForAll( x => { /*Do Something*/ } );

Although ForAll() consumes items as they return from the threads anyway, I saw some noticeable difference when running the query with an explicit ParallelMergeOptions value.

Implementing Asynchronous Callbacks with Task Parallel Library

Bored, so I thought of posting how you can implement callbacks with the Task Parallel Library (TPL).

So what am I talking about here? Basically I start a task in one thread and I want it to call another method once it completes (asynchronous callbacks).

Here is a sample code....

Task<int> parent = new Task<int>(() =>
{
    Console.WriteLine("In parent");
    return 100;
});

Task<int> child = parent.ContinueWith<int>(a =>
{
    Console.WriteLine("In Child");
    return 19 + a.Result;
});

parent.Start();

The code explains it all: all I have to do is create the task and then call its ContinueWith method to register the callback. It's important to note that the parent task is an input to the continuation callback, so the result of the parent can be accessed by the callback.

The callback is again another Task, so it does not block the calling thread.
Callbacks in TPL give you more flexibility in the way you want the callback to be invoked; for example, I can specify that I want the callback to be invoked only if the parent did not run successfully to the end.

I can re-write the above code to do exactly that by passing a TaskContinuationOptions value as the 2nd parameter of the ContinueWith method.

Task<int> parent = new Task<int>(() =>
{
    Console.WriteLine("In parent");
    return 100;
});

Task<int> child = parent.ContinueWith<int>(a =>
{
    Console.WriteLine("In Child");
    return 19 + a.Result;
}, TaskContinuationOptions.OnlyOnFaulted);

parent.Start();

The options are bitwise flags, so I can combine several of them with the pipe (|) operator. A few important options are NotOnCanceled, OnlyOnRanToCompletion, OnlyOnFaulted etc.

Running Parallel Tasks with The Task Parallel Library

Down at home with conjunctivitis; it was boring at home, so I was listening to some old classics and then thought of writing a post on how you can run tasks with the Task Parallel Library (TPL).

Going forward, Microsoft encourages developers to use TPL for concurrent programming. In my previous post I talked about data parallelism, where I showed how blocks of work running inside a loop can be scheduled to run on different threads.

In previous versions of .NET, if I wanted to execute a task on another thread I had to do this.

Thread thread = new Thread(() =>
{
    //Do some work
    Console.WriteLine("Starting thread");
});
thread.Start();

With TPL I only do this..

Parallel.Invoke(() =>
{
    //Do some work
    Console.WriteLine("Starting thread");
});

The static Invoke method of the Parallel class has 2 overloads; the one we use takes a params array of parameterless void delegates (Action[]).

If you want more control over what you pass into the thread, and if you also need the return value, you can use the Task class in the System.Threading.Tasks namespace.

.Net 2.0 introduced a Thread constructor that takes a ParameterizedThreadStart delegate, which has no return type but takes an object as a parameter.

With TPL this can be achieved much more easily with the Task class, which takes care of scheduling the work on another thread.
Let's take a look at some code...

Task<int> task = new Task<int>(obj =>
{
    return ((int)obj + 10);
}, 14);
task.Start();

First we create the Task object; the generic int specifies that the return value from the thread is an integer.

The first parameter to the Task constructor is a delegate that takes an object and returns a value of the generic type, in our case an integer.

The second parameter is the state object; basically this is the input parameter into the thread, and here I am passing 14.

Inside the lambda function I just add 10 to the input value and return it; now I can access the Task.Result property and will see 24.

Accessing the Result property of the Task object before the thread has finished executing will cause the calling (main) thread to block, returning once the value for Result is available.

Another efficient way of running this task in TPL is like this....

Task<int> t = Task.Factory.StartNew<int>(obj =>
{
    return ((int)obj + 10);
}, 14);

I like the above when I don't need the flexibility of creating the Task separately and then starting it separately.

Sunday, July 25, 2010

ConcurrentBag<T> - Thread Safe Collections

.Net 4.0 introduces a new namespace, System.Collections.Concurrent; this namespace contains a set of collections that are very useful in threaded programming.
The Add and Remove methods of List<T> are not thread safe, meaning that if you are adding or removing items from a list that is accessed from multiple threads, you can end up overwriting some of the items.
So, in multi-threaded programming, you would need to lock the list before adding or removing items.
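A minimal sketch of the locking you would otherwise need around a plain List<T> (the shared list and the loop are made up for illustration):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

class LockedListDemo
{
    static readonly List<int> items = new List<int>();
    static readonly object gate = new object();

    static void Main()
    {
        Parallel.For(0, 100000, i =>
        {
            // Without this lock, concurrent Add calls can corrupt the
            // list's internal array and silently lose items.
            lock (gate)
            {
                items.Add(i);
            }
        });

        System.Console.WriteLine(items.Count); // 100000
    }
}
```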

The ConcurrentBag<T> in System.Collections.Concurrent is a thread safe collection, i.e. all you have to do is call Add or TryTake in the usual way and the collection will take care of adding the items without overwriting them.

Here is an example using ConcurrentBag<T> with the Task Parallel Library...

ConcurrentBag<int> bag = new ConcurrentBag<int>();
try
{
    Parallel.For(0, 100000, i =>
    {
        //Do some work here
        bag.Add(i);
    });
}
catch (AggregateException e)
{
    foreach (Exception ex in e.InnerExceptions)
        Console.WriteLine(ex.Message);
}
The ConcurrentBag resembles the List in that it is unordered, can contain duplicate values, and accepts null as a valid value for a reference type.

The System.Collections.Concurrent namespace also contains other thread safe collections; to name a few: ConcurrentStack, ConcurrentDictionary, ConcurrentQueue...

Task Parallel Library - Refresher - Stop an Iteration

I blogged about the Task Parallel Library (TPL) 2 years back, when it was in CTP. I was taking a class on High Performance Computing yesterday, and I realized I had forgotten all about this library :).

This library has now been officially released with .Net 4.0, and Microsoft recommends that you use it where possible when writing concurrent programs, so that your program can take maximum advantage of the number of processors that you have.

I thought of posting some sample code as a refresher.
Here is the example: I have a list of Customer objects and I need to get the object that matches a specific criterion, let's say a Name property equal to "F".

Here is how my Customer object looks.

public class Customer
{
    public int ID { get; set; }
    public string Name { get; set; }
    public int Age { get; set; }
}

Let's assume that the Name property is unique.

If I were to write the algorithm for this in .Net 1.1 or .Net 2.0, my logic would look like this.

Customer result = null;
foreach (Customer c in dataSource)
{
    if (c.Name == "F")
    {
        result = c;
        break;
    }
}
This code will run on a single thread, unless you write a partitioning algorithm yourself and give chunks of the data source to different threads.

This is where TPL comes into play; if I were using TPL I would write the code like this.

Customer result = null;
IList<Customer> dataSource = GetMockDataSource(); //Get the data

Parallel.For(0, dataSource.Count, (i, state) =>
{
    if (!state.IsStopped)
    {
        Customer c = dataSource[i];
        if (c.Name == "F")
        {
            result = c;
            state.Stop();
        }
    }
});

This is what happens under the covers: the TPL runtime partitions the range (in our case we are using mere indexes and accessing the Customer objects through them), creates threads, and gives chunks of the indexes to each thread. The runtime can spawn threads on different cores according to resource availability. By comparison this increases performance, as we are dividing the Customer list into chunks and each chunk is processed by a different thread managed by the runtime.

Let's examine the code.
The first two lines say it all; the Parallel.For call is where we use the TPL library.
The Parallel class lives in the System.Threading.Tasks namespace. The static For method has many overloads; in the one we use, the first parameter specifies the index the loop should start from and the second parameter specifies where the loop should end.

The 3rd parameter takes an Action<int, ParallelLoopState> delegate; for simplicity I have implemented it as a lambda function.
Within the lambda function I check whether the current Customer object satisfies our criterion; if so, I use the ParallelLoopState object to signal to the runtime that it should stop further iterations, as we have found what we were looking for, by calling ParallelLoopState.Stop().

When you call the Stop method on the ParallelLoopState object, the runtime will not start any more iterations; however, it cannot stop iterations that have already started, so we explicitly check whether some other thread has signaled a stop via the IsStopped property of the ParallelLoopState object.

Although this example could have been written more efficiently using PLINQ, I chose the task library to show the underlying basics.

Saturday, July 17, 2010

HashSet<T> vs .Net 4.0 SortedSet<T>

HashSet<T> has been around for a while; it stores objects in such a way that the time to add an item to the set, remove it, or search for an item is O(1), constant time.
It uses a hash based implementation to achieve this constant time for these operations. However, when you want to iterate the collection in sorted order, the operation is expensive: the values within the HashSet are not sorted, so you need to create a sorted collection and then iterate that.
Because the values are stored in the collection based on their hash, the sort operation is expensive and a new collection has to be created.

This is where SortedSet<T> comes into play. This collection type was introduced in .Net 4.0; when you add an item to this collection, the item is placed according to the sort criteria, so when you need to iterate the sorted collection it is much faster than a HashSet.

SortedSet has its own cons: now that items have to be placed in the correct position in the collection according to the sort order, the Add() and Remove() operations no longer take constant time.

Searching for an element means a binary search has to be done on the collection, which takes logarithmic time.

The conclusion is that these sets should be chosen according to your requirements, and you are not forced to choose one collection over the other; if you need to iterate over a sorted collection, then a SortedSet would be the best choice.

.Net 4.0 also introduces the ISet<T> interface; both HashSet and SortedSet implement it, so you can always program to the interface and change your type in the middle of the implementation if you feel HashSet would do better than SortedSet.

Btw, why do you need sets anyway? Because they can be manipulated with set operations like Union, Intersect etc., and because they contain only unique elements.
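A small sketch of programming against ISet<T> (the sample values are made up); swapping the concrete type changes iteration order and performance characteristics, but not the set semantics:

```csharp
using System;
using System.Collections.Generic;

class SetDemo
{
    static void Main()
    {
        // Swap SortedSet<int> for HashSet<int> and the rest compiles unchanged.
        ISet<int> a = new SortedSet<int> { 3, 1, 2 };
        ISet<int> b = new HashSet<int> { 2, 3, 4 };

        a.IntersectWith(b);         // a is now { 2, 3 }
        foreach (int i in a)
            Console.WriteLine(i);   // SortedSet iterates in sorted order: 2, 3

        a.UnionWith(b);             // a is now { 2, 3, 4 }
        Console.WriteLine(a.Count); // 3
    }
}
```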

Saturday, July 10, 2010

ASP.NET Localization - Implicit Resource Assignment

In one of my previous posts I talked about ASP.NET localization basics and discussed 2 ways in which resources can be assigned to ASP.NET pages and controls.
In this post I will talk about a powerful feature that can be used to assign resources implicitly.
This feature can be used to assign multiple resources to a control in one shot, without assigning each resource individually.

Let's take an example to demonstrate this: I am going to place a simple Button control on a page.
Now I want to assign 3 types of resources: the Text, the Color of the button, and the tooltip of the button.
This is my resource file.

I have assigned values for all the fields that I require. Note how the keys are named: a prefix, followed by a period, followed by the property of the control that I need to assign the resource to. For example, "button" is my prefix, followed by a period and then the "Text" property.

Now, in my ASPX page all I have to do is this.
Note that without assigning resources to individual properties, I add a meta:resourcekey attribute and pass the prefix as the value of this attribute.
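The markup itself did not survive the post; a sketch of what it would look like (the control id is assumed):

```aspx
<asp:Button ID="Button1" runat="server" meta:resourcekey="button" />
```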

In the background, ASP.NET collects all the resource keys for the page, filters out the ones with the prefix "button", and then, for each filtered key, checks whether the suffix matches a property of the control; if so, it assigns the resource value to that property.

In our example, the initial filtered list will contain 3 keys: "button.Text", "button.Tooltip" and "button.BackColor". We have assigned the meta attribute to the button control, so ASP.NET checks whether each suffix matches a property of the button control. The suffix of the first key is Text, so the resource value for "button.Text" is assigned to the Text property of the button; Tooltip is also a property of the button, so the resource value for "button.Tooltip" is assigned to the button's tooltip, and so on.

The advantage of using meta:resourcekey, as you can see, is that you can assign multiple resources to properties in one shot, implicitly.

Sunday, July 4, 2010

Satellite Assemblies and Strong Names

A good friend of mine from another project was talking about how a satellite assembly he created for a given culture, for an ASP.NET server control project, did not get picked up for that culture; the default resources were picked up instead.

I was so curious as to why this happened, and it was today that it hit me: their project is signed with a strong name key, and he did not sign the satellite assembly with the same key.

In other words, if your main assembly is signed, then all your satellite assemblies need to be signed with the same key; if not, loading of that satellite assembly will fail and it will fall back to the default resources contained within the main assembly.

Friday, July 2, 2010

ASP.NET Localization

I have written some blog posts on globalization before, but they were all about desktop applications or general best practices. This week I was interested in localizing ASP.NET applications (well, I was forced to :))

So, let me start by giving an introduction to ASP.NET resource assignment. Basically it's the same .NET concept: you create resource files, compile them into satellite assemblies and link them to the main DLL.
The resource assignment happens through the resource manager.

Although you can compile your resources before deploying, you can also take advantage of an ASP.NET folder named App_LocalResources: just place all your resource files in this folder and ASP.NET will automatically compile them and create the satellite assemblies.

The screen shot on the left shows a screen shot of my solution.

Here I have added the resource files for Local.aspx into the App_LocalResources folder; I have added 3 different resource files, one for the default (English), one for French (fr-FR) and one for Spanish (es-ES).

The Local.aspx file contains a button with id Button1. If I need to assign resource text to the button I can do it in 3 ways; I will examine the first 2 in this post and the last one in the next post (hopefully!).

The first way is to use an explicit declaration like this.
With the above declaration, ASP.NET fetches the correct resource string from the local resource file (i.e. the resource file within the App_LocalResources folder that matches the aspx file name and the culture suffix, e.g. Local.aspx.fr-FR.resx).
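The declaration itself was lost from the post; a sketch of an explicit resource expression for the button's Text property (the key name follows the resource file naming shown earlier):

```aspx
<asp:Button ID="Button1" runat="server" Text="<%$ Resources: button.Text %>" />
```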

The second way is to actually call the resource manager from the code behind, like this.

Button1.Text = GetLocalResourceObject("button.Text").ToString();

The GetLocalResourceObject method is defined on the Page base class.

You can also access global resources using GetGlobalResourceObject().

The other, cleaner and faster, way of assigning resources to controls is implicit assignment, which I will cover in my next blog post.

On a final note: although the default implementation of the resource manager queries resource files, you can always extend it with your own resource provider that queries a text file, a database, or any other data store; this is a great link where you can start doing that.

Monday, June 21, 2010

XQuery - Projecting Attributes with a Where Clause in a Query

Had to write an XQuery yesterday to extract some attribute values from an XML column, so I thought of posting it here for future reference.

This is the structure of my XML data stored in the column xmlColumn.

These are the records that I have in my table.

What I need is to select the Id column (a normal table column) and then the "value" attributes from the XML column whose "SiteText" attribute is equal to 1.

This is the query written in 2 different ways

select id, xmlColumn.
from temp

select id, xmlColumn.value('data(AuditRecord/Records/Record[@SiteText=1]/@value)[1]', 'varchar(50)')
from temp

and this is the result set I get

Tuesday, June 15, 2010

Validating XML with an XSD using LINQ

So how do you validate an XML document against a particular XSD?
One way is with LINQ to XML, with these steps.
1) Create a XmlSchemaSet
2) Add the schemas that your document needs to conform to
3) Call the Validate method of the XDocument

So here is the example code.

//Load an xml document
XDocument doc = XDocument.Load(@"C:\test.xml");

//Define the schema set
XmlSchemaSet set = new XmlSchemaSet();

//Add the schema to the set
set.Add(string.Empty, @"C:\test.xsd");

//A variable to test whether our document is valid
bool isValid = true;

//Call the Validate method
//Note that I am using a lambda function; alternatively you can
//pass in a delegate method in the traditional way :)
doc.Validate(set, (o, e) => { isValid = false; });

//Print out the result
Console.WriteLine(isValid);
That's it !!.

So what if you want a WCF service to validate this? The service can accept a stream or a string as the input, create the XDocument, and then validate it.

Another way you can enforce the validation is to generate a class hierarchy that conforms to the XSD and expose that as the parameter of the WCF method.
In this case the client has no choice but to use the class types, and if the input to the WCF service does not deserialize correctly into XML conforming to the XSD, an error will be thrown.

Exposing Enum Types From WCF

There are many cases where you would want to expose enum types from a WCF service, so how do you do this?

Simply mark the enum as a data contract and mark the values with the EnumMember attribute, and that's it.

[DataContract]
public enum MyEnumName
{
    [EnumMember]
    XML = 2
}

Monday, June 14, 2010

The maximum nametable character count quota (16384) has been exceeded

Some of our services were growing and the other day it hit the quote, I could not update the service references, nor was I able to run the WCFTest client. An error is diplayed saying "The maximum nametable character count quota (16384) has been exceeded "
The problem was with the mex endpoint, where the XML that was sent was too much for the client to handle; this can be fixed by doing the following.
Just paste the lines below within the configuration section of the devenv.exe.config and the svcutil.exe.config files found at these locations:

C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE,
C:\Program Files\Microsoft SDKs\Windows\v6.0A\bin
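The configuration lines themselves were lost in the page formatting; the usual shape of this fix is a client-side mex endpoint with a raised MaxNameTableCharCount reader quota, along these lines (the binding name and quota value here are illustrative):

```xml
<system.serviceModel>
  <client>
    <endpoint name="net.tcp"
              binding="customBinding"
              bindingConfiguration="largeNameTableQuota"
              contract="IMetadataExchange" />
  </client>
  <bindings>
    <customBinding>
      <binding name="largeNameTableQuota">
        <binaryMessageEncoding>
          <readerQuotas maxNameTableCharCount="2147483647" />
        </binaryMessageEncoding>
        <tcpTransport />
      </binding>
    </customBinding>
  </bindings>
</system.serviceModel>
```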

Restart IIS and you are done.
The detailed error that you get is the following :

Error: Cannot obtain Metadata from net.tcp://localhost:8731/Services/SecurityManager/mex If this is a Windows (R) Communication Foundation service to which you have access, please check that you have enabled metadata publishing at the specified address. For help enabling metadata publishing, please refer to the MSDN documentation at Exchange Error URI: net.tcp://localhost:8731/Services/SecurityManager/mex Metadata contains a reference that cannot be resolved: 'net.tcp://localhost:8731/Services/SecurityManager/mex'. There is an error in the XML document. The maximum nametable character count quota (16384) has been exceeded while reading XML data. The nametable is a data structure used to store strings encountered during XML processing - long XML documents with non-repeating element names, attribute names and attribute values may trigger this quota. This quota may be increased by changing the MaxNameTableCharCount property on the XmlDictionaryReaderQuotas object used when creating the XML reader.

Saturday, May 22, 2010

Configuring mexConnections with net.tcp binding

This is one of those issues that did not explain itself to me, and I could not find any help on MSDN, so I thought of sharing it on the blog for future use.

I have a WCF library that is hosted in IIS; after some time we wanted to increase the maxConnections property of the service to 100 from the default value of 10 and also change the listenBacklog value to the same (if you don't know what these attributes do, a googling would help :))

The service was configured with 2 endpoints, a net.tcp endpoint and the other being the metadata exchange endpoint using mexTcpBinding.
I also configured a base address, part of the configuration file looks like this :
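The original config snippet was stripped by the blog formatting; a sketch of what is being described — a net.tcp endpoint with raised maxConnections/listenBacklog plus a mexTcpBinding endpoint (the service, contract and binding names are placeholders):

```xml
<service name="Services.Security.AuthorisationManager">
  <endpoint binding="netTcpBinding"
            bindingConfiguration="tcpConfig"
            contract="Services.Security.IAuthorisationManager" />
  <endpoint address="mex"
            binding="mexTcpBinding"
            contract="IMetadataExchange" />
</service>
<bindings>
  <netTcpBinding>
    <binding name="tcpConfig"
             maxConnections="100"
             listenBacklog="100" />
  </netTcpBinding>
</bindings>
```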

Host it in IIS 7 and the service does not start at all, throwing an error like this:

There is no compatible TransportManager found for URI 'net.tcp://ct-svr:8731/AuthorisationManager/Services.Security.AuthorisationManager.svc/mex'. This may be because that you have used an absolute address which points outside of the virtual application, or the binding settings of the endpoint do not match those that have been set by other services or endpoints. Note that all bindings for the same protocol should have same settings in the same application.

So, I removed the mex endpoint and it worked, but I wanted to keep the mex endpoint; after some googling and hard luck, the solution was to expose the metadata through http. Now the configuration file looks like this.
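This snippet was also lost in formatting; the working version described here drops the mexTcpBinding endpoint and exposes metadata over http instead, roughly like this (names are placeholders):

```xml
<service name="Services.Security.AuthorisationManager"
         behaviorConfiguration="metadataBehavior">
  <endpoint binding="netTcpBinding"
            bindingConfiguration="tcpConfig"
            contract="Services.Security.IAuthorisationManager" />
</service>
<behaviors>
  <serviceBehaviors>
    <behavior name="metadataBehavior">
      <serviceMetadata httpGetEnabled="true" />
    </behavior>
  </serviceBehaviors>
</behaviors>
```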

Well, this works fine, but I still want to know why the earlier piece of config did not work; if anyone has any idea, just put up a comment.

Monday, May 10, 2010

Response.Redirect vs PostBackUrl

I normally use Response.Redirect to navigate from page to page; someone told me the other day that it would be better to use the PostBackUrl of a control to redirect to a page than to use Response.Redirect.

So, I ran a little test of my own: I created 2 sample pages, where on a button click I do a Response.Redirect like this.

protected void Button1_Click(object sender, EventArgs e)
{
    Response.Redirect("Advance.aspx");
}

Next I ran Fiddler, this is the result I got on the button click, the response I get back from the server is not the content of the page I want but this...

HTTP/1.1 302 Found
Server: ASP.NET Development Server/
Date: Mon, 10 May 2010 16:17:57 GMT
X-AspNet-Version: 4.0.30319
Location: /Advance.aspx
Cache-Control: private
Content-Type: text/html; charset=utf-8
Content-Length: 130
Connection: Close

Object moved to here.

Here, "here" points to the page you want to navigate to (have to live with this HTML formatting :))

The server issues a 302 and the browser issues another request to the actual page I want, so we got 2 round trips to the server.

Next, I set the PostBackUrl property of the button, and now if I view the page source, I see a javascript-based postback; if I take a look at Fiddler on the button click, I see a POST request going to the server, and the server returns the page I want in just one round trip.

The other advantage I get is that I can access the state of the previous page through the PreviousPage property; all I need is to cast the return value of this property to the type of the previous page. I can access the control state from this; however, you won't be able to access the ViewState of the previous page through this property, but you can expose the needed view state key/value through a public property of the previous page. You can also set the %@PreviousPageType directive like this
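The markup was lost in the page formatting; the two pieces being described look roughly like this (the page paths are placeholders):

```aspx
<%-- On the source page: a button that posts back to the target page --%>
<asp:Button ID="Button1" runat="server" Text="Go" PostBackUrl="~/Advance.aspx" />

<%-- On the target page: strongly type the PreviousPage property --%>
<%@ PreviousPageType VirtualPath="~/Default.aspx" %>
```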

Now you would be able to access the previous page through the PreviousPage property without having to cast it to the type of the previous page.

So, in summary, use PostBackUrl whenever you can over Response.Redirect.
Note that you will only be able to use this property with controls that implement the IButtonControl interface.

Sunday, May 2, 2010

Passing in parameters into OPENQUERY

I was struggling for some time now, trying to pass parameters into OPENQUERY; OPENQUERY does not support passing in parameters, all it does is take a string literal as the 2nd argument, like this:
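The example was lost in the page formatting; the call shape, reusing the linked server and table from the snippets below, is:

```sql
SELECT * FROM OPENQUERY(Test_link, 'SELECT * FROM dbo.TEMP WHERE ID > 10')
```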


What's worse, it does not support passing a varchar variable as the 2nd argument either.

So, if you want to pass in parameters, then one of your options is creating a dynamic query and executing it like this:

declare @var int = 10
declare @query varchar(max) =
'select * from openquery(Test_link,' + ''''
+ 'SELECT * FROM dbo.TEMP where ID > ' + CAST(@var AS VARCHAR(MAX))
+ '''' + ')'

exec (@query)


If you want to use the result returned by OPENQUERY, for example to join it to another source table, you would have to create a table variable, populate it with your result and start joining; an illustration would be like this (hypothetical example):

declare @var int = 10
declare @query varchar(max) =
'select * from openquery(Test_link,' + ''''
+ 'SELECT * FROM dbo.TEMP where ID > ' + CAST(@var AS VARCHAR(MAX))
+ '''' + ')'

declare @table table (id int, name varchar(max))

insert into @table
exec (@query)

select * from @table r
inner join temp t
on r.id = t.ID
where r.name = 'M';

But I would like a better way to do this, for 3 reasons:
1) I am executing dynamic queries, so the performance is not what I want.
2) Results returned by OPENQUERY can be joined to another source, but with the method shown above I have to opt for a table variable.
3) Inefficient string concatenations.

Any better solution to this?

Friday, April 30, 2010

Generate from using - creating stubs first in VS 2010

A feature that everyone loved in VS 2005 is to just type in your class and press Alt+Shift+F10, and Visual Studio shows you the options to add the using statements.

In VS 2010 this has been extended to create class stubs for you while you code.

For example, if I want to use a class called Person in my code and the Person class does not exist yet in the code base, all I have to do is type the following:

Person p = new Person();

Press Alt+Shift+F10 and it will show an option asking if it should create the stub Person; if you say yes, VS will create the Person class for you in your current project.

Now, when you next need to put a property into the Person class, all you have to do is:

p.Name = "Jack";

Again press Alt+Shift+F10 and tell VS to create the property; now in the Person class you will see the property Name.

You can do the same to generate method stubs as well.

Thursday, April 29, 2010

Web Protection Library and XSS attacks

I started using the AntiXSS library that comes with the Microsoft Web Protection Library (WPL), this library gives you several APIs that can be used to protect your application from Cross Site Scripting (XSS).

So, what is an XSS attack? This is where a malicious user injects scripts into the web page a user is viewing, and the script runs as part of the response from the compromised web site.

As a simple example, let's say a malicious user enters a comment on a blog post, and with the comment he also appends a script that loops 1000 times displaying an alert box.
Now every time a user views the comment section of the particular post, the script would execute.

A more extreme scenario would be where a user enters a script in the comment box that passes the session id or a cookie of the logged-in user to the hacker's web site.

In both these cases, the attack could have been prevented if the input of the comment text box had been scanned for any malicious code before saving it to the data source.

WPL supports APIs that will encode inputs and sanitize malicious code.

To demonstrate how easy it is to use this library: all you have to do is add a reference to the AntiXSS library, and in the Microsoft.Security.Application namespace
you will find a class called Encoder; this class has several static methods.

One of these methods is Encoder.HtmlEncode

E.g. Encoder.HtmlEncode(input)
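In an ASP.NET page this might look like the following sketch (litComment is a hypothetical Literal control, not from the original post):

```csharp
using Microsoft.Security.Application;

//Encode an untrusted value before writing it into the page;
//litComment is a hypothetical Literal control on the page
string comment = Request.QueryString["comment"];
litComment.Text = Encoder.HtmlEncode(comment);
```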

WPL is still in CTP and can be downloaded from the MS site.

As a rule, any input that is displayed to the user from an external source, for example a shared database, has to go through the encode methods in the Encoder class, and
any HTML that is displayed to the user composed from user inputs like query strings or input fields needs to be encoded before being shown; these values also have to be encoded if they are persisted to a data store.

ASP.NET supports input validation, which is enabled by default, and will not accept malicious characters (like script tags) in inputs; if they are present, an exception is thrown.

WPL also comes with the Security Runtime Engine (SRE); this is again a component that can be used to implicitly protect your application from XSS and SQL injection attacks.

To configure SRE, you can use the SRE configuration generator Windows application.
You can point it to an existing web.config file and add encoded types, that is, the types of controls that need to be encoded by SRE.

And that's it, you are done; your application is protected from SQL injection and XSS.

Monday, April 26, 2010

Linked Servers and OPENQUERY

So how do you access data from one database to another? (specifically talking about SQL SERVER)

One possible way is to create a Linked Server to the database you want to connect to.
The easiest way to do this is to go to Management Studio, right click on "Server Objects" and click "New Linked Server"

Now on the "New Linked Server" dialog you have 2 ways you can create a Linked Server.
1) Create a Linked Server pointing directly to a SQL SERVER instance
2) Create a Linked Server pointing to a set of supported data sources; this includes OLE DB data sources and also SQL SERVER.

I prefer choosing the 2nd option to create a linked server to SQL SERVER; although I can do this easily by using the first option, the 2nd option gives me the flexibility to easily change my data source without affecting any objects that use it.

For example, if you use option one, you can access an object (e.g. an SP) in the other database by using a 4-part name, as follows:

EXEC LinkedServerName.DataBaseName.SchemaName.ObjectName

So in this case if you change the database name, you need to go and change all your objects that use this Linked Server.

However, if we go for the 2nd option you don't need to, if you use OPENQUERY.

OPENQUERY is a way to execute distributed queries through a linked server; for example, you can write a query like this.

SELECT * FROM OPENQUERY(LinkedServerName, 'SELECT * FROM countries')

The catch here is that you need to pass in the query as a string to OPENQUERY, which means your execution would be dynamic SQL.
There is an advantage here as well: the query executes on the other database, so you can ease up the processing on your database. For this to help, you need to make sure that the query passed into OPENQUERY returns a limited, filtered set of data, that is, the query has a restrictive WHERE clause.

The problem with OPENQUERY is that you won't be able to create your query on the fly with your parameters, nor can you pass in a VARCHAR variable. The only way to build the query with your parameters is to create a dynamic query that encapsulates the OPENQUERY clause itself, execute it and populate a table variable.

Let's hope the next version of SQL SERVER will ease up developer effort on using OPENQUERY.

Saturday, April 24, 2010

Still no Workflow support in Visual Studio 2010

I was trying out Visual Studio 2010, but was disappointed that they have not included Workflow templates in the express edition (they did not in the 2008 express edition either).

However, you can still create workflows, as it is part of .NET 4.0, but with a lot of pain; that is, you have to manually code whatever the designer does for you in the other editions!

Well, this is where I start hating Microsoft :)

Friday, April 23, 2010

Workflows and MS WF

Yesterday, I completed my training on Windows Workflow foundation.
In my opinion, it is yet another new tech and provides you with a set of good features.
However, a question that was racing through my mind all the time was: why the hell do you need workflows? I mean, almost all programs that we write have activities and business logic within them, so what is the big use of a framework to model workflows?

These were some of the pointers from the trainer on deciding if you need to go for workflows.

1) Activities can be clearly identified with boundaries
2) Whether to use state machine workflows can be determined by whether the logic is push based rather than pull based
3) When rules need to be customized externally without rebuilding the system.
4) You have a huge number of human interactions (to determine if you need to use state machines)
5) Long running process that can be done asynchronously without user intervention

On the other hand, MS Workflow foundation has its own advantages.
1) A cool designer support.
2) Custom activities can be created that are compiled, but you can have your workflow in an external XML file, so it is configurable. Workflows are loaded into the runtime by de-serializing the XML and passing the XML reader into the CreateWorkflow method of the runtime.
3) Workflows can be started and stopped at will, and the program state can be persisted (the property values and all), there is a default persistence service you can use with SQL server.
4) Transaction handling is cool; putting up a transaction scope reverts the whole state of the workflow (even resetting member variables that were changed in the process).
5) For the stuff that the workflow cannot revert in a transaction rollback, you can write your own compensation logic.
6) Fault handling can be done for each activity of the work flow.
7) You can host the runtime in an asmx web service or a WCF service, and how you do this is also simple.

Well, these are some of the things I remember; and by the way, the state machine workflow gets deprecated in .NET 4.0.