# Sunday, August 29, 2010
Sunday, August 29, 2010 5:10:07 PM (Central Daylight Time, UTC-05:00) ( ASP.NET | Best Practice | Development )

Sometimes a customer will ask me to look at their site and make some recommendations on what can be improved.  One of the many things I’ll look at is their event logs.  One of the nice things about ASP.NET is that when you encounter an unhandled exception, an event is written to your Application event log.  The message of the event log entry will usually include lots of good stuff like the application, path, machine name, exception type, stack trace, etc.  Loads of great stuff, and all for free.  For customers that don’t have a centralized exception logging strategy, this can be a gold mine.

The way it usually works is that they will provide me an EVTX from their servers.  If you’re not aware, an EVTX file is just an archive of the events from the event log you specify.  By itself, looking at raw event logs from a server can be quite daunting.  There are usually thousands of entries, and filtering down to what you actually care about can be exhausting.  Even if you do find a series of ASP.NET event log messages, the problem has always been: how do you take all of this great information that’s just dumped into the Message property of the event log entry and put it into a format you can easily report on and generate statistics from?  Fortunately, I have a non-painful solution. 

I’ve broken this down into a relatively simple 4-step process:

  • Get the EVTX
  • Generate a useful XML file
  • Parse into an object model
  • Analyze and report on the data

Let’s get to it.

Step 1:  Get the EVTX

This step is pretty short and sweet.  In the Event Log manager, select the “Application” group and then select the “Save All Events As…” option. 


That will produce an EVTX file with whatever name you specify.  Once you have the file, transfer it to your machine as you generally do not want to install too many tools in your production environment.

Step 2:  Generate a useful XML file

Now that we have the raw EVTX file, we can get just the data we care about using a great tool called LogParser.  Jeff Atwood did a nice little write-up on the tool, but simply put, it’s the Swiss Army knife of parsing tools.  It can do just about anything data-related you could wish for using a nice pseudo-SQL language.  We’ll use the tool to pull out just the data from the event log we want and dump it into an XML file.  The query that we can use for this task is daunting in its simplicity:

SELECT Message INTO MyData.xml
FROM '*.evtx'
WHERE EventID=1309

The only other thing we need to tell LogParser is the format the data is coming in and the format to put it into.  This makes our single command the following:

C:\>logparser -i:EVT -o:XML
"SELECT Message INTO MyData.xml FROM '*.evtx' WHERE EventID=1309"

This will produce a nice XML file that looks something like the following:

<?xml version="1.0" encoding="ISO-10646-UCS-2" standalone="yes" ?>
<ROOT DATE_CREATED="2010-08-29 06:04:20" CREATED_BY="Microsoft Log Parser V2.2">
<Message>Event code: 3005 Event message: An unhandled exception has occurred...

One thing that you may notice is that all of the nicely formatted data from our original event log message is munged together into one unending string.  This will actually work in our favor but more on that in the next step.

Step 3:  Parse into an object model

So, now that we have an XML file with all of our event details, let’s do some parsing.  Since all of our data is in one string, the simplest method is to apply a regular expression with named groups to grab the data we care about. 

In a future post, I’ll talk about a much faster way of getting this type of data without a RegEx expression.  After all, refactoring is a way of life for developers.

private const string LargeRegexString = @"Event code:(?<Eventcode>.+)" +
@"Event message:(?<Eventmessage>.+)" +
@"Event time:(?<Eventtime>.+)" +
@"Event time \(UTC\):(?<EventtimeUTC>.+)" +
@"Event ID:(?<EventID>.+)" +
@"Event sequence:(?<Eventsequence>.+)" +
@"Event occurrence:(?<Eventoccurrence>.+)" +
@"Event detail code:(?<Eventdetailcode>.+)" +
@"Application information:(?<Applicationinformation>.+)" +
@"Application domain:(?<Applicationdomain>.+)" +
@"Trust level:(?<Trustlevel>.+)" +
@"Full Application Virtual Path:(?<FullApplicationVirtualPath>.+)" +
@"Application Path:(?<ApplicationPath>.+)" +
@"Machine name:(?<Machinename>.+)" +
@"Process information:(?<Processinformation>.+)" +
@"Process ID:(?<ProcessID>.+)" +
@"Process name:(?<Processname>.+)" +
@"Account name:(?<Accountname>.+)" +
@"Exception information:(?<Exceptioninformation>.+)" +
@"Exception type:(?<Exceptiontype>.+)" +
@"Exception message:(?<Exceptionmessage>.+)" +
@"Request information:(?<Requestinformation>.+)" +
@"Request URL:(?<RequestURL>.+)" +
@"Request path:(?<Requestpath>.+)" +
@"User host address:(?<Userhostaddress>.+)" +
@"User:(?<User>.+)" +
@"Is authenticated:(?<Isauthenticated>.+)" +
@"Authentication Type:(?<AuthenticationType>.+)" +
@"Thread account name:(?<Threadaccountname>.+)" +
@"Thread information:(?<Threadinformation>.+)" +
@"Thread ID:(?<ThreadID>.+)" +
@"Thread account name:(?<Threadaccountname>.+)" +
@"Is impersonating:(?<Isimpersonating>.+)" +
@"Stack trace:(?<Stacktrace>.+)" +
@"Custom event details:(?<Customeventdetails>.+)";
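The named-group idea translates directly to any regex engine, so here is a cut-down Python sketch you can use to sanity-check the approach outside of .NET.  The sample message text and the reduced set of groups are invented for illustration, and lazy quantifiers are used instead of the greedy ones above:

```python
import re

# Cut-down version of the pattern: each label captures everything up to the
# next label, which works because the event message is munged onto one line.
PATTERN = re.compile(
    r"Event code:(?P<Eventcode>.+?)"
    r"Event message:(?P<Eventmessage>.+?)"
    r"Exception type:(?P<Exceptiontype>.+?)"
    r"Exception message:(?P<Exceptionmessage>.+)"
)

# Invented sample of a munged ASP.NET health-monitoring message.
sample = ("Event code: 3005 "
          "Event message: An unhandled exception has occurred. "
          "Exception type: WebException "
          "Exception message: The remote server returned an error.")

m = PATTERN.search(sample)
fields = {k: v.strip() for k, v in m.groupdict().items()}
print(fields["Exceptiontype"])  # WebException
```

The full C# pattern is the same shape, just with one group per field of the event message.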

Now that we have our RegEx, we’ll just write the code to match it against a string and populate our class.   While I’ve included the entire regex above, I’ve only included a partial implementation of the class population below.

public class EventLogMessage
{
    private static Regex s_regex = new Regex(LargeRegexString, RegexOptions.Compiled);

    public static EventLogMessage Load(string rawMessageText)
    {
        Match myMatch = s_regex.Match(rawMessageText);
        EventLogMessage message = new EventLogMessage();
        message.Eventcode = myMatch.Groups["Eventcode"].Value;
        message.Eventmessage = myMatch.Groups["Eventmessage"].Value;
        message.Eventtime = myMatch.Groups["Eventtime"].Value;
        message.EventtimeUTC = myMatch.Groups["EventtimeUTC"].Value;
        message.EventID = myMatch.Groups["EventID"].Value;
        message.Eventsequence = myMatch.Groups["Eventsequence"].Value;
        message.Eventoccurrence = myMatch.Groups["Eventoccurrence"].Value;
        return message;
    }

    public string Eventcode { get; set; }
    public string Eventmessage { get; set; }
    public string Eventtime { get; set; }
    public string EventtimeUTC { get; set; }
    public string EventID { get; set; }
    public string Eventsequence { get; set; }
    public string Eventoccurrence { get; set; }
}

The last step is just to read in the XML file and instantiate these objects.

XDocument document = XDocument.Load(@"<path to data>\MyData.xml");

var messages = from message in document.Descendants("Message")
select EventLogMessage.Load(message.Value);
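For comparison, the same descendants query can be sketched with Python’s ElementTree.  The XML here is a shortened, invented stand-in for the LogParser output:

```python
import xml.etree.ElementTree as ET

# Shortened stand-in for the LogParser XML: one <Message> per matching event.
xml_text = """<?xml version="1.0"?>
<ROOT CREATED_BY="Microsoft Log Parser V2.2">
  <ROW><Message>Event code: 3005 ...</Message></ROW>
  <ROW><Message>Event code: 3005 ...</Message></ROW>
</ROOT>"""

root = ET.fromstring(xml_text)
# iter() walks all descendants, so nesting depth does not matter.
messages = [el.text for el in root.iter("Message")]
print(len(messages))  # 2
```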

Now that we have our objects and everything is parsed just right, we can finally get some statistics and make sense of the data.

Step 4:  Analyze and report on the data

This last step is really the whole point of this exercise.  Fortunately, now that all of the data is in an easily queryable format, the actual aggregates and statistics are trivial with our old friend LINQ.  Everyone’s needs are going to be different, but I’ll provide a few queries that might be useful.

Query 1:  Exception Type Summary

For example, let’s say you wanted to output a breakdown of the various Exception Types in your log file.  The query you would use for that would be something like:

var results = from log in messages
              group log by log.Exceptiontype into l
              orderby l.Count() descending, l.Key
              select new
              {
                  ExceptionType = l.Key,
                  ExceptionCount = l.Count()
              };

foreach (var result in results)
{
    Console.WriteLine("{0} : {1} time(s)",
        result.ExceptionType,
        result.ExceptionCount);
}

This would then output something like:

WebException : 15 time(s)
InvalidOperationException : 7 time(s)
NotImplementedException : 2 time(s)
InvalidCastException : 1 time(s)
MissingMethodException : 1 time(s)
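Language aside, this is a plain group-by-count.  A minimal Python sketch of the same aggregation, with invented exception names as sample data:

```python
from collections import Counter

# Invented sample; in the real pipeline these come from the parsed messages.
exception_types = (["WebException"] * 3 +
                   ["InvalidOperationException"] * 2 +
                   ["NotImplementedException"])

# most_common() sorts by descending count, like the orderby in the LINQ query.
for exc_type, count in Counter(exception_types).most_common():
    print(f"{exc_type} : {count} time(s)")
```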

Query 2:  Exception Type and Request URL Summary

Let’s say that you wanted to go deeper and get a breakdown of which URLs generated the most exceptions.  You can just expand the foreach loop in the above snippet to do the following:

foreach (var result in results)
{
    Console.WriteLine("{0} : {1} time(s)",
        result.ExceptionType,
        result.ExceptionCount);

    var requestUrls = from urls in messages
                      where urls.Exceptiontype == result.ExceptionType
                      group urls by urls.RequestURL.ToLower() into url
                      orderby url.Count() descending, url.Key
                      select new
                      {
                          RequestUrl = url.Key,
                          Count = url.Count()
                      };

    foreach (var url in requestUrls)
    {
        Console.WriteLine("\t{0} : {1} times ",
            url.RequestUrl,
            url.Count);
    }
}

This then would produce output like this:

WebException  : 15 time(s)
http://localhost/menusample/default.aspx : 11 times
http://localhost:63188/menusample/default.aspx : 4 times
InvalidOperationException : 7 time(s)
http://localhost:63188/menusample/default.aspx : 6 times
http://localhost/menusample/default.aspx : 1 times
NotImplementedException : 2 time(s)
http://localhost/samplewebsiteerror/default.aspx : 2 times
InvalidCastException : 1 time(s)
http://localhost:63188/menusample/default.aspx : 1 times
MissingMethodException : 1 time(s)
http://localhost:63188/menusample/default.aspx : 1 times

Query 3:  Exception Type, Request URL and Method Name Summary

You can even go deeper, if you so desire, to find out which of your methods threw the most exceptions.  For this to work, we need to make a slight change to our EventLogMessage class to parse the Stack Trace data into a class.  First, we’ll start with our simple little StackTraceFrame class:

public class StackTraceFrame
{
    public string Method { get; set; }
}
Second, add a new property to our EventLogMessage class to hold a List<StackTraceFrame>:

public List<StackTraceFrame> StackTraceFrames { get; set; }

Lastly, add a method (and its caller) to parse out the stack frames and assign the resulting List to the StackTraceFrames property mentioned above:

public EventLogMessage(string rawMessageText)
{
    Match myMatch = s_regex.Match(rawMessageText);
    Stacktrace = myMatch.Groups["Stacktrace"].Value;
    StackTraceFrames = ParseStackTrace(Stacktrace);
}

private List<StackTraceFrame> ParseStackTrace(string stackTrace)
{
    List<StackTraceFrame> frames = new List<StackTraceFrame>();
    string[] stackTraceSplit = stackTrace.Split(new string[] { " at " },
        StringSplitOptions.None);
    foreach (string st in stackTraceSplit)
    {
        if (!string.IsNullOrEmpty(st))
        {
            frames.Add(new StackTraceFrame() { Method = st });
        }
    }
    return frames;
}

Please Note:  You could enhance the ParseStackTrace(…) method to parse out the source files, line numbers, etc. I’ll leave this as an exercise for you, dear reader.
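The split-on-" at " trick translates directly to other languages.  A minimal Python sketch, using an invented stack trace string:

```python
# Invented sample of a stack trace as it appears in the munged message.
stack = ("   at System.Net.HttpWebRequest.GetResponse()"
         "   at _Default.Page_Load(Object sender, EventArgs e)")

# Split on " at " exactly as the C# version does, dropping empty fragments.
frames = [f.strip() for f in stack.split(" at ") if f.strip()]
print(frames[0])  # System.Net.HttpWebRequest.GetResponse()
```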

Now that we have the infrastructure in place, the query is just as simple.  We’ll just nest this additional query inside of our URL query like so:

foreach (var url in requestUrls)
{
    Console.WriteLine("\t{0} : {1} times ",
        url.RequestUrl,
        url.Count);

    var methods = from method in messages
                  where string.Equals(method.RequestURL,
                      url.RequestUrl, StringComparison.OrdinalIgnoreCase)
                  group method by method.StackTraceFrames[0].Method into mt
                  orderby mt.Count() descending, mt.Key
                  select new
                  {
                      MethodName = mt.Key,
                      Count = mt.Count()
                  };

    foreach (var method in methods)
    {
        Console.WriteLine("\t\t{0} : {1} times ",
            method.MethodName,
            method.Count);
    }
}

This would then produce output like the following:

WebException  : 15 time(s)
http://localhost/menusample/default.aspx : 11 times
System.Net.HttpWebRequest.GetResponse() : 11 times
http://localhost:63188/menusample/default.aspx : 4 times
System.Net.HttpWebRequest.GetResponse() : 4 times
InvalidOperationException : 7 time(s)
http://localhost:63188/menusample/default.aspx : 6 times
System.Web.UI.WebControls.Menu... : 6 times
http://localhost/menusample/default.aspx : 1 times
System.Web.UI.WebControls.Menu... : 1 times

One last thing you may notice is that in the example above, the first frame for each of those exceptions is somewhere in the bowels of the .NET BCL.  You may want to filter this even further so that only YOUR method is returned.  This can be accomplished very easily with the method below.  It simply loops through the StackTraceFrame list and returns the first method it encounters that does not start with “System.” or “Microsoft.”.

private static string GetMyMethod(List<StackTraceFrame> frames)
{
    foreach (StackTraceFrame frame in frames)
    {
        if (!frame.Method.StartsWith("System.") &&
            !frame.Method.StartsWith("Microsoft."))
        {
            return frame.Method;
        }
    }

    return "No User Code detected.";
}
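For what it’s worth, the same first-user-frame filter is a one-liner in most languages.  A quick Python sketch, with invented frame data:

```python
def first_user_frame(frames):
    # Return the first frame that is not framework code.
    for frame in frames:
        if not frame.startswith(("System.", "Microsoft.")):
            return frame
    return "No User Code detected."

# Invented sample frames.
frames = ["System.Net.HttpWebRequest.GetResponse()",
          "_Default.Page_Load(Object sender, EventArgs e)"]
print(first_user_frame(frames))  # _Default.Page_Load(Object sender, EventArgs e)
```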

Then, you can just call that method from the new query we wrote above:

var methods = from method in messages
where ...
group method by
GetMyMethod(method.StackTraceFrames) into mt

Finally, with this new snippet in place, we’ll get output like this:

WebException  : 15 time(s)
http://localhost/menusample/default.aspx : 11 times
_Default.Page_Load(Object sender, EventArgs e)...: 8 times
No User Code detected. : 3 times

http://localhost:63188/menusample/default.aspx : 4 times
_Default.Page_Load(Object sender, EventArgs e)... : 1 times
No User Code detected. : 1 times
WebControls.CustomXmlHierarchicalDataSourceView.Select()... : 2 times

As you can see, the sky’s the limit.


# Sunday, August 22, 2010
Sunday, August 22, 2010 11:14:37 AM (Central Daylight Time, UTC-05:00) ( PowerShell | Scripting )

PowerShell is one of those things that falls into my “other duties as assigned” repertoire.  It’s something that I’ve used for years to get things done, but it’s not often I encounter a dev at a customer who has worked with it much.  In my honest opinion, I think adoption would increase if the PowerShell editor had IntelliSense or if it were just another project type in Visual Studio.  That’s purely my opinion and is not a reflection on my employer in any way. 

In any case, when I get pinged to help someone with a PowerShell task, I’ll usually jump on it.  A customer at a large company came to me with a problem: he wanted to restore about 500 databases from their production environment to a development/staging environment.  He found a script that should theoretically get the job done, but he was running into a few problems, mostly related to different drive mappings and different logical database names.  Of course, we had to use a trick to get to that point.

The Problem

He originally was receiving the following error:

“Exception calling "SqlRestore" with "1" argument(s): "Restore failed for Server '<Server2>'. "

At :line:97 char:20

+       $restore.SqlRestore <<<< ($instance)

The line the error above references is the line where the restore action is executed, and the character is the 'e' in $restore.SqlRestore.”

While it’s not a very informative message, we can get more information by running the command:

$error[0] | Format-List -Force

This provides a lot more information on what the cause of the error is.   For example, when I force an error on the SqlRestore method, I might get the above error message by default but once I execute the above command, I’ll see:

“Exception             : System.Management.Automation.MethodInvocationException: Exception calling "SqlRestore" with "1" argument(s): "Restore failed for Server 'GREGVAR1\SQLEXPRESS2'. " ---> Microsoft.SqlServer.Management.Smo.FailedOperationException: Restore failed for Server 'GREGVAR1\SQLEXPRESS2'.  ---> Microsoft.SqlServer.Management.Common.ConnectionFailureException: Failed to connect to server GREGVAR1\SQLEXPRESS2. ---> System.Data.SqlClient.SqlException: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)…”

So, after he ran the magic command, the errors he was actually experiencing were the following:

  • The script expected the database’s logical name to be the file name of the database.
  • Some of the databases had a full text catalog associated with them, and the script was not handling the relocation of the full text catalog.

Many thanks to Michiel Wories for initially introducing me to that really cool command a year or so ago.

The Research

So, once we knew the problem, we needed a method to get additional information from the backup file.  Enter the Restore.ReadFileList(…) method.  You can use this method like the following:

$server = New-Object Microsoft.SqlServer.Management.Smo.Server
$backupDevice = New-Object Microsoft.SqlServer.Management.Smo.BackupDeviceItem($restorefile, "File")
$smoRestore = New-Object Microsoft.SqlServer.Management.Smo.Restore

# Restore settings
$smoRestore.NoRecovery = $false
$smoRestore.ReplaceDatabase = $true
$smoRestore.Action = "Database"
$smoRestore.PercentCompleteNotification = 10
$smoRestore.Devices.Add($backupDevice)

# Get the database logical file names
$sourceLogicalNameDT = $smoRestore.ReadFileList($server)

The ReadFileList method gives us a DataTable which contains tons of additional information about the contents of the file we are attempting to restore.  Once we have that object, we can clearly see that it contains all of the information we could possibly need:

LogicalName          : foodb
PhysicalName         : T:\…\DB\foo.mdf
Type                 : D
FileGroupName        : PRIMARY
Size                 : 104857600
MaxSize              : 35184372080640
FileId               : 1
CreateLSN            : 0
DropLSN              : 0
UniqueId             : 4992c6f2-2282-4391-851e-d1177ab03920
ReadOnlyLSN          : 0
ReadWriteLSN         : 0
BackupSizeInBytes    : 30081024
SourceBlockSize      : 512
FileGroupId          : 1
LogGroupGUID         :
DifferentialBaseLSN  : 1297000000068200037
DifferentialBaseGUID : 69e8b951-7db1-4a1b-b0fc-e2cb012b3bcf
IsReadOnly           : False
IsPresent            : True

LogicalName          : foodb_log
PhysicalName         : R:\…\Logs\foo_log.LDF
Type                 : L

LogicalName          : sysft_OtherTables
PhysicalName         : T:\Microsoft SQL Server\MSSQL.1\MSSQL\FTData\OtherTables004e
Type                 : F

Now that we have all of this great information, we need to iterate through the rows and put the values we care about into some local variables.  For that, we can use the foreach construct:

$FileType = ""
foreach ($Row in $sourceLogicalNameDT)
{
    # Put the file type into a local variable.
    # This is the variable that we use to find out which file
    # we are working with.
    $FileType = $Row["Type"].ToUpper()

    # If Type = "D", then we are handling the database file name.
    if ($FileType.Equals("D")) {
        $sourceDBLogicalName = $Row["LogicalName"]
    }
    # If Type = "L", then we are handling the log file name.
    elseif ($FileType.Equals("L")) {
        $sourceLogLogicalName = $Row["LogicalName"]
    }
    # If Type = "F", then we are handling the full text catalog file name.
    elseif ($FileType.Equals("F")) {
        $sourceFTSLogicalName = $Row["LogicalName"]
        # We may also want to grab the full path of the full text catalog.
        $sourceFTSPhysicalName = $Row["PhysicalName"]
    }
}
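Stripped of the SMO objects, the loop is just a dispatch on the Type column.  A minimal Python sketch with invented rows shaped like the ReadFileList output:

```python
# Invented rows mimicking the DataTable returned by ReadFileList.
rows = [
    {"Type": "D", "LogicalName": "foodb", "PhysicalName": r"T:\DB\foo.mdf"},
    {"Type": "L", "LogicalName": "foodb_log", "PhysicalName": r"R:\Logs\foo_log.LDF"},
    {"Type": "F", "LogicalName": "sysft_OtherTables", "PhysicalName": r"T:\FTData\OtherTables"},
]

names = {}
for row in rows:
    kind = row["Type"].upper()
    if kind == "D":          # database data file
        names["db"] = row["LogicalName"]
    elif kind == "L":        # log file
        names["log"] = row["LogicalName"]
    elif kind == "F":        # full text catalog; keep the path too
        names["fts"] = row["LogicalName"]
        names["fts_path"] = row["PhysicalName"]

print(names["db"])  # foodb
```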

The Solution

Now, that we have all of the data we need, we can populate the Relocate File objects so that the SqlRestore object will know how to handle these additional files:

# Specify new data and log files (mdf and ldf)
$smoRestoreDBFile = New-Object("Microsoft.SqlServer.Management.Smo.RelocateFile")
$smoRestoreLogFile = New-Object("Microsoft.SqlServer.Management.Smo.RelocateFile")

# The logical file names should be the logical file names stored in the backup media
$smoRestoreDBFile.LogicalFileName = $sourceDBLogicalName
$smoRestoreDBFile.PhysicalFileName = $mdfFilePath + "\" + $sourceDBLogicalName + ".mdf"
$smoRestoreLogFile.LogicalFileName = $sourceLogLogicalName
$smoRestoreLogFile.PhysicalFileName = $ldfFilePath + "\" + $sourceLogLogicalName + ".ldf"

# Check to see if $sourceFTSLogicalName is empty or not. If it is not empty,
# then we do have a full text catalog present and we add the appropriate
# entries to restore those files. If the variable is empty, we just continue
# with the restore.
if (![String]::IsNullOrEmpty($sourceFTSLogicalName)) {
    "We DO have a Full Text Catalog in our Backup"
    # Adding full text catalog restore parameters.
    $smoRestoreFTSFile = New-Object("Microsoft.SqlServer.Management.Smo.RelocateFile")
    $smoRestoreFTSFile.LogicalFileName = $sourceFTSLogicalName
    # Here we specify the new location by truncating the first 45 characters of
    # the path stored in the backup file. This could be done a better way.
    $smoRestoreFTSFile.PhysicalFileName = $ftsFilePath + "\" + $sourceFTSPhysicalName.Substring(45)
    $smoRestore.RelocateFiles.Add($smoRestoreFTSFile)
}

Lastly, we just need to execute the Restore command and clear out the variables for the next iteration of the loop.

# Restore the database
$smoRestore.SqlRestore($server)

# We now clear the variables before the next loop iteration starts
Remove-Variable sourceDBLogicalName
Remove-Variable sourceLogLogicalName
Remove-Variable smoRestoreDBFile
Remove-Variable smoRestoreLogFile

# If a full text catalog was present, we clear those variables too
if (![String]::IsNullOrEmpty($sourceFTSLogicalName)) {
    Remove-Variable sourceFTSLogicalName
    Remove-Variable smoRestoreFTSFile
}
And that’s all there is to it.  Keep in mind that these are the guts of the script and not the full contents.  If there’s interest, I can post the whole script.  Just drop a comment.


# Friday, August 20, 2010
Friday, August 20, 2010 11:36:27 PM (Central Daylight Time, UTC-05:00) ( .NET | Best Practice | C# | Code Reviews | Development | Performance )

This is a common topic and I thought I’d write up some thoughts I have on it.  In fact, I was just working with a customer on improving their code reviews and what they should be checking for, and the question arose: “Should performance be targeted during a code review?”  It’s an interesting question.  I’m a big fan of performance testing early and often rather than waiting until the end of a dev cycle, but code reviews, IMO, should focus on logic, maintainability and best practices.  I may be in the minority, and if you look around the web, you’ll see varying opinions on the topic.  For example, one of the PAG articles states:

“Code reviews should be a regular part of your development process. Performance and scalability code reviews focus on identifying coding techniques and design choices that could lead to performance and scalability issues. The review goal is to identify potential performance and scalability issues before the code is deployed. The cost and effort of fixing performance and scalability flaws at development time is far less than fixing them later in the product deployment cycle.

Avoid performance code reviews too early in the coding phase because this can restrict your design options. Also, bear in mind that performance decisions often involve tradeoffs. For example, it is easy to reduce maintainability and flexibility while striving to optimize code.”

As I mentioned above, I am a huge proponent of performance analysis and optimization many times throughout a typical product development cycle.  I can say with a fair amount of certainty that if you don’t build performance reviews into your project plan at regular intervals, you will hit some problem (or multiple problems) in production and have to refactor some code. 

Circling back to the original question, though, are code reviews the place for performance analysis?  Typically, I’d recommend using them to squash little bits of bad code but maintainability and code-cleanliness should be first and foremost in your minds.  That said, if you see a pattern that you know can be improved, by all means bring it up.  What’s an example of that type of situation? 

Let’s take a look at predicates, specifically their usage in the Find method of a List<T>.  If you’re not aware, the Find() method performs a linear search through the items until it finds the first match, then it returns.  This makes it an O(n) operation, where “n” is the number of items in the list.  Basically, this means that the more items you have in the list, the longer a Find() operation can potentially take.  So, if we slam about 10,000 elements into a list:

private static List<Data> LoadList()
{
    List<Data> myList = new List<Data>();
    for (int i = 0; i < 10000; i++)
    {
        myList.Add(new Data() { Id = "Id" + i.ToString(),
                                Value = "Value" + i.ToString() });
    }

    return myList;
}

Then, if someone wants to return the instance of the Data class that contains an Id of, say, “Id9999”, they might write the following code:

static Data Find1(List<Data> myList, string idToFind)
{
    Data data = myList.Find(s =>
        s.Id.ToLower() ==
        idToFind.ToLower());

    return data;
}

Now, keep in mind that the predicate is executed for each element in the List<T> until it finds the instance you care about.  With that in mind, we would probably want to refactor out the “idToFind.ToLower()” above the predicate since that value isn’t changing.  So, you might end-up with something like this:

static Data Find2(List<Data> myList, string idToFind)
{
    idToFind = idToFind.ToLower();

    Data data = myList.Find(s =>
        s.Id.ToLower() ==
        idToFind);

    return data;
}

Another route you may want to go is to use the string.Equals(…) method with a case-insensitive comparison.  That would look like:

static Data Find3(List<Data> myList, string idToFind)
{
    // The exact StringComparison isn't shown in the original snippet;
    // OrdinalIgnoreCase is the usual choice for identifiers like these.
    Data data = myList.Find(s =>
        string.Equals(s.Id, idToFind, StringComparison.OrdinalIgnoreCase));

    return data;
}
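The hoisting idea is language-neutral: anything recomputed inside a once-per-element predicate should move outside it.  A minimal Python sketch of the before and after, with data shapes invented for illustration:

```python
# A linear search over 10,000 dicts, mirroring List<T>.Find with a predicate.
items = [{"Id": "Id" + str(i), "Value": "Value" + str(i)} for i in range(10000)]

def find_slow(items, id_to_find):
    # Lowercases the needle once per element: ~10,000 redundant lower() calls.
    return next((d for d in items if d["Id"].lower() == id_to_find.lower()), None)

def find_fast(items, id_to_find):
    # Hoist the needle's lower() out of the predicate; each element is still
    # lowercased once, but the needle only once in total.
    needle = id_to_find.lower()
    return next((d for d in items if d["Id"].lower() == needle), None)

print(find_fast(items, "ID9999")["Value"])  # Value9999
```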


Fact is, the last method IS the fastest way to perform the operation.  I can say that without even needing to run it through a profiler.  But if you don’t believe me…  

[Profiler results table: Function Name vs. Inclusive Time]
That’s something I might put into the category of a micro-optimization AND just good coding practice.  But is this something that should be caught during a code review?  I’d say “yes”, because logically it all makes sense and none of the solutions would really hurt the maintainability or readability of the code.

So, I’d tag this as a good coding practice.  Other thoughts on the topic?

# Saturday, August 14, 2010
Saturday, August 14, 2010 12:45:39 PM (Central Daylight Time, UTC-05:00) ( ASP.NET | Development | Premier Field Engineer (PFE) )

In my job as a PFE for Microsoft, I read, review and fix a lot of code.  A lot of code.  It’s a large part of what I love about my job.  The code is generally written by large corporations or for public websites.  Every now and again I’ll get pinged on an issue and, after troubleshooting, it’s pretty clear that the core issue is with some community code.  When I say community code, in this instance, I don’t mean a CodeProject or CodePlex project.  In this case, I am referring to a control that Denis Bauer created and then made available to the community on his website – the “DynamicControlsPlaceholder” control.  This is a great little control that inherits from a PlaceHolder and allows you to create dynamic controls on the fly; it will then persist the controls you add across subsequent requests – like a postback.

The Problem

The customer was experiencing a problem that could only be reproduced in a web farm with sticky sessions turned off.  They found that when a request went from one server to another server in the farm, they would get a FileNotFoundException with the following details:

Type Of Exception:FileNotFoundException
Message:Error on page http://blahblahblah.aspx
Exception Information:System.IO.FileNotFoundException:
Could not load file or assembly 'App_Web_myusercontrol.ascx.cc671b29.ypmqvhaw,
Version=, Culture=neutral, PublicKeyToken=null' or one of its dependencies.
The system cannot find the file specified.
File name: 'App_Web_myusercontrol.ascx.cc671b29.ypmqvhaw,
Version=, Culture=neutral, PublicKeyToken=null'
at System.RuntimeTypeHandle.GetTypeByName(String name,
Boolean throwOnError,
Boolean ignoreCase,
Boolean reflectionOnly,
StackCrawlMark& stackMark)
at DynamicControlsPlaceholder.RestoreChildStructure(Pair persistInfo,
Control parent)
at DynamicControlsPlaceholder.LoadViewState(Object savedState)
at System.Web.UI.Control.LoadViewStateRecursive(Object savedState)
at System.Web.UI.Control.LoadViewStateRecursive(Object savedState)
at System.Web.UI.Page.LoadAllState()
at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint,
Boolean includeStagesAfterAsyncPoint)

So, we can glean a few things from the error details:

  • They are using the ASP.NET website model (the “app_web_….dll” assembly is the clue).
  • The error is occurring in the RestoreChildStructure method of the DynamicControlsPlaceholder control.

The Research

The way that ASP.NET websites work is that each component of your site can be compiled into a separate assembly, and the assembly name is randomly generated.  This also means that on two servers, the names of the assemblies can end up being different.  So, a reasonable assumption is that something is trying to load an assembly by its name.  If we look at the RestoreChildStructure method, we see the following:

Type ucType = Type.GetType(typeName[1], true, true);

try
{
    MethodInfo mi = typeof(Page).GetMethod("LoadControl",
        new Type[2] { typeof(Type), typeof(object[]) });
    control = (Control) mi.Invoke(this.Page, new object[2] { ucType, null });
}
catch (Exception e)
{
    throw new ArgumentException(String.Format("The type '{0}' …",
        ucType.ToString()), e);
}

The important thing to look at here is the Type.GetType(…) call.  Since the code for the control is in a separate assembly from everything else, the “typeName[1]” value MUST BE A FULLY QUALIFIED ASSEMBLY NAME.  From the exception details, we can see that it is attempting to load the type from the following string:

App_Web_myusercontrol.ascx.cc671b29.ypmqvhaw, Version=, Culture=neutral, PublicKeyToken=null

The “typeName[1]” variable is loaded from ViewState because that’s where the control persists its child structure.  So, for some reason the fully qualified assembly name is stored in ViewState.  If we look at the code that inserts the value into ViewState (in the PersistChildStructure(…) method), we see:

typeName = "UC:" + control.GetType().AssemblyQualifiedName;

So, here we see the AssemblyQualifiedName being stored in ViewState – which is then used to recreate the controls across postbacks via the code above.  As I mentioned, this won’t work for an ASP.NET website hosted in a web farm, because the assembly qualified name will probably be different from server to server.  We even have a KB article that discusses this issue somewhat.

The Fix

Fortunately, the fix is pretty simple. 

First, we need to store the path to the User Control instead of the AQN in ViewState.  To do this, you can comment out the “typeName = ….” line from directly above and replace it with:

UserControl uc = control as UserControl;
typeName = "UC:" + uc.AppRelativeVirtualPath;

So, now we store the path to the UserControl in ViewState.  Then, we need to fix the code that actually loads the control.  Replace the code from above in the RestoreChildStructure(…) method with this code:

string path = typeName[1];

try
{
    control = Page.LoadControl(path);
}
catch (Exception e)
{
    throw new ArgumentException(String.Format(
        "The type '{0}' cannot be recreated from ViewState",
        path), e);
}

That’s all there is to it.  Just load the user control from where it is being stored in the site and ASP.NET will take care of loading the appropriate assembly.


# Monday, August 9, 2010
Monday, August 9, 2010 1:09:32 AM (Central Daylight Time, UTC-05:00) ( Development | OpenXML )

I was working on an internal project a bit ago and one of the requirements was to implement a fancy Word document.  The idea was that all of the editing of the text/code samples/etc. would be done in the application and then the user could just export it to Word to put any finishing touches on and send off to the customer.  The final report needed to include section headers, page breaks, a table of contents, etc.  There are a number of ways we could have accomplished the task.  There’s the Word automation stuff that relies upon a COM based API, there’s the method of just creating an HTML document and loading that into Word and then finally there’s the Open XML API.  Now, someone had hacked up a version of this export functionality previously using the Word automation stuff but considering we’re often dealing with 1,000+ page documents – it turned out to be a little slow.  Also, there are some restrictions around using the automation libraries in a server context.  Lastly, since my OpenXML kung-fu is strong, I thought I would take the opportunity to implement a better, more flexible and much faster solution.  For those just starting out, Brian and Zeyad’s excellent blog on the topic is invaluable.

One of the requirements for the export operation was to have Word automagically refresh the table of contents (and other fields) the first time the document is opened.  This was something that took a bit of time to research but you really end up with 2 options:

w:updateFields Element

The “w:updateFields” element is a document-level element that is set in the document settings part and tells Word to update all of the fields in the document:

<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<w:updateFields w:val="true" />


If you’re wondering what the document settings part is – just rename a Word doc from “blah.docx” to “blah.docx.zip” and extract it to a folder on your computer.  In the new folder is a directory called “word”.  In that directory, you should see a file called “settings.xml”:

In that file are all of the document-level settings for your docx.  There’s some really great stuff in here.
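If you’d rather not rename and extract the package by hand each time, here’s a rough sketch (my own, not from the SDK docs) that dumps the settings part directly.  It uses System.IO.Packaging from WindowsBase.dll, and “blah.docx” is just a placeholder path:

```csharp
using System;
using System.IO;
using System.IO.Packaging;

class DumpSettingsPart
{
    static void Main()
    {
        // A .docx is just an OPC package, so open it like any other zip-based package.
        using (Package package = Package.Open("blah.docx", FileMode.Open, FileAccess.Read))
        {
            // The document settings part always lives at /word/settings.xml.
            PackagePart settingsPart =
                package.GetPart(new Uri("/word/settings.xml", UriKind.Relative));

            using (StreamReader reader = new StreamReader(settingsPart.GetStream()))
            {
                Console.WriteLine(reader.ReadToEnd());
            }
        }
    }
}
```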

If you’d like to use the OpenXML SDK to set that value (and you’d be crazy not to), here’s some sample code:

using (WordprocessingDocument document = WordprocessingDocument.Open(path, true))
{
    DocumentSettingsPart settingsPart = document.MainDocumentPart.DocumentSettingsPart;

    // Create object to update fields on open
    UpdateFieldsOnOpen updateFields = new UpdateFieldsOnOpen();
    updateFields.Val = new DocumentFormat.OpenXml.OnOffValue(true);

    // Insert object into settings part.
    settingsPart.Settings.PrependChild<UpdateFieldsOnOpen>(updateFields);
    settingsPart.Settings.Save();
}

w:dirty Attribute

This attribute is applied to the field you would like to have refreshed when the document is opened in Word.  It tells Word to only refresh this field the next time the document is opened.  For example, if you want to apply it to a field like your table of contents, just find the w:fldChar and add that attribute:

<w:fldChar w:fldCharType="begin" w:dirty="true"/>

For a simple field, like the document author, you’ll want to add it to the w:fldSimple element, like so:

<w:fldSimple w:instr="AUTHOR \* Upper \* MERGEFORMAT"
             w:dirty="true">
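If you’d rather set w:dirty with the OpenXML SDK instead of hand-editing the markup, a rough sketch might look like the following.  This is my own code, not from the original samples, and “report.docx” is a placeholder path:

```csharp
using System.Linq;
using DocumentFormat.OpenXml.Packaging;
using DocumentFormat.OpenXml.Wordprocessing;

class MarkFieldsDirty
{
    static void Main()
    {
        using (WordprocessingDocument document =
                   WordprocessingDocument.Open("report.docx", true))
        {
            // Flag the "begin" fldChar of every field so Word refreshes
            // each one the next time the document is opened.
            var begins = document.MainDocumentPart.Document
                .Descendants<FieldChar>()
                .Where(fc => fc.FieldCharType != null &&
                             fc.FieldCharType.Value == FieldCharValues.Begin);

            foreach (FieldChar fieldChar in begins)
            {
                fieldChar.Dirty = true;   // emits w:dirty="true"
            }

            document.MainDocumentPart.Document.Save();
        }
    }
}
```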

A caveat or two

Both of these methods will work just fine in Word 2010. 

In Word 2007, though, you need to clear out the contents of the field before the user opens the document.  For example, with a table of contents, Word will normally cache the contents of the TOC in the fldChar element.  This is good, normally, but here it causes a problem. 

For example, in a very simple test document, you would see the following cached data (i.e.:  Heading 1, Heading 2, etc.):

<w:p w:rsidR="00563999" w:rsidRDefault="00050B09">
  <w:r>
    <w:fldChar w:fldCharType="begin"/>
  </w:r>
  <w:r w:rsidR="00563999">
    <w:instrText xml:space="preserve"> TOC \* MERGEFORMAT </w:instrText>
  </w:r>
</w:p>
<w:p w:rsidR="00F77370" w:rsidRDefault="00F77370">
  <w:r>
    <w:t>Heading 1</w:t>
  </w:r>
</w:p>
<w:p w:rsidR="00F77370" w:rsidRDefault="00F77370">
  <w:r>
    <w:t>Heading 2</w:t>
  </w:r>
</w:p>
<w:p w:rsidR="00F77370" w:rsidRDefault="00F77370">
  <w:r>
    <w:fldChar w:fldCharType="end"/>
  </w:r>
</w:p>

After you clear out the schmutz, you end up with just the begin element, the definition of the TOC and the end element:

<w:p w:rsidR="00563999" w:rsidRDefault="00563999">
  <w:r>
    <w:fldChar w:fldCharType="begin"/>
  </w:r>
  <w:r>
    <w:instrText xml:space="preserve"> TOC \* MERGEFORMAT </w:instrText>
  </w:r>
</w:p>
<w:p w:rsidR="00B63C3C" w:rsidRDefault="00563999" w:rsidP="00B63C3C">
  <w:r>
    <w:fldChar w:fldCharType="end"/>
  </w:r>
</w:p>

Once you’ve made the updates, you can safely open up your file in Word 2007 and your fields will update when the document opens.

Big thanks to Zeyad for his tip on trimming out the schmutz.

Just to stress, this is improved in Word 2010 and you no longer need to clear out the cached data in your fields.


# Wednesday, August 4, 2010
Wednesday, August 4, 2010 1:23:21 AM (Central Daylight Time, UTC-05:00) ( Bing | Development | Google | Online Translator )

An online translator really isn’t all that new.  They’ve been around for at least 8 years or so.  I remember the days when I would use Babelfish for all of my fun translations.  It was a great way to get an immediate translation for something non-critical.  The problem in a lot of cases was grammatical correctness.  Translating word for word isn’t particularly difficult but context and grammar vary so much between languages that it was always challenging to translate entire sentences, paragraphs, passages, etc. from one language to another. 

Fortunately the technology has improved a lot over the years. Now, you can somewhat reliably translate entire web pages from one language to another.  I’m not saying it’s without fault – but I am saying that it’s gotten a lot better over time. These days there are a few big players in this space.  Notably Google Translate, Babelfish and the Bing Translator.  The interesting thing I’ve found is that only Bing actually has a supported API into its translation service.

There are 3 primary ways to interact with the service: HTTP, SOAP and AJAX.

They all seem to expose the same methods but it’s just the way you call them that differs.  For example, the sample code published for the HTTP method looks like:

string appId = "myAppId";
string text = "Translate this for me";
string from = "en";
string to = "fr";

string detectUri = "http://api.microsofttranslator.com/v2/Http.svc/Translate?appId=" + appId +
    "&text=" + text + "&from=" + from + "&to=" + to;
HttpWebRequest httpWebRequest = (HttpWebRequest)WebRequest.Create(detectUri);
WebResponse resp = httpWebRequest.GetResponse();
Stream strm = resp.GetResponseStream();
StreamReader reader = new System.IO.StreamReader(strm);
string translation = reader.ReadToEnd();

Response.Write("The translated text is: '" + translation + "'.");
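One caveat on that sample: it concatenates the raw text straight into the query string.  If your text contains spaces, ampersands or other reserved characters, you’d want to escape each value first.  A small sketch of my own (the URL and appId are the same placeholders as above):

```csharp
using System;

class EscapeExample
{
    static void Main()
    {
        string appId = "myAppId";
        string text = "Translate this & that, s'il vous plaît";
        string from = "en";
        string to = "fr";

        // Uri.EscapeDataString percent-encodes spaces, '&', accents, etc.
        string detectUri =
            "http://api.microsofttranslator.com/v2/Http.svc/Translate?appId=" +
            Uri.EscapeDataString(appId) +
            "&text=" + Uri.EscapeDataString(text) +
            "&from=" + Uri.EscapeDataString(from) +
            "&to=" + Uri.EscapeDataString(to);

        Console.WriteLine(detectUri);
    }
}
```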

Then, for the SOAP method:

string result;
TranslatorService.LanguageServiceClient client =
    new TranslatorService.LanguageServiceClient();
result = client.Translate("myAppId",
                          "Translate this text into German",
                          "en", "de");
Console.WriteLine(result);

And lastly for the AJAX method:

var languageFrom = "en";
var languageTo = "es";
var text = "translate this.";

function translate() {
    window.mycallback = function(response) { alert(response); }

    var s = document.createElement("script");
    s.src = "http://api.microsofttranslator.com/V2/Ajax.svc/Translate?oncomplete=mycallback&appId=myAppId&from="
                + languageFrom + "&to=" + languageTo + "&text=" + text;
    document.getElementsByTagName("head")[0].appendChild(s);
}

Fortunately, it all works as you’d expect – cleanly and simply.  The really nice thing about this (and the Google Translator) is that when faced with straight-up HTML like:

<p class="style">Hello World!</p>

They will both return the following:

<p class="style">¡Hola mundo!</p>

Both translators will keep the HTML tags intact and only translate the actual text.  This undoubtedly comes in handy if you do any large bulk translations.  For example, I’m working with another couple of guys here on an internal (one day external) tool that has a lot of data in XML files with markup.  Essentially we need to translate something like the following:

<Article Id="this does not get translated"
         Title="Title of the article"
         Category="Category for the article"
         >
  <Content><![CDATA[<P>description for the article<BR/>another line </p>]]></Content>
</Article>

The cool thing is that if I just deserialize the above into an object and send the value of the Content member to the service like:

string value = client.Translate(APPID_TOKEN,
                                content, "en", "es");

I get only the content of the HTML translated:

<p>Descripción del artículo<br>otra línea</p>

Pretty nice and easy.  One thing all of the translator services have trouble with, though, is translating the entire XML element from above in one shot.  Bing returns:

<article id="this does not get translated"
         title="Title of the article"
         category="Category for the article">
</article>
    <content><![CDATA[<P>Descripción del artículo<br>otra línea]]</content> >

And Google returns:

<= Id artículo "esto no se traduce"
Título = "Título del artículo"
Categoría = "Categoría para el artículo">
<Content> <! [CDATA [descripción <P> para el artículo <BR/> otra línea </ p >]]>
</ contenido>
</> Artículo

Oh well – I guess no one’s perfect and for now we’ll be forced to deserialize and translate one element at a time.


# Tuesday, August 3, 2010
Tuesday, August 3, 2010 1:50:24 AM (Central Daylight Time, UTC-05:00) ( Development )

Really interesting blog post by the IE team on some of the new DOM traversal features in IE9 (and other browsers).  Often, you need to traverse the DOM to find a particular element or series of elements.  In the past, you might have needed to write some recursive JavaScript functions to navigate through the HTML on your page and act upon the elements you care about.

Now, in IE9 (and other browsers that follow the W3C spec), you can use node iterators to get a flat list of the elements that you actually care about.  For example:

// This would work fine with createTreeWalker, as well
var iter = document.createNodeIterator(elm,
                                       NodeFilter.SHOW_ELEMENT,
                                       null,
                                       false);

var node = iter.nextNode();   // the first call returns the root (elm) itself, so skip it
while (node = iter.nextNode())
{
    node.style.display = "none";
}

The NodeFilter enum by default allows for the following values (from the w3c spec here - http://www.w3.org/TR/2000/REC-DOM-Level-2-Traversal-Range-20001113/traversal.html#Traversal-NodeFilter):

const unsigned long       SHOW_ALL                       = 0xFFFFFFFF;
const unsigned long       SHOW_ELEMENT                   = 0x00000001;
const unsigned long       SHOW_ATTRIBUTE                 = 0x00000002;
const unsigned long       SHOW_TEXT                      = 0x00000004;
const unsigned long       SHOW_CDATA_SECTION             = 0x00000008;
const unsigned long       SHOW_ENTITY_REFERENCE          = 0x00000010;
const unsigned long       SHOW_ENTITY                    = 0x00000020;
const unsigned long       SHOW_PROCESSING_INSTRUCTION    = 0x00000040;
const unsigned long       SHOW_COMMENT                   = 0x00000080;
const unsigned long       SHOW_DOCUMENT                  = 0x00000100;
const unsigned long       SHOW_DOCUMENT_TYPE             = 0x00000200;
const unsigned long       SHOW_DOCUMENT_FRAGMENT         = 0x00000400;
const unsigned long       SHOW_NOTATION                  = 0x00000800;

While this is great – you can also write your own NodeFilter callback function to filter the results even further:

var iter = document.createNodeIterator(elm,
                                       NodeFilter.SHOW_ALL,
                                       keywordFilter,
                                       false);

function keywordFilter(node)
{
    // With SHOW_ALL the filter also sees text and comment nodes,
    // so only element nodes with an alt attribute are considered.
    if (node.nodeType !== 1 || !node.hasAttribute('alt'))
        return NodeFilter.FILTER_REJECT;

    var altStr = node.getAttribute('alt').toLowerCase();

    if (altStr.indexOf("flight") != -1 || altStr.indexOf("space") != -1)
        return NodeFilter.FILTER_ACCEPT;
    else
        return NodeFilter.FILTER_REJECT;
}

Really nice and can help make your code simpler to read and faster too!


Tuesday, August 3, 2010 1:48:17 AM (Central Daylight Time, UTC-05:00) ( .NET | Azure | Development | Logging )

One thing you might encounter when you start your development on Windows Azure is that there is an insane number of options for logging.   You can view a quick primer here.  One of the things that I like about it is that you don’t necessarily need to learn a whole new API just to use it.  Instead, the logging facilities in Azure integrate really well with the existing Debug and Trace logging APIs in .NET.  This is a really nice feature and is done very well in Azure.  In fact, setting it up and configuring it is all of about five lines of code – four, really, with one line that wraps:

public override bool OnStart()
{
    DiagnosticMonitorConfiguration dmc =
            DiagnosticMonitor.GetDefaultInitialConfiguration();
    dmc.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
    dmc.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;
    DiagnosticMonitor.Start("DiagnosticsConnectionString", dmc);

    return base.OnStart();
}

One specific item to note is the ScheduledTransferPeriod property of the Logs property.  The minimum value you can set for that property is the equivalent of 1 minute.  The only downside to this method of logging is that if your Azure role crashes within that minute, whatever data you have written to your in-built logging will be lost.  This also means that if you are writing exceptions to your logging, that will be lost as well.  That can cause problems if you’d like to know both when your role crashed and why (the exception details).

Before we talk about a way to get around it, let’s review why the Role might crash.  In the Windows Azure world, there are two primary roles that you will use, a Worker and Web role.  Main characteristics and reasons it would crash are below:

| Role Type | Analogous to | Why would it crash/restart? |
| --- | --- | --- |
| Worker Role | Console application | Any unhandled exception. |
| Web Role | ASP.NET application hosted in IIS 7.0+ | Any unhandled exception thrown on a background thread; a StackOverflowException; an unhandled exception on the finalizer thread. |

As you can see, the majority of the reasons why an Azure role would recycle/crash/restart are essentially the same as with any other application – essentially an unhandled exception.  Therefore, to mitigate this issue, we can subscribe to the AppDomain’s UnhandledException Event.  This event is fired when your application experiences an exception that is not caught and will fire RIGHT BEFORE the application crashes.  You can subscribe to this event in the Role OnStart() method:

public override bool OnStart()
{
    AppDomain appDomain = AppDomain.CurrentDomain;
    appDomain.UnhandledException +=
        new UnhandledExceptionEventHandler(appDomain_UnhandledException);
    ...
}

You will now be notified right before your process crashes.  The last piece to this puzzle is logging the exception details.  Since you must log the details right when the crash happens, you can’t just use the normal Trace or Debug statements.  Instead, we will write to Azure storage directly.  Steve Marx has a good blog entry about printf in the cloud.  While it works, it requires the connection string to be placed right into the logging call.  He mentions that you don’t want that in a production application.  In our case, we will do things a little bit differently.  First, we must add the requisite variables and initialize the storage objects:

private static bool storageInitialized = false;
private static object gate = new Object();
private static CloudBlobClient blobStorage;
private static CloudQueueClient queueStorage;

private void InitializeStorage()
{
    if (storageInitialized)
    {
        return;
    }

    lock (gate)
    {
        if (storageInitialized)
        {
            return;
        }

        // read account configuration settings
        var storageAccount =
            CloudStorageAccount.FromConfigurationSetting("DataConnectionString");

        // create blob container for errors
        blobStorage =
            storageAccount.CreateCloudBlobClient();
        CloudBlobContainer container = blobStorage.
            GetContainerReference("webroleerrors");

        container.CreateIfNotExist();

        // configure container for public access
        var permissions = container.GetPermissions();
        permissions.PublicAccess =
            BlobContainerPublicAccessType.Container;
        container.SetPermissions(permissions);

        storageInitialized = true;
    }
}

This declares the requisite logging variables; when the InitializeStorage() method executes, they are set to the appropriate initialized values.  Lastly, we must call this new method and then write to the storage.  We put this code in our UnhandledException event handler:

void appDomain_UnhandledException(object sender, UnhandledExceptionEventArgs e)
{
    // Initialize the storage variables.
    InitializeStorage();

    // Get a reference to the error container.
    var container = blobStorage.
               GetContainerReference("webroleerrors");

    if (container != null)
    {
        // Retrieve the last exception.
        Exception ex = e.ExceptionObject as Exception;

        if (ex != null)
        {
            // Will create a new entry in the container
            // and upload the text representing the
            // exception.
            container.GetBlobReference(
               String.Format(
                  "<insert unique name for your application>-{0}-{1}",
                  RoleEnvironment.CurrentRoleInstance.Id,
                  DateTime.UtcNow.Ticks)
               ).UploadText(ex.ToString());
        }
    }
}

Now, when your Azure role is about to crash, you’ll find an entry in your blob storage with the details of the exception that was thrown.  For example, one of the leading reasons why an Azure Worker Role crashes is that it can’t find a dependency it needs.  In that case, you’ll find an entry in your storage with the following details:

System.IO.FileNotFoundException: Could not load file or assembly
    'GuestBook_Data, Version=, Culture=neutral,
    PublicKeyToken=f8a5fcb6c395f621' or one of its dependencies.
    The system cannot find the file specified.

File name: 'GuestBook_Data, Version=, Culture=neutral,
    PublicKeyToken=f8a5fcb6c395f621'

Some other common reasons why an Azure role might crash (especially when you first deploy it) can be found on Anton Staykov’s excellent blog.

Hope this helps.


# Thursday, July 29, 2010
Thursday, July 29, 2010 6:50:09 PM (Central Daylight Time, UTC-05:00) ( Best Practice | IIS | Performance )

I seem to get this question a lot and come across many customer environments where they have enabled web gardening thinking that it will automagically improve the performance for their site/application.


Most of the time, that is not the case.  The funny thing is that once I finally convince them that web gardening is not the way to go, they try to apply that same knowledge to other sites and applications in their environment.  When this happens, I’ll get an e-mail or phone call asking for some guidelines on when to enable web gardening.

We typically recommend using Web Gardening as a stop-gap (or workaround) for when a customer has a core issue that is limiting their website and web application scalability.

For example, if a customer has a memory issue that is causing OutOfMemoryExceptions in their main website – we may recommend web gardening to spread the load across multiple worker processes while we assist them in resolving the core memory issue.  Please note that this would also increase the memory and processor utilization on the server and in some cases might not be viable.

As a best practice, create Web gardens only for Web applications that meet the following criteria (taken from here):

  • The application runs multi-instantiated, so that a different instance of the application can be assigned to each worker process.
  • The Web application is not CPU-intensive. If the CPU is the bottleneck, then adding worker processes cannot help improve performance.
  • The application is subject to synchronous high latency. For example, if an application calls a back-end database and the response is slow, then a Web garden supports other concurrent connections without waiting for the slow connection to complete.

A good discussion of why not to use Web Gardening can be found here as well:  http://blogs.technet.com/b/mscom/archive/2007/07/10/gardening-on-the-web-server.aspx
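For reference, if you do decide a web garden is appropriate, the number of worker processes is controlled by the application pool’s maxProcesses setting in applicationHost.config (a value of 1 – the default – means no web garden).  A sketch of the relevant fragment, with “MyAppPool” as a placeholder pool name:

```xml
<!-- applicationHost.config: run "MyAppPool" as a web garden
     of 3 worker processes -->
<system.applicationHost>
  <applicationPools>
    <add name="MyAppPool">
      <processModel maxProcesses="3" />
    </add>
  </applicationPools>
</system.applicationHost>
```

The same setting can also be changed from the IIS Manager UI or via appcmd.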


Thursday, July 29, 2010 4:21:57 PM (Central Daylight Time, UTC-05:00) ( Hiring | Microsoft | Premier Field Engineer (PFE) )

First, a few questions:

  • Do you enjoy helping developers write better code?
  • Do you enjoy solving complex problems that span multiple technologies?
  • Do you enjoy optimizing and improving code?
  • Are you passionate about software development?
  • Do you enjoy managing your own calendar?
  • Do you want to make the world a better place?

If the answer to these questions is “YES!”, then please read-on.

We are now actively recruiting for 3 Developer Premier Field Engineering positions at Microsoft.

You may ask yourself, what does a Dev PFE do?

We do both proactive and reactive work encompassing a variety of Microsoft’s developer products.  In general, the reactive work is where a customer is experiencing a problem (usually in production) and they need someone onsite to help them resolve the issue. The proactive work usually takes the form of knowledge transfer to companies on how they can improve the maintainability of their code, how to debug problems and how to optimize their applications.  I have also done a fair number of “proofs of concept” for customers where they just don’t know how to do something or they want me to prove that it can be done. 

In the past year, I’ve worked on projects using .NET (1.1 – 4.0), Windows Azure, Internet Explorer, Bing, Bing Translator, Windows 7 and many others. 

And the list really does go on.  You will work with our largest customers around the world helping them write better code, solving complex issues, teaching them about the latest technology and just making the world a better place.

If this sounds like a good fit for you, click here to go to the Microsoft Careers site to apply.