# Saturday, 14 August 2010
Saturday, 14 August 2010 12:45:39 (Central Daylight Time, UTC-05:00) ( ASP.NET | Development | Premier Field Engineer (PFE) )

In my job as a PFE for Microsoft, I read, review and fix a lot of code. A lot of code. It’s a large part of what I love about my job. The code is generally written by large corporations or for public websites. Every now and again I’ll get pinged on an issue and, after troubleshooting it, it’s pretty clear that the root cause is in some community code. When I say community code, in this instance, I don’t mean a CodeProject or CodePlex project. In this case, I am referring to a control that Denis Bauer created and then made available to the community on his website – the “DynamicControlsPlaceholder” control. This is a great little control that inherits from a PlaceHolder and allows you to create dynamic controls on the fly; it then persists the controls you add across subsequent requests – like a postback.

The Problem

The customer was experiencing a problem that could only be reproduced in a web farm with sticky sessions turned off. They found that when a request moved from one server in the farm to another, they would get a FileNotFoundException with the following details:

Type Of Exception:FileNotFoundException
Message:Error on page http://blahblahblah.aspx
Exception Information:System.IO.FileNotFoundException:
Could not load file or assembly 'App_Web_myusercontrol.ascx.cc671b29.ypmqvhaw,
Version=0.0.0.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies.
The system cannot find the file specified.
File name: 'App_Web_myusercontrol.ascx.cc671b29.ypmqvhaw,
Version=0.0.0.0, Culture=neutral, PublicKeyToken=null'
at System.RuntimeTypeHandle.GetTypeByName(String name,
Boolean throwOnError,
Boolean ignoreCase,
Boolean reflectionOnly,
StackCrawlMark& stackMark)
...
at DynamicControlsPlaceholder.RestoreChildStructure(Pair persistInfo,
Control parent)
at DynamicControlsPlaceholder.LoadViewState(Object savedState)
at System.Web.UI.Control.LoadViewStateRecursive(Object savedState)
...
at System.Web.UI.Control.LoadViewStateRecursive(Object savedState)
at System.Web.UI.Page.LoadAllState()
at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint,
Boolean includeStagesAfterAsyncPoint)

So, we can glean a few things from the error details:

  • They are using the ASP.NET website model (the “app_web_….dll” assembly is the clue).
  • The error is occurring in the RestoreChildStructure method of the DynamicControlsPlaceholder control.

The Research

The way ASP.NET websites work is that each component of your site can be compiled into a separate assembly, and the assembly name is randomly generated. This also means that the same component can end up in differently named assemblies on two different servers. So a reasonable assumption is that something is trying to load an assembly by name. If we look at the RestoreChildStructure method, we see the following:


Type ucType = Type.GetType(typeName[1], true, true);

try
{
    MethodInfo mi = typeof(Page).GetMethod("LoadControl",
        new Type[2] { typeof(Type), typeof(object[]) });
    control = (Control) mi.Invoke(this.Page, new object[2] { ucType, null });
}
catch (Exception e)
{
    throw new ArgumentException(String.Format("The type '{0}' …",
        ucType.ToString()), e);
}

The important thing to look at here is the Type.GetType(…) call. Since the code for the control is in a separate assembly from everything else, the “typeName[1]” value MUST BE AN ASSEMBLY-QUALIFIED TYPE NAME. From the exception details, we can see that it is attempting to load the type from the following string:

App_Web_myusercontrol.ascx.cc671b29.ypmqvhaw, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null

The “typeName[1]” variable is loaded from ViewState because that’s where the control persists its child structure.  So, for some reason the fully qualified assembly name is stored in ViewState.  If we look at the code that inserts the value into ViewState (in the PersistChildStructure(…) method), we see:


typeName = "UC:" + control.GetType().AssemblyQualifiedName;

So, here we see the AssemblyQualifiedName being stored in ViewState – which is then used to recreate the controls across postbacks with the code above. As I mentioned, this won’t work for an ASP.NET website hosted in a web farm because the assembly-qualified name will probably be different from server to server. We even have a KB article that discusses this issue somewhat.
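
To see the difference for yourself, here is a small hedged snippet (the page event and control path are made up for illustration) that writes out both values for a dynamically loaded user control. On an ASP.NET website, the assembly-qualified name includes one of the auto-generated App_Web_* assembly names, while the app-relative virtual path is the same on every server in the farm:

protected void Page_Load(object sender, EventArgs e)
{
    // Hypothetical path - not from the original article.
    UserControl uc = (UserControl)LoadControl("~/Controls/MyUserControl.ascx");

    // Varies from server to server for an ASP.NET website, e.g.
    // "ASP.controls_myusercontrol_ascx, App_Web_..., Version=0.0.0.0, ..."
    Response.Write(uc.GetType().AssemblyQualifiedName + "<br/>");

    // Stable across servers: "~/Controls/MyUserControl.ascx"
    Response.Write(uc.AppRelativeVirtualPath);
}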

The Fix

Fortunately, the fix is pretty simple. 

First, we need to store the path to the user control in ViewState instead of the assembly-qualified name. To do this, you can comment out the “typeName = ….” line from directly above and replace it with:


UserControl uc = control as UserControl;
typeName = "UC:" + uc.AppRelativeVirtualPath;

So, now we store the path to the UserControl in ViewState.  Then, we need to fix the code that actually loads the control.  Replace the code from above in the RestoreChildStructure(…) method with this code:


string path = typeName[1];

try
{
    control = Page.LoadControl(path);
}
catch (Exception e)
{
    throw new ArgumentException(String.Format(
        "The type '{0}' cannot be recreated from ViewState",
        path), e);
}

That’s all there is to it. Just load the user control from the path where it lives in the site and ASP.NET will take care of loading the appropriate assembly.

Enjoy!

# Monday, 09 August 2010
Monday, 09 August 2010 01:09:32 (Central Daylight Time, UTC-05:00) ( Development | OpenXML )

I was working on an internal project a while back and one of the requirements was to produce a fancy Word document. The idea was that all of the editing of the text/code samples/etc. would be done in the application and then the user could just export it to Word to put on any finishing touches and send it off to the customer. The final report needed to include section headers, page breaks, a table of contents, etc. There are a number of ways we could have accomplished the task: there’s the Word automation route that relies upon a COM-based API, there’s the method of just creating an HTML document and loading that into Word, and finally there’s the Open XML API. Someone had previously hacked up a version of this export functionality using the Word automation approach but, considering we’re often dealing with 1,000+ page documents, it turned out to be a little slow. Also, there are some restrictions around using the automation libraries in a server context. Lastly, since my OpenXML kung-fu is strong, I thought I would take the opportunity to implement a better, more flexible and much faster solution. For those just starting out, Brian and Zeyad’s excellent blog on the topic is invaluable.

One of the requirements for the export operation was to have Word automagically refresh the table of contents (and other fields) the first time the document is opened. This took a bit of time to research, but you really end up with two options:

w:updateFields Element

The “w:updateFields” element is a document-level element that is set in the document settings part and tells Word to update all of the fields in the document:

<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<w:settings xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main">
  <w:updateFields w:val="true" />
  ...
</w:settings>

If you’re wondering what the document settings part is – just rename a Word doc from “blah.docx” to “blah.docx.zip” and extract it to a folder on your computer.  In the new folder is a directory called “word”.  In that directory, you should see a file called “settings.xml”:

In that file are all of the document-level settings for your docx. There’s some really great stuff in there.
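
If you’d rather inspect that part programmatically than rename files, here is a small hedged sketch that uses System.IO.Packaging (the API the OpenXML SDK sits on top of) to dump the raw settings part to the console; the helper name and path argument are just for illustration:

using System;
using System.IO;
using System.IO.Packaging;

static void DumpSettingsPart(string path)
{
    using (Package package = Package.Open(path, FileMode.Open, FileAccess.Read))
    {
        // The document settings part lives at /word/settings.xml inside the package.
        PackagePart settingsPart = package.GetPart(new Uri("/word/settings.xml", UriKind.Relative));

        using (StreamReader reader = new StreamReader(settingsPart.GetStream()))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}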

If you’d like to use the OpenXML SDK to set that value (and you’d be crazy not to), here’s some sample code:

using (WordprocessingDocument document = WordprocessingDocument.Open(path, true))
{
    DocumentSettingsPart settingsPart =
        document.MainDocumentPart.GetPartsOfType<DocumentSettingsPart>().First();

    // Create object to update fields on open
    UpdateFieldsOnOpen updateFields = new UpdateFieldsOnOpen();
    updateFields.Val = new DocumentFormat.OpenXml.OnOffValue(true);

    // Insert object into settings part.
    settingsPart.Settings.PrependChild<UpdateFieldsOnOpen>(updateFields);
    settingsPart.Settings.Save();
}

w:dirty Attribute

This attribute is applied to the field you would like to have refreshed when the document is opened in Word. It tells Word to refresh just that one field the next time the document is opened. For example, if you want to apply it to a field like your table of contents, just find the w:fldChar and add the attribute:

<w:r>
<w:fldChar w:fldCharType="begin" w:dirty="true"/>
</w:r>

For a simple field, like the document author, you’ll want to add it to the w:fldSimple element, like so:

<w:fldSimple w:instr="AUTHOR \* Upper \* MERGEFORMAT"
w:dirty="true" >
<w:r>
...
</w:r>
</w:fldSimple>
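
If you’d rather set w:dirty through the OpenXML SDK than edit the markup by hand, a hedged sketch along these lines should do it – this flags every field begin character in the body as dirty, and you could filter it down to just the TOC field if needed:

using (WordprocessingDocument document = WordprocessingDocument.Open(path, true))
{
    var body = document.MainDocumentPart.Document.Body;

    foreach (FieldChar fieldChar in body.Descendants<FieldChar>())
    {
        if (fieldChar.FieldCharType.Value == FieldCharValues.Begin)
        {
            // Equivalent to adding w:dirty="true" to the w:fldChar element.
            fieldChar.Dirty = new DocumentFormat.OpenXml.OnOffValue(true);
        }
    }

    document.MainDocumentPart.Document.Save();
}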

A caveat or two

Both of these methods will work just fine in Word 2010. 

In Word 2007, though, you need to clear out the cached contents of the field before the user opens the document. For example, with a table of contents, Word will normally cache the rendered TOC entries between the field’s begin and end fldChar elements. That is good, normally, but here it causes a problem.

For example, in a very simple test document, you would see the following cached data (i.e.:  Heading 1, Heading 2, etc.):

<w:p w:rsidR="00563999" w:rsidRDefault="00050B09">
...
<w:r>
<w:fldChar w:fldCharType="begin"/>
</w:r>
<w:r w:rsidR="00563999">
<w:instrText xml:space="preserve"> TOC \* MERGEFORMAT </w:instrText>
</w:r>
</w:p>
<w:p w:rsidR="00F77370" w:rsidRDefault="00F77370">
...
<w:r>
...
<w:t>Heading 1</w:t>
</w:r>
...
</w:p>
<w:p w:rsidR="00F77370" w:rsidRDefault="00F77370">
...
<w:r>
...
<w:t>Heading 2</w:t>
</w:r>
...
</w:p>
<w:p w:rsidR="00F77370" w:rsidRDefault="00F77370">
...
<w:r>
<w:rPr>
<w:noProof/>
</w:rPr>
<w:fldChar w:fldCharType="end"/>
</w:r>
</w:p>

After you clear out the schmutz, you end up with just the begin element, the definition of the TOC and the end element:

<w:p w:rsidR="00563999" w:rsidRDefault="00563999">
...
<w:r>
<w:fldChar w:fldCharType="begin"/>
</w:r>
<w:r>
<w:instrText xml:space="preserve"> TOC \* MERGEFORMAT </w:instrText>
</w:r>
</w:p>
<w:p w:rsidR="00B63C3C" w:rsidRDefault="00563999" w:rsidP="00B63C3C">
<w:r>
<w:fldChar w:fldCharType="end"/>
</w:r>
...
</w:p>

Once you’ve made the updates, you can safely open up your file in Word 2007 and your fields will update when the document opens.
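
If you’d rather strip that cached content programmatically, here is a hedged sketch using the OpenXML SDK (it assumes a using directive for System.Linq and that the first begin/end field-character pair in the body belongs to the TOC); it simply removes the paragraphs between the two markers, leaving the field definition and the end element behind:

using (WordprocessingDocument document = WordprocessingDocument.Open(path, true))
{
    Body body = document.MainDocumentPart.Document.Body;

    // Assumes the first begin/end pair in the body is the TOC field.
    Paragraph beginParagraph = body.Descendants<FieldChar>()
        .First(fc => fc.FieldCharType.Value == FieldCharValues.Begin)
        .Ancestors<Paragraph>().First();
    Paragraph endParagraph = body.Descendants<FieldChar>()
        .First(fc => fc.FieldCharType.Value == FieldCharValues.End)
        .Ancestors<Paragraph>().First();

    // The paragraphs between the two markers hold the cached TOC entries.
    var cachedParagraphs = body.Elements<Paragraph>()
        .SkipWhile(p => p != beginParagraph).Skip(1)
        .TakeWhile(p => p != endParagraph)
        .ToList();

    foreach (Paragraph cached in cachedParagraphs)
    {
        cached.Remove();
    }

    document.MainDocumentPart.Document.Save();
}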

Big thanks to Zeyad for his tip on trimming out the schmutz.

Just to stress, this is improved in Word 2010 and you no longer need to clear out the cached data in your fields.

Enjoy!

# Wednesday, 04 August 2010
Wednesday, 04 August 2010 01:23:21 (Central Daylight Time, UTC-05:00) ( Bing | Development | Google | Online Translator )

Online translators really aren’t all that new. They’ve been around for at least eight years or so. I remember the days when I would use Babelfish for all of my fun translations. It was a great way to get an immediate translation for something non-critical. The problem in a lot of cases was grammatical correctness. Translating word for word isn’t particularly difficult, but context and grammar vary so much between languages that it was always challenging to translate entire sentences, paragraphs, passages, etc. from one language to another.

Fortunately, the technology has improved a lot over the years. Now you can somewhat reliably translate entire web pages from one language to another. I’m not saying it’s without fault, but it has gotten a lot better over time. These days there are a few big players in this space, notably Google Translate, Babelfish and the Bing Translator. The interesting thing I’ve found is that only Bing actually has a supported API into its translation service.

There are three primary ways to interact with the service:

  • HTTP
  • SOAP
  • AJAX

They all expose essentially the same methods; it’s just the way you call them that differs. For example, the sample code published for the HTTP method looks like:

string appId = "myAppId";
string text = "Translate this for me";
string from = "en";
string to = "fr";

string detectUri = "http://api.microsofttranslator.com/v2/Http.svc/Translate?appId=" + appId +
    "&text=" + text + "&from=" + from + "&to=" + to;
HttpWebRequest httpWebRequest = (HttpWebRequest)WebRequest.Create(detectUri);
WebResponse resp = httpWebRequest.GetResponse();
Stream strm = resp.GetResponseStream();
StreamReader reader = new System.IO.StreamReader(strm);
string translation = reader.ReadToEnd();

Response.Write("The translated text is: '" + translation + "'.");

Then, for the SOAP method:

string result;
TranslatorService.LanguageServiceClient client =
    new TranslatorService.LanguageServiceClient();
result = client.Translate("myAppId",
                          "Translate this text into German",
                          "en", "de");
Console.WriteLine(result);

And lastly for the AJAX method:

var languageFrom = "en";
var languageTo = "es";
var text = "translate this.";

function translate() {
    window.mycallback = function(response) { alert(response); }

    var s = document.createElement("script");
    s.src = "http://api.microsofttranslator.com/V2/Ajax.svc/Translate?oncomplete=mycallback&appId=myAppId&from="
                + languageFrom + "&to=" + languageTo + "&text=" + text;
    document.getElementsByTagName("head")[0].appendChild(s);
}

Fortunately, it all works as you’d expect – cleanly and simply.  The really nice thing about this (and the Google Translator) is that when faced with straight-up HTML like:

<p class="style">Hello World!</p>

They will both return the following:

<p class="style">¡Hola mundo!</p>

Both translators will keep the HTML tags intact and only translate the actual text.  This undoubtedly comes in handy if you do any large bulk translations.  For example, I’m working with another couple of guys here on an internal (one day external) tool that has a lot of data in XML files with markup.  Essentially we need to translate something like the following:

<Article Id="this does not get translated"
         Title="Title of the article"
         Category="Category for the article"
         >
  <Content><![CDATA[<P>description for the article<BR/>another line </p>]]></Content>
</Article>

The cool thing is that if I just deserialize the above into an object and send the value of the Content member to the service like:

string value = client.Translate(APPID_TOKEN,
                                content, "en", "es");

I get only the content of the HTML translated:

<p>Descripción del artículo<br>otra línea</p>

Pretty nice and easy. One thing all of the translator services have trouble with is when I try to translate the entire XML element from above in one shot. Bing returns:

<article id="this does not get translated"
         title="Title of the article"
         category="Category for the article">
</article>
    <content><![CDATA[<P>Descripción del artículo<br>otra línea]]</content> >

And Google returns:

<= Id artículo "esto no se traduce"
Título = "Título del artículo"
Categoría = "Categoría para el artículo">

<Content> <! [CDATA [descripción <P> para el artículo <BR/> otra línea </ p >]]>
</ contenido>
</> Artículo

Oh well – I guess no one’s perfect, so for now we’ll be forced to deserialize and translate one element at a time.
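
For completeness, here is a hedged sketch of what that per-element approach might look like. The Article type and the surrounding names are assumptions for illustration, not code from the actual tool:

using System.Collections.Generic;
using System.Xml.Serialization;

public class Article
{
    [XmlAttribute("Id")]       public string Id { get; set; }
    [XmlAttribute("Title")]    public string Title { get; set; }
    [XmlAttribute("Category")] public string Category { get; set; }

    // When deserializing, the CDATA wrapper is transparent and Content
    // comes back as the inner HTML string.
    [XmlElement("Content")]    public string Content { get; set; }
}

public static void TranslateArticles(IEnumerable<Article> articles,
                                     TranslatorService.LanguageServiceClient client)
{
    foreach (Article article in articles)
    {
        // Translate one Content value at a time so the markup survives.
        article.Content = client.Translate("myAppId", article.Content, "en", "es");
    }
}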

Enjoy!

# Tuesday, 03 August 2010
Tuesday, 03 August 2010 01:50:24 (Central Daylight Time, UTC-05:00) ( Development )

Really interesting blog post by the IE team on some of the new DOM traversal features in IE9 (and other browsers). Oftentimes, you need to traverse the DOM to find a particular element or series of elements. In the past, you might have needed to write recursive JavaScript functions to walk the HTML on your page and find the elements you care about.

Now, in IE9 (and other browsers that follow the W3C spec), you can use node iterators to get a flat list of the elements that you actually care about.  For example:

// This would work fine with createTreeWalker, as well
var iter = document.createNodeIterator(elm,
                                       NodeFilter.SHOW_ELEMENT,
                                       null,
                                       false);

var node;
while (node = iter.nextNode())
{
    node.style.display = "none";
}

The NodeFilter interface defines the following constants (from the W3C spec here - http://www.w3.org/TR/2000/REC-DOM-Level-2-Traversal-Range-20001113/traversal.html#Traversal-NodeFilter):

const unsigned long       SHOW_ALL                       = 0xFFFFFFFF;
const unsigned long       SHOW_ELEMENT                   = 0x00000001;
const unsigned long       SHOW_ATTRIBUTE                 = 0x00000002;
const unsigned long       SHOW_TEXT                      = 0x00000004;
const unsigned long       SHOW_CDATA_SECTION             = 0x00000008;
const unsigned long       SHOW_ENTITY_REFERENCE          = 0x00000010;
const unsigned long       SHOW_ENTITY                    = 0x00000020;
const unsigned long       SHOW_PROCESSING_INSTRUCTION    = 0x00000040;
const unsigned long       SHOW_COMMENT                   = 0x00000080;
const unsigned long       SHOW_DOCUMENT                  = 0x00000100;
const unsigned long       SHOW_DOCUMENT_TYPE             = 0x00000200;
const unsigned long       SHOW_DOCUMENT_FRAGMENT         = 0x00000400;
const unsigned long       SHOW_NOTATION                  = 0x00000800;

While this is great – you can also write your own NodeFilter callback function to filter the results even further:

var iter = document.createNodeIterator(elm,
                                       NodeFilter.SHOW_ALL,
                                       keywordFilter,
                                       false);

function keywordFilter(node)
{
    // Guard against nodes (text nodes, elements without an alt attribute)
    // that would otherwise throw when we call toLowerCase() below.
    if (!node.getAttribute || node.getAttribute('alt') == null)
        return NodeFilter.FILTER_REJECT;

    var altStr = node.getAttribute('alt').toLowerCase();

    if (altStr.indexOf("flight") != -1 || altStr.indexOf("space") != -1)
        return NodeFilter.FILTER_ACCEPT;
    else
        return NodeFilter.FILTER_REJECT;
}

Really nice, and it can help make your code simpler to read and faster, too!

Enjoy!

Tuesday, 03 August 2010 01:48:17 (Central Daylight Time, UTC-05:00) ( .NET | Azure | Development | Logging )

One thing you might encounter when you start your development on Windows Azure is that there is an insane number of options available for logging. You can view a quick primer here. One of the things that I like is that you don’t necessarily need to learn a whole new API just to use it. Instead, the logging facilities in Azure integrate really well with the existing Debug and Trace logging APIs in .NET. This is a really nice feature and is done very well in Azure. In fact, setting it up and configuring it is all of about five lines of code. Actually, it’s four lines of code with one line that wraps:

public override bool OnStart()
{
    DiagnosticMonitorConfiguration dmc =
        DiagnosticMonitor.GetDefaultInitialConfiguration();
    dmc.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
    dmc.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;
    DiagnosticMonitor.Start("DiagnosticsConnectionString", dmc);

    return base.OnStart();
}

One specific item to note is the ScheduledTransferPeriod property of the Logs property. The minimum value you can set for that property is the equivalent of one minute. The only downside to this method of logging is that if your Azure role crashes within that minute, whatever data you have written to the built-in logging will be lost. This also means that if you are writing exceptions to your logging, those will be lost as well. That can cause problems if you’d like to know both when your role crashed and why (the exception details).
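
Once the monitor is started, the day-to-day logging itself is just the standard System.Diagnostics calls. A minimal hedged example (this assumes the default DiagnosticMonitorTraceListener that the Azure project templates add to your configuration is still in place):

// Picked up by the diagnostic monitor and shipped to storage
// on the next scheduled transfer.
Trace.TraceInformation("OnStart completed at {0}", DateTime.UtcNow);
Trace.TraceError("Unable to reach the backend service - will retry");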

Before we talk about a way to get around that, let’s review why the role might crash. In the Windows Azure world, there are two primary roles that you will use: a Worker Role and a Web Role. Their main characteristics and the reasons they would crash or restart are below:

| Role Type | Analogous to | Why would it crash/restart? |
| --- | --- | --- |
| Worker Role | Console application | Any unhandled exception. |
| Web Role | ASP.NET application hosted in IIS 7.0+ | Any unhandled exception thrown on a background thread; a StackOverflowException; an unhandled exception on the finalizer thread. |

As you can see, the majority of the reasons why an Azure role would recycle, crash or restart are the same as for any other application – essentially, an unhandled exception. Therefore, to mitigate this issue, we can subscribe to the AppDomain’s UnhandledException event. This event is fired when your application experiences an exception that is not caught, and it fires RIGHT BEFORE the application crashes. You can subscribe to this event in the role’s OnStart() method:

public override bool OnStart()
{
    AppDomain appDomain = AppDomain.CurrentDomain;
    appDomain.UnhandledException +=
        new UnhandledExceptionEventHandler(appDomain_UnhandledException);
    ...
}

You will now be notified right before your process crashes. The last piece of the puzzle is logging the exception details. Since you must log the details right when the exception happens, you can’t just use the normal Trace or Debug statements. Instead, we will write to Azure storage directly. Steve Marx has a good blog entry about printf in the cloud. While it works, it requires the connection string to be placed right into the logging call, and he mentions that you don’t want that in a production application. In our case, we will do things a little bit differently. First, we must add the requisite variables and initialize the storage objects:

private static bool storageInitialized = false;
private static object gate = new Object();
private static CloudBlobClient blobStorage;
private static CloudQueueClient queueStorage;

private void InitializeStorage()
{
    if (storageInitialized)
    {
        return;
    }

    lock (gate)
    {
        if (storageInitialized)
        {
            return;
        }

        // read account configuration settings
        var storageAccount =
            CloudStorageAccount.FromConfigurationSetting("DataConnectionString");

        // create blob container for errors
        blobStorage =
            storageAccount.CreateCloudBlobClient();
        CloudBlobContainer container = blobStorage.
            GetContainerReference("webroleerrors");

        container.CreateIfNotExist();

        // configure container for public access
        var permissions = container.GetPermissions();
        permissions.PublicAccess =
            BlobContainerPublicAccessType.Container;
        container.SetPermissions(permissions);

        storageInitialized = true;
    }
}

This declares the requisite storage variables, and when the InitializeStorage() method is executed they are set to the appropriate initialized values. Lastly, we must call this new method and then write to storage. We put this code in our UnhandledException event handler:

void appDomain_UnhandledException(object sender, UnhandledExceptionEventArgs e)
{
    // Initialize the storage variables.
    InitializeStorage();

    // Get reference to error container.
    var container = blobStorage.
        GetContainerReference("webroleerrors");

    if (container != null)
    {
        // Retrieve last exception.
        Exception ex = e.ExceptionObject as Exception;

        if (ex != null)
        {
            // Will create a new entry in the container
            // and upload the text representing the
            // exception.
            container.GetBlobReference(
               String.Format(
                  "<insert unique name for your application>-{0}-{1}",
                  RoleEnvironment.CurrentRoleInstance.Id,
                  DateTime.UtcNow.Ticks)
               ).UploadText(ex.ToString());
        }
    }
}

Now, when your Azure role is about to crash, you’ll find an entry in your blob storage with the details of the exception that was thrown. For example, one of the leading reasons an Azure worker role crashes is that it can’t find a dependency it needs. In that case, you’ll find an entry in your storage with the following details:

System.IO.FileNotFoundException: Could not load file or assembly
    'GuestBook_Data, Version=1.0.0.0, Culture=neutral,
    PublicKeyToken=f8a5fcb6c395f621' or one of its dependencies.
    The system cannot find the file specified.

File name: 'GuestBook_Data, Version=1.0.0.0, Culture=neutral,
    PublicKeyToken=f8a5fcb6c395f621'

Some other common reasons why an Azure role might crash (especially when you first deploy it) can be found on Anton Staykov’s excellent blog.

Hope this helps.

Enjoy!

# Thursday, 29 July 2010
Thursday, 29 July 2010 18:50:09 (Central Daylight Time, UTC-05:00) ( Best Practice | IIS | Performance )

I seem to get this question a lot and come across many customer environments where they have enabled web gardening thinking that it will automagically improve the performance of their site/application.


Most of the time, that is not the case. The funny thing is that once I finally convince them that web gardening is not the way to go, they try to apply that same knowledge to other sites and applications in their environment. When this happens, I’ll get an e-mail or phone call asking for some guidelines on when to enable web gardening.

We typically recommend using web gardening as a stop-gap (or workaround) when a customer has a core issue that is limiting their website and web application scalability.

For example, if a customer has a memory issue that is causing OutOfMemoryExceptions in their main website, we may recommend web gardening to spread the load across multiple worker processes while we assist them in resolving the core memory issue. Please note that this also increases the memory and processor utilization on the server and in some cases might not be viable.
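
For reference, web gardening is simply the application pool’s maxProcesses setting. Here is a hedged sketch of flipping it programmatically with Microsoft.Web.Administration – the pool name is made up, and setting the value back to 1 turns the garden off again:

using Microsoft.Web.Administration;

class WebGardenConfig
{
    static void Main()
    {
        using (ServerManager serverManager = new ServerManager())
        {
            // "MyAppPool" is a hypothetical application pool name.
            ApplicationPool pool = serverManager.ApplicationPools["MyAppPool"];

            // More than one worker process per application pool = a web garden.
            pool.ProcessModel.MaxProcesses = 3;

            serverManager.CommitChanges();
        }
    }
}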

As a best practice, create Web gardens only for Web applications that meet the following criteria (taken from here):

  • The application runs multi-instantiated, so that a different instance of the application can be assigned to each worker process.
  • The Web application is not CPU-intensive. If the CPU is the bottleneck, then adding worker processes cannot help improve performance.
  • The application is subject to synchronous high latency. For example, if an application calls a back-end database and the response is slow, then a Web garden supports other concurrent connections without waiting for the slow connection to complete.

A good discussion of why not to use Web Gardening can be found here as well:  http://blogs.technet.com/b/mscom/archive/2007/07/10/gardening-on-the-web-server.aspx

Enjoy!

Thursday, 29 July 2010 16:21:57 (Central Daylight Time, UTC-05:00) ( Hiring | Microsoft | Premier Field Engineer (PFE) )

First, a few questions:

  • Do you enjoy helping developers write better code?
  • Do you enjoy solving complex problems that span multiple technologies?
  • Do you enjoy optimizing and improving code?
  • Are you passionate about software development?
  • Do you enjoy managing your own calendar?
  • Do you want to make the world a better place?

If the answer to these questions is “YES!”, then please read on.

We are now actively recruiting for three Developer Premier Field Engineering positions at Microsoft.

You may ask yourself, what does a Dev PFE do?

We do both proactive and reactive work encompassing a variety of Microsoft’s developer products.  In general, the reactive work is where a customer is experiencing a problem (usually in production) and they need someone onsite to help them resolve the issue. The proactive work usually takes the form of knowledge transfer to companies on how they can improve the maintainability of their code, how to debug problems and how to optimize their applications.  I have also done a fair number of “proofs of concept” for customers where they just don’t know how to do something or they want me to prove that it can be done. 

In the past year, I’ve worked on projects using .NET (1.1 – 4.0), Windows Azure, Internet Explorer, Bing, Bing Translator, Windows 7 and many others.

And the list really does go on.  You will work with our largest customers around the world helping them write better code, solving complex issues, teaching them about the latest technology and just making the world a better place.

If this sounds like a good fit for you, click here to go to the Microsoft Careers site to apply.

# Sunday, 25 July 2010
Sunday, 25 July 2010 12:47:04 (Central Daylight Time, UTC-05:00) ( )

If you have visited my blog anytime in the last two weeks, you may have noticed an error page. This was due to my hosting provider “accidentally” deleting my site’s database. This was actually a perfect storm of sorts. All three of these things happened within the past few weeks:

  • I had recently repaved my main machine and decided to wipe out my database backups.  I figured I would just get a fresh backup afterwards.
  • With some recent travel for work, I did not have a chance to get those database backups.
  • The hosting company did not have a backup of the database they deleted.

All three of these extenuating circumstances led to this site being in a sorry state of repair. 

Fortunately, all is not entirely lost, as both Bing and Google have cached copies of most of my posts, but it leaves me in a quandary over whether I should bother reposting everything from the past four years or just start fresh. For now, I’ve decided to repost only those blog entries that had more than 1,000 views. That said, if you remember a blog entry and it hasn’t been reposted, please send me a message at greg [at] samuraiprogrammer dot com and I will get it reposted.

Please enjoy the new site. 

Thanks!

# Friday, 23 July 2010
Friday, 23 July 2010 23:35:28 (Central Daylight Time, UTC-05:00) ( )

I recently ran into a problem with my Flip UltraHD Video camcorder where it would not turn on.  Unlike other camcorders in the Flip family, there is no microscopic reset button anywhere on the device.

After e-mailing support, I received a response the next day with the steps to reset the device.  For some reason, this information is not listed in their knowledge base on the support site, so I thought I would post it here. 

The steps to reset/resolve an issue where your UltraHD will not turn on are as follows:

  1. Remove the battery pack from the camcorder.
  2. Connect the camcorder to a powered USB port on your computer.
  3. When the "Connected" indicator comes on, insert the battery pack into the camcorder.
  4. Safely eject your camcorder from your computer.
  5. Reconnect your camcorder to your computer.
  6. The battery pack should now begin to charge within the camcorder.

Hope this helps someone else out there. 

Friday, 23 July 2010 22:58:54 (Central Daylight Time, UTC-05:00) ( Powerpoint | Presentations | VBA )

I recently came across a situation where I had several PowerPoint decks that were VERY well documented. Essentially, each slide had reams of notes in the Notes panel of the deck. This is both good and bad. It was good because, for preparation purposes, it was very easy to review the notes as you reviewed the slides. It was bad because many of the machines I presented from could not span two monitors, so the slides and the notes could not be shown on separate screens.

This presented a conundrum because it’s nice to have the notes handy as I present the material. That way, I can refer back to the main bullets to make sure I covered everything before moving on. For other workshops I teach, we have a paper instructor guide that you can keep next to your presenter machine on the podium and all is good to go.

Unfortunately, nothing like that existed for this workshop. Fortunately, PowerPoint 2007 provides a feature to export your presentation to Word 2007. The full steps are located here, but essentially you are looking for this screen:

Creating handouts in Word

This is a nice solution, but the main problem is that it embeds PowerPoint objects into the Word document. That *sounds* like a good idea, but it grows the file size of the document to a huge degree. For example, one deck had 36 slides and the resulting Word document was over 50 MB!

Fortunately, with some nice VBA code, you can do the following:

  1. Iterate through every shape in the document.
  2. Copy the PowerPoint object.
  3. Paste that object as a JPEG.
  4. Delete the PowerPoint object.

That brings the file size back down to where it should be, from my perspective. For example, for the file I mentioned previously, it brought the size down from 55 MB to 642 KB. Talk about a tremendous improvement!

The VBA code to use is as follows:

Sub ConvertPowerPointToImage()

    Dim j As InlineShape
    For Each j In Word.ActiveDocument.InlineShapes
        j.Select
        Selection.Copy
        Selection.PasteSpecial Link:=False, _
                  DataType:=15, _
                  Placement:=wdInLine, _
                  DisplayAsIcon:=False
        j.Delete
    Next j
End Sub

Just copy/paste the above code into a new module.  Then execute the “ConvertPowerPointToImage” macro.  That will clean everything up and make the file a lot more manageable. 

Enjoy!