# Wednesday, 04 August 2010
Wednesday, 04 August 2010 01:23:21 (Central Daylight Time, UTC-05:00) ( Bing | Development | Google | Online Translator )

An online translator really isn’t all that new.  They’ve been around for at least 8 years or so.  I remember the days when I would use Babelfish for all of my fun translations.  It was a great way to get an immediate translation for something non-critical.  The problem in a lot of cases was grammatical correctness.  Translating word for word isn’t particularly difficult, but context and grammar vary so much between languages that it was always challenging to translate entire sentences, paragraphs, passages, etc. from one language to another. 

Fortunately, the technology has improved a lot over the years.  Now, you can somewhat reliably translate entire web pages from one language to another.  I’m not saying it’s without fault – but I am saying that it’s gotten a lot better over time.  These days there are a few big players in this space, notably Google Translate, Babelfish and the Bing Translator.  The interesting thing I’ve found is that only Bing actually has a supported API into its translation service.

There are 3 primary ways to interact with the service:

  • HTTP
  • SOAP
  • AJAX

They all seem to expose the same methods; it’s just the way you call them that differs.  For example, the sample code published for the HTTP method looks like:

    string appId = "myAppId";
    string text = "Translate this for me";
    string from = "en";
    string to = "fr";

    // Note: in real code the query string values should be URL-encoded.
    string detectUri = "http://api.microsofttranslator.com/v2/Http.svc/Translate?appId=" + appId +
        "&text=" + text + "&from=" + from + "&to=" + to;
    HttpWebRequest httpWebRequest = (HttpWebRequest)WebRequest.Create(detectUri);
    WebResponse resp = httpWebRequest.GetResponse();
    Stream strm = resp.GetResponseStream();
    StreamReader reader = new System.IO.StreamReader(strm);
    string translation = reader.ReadToEnd();

    Response.Write("The translated text is: '" + translation + "'.");

Then, for the SOAP method:

    string result;
    TranslatorService.LanguageServiceClient client = 
                        new TranslatorService.LanguageServiceClient(); 
    result = client.Translate("myAppId", 
                              "Translate this text into German", 
                              "en", "de"); 
    Console.WriteLine(result);

And lastly for the AJAX method:

    var languageFrom = "en";
    var languageTo = "es";
    var text = "translate this.";

    function translate() {
        window.mycallback = function(response) { alert(response); }

        var s = document.createElement("script");
        s.src = "http://api.microsofttranslator.com/V2/Ajax.svc/Translate?oncomplete=mycallback&appId=myAppId&from=" 
                    + languageFrom + "&to=" + languageTo + "&text=" + text;
        document.getElementsByTagName("head")[0].appendChild(s);
    }

Fortunately, it all works as you’d expect – cleanly and simply.  The really nice thing about this (and the Google Translator) is that when faced with straight-up HTML like:

    <p class="style">Hello World!</p>

They will both return the following:

    <p class="style">¡Hola mundo!</p>

Both translators will keep the HTML tags intact and only translate the actual text.  This undoubtedly comes in handy if you do any large bulk translations.  For example, I’m working with another couple of guys here on an internal (one day external) tool that has a lot of data in XML files with markup.  Essentially we need to translate something like the following:

    <Article Id="this does not get translated" 
             Title="Title of the article" 
             Category="Category for the article"
             >
      <Content><![CDATA[<P>description for the article<BR/>another line </p>]]></Content>
    </Article>

The cool thing is that if I just deserialize the above into an object and send the value of the Content member to the service like:

    string value = client.Translate(APPID_TOKEN, 
                                    content, "en", "es");

I get only the content of the HTML translated:

    <p>Descripción del artículo<br>otra línea</p>
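To make the “deserialize and translate just the Content member” step a little more concrete, here is a minimal sketch using XmlSerializer.  The class shape and the articleXml variable are assumptions on my part based on the XML above (the real tool’s types will differ); client and APPID_TOKEN are the same ones used in the earlier samples:

    using System.IO;
    using System.Xml.Serialization;

    [XmlRoot("Article")]
    public class Article
    {
        [XmlAttribute("Id")]
        public string Id { get; set; }

        [XmlAttribute("Title")]
        public string Title { get; set; }

        [XmlAttribute("Category")]
        public string Category { get; set; }

        // The CDATA block comes back as a plain string.
        [XmlElement("Content")]
        public string Content { get; set; }
    }

    // Deserialize the article, translate only the Content member and
    // leave the Id, Title and Category attributes untouched.
    var serializer = new XmlSerializer(typeof(Article));
    Article article;
    using (var reader = new StringReader(articleXml))
    {
        article = (Article)serializer.Deserialize(reader);
    }

    article.Content = client.Translate(APPID_TOKEN, article.Content, "en", "es");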

Pretty nice and easy.  One thing all of the translator services have trouble with, though, is translating the entire XML element from above in one shot.  Bing returns:

    <article id="this does not get translated" 
             title="Title of the article" 
             category="Category for the article">
    </article> 
        <content><![CDATA[<P>Descripción del artículo<br>otra línea]]</content> >

And Google returns:

    <= Id artículo "esto no se traduce"
    Título = "Título del artículo"
    Categoría = "Categoría para el artículo">

    <Content> <! [CDATA [descripción <P> para el artículo <BR/> otra línea </ p >]]>
    </ contenido>
    </> Artículo

Oh well – I guess no one’s perfect, and for now we’ll be forced to deserialize and translate one element at a time.

Enjoy!

# Tuesday, 03 August 2010
Tuesday, 03 August 2010 01:50:24 (Central Daylight Time, UTC-05:00) ( Development )

Really interesting blog post by the IE team on some of the new DOM traversal features in IE9 (and other browsers).  Oftentimes, you need to traverse the DOM to find a particular element or series of elements.  In the past, you might have needed to write recursive JavaScript functions to navigate through the HTML on your page and act upon the elements you care about.

Now, in IE9 (and other browsers that follow the W3C spec), you can use node iterators to get a flat list of the elements that you actually care about.  For example:

    // This would work fine with createTreeWalker, as well
    var iter = document.createNodeIterator(elm, 
                                           NodeFilter.SHOW_ELEMENT, 
                                           null, 
                                           false); 

    // The first call returns the root element itself, so it is skipped;
    // the loop then hides each of its descendant elements.
    var node = iter.nextNode();
    while (node = iter.nextNode())
    {
        node.style.display = "none";
    }

The NodeFilter interface defines the following constants (from the W3C spec here - http://www.w3.org/TR/2000/REC-DOM-Level-2-Traversal-Range-20001113/traversal.html#Traversal-NodeFilter):

    const unsigned long       SHOW_ALL                       = 0xFFFFFFFF;
    const unsigned long       SHOW_ELEMENT                   = 0x00000001;
    const unsigned long       SHOW_ATTRIBUTE                 = 0x00000002;
    const unsigned long       SHOW_TEXT                      = 0x00000004;
    const unsigned long       SHOW_CDATA_SECTION             = 0x00000008;
    const unsigned long       SHOW_ENTITY_REFERENCE          = 0x00000010;
    const unsigned long       SHOW_ENTITY                    = 0x00000020;
    const unsigned long       SHOW_PROCESSING_INSTRUCTION    = 0x00000040;
    const unsigned long       SHOW_COMMENT                   = 0x00000080;
    const unsigned long       SHOW_DOCUMENT                  = 0x00000100;
    const unsigned long       SHOW_DOCUMENT_TYPE             = 0x00000200;
    const unsigned long       SHOW_DOCUMENT_FRAGMENT         = 0x00000400;
    const unsigned long       SHOW_NOTATION                  = 0x00000800;

While this is great – you can also write your own NodeFilter callback function to filter the results even further:

    var iter = document.createNodeIterator(elm, 
                                           NodeFilter.SHOW_ALL, 
                                           keywordFilter, 
                                           false);

    function keywordFilter(node)
    {
        // Guard: SHOW_ALL also passes text/comment nodes, and not every
        // element has an alt attribute.
        if (!node.getAttribute || !node.getAttribute('alt'))
            return NodeFilter.FILTER_REJECT;

        var altStr = node.getAttribute('alt').toLowerCase();

        if (altStr.indexOf("flight") != -1 || altStr.indexOf("space") != -1)
            return NodeFilter.FILTER_ACCEPT;
        else
            return NodeFilter.FILTER_REJECT;
    }

Really nice and can help make your code simpler to read and faster too!

Enjoy!

Tuesday, 03 August 2010 01:48:17 (Central Daylight Time, UTC-05:00) ( .NET | Azure | Development | Logging )

One thing you might encounter when you start your development on Windows Azure is that there is an insane number of options available for logging.  You can view a quick primer here.  One of the things that I like is that you don’t necessarily need to learn a whole new API just to use it.  Instead, the logging facilities in Azure integrate really well with the existing Debug and Trace logging APIs in .NET.  This is a really nice feature and is done very well in Azure.  In fact, setting it up and configuring it is all of about 5 lines of code.  Actually, it’s four lines of code with one line that wraps:

    public override bool OnStart() 
    { 
        DiagnosticMonitorConfiguration dmc =          
                DiagnosticMonitor.GetDefaultInitialConfiguration(); 
        dmc.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1); 
        dmc.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;
        DiagnosticMonitor.Start("DiagnosticsConnectionString", dmc); 

        return base.OnStart();
    } 
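Once the monitor is started, you don’t call a special Azure logging API; the standard System.Diagnostics Trace calls you already use are captured (via the trace listener that the Azure project templates add to your configuration) and shipped to storage on the schedule configured above.  A quick sketch – the message text is just an example:

    using System.Diagnostics;

    // These calls are picked up by the diagnostic monitor and transferred
    // to storage on the next scheduled transfer.
    Trace.TraceInformation("Role started; beginning to process work items.");
    Trace.TraceError("Could not process work item: {0}", "details go here");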

One specific item to note is the ScheduledTransferPeriod property of the Logs property.  The minimum value you can set for that property is the equivalent of 1 minute.  The only downside to this method of logging is that if your Azure role crashes within that minute, whatever data you have written to the built-in logging will be lost.  This also means that if you are writing exceptions to your logging, those will be lost as well.  That can cause problems if you’d like to know both when your role crashed and why (the exception details).

Before we talk about a way to get around it, let’s review why a role might crash.  In the Windows Azure world, there are two primary roles that you will use: a Worker Role and a Web Role.  Their main characteristics and the reasons they would crash/restart are below:

Worker Role (analogous to a console application):

  • Any unhandled exception.

Web Role (analogous to an ASP.NET application hosted in IIS 7.0+):

  • Any unhandled exception thrown on a background thread.
  • StackOverflowException.
  • Unhandled exception on the finalizer thread.

As you can see, the majority of the reasons why an Azure role would recycle/crash/restart are essentially the same as with any other application – essentially an unhandled exception.  Therefore, to mitigate this issue, we can subscribe to the AppDomain’s UnhandledException Event.  This event is fired when your application experiences an exception that is not caught and will fire RIGHT BEFORE the application crashes.  You can subscribe to this event in the Role OnStart() method:

    public override bool OnStart()
    {
        AppDomain appDomain = AppDomain.CurrentDomain;
        appDomain.UnhandledException += 
            new UnhandledExceptionEventHandler(appDomain_UnhandledException);
        ...
    }

You will now be notified right before your process crashes.  The last piece of this puzzle is logging the exception details.  Since you must log the details right when the exception happens, you can’t just use the normal Trace or Debug statements.  Instead, we will write to Azure storage directly.  Steve Marx has a good blog entry about printf in the cloud.  While it works, it requires the connection string to be placed right into the logging call, and he mentions that you don’t want that in a production application.  In our case, we will do things a little bit differently.  First, we must add the requisite variables and initialize the storage objects:

    private static bool storageInitialized = false;
    private static object gate = new Object();
    private static CloudBlobClient blobStorage;
    private static CloudQueueClient queueStorage;

    private void InitializeStorage()
    {
       if (storageInitialized)
       {
           return;
       }

       lock (gate)
       {
          if (storageInitialized)
          {
             return;
          }

          // read account configuration settings
          // (FromConfigurationSetting assumes a configuration setting
          // publisher was registered during role startup)
          var storageAccount = 
            CloudStorageAccount.FromConfigurationSetting("DataConnectionString");

          // create the blob container that will hold the error entries
          blobStorage = 
              storageAccount.CreateCloudBlobClient();
          CloudBlobContainer container = blobStorage.
              GetContainerReference("webroleerrors");

          container.CreateIfNotExist();

          // configure container for public access
          var permissions = container.GetPermissions();
          permissions.PublicAccess = 
               BlobContainerPublicAccessType.Container;
          container.SetPermissions(permissions);

          storageInitialized = true;
      }
    }

This declares the requisite logging variables; when the InitializeStorage() method is executed, they are set to the appropriate initialized values.  Lastly, we must call this new method and then write to storage.  We put this code in our UnhandledException event handler:

    void appDomain_UnhandledException(object sender, UnhandledExceptionEventArgs e)
    {
        // Initialize the storage variables.
        InitializeStorage();

        // Get reference to error container.
        var container = blobStorage.
                   GetContainerReference("webroleerrors");

        if (container != null)
        {
            // Retrieve last exception.
            Exception ex = e.ExceptionObject as Exception;

            if (ex != null)
            {
                // Will create a new entry in the container
                // and upload the text representing the 
                // exception.
                container.GetBlobReference(
                   String.Format(
                      "<insert unique name for your application>-{0}-{1}",
                      RoleEnvironment.CurrentRoleInstance.Id,
                      DateTime.UtcNow.Ticks)
                   ).UploadText(ex.ToString());
            }
        }
    }

Now, when your Azure role is about to crash, you’ll find an entry in your blob storage with the details of the exception that was thrown.  For example, one of the leading reasons an Azure Worker Role crashes is that it can’t find a dependency it needs.  In that case, you’ll find an entry in your storage with the following details:

    System.IO.FileNotFoundException: Could not load file or assembly 
        'GuestBook_Data, Version=1.0.0.0, Culture=neutral, 
        PublicKeyToken=f8a5fcb6c395f621' or one of its dependencies. 
        The system cannot find the file specified.

    File name: 'GuestBook_Data, Version=1.0.0.0, Culture=neutral, 
        PublicKeyToken=f8a5fcb6c395f621'
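If you want to pull those entries back down for a closer look, the same storage client the handler uses can read them.  A rough sketch, assuming the blobStorage client and the "webroleerrors" container from the code above:

    // Enumerate the error blobs written by the UnhandledException handler
    // and dump each exception's text.
    CloudBlobContainer errors = blobStorage.GetContainerReference("webroleerrors");

    foreach (IListBlobItem item in errors.ListBlobs())
    {
        CloudBlob blob = errors.GetBlobReference(item.Uri.ToString());
        Console.WriteLine(blob.DownloadText());
    }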

Some other common reasons why an Azure role might crash (especially when you first deploy it) can be found on Anton Staykov’s excellent blog.

Hope this helps.

Enjoy!

# Thursday, 29 July 2010
Thursday, 29 July 2010 18:50:09 (Central Daylight Time, UTC-05:00) ( Best Practice | IIS | Performance )

I seem to get this question a lot and come across many customer environments where they have enabled web gardening thinking that it will automagically improve the performance for their site/application.


Most of the time, that is not the case.  The funny thing is that once I finally convince them that web gardening is not the way to go, they try to apply that same knowledge to other sites and applications in their environment.  When this happens, I’ll get an e-mail or phone call asking for some guidelines on when to enable web gardening.

We typically recommend using Web Gardening as a stop-gap (or workaround) for when a customer has a core issue that is limiting their website and web application scalability.

For example, if a customer has a memory issue that is causing OutOfMemoryExceptions in their main website – we may recommend web gardening to spread the load across multiple worker processes while we assist them in resolving the core memory issue.  Please note that this would also increase the memory and processor utilization on the server and in some cases might not be viable.

As a best practice, create Web gardens only for Web applications that meet the following criteria (taken from here):

  • The application runs multi-instantiated, so that a different instance of the application can be assigned to each worker process.
  • The Web application is not CPU-intensive. If the CPU is the bottleneck, then adding worker processes cannot help improve performance.
  • The application is subject to synchronous high latency. For example, if an application calls a back-end database and the response is slow, then a Web garden supports other concurrent connections without waiting for the slow connection to complete.

A good discussion of why not to use Web Gardening can be found here as well:  http://blogs.technet.com/b/mscom/archive/2007/07/10/gardening-on-the-web-server.aspx

Enjoy!

Thursday, 29 July 2010 16:21:57 (Central Daylight Time, UTC-05:00) ( Hiring | Microsoft | Premier Field Engineer (PFE) )

First, a few questions:

  • Do you enjoy helping developers write better code?
  • Do you enjoy solving complex problems that span multiple technologies?
  • Do you enjoy optimizing and improving code?
  • Are you passionate about software development?
  • Do you enjoy managing your own calendar?
  • Do you want to make the world a better place?

If the answer to these questions is “YES!”, then please read-on.

We are now actively recruiting for 3 Developer Premier Field Engineering positions at Microsoft.

You may ask yourself: what does a Dev PFE do?

We do both proactive and reactive work encompassing a variety of Microsoft’s developer products.  In general, the reactive work is where a customer is experiencing a problem (usually in production) and they need someone onsite to help them resolve the issue. The proactive work usually takes the form of knowledge transfer to companies on how they can improve the maintainability of their code, how to debug problems and how to optimize their applications.  I have also done a fair number of “proofs of concept” for customers where they just don’t know how to do something or they want me to prove that it can be done. 

In the past year, I’ve worked on projects using .NET (1.1 – 4.0), Windows Azure, Internet Explorer, Bing, Bing Translator, Windows 7 and many others. 

And the list really does go on.  You will work with our largest customers around the world helping them write better code, solving complex issues, teaching them about the latest technology and just making the world a better place.

If this sounds like a good fit for you, click here to go to the Microsoft Careers site to apply.

# Sunday, 25 July 2010
Sunday, 25 July 2010 12:47:04 (Central Daylight Time, UTC-05:00) ( )

If you have visited my blog anytime in the last 2 weeks, you may have noticed an error page.  This was due to my hosting provider “accidentally” deleting my site’s database.  This was actually a perfect storm of sorts.  All three of these things happened within the past few weeks:

  • I had recently repaved my main machine and decided to wipe out my database backups.  I figured I would just get a fresh backup afterwards.
  • With some recent travel for work, I did not have a chance to get those database backups.
  • The hosting company did not have a backup of the database they deleted.

All three of these extenuating circumstances led to this site being in a sorry state of repair. 

Fortunately, all is not entirely lost, as both Bing and Google have cached copies of most of my posts, but it leaves me in a quandary about whether I should bother reposting everything from the past four years or just start fresh.  For now, I’ve decided to repost only those blog entries that had more than 1,000 views.  That said, if you remember a blog entry and it hasn’t been reposted, please send me a message at greg [at] samuraiprogrammer dot com and I will get it reposted.

Please enjoy the new site. 

Thanks!

# Friday, 23 July 2010
Friday, 23 July 2010 23:35:28 (Central Daylight Time, UTC-05:00) ( )

I recently ran into a problem with my Flip UltraHD Video camcorder where it would not turn on.  Unlike other camcorders in the Flip family, there is no microscopic reset button anywhere on the device.

After e-mailing support, I received a response the next day with the steps to reset the device.  For some reason, this information is not listed in their knowledge base on the support site, so I thought I would post it here. 

The steps to reset/resolve an issue where your UltraHD will not turn on are as follows:

  1. Remove the battery pack from the camcorder.
  2. Connect the camcorder to a powered USB port on your computer.
  3. When the "Connected" indicator comes on, insert the battery pack into the camcorder.
  4. Safe eject your camcorder from your computer.
  5. Reconnect your camcorder to your computer.
  6. The battery pack should now begin to charge within the camcorder.

Hope this helps someone else out there. 

Friday, 23 July 2010 22:58:54 (Central Daylight Time, UTC-05:00) ( Powerpoint | Presentations | VBA )

I recently came across a situation where I had several PowerPoint decks that were VERY well documented.  Essentially, each slide had reams of notes in the Notes panel of the deck.  This is both good and bad.  It was good because, for preparation purposes, it was very easy to review the notes as you reviewed the slides.  It was bad because, on many of the machines I presented from, you could not split the display so that the slides were on one monitor and the notes on another.

This presented a conundrum because it’s nice to be able to have the notes handy as I present the material.  That way, I can refer back to the main bullets to make sure I covered everything before moving on.  For other workshops I teach, we have an instructor guide on paper that you can have up next to your presenter machine on the podium and all is good to go.

Unfortunately, there was nothing like that setup already for this workshop.  Fortunately, PowerPoint 2007 provides a feature to export your presentation to Word 2007.  Full steps are located here but essentially you are looking for this screen:

Creating handouts in Word

This is a nice solution but the main problem that I had is that it embeds PowerPoint objects into the slide deck.  This *sounds* like a good idea but it grows the file size of the document to a huge degree.  For example, one deck had 36 slides and the file size of the resulting Word document was >50MB! 

Fortunately, with some nice VBA code, you can do the following:

  1. Iterate through every shape in the document.
  2. Copy the PowerPoint object.
  3. Paste that object as a JPEG.
  4. Delete the PowerPoint object.

That brings the file size back down to where it should be, from my perspective.  For example, in that file I mentioned previously, it brought the size down from 55MB to 642KB.  Talk about a tremendous improvement! 

The VBA code to use is as follows:

    Sub ConvertPowerPointToImage()
        
        Dim j As InlineShape
        For Each j In Word.ActiveDocument.InlineShapes
            j.Select
            Selection.Copy
            ' Paste the copied object back as a picture (JPEG)
            Selection.PasteSpecial Link:=False, _
                      DataType:=15, _
                      Placement:=wdInLine, _
                      DisplayAsIcon:=False
            j.Delete
        Next j
    End Sub

Just copy/paste the above code into a new module.  Then execute the “ConvertPowerPointToImage” macro.  That will clean everything up and make the file a lot more manageable. 

Enjoy!

Friday, 23 July 2010 22:51:28 (Central Daylight Time, UTC-05:00) ( .NET | .NET Upgrade | Access Database | ASP.NET | Development | Security )

For an internal application at my company, very particular users have the rights to download an MDB that contains links to a replicated instance of a DB.  This was done so they could create their own queries without any assistance from IT - essentially a poor man's Cognos or Reporting Services.  This worked fine in .Net 1.1.

We are currently in the process of upgrading this application to .NET 2.0, and one of the developers on the project brought to my attention that the download was failing with an error message.

She did some research and uncovered this forum post with step by step instructions on how to re-enable the downloading of an Access database.

Please keep in mind, though, that this was done for security purposes, so any page that links to a secured Access DB should be secured in a variety of ways:

1.  Code Access Security:

    <PrincipalPermission(SecurityAction.Demand, _ 
                         Authenticated:=True, _
                         Role:="Secured User")> _

This should be placed in the code-behind on the page with the link to the file.  As you can see, the "Role" property will secure this page to just a user in a particular role.  If you have a page that is accessible to multiple roles, you can stack the class attributes on top of each other as seen here:

    <PrincipalPermission(SecurityAction.Demand, _
                         Authenticated:=True, _
                         Role:="First Role")> _
    <PrincipalPermission(SecurityAction.Demand, _
                         Authenticated:=True, _
                         Role:="Second Role")> _
    Partial Class PagesToBeSecured

2.  File Access Restriction:

Also, by putting a web.config in the folder that houses the Access DB, you can secure access to the contents of a particular folder.  For example with a structure of:

    Root/Downloads/Database/db.mdb

You can put a web.config in the "Database" folder that restricts access to that folder to a particular user/role/etc.  For example, the web.config in this example would look something like:

    <?xml version="1.0" encoding="utf-8"?>
    <configuration>
        <system.web>
            <authorization>
                <allow roles="Role 1" />
                <allow roles="Role 2"/>
                <deny users="*" />
            </authorization>
        </system.web>
    </configuration>

As you can see, we are only allowing users in roles Role 1 and Role 2 to access the contents of this folder.  If any other users attempt to access the file, they will get a “Page Cannot Be Found” error.

In conclusion, if you are going to remove this particular restriction in IIS to allow users to download Access DB files directly from your web application, then you shouldn’t completely discount security; you should implement both of the measures above.

Enjoy!

Friday, 23 July 2010 22:42:12 (Central Daylight Time, UTC-05:00) ( .NET | .NET Upgrade | ASP.NET | Development )

This has been documented in numerous places including here, here, here (with an incorrect fix) and here.

Essentially, if you set the ReadOnly property of a TextBox either in the markup or in the code-behind, any changes you make to the value of that TextBox client-side (i.e., via JavaScript) will not be persisted across postbacks.  I had found this method extremely effective for calendar controls and the like.  You just provide a link to a popup and then, when the user selects an item in the popup, you populate the underlying TextBox with the value from the popup.  Unfortunately, out of the box, that functionality has been taken away.

The code in the TextBox control that signals to the page that text boxes using the ReadOnly property should not have their Text saved in ViewState (or loaded from the form post) is in a few places:

    Protected Overrides Function SaveViewState() As Object
        If Not Me.SaveTextViewState Then
            Me.ViewState.SetItemDirty("Text", False)
        End If
        Return MyBase.SaveViewState
    End Function

    Private Shadows ReadOnly Property SaveTextViewState() _
                                                As Boolean
        Get

            If (Me.TextMode = TextBoxMode.Password) Then
                Return False
            End If
            If (((MyBase.Events.Item(EventTextChanged) Is Nothing) _
                    AndAlso MyBase.IsEnabled) _
                    AndAlso ((Me.Visible AndAlso Not Me.ReadOnly) _
                    AndAlso (MyBase.GetType Is GetType(TextBox)))) Then
                Return False
            End If

            Return True
        End Get
    End Property

In the SaveTextViewState property, part of the conditional statement specifies that if the ReadOnly property is set to true, then the Text value is not saved in ViewState.

Also, in the LoadPostData() method, there is another check to make sure that the control's ReadOnly property is not set to true before it loads the data from the form post:

    Protected Overrides Function LoadPostData( _
                    ByVal postDataKey As String, _
                    ByVal postCollection As _
                        NameValueCollection) As Boolean
        MyBase.ValidateEvent(postDataKey)
        Dim text As String = Me.Text
        Dim text2 As String = _
                    postCollection.Item(postDataKey)
        If (Not Me.ReadOnly _
            AndAlso Not [text].Equals(text2, _
                StringComparison.Ordinal)) Then
            Me.Text = text2
            Return True
        End If

        Return False
    End Function

If the ReadOnly property is set to true, then it will not set the Text property of the TextBox to be the value from the Form post.

The fix that people are touting is relatively simple.  In the code behind of the page, you can simply add the attribute manually to the TextBox like this:

    Protected Sub Page_Load(ByVal sender As Object, _
            ByVal e As System.EventArgs) Handles Me.Load
        With Me.txtTest
            .Attributes.Add("readonly", "readonly")
        End With
    End Sub

And while this would work, it just feels like a bandage.  Plus, if you have a big web project you are converting from .NET 1.1 to .NET 2.0 that contains text boxes all over the place with ReadOnly="True" set, then this would be a lot of code to include on every single page.  Even if you have a shared function somewhere that you can call, that means you still have to call the function for each TextBox that you wish to set as read-only.

An alternative is to create your own TextBox class that inherits from the existing TextBox and handles the ReadOnly property in the fashion mentioned above.  An example of the code for the new TextBox is as follows:

    Namespace WebControls
        Public Class TextBox
            Inherits System.Web.UI.WebControls.TextBox

            Public Overrides Property [ReadOnly]() As Boolean
                Get
                    Return False
                End Get
                Set(ByVal value As Boolean)
                    If value Then
                        Me.Attributes("readonly") = "readonly"
                    Else
                        Me.Attributes.Remove("readonly")
                    End If
                End Set
            End Property

        End Class
    End Namespace

That's fine for new TextBoxes, but what about my hundreds and hundreds of TextBoxes that are already in my application?  Well, .Net 2.0 allows you to use something called tagMappings in your web.config.  Essentially, they are a way to redirect the compiler to use a different class for an existing mapping when it compiles your pages.  From MSDN: 

"Defines a collection of tag types that are remapped to other tag types at compile time."

So, how do we use this neat feature to solve this problem?  Well, in the web.config, we will tell the .Net compiler to use our new TextBox control instead of the existing System.Web.UI.WebControls.TextBox control.  In order to do this, you simply add the following to your web.config:

    <pages>
          <tagMapping>
            <add tagType="System.Web.UI.WebControls.TextBox" 
                 mappedTagType="WebControls.TextBox"/>
          </tagMapping>
    </pages>

The tagType attribute is the existing tag you would like to change (System.Web.UI.WebControls.TextBox) and the mappedTagType is the new TextBox control from above.

With this tag added in the web.config - whenever you have the following in a page:

    <asp:TextBox runat="server" 
                 ID="txtTest" 
                 ReadOnly="true" />

Your application will use the TextBox defined directly above.

Enjoy!