The Most Moronic Value Converter Ever

§ August 26, 2009 06:43 by beefarino |

I'm really digging that I get to work with WPF at the moment.  Using advanced databinding features against MVVM is making quick work of the menial "datapult" applications I'm pounding out.  I still have a lot to absorb, but I'm getting there.

Being a n00b to XAML and WPF, I often find myself lost in a fog when things don't work.  E.g., this afternoon I spent a good 30 minutes trying to decipher a binding problem.  I was trying to use the data context of a TreeViewItem in the text of its header template (you can laugh at my XAML, this is just me hacking):

<TreeViewItem ItemsSource="{Binding Profiles}">
    <TreeViewItem.HeaderTemplate>
        <DataTemplate>
              <TextBlock Text="{Binding Count}"/>
        </DataTemplate>
    </TreeViewItem.HeaderTemplate>
</TreeViewItem>

I tried several binding paths in the TextBlock, but nothing would show up in the tree node.  I ended up switching gears and trying to figure out what data context was actually available to the header data template.  Finding nothing from a cursory googling, I ended up creating a value converter that would convert the data object into its underlying type:

using System;
using System.Globalization;
using System.Windows.Data;

namespace My.Converters
{
    public class ObjectToTypeConverter : IValueConverter
    {
        // Returns the runtime type of whatever value is bound, or the
        // string "null" when there is no data context to inspect.
        public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
        {
            if( null == value )
            {
                return "null";
            }
            return value.GetType();
        }

        // One-way converter; converting back makes no sense here.
        public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
        {
            throw new NotImplementedException();
        }
    }
}

The Convert implementation returns the System.Type of the data context, or the string "null" if no context is available.  At this point I can alter the XAML to simply spit out the actual type of the data context, giving me a clue as to why I'm clueless:

<Window ...
        xmlns:Converters="clr-namespace:My.Converters">
...
<Window.Resources>
    <Converters:ObjectToTypeConverter x:Key="ObjectToTypeConverter" />
</Window.Resources>
<TreeViewItem ItemsSource="{Binding Profiles}">
    <TreeViewItem.HeaderTemplate>
        <DataTemplate>
            <TextBlock Text="{Binding Converter={StaticResource ObjectToTypeConverter}}"/>
        </DataTemplate>
    </TreeViewItem.HeaderTemplate>
</TreeViewItem>
... 

Anyway, long story short, in my particular case the data context for the TreeViewItem header template is actually null - which makes no sense to me; I would have assumed it would be the TreeViewItem's data context.  A quick relative sourcing of the binding solved my issue:

<TreeViewItem ItemsSource="{Binding Profiles}">
    <TreeViewItem.HeaderTemplate>
        <DataTemplate>
            <TextBlock Text="{Binding Path=Profiles.Count}" 
              DataContext="{Binding DataContext,RelativeSource={RelativeSource Mode=FindAncestor,AncestorType={x:Type TreeViewItem}}}"/>
        </DataTemplate>
    </TreeViewItem.HeaderTemplate>
</TreeViewItem> 

 



Twitter Appender for Log4Net

§ July 7, 2009 04:13 by beefarino |

Casey Watson just published a post describing how to create a Twitter appender for log4net!  Not only is the idea creative and fun, but it's also great sample code if you want to build your own custom appenders.

Please check it out here: http://caseywatson.com/2009/07/07/log4net-twitter-awesome/

NICE WORK CASEY!



Death by Logging Mantra #1 - Logs Consume Space

§ June 15, 2009 18:03 by beefarino |

As much as I love and lean on logging, it's not immune from problems.  In fact, it can be the source of some serious headaches.  A recent snafu at work prompted me to write about some situations where logging can bring your application performance to a screeching halt, or crash your application altogether. 

Here's what happened...

An incredibly complex system of hardware and software had been running smoothly for months; as part of an instrumentation layer we opted to change the rolling log strategy from fifty 10MB text files:

...
<appender name="TextFile" type="log4net.Appender.RollingFileAppender">
    <file value="logs\log.txt" />
    <appendToFile value="true" />
    <rollingStyle value="Size" />
    <maxSizeRollBackups value="50" />
    <maximumFileSize value="10MB" />
    <layout type="log4net.Layout.XMLLayout">
      <prefix value="" />
    </layout>
</appender>
... 

to five hundred 1MB XML files:

...
<appender name="XmlFile" type="log4net.Appender.RollingFileAppender">
    <file value="logs\log.xml" />
    <appendToFile value="true" />
    <rollingStyle value="Size" />
    <maxSizeRollBackups value="500" />
    <maximumFileSize value="1MB" />
    <staticLogFileName value="false" />
    <countDirection value="1" />
    <layout type="log4net.Layout.XMLLayout">
      <prefix value="" />
    </layout>
</appender>
... 

As an ardent log4net user, I am aware of the performance impact of rolling a large number of files - if the CountDirection setting of the RollingFileAppender is less than 0 (which it is by default), the system will rename every log file each time the log rolls over.  This is costly, and in our product configuration that would mean up to 500 file renames on each roll. 

"Good thing I know what I'm doing...."

Several hours after firing up the new configuration, a colleague asked me to come look at the device.  It had slowed to a wounded crawl.  I went to dig into the logs - I popped over to the log drive and started looking for the most recent one.

... but there were 2000 log files, not 500.  The 2GB drive dedicated to the log was completely full.  And the application was still trying to write new log entries, which meant a slew of IO Exceptions were being continuously thrown by multiple threads.

"Um, I think I may have found the problem..."

I removed the oldest 1999 log files and the device immediately recovered.

So what happened?

The configuration XML is 100% correct.  The problem was that I accidentally deployed the software to the device with an old beta version of log4net, 1.2.9; that particular version contains a bug in the RollingFileAppender code that prevents MaxSizeRollBackups from being honored when CountDirection is >= 0.  Without the logs being limited in number, the software eventually filled up the hard disk with log entries.

Which brings me to my first death-by-logging mantra...

Logs Consume Space

It sounds silly, I know, but this is the single most prevalent antipattern I see with any logging implementation.  There is a finite amount of storage space, and you need to make sure your logging activity doesn't consume more than its share.

I frequently see this when apps use FileAppender - this beast has no chains and, as I've stated elsewhere, you should never ever use it.  Ever.  Even in "little" applications it can cause massive trouble because the log accumulates across process lifetime and application runs with no checks.  I've seen a 1KB application with 3GB of logs spanning almost a year of activity.

But don't think the problem is limited to the file-based appenders.  Remember....

  • memory is finite...
  • event logs fill up more often than you think...
  • a database can be configured to boundlessly expand as needed...
  • mail servers generally put caps on the size of an inbox...

Whichever appender strategies you choose, you should carefully consider the following:

... How much persistent storage are you allocating to the log?  Your answer should be a firm number, like "500MB", and not "the rest of the disk".  If you can, base this on the amount of information you need to have access to.  If a typical run of your application results in 10MB of logs, you can base the allocated size on the number of runs you want to persist.  If your application runs continuously - like a web site or windows service - you can plot out the logging activity over time, then gauge the allocation size from the time span of activity you want to persist.

... How will you cope when the allocated storage is used up?  Some appenders, like the RollingFileAppender, can handle this for you by freeing up space used by old log entries.  Others, like the EventLogAppender or the AdoNetAppender, blithely log without regard to the amount of space being consumed, and it's up to you to manage the size of the log in other ways.  E.g., I've seen SQL jobs dedicated to removing log records older than N hours, or truncating the log table to the N newest records - there's a rough sketch of that kind of cleanup after these considerations.

... What happens when you log to a full repository?  Plan for success, but understand the causes of failure.  As I recently learned, our application slows down significantly when the log drive is full, so checking the free space of the log drive is now included in our troubleshooting guide as a checklist item under "Application Performance Issues".  Test your application under limited logging resources to understand how it will behave.
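
As promised above, here is a minimal sketch of the kind of log-table cleanup task I'm describing.  It assumes the common AdoNetAppender sample schema - a table named Log with a Date column - so treat the table name, column name, and helper shape as placeholders rather than anything from our actual system:

using System;
using System.Data.SqlClient;

public static class LogTableJanitor
{
    // Removes log records older than the given number of hours.
    // "Log" and "Date" are hypothetical names taken from the common
    // AdoNetAppender sample schema - adjust them to match your table.
    public static void TrimOlderThan( string connectionString, int hours )
    {
        using( SqlConnection connection = new SqlConnection( connectionString ) )
        using( SqlCommand command = new SqlCommand(
            "DELETE FROM [Log] WHERE [Date] < @cutoff", connection ) )
        {
            command.Parameters.AddWithValue( "@cutoff", DateTime.Now.AddHours( -hours ) );
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}

Run something like this on a schedule - a SQL job or a scheduled task works equally well - and size the retention window against the storage allocation you settled on above.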

The most important thing to remember is that logging, like any other subsystem of your application, needs to be planned, tested, and verified.



Creating a PowerShell Provider pt 0: Gearing Up

§ May 26, 2009 01:11 by beefarino |

Last month I gave a talk at ALT.NET Charlotte on using PowerShell as a platform for support tools.  The majority of the talk revolved around the extensible provider layer available in PowerShell, and how it enables support engineers to accomplish many things by learning a few simple commands.  Creating providers isn't hard, but there is a bit of "black art" to it.  Given the lack of adequate examples and documentation out there, I thought I'd fill the void with a few posts derived from my talk.

In these posts, I will be creating a PowerShell provider around the ASP.NET Membership framework that enables the management of user accounts (for more information on ASP.NET Membership, start with this post from Scott Gu's blog).  Basically, this provider will allow you to treat your ASP.NET Membership user repository like a file system, allowing you to perform heroic feats that are simply impossible in the canned ASP.NET Web Site Management Tool. 

Like what, you ask?   Consider this:

dir users: | 
   where { !( $_.isApproved ) -and $_.creationDate.Date -eq ( ( get-date ).Date ) } 

which lists all unapproved users created today.  Or this:

dir users: | where{ $_.isLockedOut } | foreach{ $_.unlockUser() } 

which unlocks all locked users.  Or this:

dir users: | 
   where { ( (get-date) - $_.lastLoginDate ).TotalDays -gt 365 } |
   remove-item 

which removes users who haven't authenticated in the past year.  These are things you simply can't do using the clumsy built-in tools.  With a simple PowerShell provider around the user store, you can accommodate these situations and many others you don't even know you need yet.

This first post is aimed at getting a project building, with the appropriate references in place, and with the debugger invoking powershell properly.  The next few posts will focus on implementing the proper extensibility points to have PowerShell interact with your provider.  Future posts will focus on implementing specific features of the provider, such as listing, adding, or removing users.  Eventually I will discuss packaging the provider for distribution and using the provider in freely available tools, such as PowerGUI.

Prerequisites

You'll need the latest PowerShell v2 CTP; at the time of this writing, the latest was CTP3 and it was available here.  Since we're working with a CTP, you should expect some things to change before the RTM; however, the PowerShell provider layer is unchanged from v1.0 in most respects.  The only major difference is with regards to deployment and installation, which I'll discuss at the appropriate time.

If you are currently using PowerShell v1.0, you will need to uninstall it.  Instructions are available here and here.

The .NET Framework 3.5, although not required to run PowerShell v2 CTP3, is required for some of the tools that accompany the v2 release.

Finally, I suggest you fire up PowerShell on its own at least once, to verify it installed correctly and to loosen up the script execution policy.  To save some grief, one of the first commands I execute in a new PowerShell install is this one:

set-executionPolicy RemoteSigned

This will enable the execution of locally-created script files, but requires that downloaded script files be signed by a trusted authority before they can be executed.

Setting Up the Project

Create a new Class Library project in Visual Studio named "ASPNETMembership".  This project will eventually contain the PowerShell provider code.

You will need to add the following references to the project:

  • System.Web - this assembly contains the MembershipUser type we will be exposing in our PowerShell provider;
  • System.Configuration - we'll need to access the configuration subsystem to properly set up our membership provider;
  • System.Management.Automation - this contains the PowerShell type definitions we need to create our provider.  You may have to hunt for this assembly; try here: C:\Program Files\Reference Assemblies\Microsoft\WindowsPowerShell\v1.0.

Now that the necessary references are in place, it's time to configure debugging appropriately. 

Open the project properties dialog and select the Debug tab.  Under Start Action, select the "Start external program" option.  In the "Start external program" textbox, enter the fully qualified path to the PowerShell executable: c:\windows\system32\windowspowershell\v1.0\powershell.exe.

Yes, even though you're running PowerShell v2, it still lives in the v1.0 directory.

In the "Command line arguments textbox, enter the following:

-noexit -command "[reflection.assembly]::loadFrom( 'ASPNETMembership.dll' ) | import-module"

The -command argument instructs PowerShell to execute the supplied command pipe string as if it were typed in at the console.  It may not be obvious what the command pipe string is doing.  In a nutshell, the first command in the pipe loads the ASPNETMembership project output dll into the AppDomain.  The second command in the pipe causes PowerShell to load any cmdlets or providers implemented in the dll.  I'll touch on the import-module command more in a future post.

The -noexit argument prevents PowerShell from exiting once the command has been run, which enables us to type in commands and interact with the console while debugging.

Test the Project Setup

Build and run the empty project.  A PowerShell console should launch.

In the console, run the following command, which lists all assemblies loaded in the currently executing AppDomain whose FullName property matches 'aspnet':

[appdomain]::currentDomain.GetAssemblies() | 
   where { $_.FullName -match 'aspnet' } | 
   select fullname

Verify that the ASPNETMembership assembly is listed in the output of this command.

If it isn't, double-check the command-line arguments specified in the project properties -> debug tab.  Specifically, ensure that:

  • the command arguments are entered exactly as specified above, and
  • the working directory is set to the default (empty) value

That's it for now - in the next post we will begin implementing the MembershipUsers PowerShell provider and enable the most basic pieces necessary to list the users in the membership store.
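
To give a rough idea of where we're headed, here's a hedged sketch of the shape the provider class might take.  The class name, the users: drive name, and the choice of base class are all assumptions at this point - the real implementation gets built up over the next several posts:

using System.Collections.ObjectModel;
using System.Management.Automation;
using System.Management.Automation.Provider;

namespace ASPNETMembership
{
    // Hypothetical skeleton only - names and base class are placeholders.
    [CmdletProvider( "ASPNETMembership", ProviderCapabilities.None )]
    public class MembershipUserProvider : ContainerCmdletProvider
    {
        // Creates the default drive exposed by the provider, e.g. the
        // users: drive used in the examples at the top of this post.
        protected override Collection<PSDriveInfo> InitializeDefaultDrives()
        {
            return new Collection<PSDriveInfo>
            {
                new PSDriveInfo( "users", ProviderInfo, "/", "ASP.NET Membership users", null )
            };
        }

        // Invoked for "dir users:"; a later post will enumerate
        // MembershipUser objects here via WriteItemObject.
        protected override void GetChildItems( string path, bool recurse )
        {
        }
    }
}

Whether we ultimately derive from ContainerCmdletProvider or NavigationCmdletProvider is one of the decisions the next post will walk through.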