A Look at the Live Labs Simple Logging Framework

§ March 6, 2009 05:47 by beefarino |

Earlier this week Scott Hanselman tweeted a link to the MS Live Labs Simple Logging Framework (SLF).  I thought I'd check it out and post some comparisons to log4net for posterity.  Note this isn't a performance comparison, just a look at how the frameworks are used.

Spoiler: There are some neat ideas in here.  Check out SLF if you're new to logging, but my guess is that you'll quickly outgrow its capabilities.  If you're already logging, I don't see any reason to switch from more mature frameworks anytime soon.

A Quick Comparison

SLF is remarkably similar to log4net in its design.  SLF uses the concepts of appenders (SLF calls them "logs"), loggers (SLF calls them "loggers" too), and logging levels just like log4net.  In fact, virtually nothing changed when porting my "hello world" log4net example to SLF:

 

using System;
using LiveLabs.Logging;
namespace SimpleLoggingFrameworkExample
{
    class Program
    {
        static void Main( string[] args )
        {
            Logger Log = new Logger( typeof( Program ) );
            Log.Level = LogLevel.Debug;            
            
            Log.Debug( "Hello World!" );  
            Log.Inform( "I'm a simple log4net tutorial." );  
            Log.Warn( "... better be careful ..." );  
            Log.Error( "ruh-roh: an error occurred" );  
            Log.Fatal( "OMG we're dooooooomed!" );
            
            Console.ReadLine();
        }
    }
}

The most annoying change (and the cause of a failed build) was the rename of log4net's ILog.Info method to SLF's Logger.Inform.  Seems silly to me to change that one method name when the rest of the interface is identical....
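
For the record, the entire diff between the two examples:

// log4net
Log.Info( "I'm a simple log4net tutorial." );

// SLF
Log.Inform( "I'm a simple log4net tutorial." );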

Simple Formatting Control

SLF defaults to outputting to the process STDERR stream at a log level of Inform.  The Log.Level assignment in the example drops the level to Debug so we can see all of the output, which is formatted rather heinously IMO:

2009-03-06T20:49:57|   |No Thread Name|SimpleLoggingFrameworkExample.Program|Hello World!
2009-03-06T20:49:57|   |No Thread Name|SimpleLoggingFrameworkExample.Program|I'm a simple log4net tutorial.
2009-03-06T20:49:57|   |No Thread Name|SimpleLoggingFrameworkExample.Program|... better be careful ...
2009-03-06T20:49:57|   |No Thread Name|SimpleLoggingFrameworkExample.Program|ruh-roh: an error occurred
2009-03-06T20:49:57|   |No Thread Name|SimpleLoggingFrameworkExample.Program|OMG we're dooooooomed! 

Fortunately SLF provides a straightforward way to alter the message format. Every log object has a MessageFormatter property that can be set to a delegate to format messages for the log. Adding a bit of code to the example:

            Log.Level = LogLevel.Debug;                        
            Log.MessageFormatter = ( msg, log, level ) => String.Format( "{0} [{1}]: {2}", log, level, msg );
            Log.Debug( "Hello World!" );  
makes the output far more readable:
SimpleLoggingFrameworkExample.Program [Debug]: Hello World!
SimpleLoggingFrameworkExample.Program [Inform]: I'm a simple log4net tutorial.
SimpleLoggingFrameworkExample.Program [Warn]: ... better be careful ...
SimpleLoggingFrameworkExample.Program [Error]: ruh-roh: an error occurred
SimpleLoggingFrameworkExample.Program [Fatal]: OMG we're dooooooomed! 

A similar callback exists to format exception objects.  It is certainly easier to achieve this level of formatting control in SLF than it is in log4net, where you have to define custom formatter objects and layout patterns in your configuration. 
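
For comparison, here's roughly what the same formatting takes in log4net - a sketch of the layout element you'd add to an appender in your configuration (adjust the conversion pattern to taste):

<layout type="log4net.Layout.PatternLayout">
  <conversionPattern value="%logger [%level]: %message%newline" />
</layout>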

Configuration

SLF doesn't require an explicit call to initialize it with a configuration the way log4net does.  As I've pointed out, SLF defaults to logging to STDERR at the Inform level.

If you do want some control, SLF does offer a Settings class that provides programmatic access to configure defaults and specifics for any of the loggers you've created:

Settings.Default.DefaultLogLevel = LogLevel.Debug; 

However, your options are limited.  You can set a default log level and log for all logger objects, and you can register message and exception formatters.  One glaring omission from the configuration space is the ability to configure default message and exception formatters for all loggers.  Looks like you're stuck setting them individually for now.
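
If that bothers you, a trivial wrapper can fake a global formatter.  This is just a sketch built from the SLF members shown in this post - the LogFactory class is my own invention, not part of SLF:

// assumes: using System; using LiveLabs.Logging;
public static class LogFactory
{
    // hand out loggers pre-stamped with a shared message format
    public static Logger GetLogger( Type type )
    {
        Logger log = new Logger( type );
        log.MessageFormatter = ( msg, l, level ) =>
            String.Format( "{0} [{1}]: {2}", l, level, msg );
        return log;
    }
}

Swap new Logger( typeof( Program ) ) for LogFactory.GetLogger( typeof( Program ) ) and every logger shares one format.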

Language Feature Support

Another area where SLF improves on log4net is in its use of newer C# language features (log4net is based on the 2.0 version of the language).  Every log method in SLF accepts a lambda expression as an argument:

// (this snippet also needs "using System.Collections;" and "using System.Linq;")
Log.Debug( () => {
        IDictionary envs = Environment.GetEnvironmentVariables();
        return "Current Environment: " + String.Join( ";", 
            (
                from string k in envs.Keys
                select ( k + "=" + envs[ k ].ToString() )
            ).ToArray() 
        );
    }
);

This is an excellent way to isolate complex logging logic, and to avoid expensive calls to obtain log message formatting arguments when no logging will actually take place; e.g.:

Log.Debug( () => 
    String.Format( "Result: [{0}]", obj.AVeryExpensiveOperation() ) ); 

To achieve the same effect in log4net, you have to explicitly wrap the code in a check of current log level:

if( Log.IsDebugEnabled )
{
    Log.DebugFormat( "Result: [{0}]", obj.AVeryExpensiveOperation() );
}

Personally, I prefer the former.  

Appender Options

This is where log4net beats SLF into a pulp.  log4net ships with a cornucopia of appender options, including the console, files, the event log, the debugger, the .NET trace subsystem, MSMQ, remoting, network endpoints, and various databases.  SLF ships with basically the equivalent of the log4net ConsoleAppender, FileAppender, and RollingFileAppender.

In addition, SLF offers no built-in decoupling of log formatting from your logging code the way log4net does.  I've shown in this post how easily you can override the message or exception format on an SLF logger object in your code, but consider also that there is no way to change this message format without changing your code.  E.g., if you someday decide to move from a text log to an XML log, you have to touch your code.  The log4net formatting configuration can be isolated to the app.config, making it tweakable while leaving your code alone.
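
To make that concrete: in log4net, moving from a text log to an XML log is a config edit - swap the layout on the appender and redeploy, no code changes.  A sketch (both layout types ship with log4net; the file name is a placeholder):

<appender name="FileAppender" type="log4net.Appender.FileAppender">
  <file value="app.log" />
  <!-- text today: -->
  <layout type="log4net.Layout.PatternLayout">
    <conversionPattern value="%date %logger [%level] %message%newline" />
  </layout>
  <!-- XML tomorrow: replace the layout element above with
       <layout type="log4net.Layout.XmlLayout" /> -->
</appender>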

Summary

To sum up, the major differences between log4net and SLF are:

  • log4net offers far more appender options and configurability.
  • log4net has a larger user-base and more maturity.
  • SLF provides easier access to simple formatting options for messages and exceptions.
  • SLF supports the latest language features, while log4net does not.

I'll be honest - I like some of the ideas I see in the SLF project, but I really can't figure out the impetus for a new project when a mature and open framework already exists.  There is this blurb from the project home page on CodePlex:

Existing logging libraries are either too feature-light or too complex. That means you might end up spending as much time debugging your logging code as you do your application code. Or you may not even know that your “log” to database is even working until it’s too late. 

Yes, SLF makes a few things easier, but on the whole log4net offers far more features for the same effort.  And I can't say I've encountered these difficulties with log4net - I don't consider log4net complex (in comparison to, say, the MS Enterprise Logging Framework), but that's probably because I've been using log4net for years.  So let's say for argument's sake that log4net is overly complex and difficult to use.  The question at the front of my mind is this:

Why start over when you can openly contribute to the log4net project and make it better?

There is nothing earth-shaking in SLF that I see mandating a new project effort.  And by not contributing to log4net and starting over, you're dismissing the features and maturity it has to offer.

[insert "MS hubris!  They don't 'get' the open source thing...."  comment here]



Correlating and Grepping Logs with PowerShell

§ November 6, 2008 16:44 by beefarino |

I've spent the last few days doing a post-mortem on a colossal system failure at a client site.  We have plenty of data on the failure, in the form of log4net application logs and Windoze event logs, but the sheer amount of data at our disposal is making interpreting and correlating those logs a ginormous task.  Some of the logs are XML, others are CSV, so making correlations between them has involved manually searching multiple spreadsheets and text files for information.  Correlating across the logs is made even more difficult because the event log export contains datetime instants that are local to the machine, whereas the log4net logs contain UTC timestamps.

So what do I do when presented with such a Gordian Knot?  I select a katana sharp enough to shred the knot into pieces.  Lately my weapon of choice for this type of problem has been powershell.  My goal is to be able to correlate events by time and find errors or warnings across all of these logs.  To that end, I'll be using some powershell Shikai to create object hierarchies out of XML and CSV data.  In addition, I'll release powershell Bankai and extend those objects with custom properties that allow both types of logs to be combined and searched as if they were a single log.

Importing Log4Net XML Log Files

My application logs use the XML formatter built into the standard log4net distribution, which produces a partial XML document (as described here) similar to the following:

<log4net:event logger="MyApp" timestamp="2008-11-04T04:42:42.2612828-07:00" level="INFO">
    <log4net:message>validating xml digital signature on document</log4net:message>
</log4net:event>
<log4net:event logger="MyApp" timestamp="2008-11-04T04:42:43.1279382-07:00" level="INFO">
    <log4net:message>digital signature has passed validation</log4net:message>
</log4net:event>
...

Note there is no root element containing the <log4net:event /> elements, so the import script does a little fernagling to make the log appear to be a valid XML document:

# import-logxml
#
# imports a log4net application log file
#
# file: path to xml log file
#

param( $file )

# surround the log file contents with a root element
$xml = '<log xmlns:log4net="uri:doesnotmatter">' + ( gc $file ) + '</log>';

# load the log as XML and slurp out every log event
( [xml] $xml ).log.event |

    # add a note property indicating this log event is from an application log file
    add-member noteproperty -name logsource -value "applicationlog" -passthru |        
        
    # add a datetimeoffset property representing the instant of the log event
    add-member scriptproperty -name datetime -value {
            [DateTimeOffset]::Parse( $this.timestamp );
        } -passthru;

The $xml assignment surrounds the contents of the log file with a root element to make the log valid XML.  The [xml] cast then uses powershell's built-in XML datatype to translate the string into an object hierarchy, and the .log.event expression slurps out every log event.  Each <log4net:event /> element from the XML log file is represented by an object containing properties corresponding to the node's attribute values and child elements.

The next two statements use powershell's add-member cmdlet to extend the log entry objects with new properties.  The first adds a simple property named "logsource" with a static value indicating that this object was created from an application log.  The second adds a calculated property named "datetime" containing the instant of the log message as a System.DateTimeOffset structure.  Having this DateTimeOffset property will give me an appropriately-typed value against which I can compare, filter, and sort log entry timestamps.

Using the import-logxml script is straightforward:

> $log = import-logxml 'myapp.log.xml'

After running the script, the $log variable contains an array of objects representing the individual log events in the XML log file:

> $log[ 0 ]
datetime          : 11/04/2008 4:42:42 AM -07:00
logger            : MyApp
timestamp         : 2008-11-04T04:42:42.2612828-07:00
level             : INFO
thread            : 5076
message           : validating xml digital signature on document

Now powershell's filtering capabilities can be used to isolate log events; e.g., by severity level:
> $log | ? { $_.level -eq 'error' }
datetime          : 11/04/2008 4:44:16 AM -07:00
logger            : MyApp
timestamp         : 2008-11-04T04:44:16.8272804-07:00
level             : ERROR
thread            : 5076
message           : an exception was raised while processing the application configuration
...

Importing Event Log CSV Files

Importing the CSV event log data is a little more involved:

# import-eventlogcsv.ps1
#
# translates an event log export from
# CSV format to an object collection
#
# file: path to CSV export of event log
# offset: a timespan indicating the UTC offset
#     of the event log source
#

param( $file, [TimeSpan] $offset = [DateTimeOffset]::Now.Offset )

# add CSV column headers, if necessary
convert-eventlogcsv $file;

# import the CSV file
import-csv $file |

    # add a note property indicating this entry is from an event log
    add-member noteproperty -name logsource -value "eventlog" -passthru |
    
    # add a note property to store the value of the UTC offset
    add-member noteproperty -name offset -value $offset  -passthru |
    
    # add a datetimeoffset property representing the instant of the log entry
    add-member scriptproperty -name datetime -value {
            new-object DateTimeOffset
                ( [DateTime]::Parse( $this.date + " " + $this.instant ),
                  $this.offset
                );
        } -passthru |
        
    # add a property that translates the event log message type
    #    into a log4net log-level value
    add-member scriptproperty -name level -value {
            switch -re ( $this.eventtype  )
            {                
                "info" { "INFO" }
                "succ" { "INFO" }
                "fail" { "INFO" }
                "warn" { "WARN" }
                "error" { "ERROR" }
            }
        } -passthru

Event log timestamps are expressed in the time zone of the source computer, and in my particular case that source computer exists in a different time zone than my laptop.  So the import script accepts an $offset parameter containing a UTC offset to indicate the time zone of the event log source computer; it defaults to the local machine's current offset.

Importing CSV data into powershell is as simple as invoking the import-csv cmdlet; however, import-csv assumes the first row in the file contains column headers.  The event log exports no such header row, so one must be added.  I've wrapped up the process of adding this header row into a script (convert-eventlogcsv.ps1) that is included in the download for this post; it is invoked just before import-csv.
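
The convert script isn't listed here, but the gist is easy to sketch.  This version assumes the column names you'll see in the output below; the copy in the download may differ:

# convert-eventlogcsv (sketch)
# prepends a header row to an event log CSV export, if one is missing
param( $file )

$header = "date,instant,source,eventtype,category,eventid,user,computer,message";
$content = @( gc $file );

# only prepend the header when the file doesn't already start with one
if( $content[ 0 ] -notmatch '^date,' )
{
    ( @( $header ) + $content ) | out-file $file -encoding ascii;
}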

The result of the import-csv cmdlet is an object array.  Each object in the array represents a single row of the CSV, with properties corresponding to each column in the row.  As with the application logs, several properties are added to each object using powershell's add-member cmdlet:

  • a "logsource" property indicating that the log event was read from an event log export;
  • an "offset" property containing the UTC offset passed into the script;
  • a "datetime" property that creates a DateTimeOffset structure representing the instant the log message was produced;
  • a "level" property that translates the event log entry type (Information, Warning, Error, etc. ) into strings matching the standard log4net log severity levels (DEBUG, INFO, WARN, ERROR, and FATAL).

Script use is straightforward:

> $evt = import-eventlogcsv 'eventlog.system.csv' ( [TimeSpan]::FromHours( -7 ) )

With the result similar to the log4net import:

> $evt[ 0 ]
date      : 11/04/2008
instant   : 4:42:42 AM
source    : MyApp
eventtype : Information
category  : None
eventid   : 2
user      : N/A
computer  : PPROS1
message   : application has started
logsource : eventlog
offset    : -07:00:00
datetime  : 11/04/2008 4:42:42 AM -07:00
level     : INFO

Correlating Events Across Logs

Now that the import scripts are available, correlating the logs is pretty simple.  First, we need to import our event and application logs:

> $log = import-logxml 'myapp.log.xml'
> $evt = import-eventlogcsv 'eventlog.system.csv' $log[ 0 ].datetime.offset

Note that the log4net log is imported first; this allows the time zone offset information it contains to be used to adjust the event log timestamps.  Once the two logs are imported, we can combine them using powershell's array concatenation operator:

> $data = $log + $evt

At this point, the $data variable contains the merged contents of the event log and application log.  The add-member cmdlet was used to extend these objects with custom properties during the import; now every merged log entry contains a logsource, datetime, and level property with identical types and semantics.  This means that we can now search, sort, and filter the combined logs using a single set of criteria.  For example, consider the following:

> $start = [DateTimeOffset]::Parse( "11/04/2008 4:40 AM -7:00" )
> $end = [DateTimeOffset]::Parse( "11/04/2008 4:45 AM -7:00" )
> $data | ? { $_.datetime -gt $start -and $_.datetime -lt $end }

which finds all events that occurred between 4:40 and 4:45 AM.  Or this:

> $data | ? { $_.level -eq 'error' -and $_.datetime -gt $start -and $_.datetime -lt $end }

which isolates all error-level events during the same time period.  Or this:

> $data | sort datetime

which outputs all log entries sorted by the time they were written.
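
One more that I find handy - a flat timeline view of the merged logs, using nothing but stock cmdlets (the columns are the properties added during import):

> $data | sort datetime | format-table datetime, logsource, level, message -auto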

Exporting the Correlated Logs

While powershell's interactive filtering capabilities are useful, you may find you want to export the newly merged logs for further analysis in a spreadsheet or other tool.  This can be easily accomplished using the export-csv cmdlet:

> $data | export-csv 'mergedlog.csv' -notypeinformation

Download the Code

The import scripts are available here: merginglogswithpowershell.zip (1.49 kb).

Enjoy!




Log4Net Recommended Practices pt 1: Your Code

§ October 14, 2008 15:49 by beefarino |

So far my log4net writings have focused on specific aspects of the log4net framework.  These next posts are about putting the pieces together.

To that end, I would like to start with one recent example of how logging saved my hide.

The Call

My company phone started cock-a-doodle-dooing about 3pm one afternoon.  It was the Director of Support calling - I'll refer to him as Bob - he was on-site trying to get a kiosk up and running for a major client, but a security configuration application was reporting a failure.  This application is basically a permissions engine that modifies DACLs on cryptographic keys, files, and registry keys based on a database of access rules.  The client was going live in an hour or so, so time was a critical factor.

"I've tried everything I know - the database is correct, the AD users are there, the security service is running, and I'm running as an administrator," he said nervously, "but it keeps failing at the same point on this one machine.  It worked on all the other machines, but this one kiosk keeps failing."

"Ok, let's open the log file.  It should be at <file path>.  Find it?" I asked.

"Um.... yep.  Ok, it's open."

"Alright.  Jump to the end, find the last line.  Read it to me."  

This is the layout of the log file for the application:

DATETIME LOGGERNAME [LEVEL] (CONTEXT) MESSAGE_STACKTRACE
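
(A log4net PatternLayout along these lines would produce that format - a sketch, since the application's actual configuration isn't shown here; log4net appends the exception text to the message on its own:)

<layout type="log4net.Layout.PatternLayout">
  <conversionPattern value="%date %logger [%level] (%ndc) %message%newline" />
</layout>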

The last two lines in the log appeared like so:

12:34:56 TaskExecutionContext [DEBUG] (ACLRuleBuilding) Attempting to locate computer domain name using WMI class [Win32_ComputerSystem] property [Domain].

12:34:56 SetCryptoKeyDACLTask [ERROR] (ACLRuleBuilding) An exception was raised while translating the user name to the domain: System.Runtime.InteropServices.COMException at ...

I had never encountered this error before in development, testing, or the two years the application's been used in the field.  However, the data in that one log message was enough for me to help Bob resolve the issue over the phone.  There were four bits of information that I gleaned from Bob's reading of that log message that allowed me to isolate the problem:

  1. The logger name told me that the SetCryptoKeyDACLTask class issued the error.
  2. The logging context told me that the error was occurring while the application was building ACL rules (as opposed to applying them).
  3. The log message told me the failure was occurring when the application was attempting to translate a username to the computer's domain.
  4. The preceding DEBUG log message told me that the application was trying to find the computer's domain name immediately before the exception was raised.

"Hmmm, haven't seen that one before either," I said, "but it looks like the app is failing when it's trying to determine the computer's domain.  Any idea why this might be happening on this machine as opposed to the others?"

"Well, I had to add the machine to the domain and then ...."  I hear a loud *smack* as Bob hits himself in the head.  "Okay, I think I figured it out."

It turns out that Bob had neglected to reboot the kiosk after adding it to the domain, so when my application started spelunking through WMI to determine the computer's domain, an exception was being raised.  After a few minutes, the kiosk was running perfectly, and Bob and the client enjoyed a smooth go-live.

All things considered, Bob would have resolved this issue on his own eventually by blindly rebooting the machine and starting the security configuration process over again.  The point is that the application log contained enough information to isolate the problem to a very specific cause without resorting to looking at the source code.  COMExceptions are notoriously ambiguous; if that were all we had to go on, we might not have been able to determine the cause of the exception.

Some Recommended Practices

Let's take a look at some of the code that produced the log message:

namespace Log4Net_BestPractices1
{
    public class SetCryptoKeyDACLTask : ITask
    {
        static log4net.ILog Log = log4net.LogManager.GetLogger(
            System.Reflection.MethodBase.GetCurrentMethod().DeclaringType
        );
        
        string userName;
        
        public void Execute( ITaskExecutionContext context )
        {        
            if( Log.IsDebugEnabled )
            {
                Log.DebugFormat( "Executing task under context [{0}]...", ( null == context ? "null" : context.Description ) );
            }
            
            // ...
            
            using( log4net.NDC.Push( "ACLRuleBuilding" ) )
            {
                // ...
                string principalName = null;
                
                try
                {
                    principalName =
                        context.TranslatePrincipalNameToDomain(
                            userName
                        );
                }
                catch( Exception e )                
                {
                    Log.Error(
                        "An exception was raised while translating the user name to the domain",
                        e
                    );
                    
                    throw;
                }
                
                // ...
            }
            
            // ...
            
            Log.Debug( "Task execution has completed." );
        }
    }
}

This code demonstrates some effective logging techniques I recommend you employ in your own applications:

  1. Use a unique logger object for each type in your application (see the static Log field at the top of the class).  This will automagically tag each log message with the class that wrote it.  Using this technique provides a log that reads like a novel of your application's activity.  
  2. Whenever you catch an exception, log it (see the Log.Error call in the catch block, and the snippet after this list).  Even if you just plan to throw it again, log it.  In addition, log4net knows how to format Exception objects, so don't try to build your own string from the exception data.
  3. Don't be afraid to pepper your application with piddly little log messages (see the Log.DebugFormat and Log.Debug calls at the start and end of Execute).  Each one becomes a marker of activity and can prove invaluable when trying to work an issue.  While logging does consume resources, that consumption can be controlled and optimized (something I'll talk about in the next post).  In all but the most CPU-intensive applications, the impact of logging won't be noticed when configured properly.
  4. Whenever you use one of the *Format overrides (see the DebugFormat call), be extra-special-careful about validating the argument list.  It's very easy to forget to check for null references before using a property accessor, for example, because "it's just logging code."  
  5. Always remember that the argument list is evaluated before the logging method is called.  If an argument to the log method is expensive to obtain, be sure to guard your log statement with a check of the appropriate Is*Enabled property (see the Log.IsDebugEnabled check).
  6. Before an object calls on a shared dependency, consider pushing a tag onto the log context stack (see the NDC.Push call).  This will provide continuity in the log, allowing you to determine the caller of the shared code that logged a particular message.
  7. Whenever you use a formatted log statement, surround the format argument placeholders (the {}'s) with brackets or parentheses (as in the DebugFormat call).  Doing this marks the areas in each log message that vary, making the log a bit easier to scan.  In addition, it makes empty or null formatting arguments more obvious.  For instance, if the DebugFormat message was formatted with an empty context.Description value, the message would appear as "Executing task under context []..."  Without those brackets, the same message would be written as "Executing task under context ..."  The former message makes it obvious that the context failed to provide a description value; the latter does not.
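
To illustrate practice 2, compare these two ways of logging the caught exception - the first flattens it to its Message string and throws away the stack trace and any inner exceptions; the second hands the Exception object to log4net, which formats all of it:

// don't: builds a string, loses the stack trace
Log.Error( "translation failed: " + e.Message );

// do: log4net formats the entire exception
Log.Error( "An exception was raised while translating the user name to the domain", e );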

Coming Up

In upcoming posts I'll discuss log4net configuration scenarios and log reader clients.

In the meantime, if you have any additional logging practices you'd like to share, please feel free to comment on this entry!



Log4Net Tutorial pt 8: Lossy Logging

§ September 11, 2008 15:42 by beefarino |

Ok, so I know I've said twice now that I was going to relay some real-life log4net experiences, but a few days ago Georg Jansen (author of the very kewl Log4Net Dashboard, which I plan to describe in a future post) demonstrated a feature of log4net I've never used before: lossy logging.  After using the feature for a few days, I'm completely sold on it and decided to write up another tutorial.

What is Lossy Logging?

Logging is a very useful tool, but it's not something you want to leave enabled in production if you don't need it.  Each log message takes a few cycles away from your application, and the persistence of messages does consume system resources.  Of course you can configure log4net to disable logging, either by setting the root logger level to OFF or removing all appenders from the root logger.  The problem is that by disabling logging altogether, you won't get any juicy details if something goes awry in your application.
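
(For reference, disabling output entirely is a one-line configuration change - e.g., setting the root logger level in the log4net section of your app.config:)

<root>
  <level value="OFF" />
</root>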

Lossy logging gives you a great compromise: under normal operation your application will not log any messages; however, if your application logs an error, a small batch of messages leading up to the error is placed into the log, giving you the error and a snapshot of system activity just before it happened.  How awesome is that?!

Lossy Logging Example

Open Visual Studio, create a new console project, and reference log4net.  Add the following code to the Program class:

namespace Tutorial8_LossyLog
{
    class Program
    {
        // the logger must be a static field of the class,
        // not a declaration inside Main
        static readonly log4net.ILog Log = log4net.LogManager.GetLogger(
            System.Reflection.MethodBase.GetCurrentMethod().DeclaringType
        );

        static void Main( string[] args )
        {
            log4net.Config.XmlConfigurator.Configure();

            for( int i = 0; i < 100; ++i )
            {
                Log.DebugFormat( "this is debug msg #{0}", i );
            }

            Log.Error( "error: an error occurred!" );
            Log.Warn( "warning: you've been warned" );
        }
    }
}

The program outputs a series of 100 numbered debug log messages, followed by a single error message and a warning message.

Add an app.config to the project:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net"/>
  </configSections>

  <log4net>
    <appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender">
      <layout type="log4net.Layout.SimpleLayout" />
    </appender>

    <appender name="LossyConsoleAppender" type="log4net.Appender.BufferingForwardingAppender">
      <bufferSize value="20" />
      <lossy value="true"/>
      <evaluator type="log4net.Core.LevelEvaluator">
        <threshold value="ERROR" />
      </evaluator>
      <appender-ref ref="ConsoleAppender" />
    </appender>

    <root>
      <level value="DEBUG" />
      <appender-ref ref="LossyConsoleAppender" />
    </root>
  </log4net>
</configuration>

The logging configuration defines two appenders, one very generic Console appender and a BufferingForwardingAppender.  As the name implies, the latter appender buffers log messages and forwards them in batches to one or more other appenders.  You can probably tell from the configuration XML that I've set this appender up with a 20-message buffer.  The lossy and evaluator parameters work together to define when the log message buffer is forwarded to the "base" appender.  More on that in a moment ....

Build and run; the command console will output the following:

DEBUG - this is debug msg #81
DEBUG - this is debug msg #82
DEBUG - this is debug msg #83
DEBUG - this is debug msg #84
DEBUG - this is debug msg #85
DEBUG - this is debug msg #86
DEBUG - this is debug msg #87
DEBUG - this is debug msg #88
DEBUG - this is debug msg #89
DEBUG - this is debug msg #90
DEBUG - this is debug msg #91
DEBUG - this is debug msg #92
DEBUG - this is debug msg #93
DEBUG - this is debug msg #94
DEBUG - this is debug msg #95
DEBUG - this is debug msg #96
DEBUG - this is debug msg #97
DEBUG - this is debug msg #98
DEBUG - this is debug msg #99
ERROR - error: an error occurred!

The program issues a total of 102 log messages (100 DEBUG, one ERROR, and one WARN), but the console only contains 20 messages (19 DEBUG and 1 ERROR).  So what happened to the other 81 DEBUG messages and the WARN message?

When the BufferingForwardingAppender's lossy property is enabled, the appender will buffer log messages without forwarding them.  If the buffer fills up, the oldest messages are dropped from the buffer to make room for the new messages.  

The evaluator property determines when the appender forwards the messages from the buffer to its base appenders.  There is only one evaluator defined in log4net - the LevelEvaluator.  The LevelEvaluator triggers the forward when a log message is received that meets or exceeds the configured threshold.  The example above is configured so that an ERROR message triggers the appender to forward its buffer to the ConsoleAppender.  That also explains the missing WARN message: it landed in the freshly-flushed buffer after the error, never met the ERROR threshold, and was still sitting there when the program exited.

Lossy Appender Types

Here is a list of appenders in the log4net distribution that can operate "lossy":

  • log4net.Appender.AdoNetAppender
  • log4net.Appender.RemotingAppender
  • log4net.Appender.SmtpAppender
  • log4net.Appender.SmtpPickupDirAppender
  • log4net.Appender.BufferingForwardingAppender
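
The same three settings from the example apply to all of them.  For instance, a lossy SmtpAppender that emails you the last 20 messages when an error occurs might be configured like this - a sketch, with placeholder addresses and server:

<appender name="LossySmtpAppender" type="log4net.Appender.SmtpAppender">
  <to value="ops@example.com" />
  <from value="app@example.com" />
  <subject value="application error" />
  <smtpHost value="smtp.example.com" />
  <bufferSize value="20" />
  <lossy value="true" />
  <evaluator type="log4net.Core.LevelEvaluator">
    <threshold value="ERROR" />
  </evaluator>
</appender>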

Coming Up

Ok, seriously, next time it'll be all about log4net best practices, and I promise to show you how you can't live without it.  Really!