At the moment I’m preparing another PowerShell module for release to open source – this one makes it dirt simple to wire up a PowerShell script to a RabbitMQ server, so it can participate in distributed messaging or ESB solutions.  I’ve only been working on this for a day, but it’s already proven its value to me and I’m too titillated not to write about it!

A little background: I’ve been working on a distributed queue-driven solution for the last few months.  The client uses a LAMP stack and has chosen RabbitMQ as their queue.  While this has been my only experience with it, I’ve come to love RabbitMQ – easy to set up, very robust, cross-platform, simple client libraries, and awesome documentation and community support.  If you want to check it out, Justin Etheredge has a fantastic get-you-started post.

Anyway, as distributed solutions and Facebook marriage statuses are wont to do, things got complicated.  I’ll save the details for another post.  Suffice it to say that I needed some RabbitMQ consumers that could perform lots of simple tasks – like reporting real-time instrumentation messages for a specific job.  So I made a few broad passes and came up with a simple and effective way to do that using PowerShell.

API Overview

At present the API feels a lot like PowerShell jobs and events: you can consume messages by polling a queue, by blocking until a message arrives, or by assigning a script block to process messages as they arrive.

Starting a message consumer is simple enough:

import-module poshrabbit;
$q = start-consumer -hostname RabbitMQSvr -exchange posh -routingkey 'prefix.#'

That’s all there is to it.  The consumer is now connected and trying to pull messages.  The variable $q contains data used by the other cmdlets in the API to identify messages retrieved by this consumer.  For instance, if you wanted to block until a message arrived:

import-module poshrabbit;
$q = start-consumer -hostname RabbitMQSvr -exchange posh -routingkey 'prefix.#'

# the script blocks on the call to wait-consumer
# until a message is received; the message is
# returned in the $msg variable
$msg = wait-consumer $q;

write-host 'event received: ' $msg;

The wait-consumer cmdlet does for RabbitMQ messages what the wait-event cmdlet does for events.  The script blocks until a message is received by the specified consumer.

Or perhaps a script needs to do other things when no messages are available:

import-module poshrabbit;
$q = start-consumer -hostname RabbitMQSvr -exchange posh -routingkey 'prefix.#'

# receive-consumer does not block; if no
# messages are available $msgs will be $null
$msgs = receive-consumer $q;
if( -not $msgs )
{
    write-host 'no messages at this time';
}
else
{
    $msgs | write-host;
}

In this case receive-consumer returns any messages received since the last call to receive-consumer; however, it does not block script execution – if no messages are available it returns $null immediately.

Or if you just want to fire-and-forget:

import-module poshrabbit;

# splatting added for readability
$a = @{
    hostname   = 'RabbitMQSvr';
    exchange   = 'posh';
    routingkey = 'prefix.#';

    # this script block will be run for
    # each message received; no additional
    # polling or waiting code is needed
    action = { $_ | write-host };
};
$q = start-consumer @a;

Assigning a scriptblock to start-consumer’s Action parameter will automagically run the scriptblock for each incoming message.  No need to poll or wait: your PowerShell session can move on to other tasks and still process each message as it comes in, in real time!

Eventually you’ll want to stop the consumer:

import-module poshrabbit;

# splatting added for readability
$a = @{
    hostname   = 'RabbitMQSvr';
    exchange   = 'posh';
    routingkey = 'prefix.#';
};
$q = start-consumer @a;
# ...
stop-consumer $q;

Simple enough: just tell stop-consumer which consumer you want to stop.  Or end your PowerShell session and the module will clean up the RabbitMQ resources for you.

Oh, and you’ll probably want to publish messages back to a RabbitMQ exchange at some point, so I added a simple way to do that via the publish-string cmdlet:

import-module poshrabbit;

# splatting added for readability
$a = @{
    hostname   = 'RabbitMQSvr';
    exchange   = 'posh';
    routingkey = 'prefix.batch-0_o';
};
publish-string @a -message 'hello world!'

There’s more to each of these cmdlets (e.g., usernames and passwords, the option to specify exchange types or queue names, a timeout on wait-consumer), but in terms of functionality that’s it.  I’m pretty happy with the API; I think it’s simple, readable, and it has proven powerful in the 8 hours I’ve been dogfooding it.
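To give a feel for those options, here’s a rough sketch – the parameter names below (username, password, exchangetype, queuename, and the timeout on wait-consumer) are only placeholders for the features just listed, so don’t hold me to the exact spellings until the module ships:

# parameter names here are illustrative placeholders for the
# credential / exchange-type / queue-name options described above
import-module poshrabbit;

$a = @{
    hostname     = 'RabbitMQSvr';
    exchange     = 'posh';
    exchangetype = 'topic';          # assumed parameter name
    queuename    = 'posh-consumer';  # assumed parameter name
    username     = 'guest';          # assumed parameter name
    password     = 'guest';          # assumed parameter name
    routingkey   = 'prefix.#';
};
$q = start-consumer @a;

# assumed: wait-consumer accepts a timeout (the units are a guess)
$msg = wait-consumer $q -timeout 30;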

Example

Here’s a sample script that collates instrumentation messages for a specific “batch” of work for my client:

param( [string]$batchId );

import-module poshrabbit;

$a = @{
    hostname   = 'Que';
    exchange   = 'LSI.PP.Instrumentation';
    routingkey = 'Batch.' + $batchId;
    action     = { $_ | write-host };
};

write-host "Real-time activity for batch $batchId:";
$q = start-consumer @a;

Next Steps

What really gets me excited are the possibilities – load-balancing PowerShell tasks across several machines … coordinating the efforts of a deployment script and a build server … service-oriented PowerShell scripts that run on-demand … or, for that matter, using RabbitMQ as the foundation for a load-balanced psake CI environment.
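Just to sketch that first idea with nothing beyond the cmdlets shown above – the 'posh.work' exchange and 'task.#' routing key are names I’ve made up for illustration:

# hypothetical work-distribution sketch; exchange and routing key
# names are made up, and real load-balancing would also need the
# workers to share a queue rather than each getting every message

# on each worker machine: handle task payloads as they arrive
import-module poshrabbit;
$worker = @{
    hostname   = 'RabbitMQSvr';
    exchange   = 'posh.work';
    routingkey = 'task.#';
    action     = { $_ | write-host };  # replace with real task-handling logic
};
$q = start-consumer @worker;

# on the dispatching machine: push task descriptions to the exchange
import-module poshrabbit;
1..10 | foreach-object {
    publish-string -hostname RabbitMQSvr -exchange 'posh.work' `
        -routingkey 'task.demo' -message "task $_";
};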

But first things first – I want to get this module clean and pretty and put it someplace where you all can get at it and make it better.  My hope is to do that in the next day or so (clients willing).