Killer client side development with live.js and Visual Studio

This post is part of a series. Click for Part 2 or Part 3

Tuesday I got the opportunity to do a presentation at CINNUG. There were some requests to turn the contents into a blog post, and I figured that wouldn’t be a bad way to kick off blogging for the new year.

The talk was on how to use live.js to make debugging and developing client-side code a breeze. If you aren’t familiar with live.js, it’s a simple JavaScript library designed to automatically apply changes to your CSS/JS/HTML without the need for a manual refresh. Basically, it constantly polls the server for updates to the various assets. With each poll it checks whether anything has changed and then applies the changes, either through a forced refresh or through dynamic application.

The result is that you can see the effects of your changes live, without ever taking your hands off the keyboard.

So with the basic introduction out of the way, let’s take a look at how we can take this for a test drive. For the purposes of this discussion we’ll be working with an MVC application, but everything will work just fine with pretty much any framework, WebForms included.

To start, spin up a new Empty MVC project. Then, go download live.js from http://livejs.com/ and add it to your Scripts folder. Next, go into the Views -> Shared folder and edit your _Layout.cshtml page by adding a script reference like this:

<script type="text/javascript" src="@Url.Content("~/Scripts/live.js")"></script>

Now we can take live.js for a test drive. Start up your application and go to the homepage. Arrange your windows so you can see the browser and edit the application at the same time. Go into the Site.css file and edit something, like maybe the background color of the body. In just a moment you’ll see the change right in front of you. To see how this works, boot up Fiddler. You’ll see a constant stream of requests as live.js polls the server for updates. While this is great for debugging, we don’t want this kind of behavior happening in production. So how do we avoid that? Well, it’s simple enough to use a basic HtmlHelper to only render the live.js reference if we’re running in debug mode. Here’s the helper code:


public static class IsDebugHelper
{
    public static bool IsDebug(this HtmlHelper htmlHelper)
    {
#if DEBUG
        return true;
#else
        return false;
#endif
    }
}

Then we just call this helper in an if statement that conditionally renders our script reference, just like this.

@if (Html.IsDebug())
{
    <script type="text/javascript" src="@Url.Content("~/Scripts/live.js")"></script>
}

This gives us a basic live.js setup for our MVC application. Aside from dynamically reloading our CSS, live.js also does the same for HTML and JavaScript.

Next post I’ll show you how to use livejs to get this same sort of live refresh setup on external devices, like phones or tablets, to make it easier to test responsive designs.

TDD: A Case Study with Silverlight

One of my goals for the new year was to follow TDD on a real project at work. I actually got my chance very early this year with a fairly basic Silverlight project. The project was short and simple, basically a fancy list of links and resources managed in SharePoint and exposed in a Silverlight interface allowing a variety of queries and searches. It was large enough to be more than just a toy project, but small enough that I didn’t worry about doing much damage by trying out TDD for the first time.

I learned a lot, and I think the work I did makes a good case study for someone interested in getting started with TDD. In my next few blog posts, I plan to walk readers through my development environment, the specifics of the techniques I followed, and the lessons I learned.

The Environment

As I said at the start, the project was written in Silverlight. For my testing I used the Silverlight Unit Test Framework, which supports asynchronous tests, something vitally important for any webservice-based integration testing. On top of that I used a fantastic continuous test runner named StatLight. StatLight is a small console application that automatically runs your unit tests every time your test project’s .xap file changes. This means that running your tests is as easy as hitting Ctrl + Shift + B to build the project, and StatLight does the rest. I quickly got into the habit of building after every code change so that I was getting instant feedback on what I was doing.

The Process

Since this was an experiment, I tried to stick as close to the rules of TDD as possible. This meant I never wrote a line of code until I had already written a test covering it, and my tests were extremely granular. Even a simple task like parsing a single line of XML returned from a webservice had a test devoted to it. I also tried not to overthink some of the details of my design, instead trying to put off design decisions until I had already written the test necessitating them.
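To give a sense of how granular that was, a test in that style might look something like this (a made-up example in the spirit of the Silverlight Unit Test Framework; ResourceLinkParser and the XML shape are hypothetical, not the project’s real code):

[TestClass]
public class ResourceLinkParserTests
{
    // One tiny test per parsing concern
    [TestMethod]
    public void Parse_ReadsTitleAttribute_FromASingleLinkElement()
    {
        var xml = XElement.Parse("<Link Title=\"CINNUG\" Url=\"http://cinnug.org\" />");

        var link = ResourceLinkParser.Parse(xml);

        Assert.AreEqual("CINNUG", link.Title);
    }
}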

The Result

Overall, my experience was hugely positive. I’m convinced that TDD makes me more effective and productive, and I want to leverage it wherever I can in the future. In general I found there were 3 major benefits to TDD, and I learned 3 lessons about how to do TDD better next time. Let’s start with the good.

Flow – It was shocking how good it felt to be able to code without stopping. With TDD my brain stayed in code mode for hours at a time. Usually, I slip in and out of this mode throughout the day, especially when I’m manually testing code I’ve just written. With TDD, that never happened, and it made my concentration and focus 20x better. When I’m manually testing, there are all sorts of interruptions and opportunities for distraction. Waiting for the page I’m testing to load? I’ll just go browse Google Reader for a bit. Stepping through a tedious bit of code so I can examine the value of one variable? Let me just skim this email while I do that. With TDD though, my brain never gets an opportunity to slip away from the task at hand. Throughout the day I was laser-focused on whatever I was doing.

Similarly, if I did have to step away for an interruption (meetings, lunch, help another dev, etc.) it was easy to get back into the flow and figure out where I was. Just hit Ctrl + Shift + B and see what test failed. Since each test was so small and covered such a small area,  I didn’t have a ton of details about what I was doing slip away when I got distracted.

Design – I didn’t totally abandon upfront design, but I did do less design than I usually do. I mostly sketched out the layers at the boundaries of the application, the pieces that interacted with the user and the pieces that interacted with the data source, SharePoint, since both of those were external pieces that I couldn’t exercise complete control over. Once I had those layers designed, though, I let TDD evolve the internal architecture of the application, which actually led to a couple of neat design decisions I don’t think I would have come up with otherwise. The coolest of these was how I handled loading up a given model for a given page. In our application the same view could be wired up to a variety of different models, and the specific model depended on the URL the user used. I ended up with two separate objects which handled this process: the Model Locator, which parsed the incoming URL, and the Model Map, which tied each model to a path-like string representing how the data was maintained in the data store. The Model Locator would use the URL to extract the key elements identifying the right model, and then pass those into the Model Map, which would use those elements to find the right model by building the path representation for it. The end result was a nice decoupling between the path structure the user used to browse to a model and the way it was actually handled by the data layer. If I had been designing up front, I am almost positive I would have missed this approach, put too much of the logic into the Model Locator itself, and tightly coupled the data structure and the navigation structure. Instead, I put off making any decisions about how the Model Locator interacted with the data until the last minute, and by then it was clear that a new class would improve the design significantly.
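To make that concrete, here’s a rough sketch of the shape those two classes took; the names and details are invented for illustration rather than lifted from the actual project:

// Illustrative sketch of the decoupling described above; not the project's actual code.
public class ResourceModel
{
    public string Title { get; set; }
}

// Knows how users navigate: turns an incoming URL into the elements that identify a model.
public class ModelLocator
{
    private readonly ModelMap _map;

    public ModelLocator(ModelMap map)
    {
        _map = map;
    }

    public ResourceModel Locate(Uri url)
    {
        // e.g. http://host/app#/documents/budget -> "documents", "budget"
        string[] segments = url.Fragment.Trim('#', '/').Split('/');
        return _map.Find(segments[0], segments[1]);
    }
}

// Knows how the data store is organized: turns those elements into the path-like key it uses.
public class ModelMap
{
    private readonly IDictionary<string, ResourceModel> _modelsByPath;

    public ModelMap(IDictionary<string, ResourceModel> modelsByPath)
    {
        _modelsByPath = modelsByPath;
    }

    public ResourceModel Find(string category, string name)
    {
        string path = category + "/" + name;
        ResourceModel model;
        return _modelsByPath.TryGetValue(path, out model) ? model : null;
    }
}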

Refactoring Ease of Mind – Not everything about this project was perfect. In fact, towards the middle there were some significant pain points because I was temporarily put on another, higher-priority project. To keep things moving, another developer was assigned to the project. There wasn’t enough time invested in communication, and as a result he ended up taking a different approach in some key areas and duplicating some work I’d already done. By the time I came back, his code was wired up to the UI, and it didn’t make sense to try and reincorporate the pieces of my code that were performing some of the same functions. Unfortunately, there were a number of pieces that handled things like search and model location that were still expecting the classes defined in my code. All of those had to be modified to work with his architecture instead.

This would have been a really scary refactoring to do in the timeline we had, except for the automated tests I already had covering all of my code. With only a few minor tweaks, that test suite was modified to test my search services using his new classes, and we had extremely detailed information about where my code was now broken. After less than a day of work, we’d switched everything over without a hitch. And because of the tests, we had confidence that everything would work fine.

I won’t say much more in summary, because I think the benefits speak for themselves. Next post, I’ll talk about what I’d do differently next time, and how I plan to get better at TDD in the future.

Custom assertions with should.js

Lately I’ve been playing with node.js and vows, doing some TDD on a side project at home. I love the very readable syntax of should.js, which lets you frame your assertions as almost natural English. However, pretty early on I realized I wanted to add my own custom assertions to the should object to abstract away some of the messy details of my testing and keep the code readable. In the past I’ve used custom asserts with .NET for testing, and I find they let you quickly express domain-specific concepts even inside your tests, for better readability and clarity.

One particular example was a test where I wanted to make sure the elements in a <ul> were the same as those in a JavaScript array. Rather than trying to parse the list out into another array and do a comparison in the test body, I wanted an assertion that read something like $("#list").children().should.consistOfTheElementsIn(array), where consistOfTheElementsIn handles the parsing and comparison.

After a little bit of playing around, I worked out a pretty simple way to do this. Basically, I create a new node module called customShould.js. customShould.js requires should, and then exports the should object. Additionally, customShould adds a new method to the “Assertion” object created by should.js. Here’s the code:


var should = require('should');

exports = module.exports = should;

// Add a custom assertion to should.js's Assertion prototype.
// `this.obj` holds the value under test (here, a string of <li> markup).
should.Assertion.prototype.aHtmlListThatConsistOf = function (list) {
    var compareArrays = function (first, second) {
        if (first.length != second.length) { return false; }
        var a = first.sort(),
            b = second.sort();
        for (var i = 0; i < a.length; i++) {
            if (a[i] !== b[i]) {
                return false;
            }
        }
        return true;
    };

    // Pull the text out of each <li>...</li> in the markup under test.
    var matches = this.obj.match(/<li>.*?<\/li>/gi) || [];
    for (var matchIndex = 0; matchIndex < matches.length; matchIndex++) {
        matches[matchIndex] = matches[matchIndex].replace("<li>", "").replace("</li>", "");
    }
    this.assert(compareArrays(matches, list), "lists do not match");
};

It’s all pretty straightforward. Then, to use your custom asserts, you just require your customShould.js module instead of the normal should module.
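A test can then make assertions against markup like this (a throwaway example; the markup string is just for illustration):

var should = require('./customShould');

// Hypothetical usage: the markup under test should contain exactly these items
var markup = "<ul><li>apples</li><li>oranges</li></ul>";
markup.should.aHtmlListThatConsistOf(["apples", "oranges"]);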

Automatic Integration Testing With Joomla

Lately, I’ve been deviating from my .NET ways to do a small website for my brother-in-law during my spare time. He works for an artistic ironworks company, and they were looking for a simple visual refresh to replace their ’90s-era, MS FrontPage website.

I haven’t had much experience with Joomla, but I ended up choosing it because they have a DreamHost account and Joomla is a one-click install. I knew it was a big name in the CMS world, and even knew someone who makes his living off Joomla sites, so I figured it had to be pretty good. Frankly, after building out much of this site in it, I’m not impressed. The UI is clunky and not even intuitive for a techy like me. The documentation is sparse at the API level. And the extension development model seems to rely far too heavily on static methods and singletons. But what irked me the most about Joomla is how difficult it was to get a solid automated integration test up and running. Hopefully what I document here will save someone else my pain later.

Before getting to the technical how-to though, a little bit of background on why I think this is important. In the last year I’ve become a huge proponent of automated testing. In general, when I start on a new project or feature now, the first thing I do is spin up my test project. This is especially true when I’m integrating with some sort of external framework, particularly when that framework lacks solid documentation. A good set of quickly executing automated integration tests is the fastest way to vet my assumptions about how a framework behaves against reality.

So that’s what I set out to create when I realized I would need to develop a Joomla module. The goal of my module was simple. I was using the K2 Joomla extension to let my users create photo galleries. I wanted a rollup module that would take the first photo from every gallery in the site and render a slideshow out of those, with links back to each individual gallery. Following the guides I found on module development, I created a helper.php file to do the heavy lifting. Then I set out to create a test project to test that implementation.

The first sign that something was wrong was that I couldn’t find anyone else who had tackled the same problem on Google. There was a little bit about building custom applications on top of Joomla, but nothing about testing. So I figured I’d just set up PHPUnit and hope for the best.

Right off the bat, the framework started fighting me. PHPUnit failed with no error message, just silently not running. I went back to the article on custom applications and that got me part way there, but I still had to struggle with a whole slew of missing dependency and undefined variable issues.

Eventually I got it to work with the following lines at the start of the test file.

// Bootstrap enough of Joomla to run integration tests against it
define('_JEXEC', 1);
define('JPATH_BASE', '/var/www/');
define('JPATH_PLATFORM', JPATH_BASE . '/libraries');
define('DS', DIRECTORY_SEPARATOR);

require_once JPATH_BASE . '/includes/defines.php';
require_once JPATH_BASE . '/includes/framework.php';
jimport('joomla.environment.request');
jimport('joomla.application.helper');
jimport('joomla.application.application');
JFactory::getApplication('site');

$_SERVER['HTTP_HOST'] = "localhost";

require(''); // point this at the module's helper.php
const K2_JVERSION = 16;

Even this didn’t give me everything I needed. I kept getting infinite loop errors. Googling for that led me to a link on GitHub where somebody had fixed a similar error in Joomla. It turns out the actual error was in Joomla’s exception-throwing mechanism. Whenever Joomla tried to throw an error in the integration test, it got caught in an infinite loop and just reported the generic infinite loop exception.

Since this testing was on a dev machine, I decided the easiest fix would be to edit the Joomla files themselves to print out the stack trace whenever an infinite loop was detected. The file I edited was /libraries/joomla/error/error.php, replacing the generic error message on line 201 with code to print a full backtrace:
jexit(JText::_('JLIB_ERROR_INFINITE_LOOP')."\n".$exception->getMessage()."\n".$exception->getTraceAsString()."\n".$exception->getLine()."\n");

Only after all that could I successfully run an automated integration test against Joomla.
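For reference, a test file built on top of that bootstrap ends up looking roughly like this; the helper class and method names below are invented for illustration, not the module’s actual code.

<?php
// ... the define/jimport bootstrap block shown above goes here, at the top of the test file ...

class GalleryHelperIntegrationTest extends PHPUnit_Framework_TestCase
{
    // Hypothetical check: the helper should pull at least one K2 gallery from the test site
    public function testGetGalleriesReturnsItems()
    {
        $galleries = ModGallerySlideshowHelper::getGalleries();

        $this->assertTrue(is_array($galleries), 'helper should return an array of galleries');
        $this->assertTrue(count($galleries) > 0, 'expected at least one gallery in the test site');
    }
}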

I don’t want to criticize a platform I’ve done so little with, but the complete lack of documentation on basic automated testing doesn’t speak highly of the development environment Joomla has created. I hope this contribution helps someone else in my boat at least get started, and that the Joomla devs start thinking about how to bake this sort of testing process into the platform more directly.

Working with the SharePoint DataFormWebPart in Custom Application Pages: Part II

In the first post in this series, we setup a DataFormWebPart on our custom application page, using markup autogenerated by SharePoint. If everything went right, you should have a page with a DFW loading up and letting you do basic CRUD operations. Now for the harder stuff.

Trials and Tribulations

You’ll probably notice the first issue as soon as you save an item. Immediately after the save, you’ll be redirected to the SharePoint list where the item is stored. Now maybe this isn’t an issue for you, but I suspect most of the time you’d rather control where the user ends up.

While this seems like it should be easy, the challenge is that all this redirect logic is bundled up with the save logic in a SharePoint WebControl, aptly named “SaveButton.” If you want to control the redirect, you have to handle saving the item yourself. To do so, get a reference to the DataFormWebPart that’s on the page. Once you’ve got the reference, you’ll want to retrieve the ItemContext.ListItem property. This gives you access to the SharePoint list item that stores whatever data the user has entered. Call ListItem.Update() and you’ve got the save handled. You can wrap that up inside the Click handler for a normal asp:Button and remove the SaveButton control from the form completely.
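A minimal sketch of what that might look like in the page’s code-behind (the control ID, button name, and redirect target are placeholders, and depending on how your page is laid out you may need a recursive FindControl):

protected void SaveItemButton_Click(object sender, EventArgs e)
{
    // Grab the DataFormWebPart declared in the page markup (ID is a placeholder)
    DataFormWebPart dataForm = (DataFormWebPart)FindControl("MainDataForm");

    // ItemContext.ListItem is the SPListItem backing the form
    SPListItem item = dataForm.ItemContext.ListItem;
    item.Update();

    // Now we decide where the user lands after the save
    Response.Redirect("MyConfirmationPage.aspx");
}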

So now you’ve got a simple form which lets you view and save your data. But what if you want something more complicated? What if you want to put custom server controls or access to page variables inside the XSL? Well, then things start to get very tough. By default, the DFW will only allow custom server controls from Microsoft.SharePoint.WebControls. Adding any other controls will result in an “Unknown Server Tag” error. Fortunately, another SharePoint blogger, Charlie Holland, has already dug into this issue and written a custom webpart to resolve it. His ExtendedDataFormWebPart allows you to specify a set of additional assemblies to allow controls from.

So that handles server controls, but what about page variables or inline script? Again, by default the DFW doesn’t allow any sort of inline code. However, we can take a similar approach to Charlie and get the ability to use PageVariables in our XSL, even if we can’t do full inline code.

The best place to start looking for how to pull data into the XSL is the ParameterBindings list we looked at earlier for our QueryStrings. MSDN blogger Josh Gaffey has a good overview of these ParameterBindings. Basically, each binding ends up corresponding to an XSL parameter that you can use inside the XSL with <xsl:value-of select="$param-name" />. However, out of the box, you can only pull these parameter values from a limited number of locations (QueryStrings, CAMLVariables, server variables, control values, etc.), and page variables aren’t one of them.
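For example, an out-of-the-box binding and its use inside the XSL look something like this (the parameter name is just illustrative):

<ParameterBinding Name="StatusFilter" Location="QueryString(StatusFilter)" DefaultValue="" />

<!-- later, inside the XSL -->
<xsl:value-of select="$StatusFilter" />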

Poking around in the properties of the dataform webpart, we can see there’s a property named ParameterValues. Looking at this property in debug mode, we see that it is a hashtable that holds all the values of the parameter bindings. So what we need to do is inject our own values, based on page variables, into that hashtable. Below is the code for a modified ExtendedDataFormWebPart class that incorporates both Charlie Holland’s additional-assembly code and our code to use page variables.

    [ToolboxItemAttribute(false)]
    public class ExtendedDataFormWebPart : DataFormWebPart
    {
        private string[] _assemblyReferences = new string[0];

        public ExtendedDataFormWebPart()
            : base()
        {
        }

        [Browsable(false), WebPartStorage(Storage.None), PersistenceMode(PersistenceMode.InnerProperty)]
        public string AssemblyReferences
        {
            get
            {
                List<AssemblyReference> response = new List<AssemblyReference>();
                foreach (string reference in _assemblyReferences)
                {

                    AssemblyReference ar = new AssemblyReference(reference);
                    response.Add(ar);
                }

                return string.Join(Environment.NewLine, response.Select(r => r.ToString()).ToArray());
            }
            set
            {
                XDocument doc = XDocument.Parse("<root>" + value + "</root>");
                var refs = from r in doc.Descendants("AssemblyReference")
                           select new AssemblyReference
                           {
                               Prefix = r.Attribute("Prefix").Value,
                               Namespace = r.Attribute("Namespace").Value,
                               Assembly = r.Attribute("Assembly").Value
                           };

                _assemblyReferences = new string[refs.Count()];

                int i = 0;
                foreach (var ar in refs)
                {
                    _assemblyReferences[i] = ar.ToString();
                    i++;
                }
            }
        }

        public override void DataBind()
        {
            BindPageVariablesToParameterBindings();
            base.DataBind();
        }

        private void BindPageVariablesToParameterBindings()
        {
            var listOfParametersAndVariableNames = BuildListOfPageParameters();
            SetParameterCollectionValues(listOfParametersAndVariableNames);
        }

        private void SetParameterCollectionValues(IEnumerable<KeyValuePair<string, string>> listOfParametersAndVariableNames)
        {
            foreach (var param in listOfParametersAndVariableNames)
            {
                object valueOfVariable;
                FieldInfo field = Page.GetType().GetField(param.Value);
                PropertyInfo prop = Page.GetType().GetProperty(param.Value);

                if(field != null)
                {
                    valueOfVariable = field.GetValue(Page);
                }
                else if(prop != null)
                {
                    valueOfVariable = prop.GetValue(Page, null);
                }
                else
                {
                    throw new InvalidOperationException(string.Format( "There is no member with the name {0} on {1}. Check your parameter binding to ensure the Location attribute is correct", param.Value, Page.ToString()));
                }

                valueOfVariable = (valueOfVariable != null) ? valueOfVariable.ToString() : "";
                ParameterValues.Set(param.Key, valueOfVariable as string);
            }
        }

        private IEnumerable<KeyValuePair<string, string>> BuildListOfPageParameters()
        {
            XDocument parametersXml = XDocument.Parse("<parameters>" + ParameterBindings +"</parameters>");
            XName location = XName.Get("Location", "");
            var parameters = parametersXml.Root.Elements().Where(e => e.Name.LocalName == "ParameterBinding" && e.Attribute(location).Value.Contains("PageVariable") );
            return parameters.Select(p => BuildKeyValuePairFromParameter(p));
        }

        private KeyValuePair<string, string> BuildKeyValuePairFromParameter(XElement parameter)
        {
            string key = parameter.Attribute(XName.Get("Name", "")).Value;
            string value = parameter.Attribute(XName.Get("Location", "")).Value.Replace("PageVariable(", "").Replace(")", "");
            return new KeyValuePair<string, string>(key, value);
        }

    }

    public sealed class AssemblyReference
    {
        public AssemblyReference()
        {
        }

        public override string ToString()
        {
            return string.Format("<%@ Register TagPrefix=\"{0}\" Namespace=\"{1}\" Assembly=\"{2}\" %>", Prefix, Namespace, Assembly);
        }

        public AssemblyReference(string reference)
        {

            Match m = Regex.Match(reference, "TagPrefix=\"(\\S*)\" Namespace=\"(\\S*)\" Assembly=\"(.*)\"");

            Prefix = m.Groups[1].Value;
            Namespace = m.Groups[2].Value;
            Assembly = m.Groups[3].Value;
        }

        public string Prefix;
        public string Namespace;
        public string Assembly;
    }

What we’re doing here is overriding the DataBind method and calling a private method that uses reflection to insert our page variables into the ParameterValues collection. If we want to use a PageVariable, we just add a ParameterBinding of the form <ParameterBinding Name="<xsl param name>" Location="PageVariable(<variable name>)" />

The only gotcha is that our variable has to be either a public property or field, not private or protected, since reflection doesn’t seem to pick those up, even when we pass a BindingFlags.NonPublic into the GetFields and GetProperties methods.

Conclusion

SharePoint’s DataFormWebPart is a really neat concept, but the execution leaves a little to be desired when you’re heavily customizing things. Fortunately, most of those shortcomings can be overcome by extending the webpart with the code samples here and elsewhere. Be warned, though, that XSL is a pain to work with, and very unforgiving when it comes to typos. Even with all the improvements offered by the ExtendedDataFormWebPart, it might be easier to build the data manipulation plumbing yourself.

Working with the SharePoint DataFormWebPart in Custom Application Pages: Part I

I just recently got through 2 phases of a major SharePoint 2010 project where we made extensive use of the built-in DataFormWebPart on a series of custom application pages. We were basically building a web-based wizard to walk end users through a process. Each instance of the process had its own SharePoint list item, and we used customized DataFormWebParts to lay out all the fields of the list item onto a bunch of different screens.

What I found in doing this, though, is that there isn’t a lot of documentation about how to use the DataFormWebPart in a custom page with lots of code-behind. This is especially frustrating, since the webpart has lots of interesting little quirks and limitations that I had to find out about the hard way. In the next two blog posts I’m going to talk through some of those, as well as provide the code for an enhanced DataFormWebPart that gets around some of them.

What is the DataFormWebPart

If you haven’t worked with the DataFormWebPart before, it’s basically the webpart that SharePoint uses to create list item forms. All the default create, update, and view forms for list items are powered by a DataFormWebPart. Essentially, the dataform webpart is one of many XsltViewerWebParts, which allow developers and site admins to provide their own XSL markup to control how SharePoint data is rendered.

This makes it a very powerful tool for custom application pages. By dropping a DataFormWebPart on your page, you don’t have to worry about building out any CRUD plumbing. Just choose which fields you want to show up, tweak and style your HTML, and you’ve got a basic form up and running. At least, that’s how it should work in theory…

Getting Started

In actual implementation, nothing with the DataFormWebPart is quite so simple. The first challenge is even managing to get something that loads onto your page. The DFW comes with a bevy of parameters and properties, most of which have unclear names, and require specific, often undocumented values. Unless you enjoy seeing lots of big yellow and red error screens, you’re better off letting SharePoint Designer generate your initial markup, and iterating off that. To do so, boot up designer in the site where your list is, and create a Webpart Page. Once inside your new page, select the “SharePoint” menu from the ribbon, and choose the option “Custom ListForm.” This will open up a little wizard where you can choose the list and content type.

When you’re done, Designer will spit all the markup out onto your page. Copy and paste the webpart markup into your Visual Studio. Don’t forget to add

<%@ Register Tagprefix="WebPartPages" Namespace="Microsoft.SharePoint.WebPartPages" Assembly="Microsoft.SharePoint, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>

at the top of your custom application page, so that you’ve got the right namespace loading for the DFW. You’re also going to want to delete the autogenerated .designer.cs page that SharePoint creates for you. If you don’t, all the XSL-based IDs will prevent you from building, since they have invalid characters for code-behind. This means you may also want to set the page’s “AutoEventWireUp” property to false.

This should at least get your page to load. Now you can go in and start editing the XSL. By default you’ll probably have all your fields on the form; you can remove one by deleting the FormField control whose FieldName property matches the field you don’t want. You can add as much extra HTML or JavaScript as you like to really make the form stand out. The main thing to watch out for here is to make sure you don’t have any duplicate IDs for your controls. The DFW is likely to throw a very cryptic error if that happens.

Once you’re done styling the form, you’ll also need to put some thought into how people access it. The DFW relies on two QueryString variables to figure out what list and list item to grab the data from. You’ll see these two variables, ListItemId and ListID, called out in the “ParameterBindings” section of the DFW markup. You can see that they get pulled from two QueryString values, ID and ListID. You’ll need to make sure you set those query string values when you build the URLs that users use to access your forms.
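In the markup, those bindings look something like this (close to what Designer generates; your DefaultValue attributes may differ):

<ParameterBinding Name="ListItemId" Location="QueryString(ID)" DefaultValue="0" />
<ParameterBinding Name="ListID" Location="QueryString(ListID)" DefaultValue="" />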

Lastly, you’ll also want to look at the “PageType” property of the DFW and the ControlMode property of each of the form fields. These control whether you can read or edit the form fields, and whether a save will create a new item or update an existing one. For PageType, the valid values are “PAGE_DISPLAYFORM”, “PAGE_EDITFORM”, and “PAGE_NEWFORM.” For ControlMode, the values are “Edit”, “New”, and “Display”.
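So an edit form, for example, ends up with attributes along these lines (heavily abbreviated; the rest of the generated attributes are left out):

<WebPartPages:DataFormWebPart runat="server" ID="MainDataForm" PageType="PAGE_EDITFORM" ... >

<SharePoint:FormField runat="server" ID="ff1" ControlMode="Edit" FieldName="Title" ... />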

Otherwise, though, this is the easy part. We’ll get to the fun stuff next post.

Labor Day Hacking With Node.js

I’ve been reading up on node.js recently, and have been pretty intrigued with what I found. The long weekend seemed like a good opportunity for some extended hacking. After setting up a nice little development environment and reading up on some other node.js projects, I set myself the following goal for the weekend: build a web app with a node.js backend that would let me remotely control my media center box.

Now we aren’t talking full-on VNC here, just basically a way to turn my laptop into a remote keyboard and mouse for the other computer. That way I could open and browse Hulu without having to get up.

Luckily, I’m happy to say the project was a complete success. If you need proof check out the video below.

Right now it’s real basic. Because of some quirks with jQuery, chrome and the keypress vs. keydown events, all letters are capitals. Also key combinations like Ctrl+C and Ctrl+V don’t work, but it’s passable for most tasks I need it for and gives me a fun base to play around with.

The Secret Sauce

So how does this work? Well basically my media PC is running a simple node.js webserver that serves up pages and maintains a single websocket connection using the socket.io library. On the client side, there’s some JavaScript that broadcasts messages across the socket whenever there’s a key up, key down, click, or mouse movement. On the server, some node.js goodness forwards these messages along to the XServer to control the display on the media center.
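On the client, the broadcasting part boils down to something like this (a simplified sketch against the socket.io 0.x client API; the host, port, and event names are my own, not necessarily what’s in the repo):

var socket = io.connect('http://mediapc:8080');

// Send every mouse move and key press across the websocket
$(document).mousemove(function (e) {
    socket.emit('mousemove', { x: e.pageX, y: e.pageY });
});

$(document).keydown(function (e) {
    e.preventDefault(); // keep the browser from acting on keys like backspace
    socket.emit('keypress', { key: String.fromCharCode(e.which) });
});

$(document).click(function () {
    socket.emit('click', {});
});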

It’s actually not much more than 200 lines of code, not including the libraries. The most complicated part is the communication between my server and the XServer. I started out trying to use Andrey Sidorov’s node-x11 library, which is a pretty neat little project. Basically, it uses the net library in node.js to connect to the Unix X11 Socket and stream requests to the socket over TCP/IP. Unfortunately it’s pretty new, so there really aren’t any methods for sending mouse or keyboard events across the socket yet. I used his framework to write out a new WarpPointer method, which moves the mouse around, but quickly found that sending click or keypress events was going to be waaaaay more complicated.

So for those areas specifically I took a fallback strategy and resorted to a quick little hack. Instead of talking with the XServer directly, I had node.js kick off a child process and run the command line tool “xdotool.” xdotool is basically a simple command line window management tool that lets you fake XEvents with a fairly simple syntax. Way simpler than building a request from scratch. Of course it’s an ugly hack, but adding the pieces to Sidorov’s framework to send the events to the XServer over the weekend wasn’t going to happen. As the library gets fleshed out, I’ll strip that hack out.
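Server-side, the hack looks roughly like this (again a sketch rather than the project’s exact code; it assumes xdotool is installed and uses the same made-up event names as above):

var exec = require('child_process').exec;
var io = require('socket.io').listen(8080);

io.sockets.on('connection', function (socket) {
    // Forward each browser event to X by shelling out to xdotool
    socket.on('mousemove', function (data) {
        exec('xdotool mousemove ' + data.x + ' ' + data.y);
    });

    socket.on('click', function () {
        exec('xdotool click 1'); // left mouse button
    });

    socket.on('keypress', function (data) {
        exec('xdotool key ' + data.key);
    });
});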

I had to resort to a similar hack to get the user’s current screen resolution, basically parsing the results of an “xwininfo” call. Again, something that I’ll eventually have to refactor out, but good enough for right now.

If you want to see the detailed code, I’ve got it up here on github.

https://github.com/AndrewSwerlick/node-remote

What’s Next?

I don’t plan to shelve this hack by any means. In fact I’d love to expand it to a more full featured remote control type app. Right now I’ve got a short list of 4 goals.

1. Improve interaction with X so it’s not calling out to the command line to do the manipulation

2. Improve mobile support. Right now, since dragging around the page in a mobile browser doesn’t fire a mousemove() event, you can’t use the remote on a phone.

3. Figure out how to handle key combinations. Right now the app doesn’t recognize key combinations. Similarly, all letters come across as capitals. Both of these issues are tied to challenges around recognizing which key has been pressed in javascript, and making sure that we prevent the browser from acting on keystrokes like backspace

4. Build out some short cuts/quick links functionality. For example, a hulu browser, that will let you search hulu and start shows with a few keystrokes. Or a short cut tab that will let you quick start often used programs.

Conclusions

So what did I learn about node.js from this first little foray? Well, I’m certainly impressed at how easy it was to get something basic up and running, especially considering that I’d never worked with websockets before; socket.io made it dead simple. The async nature of node.js was also interesting to work with, but not too unfamiliar, thanks to my work with Silverlight.

Of course, I probably could have done the same thing with ruby, or mvc.net, etc, but I don’t think it would be as quick. One of the things I liked about node.js was how easy it was to jump between scripting type stuff, like running console apps and parsing the output, and more traditional server type tasks. It was also great to pass JSON objects across the wire through sockets, and then be able to immediately consume those objects in server side code without having to jump through any hoops to parse those objects into server side variables.

I’ve also always liked the loose and fast nature of JavaScript, which allows for lots of flexibility in terms of object construction. Even in this small project, I could see how this flexibility would be really fun to work with. I’m definitely excited to see where the framework goes, and I plan to keep playing with it.