Letter to my representative on the NSA

Below is a letter to my representative in the US House of Representatives that I wrote in response to the most recent revelations on the NSA's abuses and over-reach as documented here in the New York Times. I sent this letter via snail mail, but I wanted to put it on the web as well to galvanize others. I've pasted it in its entirety below.

Representative Wenstrup,

I am writing with deep concern about the recent revelations regarding the NSA's unconstitutional over-reach and abuses. The stories from the past few months about vast surveillance programs and over-broad collection of American citizens' communications have left me very unsettled and distrustful of the American government. However, those feelings pale in comparison to my shock, disbelief, and anger at the latest revelations about the NSA's top secret “Bullrun” and “Sigint” programs.

As detailed in a September 5th article in the New York Times (N.S.A. Foils Much Internet Encryption), these programs include the following actions:

  • Inserting vulnerabilities into commercial encryption systems
  • Developing techniques to defeat key encryption schemes such as HTTPS, SSL, and VPN
  • Stealing encryption keys from major Internet companies

These actions are significantly more dangerous than the over-broad surveillance we’ve already been debating. The problem is that they significantly weaken the cryptographic infrastructure upon which our entire digital economy is built. By intentionally introducing backdoors in key cryptographic technologies, the NSA exposes our entire communications and networking systems to malicious hacking by criminal and foreign elements.

As a professional computer engineer, I am keenly aware of how important this cryptographic infrastructure is to our daily lives. By working to weaken this infrastructure, the NSA is placing the digital transactions of millions of ordinary Americans at risk. eCommerce, online banking, electronic medical records, and numerous other aspects of our digital lives are completely reliant on strong cryptographic technology. I understand the NSA's concern about losing out on valuable intelligence because of encryption, but the trade-offs and risks involved in actively working to undermine the very foundations of the Internet are far too high.

These risks are not purely theoretical either. For a clear example of how deliberately created back doors can be exploited by criminal elements, take a look at the 2005 “Athens Affair.” In this epic security fiasco, hackers infiltrated the infrastructure of the Greek arm of the telecom provider Vodafone. For almost half a year they bugged the phones of over 100 key players in the Greek political scene, including the prime minister, the mayor of Athens, and an employee of the US embassy. They were able to do this by hooking into the same back door used by law enforcement for legal wiretaps. To this day the perpetrators haven’t been caught, and the full extent of their surveillance is not known.

The NSA's programs create the risk that the US will one day be embroiled in an “Athens Affair” of its own unless the agency is curtailed and its abuses reined in. I've read your own opinions on the prior revelations about the NSA's over-reach, and I too recognize that it is important that our intelligence agencies have adequate information to keep Americans safe, while at the same time respecting our right to privacy and liberty. I appreciate that you've supported an amendment clarifying that NSA funds should not be used to target or store the communications of US citizens.

However, in light of the most recent revelations, I do not think that this is enough. I want you to know that during the 2014 elections I will not vote for any candidate that does not do the following:

  • Condemn the NSA’s attempts to deliberately weaken the cryptographic infrastructure our digital lives rely on.
  • Call for a thorough, detailed, and above all transparent review of the NSA's intelligence programs, particularly those centered on interfering with cryptographic technology.
  • Call for legislation preventing the NSA from working with manufacturers and software companies to introduce non-targeted vulnerabilities into commercial hardware and software.
  • Call for the dismissal of the Director of the NSA, Keith B. Alexander and other key NSA officials involved in the decision to focus so much of the agency’s resources on a quest to undermine basic encryption and place all Americans at risk.

I appreciate your consideration on this important issue and hope that you will make choices that will allow me to vote for you in next year’s elections.

30 Days of Scala: It’s a Wonderful Life Part 2

This is a continuation of my thirty days of Scala series about learning the programming language Scala. For a list of all posts, click here.

In my last post I covered the process of setting up my development environment. Now we get down to discussing the actual code.

I worked the Game of Life kata multiple times, using two different approaches. In the first, I focused on creating an actual Cell class which was responsible for handling its own life and death and for keeping track of its neighbors. In the second, I created a game class that managed the states of all the cells and tracked the neighbors through a single Set containing the coordinate pairs of all active cells in the game.
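
To give a flavor of that second approach, here's a rough sketch of the idea (a simplified illustration, not my actual kata code): the whole game state is just a Set of live-cell coordinates, and each new generation is computed from it.

object GameOfLife {
  type Cell = (Int, Int)

  def neighbours(cell: Cell): Set[Cell] = {
    val (x, y) = cell
    (for {
      dx <- -1 to 1
      dy <- -1 to 1
      if (dx, dy) != (0, 0)
    } yield (x + dx, y + dy)).toSet
  }

  // A cell is live in the next generation if it has exactly three live
  // neighbours, or exactly two and is already live.
  def nextGeneration(live: Set[Cell]): Set[Cell] = {
    val candidates = live ++ live.flatMap(neighbours)
    candidates.filter { cell =>
      val n = neighbours(cell).count(live.contains)
      n == 3 || (n == 2 && live(cell))
    }
  }
}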

For most of the katas, I was primarily focused on familiarizing myself with Scala syntax. Scala does a number of things differently than C# or Java. Some of this is cosmetic, like how Scala flips the type and parameter names in a method signature, but some of it is more fundamental. In general, Scala seems to be much less prescriptive about syntax for syntax's sake. The compiler doesn't care if you forget a semicolon at the end of a line. Single-expression methods don't require braces. You don't have to explicitly return values from a method; you can just put the value at the end and Scala will assume you wanted it returned. The list goes on and on. It can be very freeing and makes it simple to write code without worrying about syntactical details, but for someone coming from a more prescriptive language, it definitely hurts readability. The question is whether I'll feel the same way after a month of writing and reviewing katas. To a large extent, I suspect that as my brain gets used to it, the readability will be fine.
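
A couple of trivial, made-up lines illustrate most of these points:

// The parameter's name comes before its type, semicolons are optional,
// and a single-expression method needs no braces or "return" keyword:
// the last expression is the method's value.
def square(x: Int) = x * x

def describe(name: String, count: Int): String = {
  val plural = if (count == 1) "" else "s"
  s"$name has $count neighbor$plural"
}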

However, there are some more fundamental things about Scala that I suspect will not get easier as I get used to it. A good example would be some of the challenges I had getting my automated testing set up. I decided to use the testing framework specs2 after reviewing some of its documentation. In particular, I focused on the acceptance testing syntax documented on their website. The syntax is impressively clean, consisting of basically a block of plain text with test code interpolated in using a custom string interpolation method. They also have a great syntax for doing repeated tests with different inputs. In general, getting a basic test up and running for each of these scenarios was not hard, but when I started to try some more complicated setups that weren't explicitly covered in the documentation, I started running into challenges. Debugging these was extremely hard because specs2 makes heavy use of operator overloading to create its syntax. Looking at the code, I had a very tough time understanding what it was actually doing, even at a high level. I had to dig into the specs2 code on GitHub just to get a basic grasp of the control flow that its very abstract syntax was actually generating.
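
To show what the acceptance syntax looks like, here is roughly the shape of a small spec, modeled on the examples in the specs2 documentation (aliveNextTurn is a stand-in I've made up, not the kata's actual code):

import org.specs2.Specification

class LifeSpec extends Specification { def is = s2"""
 A live cell in the next generation
   survives with two or three live neighbours        $survives
   dies of under-population with fewer than two      $dies
                                                     """

  def survives = aliveNextTurn(alive = true, neighbours = 2) must beTrue
  def dies     = aliveNextTurn(alive = true, neighbours = 1) must beFalse

  // Stand-in for the kata's rule function
  def aliveNextTurn(alive: Boolean, neighbours: Int): Boolean =
    neighbours == 3 || (alive && neighbours == 2)
}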

The issue I was having turned out to be fairly prosaic; it was just an incorrect version number for specs2. I downloaded the right version and everything worked great, but the opaqueness of specs2's operator overloading had me digging into its internals unnecessarily because I feared I was misusing something. However, I don't necessarily disagree with how specs2 did things. There's no arguing that the syntax makes the tests very readable and gets rid of a lot of clutter that does nothing to impart meaning. But it does so at the expense of making the testing framework understandable. In this case that's probably okay; the tradeoff makes sense given how often you'll be reading your tests. But in Scala the power is definitely there to shoot yourself in the foot by misusing these features.

On the plus side, coming from C#, a lot of the functional aspects of Scala felt very familiar. I'm a huge advocate of LINQ and was drawn to functional programming through it before I had even heard the term. The syntax of Scala's functional collection operators is almost identical to LINQ's, with some minor differences in terminology (filter vs. Where, map vs. Select). I definitely used them fairly heavily, particularly in my second implementation of the kata, where essentially all of the work was various forms of collection manipulation.
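
For example, the kind of pipeline I'd write with LINQ in C# carries over almost word for word (a made-up snippet; the Cell case class here is just for illustration):

case class Cell(position: (Int, Int), isAlive: Boolean)

val cells = List(Cell((0, 0), isAlive = true), Cell((0, 1), isAlive = false))

// C#:    cells.Where(c => c.IsAlive).Select(c => c.Position).ToList()
// Scala: the same pipeline, just with different names
val livePositions = cells.filter(_.isAlive).map(_.position)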

In general I found Scala fairly easy to get used to, but I didn't have any aha moments where I saw why it would be a better fit than C#. Of course this isn't surprising, given that this was my first foray into it, and that I was doing code katas that by their very nature are designed to be fairly language-agnostic. I know one major strength of Scala is how effective it's supposed to be when you're handling concurrency and parallel processing, so that may be something I start exploring next.

30 Days Of Scala: It’s a Wonderful Life Part 1

This is a continuation of my thirty days of Scala series about learning the programming language Scala. For a list of all posts, click here.

The first kata I started with for this project is based on Conway's Game of Life. This is actually the first code kata I was ever exposed to, at my first clean code retreat, so it's always held a special place in my heart. Plus, I loved Conway's Life as a kid, when I would manually run games on graph paper while bored during math class. Basically an uber-geeky form of doodling.

If you aren't already familiar with Conway's Game of Life, here's a good explanation. As a quick summary though, it's a 2D grid where, each turn, cells turn on and off based on the state of their neighbors.

For this coding kata, my real goal was to learn how to set up my development environment, so I actually gave myself much longer than 30 minutes, including research on that topic. I started out looking into an IDE like IntelliJ, since that seemed very similar to the Visual Studio experience I'm familiar with on the .NET side. However, as I did my research and played with IntelliJ, I found it was abstracting me away from key aspects of the Scala build process, particularly SBT.

SBT (Simple Build Tool) is Scala's build management tool, and a unique beast. It's basically a dedicated console that allows you to handle compiling, dependency management, and continuous testing from one place. Plus it's got an extensible plugin model, so you can add additional functionality on your own. I've seen tools with similar goals, like Grunt in the Node.js space, but SBT feels like a more sophisticated, comprehensive implementation. I didn't want some sort of IDE abstracting me away from such a powerful tool, particularly when the documentation I was reading kept highlighting the key role SBT plays in Scala development.
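
For a kata-sized project, the whole build definition is only a few lines of build.sbt (a sketch; the version numbers are simply the ones that were current for me, not a recommendation):

// build.sbt: one file drives compilation, dependency management, and testing
name := "game-of-life-kata"

scalaVersion := "2.10.2"

// Pull specs2 in for the test configuration only
libraryDependencies += "org.specs2" %% "specs2" % "2.2" % "test"

From the SBT console, test runs the suite once, and ~test re-runs it every time a source file changes, which is the continuous-testing loop I mentioned.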

So instead I decided to go with one of the more popular minimalistic code editors out there, Sublime. If you aren't familiar with Sublime, it's a highly extensible editor with a wide range of plugins for every imaginable language. It's not a turnkey solution like some of the bigger IDEs, but it's pretty easy to install a key set of plugins to create a first-class, IDE-like experience for Scala. Here are the key plugins I found:

Sublime ENSIME – A port of a syntax highlighting/code completion add-on for Emacs. It's a little complicated to set up, but it adds a lot to the experience, making it easier to catch and fix errors and to discover language features. It's not 100% as good as Visual Studio's IntelliSense, but it still manages to fill that niche fairly well.

Sublime SBT – A plugin that allows you to open an SBT console in a pane at the bottom of Sublime. It also offers quick keyboard access to common commands, like build or start continuous testing. Easy to set up and definitely a must-have.

With this setup, I found myself highly productive and happy with the quick feedback loop I was getting during test and debug sessions.

In addition to Sublime, I also found a great utility to help jump-start each Scala project from a series of code templates: giter8. Giter8 is a console app that can download project templates for any programming language from GitHub and use them to quickly create basic boilerplate for your project. It's an awesome idea, since GitHub is a natural repository for those sorts of things, and since starting without any boilerplate is always a little tricky, especially if you're just starting with the language.

In the next section I’ll talk more about the actual coding experiences I had while working this kata.

30 Days of Scala

It’s been a while since I’ve posted here, but I’ve just started on a new project that seemed like it would be a crime not to do some blog posts on. For about a week and a half I’ve been teaching myself Scala, in an attempt to branch out from the .NET space and get back to some of my open source roots.

I originally chose Scala because I was looking for something that was statically typed, but not C# or Java. I have nothing against dynamically typed languages, but I've already played around with Node.js and Ruby and I wanted something different. Scala seemed like a good and interesting fit.

I started out by trying to read some basic Scala tutorials. The stuff put out by the Twitter folks, like Scala School or Effective Scala, was good, but I still felt like the language just wasn't resonating with me. Usually I try to learn a new language in the context of some sort of big project, which gets me coding, but usually results in a somewhat spotty acquisition, focused around whatever pieces are important for the project at hand. So this time I decided to try something different. For the next month or so I'm going to try to do a series of code katas in Scala.

If you aren't familiar with the term, a code kata is basically a simple, short coding exercise, meant to be done in 30 minutes to an hour. The term is borrowed from martial arts, and it literally means "form" in Japanese. The idea is that it's a set of repeated movements meant to systematically train part of the larger art or discipline. Much like practitioners of martial arts repeat the same motions over and over again, the point of code katas is to solve the same problems (or the same sorts of problems) many times to help build programming skill systematically.

Once I decided to take this approach, it occurred to me that blogging would be a natural way to help the process of synthesizing that information. I figured this might help other people picking up the language (particularly if they're coming from C# like I am) and might draw current Scala users into a dialog that could lead to even more learning for me. So for the next 30 days I plan to consistently code in Scala or write about coding in Scala and see where that takes me. I don't plan to be super formal about things: sometimes I'll spend longer than an hour on a problem, sometimes less, but I do plan to do something daily as my schedule allows. I'll also post all of my code on GitHub to help those following along.

Live.js and Visual Studio Part 3 – Automated Testing

This post is part of a series. Click for Part 1 or Part 2

In the last two posts I explored how Live.js can help you do client-side testing, particularly for responsive layouts. Now we'll be looking at another way that live.js can help out in your client-side development.

But before we can do that, we have to take a brief foray into the world of JavaScript-based unit testing. I'm not going to try to give a full treatise on the subject, just a brief introduction so that we can see how live.js can help with this part of your development workflow too.

If you aren't familiar with client-side unit testing, don't sweat it; it's pretty straightforward. If you want a good overview, check out Smashing Magazine's intro or this great video on the QUnit framework. At a high level, though, it looks something like this.

1. Just like with your backend code, JavaScript testing starts with how you structure your code in the first place. Focus on small methods with minimal dependencies that return values you can validate.

2. There are a lot of JavaScript unit testing frameworks out there, but they all generally work the same way. Tests are functions passed into a method defined by the framework. To run your tests, you build a simple HTML page with script references to the framework library, your test code, and your application code. When you load the page, the framework runs the tests and manipulates the HTML to report your results.

With this high-level understanding, it's pretty straightforward to see how live.js can help on this front. If you add live.js to the HTML page that runs your tests, then that page can refresh automatically and re-run your tests every time your test code or application code changes.

Note that your automated testing page doesn't have to be static HTML either. For example, in MVC we can set up a TestsController and a Tests view that look a little like this.

Controller

using System.IO;
using System.Linq;
using System.Web.Mvc;

public class TestsController : Controller
{
    //
    // GET: /Tests/

    public ActionResult Index()
    {
        // Gather every .js file in the spec folder, and derive the matching
        // system-under-test file names by stripping the "_spec" suffix
        var testFiles = Directory.EnumerateFiles(Server.MapPath("~/Scripts/spec")).Where(f => f.EndsWith(".js"));
        var sutFiles = testFiles.Select(s => s.Replace("_spec", ""));
        ViewBag.SutFiles = sutFiles;
        ViewBag.TestFiles = testFiles;

        return View();
    }
}

View

@using System.IO
<!DOCTYPE html>
<html>
  <head>
    <meta name="viewport" content="width=device-width" />
    <title>Tests</title>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
    <script src="/Scripts/spec/lib/<your-testing-framework>.js"></script>
    @foreach (var fullpath in ViewBag.SutFiles)
    {
        var fileName = Path.GetFileName(fullpath);
    <script src="/Scripts/@fileName"></script>
    }
    @foreach (var fullpath in ViewBag.TestFiles)
    {
        var fileName = Path.GetFileName(fullpath);
    <script src="/Scripts/spec/@fileName"></script>
    }
    <script>
        onload = function () {
            var runner = mocha.run();
        };
    </script>
  </head>
  <body>
  </body>
</html>

The basic idea is that we have a controller that builds up a list of files by looking in a specific folder where we put all of our tests. It passes every file it finds along to the view, which then renders a set of script reference tags. The result is that our page dynamically adds all the assets it needs to test our JavaScript. Then live.js does its thing and automatically refreshes to run the tests any time there is a change.

TDD: A Case Study with Silverlight

One of my goals for the new year was to follow TDD on a real project at work. I actually got my chance very early this year with a fairly basic Silverlight project. The project was short and simple, basically a fancy list of links and resources managed in SharePoint and exposed in a Silverlight interface allowing a variety of queries and searches. It was large enough to be more than just a toy project, but small enough that I didn't worry about doing much damage by trying out TDD for the first time.

I learned a lot, and I think the work I did makes a good case study for someone interested in getting started with TDD. In my next few blog posts, I plan to walk readers through my development environment, the specifics of the techniques I followed, and the lessons I learned.

The Environment

As I said at the start, the project was written in Silverlight. For my testing I used the Silverlight Unit Test Framework, which allows for asynchronous testing, vitally important for any web-service-based integration testing. On top of that I used a fantastic continuous test runner named StatLight. StatLight is a small console application that automatically runs your unit tests every time your test project's .xap file changes. This means that running your tests is as easy as hitting Ctrl + Shift + B to build the project, and StatLight does the rest. I quickly got in the habit of building after every code change so that I was getting instant feedback on what I was doing.

The Process

Since this was an experiment, I tried to stick as close to the rules of TDD as possible. This meant I never wrote a line of code until I had already written a test covering it, and that my tests were extremely granular. Even simple tasks like parsing a single line of XML returned from a web service had a test devoted to them. I also tried not to overthink some of the details of my design, instead trying to put off design decisions until I had already written the test necessitating them.

The Result

Overall, my experience was hugely positive. I'm convinced that TDD makes me more effective and productive, and I want to leverage it wherever I can in the future. In general I found there were 3 major benefits to TDD, and I learned 3 lessons about how to do TDD better next time. Let's start with the good.

Flow – It was shocking how good it felt to be able to code without stopping. With TDD my brain stayed in code mode for hours at a time. Usually, I slip in and out of this mode throughout the day, especially when I'm manually testing code I've just written. With TDD, that never happened, and it made my concentration and focus 20x better. When I'm manually testing, there are all sorts of interruptions and opportunities for distraction. Waiting for the page I'm testing to load? I'll just go browse Google Reader for a bit. Stepping through a tedious bit of code so I can examine the value of one variable? Let me just skim this email while I do that. With TDD, though, my brain never got an opportunity to slip away from the task at hand. Throughout the day I was laser-focused on whatever I was doing.

Similarly, if I did have to step away for an interruption (meetings, lunch, helping another dev, etc.), it was easy to get back into the flow and figure out where I was. Just hit Ctrl + Shift + B and see what test failed. Since each test was so small and covered such a small area, I didn't have a ton of details about what I was doing slip away when I got distracted.

Design – I didn’t totally abandon upfront design, but I did do less design than I usually do. I mostly sketched out the layers at the boundaries of the application, the pieces that interacted with the user and the pieces that interacted with the data source, SharePoint, since both of those were external pieces that I couldn’t exercise complete control over. Once I had those layers designed though, I let TDD evolve the internal architecture of the application, which actually led to a couple of neat design decisions I don’t think I would have come up with otherwise. The coolest of these was how I handled loading up a given model for a given page. In our application the same view could be wired up to a variety of different models. The specific model depended on the url the user used. I ended up with two separate objects which handled this process, the Model Locator which parsed the incoming URL, and the Model Map, which tied each model to a path-like-string which represented how the data was maintained in the data store. The Model Locator would use the URL to extract the key elements to identify the right model, and then pass those into the Model Map, which would use those elements to find the right model by building the path representation for the model. The end result was a nice decoupling between the path structure the user used to browse to a model, and the way it was actually handled by the data layer. If I had been designing up front, I am almost positive I would have missed this approach, and put too much of the logic into the Model Locator itself, tightly coupling the data structure and the navigation structure. Instead, I put off making any decisions about how the Model Locator interacted with the data until the last minute, and by then it was clear that a new class would improve the design significantly.

Refactoring Ease of Mind – Not everything about this project was perfect. In fact, towards the middle there were some significant pain points because I had to be temporarily put on another, higher-priority project. To keep things moving, another developer was assigned to the project. There wasn't enough time invested in communication, and as a result he ended up taking a different approach in some key areas and duplicating some work I'd already done. By the time I came back, his code was wired up to the UI, and it didn't make sense to try to reincorporate the pieces of my code that were performing some of the same functions. Unfortunately, there were a number of pieces that handled things like search and model location that were still expecting the classes defined in my code. All of those had to be modified to work with his architecture instead.

This would have been a really scary refactoring to do in the timeline we had, except for the automated tests I already had covering all of my code. With only a few minor tweaks, that test suite was modified to test my search services using his new classes, and we had extremely detailed information about where my code was now broken. After less than a day of work, we'd switched everything over without a hitch. And because of the tests, we had confidence that everything would work fine.

I won’t say much more in summary, because I think the benefits speak for themselves. Next post, I’ll talk about what I’d do differently next time, and how I plan to get better at TDD in the future.

Custom assertions with should.js

Lately I've been playing with Node.js and Vows, doing some TDD on a side project at home. I love the very readable syntax of should.js, which lets you frame your assertions as almost natural English. However, pretty early on I realized I wanted to add my own custom assertions to the should object to abstract away some of the messy details of my testing and keep the code readable. In the past I've used custom asserts with .NET for testing, and I find they allow you to quickly express domain-specific concepts even inside your tests, for better readability and clarity.

One particular example was a test where I wanted to make sure the elements in a <ul> were the same as those in a JavaScript array. Rather than trying to parse out the list into another array and do a comparison in the test body, I wanted to have an assertion that looked something like $("#list").children().should.consistOfTheElementsIn(array), where consistOfTheElementsIn would handle the parsing and comparison.

After a little bit of playing around, I worked out a pretty simple way to do this. Basically I create a new node module called customShould.js. customShould.js requires should and then exports the should object. Additionally, customShould adds a new method to the "Assertion" prototype defined by should.js. Here's the code:


// customShould.js -- re-export should, plus our own custom assertion
var should = require('should');

exports = module.exports = should;

// Passes when the <li> elements in the markup under test contain exactly
// the same values as the expected array (order-insensitive).
should.Assertion.prototype.aHtmlListThatConsistOf = function (list) {
  var compareArrays = function (first, second) {
    if (first.length !== second.length) { return false; }
    var a = first.slice().sort(),
        b = second.slice().sort();
    for (var i = 0; i < a.length; i++) {
      if (a[i] !== b[i]) { return false; }
    }
    return true;
  };

  // this.obj is the value the assertion was called on
  var matches = this.obj.match(/<li>.*?<\/li>/gi) || [];
  var items = matches.map(function (m) {
    return m.replace("<li>", "").replace("</li>", "");
  });
  this.assert(compareArrays(items, list), "lists do not match");
};

It's all pretty straightforward. Then to use your custom asserts, you just require your customShould.js module instead of the normal should module.

Automatic Integration Testing With Joomla

Lately, I've been deviating from my .NET ways to do a small website for my brother-in-law in my spare time. He works for an artistic ironworks company, and they were looking for a simple visual refresh to replace their '90s-era MS FrontPage website.

I haven't had much experience with Joomla, but I ended up choosing it because they have a DreamHost account and Joomla is a one-click install there. I knew it was a big name in the CMS world, and even knew someone who makes his living off Joomla sites, so I figured it had to be pretty good. Frankly, after building out much of this site in it, I'm not impressed. The UI is clunky and not even intuitive for a techy like me. The documentation is sparse at the API level. And the extension development model seems to rely far too heavily on static methods and singletons. But what irked me the most about Joomla is how difficult it was to get a solid automated integration test up and running. Hopefully what I document here will save someone else my pain later.

Before getting to the technical how-to, though, a little bit of background on why I think this is important. In the last year I've become a huge proponent of automated testing. In general, when I start on a new project or feature now, the first thing I do is spin up my test project. This is especially true when I'm integrating with some sort of external framework, particularly when that framework lacks solid documentation. A good set of quickly executing automated integration tests is the fastest way to check my assumptions about how a framework behaves against reality.

So that's what I set out to create when I realized I would need to develop a Joomla module. The goal of my module was simple. I was using the K2 Joomla extension to let my users create photo galleries. I wanted a rollup module that would take the first photo from every gallery on the site and render a slideshow out of them, with links back to each individual gallery. Following the guides I found on module development, I created a helper.php file to do the heavy lifting. Then I set out to create a test project to test that implementation.

The first sign that something was wrong was that I couldn't find anyone else on Google who had tackled the same problem. There was a little bit about building custom applications on top of Joomla, but nothing about testing. So I figured I'd just set up PHPUnit and hope for the best.

Right off the bat, the framework started fighting me. PHPUnit failed with no error message, just silently not running. I went back to the article on custom applications and that got me part way there, but I still had to struggle with a whole slew of missing dependency and undefined variable issues.

Eventually I got it to work with the following lines at the start of the file.

// Constants Joomla expects to be defined before it will load anything
define('_JEXEC', 1);
define('JPATH_BASE', '/var/www/');
define('JPATH_PLATFORM', JPATH_BASE . '/libraries');
define('DS', DIRECTORY_SEPARATOR);

// Bootstrap the framework and spin up a site application
require_once JPATH_BASE . '/includes/defines.php';
require_once JPATH_BASE . '/includes/framework.php';
jimport('joomla.environment.request');
jimport('joomla.application.helper');
jimport('joomla.application.application');
JFactory::getApplication('site');

// Joomla assumes it is handling a web request, so fake the host
$_SERVER['HTTP_HOST'] = "localhost";

require('');
const K2_JVERSION = 16;

Even this didn't give me everything I needed. I kept getting infinite loop errors. Googling for that led me to a link on GitHub where somebody had fixed a similar error in Joomla. It turns out the actual error was in Joomla's exception-throwing mechanism. Whenever Joomla tried to throw an error in the integration test, it got caught in an infinite loop and just reported the generic infinite loop exception.

Since this testing was on a dev machine, I decided the easiest fix would be to edit the Joomla files themselves to print out the stack trace whenever an infinite loop was detected. The file I edited was /libraries/joomla/error/error.php, replacing the generic error message on line 201 with code to print a full backtrace:
jexit(JText::_('JLIB_ERROR_INFINITE_LOOP')."\n".$exception->getMessage()."\n".$exception->getTraceAsString()."\n".$exception->getLine()."\n");

Only after all that could I successfully run an automated integration test against Joomla.

I don't want to criticize a platform I've done so little with, but the complete lack of documentation on basic automated testing doesn't speak highly of the development environment Joomla has created. I hope this contribution helps someone else in my boat at least get started, and that the Joomla devs start thinking about how to bake this sort of testing process into the platform more directly.

Completely Controlling Tab Order in Silverlight

I've just started a new job as a more pure software developer (as opposed to PM/BA), so you can expect to see more blog posts of a purely technical nature here in the coming months. Today, we'll be taking a look at controlling tab order in complex Silverlight forms.

The Problem

I was inspired to write this post after hours of fruitless googling while building a Silverlight data entry form with a series of dynamic control lists. The client wanted a form whose questions were populated from a separate database. That requirement in and of itself is fairly straightforward, just a matter of using an ItemsControl or one of its descendants. However, what I found is that these controls generated a whole series of extra hidden controls that captured tabs, confusing the user and slowing down the data entry process. In searching for a solution I found little documentation on the question of tab order in general. Eventually I managed to piece together a solution and an understanding that seemed like good blogging material. In this post I hope to offer a fairly comprehensive view of the Silverlight tab and focus model, as well as provide techniques to debug and control tab and focus issues.

The Preliminaries

Before we get into the real meat of the post, we need to spend a little time reviewing the key pieces of Silverlight's tab and focus model. On the surface, the model appears pretty simple. Most controls have the "IsTabStop" property, which controls whether they participate in tab order at all. For more fine-grained control, you can use the "TabIndex" property to set an explicit order for items in the same container control (like a StackPanel or Grid). Finally, if you have a container control, you can use the "TabNavigation" property to affect how the tab order treats the children of that container. The default value, "Local", allows you to tab inside the container; "Once" means only the parent container participates in tab order; and "Cycle" means the tab order will not leave the container unless the user clicks outside of it.

For most situations this will give you pretty fine-grained control over keyboard tabbing and focus. However, once you have an app where you're dynamically generating controls using ItemsControls, ListBoxes, TabControls, etc., things get more complicated. Certain composite controls in the Silverlight Toolkit also cause problems, since each of the controls that make them up may end up claiming an individual tab. And if you've got container controls nested inside each other, it may be difficult to simply set the tab index across the board.

Knowing is Half the Battle

What makes debugging these issues especially difficult is that Silverlight doesn't really make it very clear which control has focus. Most UI elements will get a thin blue border when they have keyboard focus, but some controls, like ContentControl or ListBoxItem, have no visual indication they've been selected. To resolve this, I ended up creating a static helper class based off of this blog post: http://codeblog.larsholm.net/2009/12/focushelper/. The code of the class is below:

using System;
using System.Net;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Documents;
using System.Windows.Ink;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Animation;
using System.Windows.Shapes;
using System.Threading;
using System.Diagnostics;

public static class FocusHelper
{
    public static void Start(TextBox debugBox)
    {
        focusBorderBrush = new SolidColorBrush(Colors.Red);
        focusBackground = new SolidColorBrush(Colors.Red);
        focusBackground.Opacity = 0.1;

        focusTimer = new Timer(new TimerCallback((o) =>
        {
            try
            {
                System.Windows.Deployment.Current.Dispatcher.BeginInvoke(() =>
                {
                    object temp = null;

                    if (System.Windows.Application.Current.Host.Content.ZoomFactor == 1)
                        temp = FocusManager.GetFocusedElement();

                    if (temp != lastFocus)
                    {
                        if (temp is Control)
                        {
                            Control conTemp = temp as Control;
                            var conTempParent = conTemp.Parent;
                            //Give the last control back its original color
                            if (lastFocus != null)
                            {
                                lastFocus.BorderBrush = lastBrush;
                                lastFocus.BorderThickness = lastThickness;
                                lastFocus.Background = lastBackground;
                            }

                            lastFocus = temp as Control;
                            lastBrush = lastFocus.BorderBrush;
                            lastThickness = lastFocus.BorderThickness;
                            lastBackground = lastFocus.Background;

                            lastFocus.BorderBrush = focusBorderBrush;
                            lastFocus.BorderThickness = new Thickness(1);
                            lastFocus.Background = focusBackground;

                            Debug.WriteLine("Current Focus: Control " + conTemp.Name + " of Type " + conTemp + " in " + conTempParent);

                        }
                    }
                });
            }
            catch
            {
            }

        }), null, 0, 100);
    }

    private static System.Threading.Timer focusTimer;
    private static Control lastFocus = null;
    private static Thickness lastThickness;
    private static Brush lastBrush;
    private static Brush lastBackground;
    private static Brush focusBorderBrush;
    private static Brush focusBackground;
}

Simply put, once the Start() method of this class has been invoked, it will constantly poll Silverlight's FocusManager.GetFocusedElement() method to determine which control has focus. It will put a more noticeable red border around that control, and it will write the control's name (if it has one), its type, and the type of its parent to the Output window of Visual Studio. In my case, I just put FocusHelper.Start() in the Load method of the view I was debugging and wrapped it in a compiler directive to ensure it's only run in debug mode.

With this in place, it's now easy to figure out what controls may be stealing tab order. In my case the two main culprits were the ListBoxItem and the ContentControl.

Styling More Than Style

Once I knew what controls were disrupting my tab order, the next step was to set their "IsTabStop" property to false. The only problem is that these controls weren't declared anywhere in the XAML; they were being autogenerated by the templates of Silverlight's ListBox and ItemsControl.

In order to set the properties of those autogenerated controls, we can make use of a new Silverlight 4 feature, implicit styling. Implicit styling allows us to declare a style and have it automatically apply to all controls of a certain type that are within the style's scope. What's important to note about styling in Silverlight is that it goes beyond just visual properties. A style can be used to set just about any property of a control, including setting "IsTabStop" to false.

With implicit styling, the concept of scope is key. Depending on the parent control you declare the style inside of, it will only affect controls that are children of that parent. In my case, I specifically wanted the controls that were children of my ListBoxes and ItemsControls. To get at these, I declare my styles under the Resources property of the relevant ListBoxes and ItemsControls, like so:

<ItemsControl IsTabStop="False" ItemsSource="{Binding Path=Items}">
	<ItemsControl.Resources>
		<Style TargetType="ContentControl">
			<Setter Property="IsTabStop" Value="false"/>
		</Style>
	</ItemsControl.Resources>
	<ItemsControl.ItemTemplate>

Now all of those autogenerated controls will end up with their IsTabStop property appropriately set to false.

What’s Missing

Now there's one thing I haven't covered here, which is controlling the tab index when you have dynamically generated controls. Fortunately, in my case, the order of the controls on the page matched the order I wanted the user to tab through them. I can foresee some complicated scenarios where you might want to autogenerate that number for a set of dynamic controls. I'll leave that one as an exercise for the reader, as my math professor used to say.

Workflow Fantasy

Workflow. Everyone in the tech world has heard the word. If you're an analyst, you're trained and told to "figure out the client's workflow." You talk, sketch arrows on whiteboards, ask the question "then what?" like 50 times, and eventually produce a series of massive Visios. The client lavishes you with praise, saying that you've managed to really capture their business. Everyone's happy, and the team goes off and builds the application.

Then testing starts. It's innocent at first, with little discrepancies where the users find a few little exceptions they forgot to mention during the analysis process. Soon, though, it's a full-blown disaster, where everyone is growing tired of the phrase, "Well, that's how it works most of the time, but sometimes…." Now there's a growing realization that the software based on the vaunted workflows you discovered misses out on about 30%-40% of what users actually do on a regular basis.

It's a familiar story to anyone involved with a software project. There's this disconnect between the fantasy workflow that people think makes up their jobs, and the day-to-day activities that actually do. It's tempting to blame the users for not understanding their own jobs, but I think the problem is more fundamental, and it's with the whole concept of workflow as applied to the modern job.

To understand why, let's take a look at what work is. Basically, all work can really be broken down into two categories. There's doing things, and then there's solving problems. Doing things is the easy, concrete stuff. Write up the minutes of the meeting for your boss, take that check from your grandmother to the bank, go running for five miles: all of these fit in the doing-things category. Getting them done is just a matter of setting aside time and making sure you have the right tools for the task at hand. When you start the task, you know about how long it will take and can basically envision, in detail, what you will actually be doing.

Solving problems, on the other hand, is a very different matter. Solving problems is a matter of breaking complex goals down into a list of things that fit in the "doing things" category. When you start to solve a problem, you really don't have a good idea of how long it will take. You don't know who you'll have to talk to, what you'll have to read, what tools you'll need, anything. You're staring at a big black question mark, trying to figure out what comes next. It involves a lot of staring out the window, mulling over problems, talking to people, and rewriting or redrawing things: in general, tasks that often feel somewhat frustrating or unproductive.

So what happens when you ask someone whose job is to solve problems to "describe their workflow" or, more commonly, "so what do you do during the day"? Their mind immediately goes to the concrete, easily visualized and remembered activities of their job, all the doing-things stuff. They don't describe the problem-solving stuff because they may not remember it, or they may not have the language to describe what is often an intuitive, somewhat haphazard process. Even if they do talk about problem solving, it usually doesn't make it into the analyst's workflow diagrams, because workflow is about doing things. There's no space on the diagram for problem solving.

What really makes this an issue is that more and more of modern work is a matter of problem solving and less and less about doing things. So when we ask clients to describe their workflow, what we're really doing is getting them to describe the boring, pointless, and most often unimportant parts of their job.

What's a better strategy? Don't ask people what they do; ask them what problems they solve. A question as direct as "what problems do you solve on a daily basis?" will reveal a lot more about their real work day than asking them for a blow-by-blow account of it. You'll hear in great detail about the exceptions to the rule and the tricky, important issues, and you'll have a much richer understanding of how to make software that gives them a leg up on those problems.