Letter to my representative on the NSA

Below is a letter to my representative in the US House of Representatives that I wrote in response to the most recent revelations on the NSA's abuses and over-reach, as documented here in the New York Times. I sent this letter via snail mail, but I wanted to put it on the web as well to galvanize others. I've pasted it in its entirety below.

Representative Wenstrup,

I am writing with deep concern about the recent revelations regarding the NSA's unconstitutional over-reach and abuses. The stories from the past few months about vast surveillance programs and the over-broad collection of American citizens' communications have left me very unsettled and distrustful of the American government. However, those feelings pale in comparison to my shock, disbelief, and anger at the latest revelations about the NSA's top secret "Bullrun" and "Sigint" programs.

As detailed in a September 5th article in the New York Times (N.S.A. Foils Much Internet Encryption), these programs include the following actions:

  • Inserting vulnerabilities into commercial encryption systems
  • Developing techniques to defeat key encryption schemes such as HTTPS, SSL, and VPN
  • Stealing encryption keys from major Internet companies

These actions are far more dangerous than the over-broad surveillance we've already been debating, because they significantly weaken the cryptographic infrastructure upon which our entire digital economy is built. By intentionally introducing backdoors into key cryptographic technologies, the NSA exposes our entire communications and networking systems to malicious hacking by criminal and foreign elements.

As a professional computer engineer, I am keenly aware of how important this cryptographic infrastructure is to our daily lives. By working to weaken this infrastructure, the NSA is placing the digital transactions of millions of ordinary Americans at risk. eCommerce, online banking, electronic medical records, and numerous other aspects of our digital lives are completely reliant on strong cryptographic technology. I understand the NSA's concern about losing out on valuable intelligence because of encryption, but the trade-offs and risks involved in actively working to undermine the very foundations of the Internet are far too high.

These risks are not purely theoretical either. For a clear example of how deliberately created back doors can be exploited by criminal elements, take a look at the 2005 “Athens Affair.” In this epic security fiasco, hackers infiltrated the infrastructure of the Greek arm of the telecom provider Vodafone. For almost half a year they bugged the phones of over 100 key players in the Greek political scene, including the prime minister, the mayor of Athens, and an employee of the US embassy. They were able to do this by hooking into the same back door used by law enforcement for legal wiretaps. To this day the perpetrators haven’t been caught, and the full extent of their surveillance is not known.

The NSA's programs create the risk that the US will one day be embroiled in an "Athens Affair" of its own unless the agency is curtailed and its abuses reined in. I've read your own opinions on the prior revelations about the NSA's over-reach, and I too recognize that it is important that our intelligence agencies have adequate information to keep Americans safe, while at the same time respecting our right to privacy and liberty. I appreciate that you've supported an amendment clarifying that NSA funds should not be used to target or store the communications of US citizens.

However, in light of the most recent revelations, I do not think that this is enough. I want you to know that during the 2014 elections I will not vote for any candidate who does not do the following:

  • Condemn the NSA’s attempts to deliberately weaken the cryptographic infrastructure our digital lives rely on.
  • Call for a thorough, detailed, and above all transparent review of the NSA’s intelligence programs, particularly those centered on interfering with cryptographic technology
  • Call for legislation preventing the NSA from working with manufacturers and software companies to introduce non-targeted vulnerabilities into commercial hardware and software
  • Call for the dismissal of the Director of the NSA, Keith B. Alexander, and other key NSA officials involved in the decision to focus so much of the agency's resources on a quest to undermine basic encryption and place all Americans at risk.

I appreciate your consideration on this important issue and hope that you will make choices that will allow me to vote for you in next year’s elections.

30 Days of Scala: It’s a Wonderful Life Part 2

This is a continuation of my thirty days of Scala series about learning the programming language Scala. For a list of all posts, click here.

In my last post I covered the process of setting up my development environment. Now we get down to discussing the actual code.

I worked the Game of Life kata multiple times, using two different approaches. In the first, I focused on creating an actual Cell class, which was responsible for handling its own life and death and for keeping track of its neighbors. In the second, I created a game class that managed the states of all the cells and tracked neighbors through a single Set containing the coordinate pairs of all live cells in the game.
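
To give a feel for that second approach, here is a minimal sketch along the lines of what I ended up with (the names and details are illustrative rather than my exact kata code):

object Game {
  type Cell = (Int, Int)

  // All eight coordinates surrounding a given cell.
  private def neighbors(cell: Cell): Set[Cell] = {
    val (x, y) = cell
    (for {
      dx <- -1 to 1
      dy <- -1 to 1
      if !(dx == 0 && dy == 0)
    } yield (x + dx, y + dy)).toSet
  }

  // The game state is just the set of live cells; each generation is a
  // pure transformation of that set.
  def nextGeneration(live: Set[Cell]): Set[Cell] = {
    // Only live cells and their neighbors can possibly be alive next turn.
    val candidates = live ++ live.flatMap(neighbors)
    candidates.filter { cell =>
      val liveNeighbors = neighbors(cell).count(live.contains)
      liveNeighbors == 3 || (liveNeighbors == 2 && live.contains(cell))
    }
  }
}

The nice side effect of this representation is that the grid is effectively unbounded; nothing ever needs to know its dimensions.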

For most of the katas, I was primarily focused on familiarizing myself with the Scala syntax. Scala does a number of things differently than C# or Java. Some of it is cosmetic, like how Scala flips the order of parameter names and types in a method signature, but some of it is more fundamental. In general, Scala seems to be much less prescriptive about syntax for syntax's sake. The compiler doesn't care if you forget a semicolon at the end of a line. Single-expression methods don't require braces. You don't have to explicitly return values from a method; you can just put the value at the end and Scala will assume you wanted it returned. The list goes on and on. It can be very freeing and makes it simple to write code without worrying about syntactical details, but for someone coming from a more prescriptive language it definitely hurts readability. The question is whether I'll feel the same way after a month of writing and reviewing katas. To a large extent I suspect that as my brain gets used to it, the readability will be fine.
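
As a contrived illustration of a few of those differences at once (my own example, not code from the kata):

class Cell(val x: Int, val y: Int) {
  // No semicolons, no braces around the single-expression body, and no
  // explicit "return": the last expression is the method's result. Note
  // also the name-then-type order in the parameter list and return type.
  def isNeighborOf(other: Cell): Boolean =
    math.abs(x - other.x) <= 1 &&
      math.abs(y - other.y) <= 1 &&
      !(x == other.x && y == other.y)
}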

However, there are some more fundamental things about Scala that I suspect will not get easier as I get used to them. A good example would be some of the challenges I had getting my automated testing set up. I decided to use the testing framework specs2 after reviewing some of its documentation. In particular I focused on the acceptance testing syntax documented on their website. The syntax is impressively clean, consisting of basically a block of plain text with test code interpolated into it using a custom string interpolation method. They also have a great syntax for doing repeated tests with different inputs. In general, getting a basic test up and running for each of these scenarios was not hard, but when I started to try some more complicated setups that weren't explicitly covered in the documentation, I started running into challenges. Debugging these was extremely hard because specs2 makes extremely heavy use of operator overloading to create its syntax. Looking at the code, I had a very tough time understanding what it was actually doing, even at a very high level. I had to dig into the specs2 source on GitHub to get even a basic grasp of the control flow that specs2's very abstract syntax was actually generating.
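
For reference, here is roughly what the acceptance style looks like based on my reading of the specs2 documentation; the spec below is a sketch written against the Game object from the sketch above, not a verbatim excerpt from my kata:

import org.specs2.Specification

class GameOfLifeSpec extends Specification { def is = s2"""
  A live cell with two live neighbours survives       $survives
  A dead cell with three live neighbours comes alive  $isBorn
  """

  // A vertical blinker: (0,1) has two live neighbours, (1,1) has three.
  val blinker = Set((0, 0), (0, 1), (0, 2))

  def survives = Game.nextGeneration(blinker) must contain((0, 1))
  def isBorn   = Game.nextGeneration(blinker) must contain((1, 1))
}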

The issue I was having turned out to be fairly prosaic: it was just an incorrect version number for specs2. I downloaded the right version and everything worked great, but the opaqueness of specs2's operator overloading had me digging into its internals unnecessarily because I feared I was misusing something. However, I don't necessarily disagree with how specs2 did things. There's no arguing that the syntax makes the tests very readable and gets rid of a lot of clutter that does nothing to impart meaning. But it does so at the expense of making the testing framework itself harder to understand. In this case that's probably okay; the tradeoff makes sense given how often you'll be reading your tests. But in Scala the power is definitely there to shoot yourself in the foot by misusing these features.

On the plus side, coming from C#, a lot of the functional aspects of Scala felt very familiar. I'm a huge advocate of LINQ and was drawn to functional programming through it before I had even heard the term. The syntax of Scala's functional operators for collections is almost identical to LINQ's, with some minor differences in terminology (filter vs. Where, map vs. Select). I definitely used them fairly heavily, particularly in my second implementation of the kata, where essentially all of the work was various forms of collection manipulation.
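
For instance, a pipeline like this (an illustrative snippet, not from the kata) reads almost exactly like the equivalent LINQ query:

case class Cell(position: (Int, Int), isAlive: Boolean)

val cells = List(Cell((0, 0), isAlive = true), Cell((0, 1), isAlive = false))

// C#/LINQ:  cells.Where(c => c.IsAlive).Select(c => c.Position)
// Scala:    the same pipeline, just with different operator names
val livePositions = cells.filter(_.isAlive).map(_.position)  // List((0,0))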

In general I found Scala fairly easy to get used to, but I didn't have any aha moments where I saw why it would be a better fit than C#. Of course this isn't surprising, given that this was my first foray into it, and that I was doing code katas that by their very nature are designed to be fairly language agnostic. I know one major strength of Scala is how effective it's supposed to be when you're handling concurrency and parallel processing, so that may be something I start exploring next.

30 Days Of Scala: It’s a Wonderful Life Part 1

This is a continuation of my thirty days of Scala series about learning the programming language Scala. For a list of all posts, click here.

The first kata I started with for this project was based on Conway's Game of Life. This is actually the first code kata I was ever exposed to, at my first clean code retreat, so it's always held a special place in my heart. Plus, I loved Conway's Life as a kid, when I would manually run games on graph paper while I was bored during math class. Basically an uber-geeky form of doodling.

If you aren't already familiar with Conway's Game of Life, here's a good explanation. As a quick summary, though, it's a 2D grid where each turn cells turn on and off based on the state of their neighbors.

For this coding kata, my real goal was to learn how to set up my development environment, so I actually gave myself much longer than 30 minutes, including research on that topic. I started out looking into an IDE like IntelliJ, since that seemed very similar to the Visual Studio experience I'm familiar with on the .NET side. However, as I did my research and played with IntelliJ, I found it was abstracting me away from key aspects of the Scala workflow, particularly SBT.

SBT (Simple Build Tool) is Scala's build management tool, and a unique beast. It's basically a dedicated console that lets you handle compiling, dependency management, and continuous testing from one place. Plus it's got an extensible plugin model, so you can add additional functionality on your own. I've seen tools with similar goals, like Grunt in the Node.js space, but SBT feels like a more sophisticated, comprehensive implementation. I didn't want an IDE abstracting me away from such a powerful tool, particularly when the documentation I was reading was highlighting the key role SBT plays in Scala development.
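
For reference, a build definition doesn't need to be much more than this. This is a minimal sketch; the project name and the Scala and specs2 version numbers are illustrative, not necessarily the exact ones I used:

// build.sbt
name := "game-of-life-kata"

scalaVersion := "2.10.2"

// specs2 is only needed on the test classpath.
libraryDependencies += "org.specs2" %% "specs2" % "2.2" % "test"

From the SBT console, commands like compile, test, and ~test (the tilde prefix re-runs the tests whenever a source file changes) cover most of the day-to-day workflow.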

So instead I decided to go with one of the more popular minimalist code editors out there, Sublime Text. If you aren't familiar with Sublime, it's a highly extensible editor with a wide range of plugins for every imaginable language. It's not a turnkey solution like some of the bigger IDEs, but it's pretty easy to install a small set of plugins to create a first-class, IDE-like experience for Scala. Here are the key plugins I found:

Sublime ENSIME – A port of a syntax highlighting/code completion add-on for Emacs. It's a little complicated to set up, but it adds a lot to the experience, making it easier to catch and fix errors and to discover language features. It's not 100% as good as Visual Studio's IntelliSense, but it still manages to fill that niche fairly well.

Sublime SBT – A plugin that lets you open an SBT console in a pane at the bottom of Sublime. It also offers quick keyboard access to SBT commands, like build or starting continuous testing. Easy to set up and definitely a must-have.

With this setup, I found myself highly productive and happy with the quick feedback loop I was getting during test and debug sessions.

In addition to Sublime, I also found a great utility to help jump-start each Scala project from a series of code templates: giter8. Giter8 is a console app that can download project templates for any programming language from GitHub and use them to quickly create basic boilerplate for your project. It's an awesome idea, since GitHub is a natural repository for those sorts of things, and since starting without boilerplate is always a little tricky, especially if you're just starting with a language.

In the next section I’ll talk more about the actual coding experiences I had while working this kata.

30 Days of Scala

It’s been a while since I’ve posted here, but I’ve just started on a new project that seemed like it would be a crime not to do some blog posts on. For about a week and a half I’ve been teaching myself Scala, in an attempt to branch out from the .NET space and get back to some of my open source roots.

I originally chose Scala because I was looking for something that was statically typed, but not C# or Java. I have nothing against dynamically typed languages, but I've already played around with Node.js and Ruby, and I wanted something different. Scala seemed like a good and interesting fit.

I started out by trying to read some basic Scala tutorials. The material put out by the Twitter folks, like Scala School or Effective Scala, was good, but I still felt like the language just wasn't resonating with me. Usually I try to learn a new language in the context of some sort of big project, which gets me coding but usually results in somewhat spotty acquisition, focused around whatever pieces are important for the project at hand. So this time I decided to try something different. For the next month or so I'm going to do a series of code katas in Scala.

If you aren't familiar with the term, a code kata is basically a simple, short coding exercise meant to be done in 30 minutes to an hour. The term is borrowed from martial arts, and it literally means "form" in Japanese. The idea is that a kata is a set of repeated movements meant to systematically impart part of the larger art or discipline. Much like you'll see practitioners of martial arts repeating the same motions over and over again, the point of code katas is to solve the same problems (or the same sorts of problems) many times to help build programming skill systematically.

Once I decided to take this approach, it occurred to me that blogging would be a natural way to help synthesize that information. I figured this might help other people picking up the language (particularly if they're coming from C# like I am) and might draw current Scala users into a dialog that could lead to even more learning for me. So for the next 30 days I plan to consistently code in Scala or write about coding in Scala and see where that takes me. I don't plan to be super formal about things; sometimes I'll spend longer than an hour on a problem, sometimes less, but I do plan to do something daily as my schedule allows. I'll also post all of my code on GitHub to help those following along.

Live.js and Visual Studio Part 3 – Automated Testing

This post is part of a series. Click for Part 1 or Part 2

In the last two posts I explored how Live.js can help you do client-side testing, particularly for responsive layouts. Now we'll be looking at another way that live.js can help out in your client-side development.

But before we can do that, we have to take a brief foray into the world of JavaScript-based unit testing. I'm not going to try to give a full treatise on the subject, just a brief introduction so that we can see how live.js can help with this part of your development workflow too.

If you aren't familiar with client-side unit testing, don't sweat it; it's pretty straightforward. If you want a good overview, check out Smashing Magazine's intro or this great video on the QUnit framework. At a high level, though, it looks something like this.

1. Just like with your backend code, JavaScript testing starts with how you structure your code in the first place. Focus on small methods with minimal dependencies that return values you can validate.

2. There are a lot of JavaScript unit testing frameworks out there, but they all generally work the same way. Tests are functions passed into a method defined by the framework. To run your tests, you build a simple HTML page which has script references to the framework library, your test code, and your application code. When you load the page, the framework manipulates the HTML to report your results.

With this high-level understanding, it's pretty straightforward to see how live.js can help on this front. If you add live.js to the HTML page that runs your tests, then that page can refresh automatically and run your tests every time your test code or application code changes.

Note that your automated testing page doesn't have to be static HTML either. For example, in ASP.NET MVC we can set up a TestsController and a Tests view that look a little like this.

Controller

using System.IO;
using System.Linq;
using System.Web.Mvc;

public class TestsController : Controller
{
    //
    // GET: /Tests/

    public ActionResult Index()
    {
        // Every .js file under ~/Scripts/spec is a test file; the matching
        // file under test is inferred by stripping the "_spec" suffix.
        var testFiles = Directory.EnumerateFiles(Server.MapPath("~/Scripts/spec")).Where(f => f.EndsWith(".js"));
        var sutFiles = testFiles.Select(s => s.Replace("_spec", ""));
        ViewBag.SutFiles = sutFiles;
        ViewBag.TestFiles = testFiles;

        return View();
    }
}

View

@using System.IO
<!DOCTYPE html>
<html>
  <head>
    <meta name="viewport" content="width=device-width" />
    <title>Tests</title>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
    <script src="/Scripts/spec/lib/<your-testing-framework>.js"></script>
    @foreach (var fullpath in ViewBag.SutFiles)
    {
        var fileName = Path.GetFileName(fullpath);
    <script src="/Scripts/@fileName"></script>
    }
    @foreach (var fullpath in ViewBag.TestFiles)
    {
        var fileName = Path.GetFileName(fullpath);
    <script src="/Scripts/spec/@fileName"></script>
    }
    <script>
        onload = function () {
            var runner = mocha.run();
        };
    </script>
  </head>
  <body>
  </body>
</html>

The basic idea is that we have a controller that builds up a list of files by looking in a specific folder where we put all of our tests. For each file it finds, it passes the path along to the view, which then renders a set of script reference tags. The result is that our page dynamically adds all the assets it needs to test our JavaScript. Then live.js does its thing and automatically refreshes to run the tests any time there is a change.

TDD: A Case Study with Silverlight

One of my goals for the new year was to follow TDD on a real project at work. I actually got my chance very early this year with a fairly basic Silverlight project. The project was short and simple: basically a fancy list of links and resources managed in SharePoint and exposed in a Silverlight interface allowing a variety of queries and searches. It was large enough to be more than just a toy project, but small enough that I didn't worry about doing much damage by trying out TDD for the first time.

I learned a lot, and I think the work I did makes a good case study for someone interested in getting started with TDD. In my next few blog posts, I plan to walk readers through my development environment, the specifics of the techniques I followed, and the lessons I learned.

The Environment

As I said at the start, the project was written in Silverlight. For my testing I used the Silverlight Unit Test Framework, which allows for asynchronous testing, something that is vitally important for any web-service-based integration testing. On top of that I used a fantastic continuous test runner named StatLight. StatLight is a small console application that automatically runs your unit tests every time your test project's .xap file changes. This means that running your tests is as easy as hitting Ctrl + Shift + B to build the project; StatLight does the rest. I quickly got in the habit of building after every code change so that I was getting instant feedback on what I was doing.

The Process

Since this was an experiment, I tried to stick as close to the rules of TDD as possible. This meant I never wrote a line of code until I had already written a test covering it, and my tests were extremely granular. Even simple tasks like parsing a single line of XML returned from a web service had a test devoted to them. I also tried not to overthink some of the details of my design, instead trying to put off design decisions until I had already written the test necessitating them.

The Result

Overall, my experience was hugely positive. I'm convinced that TDD makes me more effective and productive, and I want to leverage it wherever I can in the future. In general I found there were three major benefits to TDD, and I learned three lessons about how to do TDD better next time. Let's start with the good.

Flow – It was shocking how good it felt to be able to code without stopping. With TDD my brain stayed in code mode for hours at a time. Usually, I slip in and out of this mode throughout the day, especially when I'm manually testing code I've just written. With TDD, that never happened, and it made my concentration and focus 20x better. When I'm manually testing, there are all sorts of interruptions and opportunities for distraction. Waiting for the page I'm testing to load? I'll just go browse Google Reader for a bit. Stepping through a tedious bit of code so I can examine the value of one variable? Let me just skim this email while I do that. With TDD, my brain never got an opportunity to slip away from the task at hand. Throughout the day I was laser focused on whatever I was doing.

Similarly, if I did have to step away for an interruption (meetings, lunch, helping another dev, etc.), it was easy to get back into the flow and figure out where I was. Just hit Ctrl + Shift + B and see what test failed. Since each test was so small and covered such a small area, not many details about what I was doing slipped away when I got distracted.

Design – I didn't totally abandon upfront design, but I did do less design than I usually do. I mostly sketched out the layers at the boundaries of the application, the pieces that interacted with the user and the pieces that interacted with the data source, SharePoint, since both of those were external pieces I couldn't exercise complete control over. Once I had those layers designed, I let TDD evolve the internal architecture of the application, which actually led to a couple of neat design decisions I don't think I would have come up with otherwise. The coolest of these was how I handled loading up a given model for a given page. In our application the same view could be wired up to a variety of different models, and the specific model depended on the URL the user used. I ended up with two separate objects that handled this process: the Model Locator, which parsed the incoming URL, and the Model Map, which tied each model to a path-like string representing how the data was maintained in the data store. The Model Locator would use the URL to extract the key elements identifying the right model, and then pass those into the Model Map, which would use those elements to find the right model by building the path representation for it. The end result was a nice decoupling between the path structure the user used to browse to a model and the way it was actually handled by the data layer. If I had been designing up front, I am almost positive I would have missed this approach and put too much of the logic into the Model Locator itself, tightly coupling the data structure and the navigation structure. Instead, I put off making any decisions about how the Model Locator interacted with the data until the last minute, and by then it was clear that a new class would improve the design significantly.

Refactoring Ease of Mind – Not everything about this project was perfect. In fact, towards the middle there were some significant pain points because I had to be temporarily put on another, higher-priority project. To keep things moving, another developer was assigned to the project. There wasn't enough time invested in communication, and as a result he ended up taking a different approach in some key areas and duplicating some work I'd already done. By the time I came back, his code was wired up to the UI, and it didn't make sense to try to reincorporate the pieces of my code that were performing some of the same functions. Unfortunately, there were a number of pieces handling things like search and model location that still expected the classes defined in my code. All of those had to be modified to work with his architecture instead.

This would have been a really scary refactoring to do in the timeline we had, except for the automated tests I already had covering all of my code. With only a few minor tweaks, that test suite was modified to test my search services using his new classes, and we had extremely detailed information about where my code was now broken. After less than a day of work, we'd switched everything over without a hitch. And because of the tests, we had confidence that everything would work fine.

I won’t say much more in summary, because I think the benefits speak for themselves. Next post, I’ll talk about what I’d do differently next time, and how I plan to get better at TDD in the future.

Custom assertions with should.js

Lately I've been playing with Node.js and Vows, doing some TDD on a side project at home. I love the very readable syntax of should.js, which lets you frame your assertions as almost natural English. However, pretty early on I realized I wanted to add my own custom assertions to the should object to abstract away some of the messy details of my testing and keep the code readable. In the past I've used custom asserts in .NET for testing, and I find they allow you to quickly express domain-specific concepts even inside your tests, for better readability and clarity.

One particular example was a test where I wanted to make sure the elements in a <ul> were the same as those in a JavaScript array. Rather than trying to parse the list into another array and do a comparison in the test body, I wanted an assertion that read something like $("#list").children().should.consistOfTheElementsIn(array), where consistOfTheElementsIn handles the parsing and comparison.

After a little bit of playing around, I worked out a pretty simple way to do this. Basically, I create a new node module called customShould.js. customShould.js requires should and then exports the should object. Additionally, customShould adds a new method to the "Assertion" object created by should.js. Here's the code:


var should = require('should');

exports = module.exports = should;

should.Assertion.prototype.aHtmlListThatConsistOf = function (list) {
  // Order-insensitive comparison of two arrays.
  var compareArrays = function (first, second) {
    if (first.length !== second.length) { return false; }
    var a = first.slice().sort(),
        b = second.slice().sort();
    for (var i = 0; i < a.length; i++) {
      if (a[i] !== b[i]) {
        return false;
      }
    }
    return true;
  };

  // this.obj holds the value under test; here it is expected to be a
  // string of HTML containing <li> elements.
  var matches = this.obj.match(/<li>.*?<\/li>/gi) || [];
  for (var matchIndex = 0; matchIndex < matches.length; matchIndex++) {
    matches[matchIndex] = matches[matchIndex].replace("<li>", "").replace("</li>", "");
  }

  this.assert(compareArrays(matches, list), "lists do not match");
};

It's all pretty straightforward. Then, to use your custom asserts, you just require your customShould.js module instead of the normal should module.