Continuous Integration with Flex, Hudson, and ArcGIS Server, Part 3

Mon, Aug 24, 2009 6-minute read

(Part 1 | Part 2 | Part 4)

One of the tenets of continuous integration is automated testing.  The most basic form of testing, in this context, is unit testing, where each class or method is tested without running the other bits of the application.  Unit testing is focused and, hopefully, simple.  In this post, I am going to add unit testing to our application and get ant to run the tests.  There are several unit testing frameworks out there, and we’re going to use Fluint. Fluint and FlexUnit have kinda merged into FlexUnit4, which is in alpha.  FlexUnit4 is, howyousay, super-fantastic, but it doesn’t currently offer any ant tasks.  When it does, I’ll be moving to it post haste.

Go get Fluint from here.  Put the fluint-1.1.1.swc file in the libs folder of your project.  We are now ready to start with some simple Test Driven Development.  Let’s add a button that zooms the map one level in, shall we?  As with all examples, it’s contrived, but the point is to focus on the process.  The process here will follow simple TDD: write a test, watch it fail, write the code to make it pass.  In order to be able to watch the test fail, we’ll need a bit of a foundation.  For the sake of this article, I have made a “tests” folder in the src directory where I will store the test bits.  Also, we will need a test runner for when we want to manually run our test suite.  That can be downloaded from here and copied to the src directory as well.  Once the runner (FlexTestRunner.mxml) is in the src dir, be sure to go into the project properties and add it to the list of Flex Applications for the project.  If you’ve done all that properly, then you will be able to run the FlexTestRunner application, which, by default, runs Fluint’s own framework test suites.

We don’t want to run the framework tests, so remove all mentions of FrameworkSuite from the code in FlexTestRunner.mxml.  In Fluint, a test runner runs test suites and test suites are composed of test cases.  We will make our own test suite, called ZoomButtonTestSuite, and our own test case called (you guessed it) ZoomButtonTestCase.  These live in the aforementioned “tests” dir.

The test suite is simple:
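A minimal sketch, assuming Fluint’s TestSuite base class and its addTestCase() method:

package tests
{
    import net.digitalprimates.fluint.tests.TestSuite;

    public class ZoomButtonTestSuite extends TestSuite
    {
        public function ZoomButtonTestSuite()
        {
            // register the single test case this suite cares about
            addTestCase( new ZoomButtonTestCase() );
        }
    }
}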

All we do is pull in our test case, which looks like:
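For now it is just an empty shell extending Fluint’s TestCase; test methods get picked up by the “test” name prefix once we add them:

package tests
{
    import net.digitalprimates.fluint.tests.TestCase;

    public class ZoomButtonTestCase extends TestCase
    {
        // test methods (prefixed with "test") will go here shortly
    }
}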

Before we go on, go back to FlexTestRunner.mxml and add ZoomButtonTestSuite to the suiteArray in startTestProcess, so it looks like:
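Roughly this; the rest of startTestProcess is whatever the stock runner already has, and the only line we add is the push:

protected function startTestProcess( event:Event ) : void
{
    var suiteArray:Array = new Array();

    // our suite goes in where the FrameworkSuite entries used to be
    suiteArray.push( new ZoomButtonTestSuite() );

    // hand the suites to the runner (this line is already in the stock file)
    testRunner.startTests( suiteArray );
}

You will also need an import for tests.ZoomButtonTestSuite at the top of the script block.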

We should be ready to write our first test.  For the ZoomButton, here are the specs: it holds a reference to the map, and calling its doZoom() method zooms that map in one level.
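A first pass at the test might look something like this (the method name and the setup lines are illustrative; the assertion is the part that matters):

public function testDoZoom() : void
{
    var button:ZoomButton = new ZoomButton();

    var beforeLevel:Number = button.map.level;
    button.doZoom();

    // zooming should have changed the map's level
    assertTrue(button.map.level < beforeLevel);
}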

I have started with the button API design, which is the real aim of our test.  Just from this test, you can see that the button needs a public map property and a public doZoom() method:

<span class="kwrd">public</span> <span class="kwrd">class</span> ZoomButton extends Button
{
    <span class="kwrd">public</span> var map:Map;
    <span class="kwrd">public</span> function ZoomButton()
    {
        super();
    }
    <span class="kwrd">public</span> function doZoom():<span class="kwrd">void</span>{

    }
}

} OK.  The ZoomButton extends the Button class, has a map property (setter/getter not used for brevity) and a doZoom() method.  Launch the test runner, and you’ll see:

About what we’d expect.  The button.map property is null.  However, I don’t want to create a full-blown ESRI map for my tests.  It will be clunky and slow and difficult.  We need to fake the map.  This is where mocks come in.

So, what do we expect the ZoomButton to do in the doZoom() method?  We expect it to call map.zoomIn(), right?  Our first, naive test was testing properties on the map.  While this is testing a valid post-condition, it bleeds too far into testing the ESRI Map control and not the ZoomButton.  After all, the ZoomButton does not set the map level, it just calls a method on the map. If it does that, we are happy with our ZoomButton.  Any issues outside of it are not related to the ZoomButton and its purpose.  With this in mind, our new test looks like:
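Here’s the shape of it, assuming mock-as3’s expectation API (method(), once, verify()); check the library’s docs for the exact chaining syntax:

public function testDoZoomCallsZoomIn() : void
{
    var button:ZoomButton = new ZoomButton();
    var mockMap:MockMap = new MockMap();
    button.map = mockMap;

    // expect zoomIn() to be called exactly once
    mockMap.mock.method( "zoomIn" ).once;

    button.doZoom();

    // throws if the expectation was not met
    mockMap.mock.verify();
}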

The new test now creates a mock object for the ZoomButton.map property and tells it to expect the “zoomIn” method to be called one time.  We then test the doZoom method, followed by asking the mock to verify that our expectation was met.  Here is the mock:
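It is a hand-rolled MockMap that extends the ESRI Map and delegates the one method we care about to a mock-as3 Mock object (the constructor arguments follow the library’s documented pattern, so double-check them against your version):

package tests
{
    import com.anywebcam.mock.Mock;
    import com.esri.ags.Map;

    public class MockMap extends Map
    {
        public var mock:Mock;

        public function MockMap( ignoreMissing:Boolean = false )
        {
            mock = new Mock( this, ignoreMissing );
        }

        // delegate only the method we set expectations on
        override public function zoomIn() : void
        {
            mock.zoomIn();
        }
    }
}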

This follows the pretty-good documentation on the Mock AS3 Google Code site for mocking classes.  We only need to mock the methods on which we are setting expectations.  Build and launch the test runner again.

Test still fails, but we get a new error:  Verifying Mock Failed. We haven’t coded our doZoom() method yet.
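The fix is tiny: doZoom() just delegates to the map.

public function doZoom() : void
{
    map.zoomIn();
}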

Now, running the tests gives us sweet green….

        <span class="kwrd">public</span> function getTestSuites() : Array
        {
             var suiteArray : Array = <span class="kwrd">new</span> Array();
            suiteArray.push( <span class="kwrd">new</span> ZoomButtonTestSuite());
            <span class="kwrd">return</span> suiteArray;
        }
    ]]&gt;
&lt;/mx:Script&gt;

</mx:Module> Looks a lot like the FlexTestRunner.mxml, huh?  The only thing we have left to do now is add a test target to our ant script.  As some of you might have guessed, we need to add the Fluint ant tasks to our build.xml file.  The easiest (read: not necessarily RIGHT) way to do this is copy them into the c:\ant\lib directory.  You can get the JAR file with the ant tasks here.   Once they are in the ant lib directory, add the following line to your build.xml (right under the flexTasks is a good spot)
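The exact classname comes from the Fluint jar, so if ant complains that it can’t find the class, check the Fluint docs for the right one:

<taskdef name="fluint" classname="net.digitalprimates.ant.tasks.fluint.Fluint" />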

There, now we can use our Fluint tasks in the ant file.  Since the tests are using Flex modules, we’ll want to have a couple of targets in the build script: one that builds the module, and one that executes the test.  Here’s the target that builds the module:
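A sketch, using the flexTasks mxmlc task; the module file name, output location, and property names are placeholders for whatever your project uses:

<target name="build-test-module" description="compile the test module into bin-debug">
    <mxmlc file="${basedir}/src/tests/ZoomButtonTestModule.mxml"
           output="${basedir}/bin-debug/ZoomButtonTestModule.swf">
        <load-config filename="${FLEX_HOME}/frameworks/flex-config.xml" />
        <source-path path-element="${basedir}/src" />
        <compiler.library-path dir="${basedir}/libs" append="true">
            <include name="*.swc" />
        </compiler.library-path>
    </mxmlc>
</target>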

If you go to the command line and type ‘ant build-test-module’, it will build the module and copy it to our bin-debug directory.  Now, we just need to tell the Fluint task to go find that module and run the test.  Here’s the target:
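Roughly like this; the fluint task’s attribute names can differ between releases, so treat the ones below as a sketch and lean on the task’s documentation:

<target name="test" depends="build-test-module" description="run the Fluint tests and write an XML report">
    <mkdir dir="${report.dir}" />
    <!-- fluint.dir points at the folder holding FluintAirTestRunner.exe -->
    <fluint debug="false"
            headless="false"
            failOnError="true"
            workingDir="${fluint.dir}"
            outputDir="${report.dir}">
        <fileset dir="${basedir}/bin-debug">
            <include name="*TestModule.swf" />
        </fileset>
    </fluint>
</target>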

You may have noticed the two new build variables: fluint.dir and report.dir.  The Fluint task needs to know where the FluintAirTestRunner.exe file resides, which is the former.  The latter tells fluint where to write the report of how the tests fared.  I added this to the bottom of the build.properties file:
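Something like this, with both paths adjusted to wherever you unpacked the Fluint runner and wherever you want reports written:

# folder containing FluintAirTestRunner.exe
fluint.dir=C:/fluint
# folder where the XML test report gets written
report.dir=${basedir}/reports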

Now, running ‘ant test’ at the command line will run our test and generate the test report.  It’s an XML file that looks like:

So, our ant file is now running our tests.  We are well on our way to continuous integration with Flex.  Coming up, we’ll need to get some of the hardcoding out of the build file, look at our project directory structure, format our test reports to be more readable, and bring Hudson into the mix.  We have a lot to do.

I hope you find this useful.