Tag Archives: agile

MDN Agile and status

tl;dr: We're settling into an Agile process for MDN with daily standups and weekly planning. In the last few months we've added and enhanced developer profiles, events, demos & Dev Derby, and other MDN features. We've also restored, added, and enhanced drafts, review flags, search, article properties, section editing, and authentication in the new wiki. I'll try to write status posts more frequently - each time we make a release, if possible.

Agile

Since my last MDN & Kuma progress post, we've settled into a regular, yet customized, Agile methodology in the MDN team with our agile Bugzilla JS tool. We've pushed MDN 0.9.8, 0.9.9, 1.0, 1.1, 1.2, 1.3, 1.4, and 1.5 thru it relatively smoothly. We've learned and adapted a few techniques in our Agile process worth noting:

Daily standups on IRC

  1. What have you done since we last met?
  2. What do you plan to do until we meet next?
  3. Are you blocked by anything?

By far the 3rd question here is the most important; the first two are mostly intended to lead into it. Our scrum-master (now me) is tasked with removing any and all blockers as much as possible - usually by chasing down people in other groups from whom we need something (IT, DevEngage, Legal, Security, etc.).

(Bi-)Weekly Retro & Planning Poker

We push code on Tuesdays, so we alternate between post-/pre- and mid-sprint planning meetings. Mid-sprint planning meetings are shorter and faster, little more than a standup. We use an etherpad template to keep running minutes of our weekly Wednesday meetings. The basic format is:

  1. Retro - what's go[(ing)|(ne)] good or bad in the sprint?
  2. Discuss - general items to discuss, usually flowing from Retro
  3. Planning (post/pre-sprint only) - re-assess bugs carried over from previous sprint, use planningpoker.com to estimate the current sprint backlog in bugzilla

Status

We're developing major features for both MDN and the upcoming wiki.

MDN

  • Developer profiles
    My MDN profile is missing demos because I haven't made any yet. :( We want to hook this up with Mozillians.org in the (near) future.
  • Events
    We've tried to make the MDN events page a comprehensive resource to find the events Mozillians attend. Inspired originally by Christian's site for Mozilla events.
  • Demos & Dev Derby
    We've done significant work on the Dev Derby feature - streamlining the process and clearing out bugs. This is one of my favorite MDN features. When I need to show anyone cool web technology I can open the MDN demo studio and inevitably find something way cool with canvas, video, audio, and webgl.
  • Miscellaneous
    We're also finding and squashing bugs as often as we can. We finally restored production server error emails, so we should be much better about squashing bugs that real users are hitting. And we're trying to add smaller optimizations and fun features as often as we can.

I enjoy enhancing MDN in addition to building the new wiki system. The wiki is important and should help us take more control over our docs, but it's still behind-the-scenes; I like shipping code to a production site that helps people, especially developers, immediately.

Wiki

We hope to soon (by the end of the year) run the new wiki side-by-side with MindTouch so we can work from a single code line and allow testers to compare MindTouch with the new wiki on the same server. But so far we've developed the following features on the separate kuma wiki staging server:

  • Drafts
    We use localStorage to periodically save the editing session so a writer doesn't lose precious work if their client crashes.
  • Review Flags
    We added review flags for both technical and editorial review.
  • Search
    We restored the Sphinx-powered search backend we inherited from SUMO/kitsune (just in time for them to move on to ElasticSearch). We still need to restore all of the search frontend and ancillary features.
  • Page Properties
    We restored and refactored document properties we inherited from SUMO/kitsune.
  • Section Editing
    We added section editing to the wiki - our first major modification to the wiki functionality we inherited from SUMO/kitsune.
  • Authentication
    We're in the middle of switching MDN user authentication from MindTouch to django. We will run both authentication backends together, first django then falling back to MindTouch and auto-registering and migrating users to django authentication as they log in.

The wiki work is going pretty well, but we still have a couple of major hurdles to clear: data migration & user scripting. We are already investigating data migration a bit, and once we start migrating data over it should help direct our work on user scripts. In my opinion, we can see light at the end of the tunnel but we're still watching out to avoid potential train-wrecks.

The rest

Outside of MDN, I helped pull together an HTML5 track for Tulsa Tech Fest, helped organize a Tulsa Hackathon with Tulsa Web Devs, and participated in Startup Weekend Tulsa. At SWT we started OttoZen, a quick web app on top of the TRIF project from the Hackathon, to send an SMS alert to someone when there's a traffic incident on their commute. It's making us consider building a broader Data/App site for Oklahoma. We'll see.

From now on I'll try to make a status post every time we make an MDN release.

bugzilla-agile

Bugzilla is cute but deadly.

I'm not a pure Agilista™ but we're always trying to improve our development process for MDN. John and I like Scrum and XP stuff, but every team does Agile™ a little differently - as they should. A while back we shopped around for tracker tools and stuck (as Mozillians always do) with bugzilla. We - i.e., mostly John and Jay - also looked at acunote, planbox, scrumdo, agilezen, pivotal, and agilebench.

What we most like about Bugzilla:

  • Bugzilla is Mozillian - it channels the work of tens of thousands of Mozillians; we can cc anyone in the community on a bug
  • Bugzilla is open - we can link anyone in the world to a bug
  • Bugzilla is versatile - as Jonath says in Bugzilla for Humans, it's the devil we know

On the latter point, I've forked some agile features onto Greg's BugzillaJS to help us work more Scrummy™. Our most pressing issue is managing releases - our scope keeps bloating and our releases keep slipping. So we're starting to use the Agile/XP concept of "points" to estimate bugs, track our team velocity, and hopefully improve our release rhythm and reliability. Behold our improved "sprint backlog":

MDN™ 0.9.9 Sprint™ Backlog™

There are a few new things going on here. Here's a summary of how we're doing what we're doing:

Bugzilla Agile Target Milestone
Milestone releases - We use the milestone field of bugzilla for our releases. Next up is 0.9.9 scheduled for August 2nd release. As we move to a more continuous development and release cycle, the milestone version numbers lose meaning (just like Firefox), but we want to track releases. (Sorry James, we're not deploying every check-in just yet).

Bugzilla Whiteboard
Whiteboard overloading - We're using a tag=value pattern in the whiteboard to add new "fields" because adding fields and values to bugzilla requires IT changes and they're over-worked as it is. In our case,

  • 'u' - the primary user of the feature/bug (faster and more programmable than writing "As a ___, I want ___" every time)
  • 'c' - the component that the feature/bug modifies
  • 'p' - points

bugzilla agile story points

Calculating points and stories - For any search that includes the "whiteboard" column with the specified tokens, the addon sums the number of "Open" and "Closed" stories and points for the release.

MDN Components Graph
Pretty graphs - data visualization FTW. Seriously, graphs give us a quick snapshot of the open v. closed bugs, and in which component we're spending our effort. This is important for MDN because we want to re-write the wiki while continuing to deploy site enhancements and changes. Now we can see exactly how much of our effort is going to the wiki as compared to other components.

I really hope this improves our releases and makes life easier for devs. Points by release and components are our most pressing needs so we can set realistic release and product expectations, and keep ourselves honest about where we're spending our effort. We'll probably add velocity and burndown charts once we finish a few point-based releases.

If anyone else wants to use it, my fork of BugzillaJS is up on github; download the .xpi file and open it with Firefox to install the addon. Feedback and pull requests are very welcome!

Edit: I should point out we're inspired by other agile Bugzilla tools - the Songbird team has wrestled bugzilla into their Agile process, and fligtar created moxie to help AMO product management.

Test-Driven [Design|Development]

Today I learned to appreciate Test-Driven Design a little bit more. Here's the story.

I'm writing some RSS feeds that will contain extensions and other non-RSS elements using XML Namespaces. I'm using Zend_View and Zend_Feed, and I thought the best place to put the namespaces would of course be at the top of my default.rss.phtml template file - that way I can register all the namespaces at once at the top of the feed. Instead of writing the test first, I wrote the code first. It took maybe 10-20 minutes and seems to work fine:


<rss xmlns:content="http://purl.org/rss/1.0/modules/content/"
xmlns:doap="http://usefulinc.com/ns/doap#"
xmlns:sf="http://sf.net/api/sfelements.rdf#"
xmlns:foaf="http://xmlns.com/foaf/0.1/"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
version="2.0">
....
</rss>

Then I go to write the test. Lo and behold - it's a big pain in the ass to consume the feed using SimpleXML.

It's easy enough to create a SimpleXML element out of the feed, but I can't create SimpleXML elements from the content:encoded XML data:


<content:encoded>
<![CDATA[<doap:Version>
<doap:name>Project 1.1 - Foobaj</doap:name>
<doap:created>1202221896</doap:created>
<doap:helper>
<foaf:Person>
<foaf:name>admin1</foaf:name>
<foaf:homepage rdf:resource="http://lcrouch-703.sb.sf.net/users/admin1" />
<foaf:mbox_sha1sum>6dd817a0f71590a68131a5e83b1bd73944654e8d</foaf:mbox_sha1sum>
</foaf:Person>
</doap:helper>
<doap:file-release>proj1.file1.tgz</doap:file-release>
<sf:download-count>0</sf:download-count>
</doap:Version>]]>
</content:encoded>

Because none of the namespaces used by the DOAP elements are declared inside the content. Argh! My first thought is to screw SimpleXML and do a raw string search/parse in the test. But then I had my epiphany: "If I were an actual client of this feed, I would want to be able to parse it easily with SimpleXML or with any other XML library."

I ended up pushing the xml namespace declarations right down into the appropriate elements - where I now think they are *supposed* to be:


<content:encoded>
<![CDATA[<doap:Version
xmlns:doap="http://usefulinc.com/ns/doap#"
xmlns:sf="http://lcrouch-703.sb.sf.net/api/sfelements.rdf#">
<doap:name>Project 1.1 - Foobaj</doap:name>
<doap:created>1202221896</doap:created>
<doap:helper>
<foaf:Person
xmlns:foaf="http://xmlns.com/foaf/0.1/"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
<foaf:name>admin1</foaf:name>
<foaf:homepage rdf:resource="http://lcrouch-703.sb.sf.net/users/admin1" />
<foaf:mbox_sha1sum>6dd817a0f71590a68131a5e83b1bd73944654e8d</foaf:mbox_sha1sum>
</foaf:Person>
</doap:helper>
<doap:file-release>proj1.file1.tgz</doap:file-release>
<sf:download-count>0</sf:download-count>
</doap:Version>]]>
</content:encoded>

Voila - SimpleXML starts parsing everything very easily.

This is one of the biggest boons for Test-Driven Development - the effects it has on the way you design your code. If I had not tested my code as an actual client would use it, I would have produced some pretty shoddy feeds with useless XML namespacing.

Unit-testing ZF Controllers without Zend_Test

I've read a couple articles and blog posts recently talking about Zend_Test and/or testing Zend Framework Controllers. Particularly for controller testing, I'm kinda surprised how much plumbing code people are using. I recently started testing some Zend_Controller code (from ZF 1.5 even!) at SourceForge and did not do nearly that much plumbing.

Basically, I want to test the controller code in isolation from the front controller, the router, the dispatcher, the views, etc. All I want to do is set up a request object, invoke the action methods of the controllers, and then assert against the variables assigned to the view. For these tests, I don't care about the output of the view templates themselves - I just want to know the controllers are putting the right variables into the view object.

It turns out this is actually pretty simple. I made a custom test case:


class Sfx_Controller_TestCase extends Sfx_TestCase
{
    protected $_request;
    protected $_response;
    protected $_controller;

    public function setUp()
    {
        parent::setUp();

        // set up smarty view and restful view helper
        $viewRenderer = new Sfx_Controller_Action_Helper_TestViewRenderer();
        Zend_Controller_Action_HelperBroker::addHelper($viewRenderer);

        $this->_request = new Zend_Controller_Request_Http();
        $this->_response = new Zend_Controller_Response_Cli();
    }
}

Sfx_TestCase contains all my bootstrap code. However, the only things I do in bootstrap are set the include path and set up a default db adapter for Zend_Db_Table. I don't do anything with Zend_Controller_Front, so this may as well extend straight from PHPUnit_Framework_TestCase. I'm not sure why others are claiming you have to use Zend_Controller_Front to test ZF Controllers - you don't.

I wrote and use Sfx_Controller_Action_Helper_TestViewRenderer (and proposed it as a core class) to simply create an empty Zend_View object into which the controllers can assign variables. Here's the whole class:


class Sfx_Controller_Action_Helper_TestViewRenderer extends Zend_Controller_Action_Helper_ViewRenderer
{
    public function initView()
    {
        if (null === $this->view) {
            $this->setView(new Zend_View());
        }
        // Register view with action controller (unless already registered)
        if ((null !== $this->_actionController) && (null === $this->_actionController->view)) {
            $this->_actionController->view = $this->view;
        }
    }
}

With only this much plumbing, I'm able to test the Controllers in isolation - no worrying about routes, dispatchers, plugins, helpers, or view templates - like so:


class ProjectControllerTest extends Sfx_Controller_TestCase
{
    private function __constructProjectController()
    {
        return new ProjectController($this->_request, $this->_response);
    }

    public function test_indexAction_fetches_all_projects()
    {
        $this->_controller = $this->__constructProjectController();
        $this->_controller->indexAction(); // assigns 'resources' to view
        $this->assertNotNull($this->_controller->view->resources);
        $this->assertEquals(27, count($this->_controller->view->resources));
    }

    public function test_indexAction_new_since_fetches_only_new_projects()
    {
        $this->_request->setParam('new_since', 1205880839);
        $this->_controller = $this->__constructProjectController();
        $this->_controller->indexAction();
        $projects = $this->_controller->view->resources;
        $this->assertEquals(4, count($projects));
        foreach ($projects as $project) {
            $this->assertGreaterThan(1205880839, $project->create_time);
        }
    }

    public function test_indexAction_limit_limits_projects()
    {
        $this->_request->setParam('changed_since', 1205880839);
        $this->_request->setParam('order_by', 'changed_since');
        $this->_request->setParam('limit', 5);
        $this->_controller = $this->__constructProjectController();
        $this->_controller->indexAction();
        $projects = $this->_controller->view->resources;
        $this->assertEquals(5, count($projects));
        $prevChangeTime = 0;
        foreach ($projects as $project) {
            $this->assertGreaterThanOrEqual($prevChangeTime, $project->change_time);
            $prevChangeTime = $project->change_time;
        }
    }
}

I'm finding this to be a much simpler and easier way of testing ZF Controllers than the other articles I've been reading. Now if you want to test everything in the front controller dispatch process and the view templates, I think Zend_Test is the best bet, but I've not used it yet so I can't be sure. The above classes work fine for what I do.

the binary canary testing pattern

I think I just invented a new testing pattern - The Binary Canary.

Basically, I was grouping my PHPUnit tests into a test suite and I realized that my TestCase super-classes were "failing" because they had no tests in them. Obviously this is intentional - only the specific sub-classes would have tests.

I guess I could have made the TestCase super-classes abstract, but instead I added this to the highest-level TestCase class:


/*
 * global test plumbing here
 */
class Sfx_TestCase extends PHPUnit_Framework_TestCase
{
    public function setUp()
    {
        // more global test plumbing here
    }

    public function test_Binary_Canary()
    {
        $this->assertEquals(
            "Binary Canary says test plumbing is working.",
            "Binary Canary says test plumbing is working."
        );
    }
}

My little binary canary serves two purposes:

  1. It adds an "always-pass" test to each of my TestCase classes so they don't throw up any more PHPUnit warnings.
  2. Because my TestCase classes set up context-specific test plumbing, the binary canary test inherited by each of them now alerts me if I screw up any of my test plumbing - and tells me the specific area.

For example:


class Sfx_Db_TestCase extends Sfx_TestCase
{
    public function setUp()
    {
        parent::setUp();
        // Db-specific test plumbing
    }
}

And:


class Sfx_Controller_TestCase extends Sfx_TestCase
{
    public function setUp()
    {
        parent::setUp();
        // Controller-specific test plumbing
    }
}

Just like the coal-miner canaries of old, this mechanism gives me a simple yes/no signal as to whether or not my test plumbing will soon kill me, and which plumbing code is the culprit.

unit tests and just-got-it-working inertia

I've been reading and enjoying The Productive Programmer by Neal Ford. It has re-ignited some of my passion for Test-Driven Development.

This morning I finished a first phase of "refactoring" some code architecture and found myself extremely hesitant to dive straight into the next phase. I think it's because the extent of my "testing" was to tab over to the fully-functioning web page and refresh after each code change. That's pretty much an "all-or-nothing" scenario.

And the thing about all-or-nothing scenarios is that once you've achieved the "all" state, you're very hesitant to go back to the "nothing" state. Maybe I'm starting to understand one of the benefits of unit tests as opposed to wholesale acceptance tests. With smaller unit tests, you can move more concretely from nothing to something, then from something to something a little more, then finally to all done.

Continuous Integration

This is another Agile/XP practice with which I'm fairly happy; although I haven't yet seen it live up to its full potential, that potential is great enough to make me a believer.

Continuous Integration is a process that completely builds and tests code frequently. The "process" usually takes the form of a dedicated server running special software that continuously performs a series of tasks similar to the following (though apparently this process can be un-automated by using a rubber chicken):

1. Perform an update from the code repository
2. If changes are found, run a build (compile, test) of the latest code
3. If the build succeeds, package the latest code for deployment; if it fails, report the failure (see the sketch below)

Although I'm a fan of CI, it seems to be a more complicated practice than TDD - though my experience may be tainted by the bad hardware and software on which our CI depends.

CI requires that you maintain an automated build script. This isn't a tall order amongst Java and other compiled-language developers since projects of any moderate size need an automated build to simply compile and to separate source code from compiled code.

Interpreted languages are a bit different, though, in that they can usually be tested immediately upon edit. As such, automated build scripts are a bit less common for software written in interpreted languages. But most interpreted-language software projects of any moderate size do have a consistent process for deployment, even if it's as simple as: make db changes, move files - and most interpreted languages have builders to automate this consistent process. In PHP, I've been looking at Phing.

CI is really most helpful when the build process includes a solid test suite. (Defining "solid test suite" is an exercise left to the reader.) With a solid test suite, CI can help you catch bugs earlier than usual because it typically re-runs all those tests after every check-in.

In addition, CI creates a vast sequence of clean builds similar to the "nightly builds" you hear about in open-source projects - a finalized, packaged release of the project ready for deployment.

Finally, if you ensure that your CI platform replicates your target production platform(s), you can use it as a reliable measure of your project's production-platform readiness. This can be a double-edged sword, however - if your CI platform is different from your target production platform, it can give you false confidence of production-readiness, and even cause problems that aren't caught until later in QA or, worse, actual production.

As with TDD, the benefits are not without drawbacks and you should weigh them for your own project before deciding if/which/how Agile practices are adopted. CI has the above benefits, but it is also a fairly complicated development platform for which engineers will be primarily responsible - it sometimes requires a good deal of time and attention to keep going. You still have to judge for yourself if it fits into your project, goals, and style.

Test Driven Development

I've read a few anti- and pro-Agile rants in the past couple days. Because I'm somewhere in between, I can't really rant in either direction. Instead what I might try to do is give my opinion on the actual effects I've noticed from some Agile/XP practices on my own coding. Note that I'm probably only picking out the ones I like, so my posts on the subject will betray a pro-Agile bias. But, the sparse number of posts will hopefully balance the scales in demonstrating that there are only so many Agile/XP practices about which I actually care enough to write.

I'm a fan of Test Driven Development. I don't do TDD 100% of the time, and there are quite a few things I'm not sure how to automatically test (CSS tweaks, anyone?). But I agree with just about all the benefits I read in this pro-TDD article, though I'll re-arrange their listing by my personal opinion of their importance:

When you follow ... TDD, all your code will be testable by definition! And another word for "testable" is "decoupled". In order to test a module in isolation, you must decouple it.

When I use TDD, it forces me to write "decoupled" code. "Decoupled" is one of those magic words programmer types say to each other in intellectual flexing competitions, but TDD shows what it really means - code that can be isolated. The benefits of isolated/decoupled code are numerous - re-usability, less duplication, more concise code - and I'd also say "testability" itself is a benefit.

This is not to say that you have to use TDD to write decoupled code. There are much smarter and more disciplined developers than myself all over who write excellent code without using TDD. But for me personally, TDD forces me into just enough structure and pre-code analysis to keep me from writing messy code that will need cleaning later. Speaking of which ...

Why don't we clean up code that we know is messy? We're afraid we'll break it. But if we have the tests, we can be reasonably sure that the code is not broken, or that we'll detect the breakage immediately. If we have the tests we become fearless about making changes. If we see messy code, or an unclean structure, we can clean it without fear.

The "safety net" feeling you get from gradually building up a big suite of automated tests is, like all "feelings", impossible to describe, but I'll try to relay an anecdote which might help.

I still don't consider myself a Java programmer, although I spend at least 50% of my time programming in Java. I don't "feel" comfortable in Java. But, in the course of our development, I made at least one deep and far-stretching re-factoring (more fancy talk for "change the guts of the code without changing its behavior") to maybe 30 different Java source files all over our code-base, with no hesitation before committing. My uncomfortable Java feelings were superseded by my comfort in the fact that all 800+ tests passed after I made the change. So I wasn't afraid of making the needed changes, even in a language in which I'm uncomfortable, because all the code was covered.

Again, this doesn't mean someone can't make sweeping changes unless they have tests covering all their code. I've simply noticed myself indeed becoming more fearless when I myself have to make those kinds of sweeping changes.

Have you ever integrated a third party library into your project? You got a big manual full of nice documentation. At the end there was a thin appendix of examples. Which of the two did you read? The examples of course! That's what the unit tests are!

This particular benefit is quite a distance behind the previous two, mostly because the kind of example code you find attached to "nice documentation" is a better reference than unit test code. Unit test code oftentimes performs superfluous tricks for isolation, and/or is hard to understand. It could be argued that if the test code is hard to follow, the tested code's design is to blame. But I think personally I'd rather move extra verbosity into my test code and keep my production code clean. Could just be a matter of personal preference.

However, unit-tests are useful in understanding the intended use of the tested code. And programmers are more likely to spend their time to write test code that benefits themselves (see above) than they are to "waste" their time writing example code which benefits only others. So, lacking the refined, formal example code, unit tests can act as use-case specifications. (Though not 100% comprehensive design specs, as some might say.)

There are a few anti-TDD points with which I also agree...

For many, TDD is a new way of programming. As such, it has an associated learning curve, requires new thinking patterns, and takes time before it is comfortable to someone. However, I have found TDD easier and more enjoyable to adopt than Java. Some might say that isn't high praise for TDD, but to those people I would say, "Well, at least it's simpler than Java." TDD can be practiced in the language of your choice, and you will probably find that TDD resources from within your preferred language can really help to match your existing programming style with TDD.

TDD results in a LOT of code. In the course of adding tests for the ZF Tutorial app, I realized I was adding verbose testing code to already-verbose MVC code. Indeed, the test code for the controller was more code than the controller code itself. This is a simple fact of TDD - more code. There's no getting around it. You simply have to decide if the benefits of TDD outweigh its drawbacks, such as this one.

We have run into problems where our 800+ test suite takes a long time to run (~10 minutes). This can be a real pain if you're working under strict CI rules, in that every change you make, even that pesky css tweak, is supposed to be sent thru that test suite before committing to your source repository. Typically, though, once developers have the hang of TDD, they know what kinds of changes really need the entire test suite, and what kinds of changes can simply be checked in or passed thru only a sub-set of tests. But the pain still exists in that you have an extra step of responsibility between writing your code and committing it. Again, weigh the benefits of TDD against this drawback, and maybe come up with a compromise of some kind.

Controller Testing in Zend Framework

Ouch, the previous post here was pretty bad. Messy design and it didn't even work correctly. A better guide on the topic is here:

http://tulsaphp.net/?q=node/40