Linking requirements to code

Mar 5, 2010 at 11:13 AM

Much like Cucumber, it would be nice to know that when a story scenario changes, the test code already written around that story breaks.

For example, in my .csproj I could have:

DataSafetyTests.story

DataSafetyTests.cs

The .story file contains the text form of the requirements, and the .cs obviously contains the code.

If the requirements change, you change the .story file. When StoryQ runs, it parses the .story file line by line, then reflects over the compiled test class from the .cs file to check that the expected method names exist.
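A rough sketch of how that validation could work. Note this is all assumed: `ValidateMethods` is not a real StoryQ API, the step-line-to-method-name convention is invented for illustration, and `DataSafetyTests` is a stand-in fixture:

```csharp
using System;
using System.Linq;
using System.Reflection;

// Report story steps that have no matching method on the test fixture.
var missing = FindMissingSteps(
    new[] { "the data is saved", "the backup is verified" },
    typeof(DataSafetyTests));
Console.WriteLine(string.Join(", ", missing)); // prints "TheBackupIsVerified"

// Assumed convention: "the data is saved" -> "TheDataIsSaved"
static string ToMethodName(string step) =>
    string.Concat(step.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries)
                      .Select(w => char.ToUpper(w[0]) + w.Substring(1)));

// Reflect over the compiled fixture to find steps with no matching method.
static string[] FindMissingSteps(string[] storyLines, Type fixture) =>
    storyLines.Select(ToMethodName)
              .Where(n => fixture.GetMethod(n,
                  BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance) == null)
              .ToArray();

// Stand-in test fixture with one implemented step.
class DataSafetyTests
{
    public void TheDataIsSaved() { }
}
```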


Not the most elegant solution, but it could be something you explicitly call, maybe in the test fixture set-up:

[TestFixtureSetUp]
public void TestFixtureSetUp()
{
    // hypothetical API: fail fast if any story step has no matching method
    StoryQ.ValidateMethods();
}


As an additional idea, with a .story file in your solution there's potential to add a right-click "generate missing methods" option in Visual Studio.
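As a sketch of what such a "generate missing methods" command might emit (the PascalCasing convention is assumed for illustration, not StoryQ's defined behaviour):

```csharp
using System;
using System.Linq;

// Turn one plain-text story step into a C# method stub that a
// "generate missing methods" command could insert into the fixture.
Console.WriteLine(ToStub("the backup is verified"));
// prints:
// public void TheBackupIsVerified()
// {
//     throw new NotImplementedException();
// }

static string ToStub(string step)
{
    // Assumed convention: PascalCase each word of the step.
    var name = string.Concat(step.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries)
                                 .Select(w => char.ToUpper(w[0]) + w.Substring(1)));
    return $"public void {name}()\n{{\n    throw new NotImplementedException();\n}}";
}
```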

Coordinator
Mar 5, 2010 at 11:44 AM

With StoryQ, the test code IS the story.

Repetition is the enemy of refactorability, so we purposely made the method name become the plain-text description (and vice versa), rather than having to keep the two in sync by hand.
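For example (illustrative code, not StoryQ's actual internals): the same PascalCased identifier serves as both the step method and its rendered plain-text description, so there is nothing to keep in sync.

```csharp
using System;
using System.Linq;

// Illustrative only: render a PascalCased step-method name back into
// the plain-text description that appears in the story output.
Console.WriteLine(ToPlainText("TheDataIsSaved")); // prints "the data is saved"

static string ToPlainText(string methodName) =>
    string.Concat(methodName.Select((c, i) =>
        i > 0 && char.IsUpper(c) ? " " + char.ToLower(c)
                                 : char.ToLower(c).ToString()));
```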

SpecFlow does a really good job of generating test code directly off a plain-text (*.feature) file; I feel we would always be playing catch-up if we decided to move StoryQ into the VS plugin area.

One of the first avenues we explored was creating a T4 template that generated story code directly from a plain text file. Since the step methods would be generated, the lack of DRYness was less of an issue, and there was no deployment overhead of having to install a VS plugin. However, we gave up on this idea because we felt that an internal DSL was just a simpler way of doing things, and that the lack of moving parts helps developers learn StoryQ (and BDD) faster and prevents simple mistakes (like forgetting to "transform all templates"). Since then, SpecFlow has come along, and I think it does a great job of generating code from plain text files.


We want StoryQ developers to feel like they can throw away the plain-text scenarios they've been given once they've converted them into StoryQ code. Can you think of any reasons why you wouldn't?

Mar 5, 2010 at 12:30 PM

I guess it's just a different way of working. Once the requirements are in code you change them from within the code.

The above suggestion is just a way of being able to work in plain text (maybe with a customer), yet having it relate to your code. It's generally easier to read and work with plain text than to read around PascalCasedMethodNames and C# syntax. Although in retrospect, StoryQ comes pretty close with its fluent style of code, so maybe it's at the perfect midway point already.


Coordinator
Mar 5, 2010 at 8:16 PM

@robfe - I'm glad you have such a good memory of the whys of what we did!

@robertbeal - for some clients, I run the tests with plain text output and send the results to them for review (as, say, an email). They give me feedback and then I update the tests. This is a lightweight process and doesn't give them too many stories at a time. Just as importantly, all the code is in one place. In most cases, the story tests are created and forgotten (they are studied about as much as functional specification documents - only when desperate) as long as the tests keep passing. What I think we need to rethink is the technique for outputting tests: at the moment you choose your output at design time, and I would like a runtime configuration. Something I'd like, but don't need!

Can I ask about your development process? What size batches of stories do you work in? Do you work with stories from a groomed Product Backlog, or are they more dev focussed? I ask because I personally think StoryQ's sweet spot is a project that is dev/code focussed for acceptance, where the client wants visibility, albeit little, of the functionality in the code. If you have a project where the client really wants hands-on visibility then I would go with FIT-type tools (e.g. Fitnesse or StoryTeller).

Back to the size of batches. I get teams to work in small increments to the code base. Inside development, devs work on one story at a time, and generally you don't code a story until you start work on it. In this case, I find the code generation of little value unless I have forgotten the syntax (again!). Put alternately, I see the GUI as a training tool - a tool to reduce the initial barrier to entry - not as a productivity tool. I do this because undone stories in the code are waste (in the lean sense).

So, when you work on stories, at what layer in the Test Automation Pyramid do you use StoryQ? (see the-forgotten-layer-of-the-test-automation-pyramid.) I use it for the top layer exclusively, and specifically for acceptance criteria. I reposted my comments here, where I renamed/reformulated the use of the pyramid (i.e. GUI becomes System Tests that test the interactions between dependencies, e.g. workflow). To me, user stories tend to be about workflow, precisely because workflow is an abstraction that people other than developers can understand (why else would we add this overhead?).

Put this in the context of batching: the very first step in the dev process is to write the pending user story, and there is usually only one, maybe two (loads of scenarios though). The final step is to ensure a passing test. In between, and incrementally, I add features that fulfil that story, using TDD at the unit/integration level to code them. As each feature is complete, I add that piece of the puzzle to the StoryQ test - I think of it as adding to the conversation. In this type of process - iterative, incremental, avoiding batching - I find there is little need for code generation. The fluent interface with ReSharper allows me to code pretty quickly, and a little retyping of the text is pretty good for cleaning up the wording too! Hence the focus of the tool. I hope that makes sense - it is Saturday morning and I need another coffee.

How does this fit with your experience?

Mar 5, 2010 at 11:48 PM

In terms of how we work: we typically use Scrumban with relatively strict WIP limits, not batches as such, and work on one story at a time, but many stories over an iteration. We have a prioritised but fairly fluid backlog (it's our own product), and just pull a story from it when we need to. The type of work is whatever offers the most value, whether it's a typically dev-orientated story or not - it all depends on value, so it's usually quite varied.

The way we work, or the way I typically like to, is that the customer is the point of knowledge for the product. They are aware of the impacts and changes, accountable for them, and they specify what they would like done. It's my job to correctly understand and elaborate what they require, and build it. So we typically spend a large amount of time at the beginning discussing and understanding requirements (around the existing system), fleshing out high/mid-level business cases/rules.

And then we start coding using those cases, and defining more specific ones. I'd like to use StoryQ at the bottom and middle of the pyramid (we already have a Selenium/Cucumber setup for UI stuff): firstly for driving out the unit-level behaviour of classes (much like TDD, it's easiest starting at the unit level), and then, as I build up, for higher-level integration tests.

The main reason for me using it is that it's much more readable than what I'm able to write without StoryQ. Also, having the reports (and tags, eventually) helps when elaborating work: we can print off existing behaviour as a reminder, or to change/rewrite it. I guess my wanting to link text to code is so that when you change the behaviour or context, you know which tests need updating and are now invalid. Otherwise you're left with passing tests that no longer represent the desired behaviour, which you have to hunt down yourself - and when you've got 1,000+ tests that can be a pain. But in retrospect that's not too difficult to do with StoryQ, although it is open to human error.

If it's a completely new piece of behaviour we can generate the full code with StoryQ. This is helpful not only for speed (it builds us a skeleton to fill in), but also for convention: the generated method names match the exact wording of our specifications, so they are a perfect representation and just as understandable.