
New step status - skipped

Apr 11, 2010 at 11:56 AM

In my opinion, steps that follow a pending or failed step should be "skipped" (not executed), as Cucumber does (http://wiki.github.com/aslakhellesoy/cucumber/step-definitions).

Currently, your example under "Passing, Pending, Failing" (http://storyq.codeplex.com/wikipage?title=write%20your%20first%20StoryQ%20test) prints the following:

With scenario first example:
Given this is my first try => Passed
When I run this method => Pending !!
Then I will see what happens => Failed -> Assert.Fail failed. [1]

I think this should be: 
With scenario first example
Given this is my first try => Passed
When I run this method => Pending !!
Then I will see what happens => Skipped

Apr 11, 2010 at 8:42 PM

Not a bad idea!

I might have to make this a non-default option (StoryQSettings.SkipAfterFirstFail?), because I don't want to break things for users that are used to the way StoryQ currently works.
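
If I did add it, usage might look something like this (purely a sketch - neither the setting nor its name exists yet):

// Hypothetical: proposed global default, set once per test fixture
[TestFixtureSetUp]  // NUnit; or a static constructor, etc.
public void SetStoryQDefaults()
{
    StoryQSettings.SkipAfterFirstFail = true;
}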

Sukchr, do you think skipped steps should show up in NUnit (etc) as red (fail) or yellow (pending/indeterminate)?

I will try and choose a tasteful shade of cyan for the reports that StoryQ has control over :)

Apr 12, 2010 at 9:24 AM
Edited Apr 12, 2010 at 5:27 PM

Having an option would be cool.

I think that the color should be that of the last executed step; if the last executed step is pending, then the color should be yellow, and if the last executed step failed, then the color should be red. In other words, skipped steps should not affect the NUnit coloring.

Having a different coloring in the StoryQ report is nice!

Apr 13, 2010 at 11:42 AM

Actually, I think it might need to be something you can customise on each step.

For example, a step that creates the test data in a database should block further steps if it fails.

An assertion (a "then" step) that's followed by an "and" might not want to cause the "and" step to skip, in order to give the user a more thorough idea of what's failed.

Something like:

.Given(TheUserHasLoggedIn).SkipAfterFailure(true)
.When(...

and / or (perhaps you can set the default on StoryQSettings):

.Then(TheUserIsDeleted).SkipAfterFailure(false)
.And(AnAuditLogIsMade)


Apr 13, 2010 at 12:01 PM
Edited Apr 13, 2010 at 1:42 PM

Good point!

Could the skip behaviour be configured via custom attributes on the step method? I often use the same step methods from many features, and if I want them to cause skipping in one scenario, I probably want them to cause skipping in other scenarios as well. This also keeps the fluent setup easier to read.

.Given(TheUserHasLoggedIn)
.When(...
.Then(TheUserIsDeleted)
.And(AnAuditLogIsMade)

...

[SkipAfterFailure(true)]
void TheUserHasLoggedIn() { ... }

[SkipAfterFailure(false)]
void TheUserIsDeleted() { ... }
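
The attribute itself could be something as simple as this (again, just a sketch of the idea - no such attribute exists in StoryQ today):

[AttributeUsage(AttributeTargets.Method)]
public class SkipAfterFailureAttribute : Attribute
{
    public bool Skip { get; private set; }

    public SkipAfterFailureAttribute(bool skip)
    {
        Skip = skip;
    }
}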

Apr 13, 2010 at 5:12 PM

I had been wondering about allowing tagging via attributes on step methods too. Guess we might as well :)

Apr 14, 2010 at 12:42 PM

I think this discussion is interesting because it raises the different approaches that you can take to testing with StoryQ. sukchr, it seems to me that you are using it to do pretty BDD in the outside-in approach. Using this approach, your requests make sense to me (I think). Would I be correct in saying that you write the stories and implement as you go? Furthermore, you tend to want a test/assertion/action per line? If this is the case, then I suspect that you might not be writing other tests, eg TDD unit tests or integration tests (I don't know what type of software you're writing - so sorry if my comments are way off the mark!)

What I am getting at is that if you use StoryQ as the main test harness then I too would want more control around the pending tests (I was just talking to someone today about how they need to wrestle with this issue in FitNesse).

So, in short, if you have a need for it, go for it. Personally, I reckon it will clutter your code and create extra work. I think the Explicit attribute does this, and often so too does Ignore - in most cases, they are like commented-out code: it should be deleted right there and then. Work queues aside, it does suggest dependencies in the tests. Again, I wonder about the wisdom of this - it has never been a good thing in the medium-to-long term. I am just working on some code I wrote a few years ago and am very aware of this issue :-)

Back to the question of different approaches to using StoryQ. I don't use StoryQ in the way I have suggested you might be. It is merely part of my test strategy that I describe here. Pertinent to this discussion is that I:

  1.  first write the StoryQ tests in plain text - review and distribute around the team. I want them all to be pending. These are written before any code is cut and are a deliverable of the modelling/architecture session with the team.
  2.  then we code out models/repositories/services (all in the DDD sense) - this phase requires classical and mocking TDD libraries/approaches - I could even go down the Machine Specification library direction.
  3.  then we return to the StoryQ tests and use these tests as the first "client" of the code - I then turn the plain text into functions with data and implement a test universe. This requires that we get our fakes in place, that we have good helpers in order to keep the test universe very skinny, and that we think through the workflow that will happen at the GUI. Currently, I am working on an asp.net webforms project (yes, Rob back in eTravel land!) on incredibly slow machines, which means that the less work we do in the code behind the better - speed- and design-wise.
  4.  now we hit the GUI and get the app going
  5.  finally, we review the StoryQ tests and check that there is good coverage through the original tests


So what's the lesson in this? I write line-of-business apps - StoryQ is a tool that facilitates a process for me and requires engagement in different ways at different stages. So, I just don't need much fancy stuff! In fact, I need to remain lean and mean because I get non-BDD people to use it without much training. The recent changes that have streamlined the syntax have been great because they reduce code. I will digress a little. I now don't even use the GUI converter - it gets in the way. It's great when you forget the syntax, but after that I want developers to type in every line because they keep thinking and reviewing as they type. However, I never get anyone to type in more than one or two user stories at a time - the queue of work is too great. Furthermore, I want them to retype from plain text to functions because in this process they start to see patterns/reuse in the code.

I hope that makes some sense and adds something to the discussion. I don't think it adds much about whether to attribute it or add it to the fluent interface. I'd probably go with the fluent interface because it is written in the context of the story. And really, you are saying that there is a dependency in here (whether technical or simply that it hasn't been done yet). If you attribute it then it is likely to get "lost", because it is going to be on the test universe, and in practice I am finding that that code is not in the same file, because a set of stories even on a simple domain concept can get large (ie needs scrolling). Having said that, I could see that I might want to attribute it sometimes!

Cheers todd

Apr 14, 2010 at 2:31 PM

It's interesting that you say that you never get developers to type in more than two stories at a time. In my (limited) experience so far I too work on one or two stories at a time, but have found that for some stories I may have quite a few scenarios.

I will have shared steps - and shared assertions - in the scenarios, as they are related to the same story. When I set the story up, I may start with the main success scenario, and then pick off secondary scenarios once that is proved. However, in my current set-up, the secondary scenarios will usually fail, even though their preceding steps have not been implemented. As I am using StoryQ with CruiseControl.Net, this fails my automated build!!

I have ended up creating my assertions with a silent 'pending' parameter that causes them to be skipped until the steps above have been implemented. This works for me now, but I really think this should be the behaviour out of the box (perhaps something I could enable with an option?).
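
For example, something like this (the names are invented for illustration):

// Shared assertion with a silent 'pending' flag: while the earlier
// steps are unimplemented, the scenario passes true and the assertion
// is simply not performed, so the build stays green.
private void AMessageIsDisplayed(bool pending)
{
    if (pending) return;
    Assert.AreEqual("Expected message", _displayedMessage);
}

...

.Then(AMessageIsDisplayed, true)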

Just my opinion ;)

Again, thanks for a great tool.

Apr 14, 2010 at 3:21 PM

Paul, can I just try and confirm that this is your process:

Write scenario "a", involving steps 1, 2 & 3 (this won't break the build, because all steps "pend")

Implement steps 1, 2 & 3 (since they all fail, the build will break, so you tend not to check these in)

Write the production code that makes steps 1, 2 & 3 pass (build is fixed - checkin?)

Write scenario "b", involving steps 4, 2 & 5, where 4 & 5 are currently unimplemented (this breaks the build, because step 4 doesn't provide step 2 with any dependencies just yet. If step 2 would "skip" - you'd be happy)

Apr 14, 2010 at 9:21 PM

Essentially that's correct. It currently goes wrong when I have NUnit asserts in the shared steps - for example, checking that a message is displayed.

I prepare the scenarios for the first story:

Main Success Scenario "a" - steps 1, 2 & 3. All are pending.

Secondary Scenario "b" - steps 4, 5 & 3. All are pending.

(+ perhaps more scenarios)

I now check in (all are pending, build works OK).

Now, I implement scenario "a". I implement steps 1 & 2 and add an assert to step 3 to check a result, perhaps to check that a message is being displayed. As soon as I do this, scenario "b" fails - steps 4 & 5 are still pending, but step 3 is implemented, and fails.

I cannot check in scenario "a" as the build will fail.  To make this work, I currently pass a silent parameter into step 3 to indicate whether the assertion should be performed (yuck). This allows me to avoid the problem for scenario "b", and I can check in.

Yes, I would like scenario "b" to skip, as steps 4 & 5 are pending!

Apr 14, 2010 at 9:30 PM
Edited Apr 14, 2010 at 9:31 PM

Oh, I realise that you don't actually need to create all of the scenarios up front; you can put them in as you are about to implement them.

I prefer not to do this, as I want developers to see the scope of the story when they start on it; otherwise they may create a naive implementation for the first scenario, and keep rewriting the production code as the new scenarios are introduced. I see this as wasteful - it's much more efficient to consider the whole story up front in your design, even if the scenarios are implemented incrementally.

Apr 14, 2010 at 10:22 PM

I suppose that I get around this problem by doing the first implementation in strings rather than as methods on the test universe. I often think that I would like the feature you are talking about, and then I remember that there is an overriding issue here, and it comes out of the craftsmanship ideas (Richard Sennett to be precise, and also found in Pete McBreen's book). You want people to be right in the middle between problem finding and problem solving. In this case, writing the scenarios and solving the scenarios is the same as finding and solving. If you expect your developers to solve the problem in a non-naive way then they must work through the problem as much as the solution. Of course, there is always the apprentice's review of the master's work, which I think can be the transformation of the plain text into the method (and probably a refactor of method names). There is then the step, more suited to a journeyman, of asking whether this is even the right problem in the first place. In practice, if you assume that developers will see the scope of the story because you have written it down then I think you are in for disappointment - I know that I have been and continue to be - given that I have teams with people learning these techniques.

Can I also just pick up on the efficiency/waste point. Given you use the term wasteful, I am going to see efficiency in terms of lean thinking and cycle time. Efficiency is the reduction of cycle time. Things that help that are work not done and the reduction of queues of work. Put conversely, queues of undone work (work started and not completed) work against cycle time reduction. Do you think that you are in fact creating these types of queues? Because you need to do more work on these stories, they are clearly not done. They can't be, because you can't ship them and never come back; they break, etc. However, if they were string methods they could be viewed as done. I can and do walk away from tests in this state. They give me great context and documentation of the product that doesn't break. Now, if I need to hook these up to a new method, this is a new piece of work (or task) that then gets to done. It may break other parts, but fixing that *is* part of the job to be done - it's called integrating new functionality into the system ;-)

So I am happy with rewriting. It is somewhat needed for evolutionary design. Rewriting may be rework in a wasteful sense - but then again it's probably just as likely to be the process of developing knowledge and skills. We can't get around this. It is waste, though, when the same person makes the same mistakes time and again. In fact, I would say that about a team: if the team can't learn from the mistakes of the individuals in the team during the project, that too is a major problem. But looking at it this way is more about cycle time and throughput and less to do with efficiency in the sense of local optimisations (people working harder).

There's almost an argument here that adding attributes for doing ignores increases cycle time. I'm not sure I'd quite generalise it that far! cheers todd

Apr 14, 2010 at 10:24 PM

I agree that the silent parameter is less than ideal :)

I also agree that you should be able to create a few scenarios up front 

What we did (and I'm not sure how easy this was with cruise control), was not have our story tests break the build. Our unit tests do, but a separate task runs our stories. If a few story steps are currently failing: no problem, that just lets the team know that there's work to do... 
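
(If you're on NUnit, one way to get this split - not necessarily what we did, and the fixture name here is invented - is to put the story fixtures in a category and exclude it from the main build task:

[TestFixture]
[Category("Stories")]
public class ShoppingCartStories { ... }

then run nunit-console /exclude:Stories in the build that must stay green, and a separate task with /include:Stories for the stories.)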

Are you running multiple scenarios in a single test method? Another option is to [Ignore] the tests that contain the scenarios that you expect to fail. You can store a reference to the Feature that's returned by "IWant", and reuse that across your test class. 

Frantisek and I came up with a framework around this, mentioned on his blog: http://fknet.wordpress.com/2010/03/25/nbehave-out-storyq-in/#comments

I'm not ruling out "skip after fail", but I'd be keen to know what you think about these alternatives...

Apr 14, 2010 at 10:36 PM

(Some background: Todd taught me all about BDD, and is an agile guru. I have been doing most of the implementation for v2 of StoryQ - but he's the one with the big ideas)

Todd: I was thinking about Paul's situation from a "lean" perspective too - and wondering whether to recommend that his developers "pull" stories into the project rather than having him "push" the stories at them.

I have in fact removed the ability to create Given, When and Then steps with strings in a recent version of StoryQ - I saw that as a waste of time (converting from string to CamelCaseMethodNames) - so now you have to provide a real method. If the method throws a NotImplementedException, then that step counts as pending. If you delete that default exception, then the empty method will of course pass...
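
A brand new step therefore looks something like this (method name invented for illustration):

// Throws, so StoryQ reports this step as "Pending !!"
private void TheUserHasLoggedIn()
{
    throw new NotImplementedException();
}

Delete the throw, and the (now empty) step will pass.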

Apr 15, 2010 at 8:48 AM

We are not thinking of 'masters' or 'apprentices' or pushing work onto developers. We simply want to capture scenarios. At my age, my memory is pretty poor and I forget stuff. :)

Our team usually have scenarios noted from our initial discussions, so we like to capture them. We also have domain experts whose ideas we need to capture - I am not one of them - and these people may or may not be on the project.

I like to see the scope of work for a week, so I can also measure our progress against it. I can use the percentage passed, pending and failed to get a view of this. (I am not working as part of a 'pure' agile environment - we have a mix of approaches - lots of waterfall - and a lot of the requirements gathering is done up front by our analysts.)

Thinking more about this, I am not sure I like having many scenarios 'hooked up' as part of one test method. When running under NUnit (I use the ReSharper test runner) it makes it difficult to see which stories/scenarios are passing/pending/failing when there are many pending scenarios. It would be nice to see red/yellow/green lights at a more granular level. Is this my real problem?

If I could split up my scenarios then I could flag unimplemented scenarios as 'ignored' and this would solve my problem. Is it possible to re-use stories across multiple test methods?

Apr 15, 2010 at 9:50 AM

You certainly can reuse stories (although by the time you've called "IWant", the class returned by the fluent interface is actually a Feature):

private Feature story = new Story("Data Safety")
    .InOrderTo("Keep my data safe")
    .AsA("User")
    .IWant("All credit card numbers to be encrypted");

[TestMethod]
[Ignore]
public void PassingExample()
{
    story.WithScenario("submitting shopping cart")
        .Given(IHaveTypedMyCreditCardNumberIntoTheCheckoutPage).Tag("sprint 1")
        .When(IClickThe_Button, "Buy")
        .And(TheBrowserPostsMyCreditCardNumberOverTheInternet)
        .Then(TheForm_BePostedOverHttps, true)
        .ExecuteWithReport(MethodBase.GetCurrentMethod());
}

[TestMethod]
public void PendingExample()
{
    story.WithScenario("submitting shopping cart pending")
        .Given(IHaveTypedMyCreditCardNumberIntoTheCheckoutPage)
        .When(IClickThe_Button, "Buy")
        .And(TheBrowserPostsMyCreditCardNumberOverTheInternet)
        .Then(TheForm_BePostedOverHttpsPending, true).Tag("this one ought to pend")
        .ExecuteWithReport(MethodBase.GetCurrentMethod());
}

Apr 15, 2010 at 11:04 AM

I love it when a plan comes together ;)

Jun 2, 2010 at 1:28 PM

Sorry about the delayed response to this thread. The reason for this feature request is that I think the StoryQ output should clearly tell me the next step to implement in order to make some scenario/feature pass. Here's a story to illustrate my point:

private int _value;

[Test]
public void ManipulateNumber()
{
    new Story("manipulate number")
        .InOrderTo("be productive")
        .AsA("developer")
        .IWant("to manipulate a number")
            .WithScenario("increment value")
                .Given(InitialValueIs_, 6)
                .When(ValueIsIncremented)
                .Then(NewValueIs_, 7)
            .WithScenario("decrement value")
                .Given(InitialValueIs_, 3)
                .When(ValueIsDecremented)
                .Then(NewValueIs_, 2)
            .ExecuteWithReport(MethodBase.GetCurrentMethod());
}

private void InitialValueIs_(int initial)
{
    throw new NotImplementedException();
}

private void ValueIsIncremented()
{
    _value++;
}

private void ValueIsDecremented()
{
    _value--;
}

private void NewValueIs_(int expected)
{
    Assert.AreEqual(expected, _value);
}

It's clear from reading the code that the _next_ thing to implement in order to fix this scenario is InitialValueIs_(). However, if I execute this story, StoryQ renders the following output:

Story is manipulate number
  In order to be productive
  As a developer
  I want to manipulate a number

      With scenario increment value
        Given initial value is 6    => Pending !!
        When value is incremented   => Passed
        Then new value is 7         => Failed: "  Expected: 7
  But was:  1
 [1]"

      With scenario decrement value
        Given initial value is 3    => Pending !!
        When value is decremented   => Passed
        Then new value is 2         => Failed: "  Expected: 2
  But was:  0
 [2]"

_______________________
Full exception details:
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
[1]: NUnit.Framework.AssertionException:   Expected: 7
  But was:  1

   at NUnit.Framework.Assert.That(Object actual, IResolveConstraint expression, String message, Object[] args)
   at NUnit.Framework.Assert.AreEqual(Int32 expected, Int32 actual)
   at Bdd.Tests.CalculatorTests.NewValueIs_(Int32 expected) in C:\Users\chriss\Documents\Visual Studio 2008\Projects\Bdd\Bdd\Tests\CalculatorTests.cs:line 96
   at StoryQ.Operation.<>c__DisplayClassd`1.b__c() in C:\code\storyq\src\StoryQ\StoryQ.flit.g.cs:line 808
   at StoryQ.Infrastructure.Step.Execute() in C:\code\storyq\src\StoryQ\Infrastructure\Step.cs:line 85

[2]: NUnit.Framework.AssertionException:   Expected: 2
  But was:  0

   at NUnit.Framework.Assert.That(Object actual, IResolveConstraint expression, String message, Object[] args)
   at NUnit.Framework.Assert.AreEqual(Int32 expected, Int32 actual)
   at Bdd.Tests.CalculatorTests.NewValueIs_(Int32 expected) in C:\Users\chriss\Documents\Visual Studio 2008\Projects\Bdd\Bdd\Tests\CalculatorTests.cs:line 96
   at StoryQ.Operation.<>c__DisplayClassd`1.b__c() in C:\code\storyq\src\StoryQ\StoryQ.flit.g.cs:line 808
   at StoryQ.Infrastructure.Step.Execute() in C:\code\storyq\src\StoryQ\Infrastructure\Step.cs:line 85

I think that the next action to take would be clearer if the output were simply:

Story is manipulate number
  In order to be productive
  As a developer
  I want to manipulate a number

      With scenario increment value
        Given initial value is 6    => Pending !!
        When value is incremented   => Skipped
        Then new value is 7         => Skipped

      With scenario decrement value
        Given initial value is 3    => Pending !!
        When value is decremented   => Skipped
        Then new value is 2         => Skipped

Jun 4, 2010 at 3:53 PM

I think that in your situation, you're right, it's clearer which step needs to be fixed first. However, it's still relatively obvious that any step that's not passing needs to be looked at, and you'd start with the first one that wasn't passing...

I'm always against hiding error information, even when the program thinks it's safe to ignore it. I can think of other situations where step 6 might have an error, but step 8's error gives you a better idea of what the root cause is...