Setup/Teardown between scenarios

Dec 2, 2009 at 3:02 AM

I may have overlooked the mechanism for doing this, but what is the recommended approach for performing scenario-specific setup/teardown in a multi-scenario story, where the setup/teardown doesn't relate to the actual specification of the scenario? For example, I might need to reconfigure my IOC container at the start of each scenario, or a scenario that has inserted data might require a database rollback once it has finished. I wouldn't feel comfortable including these in any of the actions assigned to the Given/When/Then steps, but currently I'm unaware of any other way to do it. Are there any plans to include Setup/TearDown delegates that could be passed as arguments to the Scenario or included in the fluent interface?

Coordinator
Dec 2, 2009 at 8:55 AM

Mr Makers:

A lot of people use the "given" step to perform any scene-setting "setup" operations, especially if they relate to the situation. For example, if your narrative is "Given that two jumpers have been sold", then you'd run a script to insert a sale with two products into your database. If you had a subsequent scenario in the same [Test] method, that scenario would effectively rely on the previous scenario having occurred, so you'd want that sale to still be in the database, plus whatever changes occurred in your "when" narrative. The same applies to setting up expectations / return values on mock objects.

However, for things like reconfiguring your IOC container or bringing your database up to a known default state - things I'd expect you only need to do at the beginning of each test method - just use the standard SetUp/TearDown in your test runner.
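To make that concrete, something along these lines (just a sketch - the bootstrapper and database helper names are placeholders for whatever your project already uses):

    using NUnit.Framework;

    // Hypothetical helpers standing in for your own infrastructure code.
    internal static class IocBootstrapper
    {
        public static void Reset() { /* rebuild the container registrations */ }
    }

    internal static class TestDatabase
    {
        public static void RestoreDefaultState() { /* reset to known default data */ }
        public static void RollBack() { /* undo whatever the test inserted */ }
    }

    [TestFixture]
    public class SellingJumpersStory
    {
        [SetUp]
        public void SetUp()
        {
            // Runs before each [Test] method, i.e. before each story runs.
            IocBootstrapper.Reset();
            TestDatabase.RestoreDefaultState();
        }

        [TearDown]
        public void TearDown()
        {
            TestDatabase.RollBack();
        }

        [Test]
        public void Selling_two_jumpers()
        {
            // Build the StoryQ story here; its "given" step runs the script
            // that inserts the sale with two products.
        }
    }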

That's my recommendation anyway, and it's why we won't be putting setup/teardown delegates into the scenario class (what about the story class? etc...). However, if you find that in your case you really do need it (everyone's situation is different), then feel free to grab the source and add it yourself :)

Cheers - Rob

Dec 2, 2009 at 9:26 PM

Thanks, yep, I've been using the Given step to perform all the context setup relevant to the story; I was just having issues with stories that have multiple scenarios, which would result in a conflicting state. My requirements are slightly different from what you've described, where each scenario depends on the state set by the previous scenario. I can see where that would be desirable, but in my case I have scenarios which I prefer to be independent - e.g. I have a story where one scenario tests an outcome when a user has 2 shopping cart items and $100 credit, and another scenario tests an outcome where the user has 3 shopping cart items and $50 credit (2 distinct scenarios belonging to the same story). Ideally I'd like to roll back the database to a consistent state between each scenario, but at the moment that means either separating the 2 scenarios into 2 test methods (and therefore defining the same user story twice) in order to use the SetUp/TearDown methods, or creating the story and asserting, calling the cleanup, then creating another instance of the same story with different scenarios and asserting again.

I hoped I'd found a way around it by creating a story with 1 scenario and calling Assert(), then doing my cleanup and adding another scenario to the same story and calling Assert() again, but it seems to just ignore the 2nd Assert, presumably because the story is in a completed state.
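Just to spell out the "2 test methods" option I mentioned above, it would come out roughly like this (a sketch only - it assumes the data access enlists in the ambient TransactionScope, so disposing without Complete() rolls each scenario's inserts back):

    using System.Transactions;
    using NUnit.Framework;

    [TestFixture]
    public class ShoppingCartStory
    {
        private TransactionScope _transaction;

        [SetUp]
        public void SetUp()
        {
            // Each scenario runs inside its own transaction...
            _transaction = new TransactionScope();
        }

        [TearDown]
        public void TearDown()
        {
            // ...which is rolled back here (no Complete() call), so the
            // next scenario starts from the same database state.
            _transaction.Dispose();
        }

        [Test]
        public void Two_cart_items_and_100_dollars_credit()
        {
            // define the story (again) and run the first scenario
        }

        [Test]
        public void Three_cart_items_and_50_dollars_credit()
        {
            // define the story (again) and run the second scenario
        }
    }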

Anyway thanks for the response, I've modified the source a little to meet my requirements, I look forward to seeing what the latest version will bring!

 

 

Coordinator
Dec 3, 2009 at 8:48 AM

While I always think of a story as something that happens in sequence, the next version of StoryQ will let you reuse a story across tests much better than the current release does.

Jan 15, 2011 at 12:10 PM
Edited Jan 15, 2011 at 1:02 PM

We are using this a lot for acceptance and integration testing, and we frequently find a number of scenarios with scenario-specific setup under a single story. We define the story globally, then add the scenario to it and execute it in separate [Test] methods.

Example: One test method per scenario

    using System;
    using System.Reflection;
    using NUnit.Framework;
    using StoryQ;

    [TestFixture]
    public class MultipleScenariosStory
    {
        private Feature _feature;

        [SetUp]
        public void SetUp()
        {
            // The story is defined once; each [Test] adds and runs its own scenario.
            _feature = new Story("Potatoes Story")
                .InOrderTo("Eat Potatoes")
                .AsA("User")
                .IWant("To Grow Potatoes");
        }

        [Test]
        public void scenario1()
        {
            _feature.WithScenario("Scenario 1")
                .Given(Something)
                .When(Something)
                .Then(Something)
                .ExecuteWithReport(MethodBase.GetCurrentMethod());
        }

        [Test]
        public void scenario2()
        {
            _feature.WithScenario("Scenario 2")
                .Given(Something)
                .When(Something)
                .Then(Something)
                .ExecuteWithReport(MethodBase.GetCurrentMethod());
        }

        private void Something()
        {
            // Not yet implemented, so each step reports as "Pending".
            throw new NotImplementedException();
        }
    }

This works, but the report output is not grouped under the parent story, which makes it more verbose, like this:

Stories

  • storyqexamples
    • storyqexamples
      • MultipleScenariosStory
        • scenario1
          Story is Potatoes Story  
            In order to Eat Potatoes  
            As a User  
            I want To Grow Potatoes  
                With scenario Scenario 1  
                  Given something Pending
                  When something Pending
                  Then something Pending
        • scenario2
          Story is Potatoes Story  
            In order to Eat Potatoes  
            As a User  
            I want To Grow Potatoes  
                With scenario Scenario 2  
                  Given something Pending
                  When something Pending
                  Then something Pending

The output can still be grouped by class or method name, but ideally the scenario output would appear under the story. Any suggestions for how this could be achieved?

----------------------------

Rob - As an aside, I'm interested in how you are writing your stories. How, say, would you write the validation scenarios for a form? These are separate cases that really do belong under one story.

Example:

Story is Amending customer details

Scenario: Not entering an email address

Given I have not entered an email address 

When I save my details

Then a validation error message is displayed

Scenario: Entering an invalid email address

Given I have entered an invalid email address

When I save my details

Then a validation error message is displayed

 ... etc.

This example doesn't require scenario-specific setup, but it is the Scenario, not the Story, that is the collection of steps that happen in sequence.
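With the per-test-method pattern from my example above, that story would come out something like this (a sketch; the narrative text and step methods are placeholders):

    using System.Reflection;
    using NUnit.Framework;
    using StoryQ;

    [TestFixture]
    public class AmendingCustomerDetailsStory
    {
        private Feature _feature;

        [SetUp]
        public void SetUp()
        {
            _feature = new Story("Amending customer details")
                .InOrderTo("Keep my details up to date")   // placeholder narrative
                .AsA("Customer")
                .IWant("To amend my details");
        }

        [Test]
        public void Not_entering_an_email_address()
        {
            _feature.WithScenario("Not entering an email address")
                .Given(IHaveNotEnteredAnEmailAddress)
                .When(ISaveMyDetails)
                .Then(AValidationErrorMessageIsDisplayed)
                .ExecuteWithReport(MethodBase.GetCurrentMethod());
        }

        [Test]
        public void Entering_an_invalid_email_address()
        {
            _feature.WithScenario("Entering an invalid email address")
                .Given(IHaveEnteredAnInvalidEmailAddress)
                .When(ISaveMyDetails)
                .Then(AValidationErrorMessageIsDisplayed)
                .ExecuteWithReport(MethodBase.GetCurrentMethod());
        }

        // Placeholder step bodies.
        private void IHaveNotEnteredAnEmailAddress() { }
        private void IHaveEnteredAnInvalidEmailAddress() { }
        private void ISaveMyDetails() { }
        private void AValidationErrorMessageIsDisplayed() { }
    }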

Coordinator
Jan 16, 2011 at 2:47 AM

alexlea,

Your code is correctly written - it is DRY. However, I am less sure that user stories should avoid all duplication. I usually find it is better to write out all the tests longhand and not worry about the duplication. Often, where I thought the code would look like yours above, the conditions change as I learn more.

In terms of your other question about testing validation scenarios: I am still of the opinion that these should be written in classical-style TDD, in a unit test project, and not mixed up with system/user-story tests. For validation scenarios there is often much duplication, and the need for DRY code is important, but user stories create an enormous amount of noise. I would use a test builder pattern (see validating mother objects here). Validations should all be enforced server-side anyway, and models should be asked whether they are valid.
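By way of illustration only (the Customer model and its IsValid method are made up for the example): start each test from a valid "mother" object, break exactly one thing, then ask the model whether it is valid.

    using NUnit.Framework;

    // Made-up model; the point is that it can be asked whether it is valid.
    public class Customer
    {
        public string Name { get; set; }
        public string Email { get; set; }

        public bool IsValid()
        {
            return !string.IsNullOrEmpty(Name)
                && !string.IsNullOrEmpty(Email)
                && Email.Contains("@");
        }
    }

    // The "mother" hands out a known-valid customer, so each test
    // only has to state the one thing it invalidates.
    public static class ValidCustomer
    {
        public static Customer Build()
        {
            return new Customer { Name = "Jane Doe", Email = "jane@example.com" };
        }
    }

    [TestFixture]
    public class CustomerValidationTests
    {
        [Test]
        public void Customer_without_an_email_address_is_invalid()
        {
            var customer = ValidCustomer.Build();
            customer.Email = null;
            Assert.IsFalse(customer.IsValid());
        }

        [Test]
        public void Customer_with_a_malformed_email_address_is_invalid()
        {
            var customer = ValidCustomer.Build();
            customer.Email = "not-an-email";
            Assert.IsFalse(customer.IsValid());
        }
    }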

I can see that you might want to drive validation scenarios through a GUI - as system testing - but why?! It is much harder, slower, and more brittle.

Is that helpful?

--tb

Jan 25, 2011 at 11:02 PM

Hi todd,

Yes helpful and interesting!

As you point out it's no biggie, but it would be nice, purely from an organisational point of view, to be able to see them all together. I guess this partly depends on work breakdown and what constitutes a user story. We don't always find a 1 => 1 mapping between story cards and StoryQ stories. Do you?

For validation scenarios I'm inclined to test a valid path, each mandatory field, and one invalid input on each field. This provides great documentation of the actual requirements and is relatively cheap, since there's such a high level of reuse. I would then unit test the validation rules more exhaustively, and as you say, ideally they would live in the domain. Legacy code forces unpleasant but pragmatic choices sometimes :(
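At the unit level that tends to collapse into a parameterised test, something like this sketch (the EmailRule class is invented for the example):

    using NUnit.Framework;

    // Invented stand-in for a validation rule living in the domain.
    public class EmailRule
    {
        public bool IsSatisfiedBy(string email)
        {
            return !string.IsNullOrEmpty(email) && email.Contains("@");
        }
    }

    [TestFixture]
    public class EmailRuleTests
    {
        // One row for the valid path, one for the mandatory-field case,
        // one for an invalid input.
        [TestCase("jane@example.com", true)]
        [TestCase("", false)]
        [TestCase("not-an-email", false)]
        public void Email_rule(string email, bool expected)
        {
            Assert.AreEqual(expected, new EmailRule().IsSatisfiedBy(email));
        }
    }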

Story noise is something we've noticed a few times, and I keep pondering a tabular/data-driven approach, a la FitNesse, for describing some stories' scenarios more concisely (billing and validation spring to mind). I've seen a few threads on that too.
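One halfway house I keep sketching is letting NUnit supply the rows and having a single [Test] build the scenario from them, roughly like this (the story text and step methods are placeholders; the steps read the fields the test case sets):

    using System.Reflection;
    using NUnit.Framework;
    using StoryQ;

    [TestFixture]
    public class BillingStory
    {
        private int _cartItems;
        private int _credit;

        // Each row becomes a scenario with the same shape but different data.
        [TestCase(2, 100)]
        [TestCase(3, 50)]
        public void Billing_scenarios(int cartItems, int credit)
        {
            _cartItems = cartItems;
            _credit = credit;

            new Story("Billing")                       // placeholder narrative
                .InOrderTo("Be billed correctly")
                .AsA("User")
                .IWant("My credit applied to my cart")
                .WithScenario(string.Format("{0} items and ${1} credit", cartItems, credit))
                .Given(TheUserHasItemsAndCredit)
                .When(TheOrderIsPlaced)
                .Then(TheCorrectAmountIsBilled)
                .ExecuteWithReport(MethodBase.GetCurrentMethod());
        }

        // Placeholder steps; they read _cartItems and _credit set above.
        private void TheUserHasItemsAndCredit() { }
        private void TheOrderIsPlaced() { }
        private void TheCorrectAmountIsBilled() { }
    }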

In one case we are experimenting with StoryQ driving Selenium RC for web acceptance testing on an app with a fairly low-fi GUI. So far it's proving a practical choice, but I suspect that's mainly down to the simplicity of the app and the lack of external integration points.

Alex

 

 

Coordinator
Jan 26, 2011 at 5:42 AM

Alex,

> We don't always find a 1 => 1 mapping between story cards and StoryQ stories. Do you?

No, me neither. Initially, I lure teams into thinking that this will give great traceability, knowing that it doesn't work that way! There is a theoretical basis for why this is the case (romantic versus baroque complexity, discussed here: http://blog.goneopen.com/2008/10/requirements-and-complexity-the-devil-is-in-the-detail/).

>For validation scenarios I'm inclined to test a valid path, each mandatory field and one invalid input on each field. This provides great documentation as to the actual requirements and is relatively cheap since there's such a high level of reuse. I would then unit test the validation rules more exhaustively and as you say ideally they would exist in the domain. Legacy code forces unpleasant but pragmatic choices sometimes ):

It sounds like you're entrenching a system/design that you don't believe in. But if you have tests at the integration level, then it's not technically "legacy" in the Feathers sense - now you have the opportunity to refactor a domain in!

>Story noise is something we've noticed a few times and I keep pondering a tabular/data driven approach a-la fitnesse for describing some stories' scenarios more concisely (billing and validation spring to mind). I've seen a few threads on that too.

If you end up there, FitNesse, Slim, or SpecFlow are fine. I would tend towards SpecFlow if you don't have the customers heavily involved.

>In one case we are experimenting with StoryQ driving Selenium RC for web acceptance testing on an app with a fairly low-fi GUI. So far it's proving a practical choice but I suspect that's mainly down to the simplicity of the app and lack of external integration points.

Hey, if it's still working, isn't painful, isn't brittle, is providing value, and is actually maintainable - and you keep an eye on the risky areas - go for it. You are lucky!

--tb