Wednesday, June 17, 2009

Automated Acceptance Tests - Myth or Reality? Useful or not so much?

  • tags: tdd

    • Are automated acceptance tests, written at the beginning of each iteration, just a theoretical prescription that has been proven ineffective by its lack of adoption?
    • Why not just write the tests on a whiteboard, have the programmers implement them one at a time, check them manually (even demonstrating them to the product owner by hand), and, when we are done, simply erase the board and forget it?
      • The hope is that they will catch regressions. If not - no need to automate. - post by dserebren
    • Now, consider if the tests written by the QA department are written before the development begins. 
      • How could tests possibly be written before the software is written? It seems that, at best, they can be written at the same time as the software (in a TDD way). - post by dserebren
    • One of the things that comes out of watching Brian Marick's interview is that many teams are doing just fine with unit testing for regressions and manual, ad-hoc QA testing for acceptance.
    • ThoughtWorks is building Twist to give QA and test automators the right kind of tool support to make functional test automation work.
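The kind of automated acceptance test being debated above can be sketched in a few lines. This is a minimal illustration in plain Python; the feature under test (a bulk-discount rule) and its API are invented for the example, not taken from the linked discussion or from Twist.

```python
# A minimal automated acceptance test, written as an executable
# specification. The feature (a hypothetical bulk-discount rule)
# is an assumption made up for illustration.

def order_total(unit_price, quantity):
    """Hypothetical feature: orders of 10+ units get a 10% discount."""
    subtotal = unit_price * quantity
    return subtotal * 0.9 if quantity >= 10 else subtotal

def test_small_orders_pay_full_price():
    assert order_total(5.0, 2) == 10.0

def test_bulk_orders_get_ten_percent_off():
    assert order_total(5.0, 10) == 45.0

if __name__ == "__main__":
    test_small_orders_pay_full_price()
    test_bulk_orders_get_ten_percent_off()
    print("all acceptance tests pass")
```

The point of automating such a test, rather than erasing it from the whiteboard, is exactly the regression-catching hope mentioned above: the check keeps running on every build after the demo to the product owner is over.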

Posted from Diigo. The rest of my favorite links are here.

InfoQ: Stop and Refactor?

  • tags: no_tag

    • So what can we conclude, comparing just these two? First, not refactoring provides MORE feature value until some time after refactoring ceases. Second, to know when refactoring starts to win we need to know some numbers: how long will refactoring take, and what will be its impact on velocity? And, how long will it be until the code gets crufty again, which will bring the numbers back down?
      • This is like discussing when to clean up the kitchen and how that impacts delivery of new food. We are comparing stopping to clean now (and therefore food delivery will slow for a while) to not cleaning and therefore keeping the food delivery unchanged. It strikes me that this discussion somehow entirely misses the main point - that it currently stinks in the kitchen. Your cooks do not like it. Your customers do not like it. The food being prepared is being compromised by the garbage around it. As all cooks know, there is only one answer - clean as you go. And if you find that you didn't - stop, clean, then continue. - post by dserebren
    • Now, if we set the Refactoring Accelerator above x, RA > x, what happens? Can we go faster, or do we always slow down? We know that at one point, RA = 1.0, we do slow down, to zero (but we can speed up later).
      • It seems that in addition to the "feature injection rate" and "defect injection rate" we also need to measure a "garbage injection rate". We can reason about the garbage injection rate much as we do about the defect rate, but the two are different and should be tracked separately: one can have defect injection without garbage injection and vice versa, and different stakeholders are in a position to measure each - QA/users vs. the developers. - post by dserebren
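The trade-off quoted above can be made concrete with a toy model. All numbers here (base velocity, 5% cruft decay per iteration, a two-iteration refactoring pause) are invented assumptions for illustration, not figures from the InfoQ article.

```python
# Toy model of the refactoring trade-off: cruft ("garbage injection")
# drags velocity down each iteration; stopping to refactor costs idle
# iterations but restores full velocity. All parameters are assumed.

def cumulative_value(iterations, refactor_at=None, pause=2):
    """Sum feature value delivered over `iterations`. Without
    refactoring, velocity decays 5% per iteration; refactoring at
    iteration `refactor_at` delivers nothing for `pause` iterations,
    then resets velocity to its starting level."""
    velocity, total = 10.0, 0.0
    i = 0
    while i < iterations:
        if refactor_at is not None and i == refactor_at:
            i += pause            # no features delivered while cleaning up
            velocity = 10.0       # cruft removed, full speed restored
            continue
        total += velocity
        velocity *= 0.95          # cruft accumulates, velocity decays
        i += 1
    return total

print("no refactoring, 30 iterations:  ", cumulative_value(30))
print("refactor at 10, 30 iterations:  ", cumulative_value(30, refactor_at=10))
```

Under these assumed numbers the model reproduces the article's shape: over a short horizon the non-refactoring team delivers more total value, but past the crossover point the refactoring team wins, and it keeps losing ground again as cruft re-accumulates, which is where the "clean as you go" comment above comes in.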
