
© 2009 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE. For more information, please see http://www.ieee.org/web/publications/rights/index.html.

Design for Test

Rebecca J. Wirfs-Brock

IEEE Software, Vol. 26, No. 5, Sept./Oct. 2009

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.


Editor: Rebecca J. Wirfs-Brock ■ Wirfs-Brock Associates ■ rebecca@wirfs-brock.com

"Ideas must be put to the test. That's why we make things; otherwise they would be no more than ideas. There is often a huge difference between an idea and its realisation." —Andy Goldsworthy

As developers, we're expected to turn out implementations proven by tests that we or others have written. Doing otherwise is considered unprofessional. But does code that's designed to be testable differ fundamentally from code that isn't? What does it mean to design for test?

Making Code Testable

Advocates of test-driven development (TDD) write tests before implementing any other code. They take to heart Tom Peters' credo, "Test fast, fail fast, adjust fast." Testing guides their design as they implement in short, rapid-fire "write test code—fail the test—write enough code to pass—then pass the test" cycles. Regardless of whether you adhere to TDD design rhythms, writing unit tests forces you to articulate pesky edge cases and clean up your design.

Michael Feathers has cheekily defined a legacy system as code that doesn't have tests. He says that to be testable, code needs appropriate seams. In Working Effectively with Legacy Code (Prentice Hall, 2005), Michael defines a seam as a place where you can alter your program's behavior without having to rewrite it. Every seam has an enabling point—a place where you can decide to use one behavior over another. There are two main reasons to include these seams:

■ so that you can insert test code that probes the state of your running software, and
■ to isolate code under test from its production environment so that you can exercise it in a controlled testing context.

You can design seams in different ways, ranging from preprocessing flags and conditionals to adjusting class paths and dynamically injecting dependencies between collaborators. You also need to isolate and encapsulate dependencies on the external environment. All these techniques let you insert code that exercises your software without altering the code being tested.
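To make the idea of a seam concrete, here is a minimal sketch in Java, using hypothetical names that aren't from the column. The class under design depends on a small interface rather than reading the system date directly, and the constructor is the enabling point where a test substitutes a fixed implementation.

```java
// A minimal sketch of an object seam; the names are hypothetical.
// The seam: InvoiceDater depends on the Clock interface, not on the system date.
// The enabling point: the constructor, where callers decide which Clock to use.

import java.time.LocalDate;

interface Clock {
    LocalDate today();
}

// Production implementation reads the real system date.
class SystemClock implements Clock {
    public LocalDate today() {
        return LocalDate.now();
    }
}

class InvoiceDater {
    private final Clock clock;

    InvoiceDater(Clock clock) {           // enabling point
        this.clock = clock;
    }

    LocalDate dueDate() {
        return clock.today().plusDays(30);
    }
}

// A test isolates the code from its environment with a fixed clock:
class FixedClock implements Clock {
    public LocalDate today() {
        return LocalDate.of(2009, 9, 1);
    }
}
// new InvoiceDater(new FixedClock()).dueDate() now returns the same date every run.
```

Because the collaborator arrives through the constructor, test code can exercise the class in a controlled context without altering the code being tested.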

In addition to inserting appropriate test hooks, you should write your code so that it doesn't have unnecessary dependencies on concrete class names, values, and variables—anything that you might want to replace in a test environment. You can do this in many ways—for example, by

■ using configurable factories to retrieve service providers,
■ declaring and passing along parameters instead of hardwiring references to service providers,
■ declaring interfaces that can be implemented by test classes,
■ declaring methods as overridable by test methods,
■ avoiding references to literal values, and
■ shortening lengthy methods by making calls to replaceable helper methods.

In short, you need to provide appropriate test affordances—factoring your design in a way that lets test code interrogate and control the running system.
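As one illustration of the first technique in the list above, here is a brief sketch of a configurable factory, again with hypothetical names. Production code asks the factory for its service provider; a test installs a stub before exercising the code under test.

```java
// A sketch of a configurable factory; the names are hypothetical.
// Production code asks the factory for a PaymentGateway; a test registers
// a stub implementation before exercising the code under test.

interface PaymentGateway {
    boolean charge(String account, long cents);
}

class LivePaymentGateway implements PaymentGateway {
    public boolean charge(String account, long cents) {
        // A real network call would go here in production.
        return true;
    }
}

final class PaymentGatewayFactory {
    private static PaymentGateway instance = new LivePaymentGateway();

    private PaymentGatewayFactory() {}

    static PaymentGateway get() {
        return instance;
    }

    // Test affordance: the enabling point where a test installs a replacement.
    static void set(PaymentGateway replacement) {
        instance = replacement;
    }
}

// In a test:
//   PaymentGatewayFactory.set((account, cents) -> false);  // always-decline stub
//   ...exercise the billing code, then assert on how it handles the decline...
```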

On the surface, this sounds like nothing more than good design practice. And largely, this is true.


But adding the capability to transparently insert test code has consequences. It can add extra wiring, assembly, and interaction steps to your software. Understanding how collaborations are established can become slightly more difficult because wiring and assembly steps are often accomplished by indirect dependency-injection techniques.

This approach can also involve a lot of fiddling and rework if your code wasn't designed this way from the start. My colleague Don Birkley observes,

One critical aspect of design for test is to keep classes designed so that in vitro tests are even possible. This involves not only clean, well-factored design, but also creating the context objects to supply the "nutrients" and "oxygen" for the objects under test.

Code that’s designed for test must continually be tested. If it isn’t, any test affordances you add are purely speculative.

Balancing Test and Product Code

So far we've been talking about testing from the developer's perspective on writing unit tests. But what do testers need from a design? Performance-test engineers need the ability to predictably set up, control, and measure software execution. Sometimes this requires designing extra hooks that let them precisely configure and control characteristics affecting software performance. And sometimes these extensions work their way into products, because sophisticated customers also find performance-tuning capabilities useful.

Testers also like to automate their tests. For this to be feasible, they need predictable, stable behavior at points where they stimulate software and measure its behavior. It can be disastrous when gratuitous design changes break a lot of tests. Unless I know how tests exercise my design (and how they verify its behavior), as a designer I won't know what's fair game to change and what behavior I must preserve. But whatever my software updates, logs, or reports is fair game for a test to examine. However, I need to know what the tests want to examine and whether I think it's reasonable for test code to do so. To reduce both design and test rework, the contractual agreement between what a design produces and what tests consume should be established early. It's much harder to wedge in consistent error-messaging and logging strategies as a design afterthought.
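One way to establish that contract early is sketched below, with hypothetical names not taken from the column: route error reporting through a single message catalog so that tests assert on stable identifiers rather than on wording that a later design change might revise.

```java
// A sketch of a stable error-messaging contract; the names are hypothetical.
// Tests assert on the stable code "ORD-001"; the human-readable wording can
// be reworded later without breaking them.

enum ErrorMessage {
    ORDER_REJECTED("ORD-001", "Order rejected: %s"),
    PAYMENT_DECLINED("PAY-002", "Payment declined for account %s");

    private final String code;
    private final String template;

    ErrorMessage(String code, String template) {
        this.code = code;
        this.template = template;
    }

    String format(Object... args) {
        return code + " " + String.format(template, args);
    }
}

// Production code:  log.warn(ErrorMessage.ORDER_REJECTED.format("missing SKU"));
// Test code:        assertTrue(lastLogLine().startsWith("ORD-001"));
```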

An agile development team's manager was frustrated by the increasing difficulty her team had making any significant design changes to the production system. After building extensive test suites, they were confronted with a tough choice: either make desired changes to product code and break quite a few tests, or preserve the tests and create an awkward design solution. Her team took their best shot at defining what they expected to be the "stable" contract between test and product code. Sometimes you just can't make nontransparent design changes to support new product features. This isn't just specific to tests. Part of evaluating any design change is understanding its impact on the overall system. Tests are just one of the system parts that might be impacted. You can't realistically expect that tests or the software's design won't have to change to accommodate new requirements. To make rational decisions about how to support changing requirements, you need to manage the design of production code, unit tests, and acceptance tests as codependent assets.

Promoting Repeatable Behavior

Tests help define and constrain behaviors. Still, they don't guarantee that software works predictably. But as any experienced designer of complex software knows, the more you can pin down your design and make it exhibit repeatable behavior, the easier it will be to maintain. Nondeterministic behavior makes reproducing certain bugs nearly impossible. So, compiler writers know it's important that, given the same source files, compilers should generate identical, not equivalent, code. Doing something as innocuous as using a nondeterministic hash value, such as object identity, for a table lookup can throw a monkey wrench into the works.

That's also why designers of real-time systems have their own grab bag of established design techniques to make their software behave more deterministically. It's also why those who write random-number generators know to design their code to return the same sequence, given the same seed. And designers of complex calculations take the time to design consistent, stylized code. Reproducible behavior is inherently easier to test. If tests place constraints on a design, so too, should the necessity to reproduce, isolate, and fix bugs.
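As a small illustration of that last point, here is a sketch, with hypothetical names, of treating the random-number generator as an injected, seedable dependency so that a test can replay exactly the sequence that exposed a bug.

```java
// A sketch of repeatable randomness; the names are hypothetical.
// The generator is a constructor parameter, so a test can pass a fixed seed
// and replay exactly the sequence that exposed a bug.

import java.util.Random;

class RetryJitter {
    private final Random random;

    RetryJitter(Random random) {   // production: new Random(); test: new Random(42L)
        this.random = random;
    }

    long nextDelayMillis(long baseMillis) {
        // Add up to 50 percent jitter to the base delay.
        return baseMillis + (long) (random.nextDouble() * baseMillis * 0.5);
    }
}

// Given the same seed, java.util.Random produces the same sequence, so
// new RetryJitter(new Random(42L)) behaves identically on every test run.
```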

If you design for testing, debugging ease, and repeatable behavior, does it ever get easier and more intuitive? Or does it always take significant time and effort? Designing for test involves discipline and vigilance. I'm not sure it ever becomes easy. But it can become more routine, especially if you treat the design of tests and of the code that satisfies them as complementary parts of your development process.

Rebecca J. Wirfs-Brock is president of Wirfs-Brock Associates.

Contact her at rebecca@wirfs-brock.com; www.wirfs-brock.com.

