Test-Driven Development
Test-driven development (TDD) is a software development process that relies on the repetition of a very short development cycle: requirements are turned into very specific test cases, then the software is improved just enough to pass the new tests, and nothing more. This is opposed to software development that allows software to be added that is not proven to meet requirements.
Development Cycle
1. Add a test
In test-driven development, each new feature begins with writing a test. The developer can accomplish this through use cases and user stories to cover the requirements and exception conditions, and can write the test in whatever testing framework is appropriate to the software environment.
2. Run all tests and see if the new test fails
This validates that the test harness is working correctly, shows that the new test does not mistakenly pass without requiring new code (that is, that the required behavior does not already exist), and rules out the possibility that the new test is flawed and will always pass. The new test should fail for the expected reason. This step increases the developer's confidence in the new test.
3. Write the code
The next step is to write some code that causes the test to pass. The new code written at this stage is not perfect and may, for example, pass the test in an inelegant way. That is acceptable because it will be improved and honed in Step 5. At this point, the only purpose of the written code is to pass the test. The programmer must not write code that is beyond the functionality that the test checks.
4. Run tests
If all test cases now pass, the programmer can be confident that the new code meets the test requirements and does not break or degrade any existing features. If any test fails, the new code must be adjusted until every test passes.
5. Refactor code
The growing code base must be cleaned up regularly during test-driven development. New code can be moved from where it was convenient for passing a test to where it more logically belongs. Duplication must be removed.
6. Repeat
Starting with another new test, the cycle is then repeated to push forward the functionality. The size of the steps should always be small, with as few as 1 to 10 edits between each test run.
Test Structure
Effective layout of a test case ensures all required actions are completed, improves the readability of the test case, and smooths the flow of execution. Consistent structure helps in building a self-documenting test case. A commonly applied structure for test cases has (1) setup, (2) execution, (3) validation, and (4) cleanup, illustrated in the sketch after this list.
1. Setup: Put the Unit Under Test (UUT) or the overall test system in the state needed to run the test.
2. Execution: Trigger/drive the UUT to perform the target behavior and capture all output, such as return values and output parameters. This step is usually very simple.
3. Validation: Ensure the results of the test are correct. These results may include explicit outputs captured during execution or state changes in the UUT.
4. Cleanup: Restore the UUT and the overall test system to the pre-test state. This restoration permits another test to execute immediately after this one.
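As a minimal sketch of this four-phase structure, here is how it might look in Jasmine (the testing framework used later in this section); the jobs array stands in for a hypothetical unit under test:

describe('a job queue (illustrative)', function() {
  var jobs;

  // (1) Setup: put the unit under test in the state needed to run the test.
  beforeEach(function() {
    jobs = ['build', 'deploy'];
  });

  it('should hand out the first job that was queued', function() {
    // (2) Execution: trigger the target behavior and capture its output.
    var next = jobs.shift();

    // (3) Validation: ensure the captured results are correct.
    expect(next).toEqual('build');
  });

  // (4) Cleanup: restore the pre-test state so later tests start fresh.
  afterEach(function() {
    jobs = null;
  });
});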
Individual Best Practices
Individual best practices state that one should:
- Separate common set-up and teardown logic into test support services utilized by the appropriate test cases (see the sketch after this list).
- Keep each test oracle focused on only the results necessary to validate its test.
- Design time-related tests to allow tolerance for execution in non-real time operating systems. The common practice of allowing a 5-10 percent margin for late execution reduces the potential number of false negatives in test execution.
- Treat your test code with the same respect as your production code. It also must work correctly for both positive and negative cases, last a long time, and be readable and maintainable.
- Get together with your team and review your tests and test practices to share effective techniques and catch bad habits. It may be helpful to review this section during your discussion.
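As a sketch of the first two practices, here is what a shared test support service and a focused oracle might look like in Jasmine; the makeDefaultTriangle helper is hypothetical:

// Hypothetical test support service: common set-up lives in one place
// instead of being repeated in every test case.
function makeDefaultTriangle() {
  return { side1: 3, side2: 4, side3: 5 };
}

describe('Triangle (sketch)', function() {
  var triangle;

  beforeEach(function() {
    // Every test starts from the same known, pre-configured state.
    triangle = makeDefaultTriangle();
  });

  it('should store its first side', function() {
    // A focused oracle: validate only the result this test is about.
    expect(triangle.side1).toEqual(3);
  });
});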
Practices to Avoid, or "Anti-Patterns"
- Having test cases depend on system state manipulated from previously executed test cases (i.e., you should always start a unit test from a known and pre-configured state).
- Dependencies between test cases. A test suite where test cases are dependent upon each other is brittle and complex. Execution order should not be presumed. Basic refactoring of the initial test cases or structure of the UUT causes a spiral of increasingly pervasive impacts in associated tests.
- Interdependent tests can cause cascading false negatives. A failure in an early test case breaks a later test case even if no actual fault exists in the UUT, increasing defect analysis and debug efforts.
- Testing precise execution behavior timing or performance.
- Building "all-knowing oracles". An oracle that inspects more than necessary is more expensive and brittle over time. This very common error is dangerous because it causes a subtle but pervasive time sink across the complex project.
- Testing implementation details.
- Slow running tests.
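To make the "all-knowing oracle" anti-pattern concrete, here is a hedged Jasmine sketch; the classify helper is hypothetical and stands in for the unit under test:

describe('oracle focus (sketch)', function() {
  // Hypothetical stand-in for the unit under test.
  function classify() {
    return { sides: [3, 9, 22], type: 'not a triangle' };
  }

  it('reports a non-triangle (all-knowing oracle, avoid)', function() {
    var result = classify();
    // Inspecting everything couples the test to details it does not need,
    // so unrelated changes to the result's shape will break it.
    expect(result.sides).toEqual([3, 9, 22]);
    expect(result.type).toEqual('not a triangle');
  });

  it('reports a non-triangle (focused oracle, prefer)', function() {
    var result = classify();
    expect(result.type).toEqual('not a triangle');
  });
});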
Benefits
A 2005 study found that using TDD meant writing more tests and, in turn, programmers who wrote more tests tended to be more productive. Hypotheses relating to code quality and a more direct correlation between TDD and productivity were inconclusive.
Programmers using pure TDD on new ("greenfield") projects reported they only rarely felt the need to invoke a debugger. Used in conjunction with a version control system, when tests fail unexpectedly, reverting the code to the last version that passed all tests may often be more productive than debugging.
Test-driven development offers more than just simple validation of correctness, but can also drive the design of a program. By focusing on the test cases first, one must imagine how the functionality is used by clients (in the first case, the test cases). So, the programmer is concerned with the interface before the implementation. This benefit is complementary to design by contract as it approaches code through test cases rather than through mathematical assertions or preconceptions.
In the next couple of sections, we look at TDD in action using JavaScript.
There are many JavaScript testing frameworks; two of the most popular are Jasmine and Mocha. Let's take a look at a Jasmine setup.
Installation
First, run npm init in the terminal to create a package.json in the root of the project folder. Then run the commands below to install Jasmine and initialize it.
npm install jasmine-core@2.99.0 --save-dev
npm install jasmine@3.1.0 --save-dev
./node_modules/.bin/jasmine init
Add the below code to your package.json file.
package.json
...
"scripts": {
"test": "jasmine"
}
...
Jasmine is now ready for us to write tests, but we still need a test runner to execute them. Let's use Karma.
npm install karma@2.0.0 --save-dev
npm install karma-jasmine@1.1.1 --save-dev
This will tell Karma to launch Chrome.
npm install karma-chrome-launcher@2.2.0 --save-dev
These allow us to use Karma-specific commands in the terminal.
npm install karma-cli@1.0.1 -g
npm install karma-cli@1.0.1 --save-dev
This allows Karma to work with webpack if your project has a webpack setup.
npm install karma-webpack@2.0.13 --save-dev
If we're using jQuery, go ahead and install the Karma plugin for it.
npm install karma-jquery@0.2.2 --save-dev
This makes it so the test results are easier to read.
npm install karma-jasmine-html-reporter@0.2.2 --save-dev
We have to initialize Karma.
karma init
A series of prompts will appear. Go ahead and hit Enter on all of the prompts. We will enter the information in the generated karma.conf.js file.
karma.conf.js
const webpackConfig = require('./webpack.config.js');

module.exports = function(config) {
  config.set({
    basePath: '',
    frameworks: ['jquery-3.2.1', 'jasmine'],
    files: [
      'src/*.js',
      'spec/*spec.js'
    ],
    webpack: webpackConfig,
    exclude: [
    ],
    preprocessors: {
      'src/*.js': ['webpack'],
      'spec/*spec.js': ['webpack']
    },
    plugins: [
      'karma-jquery',
      'karma-webpack',
      'karma-jasmine',
      'karma-chrome-launcher',
      'karma-jasmine-html-reporter'
    ],
    reporters: ['progress', 'kjhtml'],
    port: 9876,
    colors: true,
    logLevel: config.LOG_INFO,
    autoWatch: true,
    browsers: ['Chrome'],
    singleRun: false,
    concurrency: Infinity
  })
}
To make the CLI test command point to Karma:
package.json
...
"scripts": {
"test": "./node_modules/karma/bin/karma start karma.conf.js"
},
...
Install the sourcemap loader to see the correct stack traces for errors, and add it to the preprocessors in karma.conf.js.
npm install karma-sourcemap-loader@0.3.7 --save-dev
karma.conf.js
...
preprocessors: {
  'src/*.js': ['webpack', 'sourcemap'],
  'spec/*spec.js': ['webpack', 'sourcemap']
},
...
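One caveat worth noting: because our karma.conf.js declares a plugins array explicitly, Karma only loads the plugins listed there. Assuming that config, register the new loader as well so the sourcemap preprocessor can be found:

karma.conf.js
...
plugins: [
  'karma-jquery',
  'karma-webpack',
  'karma-jasmine',
  'karma-chrome-launcher',
  'karma-jasmine-html-reporter',
  'karma-sourcemap-loader'
],
...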
Add the below to exclude eslint from checking the spec files for errors.
webpack.config.js
...
module.exports = {
  ...
  module: {
    rules: [
      ...
      {
        test: /\.js$/,
        exclude: [
          /node_modules/,
          /spec/
        ],
        loader: "eslint-loader"
      }
    ]
  }
};
And that's it for the setup. In the next section we take a look at writing specs (also known as tests).
Let's write our tests for an imaginary program called "Triangle Tracker". This program determines whether three provided lengths can form a triangle; by the triangle inequality, each side must be shorter than the sum of the other two. If the sides can create a triangle, the program determines whether the triangle is equilateral, isosceles, or scalene.
When we ran jasmine init earlier, a folder called "spec" was created for us. It is here that we write our spec tests.
Writing Specs
Create a spec file "triangle-spec.js" in the "spec" folder.
triangle-tracker/spec/triangle-spec.js
describe('Triangle', function() {
  it('should test whether a Triangle has three sides', function() {
    // Test content will go here.
  });
});
If eslint is used in the project, add the below code so that eslint won't throw errors for Jasmine syntax (or that of the other listed environments).
.eslintrc
...
"env": {
"browser": true,
"jquery": true,
"node": true,
"jasmine": true
},
...
Running npm test will show our test passing; that's because our test doesn't have any expectations yet. Let's add some.
triangle-tracker/spec/triangle-spec.js
describe('Triangle', function() {
  it('should test whether a Triangle has three sides', function() {
    var triangle = new Triangle(3, 4, 5);
    expect(triangle.side1).toEqual(3);
    expect(triangle.side2).toEqual(4);
    expect(triangle.side3).not.toEqual(6);
  });
});
The tests will fail because we need to create a Triangle constructor. In the root folder of the project, create a folder "src" and inside it, a file "triangle.js".
src/triangle.js
export function Triangle(side1, side2, side3) {
  this.side1 = side1;
  this.side2 = side2;
  this.side3 = side3;
}
spec/triangle-spec.js
import { Triangle } from './../src/triangle.js';
...
Run npm test. The spec passes. We continue writing tests this way: first confirming that each new test fails, then writing code to make it pass.
spec/triangle-spec.js
...
describe('Triangle', function() {
  ...
  it('should correctly determine whether three lengths can be made into a triangle', function() {
    var notTriangle = new Triangle(3, 9, 22);
    expect(notTriangle.checkType()).toEqual("not a triangle");
  });
});
The spec fails as expected. We need to write the method to make it pass.
src/triangle.js
...
Triangle.prototype.checkType = function() {
  // Each side must be shorter than the sum of the other two. Using >=
  // also rejects degenerate cases where one side exactly equals that sum.
  if ((this.side1 >= (this.side2 + this.side3)) ||
      (this.side2 >= (this.side1 + this.side3)) ||
      (this.side3 >= (this.side1 + this.side2))) {
    return "not a triangle";
  }
};
This spec and the previous passed spec should both pass now when we run npm test.
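To give a sense of the next cycle, here is one way a red-green step for the equilateral case might look. This is an illustrative sketch, not part of the original walkthrough; the spec wording and classification logic are assumptions.

spec/triangle-spec.js
...
it('should identify a triangle with three equal sides as equilateral', function() {
  var triangle = new Triangle(5, 5, 5);
  expect(triangle.checkType()).toEqual("equilateral");
});
...

After watching the new spec fail, we would extend checkType just enough to make it pass:

src/triangle.js
...
Triangle.prototype.checkType = function() {
  if ((this.side1 >= (this.side2 + this.side3)) ||
      (this.side2 >= (this.side1 + this.side3)) ||
      (this.side3 >= (this.side1 + this.side2))) {
    return "not a triangle";
  }
  // New: classify the simplest type first; isosceles and scalene
  // would follow in later cycles.
  if ((this.side1 === this.side2) && (this.side2 === this.side3)) {
    return "equilateral";
  }
};
...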
There is much more to testing, but for the sake of brevity, this concludes our walkthrough of test-driven development using JavaScript and Jasmine.