Testing :
- There is no assigned role (e.g. QA) who conducts the Test Cases; members of the Development Team are responsible for writing and executing them.
Acceptance Test Driven Development (ATDD)
- Acceptance Test Driven Development (ATDD) tests should be readable and focused on customers.
- Acceptance Test Driven Development (ATDD) is a test-first software development practice in which acceptance criteria for new functionality are created as automated tests.
- The initially failing tests are made to pass as development proceeds and the acceptance criteria are met.
- Acceptance Test Driven Development (ATDD) is the practice of expressing requirements as acceptance tests.
- Acceptance Test Driven Development (ATDD) focuses on capturing requirements in acceptance criteria and using them to drive the development.
- The Acceptance Test Driven Development (ATDD) technique encourages bringing the customer into the design phase.
- Acceptance Test Driven Development (ATDD) helps reduce defects in code.
- However, it does not eliminate defects completely.
- Advanced practices of test-driven development can lead to Acceptance Test Driven Development (ATDD) where the criteria specified by the customer are automated into acceptance tests, which then drive the traditional unit test-driven development (UTDD) process.
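As an illustration, here is a minimal, hypothetical sketch (Python, pytest style) of an acceptance criterion captured as an automated test before the feature exists; the story, the RegistrationService class and its methods are invented for this example only.

```python
# Hypothetical story: "A visitor can register with a valid email address."
# The acceptance criterion is captured as an automated test *before* the
# feature is implemented; it fails first, then passes once the code is written.

class RegistrationService:
    """Minimal implementation added after the acceptance test was written."""

    def __init__(self):
        self._users = set()

    def register(self, email: str) -> bool:
        # Acceptance criterion (assumed): only addresses containing '@' are accepted.
        if "@" not in email:
            return False
        self._users.add(email)
        return True

    def is_registered(self, email: str) -> bool:
        return email in self._users


def test_visitor_can_register_with_a_valid_email():
    # Given a new visitor
    service = RegistrationService()
    # When they register with a valid email address
    accepted = service.register("alice@example.com")
    # Then the registration is accepted and the user is known to the system
    assert accepted is True
    assert service.is_registered("alice@example.com")


def test_registration_is_rejected_for_an_invalid_email():
    service = RegistrationService()
    assert service.register("not-an-email") is False
```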
Automated Tests or not ?
- Automated Tests : Running automated tests in automated builds guarantees that every build is tested for bugs.
- If any bug in the code arises, it is better to detect and fix it as soon as it appears.
- Using automated tests in automated builds makes it possible to discover bugs in the shortest period, when it is easier to find the cause of the bug and fix it.
- This allows software developers to find and correct more errors before the application is released.
Automated Tests
- The following tests can be automated: Unit Testing (UT), Integration Testing, Component Test, System Test, Functional Testing, Functional Acceptance Test, User Acceptance Testing (UAT) and Non-functional Acceptance Test (Capacity, Security, Performance, etc.), etc.
- However, the following tests cannot be automated: Exploratory Testing, Usability Test, Showcase Test.
Behavior Driven Development (BDD)
- Behavior Driven Development (BDD) is an agile software development practice adding to Test Driven Development (TDD) the description of the desired functional behavior.
- Behavior Driven Development (BDD) focuses on the behavioral aspects of the system for customers and developers, but still practices writing tests before code.
- Behavior Driven Development (BDD) is a software development process which focuses on user and system interactions. Writing customer requirements as acceptance tests is called ATDD.
- Behavior Driven Development (BDD) uses a ubiquitous language that can be understood by the developers and stakeholders.
- When Behavior Driven Development (BDD) is adopted in a project, the technical nitty-gritty of the requirements and implementation is outlined in a business-oriented language.
BDD practice
- When you are developing via a BDD approach, the following practices are used internally :
- ATDD: Acceptance Test Driven Development
- TDD: Test Driven Development
- DSL: Domain Specific Language
- DDD: Domain Driven Design
- Behavior Driven Development (BDD) combines practices from Test Driven Development (TDD) and Domain-Driven Design (DDD) along with the collaboration of development team and domain experts.
- Behavior Driven Development (BDD) uses a simple Domain Specific Language (DSL) using natural language structure based on the ubiquitous language that can express the behaviors and the expected outcomes.
- Behavior Driven Development (BDD) does not have any formal requirements for exactly how user stories must be written down, but Gherkin is a widely used format for writing requirements in the Behavior Driven Development (BDD) approach (see the sketch after this list).
- Behavior Driven Development (BDD) is also referred to as Specification by Example and is a synthesis and refinement based on Test Driven Development (TDD) and Acceptance Test Driven Development (ATDD) practices.
- Behavior Driven Development (BDD) is a variation / extension of Test Driven Development (TDD) methodology, where the main focus is on:
- Behavioral specifications of the product or application (or its features).
- User and System Interactions.
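As a hedged illustration of the Gherkin format mentioned above, here is a hypothetical scenario together with a hand-written Python test that maps its Given/When/Then steps to code; BDD tools such as Cucumber or behave automate this mapping, and the Account class is invented for this sketch.

```python
# Gherkin-style scenario (plain text, as it might appear in a .feature file):
#
#   Feature: Account withdrawal
#     Scenario: Successful withdrawal within the balance
#       Given an account with a balance of 100
#       When the account owner withdraws 30
#       Then the remaining balance is 70
#
# Below, the same behaviour is expressed directly as a Given/When/Then test;
# this sketch does the step mapping by hand instead of using a BDD framework.

class Account:
    def __init__(self, balance: int):
        self.balance = balance

    def withdraw(self, amount: int) -> None:
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount


def test_successful_withdrawal_within_the_balance():
    # Given an account with a balance of 100
    account = Account(balance=100)
    # When the account owner withdraws 30
    account.withdraw(30)
    # Then the remaining balance is 70
    assert account.balance == 70
```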
BDD Advantages
- Behavior Driven Development (BDD) ensures single source of truth by merging specification and test documentation into a single document
- Behavior Driven Development (BDD) can derive concrete examples in a collaborative manner from the acceptance criteria defined for each story
- Behavior Driven Development (BDD) brings business, developers and testers together with a common language.
- Behavior Driven Development (BDD) acts as a living documentation
Black-box Testing (or Behavioral Testing)
- Black-box Testing (or Behavioral Testing) is a method of software testing that examines the functionality of an application without peering into its internal structures or workings.
- This method of test can be applied virtually to every level of software testing: unit, integration, system and acceptance.
- Black-box Testing (or Behavioral Testing) is a software testing method in which the internal structure/design/implementation of the item being tested is not known to the tester.
- Also, it implies testing, either functional or non-functional, without reference to the internal structure of the component or system.
- This method is named so because the software program, in the eyes of the tester, is like a black box; inside which one cannot see.
Domain Specific Language
- A Domain Specific Language (DSL) is a computer language specialized to a particular application domain.
- This is in contrast to a general-purpose language (GPL), which is broadly applicable across domains.
Dynamic Analysis
- Dynamic Analysis is executed while a program is in operation.
- A dynamic test will monitor system memory, functional behavior, response time, and overall performance of the system.
- On the other hand, Static Analysis is performed in a non-runtime environment.
- Typically, a static analysis tool will inspect program code for all possible run-time behaviors and seek out coding flaws, back doors, and potentially malicious code.
- Dynamic Analysis adopts the opposite approach to Static Analysis.
- Dynamic Analysis involves the testing and evaluation of a program based on its execution.
- Static and dynamic analysis, considered together, are sometimes referred to as Glass-Box Testing.
- Dynamic program analysis tools may require loading of special libraries or even recompilation of program code.
- Dynamic Analysis is capable of exposing a subtle flaw or vulnerability too complicated for static analysis alone to reveal and can also be the more expedient method of testing.
- A dynamic test will only find defects in the part of the code that is actually executed.
Exploratory Testing
- Exploratory Testing is all about discovery, investigation, and learning.
- It emphasizes personal freedom and responsibility of the individual tester.
- It is defined as a type of testing where test cases are not created in advance; instead, testers check the system on the fly.
- They may note down ideas about what to test before test execution.
- The focus of exploratory testing is more on testing as a “thinking” activity. Exploratory test cannot be automated.
- Attributes of Exploratory Testing :
- it involves minimum planning and maximum test execution
- it is unscripted testing
Feature Driven Development (FDD)
- Feature Driven Development (FDD) is an agile method that shares the core values common to all methodologies under the agile umbrella.
Integration Testing
- Individual units are combined and tested as a group in order to expose faults in the interaction of units.
- Integration Testing, also known as integration and testing (I&T), is a type of testing in which program units are combined and tested as groups in multiple ways. Integration test can be automated.
- Integration Testing is performed on modules that have already been unit tested; it then determines whether the combination of the modules gives the desired output or not.
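A minimal sketch, assuming two invented units (parse_order and total_price): each could be unit tested in isolation, while the integration test exercises them combined.

```python
# Two hypothetical units: a parser that turns a raw order line into items,
# and a calculator that prices the parsed items. Each would be unit tested
# in isolation; the integration test below exercises them combined.

def parse_order(raw: str) -> list[tuple[str, int]]:
    """Parse 'apple:2,pear:3' into [('apple', 2), ('pear', 3)]."""
    items = []
    for part in raw.split(","):
        name, quantity = part.split(":")
        items.append((name.strip(), int(quantity)))
    return items


def total_price(items: list[tuple[str, int]], prices: dict[str, float]) -> float:
    """Sum quantity * unit price for every parsed item."""
    return sum(quantity * prices[name] for name, quantity in items)


def test_parse_and_price_work_together():
    # Integration test: the output of parse_order is fed into total_price,
    # checking that the two units cooperate to give the desired result.
    prices = {"apple": 0.5, "pear": 0.75}
    items = parse_order("apple:2, pear:4")
    assert total_price(items, prices) == 2 * 0.5 + 4 * 0.75
```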
Functional Testing
- Functional Testing is a form of testing that deals with how an application functions.
- Software is tested to ensure that it conforms with all specified functional requirements.
- Some Functional Testing techniques include :
Functional Testing practice
- Traditionally, Functional Testing is implemented by a team of testers, independent of the developers. Functional tests can be automated.
Glass-Box Testing
- Static Analysis and Dynamic Analysis, considered together, are sometimes referred to as Glass-Box Testing.
Happy Path Testing (Sunny Day Testing)
- Happy Path testing is a type of software testing that uses known input and produces an expected output.
- Also referred to as golden-path or sunny-day testing, the happy-path approach is tightly scripted.
- The happy path does not duplicate real-world conditions and verifies only that the required functionality is in place and functions correctly.
- If valid alternatives exist, the happy path is then identified as the default scenario or the most likely positive alternative featuring no exceptional or error conditions.
- Happy Path testing (Sunny Day Testing) :
- In the context of software or information modeling, a happy path (sometimes called happy flow) is a default scenario featuring no exceptional or error conditions.
- For example, the happy path for a function validating credit card numbers would be where none of the validation rules raise an error, thus letting execution continue successfully to the end, generating a positive response.
- Process steps for a happy path are also used in the context of a use case. In contrast to the happy path, process steps for alternate paths and exception paths may also be documented.
- A Happy Path test is a well-defined test case using known input, which executes without exception and produces an expected output.
- Happy Path testing can show that a system meets its functional requirements but it doesn’t guarantee graceful handling of error conditions or aid in finding hidden bugs.
- Happy day (or sunny day) scenario and golden path are synonyms for happy path.
- In use case analysis, there is only one happy path, but there may be any number of additional alternate path scenarios which are all valid optional outcomes.
- If valid alternatives exist, the happy path is then identified as the default or most likely positive alternative.
- The analysis may also show one or more exception paths.
- An exception path is taken as the result of a fault condition.
- Use cases and the resulting interactions are commonly modeled in graphical languages such as the Unified Modeling Language or SysML.
Sad Path Testing (Unhappy Path, Rainy Day)
- There is no agreed name for the opposite of the happy path :
- they may be known as sad paths, bad paths, or exception paths.
- The term ‘unhappy path’ is gaining popularity as it suggests a complete opposite to ‘happy path’ and retains the same context.
- Usually there is no single extra ‘unhappy path’, which makes the term somewhat loose : the happy path reaches the very end, whereas an unhappy path is shorter, ends prematurely, and does not reach the desired end, i.e. not even the last page of a wizard.
- And in contrast to the single happy path, there are many different ways in which things can go wrong, so there is no single criterion to determine ‘the unhappy path’.
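Building on the credit-card example above, here is a minimal Python sketch with a deliberately simplified, hypothetical validation rule (16 digits only), contrasting a happy path test with an unhappy path test.

```python
# Hypothetical validator with deliberately simple rules, used only to
# contrast a happy-path test with an unhappy-path test.

def validate_card_number(number: str) -> None:
    """Raise ValueError if the card number does not satisfy the rules."""
    digits = number.replace(" ", "")
    if not digits.isdigit():
        raise ValueError("card number must contain digits only")
    if len(digits) != 16:
        raise ValueError("card number must contain exactly 16 digits")


def test_happy_path_valid_card_number_passes_all_rules():
    # Happy path: none of the validation rules raise an error and execution
    # continues successfully to the end of the function.
    validate_card_number("4111 1111 1111 1111")


def test_unhappy_path_non_numeric_input_is_rejected():
    # Unhappy path: execution ends prematurely with an error condition.
    try:
        validate_card_number("4111-1111-1111-111A")
    except ValueError:
        pass  # expected: the invalid input is rejected
    else:
        raise AssertionError("invalid card number was accepted")
```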
Performance Testing
- Performance Testing is the process of determining the speed or effectiveness of a computer, network, software program or device.
- Performance Testing needs many resources and is time-consuming.
- So, it should ideally be carried out just before deploying to production and in an environment that, as closely as possible, replicates the production environment in which the system will ultimately run.
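As a toy illustration only, the sketch below bounds the response time of a hypothetical operation using Python's time.perf_counter; the 0.5 second budget is an assumed requirement, and real performance testing relies on dedicated tools run against a production-like environment.

```python
import time


def process_batch(records):
    """Hypothetical operation whose response time we want to bound."""
    return [record.upper() for record in records]


def test_batch_processing_stays_within_the_time_budget():
    records = ["record-%d" % i for i in range(10_000)]
    started = time.perf_counter()
    process_batch(records)
    elapsed = time.perf_counter() - started
    # The 0.5 second budget is an assumed requirement for illustration only;
    # real performance tests use dedicated tooling and production-like setups.
    assert elapsed < 0.5
```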
Regression Testing
- Regression Testing is the process of testing changes to computer programs to make sure that the older programming still works with the new changes.
- Regression Testing :
- Whenever developers change or modify their software, even a small tweak can have unexpected consequences.
- Regression Testing is testing existing software applications to make sure that a change or addition hasn’t broken any existing functionality.
- Regression Testing purpose is to catch bugs that may have been accidentally introduced into a new build or release candidate, and to ensure that previously eradicated bugs continue to stay dead.
- By re-running testing scenarios that were originally scripted when known problems were first fixed, you can make sure that any new changes to an application haven’t resulted in a regression or caused components that formerly worked to fail.
- A Regression test can be automated.
- In Regression Testing, before a new version of the software is released, the old test cases are run against the new version to make sure that old capabilities still work.
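A minimal sketch, assuming a hypothetical defect fixed in an earlier release: the regression test pins the fix so the bug cannot silently return in a new build.

```python
# Hypothetical example: an earlier release had a bug where a 100% discount
# produced a negative total. The fix is pinned by a regression test that is
# re-run against every new build so the bug cannot come back unnoticed.

def apply_discount(total: float, percent: float) -> float:
    """Return the total after applying a percentage discount, never below 0."""
    discounted = total - total * (percent / 100.0)
    return max(discounted, 0.0)


def test_regression_full_discount_never_goes_negative():
    # Scripted when the original defect was fixed; kept in the suite forever.
    assert apply_discount(50.0, 100.0) == 0.0


def test_regression_existing_behaviour_still_works():
    # Confirms that the fix did not break the ordinary case.
    assert apply_discount(40.0, 25.0) == 30.0
```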
Sandbox Testing
- Sandbox Testing is a type of integration test.
- Sandbox Testing can be used for independent evaluation, monitoring or testing.
- Sandbox Testing is a type of software testing environment isolated from the production or live environment.
- It is also known as a test server or development server.
- A sandbox is a type of software testing environment that enables the isolated execution of software or programs from the production or live environment for independent evaluation, monitoring, or testing.
- In an implementation, a sandbox also may be known as a test server, development server, or working directory.
Smoke Testing
- Smoke Testing is a non-exhaustive set of tests that aim at ensuring that the most important functions work.
- Smoke Testing, or “Build Verification Testing”, is a type of software testing that includes a non-exhaustive set of tests that aim at ensuring that the most crucial and important functions work.
- The result of this testing is used to decide if a build is stable enough to proceed with further testing.
- Smoke Testing is the preliminary check of the software after a build and before a release.
- This type of testing finds basic and critical issues in an application before critical testing is implemented.
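Below is a minimal sketch of what a smoke suite might look like, assuming a hypothetical application factory create_app; only the most crucial behaviours are checked, and the result decides whether the build is stable enough for further testing.

```python
# Hypothetical application and factory; the smoke suite only checks that the
# most crucial functions respond at all, not that every detail is correct.

class Application:
    def __init__(self):
        self._started = True

    def is_running(self) -> bool:
        return self._started

    def homepage(self) -> str:
        return "<html>Welcome</html>"


def create_app() -> Application:
    return Application()


def test_smoke_application_starts():
    # Build verification: the build is only promoted to deeper testing
    # if these basic checks pass.
    app = create_app()
    assert app.is_running()


def test_smoke_homepage_is_served():
    app = create_app()
    assert "Welcome" in app.homepage()
```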
Smoke Testing benefits
- Exposes integration issues
- Uncovers problems early
- Provides a level of confidence that changes to the software do not have adverse effects
Smoke Testing origin
- The term Smoke Testing originates from a similarly basic type of hardware testing in which a device passes the test if it does not catch fire the first time it turns on.
Spike Testing
- Typically, a “Spike Test” involves gathering additional information or testing for easily reproduced edge cases.
Static Analysis definition
- Static Analysis, also called static code analysis, is a method of computer program debugging that is done by examining the code without executing the program.
- Static code analysis is a method of debugging by examining source code before a program is run.
- It’s done by analyzing a set of code against a set (or multiple sets) of coding rules.
Static Analysis purpose
- One of the primary uses of static analyzers is to comply with coding standards.
- So, if you’re in a regulated industry that requires a coding standard, you’ll want to make sure your tool supports that standard.
Static Analysis
- Static Analysis is performed in a non-runtime environment.
- Typically, a static analysis tool will inspect program code for all possible run-time behaviors and seek out coding flaws, back doors, and potentially malicious code.
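As a toy illustration of the idea, the sketch below uses Python's standard ast module to flag bare except: clauses by inspecting the syntax tree, without ever executing the inspected code; real static analysis tools apply far larger rule sets.

```python
import ast

# A small program under analysis, held as a string; it is parsed but never run.
SOURCE = """
def load(path):
    try:
        return open(path).read()
    except:            # a bare 'except' that a static analyzer would flag
        return None
"""


def find_bare_excepts(source: str) -> list[int]:
    """Return line numbers of bare 'except:' clauses, without running the code."""
    tree = ast.parse(source)
    return [
        node.lineno
        for node in ast.walk(tree)
        if isinstance(node, ast.ExceptHandler) and node.type is None
    ]


if __name__ == "__main__":
    # Only the syntax tree is examined: this is static, not dynamic, analysis.
    print("Bare except clauses on lines:", find_bare_excepts(SOURCE))
```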
Static Analysis time
- Static code analysis is performed early in development, before software testing begins.
- For organizations practicing DevOps, static code analysis takes place during the “Create” phase.
Static Analysis tools benefits
- The best static code analysis tools offer speed, depth, and accuracy.
- Speed :
- It takes time for developers to do manual code reviews.
- Automated tools are much faster.
- Static code checking addresses problems early on.
- And it pinpoints exactly where the error is in the code.
- So, you’ll be able to fix those errors faster.
- Plus, coding errors found earlier are less costly to fix.
- Depth :
- Testing can’t cover every possible code execution path.
- But a static code analyzer can.
- It checks the code as you work on your build.
- You’ll get an in-depth analysis of where there might be potential problems in your code, based on the rules you’ve applied.
- Accuracy :
- Manual code reviews are prone to human error; automated tools are not.
- They scan every line of code to identify potential problems.
- This helps you ensure the highest-quality code is in place — before testing begins. After all, when you’re complying with a coding standard, quality is critical.
Test Double
- Test Double is a generic term for any case where you replace a production object for testing purposes.
- There are at least five types of Test Doubles :
- Test Stub,
- Mock Object,
- Test Spy,
- Fake Object, and
- Dummy Object with some differences.
- See Mocks, fakes, and stubs
- A Test Double is used to resolve dependencies.
Sketch, Wireframe, Mockup and Prototype
- In order to avoid building the wrong product or having rework, it is valuable to use approaches that build a better understanding of customer requirements.
- These approaches are Sketch, Wireframe, Mockup and Prototype.
- In addition, all of them occur before implementing any code.
- Sketch, Wireframe, Mockup and Prototypes actually represent the different stages of the design flow.
- They start from low-fidelity and end with high-fidelity respectively.
- The sketch is the simplest way to present an idea or initiative; it can even be drawn on a piece of paper and has the minimum level of fidelity.
- Wireframe, a low-fidelity way to present a product, can efficiently outline structures and layouts.
- A mockup looks more like a finished product or prototype, but it is not interactive and not clickable.
- However, a prototype has a maximum level of fidelity and is interactive and clickable.
Mocks, fakes, and stubs
- Classification between mocks, fakes, and stubs is highly inconsistent across the literature.
- Consistent among the literature, though, is that they all represent a production object in a testing environment by exposing the same interface.
- Which out of mock, fake, or stub is the simplest is inconsistent, but the simplest always returns pre-arranged responses (as in a method stub).
- On the other side of the spectrum, the most complex object will fully simulate a production object with complete logic, exceptions, etc.
- Whether or not any of the mock, fake, or stub trio fits such a definition is, again, inconsistent across the literature. For example, a mock, fake, or stub method implementation between the two ends of the complexity spectrum might contain assertions to examine the context of each call.
- For example, a mock object might assert the order in which its methods are called, or assert consistency of data across method calls. In the book “The Art of Unit Testing” mocks are described as a fake object that helps decide whether a test failed or passed by verifying whether an interaction with an object occurred.
- Everything else is defined as a stub. In that book, fakes are anything that is not real, which, based on their usage, can be either stubs or mocks.
Mock Object
- In object-oriented programming, Mock objects are simulated objects that mimic the behavior of real objects in controlled ways, most often as part of a software testing initiative.
- A programmer typically creates a mock object to test the behavior of some other object, in much the same way that a car designer uses a crash test dummy to simulate the dynamic behavior of a human in vehicle impacts.
- The technique is also applicable in generic programming.
- Mock objects are used to simulate the behavior of a given object in order to cope with dependencies and isolate the system under test for controlled testing.
- Note that it is possible to use the Test Driven Development (TDD) approach without mock objects.
- Mock objects are simulated objects that mimic the behavior of dependent real objects in controlled ways.
- A mock object is a type of Test Double.
- There are at least five types of Test Doubles : Test Stub, Mock Object, Test Spy, Fake Object, and Dummy Object with some differences.
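A minimal sketch using Python's standard unittest.mock: the hypothetical OrderService depends on an external payment gateway, which is replaced by a test double used first as a stub (pre-arranged response) and then as a mock (the test verifies the interaction).

```python
from unittest.mock import Mock

# Hypothetical system under test: an order service that depends on an
# external payment gateway. The real gateway is replaced by a test double
# so the test stays fast, isolated and deterministic.

class OrderService:
    def __init__(self, gateway):
        self._gateway = gateway

    def place_order(self, amount: float) -> str:
        if not self._gateway.charge(amount):   # call to the dependency
            return "payment declined"
        return "order placed"


def test_order_is_placed_when_the_charge_succeeds():
    # Used as a *stub*: it only supplies a pre-arranged response.
    gateway = Mock()
    gateway.charge.return_value = True

    assert OrderService(gateway).place_order(42.0) == "order placed"


def test_the_gateway_is_charged_exactly_once_with_the_order_amount():
    # Used as a *mock*: the test passes or fails based on the interaction.
    gateway = Mock()
    gateway.charge.return_value = True

    OrderService(gateway).place_order(42.0)

    gateway.charge.assert_called_once_with(42.0)
```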
Test Driven Development (TDD)
- Test Driven Development (TDD) is one of the most important concepts in Agile, so we explain it in detail below.
- Test Driven Development (TDD) is a test-first software development practice in which test cases are defined and created first, and executable code is created to make the test pass.
- Test Driven Development (TDD) is a predictable, incremental and emergent software development approach / technique which relies on Automated Tests.
TDD benefits
- Test Driven Development (TDD) makes team collaboration easier and more efficient :
- Team members can edit each other’s code with confidence because the tests will inform them if the changes are making the code behave in unexpected ways.
- It helps to clarify requirements :
- It helps to clarify requirements because you have to figure out concretely what inputs you have to feed and what outputs you expect.
- It forces good architecture :
- Test Driven Development (TDD) also forces good architecture.
- In order to make your code unit‐testable, it must be properly modularized.
- When the tests are written first, various architectural problems tend to surface earlier.
- It improves the design :
- Test Driven Development (TDD) encourages small steps and improves the design because it makes you cut the unnecessary dependencies to facilitate the setup.
- It promotes good design and separation of concerns.
- It speeds the overall development process.
- It causes you to construct a test harness that can be automated :
- The Test exists before the code is written thus making it act as a requirement.
- As soon as the Test is passed, the requirement is met.
- It forces your code be more modular :
- Because you are writing small tests at a time, it forces your code to be more modular (otherwise they’d be hard to test against).
- Test Driven Development (TDD) helps you learn, understand, and internalize the key principles of good modular design.
- It forces you to try to make your interfaces clean enough to be tested :
- Testing while writing also forces you to try to make your interfaces clean enough to be tested.
- It’s sometimes hard to see the advantage of this until you work on a body of code where it wasn’t done, and the only way to exercise and focus on a given piece of code is to run the whole system and set a break‐point.
- It helps prevent defects – well, at least it helps you find design or requirement issues right at the beginning.
- Test Driven Development (TDD) provides early warning to design problems (when they are easier to fix).
- It improves quality and reduces bugs :
- Test Driven Development (TDD) helps reduce defects in code.
- However, it does not eliminate defects completely.
- It helps programmers really understand their code
- It forces good code documentation :
- Tests document your code better than written documentation (they don’t go out of date since you run them all the time).
- It creates an automated regression test suite :
- The regression test suite comes basically for free, i.e. you don’t need to spend time afterward writing unit tests to test the implementation code.
- It helps find stupid mistakes earlier :
- Stupid mistakes are caught almost immediately.
- It helps developers find mistakes that would waste everyone’s time if they were found in QA.
- Refactoring of code becomes easier and faster :
- Makes code easier to maintain and refactor.
- Test Driven Development (TDD) helps to provide clarity during the implementation process and provides a safety net when you want to refactor the code you have just written.
- Because Test Driven Development (TDD) essentially forces you to write unit tests before writing implementation code, refactoring of code becomes easier and faster.
- Refactoring code written two years ago is hard.
- If that code is backed up by a set of good unit tests, the process is made so much easier.
- It facilitates maintenance :
- Unit Tests are especially valuable as a safety net when the code needs to be changed to either add new features or fix an existing bug.
- Since maintenance accounts for between 60 and 90% of the software life cycle, it’s hard to overstate how the time taken upfront to create a decent set of unit tests can pay for itself over and over again over the lifetime of the project.
Test Driven Development (TDD) practice
- The simple concept of Test Driven Development (TDD) is to write and correct the failed tests before writing new code (before development).
- This helps to avoid duplication of code as we write a small amount of code at a time in order to pass tests.
- Tests are nothing but requirement conditions that we need to test to fulfill them.
- Test Driven Development (TDD) is a developer-centric approach, involving collaboration between developer and tester, to create well-written units of code (module, class, function).
- Test Driven Development (TDD) or Test-First Development is a development process that has three steps :
- “Test-Driven Development” refers to a style of programming in which three activities are tightly interwoven :
- coding,
- testing (in the form of writing unit tests) and
- design (in the form of refactoring).
- Test Driven Development (TDD) steps :
- Step 1 – Write a test and run it to fail (Write a “single” unit test describing an aspect of the program, run the test, which should fail because the program lacks that feature)
- Step 2 – Write just enough code to pass the test (write “just enough” code, the simplest possible, to make the test pass)
- Step 3 – Refactor the written code (Refactor the code until it conforms to the simplicity criteria). Then repeat, “accumulating” unit tests over time.
- Another metaphor for Test Driven Development (TDD) is : Red – Green – Refactor.
- Test Driven Development (TDD) does not test the existing test cases / software before developing new functionality.
- It only covers the test cases written for the new functionality that needs to be developed.
- Test Driven Development (TDD) is a technique where Developers develop a test case for each desired behavior of a unit of work and then extend the implementation to reflect this behavior.
- It can help to write cleaner code by emphasizing refactoring and it will decrease the risk of bugs.
- But the practice itself will not guarantee these outcomes, it still has to be applied correctly and needs skilled developers to achieve good results.
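To make the Red – Green – Refactor cycle concrete, here is a minimal sketch around a hypothetical is_leap_year function; the comments mark the phases, and in practice the failing test would be written and run before any implementation exists.

```python
# One Red-Green-Refactor cycle for a hypothetical leap-year function.
#
# RED:      the tests below are written first and fail because is_leap_year
#           does not exist yet.
# GREEN:    the simplest implementation that makes the tests pass is added.
# REFACTOR: the implementation is cleaned up while the tests keep passing.

def is_leap_year(year: int) -> bool:
    # Refactored form: divisible by 4, except centuries not divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)


def test_years_divisible_by_four_are_leap_years():
    assert is_leap_year(2024) is True


def test_centuries_are_not_leap_years_unless_divisible_by_400():
    assert is_leap_year(1900) is False
    assert is_leap_year(2000) is True
```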
Test first Development (TFD)
- Test First Development (TFD) is designing tests before satisfying them.
- Test First Development (TFD) is an approach to development in which Developers do not write a single line of code until they have created the test cases needed to prove that unit of work solves the business problem and is technically correct at a unit-test level.
- In a response to a question on Quora, Kent Beck described reading about developers using a test-first approach well before XP and Agile.
- Test Driven Development (TDD) is test-first development combined with design and code refactoring.
- Both test-first and test-driven development are useful for improving quality, morale and trust, and even though they are related, they are not the same.
- Test First Development (TFD) is an evolutionary approach to programming where agile software Developers must first write a test that fails before they write new functional code.
- Test First Development (TFD), often used interchangeably with Test Driven Development (TDD), is a development style in which you write the Unit Tests before you write the code to test.
Test-First Development advantages
- It promotes good design and separation of concerns.
- It improves quality and reduces bugs.
- It causes you to construct a test harness that can be automated.
- It speeds the overall development process.
- It reduces the re-work developers would have to do and gives them the courage to refactor.
User Acceptance Testing (UAT)
- With User Acceptance Testing (UAT), we’re not just testing if a feature works, we’re testing if it works for the end user.
- User acceptance testing verifies the user-facing functionality of a software product in real-world scenarios.
- Each user acceptance test reflects the description of a functionality in the software requirements.
- Scope-wise, User Acceptance Testing (UAT) strives for a comprehensive coverage of the product in its entirety.
- This is one of the factors making the task of automating the acceptance testing so difficult.
- Process-wise, User Acceptance Testing (UAT) follows system testing.
- As mentioned earlier, User Acceptance Testing (UAT) is the final stage of testing before the software goes live.
- Running User Acceptance Tests only makes sense after you’ve identified and fixed all major defects during unit and system testing.
- Automated User Acceptance Testing (UAT) can be a part of regression testing where teams rerun UAT suites before major releases.
- Handwritten User Acceptance Tests are unproductive
- The tests written for User Acceptance Testing (UAT) essentially provide a second layer of coverage on top of what Unit Tests and integration tests already cover.
- Basically, we’re talking about 200% test coverage :
- 100% for Unit and Integration Tests, and an additional 100% for User Acceptance Testing (UAT).
- Writing this much test code is way too time consuming.
Unit Testing (UT)
- Unit Testing (UT) is the practice of testing certain functions and areas of code, or individual units of source code.
- Unit Test is a test that isolates and verifies individual units of source code.
- A Unit Test is a way of testing a unit (the smallest piece of code) that can be logically isolated in a system.
- In most programming languages, that is a function, a subroutine, a method or property.
- A Unit Test can be automated.
Unit Testing benefits
- Identify Failures to improve quality.
- Produces code that is easy to test
- Prevent future changes from breaking functionality
Unit Testing practice
- Unit Testing (UT) is performed by Developers.
- Code in each Unit test should be as small as possible while maintaining readability of the code.
- A unit test is a low-level test focusing on a small part of a software system, which can be executed fast and in isolation.
- The definition and boundaries of a ‘unit’ generally depend on the context and are to be agreed upon by Developers.
Unit Testing characteristics
- Code in each test is as small as possible while maintaining readability of the code.
- Each test is independent of other unit tests.
- They do not exercise the persistence layer of a solution.
- Each test makes assertions about only one logical concept.
Good Unit Test
- A unit test is a separated and isolated test that validates a unit of functionality.
- A good unit test should have the following characteristics :
- Does not depend on the environment; e.g. it will run on your computer and it will run on your colleague’s computer
- Does not depend on other unit tests
- Does not depend on external data
- Does not have side effects
- Asserts the results of code
- Tests a single unit of work (mostly a method)
- Covers all the paths of the code under test
- Tests and asserts edge cases and different ranges of data
- Runs fast
- Is well‐factored and as small as possible
Good Unit Test Attributes
- Asserts the results of code
- Is well‐factored and as small as possible
- Does not have side effects
- Tests and asserts edge cases and different ranges of data
- Runs fast
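A minimal sketch of a unit test that matches several of the characteristics above (no environment dependence, no side effects, one logical concept per test, edge cases asserted, fast); the clamp function is invented for this example.

```python
# The unit under test is a small, pure function, so the tests need no
# environment, no external data, and have no side effects.

def clamp(value: int, lowest: int, highest: int) -> int:
    """Constrain value to the inclusive range [lowest, highest]."""
    return max(lowest, min(value, highest))


def test_clamp_returns_value_inside_the_range_unchanged():
    # One logical concept per test, asserting the result of the code.
    assert clamp(5, 0, 10) == 5


def test_clamp_handles_the_edge_cases_at_both_bounds():
    # Edge cases and different ranges of data are asserted explicitly.
    assert clamp(-3, 0, 10) == 0
    assert clamp(99, 0, 10) == 10
    assert clamp(0, 0, 10) == 0
    assert clamp(10, 0, 10) == 10
```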
White-Box Testing
- White-Box Testing is a software testing method in which the internal structure/design/implementation of the item being tested is known to the tester.
- The tester chooses inputs to exercise paths through the code and determines the appropriate outputs.
- Also, it implies testing based on an analysis of the internal structure of the component or system.
- This method is named so because the software program, in the eyes of the tester, is like a white/transparent box; inside which one clearly sees.
- White-Box Testing :
- White Box testing, also known as Clear Box Testing, Open Box Testing, Glass-Box Testing, Transparent Box Testing, Code-Based Testing or Structural Testing, is a software testing method in which the internal structure/design/implementation of the item being tested is known to the tester.
See Scrum Testing and Practices
More information for the Scrum PSD certification here.
Updated : 01/10/2021