25 September 2021

Scrum and Testing

Testing :

  • There is no assigned Role (e.g. QA) who conducts the Test Cases; members of the Development Team are responsible for writing and executing the Test Cases.

Acceptance Test Driven Development (ATDD) :

  • ATDD Tests should be readable and focused for customers.
  • ATDD is a test-first software development practice in which acceptance criteria for new functionality are created as automated tests. The failing tests are constructed to pass as development proceeds and acceptance criteria are met.
  • Acceptance Test Driven development is the practice of expressing requirements as acceptance tests.
  • ATDD focuses on capturing requirements as acceptance criteria and using them to drive development. The ATDD technique encourages bringing the customer into the design phase.
  • ATDD helps reduce defects in code. However, it does not eliminate defects completely.
  • Advanced practices of test-driven development can lead to Acceptance Test-driven development (ATDD) where the criteria specified by the customer are automated into acceptance tests, which then drive the traditional unit test-driven development (UTDD) process.
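
As a minimal sketch of the ATDD cycle, the acceptance criterion below is captured as an automated test before the feature exists; the `apply_discount` function and the 10%-over-100 rule are hypothetical examples, not taken from any real project.

```python
# ATDD sketch: the acceptance criterion is written as an automated
# test *first*; the implementation is then written to make it pass.
# Names (apply_discount) and the discount rule are hypothetical.

def apply_discount(total):
    """Implementation written afterwards, just enough to satisfy the test."""
    return total * 0.9 if total > 100 else total

def test_orders_over_100_get_10_percent_discount():
    # Acceptance criterion agreed with the customer:
    # "Orders over 100 receive a 10% discount."
    assert apply_discount(200) == 180
    assert apply_discount(100) == 100  # boundary: no discount at exactly 100

test_orders_over_100_get_10_percent_discount()
```

When the test passes, the acceptance criterion is met; as development proceeds, more criteria are added as further failing tests.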

Automated Tests or not ?

  • Automated Tests : Running automated tests in automated builds guarantees that every build is tested for bugs. If a bug arises in the code, it is better to detect and fix it as soon as it appears. Using automated tests in automated builds makes it possible to discover bugs in the shortest period, when it is easier to find the reason for the bug and fix it. This allows software developers to find and correct more errors before the application is released.

Automated Tests :

  • The following tests can be automated: Unit Test, Integration Test, Component Test, System Test, Functional Acceptance Test, User Acceptance Test and Non-functional Acceptance Test (Capacity, Security, Performance, etc.), etc.
  • However, the following tests cannot be automated: Exploratory Test, Usability Test, Showcase Test.

Behavior Driven Development (BDD) :

  • BDD is an agile software development practice adding to TDD the description of the desired functional behavior.
  • Behavior Driven Development focuses on the behavioral aspect of the system for customers and developers, but still practices writing tests before code.
  • Behavior-Driven Development (BDD) is a software development process which focuses on user and system interactions. Writing customer requirements as acceptance tests is called ATDD.
  • Behavior Driven Development uses a ubiquitous language that can be understood by both developers and stakeholders. When Behavior Driven Development is adopted in a project, the technical nitty-gritty aspects of the requirements and implementation are outlined in a business-oriented language.

BDD practice :

  • When you are developing via a BDD approach, the following are used internally:
    ATDD: Acceptance Test Driven Development
    TDD: Test Driven Development
    DSL: Domain Specific Language
    DDD: Domain Driven Design
  • BDD combines practices from Test-Driven Development (TDD) and Domain-Driven Design (DDD), along with collaboration between the development team and domain experts. BDD uses a simple Domain-Specific Language (DSL) with a natural-language structure, based on the ubiquitous language, that can express the behaviors and the expected outcomes. Note that BDD does not have any formal requirements for exactly how user stories must be written down, but Gherkin is a widely used format for writing down requirements in the BDD approach. BDD is also referred to as Specification by Example and is a synthesis and refinement of TDD and Acceptance Test-Driven Development (ATDD) practices.
  • Behavior Driven Development is a variation / extension of Test-Driven Development methodology, where the main focus is on:
    1) Behavioral specifications of the product or application (or its features).
    2) User and System Interactions.
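
The behavioral focus above can be sketched in plain Python, mirroring a Gherkin-style Given / When / Then scenario; the `Account` class and the withdrawal scenario are hypothetical illustrations, not a real BDD framework.

```python
# BDD-style sketch: the test reads like a Gherkin scenario,
# "Given / When / Then", expressed here as plain Python.
# The Account example and its names are hypothetical.

class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

def test_successful_withdrawal():
    # Given an account with a balance of 100
    account = Account(balance=100)
    # When the user withdraws 30
    account.withdraw(30)
    # Then the remaining balance is 70
    assert account.balance == 70

test_successful_withdrawal()
```

In a real BDD tool the Given / When / Then text lives in a feature file readable by stakeholders, and each line is bound to a step implementation like the statements above.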

BDD Advantages :

  • 1. BDD ensures a single source of truth by merging specification and test documentation into a single document.
  • 2. BDD can derive concrete examples in a collaborative manner from the acceptance criteria defined for each story.
  • 3. BDD brings business, developers and testers together with a common language.
  • 4. BDD acts as living documentation.

Black-box Testing (or Behavioral Testing):

  • Black-box testing is a method of software testing that examines the functionality of an application without peering into its internal structures or workings. This method of test can be applied virtually to every level of software testing: unit, integration, system and acceptance.
  • Black-box testing is a software testing method in which the internal structure/design/implementation of the item being tested is not known to the tester. Also, it implies testing, either functional or non-functional, without reference to the internal structure of the component or system. This method is named so because the software program, in the eyes of the tester, is like a black box; inside which one cannot see.
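
A minimal black-box sketch: the test below exercises only the public contract of a function (inputs and expected outputs), with no reference to its internal structure; the `slugify` function and its specification are hypothetical.

```python
# Black-box sketch: the tester knows only the specification
# ("titles become lowercase words joined by hyphens"), not the
# implementation. The slugify example is hypothetical.

def slugify(title):
    # Internal structure: invisible to the black-box tester.
    return "-".join(title.lower().split())

# Tests derived purely from the external specification:
assert slugify("Hello World") == "hello-world"
assert slugify("Scrum And Testing") == "scrum-and-testing"
```

The same inputs and expected outputs would remain valid if the implementation were completely rewritten, which is the defining property of a black-box test.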

Domain Specific Language :

  • A domain-specific language (DSL) is a computer language specialized to a particular application domain. This is in contrast to a general-purpose language (GPL), which is broadly applicable across domains.

Dynamic Analysis :

  • Dynamic Analysis is executed while a program is in operation. A dynamic test will monitor system memory, functional behavior, response time, and overall performance of the system. On the other hand, Static Analysis is performed in a non-runtime environment. Typically, a static analysis tool will inspect program code for all possible run-time behaviors and seek out coding flaws, back doors, and potentially malicious code.
  • Dynamic Analysis adopts the opposite approach of Static Analysis and is executed while a program is in operation. A dynamic test will monitor system memory, functional behavior, response time, and overall performance of the system.
  • Dynamic analysis involves the testing and evaluation of a program based on its execution. Static and dynamic analysis, considered together, are sometimes referred to as glass-box testing. Dynamic program analysis tools may require loading of special libraries or even recompilation of program code. Dynamic analysis is capable of exposing a subtle flaw or vulnerability too complicated for static analysis alone to reveal and can also be the more expedient method of testing. A dynamic test will only find defects in the part of the code that is actually executed.

Exploratory Testing :

  • Exploratory Test: Exploratory testing is all about discovery, investigation, and learning. It emphasizes the personal freedom and responsibility of the individual tester. It is defined as a type of testing where test cases are not created in advance; instead, testers check the system on the fly. They may note down ideas about what to test before test execution. The focus of exploratory testing is more on testing as a “thinking” activity. Exploratory tests cannot be automated.
  • Attributes of Exploratory Testing :
    it involves minimum planning and maximum test execution
    it is unscripted testing

Feature Driven Development (FDD) :

  • Feature-Driven Development (FDD) is an iterative and incremental development process organized around delivering features, and it shares the core Agile values with the other methodologies under the Agile umbrella.

Integration Testing :

  • Individual units are combined and tested as a group in order to expose faults in the interaction of units.
  • Integration Test: Integration testing, also known as integration and testing (I&T), is a type of testing in which program units are combined and tested as groups in multiple ways. Integration test can be automated.
  • Integration testing is performed on modules that have been unit tested first; it then verifies whether the combination of the modules gives the desired output or not.

Functional Testing :

  • Functional Test: Functional testing is a form of testing that deals with how an application functions.
  • Software is tested to ensure that it conforms with all specified functional requirements.
  • Some functional testing techniques include:
    1. Smoke Testing
    2. White Box Testing
    3. Black Box Testing
    4. Unit Testing
    5. Acceptance Testing

Functional Testing practice:

  • Traditionally, functional testing is implemented by a team of testers, independent of the developers. Functional tests can be automated.

Glass-Box Testing :

  • Static and dynamic analysis, considered together, are sometimes referred to as glass-box testing.

Happy Path Testing :

  • Happy-path testing is a type of software testing that uses known input and produces an expected output. Also referred to as golden-path or sunny-day testing, the happy-path approach is tightly scripted. The happy path does not duplicate real-world conditions and verifies only that the required functionality is in place and functions correctly. If valid alternatives exist, the happy path is then identified as the default scenario or the most likely positive alternative featuring no exceptional or error conditions.
  • Happy Path (Sunny Day) : In the context of software or information modeling, a happy path (sometimes called happy flow) is a default scenario featuring no exceptional or error conditions. For example, the happy path for a function validating credit card numbers would be where none of the validation rules raise an error, thus letting execution continue successfully to the end, generating a positive response.
  • Process steps for a happy path are also used in the context of a use case. In contrast to the happy path, process steps for alternate paths and exception paths may also be documented.
  • Happy path test is a well-defined test case using known input, which executes without exception and produces an expected output. Happy path testing can show that a system meets its functional requirements but it doesn’t guarantee a graceful handling of error conditions or aid in finding hidden bugs.
  • Happy day (or sunny day) scenario and golden path are synonyms for happy path.
  • In use case analysis, there is only one happy path, but there may be any number of additional alternate path scenarios which are all valid optional outcomes. If valid alternatives exist, the happy path is then identified as the default or most likely positive alternative. The analysis may also show one or more exception paths. An exception path is taken as the result of a fault condition. Use cases and the resulting interactions are commonly modeled in graphical languages such as the Unified Modeling Language or SysML.

Sad Path Testing (Unhappy Path, Rainy Day) :

  • There is no agreed name for the opposite of happy paths: they may be known as sad paths, bad paths, or exception paths. The term ‘unhappy path’ is gaining popularity, as it suggests a complete opposite to ‘happy path’ and retains the same context. Usually there is no single extra ‘unhappy path’: the happy path reaches the very end, while an unhappy path is shorter, ends prematurely, and does not reach the desired end (not even the last page of a wizard, for example). And in contrast to the single happy path, there are many different ways in which things can go wrong, so there is no single criterion to determine ‘the’ unhappy path.
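
The credit-card validation example can be sketched as a pair of tests: one happy path with valid, well-formed input, and one sad path that exercises an error condition. The Luhn check below is the standard card-number checksum algorithm; the function name and error handling are hypothetical choices.

```python
# Happy-path vs. sad-path sketch using card-number validation.
# The Luhn checksum is a standard algorithm; luhn_valid is a
# hypothetical wrapper for illustration.

def luhn_valid(number: str) -> bool:
    if not number.isdigit():
        raise ValueError("card number must contain digits only")
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# Happy path: valid input flows through to a positive response.
assert luhn_valid("79927398713") is True

# Sad path: malformed input triggers the error-handling branch.
try:
    luhn_valid("79927A98713")
    raise AssertionError("expected ValueError")
except ValueError:
    pass
```

The happy path alone shows the required functionality is in place; only the sad path shows that error conditions are handled gracefully.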

Performance Testing :

  • Performance Testing is the process of determining the speed or effectiveness of a computer, network, software program or device.
  • Performance testing needs many resources and is time-consuming. So, it should ideally be carried out just before deploying to production and in an environment that, as closely as possible, replicates the production environment in which the system will ultimately run.

Regression Testing :

  • Regression Testing is the process of testing changes to computer programs to make sure that the older programming still works with the new changes.
  • Regression Test: Whenever developers change or modify their software, even a small tweak can have unexpected consequences. Regression testing is testing existing software applications to make sure that a change or addition hasn’t broken any existing functionality.
  • Regression Testing purpose is to catch bugs that may have been accidentally introduced into a new build or release candidate, and to ensure that previously eradicated bugs continue to stay dead. By re-running testing scenarios that were originally scripted when known problems were first fixed, you can make sure that any new changes to an application haven’t resulted in a regression or caused components that formerly worked to fail. A Regression test can be automated.
  • In regression testing, before a new version of software is released, the old test cases are run against the new version to make sure that old capabilities still work.
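
A minimal regression-test sketch: once a bug is fixed, the test that reproduces it is kept and re-run against every new version so the bug cannot silently return; the `parse_price` function and the bug number are hypothetical.

```python
# Regression-test sketch: test_regression_bug_123 was written when
# the bug was first fixed and is re-run on every build thereafter.
# parse_price and "bug #123" are hypothetical examples.

def parse_price(text):
    # Bug #123 (fixed): inputs with a currency symbol used to crash.
    return float(text.replace("$", "").strip())

def test_regression_bug_123_currency_symbol():
    # Pins the old fix: a previously eradicated bug must stay dead.
    assert parse_price("$19.99") == 19.99

def test_existing_behaviour_still_works():
    # Guards the original capability against new changes.
    assert parse_price("5") == 5.0

test_regression_bug_123_currency_symbol()
test_existing_behaviour_still_works()
```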

Sandbox Testing :

  • Sandbox Testing is a type of integration test.
  • A sandbox is a type of software testing environment that enables the isolated execution of software or programs from the production or live environment for independent evaluation, monitoring, or testing. In an implementation, a sandbox also may be known as a test server, development server, or working directory.

Smoke Testing :

  • Smoke testing, or “Build Verification Testing”, is a type of software testing that includes a non-exhaustive set of tests that aim at ensuring that the most crucial and important functions work. The result of this testing is used to decide if a build is stable enough to proceed with further testing.
  • Smoke testing is the preliminary check of the software after a build and before a release. This type of testing finds basic and critical issues in an application before critical testing is implemented.
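
A smoke suite can be sketched as a handful of fast, non-exhaustive checks on the most crucial functions, whose result decides whether to proceed with further testing; the application functions below are hypothetical stand-ins.

```python
# Smoke-test sketch: a few fast checks on the most critical paths
# of a freshly built application. start_app and load_homepage are
# hypothetical stand-ins for real application entry points.

def start_app():
    return {"status": "running"}

def load_homepage(app):
    return 200 if app["status"] == "running" else 500

def smoke_test():
    app = start_app()                 # 1. the application starts
    assert load_homepage(app) == 200  # 2. the main page responds
    return "build OK: proceed with further testing"

print(smoke_test())
```

If any of these basic checks fails, the build is rejected before more expensive, exhaustive testing begins.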

Smoke Testing benefits :

  • 1. Exposes integration issues
  • 2. Uncovers problems early
  • 3. Provides a level of confidence that changes to the software will not have adverse effects

Smoke testing origin :

  • The term smoke testing originates from a similarly basic type of hardware testing, in which a device passes the test if it does not catch fire the first time it is turned on.

Spike Testing :

  • Typically, a “spike test” involves gathering additional information or testing for easily reproduced edge cases.

Static Analysis definition :

  • Static analysis, also called static code analysis, is a method of computer program debugging that is done by examining the code without executing the program. It is performed by analyzing a set of code against a set (or multiple sets) of coding rules.

Static Analysis purpose :

  • One of the primary uses of static analyzers is to comply with standards. So, if you’re in a regulated industry that requires a coding standard, you’ll want to make sure your tool supports that standard.

Static Analysis :

  • Static Analysis is performed in a non-runtime environment.
  • Typically, a static analysis tool will inspect program code for all possible run-time behaviors and seek out coding flaws, back doors, and potentially malicious code.
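
As a toy illustration of inspecting code without executing it, the sketch below uses Python's standard `ast` module to flag calls to `eval` in a source snippet; the rule and the snippet are hypothetical, and real static analyzers apply far larger rule sets.

```python
# Toy static-analysis sketch: the SOURCE string is parsed but never
# executed, and a single coding rule ("never call eval") is checked.
# Both the rule and the snippet are hypothetical illustrations.
import ast

SOURCE = """
def load(config_text):
    return eval(config_text)
"""

def find_eval_calls(source):
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            findings.append(f"line {node.lineno}: call to eval()")
    return findings

print(find_eval_calls(SOURCE))  # reports the eval() call on line 3
```

The analysis finds the flaw purely from the program text, in a non-runtime environment, which is exactly what distinguishes it from dynamic analysis.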

Static Analysis time :

  • Static code analysis is performed early in development, before software testing begins. For organizations practicing DevOps, static code analysis takes place during the “Create” phase.

Static Analysis tools benefits :

  • The best static code analysis tools offer speed, depth, and accuracy.
  • Speed : It takes time for developers to do manual code reviews. Automated tools are much faster. Static code checking addresses problems early on. And it pinpoints exactly where the error is in the code. So, you’ll be able to fix those errors faster. Plus, coding errors found earlier are less costly to fix.
  • Depth : Testing can’t cover every possible code execution path. But a static code analyzer can. It checks the code as you work on your build. You’ll get an in-depth analysis of where there might be potential problems in your code, based on the rules you’ve applied.
  • Accuracy : Manual code reviews are prone to human error; automated tools are not. They scan every line of code to identify potential problems. This helps you ensure the highest-quality code is in place before testing begins. After all, when you’re complying with a coding standard, quality is critical.

Test Double :

  • Test Double is a generic term for any case where you replace a production object for testing purposes. There are at least five types of Test Doubles: Test Stub, Mock Object, Test Spy, Fake Object, and Dummy Object with some differences.
  • Test Double is used to resolve dependencies.

Sketch, Wireframe, Mockup and Prototype :

  • In order to prevent creating the wrong product or doing rework, it is valuable to use approaches that build a better understanding of customer requirements. These approaches are Sketch, Wireframe, Mockup, and Prototype. In addition, all of them occur before implementing any code.
  • Sketches, wireframes, mockups, and prototypes actually represent the different stages of the design flow. They start from low fidelity and end with high fidelity respectively.
  • The sketch is the simplest way to present an idea or initiative; it can even be drawn on a piece of paper and has the minimum level of fidelity.
  • Wireframe, a low-fidelity way to present a product, can efficiently outline structures and layouts.
  • A mockup looks more like a finished product or prototype, but it is not interactive and not clickable.
  • However, a prototype has a maximum level of fidelity and is interactive and clickable.

Mocks, fakes, and stubs :

  • Classification between mocks, fakes, and stubs is highly inconsistent across the literature. Consistent among the literature, though, is that they all represent a production object in a testing environment by exposing the same interface. Which out of mock, fake, or stub is the simplest is inconsistent, but the simplest always returns pre-arranged responses (as in a method stub). On the other side of the spectrum, the most complex object will fully simulate a production object with complete logic, exceptions, etc. Whether or not any of the mock, fake, or stub trio fits such a definition is, again, inconsistent across the literature. For example, a mock, fake, or stub method implementation between the two ends of the complexity spectrum might contain assertions to examine the context of each call. For example, a mock object might assert the order in which its methods are called, or assert consistency of data across method calls. In the book The Art of Unit Testing mocks are described as a fake object that helps decide whether a test failed or passed by verifying whether an interaction with an object occurred. Everything else is defined as a stub. In that book, fakes are anything that is not real, which, based on their usage, can be either stubs or mocks.

Mock Object :

  • In object-oriented programming, mock objects are simulated objects that mimic the behavior of real objects in controlled ways, most often as part of a software testing initiative. A programmer typically creates a mock object to test the behavior of some other object, in much the same way that a car designer uses a crash test dummy to simulate the dynamic behavior of a human in vehicle impacts. The technique is also applicable in generic programming.
  • Mock objects are used to simulate the behavior of a given object in order to cope with dependencies and isolate the system under test for controlled testing. For more information, it is possible to use the TDD approach without mock objects.
  • Mock objects are simulated objects that mimic the behavior of dependent real objects in controlled ways. A mock object is a type of Test Doubles. There are at least five types of Test Doubles: Test Stub, Mock Object, Test Spy, Fake Object, and Dummy Object with some differences.
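
A minimal sketch using the standard library's `unittest.mock`: a pre-arranged return value isolates the system under test from its dependency (stub-style use), and the same object can verify that the interaction occurred (mock-style use); the payment-gateway example is hypothetical.

```python
# Test-double sketch with unittest.mock. The checkout function and
# the payment-gateway dependency are hypothetical examples.
from unittest.mock import Mock

def checkout(cart_total, payment_gateway):
    # System under test: depends on an external payment gateway.
    return payment_gateway.charge(cart_total)

# Stub-style use: a pre-arranged response isolates the test from
# the real (slow, external) gateway.
gateway = Mock()
gateway.charge.return_value = "ok"
assert checkout(42, gateway) == "ok"

# Mock-style use: verify that the interaction itself occurred,
# which is what decides pass/fail for an interaction-based test.
gateway.charge.assert_called_once_with(42)
```

The same `Mock` object serves both roles here, which illustrates why the mock / fake / stub terminology is so inconsistent in practice.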

Test Driven Development (TDD) :

  • TDD is one of the most important concepts in Agile, so it is explained in detail below.
  • TDD is a test-first software development practice in which test cases are defined and created first, and executable code is created to make the test pass.
  • Test Driven Development is a predictable, incremental and emergent software development approach / technique which relies on Automated Tests.

TDD benefits :

  • It makes team collaboration easier and more efficient : team members can edit each other’s code with confidence, because the tests will inform them if the changes are making the code behave in unexpected ways.
  • It helps to clarify requirements : you have to figure out concretely what inputs you have to feed and what outputs you expect.
  • It forces good architecture : in order to make your code unit-testable, it must be properly modularized. Writing the tests first, various architectural problems tend to surface earlier.
  • It improves the design : it encourages small steps, and it makes you cut the unnecessary dependencies to facilitate the setup.
  • It promotes good design and separation of concerns, and it speeds up the overall development process.
  • It causes you to construct a test harness that can be automated : since the test exists before the code that makes it pass, the test acts as a requirement of the system under test. As soon as the test passes, the requirement is met.
  • It forces your code to be more modular : because you are writing small tests at a time, your code has to be more modular (otherwise it would be hard to test against). TDD helps you learn, understand, and internalize the key principles of good modular design.
  • It forces you to make your interfaces clean enough to be tested : it’s sometimes hard to see the advantage of this until you work on a body of code where it wasn’t done, and the only way to exercise and focus on a given piece of code is to run the whole system and set a break-point.
  • It helps prevent defects : at the very least, it helps you find design or requirement issues right at the beginning. TDD provides early warning of design problems (when they are easier to fix).
  • It improves quality and reduces bugs : TDD helps reduce the defects in code. However, it does not eliminate defects completely.
  • It helps programmers really understand their code.
  • It documents your code better than documentation does (it doesn’t go out of date, since you’re running it all the time).
  • It creates an automated regression test suite, basically for free : you don’t need to spend time afterward writing unit tests to test the implementation code.
  • It helps find stupid mistakes earlier : stupid mistakes are caught almost immediately. It helps developers find mistakes that would waste everyone’s time if they were found in QA.
  • Refactoring of code becomes easier and faster : TDD makes code easier to maintain and refactor, provides clarity during the implementation process, and provides a safety net when you want to refactor the code you have just written. Refactoring code written two years ago is hard; if that code is backed up by a set of good unit tests, the process is made much easier.
  • It facilitates maintenance : unit tests are especially valuable as a safety net when the code needs to be changed, either to add new features or to fix an existing bug. Since maintenance accounts for between 60 and 90% of the software life cycle, it’s hard to overstate how the time taken upfront to create a decent set of unit tests can pay for itself over and over again over the lifetime of the project.

Test Driven Development (TDD) practice :

  • The simple concept of TDD is to write and correct the failed tests before writing new code (before development). This helps to avoid duplication of code, because we write a small amount of code at a time in order to pass tests. (Tests are nothing but the requirement conditions that the code needs to fulfill.)
  • TDD is a developer practice for creating a well-written unit of code (module, class, function).
  • TDD or Test-Driven Development or Test-First Development is a development process that has three steps:
    “Test-Driven Development” refers to a style of programming in which three activities are tightly interwoven: coding, testing (in the form of writing unit tests) and design (in the form of refactoring).
    Step 1 – Write a test and run it to fail (Write a “single” unit test describing an aspect of the program, run the test, which should fail because the program lacks that feature)
    Step 2 – Write just enough code to pass the test (write “just enough” code, the simplest possible, to make the test pass)
    Step 3 – Refactor the written code (Refactor the code until it conforms to the simplicity criteria)
    Then repeat, “accumulating” unit tests over time.
  • Another metaphor for TDD is: Red – Green – Refactor.
  • Test Driven Development does not test the existing test cases / software before developing new functionality. It only runs the test cases written for the new functionality that needs to be developed.
  • Test Driven Development is a technique where Developers develop a test case for each desired behavior of a unit of work and then extend the implementation to reflect this behavior. It can help to write cleaner code by emphasizing refactoring and it will decrease the risk of bugs. But the practice itself will not guarantee these outcomes, it still has to be applied correctly and needs skilled developers to achieve good results.
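
The three steps above can be sketched in sequence; in real TDD each step is a separate run of the suite, and the `fizz` example is hypothetical.

```python
# Red-Green-Refactor sketch. In practice each step is a separate
# test run; here the sequence is shown in one file. The fizz
# example is hypothetical.

# Step 1 (Red): write a single failing test first - at this point
# fizz does not exist yet, so running the test would fail.
def test_multiples_of_three_return_fizz():
    assert fizz(3) == "Fizz"
    assert fizz(4) == "4"

# Step 2 (Green): write just enough code, the simplest possible,
# to make the test pass.
def fizz(n):
    return "Fizz" if n % 3 == 0 else str(n)

# Step 3 (Refactor): clean up while keeping the test green, then
# repeat with the next test, accumulating unit tests over time.
test_multiples_of_three_return_fizz()
```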

Test first Development :

  • Test first Development is designing tests before satisfying them.
  • Test-first development (TFD) is an approach to development in which developers do not write a single line of code until they have created the test cases needed to prove that the unit of work solves the business problem and is technically correct at a unit-test level. In a response to a question on Quora, Beck described reading about developers using a test-first approach well before XP and Agile. Test-driven development is test-first development combined with design and code refactoring. Both test-first and test-driven development are useful for improving quality, morale and trust, and even though the two are related, they are not the same.
  • Test-First Development is an evolutionary approach to programming where agile software developers must first write a test that fails before they write new functional code.
  • Test first development, also known as Test Driven Development (TDD) is a development style in which you write the unit tests before you write the code to test.

Test-First Development advantages :
  • It promotes good design and separation of concerns.
  • It improves quality and reduces bugs.
  • It causes you to construct a test harness that can be automated.
  • It speeds the overall development process.
  • It reduces the re-work developers would have to do and gives them the courage to refactor.

User Acceptance Testing (UAT):

  • With UAT, we’re not just testing if a feature works, we’re testing if it works for the end user.
  • User acceptance testing verifies the user-facing functionality of a software product in real-world scenarios.
  • Each user acceptance test reflects the description of a functionality in the software requirements.
  • Scope-wise, UAT strives for a comprehensive coverage of the product in its entirety. This is one of the factors making the task of automating the acceptance testing so difficult.
  • Process-wise, UAT follows system testing. As mentioned earlier, user acceptance testing is the final stage of testing before the software goes live.
  • Running acceptance tests only makes sense after you’ve identified and fixed all major defects during unit and system testing.
  • Automated user acceptance testing can be a part of regression testing where teams rerun UAT suites before major releases.
  • Handwritten user acceptance tests are unproductive : the tests written for UAT essentially provide a second layer of coverage on top of what unit tests and integration tests already cover. Basically, we’re talking about 200% test coverage: 100% comes from unit and integration tests, and an additional 100% comes from UAT. Writing this much test code is far too time-consuming.

Unit Testing :

  • Unit Testing is the practice of testing certain functions and areas of code, or individual units of source code.
  • A Unit Test is a test that isolates and verifies individual units of source code.
  • A Unit test is a way of testing a unit (the smallest piece of code) that can be logically isolated in a system. In most programming languages, that is a function, a subroutine, a method or property. A Unit test can be automated.
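
A minimal unit-test sketch: a single function, the smallest logically isolated piece of code, is verified on its own with no environment or external dependencies; the `median` function is a hypothetical example.

```python
# Unit-test sketch: one small function tested in complete isolation,
# with each test asserting only one logical concept. The median
# function is a hypothetical example.

def median(values):
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

def test_median_odd_count():
    assert median([3, 1, 2]) == 2

def test_median_even_count():
    assert median([4, 1, 3, 2]) == 2.5

test_median_odd_count()
test_median_even_count()
```

Each test runs fast, depends on no environment or external data, and is independent of the other, matching the characteristics listed below.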

Unit Testing benefits :

  • 1. Identify Failures to improve quality.
  • 2. Produces code that is easy to test
  • 3. Prevent future changes from breaking functionality

Unit Testing practice :

  • Unit Testing is performed by Developers
  • Code in each Unit test should be as small as possible while maintaining readability of the code.
  • A unit test is a low-level test focusing on small parts of a software system; it can be executed fast and in isolation.
  • The definition and boundaries of a ‘unit’ generally depend on the context and are to be agreed upon by the Development Team.

Unit Testing characteristics :

  • 1. Code in each test is as small as possible while maintaining readability of the code.
  • 2. Each test is independent of other unit tests.
  • 3. They do not exercise the persistence layer of a solution.
  • 4. Each test makes assertions about only one logical concept.

Good Unit Test :

  • A unit test is a separated and isolated test that validates a unit of functionality. A good unit test should have the following characteristics:
    • Does not depend on the environment; e.g. it will run on your computer and it will run on your colleague’s computer
    • Does not depend on other unit tests
    • Does not depend on external data
    • Does not have side effects
    • Asserts the results of code
    • Tests a single unit of work (mostly a method)
    • Covers all the paths of the code under test
    • Tests and asserts edge cases and different ranges of data
    • Runs fast
    • Is well‐factored and as small as possible
  • A unit test does not depend on the environment
  • Each Unit test should be independent of other unit tests.

White-Box Testing :

  • White-box testing is a software testing method in which the internal structure/design/implementation of the item being tested is known to the tester. The tester chooses inputs to exercise paths through the code and determines the appropriate outputs. Also, it implies testing based on an analysis of the internal structure of the component or system. This method is named so because the software program, in the eyes of the tester, is like a white/transparent box; inside which one clearly sees.
  • White Box Testing: White Box testing, also known as Clear Box Testing, Open Box Testing, Glass Box Testing, Transparent Box Testing, Code-Based Testing or Structural Testing, is a software testing method in which the internal structure/design/implementation of the item being tested is known to the tester.
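
A minimal white-box sketch: knowing the implementation, the tester chooses inputs that exercise each internal branch, including the boundary values; the `grade` function is hypothetical.

```python
# White-box sketch: inputs are chosen from the code's internal
# structure so that every branch is exercised. The grade function
# is a hypothetical example.

def grade(score):
    if score >= 90:        # branch 1
        return "A"
    elif score >= 60:      # branch 2
        return "pass"
    return "fail"          # branch 3

# One input per branch, plus the boundary values 90 and 60,
# chosen by inspecting the code itself.
assert grade(95) == "A"
assert grade(90) == "A"
assert grade(60) == "pass"
assert grade(59) == "fail"
```

A black-box tester, by contrast, could only pick inputs from the specification and might never hit the exact boundaries in the code.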

More information on the Scrum PSD certification here.
