ONLINE HELP
WINDEV, WEBDEV AND WINDEV MOBILE

  • Overview
  • Displaying the test manager
  • Features of the test manager
  • Test management window
  • Ribbon options
  • Describing a scenario
  • Scenario code
  • Event "Before the test"
  • Event "Test scenario"
  • Event "After the test"
WINDEV
Windows, Linux, Universal Windows 10 App, Java, Reports and Queries, User code (UMC)
WEBDEV
Windows, Linux, PHP, WEBDEV - Browser code
WINDEV Mobile
Android, Android Widget, iPhone/iPad, iOS Widget, Apple Watch, Mac Catalyst, Universal Windows 10 App
Others
Stored procedures
Overview
The test manager is used to:
  • display in the editor the list of tests (or scenarios) attached to an element.
  • configure the different tests.
The status report of test execution is displayed in the "Results of tests" pane.
Displaying the test manager
The list of existing automated tests can be viewed in the "Tests" folder of the "Project explorer" pane.
The different scenarios are grouped according to the element whose test is run (or according to the element that started the automated test). For example, "TEST_WIN_addressinput" groups all the test scenarios run from the "WIN_addressinput" window.
To display the test manager for a specific element, use one of the following methods:
  • double-click the name of the group of tests (in the "Project explorer" pane).
  • display the element (window, class, ...) in the editor. On the "Automated tests" tab, in the "Tests" group, click "See associated tests".
Features of the test manager

Test management window

The window for managing the automated tests of an element allows you to:
  • View the summary of all the scenarios associated with the current element, in the left section of the window. The tests are classified into different categories:
    • Stand-alone tests: independent tests. For example, for a stand-alone test on a window, the window is closed at the end of the test.
    • Linked tests: tests run one after another. For example, for linked tests on a window, the window remains open at the end of the first test so that the next test can be run on the same window.
    • Subtests: tests that are not run directly, but via EmulateWindow.
  • Display the code of a scenario: all you have to do is select a scenario in the tree structure displayed on the left. The corresponding WLanguage code is displayed in the right section of the editor.
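For example, a subtest could be invoked from the scenario of another test via EmulateWindow. The sketch below is illustrative only: the window name is hypothetical and the exact syntax accepted by EmulateWindow is an assumption, so check the EmulateWindow documentation before using it.
// Event "Test scenario" of a calling test (illustrative sketch)
// Runs the subtest associated with the "WIN_addressinput" window
// (assumption: EmulateWindow takes the window to be tested)
EmulateWindow(WIN_addressinput)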
From this window, you can:
  • Start a scenario: simply select the desired scenario in the tree structure on the left, and select "Start" in the context menu.
  • Display the description of a scenario: simply select a scenario in the tree structure on the left, and select "Description" in the context menu.
  • Rename or delete a scenario: simply select a scenario in the tree structure on the left, and select "Rename" or "Delete" in the context menu.
  • Create a new scenario ("New scenario" in the context menu): the created scenario is empty and the corresponding WLanguage code can be typed.
  • Create a group of tests ("New group of tests" in the context menu): scenarios can then be grouped into a group of tests by dragging and dropping them.
Different icons indicate the status of each test: passed without error, modified, or in error.
The status report of test execution is displayed in the "Results of tests" pane.

Ribbon options

Some features of the test editor are grouped in the "Automated tests" tab of the ribbon.
The different options are:
  • "Tests" group options:
    • New: Creates a new scenario. You can:
      • Record a new scenario.
      • New blank scenario: Creates a blank scenario.
      • Import a scenario recorded on the application: Imports a test case (wsct file) created by the user into the current project. For more details, see Automated test created by the user.
    • Run test: Runs the current test in the test editor.
    • Run all:
      • Run all the project tests: Runs all the scenarios on all the elements in the project.
      • Run all the tests not run: Runs all the scenarios that have not yet been run.
        Remark: A modified scenario that has not been started is considered "not run".
      • Run all the tests that detected errors: Runs only the scenarios in which errors were found.
      • Run automated tests in slow mode: If this option is selected, the scenarios are run more slowly, making each action easier to follow during execution.
      • Enable dynamic audit during automated tests: If this option is selected, the dynamic audit is automatically started along with the scenarios and the audit report is displayed after each scenario is run.
      • Enable strict mode during automated tests: If this option is selected, when the test encounters an error (TestCheck returns False, errors, assertions, etc.), the test stops automatically in the debugger on the current iteration.
    • View results: Displays the results of the current test in the "Test results" pane.
    • Go to object: Goes to the project element associated with the current scenario.
  • "Automated test" group options:
  • "Code coverage" group options: Allows you to configure the data displayed in the coverage code statistics. For more details, see Code coverage.
Describing a scenario
To display the description of a test scenario:
  1. Select the scenario in the tree structure of the test manager.
  2. Select "Description" in the context menu of the scenario.
The description of a test scenario is used to enter information in the different tabs:
  • "General": Used to specify the name of the test as well as a caption.
  • "Details": Used to specify (for a window):
    • Whether the test must trigger an error if the window cannot be closed at the end of the test. When running unit tests on a window, this option is used to check that the window closes properly. If this option is selected, the test is a stand-alone test.
    • Whether the test must keep the window open at the end of its execution so that other tests can be run on this window. This option allows you to link several window tests together, and it is automatically checked if another window is opened during a window test. If this option is checked, the test is a linked test.
    • Whether the test result depends on a sequence of tests. In this case, the test cannot be run on its own. If this option is checked, the test is a subtest.
    The "Details" tab can also be used to redefine the opening parameters of the window being tested (for stand-alone tests only).
    Indeed, when testing a window that takes parameters, it can be useful to open the window with specific parameter values. By default, the parameters specified when the test was created are used.
Scenario code
The code of a test scenario is written in WLanguage and can easily be modified. To access it, select a scenario in the tree structure on the left; the corresponding WLanguage code is displayed in the right section of the editor.
Three code sections are displayed:
  • the event "Before the test". This event is run before the test.
  • the event "Test scenario". This event corresponds to the code run to test the desired element.
  • the event "After the test". This event is used to check the test results (e.g., against data found in a reference file). Generally, this event is used to close and free all the elements opened and allocated in the event "Before the test".
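Put together, the three events of a scenario could look like the following sketch, which reuses the hypothetical bCheckEmail procedure from the examples in this page (the event names are shown as comments; all names and values are illustrative):
// Event "Before the test": build the set of test data
TestAddIteration("test@test.fr", True)
TestAddIteration("not valid", False)

// Event "Test scenario": run one iteration and check the result
PROCEDURE MyScenario(psEmail, pbExpected is boolean)
TestCheck(pbExpected = bCheckEmail(psEmail), "Test error for " + psEmail)

// Event "After the test": log the outcome and free any allocated elements
TestWriteResult(twrInfo, "Email validation test completed")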

Event "Before the test"

The event "Before the test" is used to:
  • locate the data that will be used for the test,
  • pass the necessary parameters to the test,
  • define variables and a set of test data (via TestAddIteration).
Example:
// -- Event "Before the test" of a test on a procedure
// Two parameters have been defined in the event "Test scenario"
// Adds the test data
TestAddIteration("test@test.fr", True)
TestAddIteration("not valid", False)
TestAddIteration("", False)
TestAddIteration("test.test.test@test.fr", True)
TestAddIteration("test@test@test.fr", False)
TestAddIteration("test.fr", False)
TestAddIteration("http://www.test.fr", False)

Event "Test scenario"

The event "Test scenario" corresponds to the code run by the test.
You can modify the scenario code to add operations to the test, using the functions dedicated to automated tests or the Emulate functions.
You can also replace values with parameters (to use a set of test data). In this case, declare a procedure in this code to specify the expected parameters.
Example 1: Scenario that uses a set of test data and a controller
In this code, the Controller1 variable is defined as a controller (via the <Controller> extension attribute). This controller variable can then be used in the test code, for example to check the result of a procedure. In this example, the line of code:
Controller1 = bCheckEmail(psEmail)
runs the bCheckEmail procedure with the "psEmail" parameter and compares the result to the value defined for the "Controller1" variable.
// Define two parameters to create a set of test data
// The value of these parameters is defined in the event "Before the test"
PROCEDURE MyScenario(psEmail, Controller1 is boolean<controller>)
Controller1 = bCheckEmail(psEmail)
Example 2: Scenario that uses a set of test data and the TestCheck function
In this code, the procedure corresponding to the test expects two parameters. The data set is tested by TestCheck.
// Define two parameters to create a set of test data
// The value of these parameters is defined in the event "Before the test"
PROCEDURE MyScenario(psEmail, Controller1 is boolean)
TestCheck(Controller1 = bCheckEmail(psEmail), "Test error for " + psEmail)

Event "After the test"

The event "After the test" is used to check the test results (e.g., against data found in a reference file). Generally, this event is used to close and free all the elements opened and allocated in the event "Before the test".
Example: deleting the data created during the test. This code checks whether the test address was created, and deletes it if necessary.
// Check whether the address was saved
HReadSeekFirst(ADDRESS, NAME, "TEST")
IF HFound(ADDRESS) = False THEN
	// The test failed
	TestWriteResult(twrError, "The database was not successfully updated")
ELSE
	// Test OK: the test address can be deleted
	TestWriteResult(twrInfo, "The database was updated")
	// Delete the test address
	HDelete(ADDRESS)
	TestWriteResult(twrInfo, "The test address was deleted")
END
Minimum version required
  • Version 14

Last update: 06/23/2022
