All the necessary aspects of the Toolbox can be configured from code; look for such parameters on the various methods. The root configuration class that can be set up for every test individually is `OrchardCoreUITestExecutorConfiguration`.
Note that since the tests are xUnit tests, you can configure general parameters of test execution, including the level of parallelization, with an xUnit configuration file (`xunit.runner.json`). A suitable default one is included in the UI Testing Toolbox and will be loaded into your test projects; if you want to override it then:
- Add a suitable `xunit.runner.json` file to your project's folder.
- In the `.csproj` file, configure its "Build Action" as "Content" and "Copy to Output Directory" as "Copy if newer" to ensure it'll be used by the tests. This is how it looks in the project file:
```xml
<ItemGroup>
  <None Remove="xunit.runner.json" />
  <Content Include="xunit.runner.json">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </Content>
</ItemGroup>
```
Note also that some projects' `xunit.runner.json` files may include the `stopOnFail` flag set to `true`, which stops the execution of further tests once a failing test is encountered.
Certain test execution parameters can be configured externally too, namely the ones retrieved via the `TestConfigurationManager` class. All configuration options are basic key-value pairs and can be provided in one of two ways:
- Key-value pairs in a `TestConfiguration.json` file. Note that this file needs to be in the folder where the UI tests execute. By default this is the build output folder of the given test project, i.e. where the project's DLL is generated (e.g. `bin/Debug/net6.0`).
- Environment variables: Their names should be prefixed with `Lombiq_Tests_UI`, followed by the configuration path with a `__` separator, as in (ASP).NET configuration, e.g. `Lombiq_Tests_UI__OrchardCoreUITestExecutorConfiguration__MaxRetryCount` (instead of the double underscore you can also use a `:` on certain platforms like Windows). Keep in mind that you can set these just for the current session too. Configuration in environment variables takes precedence over the `TestConfiguration.json` file. When you're setting environment variables while trying out test execution, keep in mind that you'll have to restart the app after changing any environment variable.
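For example, here's a sketch of overriding a single value from a bash session; the variable name simply follows the `Lombiq_Tests_UI__Section__Property` pattern described above:

```shell
# Override MaxRetryCount for the current shell session only (bash syntax).
# The variable name follows the Lombiq_Tests_UI__Section__Property pattern.
export Lombiq_Tests_UI__OrchardCoreUITestExecutorConfiguration__MaxRetryCount=0

# Check what a test process launched from this shell would see.
echo "$Lombiq_Tests_UI__OrchardCoreUITestExecutorConfiguration__MaxRetryCount"
```

Because the variable only lives in the current session, it's convenient for quickly trying out a configuration without touching any files.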
Here's a full `TestConfiguration.json` file example, appropriate during development when you have a fast machine (probably faster than the one used to execute these tests) and want tests to fail fast instead of being reliable:
```json
{
  "Lombiq_Tests_UI": {
    "AgentIndex": 3,
    "TimeoutConfiguration": {
      "RetryTimeoutSeconds": 5,
      "RetryIntervalMillisecondSeconds": 300,
      "PageLoadTimeoutSeconds": 120
    },
    "OrchardCoreUITestExecutorConfiguration": {
      "MaxRetryCount": 0,
      "RetryIntervalSeconds": 0,
      "MaxParallelTests": 0
    },
    "BrowserConfiguration": {
      "Headless": true
    }
  }
}
```
Recommendations and notes for such configuration:
- This will execute tests in headless mode, so no browser windows will be opened (for browsers that support it). If you want to troubleshoot a failing test then disable headless mode.
- We encourage you to experiment with a `RetryTimeoutSeconds` value suitable for your hardware. Paradoxically, a higher value is usually less safe.
- If you have several UI test projects, it can be cumbersome to maintain a `TestConfiguration.json` file for each. Instead, you can set the `LOMBIQ_UI_TESTING_TOOLBOX_SHARED_TEST_CONFIGURATION` environment variable to the absolute path of a central configuration file, and then each project will look it up. If you place an individual `TestConfiguration.json` file into a test directory, it will still take precedence, in case you need special configuration for just that one project.
- `MaxParallelTests` sets how many UI tests should run at the same time. It is an important property if you want to run your UI tests in parallel; check out the inline documentation in `OrchardCoreUITestExecutorConfiguration`.
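Wiring up the shared configuration lookup can be as simple as the following sketch; the path is only an example, so use an absolute path valid on your machine:

```shell
# Point every UI test project at one central configuration file instead of
# maintaining a TestConfiguration.json per project. The path below is an
# example placeholder, not a path the Toolbox expects.
export LOMBIQ_UI_TESTING_TOOLBOX_SHARED_TEST_CONFIGURATION="$HOME/Projects/Shared/TestConfiguration.json"
```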
If you want to filter out certain HTML validation errors for a specific test, you can simply filter them out of the error results by their rule ID. For example:
```csharp
configuration => configuration.HtmlValidationConfiguration.AssertHtmlValidationResultAsync =
    validationResult =>
    {
        var errors = validationResult.GetParsedErrors()
            .Where(error => error.RuleId is not "prefer-native-element");
        errors.ShouldBeEmpty(HtmlValidationResultExtensions.GetParsedErrorMessageString(errors));
        return Task.CompletedTask;
    };
```
Note that `RuleId` is the identifier of the rule that you want to exclude from the results. The custom string formatting in the call to `errors.ShouldBeEmpty()` is used to display the errors in a more readable way and is not strictly necessary.
If you want to change some HTML validation rules for multiple tests, you can also create a custom `.htmlvalidate.json` file (e.g. `TestName.htmlvalidate.json`). This should extend the `default.htmlvalidate.json` file (which is always copied into the build directory) by setting the value of `"extends"` to a relative path pointing to it and declaring `"root": true`. For example:
```json
{
  "extends": [
    "./default.htmlvalidate.json"
  ],
  "rules": {
    "element-required-attributes": "off",
    "no-implicit-button-type": "off"
  },
  "root": true
}
```
You can also create a completely standalone configuration file without any inheritance, but we'd recommend against that.
You can change the configuration to use the above file as follows:
```csharp
changeConfiguration: configuration =>
    configuration.HtmlValidationConfiguration.HtmlValidationOptions.ConfigPath =
        Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "TestName.htmlvalidate.json");
```
Though if the file is in the base directory like above, then the call can be simplified using the `WithRelativeConfigPath(params string[] pathSegments)` method:

```csharp
changeConfiguration: configuration =>
    configuration.HtmlValidationConfiguration.WithRelativeConfigPath("TestName.htmlvalidate.json");
```
If you want to do this for all tests in the project, just put an `.htmlvalidate.json` file into the project root and it will be picked up without further configuration.
Include it in the test project like this:
```xml
<ItemGroup>
  <Content Include="TestName.htmlvalidate.json">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
    <PackageCopyToOutput>true</PackageCopyToOutput>
  </Content>
</ItemGroup>
```
UI tests are executed in parallel by default for the given test execution process (see the xUnit documentation). However, if you'd like multiple processes to execute tests, like when multiple build agents run tests for separate branches on the same build machine, then you'll need to tell each process which build agent it is running on. This is so that clashes on e.g. network port numbers can be prevented.
Supply the agent index in the `AgentIndex` configuration. It doesn't need to be zero-indexed, but that's highly recommended (see the docs on limits), and it must be unique to each process. You can also use this to find a port interval on your machine where no other processes are listening.
If you have multiple UI test projects in a single solution and you're executing them with a single `dotnet test` command, then disable their parallel execution with the xUnit `"parallelizeAssembly": false` configuration (i.e. while tests within a project will be executed in parallel, the test projects themselves won't, to avoid port and other clashes due to the same `AgentIndex`). This is provided by the `xunit.runner.json` file of the UI Testing Toolbox by default.
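For reference, here's a minimal sketch of an `xunit.runner.json` that disables cross-assembly parallelization with this flag (your project's actual file may set other options too):

```json
{
  "$schema": "https://xunit.net/schema/current/xunit.runner.schema.json",
  "parallelizeAssembly": false
}
```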
You can learn more about the microsoft-mssql-server container here. You have to mount a local volume that can be shared between the host and the container. Update the values of `device` and `SA_PASSWORD` in the code below and execute it.
```shell
docker pull mcr.microsoft.com/mssql/server
docker run --name sql2019 -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Password1!" -p 1433:1433 -d mcr.microsoft.com/mssql/server:2019-latest
docker exec -u 0 sql2019 bash -c "mkdir /data; chmod 777 /data -R; chown mssql:root /data"
```
You need to put the shared directory inside your `$HOME` directory, in this example `~/.local/docker/mssql/data`:
```shell
docker pull mcr.microsoft.com/mssql/server
docker run --name sql2019 -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Password1!' -p 1433:1433 -d 'mcr.microsoft.com/mssql/server:2019-latest'
docker exec -u 0 sql2019 bash -c 'mkdir /data; chmod 777 /data -R; chown mssql:root /data'
```
If you haven't yet, add your user to the `docker` group.
If you get a `PlatformNotSupportedException`, that's a known problem with Microsoft.Data.SqlClient on .NET 5 and above. As a workaround, temporarily set the project's runtime identifier to `linux-x64`, either on the command line or by adding `<RuntimeIdentifier>linux-x64</RuntimeIdentifier>` to the project file.
If you want to test it out, type `docker exec -u 0 -it sql2019 /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P 'Password1!'` to access the SQL console.
You can use Docker Desktop or Portainer to stop or start the container going forward.
SQL Server on Linux only supports SQL Authentication, and you still have to tell the Toolbox about your backup paths. Add the following properties to your `Lombiq_Tests_UI` configuration. Adjust the `Password` field of the connection string and the `HostSnapshotPath` property as needed.
```json
{
  "SqlServerDatabaseConfiguration": {
    "ConnectionStringTemplate": "Server=.;Database=LombiqUITestingToolbox_{{id}};User Id=sa;Password=Password1!;Connection Timeout=60;ConnectRetryCount=15;ConnectRetryInterval=5;TrustServerCertificate=True;Encrypt=False"
  },
  "DockerConfiguration": {
    "ContainerSnapshotPath": "/data/Snapshots",
    "ContainerName": "sql2019"
  }
}
```
Note that `TrustServerCertificate=True;Encrypt=False` is used in the connection string due to breaking changes in Microsoft.Data.SqlClient, as described in this issue. The same is also present in the default value of the connection string template. This configuration would be a security hole in a production environment, but it's safe for testing and development.
The default value of `ContainerSnapshotPath` is `"/data/Snapshots"`, so you can omit that.
While the snapshot functionality is only supported for SQLite and SQL Server databases, you can still run tests that connect to a PostgreSQL or MySQL (MariaDB) database.
This doesn't need any special care; just configure the suitable connection strings when running the Orchard setup. If you're running multiple tests with a single database, then also take care of setting the database table prefix there to a value unique to the current execution of the given test. Note that due to the lack of snapshot support for these databases, you won't be able to use `ExecuteTestAfterSetupAsync()` from your tests.
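Generating such a unique table prefix can be as simple as the following sketch; the helper class is hypothetical and not part of the Toolbox:

```csharp
using System;

// Hypothetical helper, not part of the UI Testing Toolbox: produces a short,
// alphanumeric table prefix that's unique per test execution, so multiple
// tests can share one PostgreSQL or MySQL database without clashing.
public static class TablePrefixHelper
{
    public static string GetUniquePrefix() =>
        // Start with a letter (safest for identifiers) and append a random
        // suffix so repeated executions of the same test don't collide.
        "t" + Guid.NewGuid().ToString("N")[..8];
}
```

You'd then pass the generated value to the table prefix field during the Orchard setup step of the given test.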
For an example of how you can set up a test suite to run tests with all DB engines supported by Orchard (in GitHub Actions), see this pull request.