Setting up DWH Test Automation: Design & Arch
15 May 2017
With Continuous Integration working for my Data Warehousing solution (SQL Server, SSAS Tabular, SSIS), it's time to take the next step. Before moving to Continuous Deployment, I want rigorous, automated testing of my EDW. But how to get there?
As I currently see it, we need tests[ref]And, as a consequence, automated tests[/ref] for the DWH in at least two ways:
- Case-by-case specific end-to-end system testing
- Broad regression testing
Case-by-case specific end-to-end system testing
The first way is case-by-case testing: hand-pick interesting cases from the business, write down the input we get for these cases, as well as the output that's expected, and test that. Notice that this is
- case-by-case - because every case is worked through separately
- end-to-end system testing - because we insert something at staging, call on the ETL process, and see if the result is as expected
By narrowing down the test cases this much, it becomes easy to detect new errors. It also aligns neatly with a test-driven approach, where we ask the business to come up with very specific cases in the source system along with their expectations for the resulting dashboard, and build for these very specific requirements before moving on. Lynn Winterboer suggested (in this particularly great podcast from SQLDataPartners) that you should even have the business agree to accept delivered software that is only guaranteed to work for these 10 cases - after which they can submit the next 10 cases that don't work yet. This way you'll have a fast iteration cycle while building up a solid collection of tests.
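To make this concrete, here is a minimal sketch of what one such case-specific check could look like once the ETL has run. All object names, keys and amounts (FactSales, DimCustomer, CUST-0042, 1250.00) are made up for illustration; the real checks would of course reflect the cases the business hands you.

```sql
-- Hypothetical assertion for one hand-picked business case:
-- after loading the agreed staging rows and running the ETL,
-- customer CUST-0042 should end up with a total sales amount of 1250.00.
DECLARE @actualAmount decimal(18, 2);

SELECT @actualAmount = SUM(f.SalesAmount)
FROM dbo.FactSales AS f
JOIN dbo.DimCustomer AS c ON c.CustomerKey = f.CustomerKey
WHERE c.CustomerCode = N'CUST-0042';

IF @actualAmount IS NULL OR @actualAmount <> 1250.00
    THROW 50001, N'Case CUST-0042: expected a total sales amount of 1250.00.', 1;
```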
From a technical point of view, it's a good idea to have some infrastructure that automates the following tasks for you:
- Empty the entire DWH in your test environment
- Insert the data needed for one specific case
- Execute ETL
In my current project, I've got a database containing everything to perform these tests:
- Tables with identical structure to the ones in the staging area (plus two columns "TestSuiteName" and "TestName")
- A table containing the mapping from test-input table to target database, schema and table[ref]column names are expected to be identical here[/ref]
- A stored procedure to purge the DWH (all layers) in the test environment
- A stored procedure to insert the data for a specific test suite / test name (both components are sketched below)
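A minimal sketch of what those tables could look like - all names (the testinput schema, the Customer table, dbo.TestTableMapping and its columns) are hypothetical; only the TestSuiteName and TestName columns come from the description above:

```sql
-- Test-input table: same structure as its staging counterpart,
-- plus the two extra columns TestSuiteName and TestName.
CREATE TABLE testinput.Customer (
    CustomerCode  nvarchar(20)  NOT NULL,
    CustomerName  nvarchar(100) NULL,
    TestSuiteName nvarchar(128) NOT NULL,
    TestName      nvarchar(128) NOT NULL
);

-- Mapping from each test-input table to its target database, schema and table.
-- Column names are expected to be identical on both sides.
CREATE TABLE dbo.TestTableMapping (
    TestSchemaName   sysname NOT NULL,
    TestTableName    sysname NOT NULL,
    TargetDatabase   sysname NOT NULL,
    TargetSchemaName sysname NOT NULL,
    TargetTableName  sysname NOT NULL,
    CONSTRAINT PK_TestTableMapping PRIMARY KEY (TestSchemaName, TestTableName)
);
```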
When preparing a specific test case (the "insert rows for test case" step), the rows needed for that case are copied from the test-input tables into the DWH.
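A rough sketch of how such an insert procedure could work, driven by the hypothetical mapping table above (the procedure name, the cursor-based approach and the details are assumptions, not the actual implementation):

```sql
-- Sketch of an insert procedure that copies the rows of one test case
-- from the test-input tables into the target staging tables, driven by
-- the mapping table (all object names are hypothetical).
CREATE PROCEDURE dbo.InsertTestCase
    @TestSuiteName nvarchar(128),
    @TestName      nvarchar(128)
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @testSchema sysname, @testTable sysname,
            @targetDb sysname, @targetSchema sysname, @targetTable sysname,
            @columnList nvarchar(max), @sql nvarchar(max);

    DECLARE mapping_cursor CURSOR LOCAL FAST_FORWARD FOR
        SELECT TestSchemaName, TestTableName,
               TargetDatabase, TargetSchemaName, TargetTableName
        FROM dbo.TestTableMapping;

    OPEN mapping_cursor;
    FETCH NEXT FROM mapping_cursor
        INTO @testSchema, @testTable, @targetDb, @targetSchema, @targetTable;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- All columns of the test-input table except the two test-only ones;
        -- column names are assumed to be identical to the target table.
        SELECT @columnList = STUFF((
            SELECT ', ' + QUOTENAME(c.name)
            FROM sys.columns AS c
            WHERE c.object_id = OBJECT_ID(QUOTENAME(@testSchema) + '.' + QUOTENAME(@testTable))
              AND c.name NOT IN ('TestSuiteName', 'TestName')
            ORDER BY c.column_id
            FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)'), 1, 2, '');

        SET @sql =
            N'INSERT INTO ' + QUOTENAME(@targetDb) + N'.' + QUOTENAME(@targetSchema) + N'.' + QUOTENAME(@targetTable)
          + N' (' + @columnList + N')'
          + N' SELECT ' + @columnList
          + N' FROM ' + QUOTENAME(@testSchema) + N'.' + QUOTENAME(@testTable)
          + N' WHERE TestSuiteName = @suite AND TestName = @test;';

        EXEC sys.sp_executesql @sql,
            N'@suite nvarchar(128), @test nvarchar(128)',
            @suite = @TestSuiteName, @test = @TestName;

        FETCH NEXT FROM mapping_cursor
            INTO @testSchema, @testTable, @targetDb, @targetSchema, @targetTable;
    END;

    CLOSE mapping_cursor;
    DEALLOCATE mapping_cursor;
END;
```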
Broad Regression Testing
The second thing we need is a broader way of testing, one that answers the question: how does the new version compare to the currently running Data Warehouse?
In order to answer this question, I have an environment identical to the production server. I call this the "production shadow test" for lack of a better name. The important thing here is that this server is synchronized with the production server before each new deployment[ref]I use a small PowerShell script to do a backup-restore first, revoking the production rights and applying the CI rights[/ref], and after that receives exactly the same ETL load.
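Stripped to its essence, that refresh boils down to something like the following. This is only a sketch of the backup-restore mentioned in the footnote (which in practice runs from the PowerShell script); all database, path and user names are made up:

```sql
-- On the production server: take a copy-only backup (hypothetical names).
BACKUP DATABASE ProdDWH
    TO DISK = N'\\backupshare\ProdDWH.bak'
    WITH COPY_ONLY, INIT;

-- On the shadow/test server: restore it over the shadow DWH.
RESTORE DATABASE ShadowDWH
    FROM DISK = N'\\backupshare\ProdDWH.bak'
    WITH REPLACE,
         MOVE N'ProdDWH'     TO N'D:\Data\ShadowDWH.mdf',
         MOVE N'ProdDWH_log' TO N'D:\Data\ShadowDWH_log.ldf';

-- Afterwards: revoke the production rights and apply the CI rights
-- (user names made up for illustration).
USE ShadowDWH;
DROP USER IF EXISTS prod_etl_service;
CREATE USER ci_test_service FOR LOGIN ci_test_service;
ALTER ROLE db_datawriter ADD MEMBER ci_test_service;
```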
This enables us to compare the production version row by row to this "test" environment.
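One way to do that comparison for a given table is a two-way EXCEPT, for example over linked servers. The server, database and table names below are assumptions, not the actual setup:

```sql
-- Rows that exist in production but not (or differently) in the shadow DWH,
-- and vice versa, for one hypothetical fact table.
-- PRODSRV and SHADOWSRV are assumed linked servers.
SELECT N'in production, not in shadow' AS [difference], p.*
FROM PRODSRV.DWH.dbo.FactSales AS p
EXCEPT
SELECT N'in production, not in shadow', s.*
FROM SHADOWSRV.DWH.dbo.FactSales AS s;

SELECT N'in shadow, not in production' AS [difference], s.*
FROM SHADOWSRV.DWH.dbo.FactSales AS s
EXCEPT
SELECT N'in shadow, not in production', p.*
FROM PRODSRV.DWH.dbo.FactSales AS p;
```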
Note that this will not lead to fully automated evaluation - it should be clear that new releases of the DWH may well give different results than the code currently in production. So the fact that the results differ is not an error per se - but the comparison gives the person who has to "approve" the release insight into which logic and data will change.
Other types of tests
Behind every test you perform, there's a hidden question - at the most basic level, most unit tests answer the question of whether the method you just programmed or changed performs as expected. Before moving to continuous deployment there are more questions I'd like to ask of my new codebase - and therefore should test for:
- Is it still deployable?
- Does it fulfill the acceptance criteria the business agreed on?
- How does it compare with the current production system?
- When running with a "real" workload, does the system still perform as expected?
- Does it change the outcomes we currently get from production?
Currently, all of these questions can be answered by the three corresponding processes just described:
- Deploy to CI-test environment
- Run case-by-case tests
- Run "shadow" tests (described above)
Maybe over time other questions will arise (and other test types will need to be designed).
The Test Workflow
Bringing the different types of tests together, the workflow just described boils down to: deploy to the CI test environment, run the case-by-case tests (purge the test DWH, insert the rows for each case, run the ETL and check the expectations), and finally run the "shadow" comparison against production.
As you can see, it isn't that hard. It just requires some discipline ;-).
Having shared this, I'm pretty sure more can be said about automated testing. For example, maybe my terminology isn't 100% correct (I'm not a test professional, after all), and I'm using confusing terms like "case-by-case" or "shadow" where you would clearly describe things differently. Or maybe I'm including or leaving out important parts of these tests, making my life harder (or the tests less useful).
So I'm very curious about your input - feel welcome to comment below!