13
votes

I've been using Pentaho Kettle for quite a while, and previously the transformations and jobs I've made (using Spoon) have been quite simple: load from a db, rename fields, etc., and output the result to another db. But now I've been building transformations that do more complex calculations, and I would like to test them somehow.

So what I would like to do is:

  1. Set up some test data
  2. Run the transformation
  3. Verify result data

One option would probably be to make a Kettle test job that would test the transformation. But as my transformations relate to a Java project, I would prefer to run the tests from JUnit. So I've considered making a JUnit test (sketched after the list below) that would:

  1. Set up test data (using DbUnit)
  2. Run the transformation (using kitchen.sh from the command line)
  3. Verify result data (using DbUnit)
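
For illustration, here is a minimal sketch of what such a test could look like, assuming JUnit 4, DbUnit, and an in-memory H2 database standing in for the real one. The Kettle install path, file names, and table name are all hypothetical, and since kitchen.sh runs jobs rather than single transformations, its transformation counterpart pan.sh is used here:

```java
import static org.junit.Assert.assertEquals;

import java.io.File;
import java.sql.Connection;
import java.sql.DriverManager;

import org.dbunit.Assertion;
import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.ITable;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;
import org.junit.Test;

public class TransformationIT {

    @Test
    public void transformationProducesExpectedRows() throws Exception {
        // Hypothetical in-memory test db; the real transformation would point here
        Connection jdbc = DriverManager.getConnection("jdbc:h2:mem:test", "sa", "");
        IDatabaseConnection db = new DatabaseConnection(jdbc);

        // 1. Set up test data (DbUnit)
        IDataSet input = new FlatXmlDataSetBuilder()
                .build(new File("src/test/resources/input.xml"));
        DatabaseOperation.CLEAN_INSERT.execute(db, input);

        // 2. Run the transformation from the command line
        Process pan = new ProcessBuilder(
                "/opt/pentaho/data-integration/pan.sh",          // hypothetical path
                "-file=src/test/resources/my-transformation.ktr") // hypothetical .ktr
                .inheritIO()
                .start();
        assertEquals("pan.sh should exit cleanly", 0, pan.waitFor());

        // 3. Verify result data (DbUnit)
        IDataSet expected = new FlatXmlDataSetBuilder()
                .build(new File("src/test/resources/expected.xml"));
        ITable actual = db.createDataSet().getTable("RESULT_TABLE");
        Assertion.assertEquals(expected.getTable("RESULT_TABLE"), actual);
    }
}
```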

This approach would, however, require test databases, which are not always available (Oracle and other expensive/legacy dbs). What I would prefer is some way to mock or pass stub test data to my input steps.
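
For what it's worth, Kettle's own Java API looks like it could support exactly this kind of stubbing: if the transformation starts with an Injector step, a test can push rows into it through a RowProducer and capture the output with a RowListener, with no database involved. A rough sketch, assuming the Kettle 4.x API; the step names ("Injector", "Output") and the .ktr path are hypothetical:

```java
import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.Test;
import org.pentaho.di.core.KettleEnvironment;
import org.pentaho.di.core.row.RowMeta;
import org.pentaho.di.core.row.RowMetaInterface;
import org.pentaho.di.core.row.ValueMeta;
import org.pentaho.di.core.row.ValueMetaInterface;
import org.pentaho.di.trans.RowProducer;
import org.pentaho.di.trans.Trans;
import org.pentaho.di.trans.TransMeta;
import org.pentaho.di.trans.step.RowAdapter;

public class MockedInputTest {

    @Test
    public void stubRowsFlowThroughTransformation() throws Exception {
        KettleEnvironment.init();

        Trans trans = new Trans(new TransMeta("my-transformation.ktr")); // hypothetical
        trans.prepareExecution(null);

        // Capture whatever the final step writes, instead of checking a db
        final List<Object[]> output = new ArrayList<Object[]>();
        trans.findRunThread("Output").addRowListener(new RowAdapter() {
            @Override
            public void rowWrittenEvent(RowMetaInterface rowMeta, Object[] row) {
                output.add(row);
            }
        });

        // Feed stub rows into the Injector step instead of a db input step
        RowProducer producer = trans.addRowProducer("Injector", 0);
        trans.startThreads();

        RowMetaInterface meta = new RowMeta();
        meta.addValueMeta(new ValueMeta("id", ValueMetaInterface.TYPE_INTEGER));
        meta.addValueMeta(new ValueMeta("name", ValueMetaInterface.TYPE_STRING));
        producer.putRow(meta, new Object[] { 1L, "alice" });
        producer.putRow(meta, new Object[] { 2L, "bob" });
        producer.finished();

        trans.waitUntilFinished();
        assertEquals(0, trans.getErrors());
        assertEquals(2, output.size());
    }
}
```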

Any other ideas on how to test Pentaho Kettle transformations?

3
I don't understand what you mean by "This however would limit my tests to those databases that i have available on our test server." Aren't you always limited to those databases, given you are running on the test server? – mooreds
I edited the question a bit to clarify. But anyhoo, what I meant was that I don't always have access to my input step databases (besides read access to a real production db), so I cannot load any test data into them via DbUnit etc. That's why I would prefer mocking my input step data, if possible somehow. – hannesh

3 Answers

4
votes

There is a JIRA somewhere on jira.pentaho.com (I don't have it to hand) that requests exactly this, but alas it is not yet implemented.

So you do have the right solution in mind. I'd also add Jenkins and an Ant script to tie it all together. I've done a similar thing with report testing: I actually had a Pentaho job load the data, then it executed the report, then it compared the output with known output and reported pass/failure.

3
votes

If you separate your Kettle jobs into two phases:

  • load data to stream
  • process and update data

You can use a Copy rows to result step at the end of your load data to stream transformation, and a Get rows from result step at the start of your process transformation.

If you do this, then you can use any means to load the data (a Kettle transform, DbUnit called from an Ant script) and can mock up any database tables you want.

I use this for testing some ETL scripts I've written and it works just fine.
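
Building on this answer, the process phase could also be driven straight from JUnit: build a Result holding stub rows and hand it to the transformation before running it, so the Get rows from result step sees them as if the load phase had produced them. A rough sketch, assuming the Kettle 4.x API (in particular that Trans.setPreviousResult is what Get rows from result reads from) and a hypothetical process.ktr:

```java
import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.Test;
import org.pentaho.di.core.KettleEnvironment;
import org.pentaho.di.core.Result;
import org.pentaho.di.core.RowMetaAndData;
import org.pentaho.di.core.row.RowMeta;
import org.pentaho.di.core.row.RowMetaInterface;
import org.pentaho.di.core.row.ValueMeta;
import org.pentaho.di.core.row.ValueMetaInterface;
import org.pentaho.di.trans.Trans;
import org.pentaho.di.trans.TransMeta;

public class ProcessPhaseTest {

    @Test
    public void processPhaseHandlesStubRows() throws Exception {
        KettleEnvironment.init();

        // Stub rows standing in for what "load data to stream" would
        // normally copy to the result
        RowMetaInterface meta = new RowMeta();
        meta.addValueMeta(new ValueMeta("id", ValueMetaInterface.TYPE_INTEGER));

        List<RowMetaAndData> stubRows = new ArrayList<RowMetaAndData>();
        stubRows.add(new RowMetaAndData(meta, new Object[] { 1L }));
        stubRows.add(new RowMetaAndData(meta, new Object[] { 2L }));

        Result previous = new Result();
        previous.setRows(stubRows);

        // Run only the "process and update" transformation; its
        // Get rows from result step reads from the previous result
        Trans trans = new Trans(new TransMeta("process.ktr")); // hypothetical path
        trans.setPreviousResult(previous);
        trans.execute(null);
        trans.waitUntilFinished();

        assertEquals(0, trans.getErrors());
    }
}
```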

1
votes

You can use the Data Validator step. Of course it is not a full unit test suite, but I think it can sometimes be useful for checking data integrity in a quick way, and you can run several tests at once.

For a more "serious" test I would recommend @codek's answer and executing your Kettle jobs under Jenkins.

[screenshot: Data Validator step]