fail

ETLUnit 3.9.6

 

Draft in Progress

This document is a draft and is under development.

 

Description

Force a failure in an ETLUnit test.  Useful for debugging.

Attributes

List of fail() Attributes

 

message      (TYPE: string, REQUIRED)

failure-id   (TYPE: string, optional)

Individual fail() Attributes

message:

  • The message to send with the failure.
  • This message will appear in the test log.

failure-id:

  • The failure-id string is arbitrary.  It may appear in the CLI (Command Line Interface) output when you run the test, pointing to where the failure occurred, and it may also appear on the summary page report next to the test method.
  • The convention is to make the failure-id value all capitals, with words separated by underscores.  For instance, FORCED_FAILURE or WORKFLOW_SHOULD_HAVE_FAILED.
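Putting the two attributes together, a minimal fail() call might look like the following sketch.  The message text and the ROW_COUNT_MISMATCH identifier are illustrative values chosen for this example, not names defined by ETLUnit:

fail(
    message: 'Row counts did not match',
    failure-id: 'ROW_COUNT_MISMATCH'
);

Since message is the only required attribute, the failure-id line could be omitted entirely; including it simply makes the failure easier to locate in the CLI output and on the summary report.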

Examples

fail() Example

When an ETLUnit test reaches the fail() operation, the test is marked as failed.  However, the test method may continue executing any code that appears after the fail() operation, until it reaches the end of the test method.  Only then does the test fail, according to the attributes given in the fail() operation.

 

Example of fail() operation where failure is not caught

@Test()
failOnPurpose()
{
    log(
        message: 'failing by operation fail()',
        log-file-name: 'fail_me.log',
        log-classifier: 'anyClassifier'
    );

    fail(
        message: 'Failure message 123-765',
        failure-id: 'DELIBERATE_FAILURE'
    );
}

 

 

The failure-id may show up in the CLI when you run the test.

 

userme > te #failOnPurpose
Processing [1] tests
class experiment.fail_operation   ------------------------------------------------
1/1        .failOnPurpose
      DELIBERATE_FAILURE
  Failed                                                                      F[1]
Tests run: 1, Failures: 1, Time elapsed: 00.269 sec

 

Note that the failure ID specified in the fail() operation appeared in the CLI under the name of the test method.

Run the report...

 

userme > r

 

... and you may see the value of failure-id next to the method name.

 

 

Catch the failure generated by the fail() operation

The failure generated by the fail() operation may be caught by specifying the value of the failure-id attribute in the @Test annotation's expected-failure-id attribute.

 

Example of catching failure generated by fail operation

@Test(expected-failure-id: 'DELIBERATE_FAILURE')
succeedWhilefailing()
{
    log(
        message: 'failing by operation fail()',
        log-file-name: 'log_my_fail.log',
        log-classifier: 'anyClassifier'
    );

    fail(
        message: 'Failure message 123-765',
        failure-id: 'DELIBERATE_FAILURE'
    );
}

 

The content of the test method hasn't changed, but because we specified an expected-failure-id, this time the test passed when run from the Command Line Interface:

 

userme > te #succeedWhile
Processing [1] tests
class experiment.fail_operation   ------------------------------------------------
1/1        .succeedWhilefailing
  Passed                                                                      P[1]
Tests run: 1, Successes: 1, Time elapsed: 00.179 sec

 

 

Using multiple fail() operations

 

Example of multiple fail operations

@Test
failTwice()
{
    fail(
        message: 'Failure message 123-765',
        failure-id: 'DELIBERATE_FAILURE'
    );

    log(
        message: 'failing by operation fail()',
        log-file-name: 'fail_me.log',
        log-classifier: 'anyClassifier'
    );

    fail(
        message: 'Failure message Whadduhyahknow',
        failure-id: 'SECONDARY_FAILURE'
    );

    log(
        message: 'I log again.',
        log-file-name: 'fail_me_also.log',
        log-classifier: 'anyClassifier'
    );
}

 

Both failure IDs may show up on the CLI:

 

userme > te #failTwice
Processing [1] tests
class experiment.fail_operation   ------------------------------------------------
1/1        .failTwice
      DELIBERATE_FAILURE
      SECONDARY_FAILURE
  Failed                                                                      F[2]
Tests run: 1, Failures: 2, Time elapsed: 00.181 sec

 

... and note that despite the first fail() operation, the rest of the code in the method still executed.  Both logging messages appear in the test's log.

 

As further proof, the two log files were created when running this test method: