I am not aware that anyone has yet found a way to fully automate testing of a FIM solution. I know some people unit test their extension code, but that only tells you about the inputs and outputs of the code itself. Full testing may need to encompass data entry in the Portal or from connected systems, workflows, synchronization, and the resultant modifications in target systems.
We can achieve a reasonable level of testing in the Sync Service using the built-in sync preview and “drop a log file and stop the run” functionality. However, what if we also need to test FIM’s response to particular ways that data is entered and modified in connected systems, and what about the Portal?
The only way that I know to end-to-end test a FIM solution is with Test Cases: documented procedures with particular inputs that should lead to demonstrable outcomes. They have to be conducted manually, which is, unfortunately, a thoroughly boring job for someone.
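Even though the test cases themselves are run by hand, it can help to keep them as structured records rather than loose documents. As a minimal sketch (the `TestCase` structure, field names, and example record below are all hypothetical, not part of any FIM tooling), a test case might capture the inputs to apply and the outcome that should be demonstrable afterwards:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """One manual end-to-end test case: the inputs to apply and the outcome to verify."""
    case_id: str
    description: str
    steps: list[str] = field(default_factory=list)   # actions performed in the Portal or a connected system
    expected: str = ""                               # the demonstrable outcome in the target system(s)
    tags: set[str] = field(default_factory=set)      # areas of the solution this case exercises

# Illustrative record only -- the scenario and attribute details are made up.
tc = TestCase(
    case_id="TC-001",
    description="New hire is provisioned to AD",
    steps=[
        "Create the user record in the HR connected system",
        "Run the import/sync/export cycle",
    ],
    expected="Account exists in AD with the expected attributes",
    tags={"provisioning", "AD"},
)
```

Keeping cases in a form like this also makes it easy to count them, print checklists for the tester, and attach tags that come in handy when deciding which cases a given change affects.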
As a solution grows you may end up with many hundreds of test cases. Is it really necessary to run through all of them on each change? We have to be practical here, so perhaps a subset of the most common use cases can be tested, plus any that are particularly linked to the change.
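If the test cases carry tags for the areas they exercise, picking that subset can be mechanical. The sketch below assumes a hypothetical tagging scheme (the `select_cases` function, the tag names, and the idea of a fixed "smoke" set are all illustrative, not an established FIM practice): take the always-run smoke cases plus any case whose tags overlap the areas touched by the change.

```python
def select_cases(cases, changed_areas, smoke_ids):
    """Pick the smoke-test subset plus any case tagged with a changed area."""
    changed = set(changed_areas)
    return [c for c in cases
            if c["id"] in smoke_ids or changed & set(c["tags"])]

# Illustrative catalogue of test cases with made-up tags.
cases = [
    {"id": "TC-001", "tags": ["provisioning", "AD"]},
    {"id": "TC-002", "tags": ["deprovisioning"]},
    {"id": "TC-003", "tags": ["portal", "workflow"]},
]

# A change to a workflow: run the smoke set plus anything tagged "workflow".
picked = select_cases(cases, changed_areas=["workflow"], smoke_ids={"TC-001"})
# picked contains TC-001 (smoke set) and TC-003 (tagged with the changed area)
```

The selection is only as good as the tagging, of course, so it pays to review the tags whenever the solution design changes.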
I am also very glad when someone other than me does the testing. If I made the change then of course I know it’s fine, so I may completely miss where it actually isn’t. For complex environments with a lot of test cases to run through, having people other than the FIM administrators/designers do the testing is definitely a best practice.
Got something to add? Disagree? Comments are open!