The Squirrels Are Watching

Assumption Testing

Posted in Code and Tech by andrewfong on May 6, 2009

I like to write narratives when I’m testing code. Do A. Test some stuff. Do B. Test some stuff.

The problem with these narratives is that an error in part A can cause cascading test failures in B, C, and so on. The root cause usually isn't hard to find, but it's definitely annoying to watch one bug fill up your console with tracebacks from 50 failed tests.

One way to deal with this is to compartmentalize your tests, i.e., make A and B separate tests and mock out any references to A in B. It's easy to overdo this, however. A lot of the time, you actually do want to test the interaction between A and B (and C and D and so forth).

What we really want is to run the narrative and stop as soon as we hit a failure. However, there doesn't appear to be a way to control the flow and order of testing between different tests in Python's native unittest and doctest modules. That basically leaves writing really long, unwieldy test functions. Not very maintainable.

So I ended up hacking together an extension of unittest (and doctest, sort of) and named it Antf. The basic idea is to add a way of specifying that a test case depends on functionality tested in another test case. If test A depends on test B, then we run B first. If B fails, the test runner skips A.

Possible issues include namespace collisions when tracking which test cases have already run, and circular assumption references. Neither seems like a real show-stopper right now, though, so I've gone ahead and pasted the code below the fold.
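Before diving into the full unittest machinery, the core control flow can be sketched in a handful of lines. This is an illustrative sketch of mine, not part of Antf itself; the names and the tuple-based dependency format are made up for the example:

```python
# Sketch of assumption testing: a test runs only after all of the
# tests it assumes have passed; a failed assumption skips it.
_results = {}  # test name -> True (passed) or False (failed)

def run_test(name, func, assumes=()):
    """Run func once, but only if every assumed test has passed."""
    if name in _results:
        return _results[name]  # never run the same test twice
    for dep_name, dep_func, dep_assumes in assumes:
        if not run_test(dep_name, dep_func, dep_assumes):
            return False       # an assumption failed: skip this test
    try:
        func()
        _results[name] = True
    except AssertionError:
        _results[name] = False
    return _results[name]
```

Antf below implements the same skip-and-cache logic, but on top of unittest's TestCase and TestResult so the usual runners still work.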



"""
Antf is an extension of the unittest framework that allows for
abstraction of assumptions. Assumption testing basically means
we don't run certain tests if other tests those tests assume are True
have failed. This saves us the hassle of wading through a long stream
of stack traces when something low in our stack breaks

"""
from unittest import TestCase
import unittest
import doctest
import new
import types


class AntfTestCase(TestCase):
    """ A subclass of AntfTestCase can assume other subclasses of
    AntfTestCase are valid. When a AntfTestCase subclass is
    instantiated and called, it first checks all of its assumptions.
    If any of the assumptions are False, it fails immediately.
    
    Use like you would normally use unittest.TestCase, but set the
    _assumes class variable if you want to test those
    
    >>> class PassingTest(AntfTestCase):
    ...     def runTest(self):
    ...         assert True
    
    >>> class AssumesPassingTest(PassingTest):
    ...     _assumes = [PassingTest]
    
    >>> class AssumesAssumesPassingTest(PassingTest):
    ...     _assumes = [AssumesPassingTest]
    
    
    Testing AssumesAssumesPassingTest runs 3 tests -- it checks
    every assumption down the chain by running corresponding tests
    
    >>> result = unittest.TestResult()
    >>> AssumesAssumesPassingTest()(result)
    >>> result.testsRun
    3
    >>> assert result.wasSuccessful(), result.failures + result.errors
    
    
    If we add another assumption to the chain, only that additional test
    is checked since the results from the previous assumptions in the
    chain are already loaded in memory
    
    >>> class TripleAssumesTest(PassingTest):
    ...     _assumes = [AssumesAssumesPassingTest]
    
    >>> result = unittest.TestResult()
    >>> TripleAssumesTest()(result)
    >>> result.testsRun
    1
    >>> assert result.wasSuccessful(), result.failures + result.errors
    
    
    If a test in the assumption chain fails, the chain stops early.
    In this case, only the failing test has been run
        
    >>> class FailingTest(AntfTestCase):
    ...     def runTest(self):
    ...         assert False
    
    >>> class AssumesFailingTest(PassingTest):
    ...     _assumes = [FailingTest]
    
    >>> result = unittest.TestResult()
    >>> AssumesFailingTest()(result)
    >>> result.testsRun
    1
    >>> assert not result.wasSuccessful()
    
    
    When assuming multiple tests, any false assumption causes the test to be skipped
    
    >>> class AssumesBothTests(PassingTest):
    ...     _assumes = [PassingTest, FailingTest]
    
    >>> result = unittest.TestResult()
    >>> AssumesBothTests()(result)
    >>> result.testsRun
    0
    
    
    A test case with multiple test methods can be called once per test method.
    
    >>> class DoubleTest(AntfTestCase):
    ...     def test_one(self):
    ...         assert True
    ...
    ...     def test_two(self):
    ...         assert False
    >>> result = unittest.TestResult()
    >>> DoubleTest('test_one')(result)
    >>> DoubleTest('test_one')(result)
    
    Same test, so it only runs once
    >>> result.testsRun
    1
    >>> assert result.wasSuccessful()
    
    Different test, so it runs and records a new result
    >>> DoubleTest('test_two')(result)
    >>> result.testsRun
    2
    >>> assert not result.wasSuccessful()
    
    
    If we assume a test case with multiple tests and any of them fail,
    the assumption does not hold
        
    >>> class AssumesDoubleTest(AntfTestCase):
    ...     _assumes = [DoubleTest]
    ...     def runTest(self):
    ...         assert True
    >>> result = unittest.TestResult()
    >>> AssumesDoubleTest()(result)
    >>> result.testsRun
    0
    
    
    When assuming a test case with multiple tests, the assumption runs
    all of them if they haven't been run already. Tests inherited by a
    subclass are run again, since their outcome may differ in the new class
    
    >>> class NewDoubleTest(DoubleTest):
    ...     def test_two(self):
    ...         assert True
    
    >>> class AssumesNewDoubleTest(AntfTestCase):
    ...     _assumes = [NewDoubleTest]
    ...     def runTest(self):
    ...         assert True
    
    >>> result = unittest.TestResult()
    >>> AssumesNewDoubleTest()(result)
    >>> result.testsRun
    3
    >>> assert result.wasSuccessful()
    
    
    Individual test methods can be assumed as well
    
    >>> class AssumesDoubleTestPass(AntfTestCase):
    ...     _assumes = [DoubleTest.test_one] # this test asserts True
    ...     def runTest(self):
    ...         assert True
    
    >>> result = unittest.TestResult()
    >>> AssumesDoubleTestPass()(result)
    >>> result.testsRun
    1
    >>> assert result.wasSuccessful()
    
    >>> class AssumesDoubleTestFail(AntfTestCase):
    ...     _assumes = [DoubleTest.test_two] # this test asserts False
    ...     def runTest(self):
    ...         assert True
    
    >>> result = unittest.TestResult()
    >>> AssumesDoubleTestFail()(result)
    >>> result.testsRun
    0
    
    """
    _assumes = [] # List of other test cases this test case assumes
    _ok = {} # Nested dicts mapping AntfTestCase classes to individual
             # test names to whether that test has passed.
             # True means passed, False means it hasn't; if a class
             # is not in this dict, none of its tests have run yet
    
    @property
    def already_ran(self):
        return self._testMethodName in self.cls_ok_dict
        
    @classmethod
    def get_ok_dict(cls):
        return cls._ok.setdefault(cls, {})
        
    @property
    def cls_ok_dict(self):
        return self.__class__.get_ok_dict()
        
    def set_ok(self, passed):
        self.cls_ok_dict[self._testMethodName] = passed
    
    @classmethod
    def no_failures(cls, test_method_name=None):
        """ Has the given test method for this test case passed?
        
        If no method name is provided, return True if all methods run
        so far for this test case have passed and False if any have failed
        
        """
        if test_method_name:
            try:
                return cls.get_ok_dict()[test_method_name]
            except KeyError:
                assert False, "A test with that name and this class has not yet been run"
        
        assert cls.get_ok_dict(), "No tests have been run for this class"
        return all(cls.get_ok_dict().values())
    
    def __call__(self, result=None):
        """ Runs the test with assumption logic.
        
        (1) Does not run twice -- once a result is recorded in
            cls._ok, subsequent calls return immediately
        (2) Checks the _assumes list before running; if any
            assumption fails, this test is skipped
        (3) Marks the test as ok only if running it adds no new
            errors or failures to the result
        
        """
        if result is None:
            result = self.defaultTestResult()
        if self.already_ran:
            # If this test has already been run, stop
            return
        else:
            # By default, assume this test is failing.
            # It must reach the end without raising to be marked as ok.
            self.set_ok(False)
                
        # If any assumptions are false, don't run this test
        for assumption in self.__class__._assumes:
            if isinstance(assumption, types.MethodType):
                assumption_name = assumption.__name__
                assumption = assumption.im_class
                assert issubclass(assumption, AntfTestCase),\
                    "Assumed methods must belong to AntfTestCase subclass"
                # Instantiate the assumed test with the method name and run it
                assumption(assumption_name)(result)
                
            elif isinstance(assumption, types.TypeType) \
              or isinstance(assumption, types.ClassType):
                assumption_name = None
                assert issubclass(assumption, AntfTestCase),\
                    "Assumed class be AntfTestCase subclass"
                # Create test suite from assumption class and run it
                loader = unittest.defaultTestLoader
                suite = loader.loadTestsFromTestCase(assumption)
                suite(result)
            
            else: assert False, "Invalid assumption type"
            
            if not assumption.no_failures(assumption_name):
                return # assumption failed -- skip this test
        
        old_error_count = len(result.errors)
        old_failure_count = len(result.failures)
        self._call(result)
        if (len(result.errors) == old_error_count and
            len(result.failures) == old_failure_count):
            self.set_ok(True)
            
    def _call(self, result):
        return TestCase.__call__(self, result)


class AntfDocTestCase(AntfTestCase):
    _test_objs = []
    
    def run(self, result=None):
        if result is None: result = self.defaultTestResult()
        
        tests = []
        for obj in self._test_objs:
            tests += doctest.DocTestFinder().find(obj)
        
        for test in tests:
            doctest.DocTestCase(test, **self._extras)(result)
    
    def runTest(self, *args, **kwds):
        pass


class MakeAntfDocTest(object):
    ''' Decorator factory: wraps a function's doctest in an AntfTestCase subclass
    
    These next examples more or less mirror the examples for AntfTestCase
    
    >>> @MakeAntfDocTest()
    ... def test_pass():
    ...     """
    ...     >>> assert True
    ...     """
    
    >>> @MakeAntfDocTest(test_pass)
    ... def test_assumes_pass():
    ...     """
    ...     >>> 5 + 5
    ...     10
    ...     """
    
    >>> @MakeAntfDocTest(test_assumes_pass)
    ... def test_double_assumes_pass():
    ...     """
    ...     >>> 5 + 5
    ...     10
    ...     >>> 7 + 7
    ...     14
    ...     """
    
    >>> assert issubclass(test_pass, AntfTestCase)
    >>> assert issubclass(test_assumes_pass, AntfTestCase)
    >>> assert issubclass(test_double_assumes_pass, AntfTestCase)
    
    
    Testing test_double_assumes_pass runs 3 tests -- it checks
    every assumption down the chain by running corresponding tests
    
    >>> result = unittest.TestResult()
    >>> test_double_assumes_pass()(result)
    >>> result.testsRun
    3
    >>> assert result.wasSuccessful(), result.failures + result.errors
    
    
    If we add another assumption to the chain, only that additional test
    is checked since the results from the previous assumptions in the
    chain are already loaded in memory
    
    >>> @assumes(test_assumes_pass)
    ... def test_triple_assumes_pass():
    ...     """
    ...     >>> assert True
    ...     """
    
    >>> result = unittest.TestResult()
    >>> test_triple_assumes_pass()(result)
    >>> result.testsRun
    1
    >>> assert result.wasSuccessful(), result.failures + result.errors
    
    
    If there is a failing test in the assumption, then stop the chain early
    In this case, only the failing test has been run
        
    >>> @MakeAntfDocTest()
    ... def test_fail():
    ...     """
    ...     >>> assert False
    ...     """
    
    >>> @assumes(test_fail)
    ... def assumes_test_fail():
    ...     """
    ...     >>> assert True
    ...     """
    
    >>> result = unittest.TestResult()
    >>> assumes_test_fail()(result)
    >>> result.testsRun
    1
    >>> assert not result.wasSuccessful()
    
    
    When assuming multiple tests, any false assumption should cause a failure
    
    >>> @assumes(test_pass, test_fail)
    ... def test_assume_both():
    ...     """
    ...     >>> assert True
    ...     """
    
    >>> result = unittest.TestResult()
    >>> test_assume_both()(result)
    >>> result.testsRun
    0
    
    
    The ELLIPSIS option is on by default
    
    >>> @antf()
    ... def test_ellipsis():
    ...     """
    ...     >>> print 12345
    ...     1...5
    ...     """
    >>> result = unittest.TestResult()
    >>> test_ellipsis()(result)
    >>> assert result.wasSuccessful(), result.failures + result.errors
    
    '''
    
    def __init__(self, *assumptions, **extras):
        self.assumptions = assumptions
        self.extras = {'optionflags' : doctest.ELLIPSIS}
        self.extras.update(extras)
    
    def __call__(self, func):    
        cls_dict = {}
        cls_dict['_assumes'] = self.assumptions
        cls_dict['_extras'] = self.extras
        cls_dict['_test_objs'] = [func]
        
        return new.classobj(func.__name__, (AntfDocTestCase,), cls_dict)

# Aliases
assumes = antf = MakeAntfDocTest


def _test():
    import doctest
    doctest.testmod()

if __name__ == "__main__":
    _test()

Update (May 8, 2009): Made a few fixes to some of the more glaring bugs.

Update 2 (May 8, 2009): More fixes. Antf now lives in a Google Code SVN repository. Check there for updates from now on.
