Friday, January 28, 2005

Python unit testing part 3: the py.test tool and library

This is the last part of a 3-part discussion on Python unit test frameworks. You can find part 1 here and part 2 here. In this post I'll discuss the py.test tool and library.

py.test

Availability

As Python unit test frameworks go, py.test is the new kid on the block. It already has an impressive set of features, with more to come, since the tool is under very active development.

py.test is part of the py library, a collection of modules with bold goals. For example, the py.path module aims to "allow you to seamlessly work with different backends, currently a local filesystem, subversion working copies and subversion remote URLs. Moreover, there is an experimental extpy path to address a Python object on the (possibly remote) filesystem." (quoted from the Why, who, what and how do you do the py lib page).

Much of the motivation for writing the py library came from issues that arose in the PyPy project, whose goal is nothing less than to produce a simple and fast runtime-system for the Python language, written in Python itself. Note that the PyPy project received funding from the European Union, which is very encouraging for open-source projects in general and for Python projects in particular. As you can see, the guys in charge of these projects set their sights high and, judging by the intense activity on the py-dev mailing list, they'll waste no time reaching their goals. Of course, PyPy sprints in winter-sport-friendly Switzerland can't be all that bad either :-)

The main py.test developers are Holger Krekel and Armin Rigo. People interested in delving into all the juicy details of how to use py.test are urged to attend Holger's and Armin's talk at PyCon 2005. In this post, I'll just cover the basic usage, since I'm still very much a beginner at using this tool.

I'll start with a quick overview of the installation. More details can be found in Getting started with py.lib.

1. I didn't have subversion installed on my machine, so I had to jump through some hoops in order to install it (I won't go into the gory details here.)
2. I cd-ed into the directory where the py distribution would live (/usr/local in my case)
3. I checked out the latest py distribution by running:
svn co http://codespeak.net/svn/py/dist dist-py
4. At this point, I had the py directory tree under /usr/local/dist-py
5. I added the following line to my .bash_profile:
eval `python /usr/local/dist-py/py/env.py`
(this line basically sets up the PATH and PYTHONPATH environment variables so that you can run py.test as a command-line utility and you can "import py" in your Python code)
6. I sourced .bash_profile in my current shell session:
. ~/.bash_profile

That's about it. Now you can just run "py.test -h" at a command prompt to see the various command-line options accepted by the tool.

Ease of use / API complexity

Two words: no API. It's a scary thought, but you really can go wild when writing your unit tests. There are just two things you need to remember:

1. Prefix the names of your test functions/methods with test_ and the names of your test classes with Test
2. Save your test code in files that start with test_

That's about it in terms of API complexity. If you just run py.test in the directory that contains your tests, the tool will search the current directory and its subdirectories for files that start with test_, then it will automagically invoke all the test functions/methods it finds in those files. There is no need to inherit your test class from a framework-specific class, as is the case with unittest.

As with everything, there is one exception to the "no API" rule. The one place where py.test does have an API is in providing hooks for managing test fixture state. I'll provide more details as you read on.

Here's a quick example of testing the sort() list method. I saved the following in a file called test_sort.py:

class TestSort:
    def setup_method(self, method):
        self.alist = [5, 2, 3, 1, 4]

    def test_ascending_sort(self):
        self.alist.sort()
        assert self.alist == [1, 2, 3, 4, 5]

    def test_custom_sort(self):
        def int_compare(x, y):
            x = int(x)
            y = int(y)
            return x - y
        self.alist.sort(int_compare)
        assert self.alist == [1, 2, 3, 4, 5]

        b = ["1", "10", "2", "20", "100"]
        b.sort()
        assert b == ['1', '10', '100', '2', '20']
        b.sort(int_compare)
        assert b == ['1', '2', '10', '20', '100']

    def test_sort_reverse(self):
        self.alist.sort()
        self.alist.reverse()
        assert self.alist == [5, 4, 3, 2, 1]

    def test_sort_exception(self):
        import py.test
        py.test.raises(NameError, "self.alist.sort(int_compare)")
        py.test.raises(ValueError, self.alist.remove, 6)

Note the use of the special setup_method. It provides the same functionality as the setUp hook of the unittest module. I'll revisit py.test's state setup/teardown mechanism in the "Test fixture management" discussion below.

To run the tests in test_sort.py, simply invoke:
# py.test test_sort.py

inserting into sys.path: /usr/local/dist-py
============================= test process starts =============================
testing-mode: inprocess
executable : /usr/local/bin/python (2.4.0-final-0)
using py lib: /usr/local/dist-py/py
initial testconfig 0: /usr/local/dist-py/py/test/defaultconfig.py/.
===============================================================================
....
================== tests finished: 4 passed in 0.01 seconds ==================

Test execution customization

If you ran "py.test -h", you already saw that py.test has an impressive array of command-line options. The simplest one to try out is the verbose (-v) option:
# py.test -v test_sort.py

inserting into sys.path: /usr/local/dist-py
============================= test process starts =============================
testing-mode: inprocess
executable : /usr/local/bin/python (2.4.0-final-0)
using py lib: /usr/local/dist-py/py
initial testconfig 0: /usr/local/dist-py/py/test/defaultconfig.py/.
===============================================================================
0.000 ok test_sort.py:5 TestSort.test_ascending_sort()
0.000 ok test_sort.py:9 TestSort.test_custom_sort()
0.000 ok test_sort.py:23 TestSort.test_sort_reverse()
0.007 ok test_sort.py:28 TestSort.test_sort_exception()


================== tests finished: 4 passed in 0.02 seconds ==================
(A nice touch here is the printing of the execution time for each method.)

By the way, if you run py.test with no command-line options whatsoever, it will dutifully collect and run all the test code it can find in the current directory and below. Not bad for 7 characters worth of typing...

Another useful option is -S or --nocapture, which suppresses py.test's catching of sys.stdout/stderr output. By default, all such output is intercepted by py.test. I had some problems with this when a test I was running was itself redirecting stderr to stdout and not releasing it properly. When I ran my test code through py.test, all I got was a pretty mysterious traceback. I notified the developers on the py-dev list and the issue was promptly remedied. However, there may still be cases when you want your print statements to actually show up in your test output -- that's where the -S flag comes in handy (by default, print output from your test code will only show up for tests that fail.)
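
For example, in a throwaway test like the following sketch (the names are invented), the print line reaches your console only when you pass -S, or when the test fails:

def test_with_debug_output():
    interim = 2 + 2
    # captured by py.test under the default options;
    # run with -S (or --nocapture) to see it on every run
    print "interim value is", interim
    assert interim == 4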

I might as well air a gripe at this point: py.test does a lot of "magic" behind the curtains, which may or may not be what you want. This is the price you pay for the "no API" feature. There's a lot of hidden stuff going on, and sometimes py.test handles errors/exceptions less than gracefully -- so you can find yourself staring at stack traces that may not be very revelatory. However, the people on the py-dev mailing list are extremely responsive and supportive, so all you need to do is send an email to py-dev at codespeak.net and you can be assured your issue will be responded to in a matter of hours.

Back to command-line options: I haven't played with all of them yet, but I'll just mention another useful one: --collectonly, which shows you all the tests found by py.test in the current directory and below, without actually running them. Here's the output I get:
# py.test --collectonly

inserting into sys.path: /usr/local/dist-py
============================= test process starts =============================
testing-mode: inprocess
executable : /usr/local/bin/python (2.4.0-final-0)
using py lib: /usr/local/dist-py/py
initial testconfig 0: /usr/local/dist-py/py/test/defaultconfig.py/.
===============================================================================
Directory('')
Module('/root/scripts/tests/test_blogger.py/.')
Class('/root/scripts/tests/test_blogger.py/.TestBlogger')
Module('/root/scripts/tests/test_blogger2.py/.')
Class('/root/scripts/tests/test_blogger2.py/.TestBlogger')
Module('/root/scripts/tests/test_doctest_sort.py/.')
Module('/root/scripts/tests/test_sort.py/.')
Class('/root/scripts/tests/test_sort.py/.TestSort')

====================== tests finished: in 0.16 seconds ======================

Test fixture management

py.test really shines in this category. It vastly surpasses unittest in providing setup and teardown hooks for managing test fixture/state in your test environments. You can have state maintained across test modules, classes and methods via hooks called setup_module/teardown_module, setup_class/teardown_class and setup_method/teardown_method respectively.

Let's first see an example of setup_method. As I mentioned before, this is the equivalent of unittest's setUp hook. Here's a test class I wrote for the Blogger module. I saved the following lines in a file called test_blogger.py:
import Blogger


class TestBlogger:

    def setup_method(self, method):
        print "in setup_method"
        self.blogger = Blogger.get_blog()

    def test_get_feed_title(self):
        title = "fitnessetesting"
        assert self.blogger.get_title() == title

    def test_get_feed_posting_url(self):
        posting_url = "http://www.blogger.com/atom/9276918"
        assert self.blogger.get_feed_posting_url() == posting_url

    def test_get_feed_posting_host(self):
        posting_host = "www.blogger.com"
        assert self.blogger.get_feed_posting_host() == posting_host

    def test_post_new_entry(self):
        init_num_entries = self.blogger.get_num_entries()
        title = "testPostNewEntry"
        content = "testPostNewEntry"
        assert self.blogger.post_new_entry(title, content) == True
        assert self.blogger.get_num_entries() == init_num_entries + 1
        # Entries are ordered most-recent first
        # Newest entry should be first
        assert title == self.blogger.get_nth_entry_title(1)
        assert content == self.blogger.get_nth_entry_content_strip_html(1)

    def test_delete_all_entries(self):
        self.blogger.delete_all_entries()
        assert self.blogger.get_num_entries() == 0

Let's run this through py.test with -v and -S, so that we can see the print output from setup_method:
# py.test -v -S test_blogger.py

inserting into sys.path: /usr/local/dist-py
============================= test process starts =============================
testing-mode: inprocess
executable : /usr/local/bin/python (2.4.0-final-0)
using py lib: /usr/local/dist-py/py
initial testconfig 0: /usr/local/dist-py/py/test/defaultconfig.py/.
===============================================================================
in setup_method
0.050 ok test_blogger.py:9 TestBlogger.test_get_feed_title()
in setup_method
0.000 ok test_blogger.py:13 TestBlogger.test_get_feed_posting_url()
in setup_method
0.001 ok test_blogger.py:17 TestBlogger.test_get_feed_posting_host()
in setup_method
10.173 ok test_blogger.py:21 TestBlogger.test_post_new_entry()
in setup_method
7.106 ok test_blogger.py:32 TestBlogger.test_delete_all_entries()


================== tests finished: 5 passed in 17.47 seconds ==================
Note that setup_method was called before each of the test_ methods. This is exactly what you need in those cases where you want your test methods to be independent of each other, each with its own state: you create that state in setup_method and you destroy it if needed in teardown_method.
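
Here is a minimal sketch (the file name and contents are invented for illustration) of a matched setup_method/teardown_method pair:

import os

class TestScratchFile:
    def setup_method(self, method):
        # give each test method its own scratch file
        self.path = "scratch_%s.txt" % method.__name__
        self.scratch = open(self.path, "w")

    def teardown_method(self, method):
        # runs after each test method, pass or fail
        self.scratch.close()
        os.remove(self.path)

    def test_write(self):
        self.scratch.write("hello")
        assert not self.scratch.closed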

However, you may not need the overhead of setting up/tearing down state on each and every test method call. In this case, you can use module-level or class-level setup/teardown hooks.

Here's an example of using a module-level hook. In my specific case, it doesn't make that much difference, since the call to Blogger.get_blog() returns the same object every time. But one can easily imagine cases where some fixture state (such as a database connection or query result, or a file to read from) needs to be set up once per module, so that all test classes/methods/functions in that module can then use it. I saved the following lines in a file called test_blogger2.py:
import Blogger


def setup_module(module):
    print "in setup_module"
    module.TestBlogger.blogger = Blogger.get_blog()

class TestBlogger:
    """the rest of the code is the same"""

Running this code under py.test with -v and -S produces:
# py.test -v -S test_blogger2.py

inserting into sys.path: /usr/local/dist-py
============================= test process starts =============================
testing-mode: inprocess
executable : /usr/local/bin/python (2.4.0-final-0)
using py lib: /usr/local/dist-py/py
initial testconfig 0: /usr/local/dist-py/py/test/defaultconfig.py/.
===============================================================================
in setup_module
0.058 ok test_blogger2.py:9 TestBlogger.test_get_feed_title()
0.000 ok test_blogger2.py:13 TestBlogger.test_get_feed_posting_url()
0.001 ok test_blogger2.py:17 TestBlogger.test_get_feed_posting_host()
10.173 ok test_blogger2.py:21 TestBlogger.test_post_new_entry()
21.329 ok test_blogger2.py:32 TestBlogger.test_delete_all_entries()


================== tests finished: 5 passed in 31.71 seconds ==================
Note that setup_module was called only once, at the very beginning of the test run. For more examples of setup/teardown hooks in action, see the py.test online documentation.
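
Before moving on, here is a sketch of the class-level variant, where the state is created once for the whole class (imagine something expensive, such as a database connection, in place of the list):

class TestSharedFixture:
    def setup_class(cls):
        # called once, before the first test method in this class runs
        cls.shared = range(1000)

    def teardown_class(cls):
        # called once, after the last test method in this class
        del cls.shared

    def test_length(self):
        assert len(self.shared) == 1000

    def test_first_element(self):
        assert self.shared[0] == 0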

Test organization

This is another strong point of py.test. Because the only requirement for a test file to be recognized as such by py.test is for the filename to start with test_ (and even this can be customized), it is very easy to organize your tests in hierarchies and test suites by creating a directory tree and placing/grouping your test files in the appropriate directories. Then you can just run py.test with no arguments and let it find and execute all the test files for you. A carefully chosen naming scheme would certainly help you in this scenario.

A feature of py.test which is a pleasant change from unittest is that the test execution order is guaranteed to be the same for each test run, and it is simply the order in which the test function/methods appear in a given test file. No alphanumerical sorting order to worry about.

I should probably also mention YAPTF (yet another py.test feature): testing starts as soon as the first test item is collected. The collection process is iterative and does not need to complete before your first test items are executed. But wait... the nifty things you can do never seem to stop! You can disable the execution of test classes by setting the special class-level attribute disabled. An example from the documentation: to avoid running Unix-specific tests under Windows, you can say

class TestEgSomePosixStuff:
    disabled = sys.platform == 'win32'

    def test_xxx(self):
        ...

Note that the py.test collection process can be used not only for unit tests, but for other types of testing, for example functional or system testing. In the past, I used a homegrown framework for collecting and running functional and system test suites, but I intend to replace that with the more elegant and customizable py.test mechanism. See the py.test documentation for more details, particularly The three components of py.test and Customizing the py.test process. One caveat here is that this is a work in progress, so some details related to the customization of the process might change. Consult the py-dev mailing list if in doubt.

Another py.test feature worth mentioning in this category is the ability to define and run so-called "generative tests". I haven't used them yet, but here's what the py.test documentation has to say about them:

"Generative tests are test methods that are generator functions which yield callables and their arguments. This is most useful for running a test function multiple times against different parameters. Example:


def test_generative():
    for x in (42, 17, 49):
        yield check, x

def check(arg):
    assert arg % 7 == 0   # second generated test fails!

Note that test_generative() will cause three tests to get run, notably check(42), check(17) and check(49) of which the middle one will obviously fail."

Assertion syntax

There is no special assertion syntax in py.test. You can use the standard Python assert statements, and they will (again, magically) be interpreted by py.test so that more helpful error messages can be printed out. This is in marked contrast with unittest's custom and somewhat clunky assertEqual/assertTrue/etc. mechanism.

I haven't shown an example of a failing test yet. Let's modify the assertion in the test_delete_all_entries method from:
assert self.blogger.get_num_entries() == 0

to:
assert self.blogger.get_num_entries() == 1

We now get this output:
# py.test test_blogger2.py

inserting into sys.path: /usr/local/dist-py
============================= test process starts =============================
testing-mode: inprocess
executable : /usr/local/bin/python (2.4.0-final-0)
using py lib: /usr/local/dist-py/py
initial testconfig 0: /usr/local/dist-py/py/test/defaultconfig.py/.
===============================================================================
....F
_______________________________________________________________________________

def test_delete_all_entries(self):
self.blogger.delete_all_entries()
E assert self.blogger.get_num_entries() == 1
~ assert 0 == 1
+ where 0 = </root/scripts/tests/test_blogger2/py.TestBlogger instance at 0x40801f0c>.blogger.get_num_entries()

[/root/scripts/tests/test_blogger2.py:34]
_______________________________________________________________________________
============= tests finished: 4 passed, 1 failed in 36.78 seconds =============


When it encounters a failed assertion, py.test prints the lines in the method containing the assertion, up to and including the failure. It also prints the actual and the expected values involved in the failed assertion. This default behavior can be changed by giving the --nomagic option at the command line, in which case the assert statement behaves in the standard way, generating an output such as:
E       assert self.blogger.get_num_entries() == 1
~ AssertionError
Also, by default, when it encounters a failure py.test only shows the relevant portions of the tracebacks in order to make debugging easier. If you want to see the full traceback leading to the failure in all its gory details, you can run py.test with the --fulltrace option (I will spare you the details of the output.)

Dealing with exceptions

The test_sort.py module I showed above contains an example of how exceptions can be handled with py.test:

def test_sort_exception(self):
    import py.test
    py.test.raises(NameError, "self.alist.sort(int_compare)")
    py.test.raises(ValueError, self.alist.remove, 6)

Here I needed to import py.test in my test code, in order to be able to use the raises() function it provides. This function takes the expected exception type as the first parameter. The other parameters are either
  • a string specifying the function or method call that is supposed to raise the exception, or
  • the actual callable, followed by its arguments
The more general form for the raises() function is:
py.test.raises(Exception, "func(*args, **kwargs)")

py.test.raises(Exception, func, *args, **kwargs)

Summary

I hope I've convinced you that the py.test tool and the py library are worthy of your consideration, although I've probably just scratched the surface of their capabilities.

Here are some Pros and Cons of using py.test, in the interest of what I hope is a fair comparison between unittest, doctest and py.test.

py.test Pros
  • no API!
  • great flexibility in test execution via command-line arguments
  • strong support for test fixture/state management via setup/teardown hooks
  • strong support for test organization via collection mechanism
  • strong debugging support via customized traceback and assertion output
  • very active and responsive development team
py.test Cons
  • available only in "raw" form via subversion; this makes its inclusion in other modules/frameworks a bit risky
  • many details, especially the ones related to customizing the collection process, are subject to refactorings and thus may change in the future
  • a lot of magic goes on behind the scenes, which can sometimes obscure the tool's intent (it sure obscures its output sometimes)
An interesting question is how to best combine the strengths of the 3 tools I discussed (unittest, doctest and py.test). It seems that many people are already using unittest in conjunction with doctest, with the former being used in situations that demand fixture setup and teardown, and the latter in situations where small functions need to be tested without the overhead of creating test case classes. Regardless of the style of testing, doctest seems to be a great way of keeping documentation in sync with the code. At the same time, py.test can either coexist with or replace unittest in those cases where test fixture management and test organization are important.

I think that small teams will appreciate py.test's flexibility and utter lack of rules, whereas larger teams might appreciate unittest's structure and the fact that it standardizes the testing code, thus making it more maintainable. That is not to say that py.test cannot be adopted by large teams -- I just think that at some point they will have to create their own frameworks on top of py.test in order to impose structure and standardization on their test code. The good news is that py.test's versatility and malleability make it easy to add that structure on top of it.

Thursday, January 27, 2005

Python unit testing part 2: the doctest module

This is part 2 of a 3-part discussion on Python unit test frameworks. You can find part 1 here. In this second part, I'll discuss the doctest module.

doctest

Availability

The doctest module has been part of the Python standard library since version 2.1.

Ease of use

It's hard to beat doctest in this category. There is no need to write separate test functions/methods. Instead, one simply runs the function/method under test in a Python shell, then copies the expected results and pastes them in the docstring that corresponds to the tested function.

In my example, I simply added the following docstrings to the post_new_entry and delete_all_entries methods of the Blogger class:

def post_new_entry(self, title, content):
    """
    >>> blog = get_blog()
    >>> title = "Test title"
    >>> content = "Test content"
    >>> init_num_entries = blog.get_num_entries()
    >>> rc = blog.post_new_entry(title, content)
    >>> print rc
    True
    >>> num_entries = blog.get_num_entries()
    >>> num_entries == init_num_entries + 1
    True
    """

def delete_all_entries(self):
    """
    >>> blog = get_blog()
    >>> blog.delete_all_entries()
    >>> print blog.get_num_entries()
    0
    """

I then added the following lines to the __main__ section of the Blogger module:

if __name__ == "__main__":
import doctest
doctest.testmod()
Now the Blogger module is "doctest-ready". All you need to do at this point is run Blogger.py:
# python Blogger.py

In this case, the fact that we have no output is a good thing. Doctest-based tests do not print anything by default when the tests pass.

API complexity

There is no API! Note that doctest-enabled docstrings can contain any other text that is needed for documentation purposes.

However, there are some caveats associated with the way doctest interprets the docstrings (most of the following bullet points are lifted verbatim from the doctest documentation):
  • any expected output must immediately follow the final '>>> ' or '... ' line containing the code, and the expected output (if any) extends to the next '>>> ' or all-whitespace line
  • expected output cannot contain an all-whitespace line, since such a line is taken to signal the end of expected output
  • output to stdout is captured, but not output to stderr (exception tracebacks are captured via a different means)
  • doctest is serious about requiring exact matches in expected output. If even a single character doesn't match, the test fails (so pay attention to those white spaces at the end of your copied-and-pasted lines!)
  • the exact match requirement means that the output must be the same on every run, so in the output that you capture you should try not to:
    • print a dictionary, because the order of the items can vary from one run to the other (see the sketch after this list)
    • operate with floating-point numbers, because the precision can vary across platforms
    • print hard-coded object addresses, such as <__main__.C instance at 0x...>
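
The dictionary caveat, for instance, is easy to work around by forcing a deterministic order; here is a minimal sketch:

>>> d = {"b": 2, "a": 1}
>>> for key in sorted(d.keys()):
...     print key, d[key]
a 1
b 2
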
Test execution customization

The output of a doctest run can be made more verbose by means of the -v flag. Here is an example:
# python Blogger.py -v

Trying:
blog = get_blog()
Expecting nothing
ok
Trying:
title = "Test title"
Expecting nothing
ok
Trying:
content = "Test content"
Expecting nothing
ok
Trying:
init_num_entries = blog.get_num_entries()
Expecting nothing
ok
Trying:
rc = blog.post_new_entry(title, content)
Expecting nothing
ok
Trying:
print rc
Expecting:
True
ok
Trying:
num_entries = blog.get_num_entries()
Expecting nothing
ok
Trying:
num_entries == init_num_entries + 1
Expecting:
True
ok
Trying:
blog = get_blog()
Expecting nothing
ok
Trying:
blog.delete_all_entries()
Expecting nothing
ok
Trying:
print blog.get_num_entries()
Expecting:
0
ok
25 items had no tests:
__main__
__main__.BlogParams
__main__.BlogParams.__init__
__main__.Blogger
__main__.Blogger.__init__
__main__.Blogger.delete_entry_by_url
__main__.Blogger.delete_nth_entry
__main__.Blogger.get_feed_posting_host
__main__.Blogger.get_feed_posting_url
__main__.Blogger.get_nonce
__main__.Blogger.get_nth_entry
__main__.Blogger.get_nth_entry_content
__main__.Blogger.get_nth_entry_content_strip_html
__main__.Blogger.get_nth_entry_title
__main__.Blogger.get_nth_entry_url
__main__.Blogger.get_num_entries
__main__.Blogger.get_post_headers
__main__.Blogger.get_tagline
__main__.Blogger.get_title
__main__.Blogger.refresh_feed
__main__.Blogger.snooze
__main__.Entry
__main__.Entry.__cmp__
__main__.Entry.__init__
__main__.get_blog
2 items passed all tests:
3 tests in __main__.Blogger.delete_all_entries
8 tests in __main__.Blogger.post_new_entry
11 tests in 27 items.
11 passed and 0 failed.
Test passed.
The amount of output seems a bit too verbose to me, but in any case it gives a feeling for how doctest actually runs the tests.

One other important customization that can be done for doctest execution is to place the docstrings in a separate file. This can be beneficial when the docstrings become too large and start detracting from the clarity of the code under test instead of adding to it. Having separate doctest files scales better in my opinion (and this seems to be the direction the Zope project is heading with their test strategy, according again to Jim Fulton's PyCon 2004 presentation).

As an example, consider the following text file, which I saved as testfile_blogger:

Test for post_new_entry():

>>> from Blogger import get_blog
>>> blog = get_blog()
>>> title = "Test title"
>>> content = "Test content"
>>> init_num_entries = blog.get_num_entries()
>>> rc = blog.post_new_entry(title, content)
>>> print rc
True
>>> num_entries = blog.get_num_entries()
>>> assert num_entries == init_num_entries + 1

Test for delete_all_entries():

>>> blog = get_blog()
>>> blog.delete_all_entries()
>>> print blog.get_num_entries()
0
Note that free-flowing text can coexist with the actual output and there is no need to use quotes. This is especially advantageous for interspersing the output with descriptions of test scenarios, special boundary cases, etc.

To have doctest run the tests in this file, you need to put the following 2 lines either in their own module, or in place of the 2 lines at the end of the Blogger module:
import doctest

doctest.testfile("testfile_blogger")

I chose to save these lines in a separate file called doctest_testfile.py. Here is the result of the test run in this case:

# python doctest_testfile.py -v
Trying:
from Blogger import get_blog
Expecting nothing
ok
Trying:
blog = get_blog()
Expecting nothing
ok
Trying:
title = "Test title"
Expecting nothing
ok
Trying:
content = "Test content"
Expecting nothing
ok
Trying:
init_num_entries = blog.get_num_entries()
Expecting nothing
ok
Trying:
rc = blog.post_new_entry(title, content)
Expecting nothing
ok
Trying:
print rc
Expecting:
True
ok
Trying:
num_entries = blog.get_num_entries()
Expecting nothing
ok
Trying:
num_entries == init_num_entries + 1
Expecting:
True
ok
Trying:
blog = get_blog()
Expecting nothing
ok
Trying:
blog.delete_all_entries()
Expecting nothing
ok
Trying:
print blog.get_num_entries()
Expecting:
0
ok
1 items passed all tests:
12 tests in testfile_blogger
12 tests in 1 items.
12 passed and 0 failed.
Test passed.

Test fixture management

doctest does not provide any set-up/tear-down hooks for managing test fixture state (although this feature is being considered for inclusion in a future release). This can sometimes be an advantage: there are cases where you do not need each of your test methods to be independent of the others, and you do not want the overhead of setting up and tearing down state for each test run.

Test organization and reuse

A new feature of doctest in Python 2.4 is the ability to piggyback on unittest's suite management capabilities. To quote from the doctest documentation:

As your collection of doctest'ed modules grows, you'll want a way to run all their doctests systematically. Prior to Python 2.4, doctest had a barely documented Tester class that supplied a rudimentary way to combine doctests from multiple modules. Tester was feeble, and in practice most serious Python testing frameworks build on the unittest module, which supplies many flexible ways to combine tests from multiple sources. So, in Python 2.4, doctest's Tester class is deprecated, and doctest provides two functions that can be used to create unittest test suites from modules and text files containing doctests. These test suites can then be run using unittest test runners.

The two doctest functions that can create unittest test suites are DocFileSuite, which takes a path to a file as a parameter, and DocTestSuite, which takes a module containing test cases as a parameter. I'll show an example of using DocFileSuite. I saved the following lines in a file called doctest2unittest_blogger.py:
import unittest
import doctest

suite = unittest.TestSuite()
suite.addTest(doctest.DocFileSuite("testfile_blogger"))
unittest.TextTestRunner().run(suite)

I passed to DocFileSuite the path to the testfile_blogger file that contains the doctests for 2 of the Blogger methods. Running doctest2unittest_blogger produces:

python doctest2unittest_blogger.py
.
----------------------------------------------------------------------
Ran 1 test in 24.768s

OK
Note that this is unittest-specific output. As far as unittest is concerned, it executed only 1 test, the one we added via the suite.addTest() call. We can increase the verbosity by calling unittest.TextTestRunner(verbosity=2).run(suite):
# python doctest2unittest_blogger.py

Doctest: testfile_blogger ... ok

----------------------------------------------------------------------
Ran 1 test in 33.693s

OK
To convince myself that the doctest tests are really being run, I edited testfile_blogger and changed the expected return codes from True to False, and the last expected value from 0 to 1. Now both doctest tests should fail:
# python doctest2unittest_blogger.py

Doctest: testfile_blogger ... FAIL

======================================================================
FAIL: Doctest: testfile_blogger
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python2.4/doctest.py", line 2152, in runTest
raise self.failureException(self.format_failure(new.getvalue()))
AssertionError: Failed doctest test for testfile_blogger
File "testfile_blogger", line 0

----------------------------------------------------------------------
File "testfile_blogger", line 9, in testfile_blogger
Failed example:
print rc
Expected:
False
Got:
True
----------------------------------------------------------------------
File "testfile_blogger", line 19, in testfile_blogger
Failed example:
print blog.get_num_entries()
Expected:
1
Got:
0


----------------------------------------------------------------------
Ran 1 test in 25.691s

FAILED (failures=1)
As you can see, it's pretty easy to use the strong unittest test aggregation/organization mechanism and combine it with doctest-specific tests.
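
As far as I can tell from the Python 2.4 documentation, DocFileSuite and DocTestSuite also accept setUp and tearDown keyword arguments, which partially offsets the lack of fixture hooks in doctest itself. Each callable is invoked with the DocTest object, whose globs dictionary is visible to the examples. A sketch, under that assumption:

import unittest
import doctest

def set_up(test):
    # called with the DocTest object before the file's examples run;
    # anything placed in test.globs is visible to the examples
    test.globs["scratch"] = []

def tear_down(test):
    test.globs.clear()

suite = doctest.DocFileSuite("testfile_blogger", setUp=set_up, tearDown=tear_down)
unittest.TextTestRunner().run(suite)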

As a matter of personal taste, I prefer to write my unit tests using the unittest framework, since for me it is more conducive to test-driven development. I write a test, watch it fail, then write the code for making the test pass. It is an organic process that sort of grows on you and changes your whole outlook to code design and development. I find that I don't get the same results with doctest. Maybe I'm just not used to playing with the Python shell that much while I'm developing. For me, the biggest plus of using doctest comes from having top-notch documentation for my code. This style of testing is rightly called "literate testing" or "executable documentation" by the doctest folks. However, I only copy and paste the expected output into the docstrings AFTER I know that my code is working well (because it was already unit-tested with unittest).

Assertion syntax

There is no special syntax for assertions in doctest. Most of the time, assertions will not even be necessary. For example, in order to verify that posting a new entry increments the number of entries by 1, I simply included these 2 lines in the corresponding docstring:
>>> num_entries == init_num_entries + 1

True
Dealing with exceptions

I'll use an example similar to the one I used for the unittest discussions. Here are some simple doctest tests for the sort() list method:

def test_ascending_sort():
    """
    >>> a = [5, 2, 3, 1, 4]
    >>> a.sort()
    >>> a
    [1, 2, 3, 4, 5]
    """

def test_custom_sort():
    """
    >>> def int_compare(x, y):
    ...     x = int(x)
    ...     y = int(y)
    ...     return x - y
    ...
    >>> a = [5, 2, 3, 1, 4]
    >>> a.sort(int_compare)
    >>> print a
    [1, 2, 3, 4, 5]
    >>> b = ["1", "2", "10", "20", "100"]
    >>> b.sort()
    >>> b
    ['1', '10', '100', '2', '20']
    >>> b.sort(int_compare)
    >>> b
    ['1', '2', '10', '20', '100']
    """

def test_sort_reverse():
    """
    >>> a = [5, 2, 3, 1, 4]
    >>> a.sort()
    >>> a.reverse()
    >>> a
    [5, 4, 3, 2, 1]
    """

def test_sort_exception():
    """
    >>> a = [5, 2, 3, 1, 4]
    >>> a.sort(int_compare)
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
    NameError: name 'int_compare' is not defined
    """

if __name__ == "__main__":
    import doctest
    doctest.testmod()

Note that for testing exceptions, I simply copied and pasted the traceback output. doctest will look at the line starting with Traceback, will ignore any lines that contain details likely to change (such as file names and line numbers), and finally will interpret the lines starting with the exception type. Both the exception type (NameError in this case) and the exception details (which can span multiple lines) are matched against the actual output.

The doctest documentation recommends omitting traceback stack details and replacing them by an ellipsis (...), as they are ignored anyway by the matching mechanism.
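
For example, an exception test with the stack details elided looks like this (the error message is the one Python prints for list.remove):

>>> a = [5, 2, 3, 1, 4]
>>> a.remove(6)
Traceback (most recent call last):
    ...
ValueError: list.remove(x): x not in list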

To summarize, here are some Pros and Cons of using the doctest framework.

doctest Pros
  • available in the Python standard library
  • no API to remember, just copy and paste output from shell session
  • flexibility in test execution via command-line arguments
  • perfect way to keep documentation in sync with code
  • tests can be kept in separate files which can also contain free-flowing descriptions of test scenarios, special boundary cases, etc.
doctest Cons
  • output matching mechanism mandates that output must be the same on every run
  • no provisions for test fixture/state management
  • provides test organization only if used in conjunction with the unittest framework
  • does not seem very conducive to test-driven development

Python unit testing part 1: the unittest module

Python developers who are serious about testing their code are fortunate to have a choice between at least three unit test frameworks: unittest, doctest and py.test. I'll discuss these frameworks, focusing on features such as availability, ease of use, API complexity, test execution customization, test fixture management, test reuse and organization, assertion syntax, and dealing with exceptions. This post is the first in a series of three. It discusses the unittest module.

The SUT (software under test) I'll use in this discussion is a simple Blog management application, based on the Universal Feed Parser Python module written by Mark Pilgrim of Dive Into Python fame. I discussed this application in a previous PyFIT-related post. I implemented the blog management functionality in a module called Blogger (all the source code used in this discussion can be found here.)

unittest

Availability

The unittest module (called PyUnit by Steve Purcell, its author) has been part of the Python standard library since version 2.1.

Ease of use

Since unittest is based on JUnit, people familiar with the xUnit frameworks will have no difficulty picking up the unittest API. Due to the JUnit heritage, some Python pundits consider unittest too "java-esque" and not "pythonic" enough. I think the opinions are split, though. I tried to initiate a discussion on this topic at comp.lang.python, but I didn't have much success.

API complexity

The canonical way of writing unittest tests is to derive a test class from unittest.TestCase. The test class exists in its own module, separate from the module containing the SUT. Here is a short example of a test class for the Blogger module. I saved the following in a file called unittest_blogger.py:

import unittest
import Blogger

class testBlogger(unittest.TestCase):
    """
    A test class for the Blogger module.
    """

    def setUp(self):
        """
        set up data used in the tests.
        setUp is called before each test function execution.
        """
        self.blogger = Blogger.get_blog()

    def testGetFeedTitle(self):
        title = "fitnessetesting"
        self.assertEqual(self.blogger.get_title(), title)

    def testGetFeedPostingURL(self):
        posting_url = "http://www.blogger.com/atom/9276918"
        self.assertEqual(self.blogger.get_feed_posting_url(), posting_url)

    def testGetFeedPostingHost(self):
        posting_host = "www.blogger.com"
        self.assertEqual(self.blogger.get_feed_posting_host(), posting_host)

    def testPostNewEntry(self):
        init_num_entries = self.blogger.get_num_entries()
        title = "testPostNewEntry"
        content = "testPostNewEntry"
        self.assertTrue(self.blogger.post_new_entry(title, content))
        self.assertEqual(self.blogger.get_num_entries(), init_num_entries + 1)
        # Entries are ordered most-recent first
        # Newest entry should be first
        self.assertEqual(title, self.blogger.get_nth_entry_title(1))
        self.assertEqual(content, self.blogger.get_nth_entry_content_strip_html(1))

    def testDeleteAllEntries(self):
        self.blogger.delete_all_entries()
        self.assertEqual(self.blogger.get_num_entries(), 0)

def suite():
    suite = unittest.TestSuite()
    suite.addTest(unittest.makeSuite(testBlogger))
    return suite

if __name__ == '__main__':
    #unittest.main()

    suiteFew = unittest.TestSuite()
    suiteFew.addTest(testBlogger("testPostNewEntry"))
    suiteFew.addTest(testBlogger("testDeleteAllEntries"))
    #unittest.TextTestRunner(verbosity=2).run(suiteFew)
    unittest.TextTestRunner(verbosity=2).run(suite())

A few API-related things to note in the code above:
  • test method names that start with "test" are automatically invoked by the framework
  • each test method is executed independently from all other methods
  • unittest.TestCase provides a setUp method for setting up the fixture, and a tearDown method for doing necessary clean-up
    • setUp is automatically called by TestCase before each test method is invoked
    • tearDown is automatically called by TestCase after each test method has finished
  • unittest.TestCase also provides custom assertions (for example assertEqual, assertTrue, assertNotEqual) that generate more meaningful error messages than the default Python assertions
All in all, not a huge number of APIs to remember, but it's enough to draw some people away from using unittest. For example, in his PyCon 2004 presentation, Jim Fulton complains that unittest has too much support for abstraction, which makes the test code's intent less clear, while making it look too different from the SUT code.

Test execution customization

The canonical way of running tests in unittest is to include this code at the end of the module containing the test class:

if __name__ == '__main__':
    unittest.main()

By default, unittest.main() builds a TestSuite object containing all the tests whose method names start with "test", then it invokes a TextTestRunner which executes each test method and prints the results to stderr.

Let's try it with unittest_blogger:

# python unittest_blogger.py
.....
----------------------------------------------------------------------
Ran 5 tests in 10.245s

OK

The default output is pretty terse. Verbosity can be increased by passing a -v flag at the command line:

# python unittest_blogger.py -v
testDeleteAllEntries (__main__.testBlogger) ... ok
testGetFeedPostingHost (__main__.testBlogger) ... ok
testGetFeedPostingURL (__main__.testBlogger) ... ok
testGetFeedTitle (__main__.testBlogger) ... ok
testPostNewEntry (__main__.testBlogger) ... ok


----------------------------------------------------------------------
Ran 5 tests in 17.958s

OK

One note here: the order in which the tests are run is based on the alphanumerical order of their names, which can sometimes be annoying.

Individual test cases can be run by simply specifying their names (prefixed by the test class name) on the command line:

# python unittest_blogger.py testBlogger.testGetFeedPostingHost testBlogger.testGetFeedPostingURL
..
----------------------------------------------------------------------
Ran 2 tests in 0.053s

OK
In conclusion, it's fair to say that unittest offers a lot of flexibility in test case execution.

Test fixture management

I already mentioned that unittest.TestCase provides the setUp and tearDown methods that can be used in derived test classes in order to create/destroy "test fixtures", i.e. environments where data is set up so that each test method can act on it in isolation from all other test methods. In general, the setUp/tearDown methods are used for creating/destroying database connections, opening/closing files and other operations that need to maintain state during the test run.

In my unittest_blogger example, I'm using the setUp method for creating a Blogger object that can then be referenced by all test methods in my test class. Note that setUp and tearDown are called by the unittest framework before and after each test method is called. This ensures test independence, so that data created by a test does not interfere with data used by another test.
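
My test class doesn't need any clean-up, but a matched setUp/tearDown pair would look something like this sketch (the scratch file is invented for illustration):

import os
import unittest

class TestWithTearDown(unittest.TestCase):
    def setUp(self):
        # runs before each test method
        self.path = "scratch.txt"
        self.scratch = open(self.path, "w")

    def tearDown(self):
        # runs after each test method, whether it passed or failed
        self.scratch.close()
        os.remove(self.path)

    def testWrite(self):
        self.scratch.write("hello")
        self.assertTrue(not self.scratch.closed)

if __name__ == "__main__":
    unittest.main()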

Test organization and reuse

The unittest framework makes it easy to aggregate individual tests into test suites. There are several ways to create test suites. The easiest way is similar to this:

def suite():
    suite = unittest.TestSuite()
    suite.addTest(unittest.makeSuite(testBlogger))
    return suite

Here I created a TestSuite object, then I used the makeSuite helper function to build a test suite out of all tests whose names start with "test". I added the resulting suite to the initial TestSuite object via the addTest method.

A suite can also be created from individual tests, by instantiating the test class with the name of a test method (which in this case does not have to start with test) and passing the instance to addTest. Here is a fragment from unittest_blogger.py:

suiteFew = unittest.TestSuite()
suiteFew.addTest(testBlogger("testPostNewEntry"))
suiteFew.addTest(testBlogger("testDeleteAllEntries"))

In order to run a given suite, I used a TextTestRunner object:

unittest.TextTestRunner().run(suiteFew)
unittest.TextTestRunner(verbosity=2).run(suite())
The first line runs a TextTestRunner with the default terse output and using the suiteFew suite, which contains only 2 tests.

The second line increases the verbosity of the output, then runs the suite returned by the suite() method, which contains all tests starting with "test" (and all of them do in my example).

The suite mechanism also allows for test reuse across modules. Say for example that I have another test class, which tests some properties of the sort() list method. I saved the following in a file called unittest_sort.py:
import unittest


class TestSort(unittest.TestCase):

    def setUp(self):
        self.alist = [5, 2, 3, 1, 4]

    def test_ascending_sort(self):
        self.alist.sort()
        self.assertEqual(self.alist, [1, 2, 3, 4, 5])

    def test_custom_sort(self):
        def int_compare(x, y):
            x = int(x)
            y = int(y)
            return x - y
        self.alist.sort(int_compare)
        self.assertEqual(self.alist, [1, 2, 3, 4, 5])

        b = ["1", "2", "10", "20", "100"]
        b.sort()
        self.assertEqual(b, ['1', '10', '100', '2', '20'])
        b.sort(int_compare)
        self.assertEqual(b, ['1', '2', '10', '20', '100'])

    def test_sort_reverse(self):
        self.alist.sort()
        self.alist.reverse()
        self.assertEqual(self.alist, [5, 4, 3, 2, 1])

    def test_sort_exception(self):
        try:
            self.alist.sort(int_compare)
        except NameError:
            pass
        else:
            self.fail("Expected a NameError")
        self.assertRaises(ValueError, self.alist.remove, 6)

def suite():
    suite = unittest.TestSuite()
    suite.addTest(unittest.makeSuite(TestSort))
    return suite

if __name__ == "__main__":
    unittest.main()

I can now run the tests in both unittest_blogger and unittest_sort by means of a test suite that aggregates the test suites defined in each of the 2 modules:

# cat unittest_aggregate.py
import unittest
import unittest_sort
import unittest_blogger

suite1 = unittest_sort.suite()
suite2 = unittest_blogger.suite()

suite = unittest.TestSuite()
suite.addTest(suite1)
suite.addTest(suite2)
unittest.TextTestRunner(verbosity=2).run(suite)

# python unittest_aggregate.py
test_ascending_sort (unittest_sort.TestSort) ... ok
test_custom_sort (unittest_sort.TestSort) ... ok
test_sort_exception (unittest_sort.TestSort) ... ok
test_sort_reverse (unittest_sort.TestSort) ... ok
testDeleteAllEntries (unittest_blogger.testBlogger) ... ok
testGetFeedPostingHost (unittest_blogger.testBlogger) ... ok
testGetFeedPostingURL (unittest_blogger.testBlogger) ... ok
testGetFeedTitle (unittest_blogger.testBlogger) ... ok
testPostNewEntry (unittest_blogger.testBlogger) ... ok

----------------------------------------------------------------------
Ran 9 tests in 17.873s

OK

Assertion syntax

As I said previously, unittest provides its own custom assertions. Here are some of the reasons for this choice:
  • if tests are run with the optimization option turned on, the standard Python assert statements will be skipped; for this reason, unittest provides the assert_ method, which is equivalent to the standard assert but will not be optimized away
  • the output of the standard Python assert statements does not show the expected and actual values that are compared
For example, let's make the testDeleteAllEntries test fail by comparing the value of get_num_entries() with 1 instead of 0:

def testDeleteAllEntries(self):
    self.blogger.delete_all_entries()
    self.assertEqual(self.blogger.get_num_entries(), 1)

Running the test will produce a failure:
# python unittest_blogger.py testBlogger.testDeleteAllEntries

F
======================================================================
FAIL: testDeleteAllEntries (__main__.testBlogger)
----------------------------------------------------------------------
Traceback (most recent call last):
File "unittest_blogger.py", line 42, in testDeleteAllEntries
self.assertEqual(self.blogger.get_num_entries(), 1)
AssertionError: 0 != 1

----------------------------------------------------------------------
Ran 1 test in 0.082s

FAILED (failures=1)
The output of the AssertionError is enhanced with the values being compared: 0 != 1. Now instead of assertEqual let's use assert_:

def testDeleteAllEntries(self):
    self.blogger.delete_all_entries()
    self.assert_(self.blogger.get_num_entries() == 1)

The output is now:
# python unittest_blogger.py testBlogger.testDeleteAllEntries

F
======================================================================
FAIL: testDeleteAllEntries (__main__.testBlogger)
----------------------------------------------------------------------
Traceback (most recent call last):
File "unittest_blogger.py", line 43, in testDeleteAllEntries
self.assert_(self.blogger.get_num_entries() == 1)
AssertionError

----------------------------------------------------------------------
Ran 1 test in 0.077s

FAILED (failures=1)
There's no indication of what went wrong when the actual and the expected values were compared.

Dealing with exceptions

The unittest_sort module listed above has 2 examples of testing for exceptions. In the test_sort_exception method, I first test that calling sort with an undefined function as a sort function results in a NameError exception. The test will pass only when NameError is raised and will fail otherwise:

try:
    self.alist.sort(int_compare)
except NameError:
    pass
else:
    self.fail("Expected a NameError")

A more concise way of testing for exceptions is to use the assertRaises method, passing it the expected exception type and the function/method to be called, followed by its arguments:

self.assertRaises(ValueError, self.alist.remove, 6)
To summarize, here are some Pros and Cons of using the unittest framework.

unittest Pros
  • available in the Python standard library
  • easy to use by people familiar with the xUnit frameworks
  • flexibility in test execution via command-line arguments
  • support for test fixture/state management via set-up/tear-down hooks
  • strong support for test organization and reuse via test suites
unittest Cons
  • xUnit flavor may be too strong for "pure" Pythonistas
  • API can get in the way and can make the test code intent hard to understand
  • tests can end up having a different look-and-feel from the code under test
  • tests are executed in alphanumerical order
  • assertions use custom syntax

Monday, January 17, 2005

Telecommuting via ssh tunneling

Sometimes you want to be able to work on your work machine from home, but the company firewall only allows you ssh access into one of the servers. That's all you need in order to gain remote access (for example via VNC) to your work machine. An additional benefit is that the network traffic between your home and your work machines will be encrypted, so you can use this technique to secure plain-text protocols such as POP or IMAP.

Here is the scenario I'll cover:
  • You have ssh access to a server at work which is behind the corporate firewall and can be reached via an external IP address or a name such as gateway.corp.com; your account on that server is called gateway_account
  • You need to get remote access into a machine running VNC called work_machine which has an internal IP address such as 192.168.1.100
  • You have a home Linux/OS X box called home_machine
Here's what you need to do:

1. Open a shell session on home_machine and run the following command:

ssh -l gateway_account -L 5900:192.168.1.100:5900 -g -C gateway.corp.com

This command creates an ssh tunnel which forwards the local port 5900 (the default VNC port) to the remote port 5900 on machine 192.168.1.100, using the account gateway_account on machine gateway.corp.com, with data compression enabled (-C).

If you have a fast link into the company network, you can omit the -C switch. The -g switch allows other hosts, not just the machine where the tunnel runs, to connect to the locally forwarded port.

2. Run the VNC client on home_machine and connect to localhost:0. This will actually connect to the local end of the ssh tunnel, which will then forward the connection to the remote end on 192.168.1.100. You should now have a VNC connection into work_machine.

This recipe can be used for other types of remote access, for example:
  • mail: replace port 5900 with either 110 (POP3) or 143 (IMAP) ; replace the IP address of work_machine with the IP address of the corporate mail server
  • Remote Desktop: replace port 5900 with port 3389
In all cases, you will connect to localhost (or 127.0.0.1) via the corresponding application (mail reader, Remote Desktop app, etc.) and you will be tunneled via ssh to the remote machine behind the corporate firewall.
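
For example, here is what an IMAP variant of the step 1 command might look like (mail.corp.com is a made-up name standing in for your internal mail server; I picked local port 1143 so that you don't need root privileges to bind the standard port 143):

ssh -l gateway_account -L 1143:mail.corp.com:143 -C gateway.corp.com

You would then point your mail reader at localhost, port 1143.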

This technique can be abused by hackers, so corporate firewalls should really allow ssh access only from known IP addresses (which of course can be spoofed, so this is only a weak form of protection). This is one reason why many companies offer only VPN access into their internal networks, so the ssh tunneling technique will be of no use in this case.

Note: if your work_machine runs Windows and you want to connect via VNC, you might have problems if your home_machine runs Linux. I've seen the remote Windows VNC server crash when a Linux client tried to connect to it. In this scenario, you have a better chance if you:

1. run your ssh tunnel on your Linux box at home as specified above
2. go to another home machine running Windows, start the VNC client there and connect to linux_home_machine:0.

Here are some other articles that describe ssh tunneling:

O'Reilly article
SSH.com article


Saturday, January 08, 2005

PyFIT Tutorial Part 3

I will expand here on Part 1 and Part 2 of the PyFIT tutorial and I'll show how to use SetUp pages in FitNesse. I'll also clean up the code in some of the fixtures I wrote for the Blog Management application that I used as the SUT (Software Under Test).

First, some code cleanup. I had a lot of hard-coded paths in my fixtures. All fixtures used to start with:

from fit.ColumnFixture import ColumnFixture
import sys
blogger_path = "C:\\eclipse\\workspace\\blogger"
sys.path.append(blogger_path)
import Blogger

This is clearly sub-optimal and requires a lot of copy-and-paste among modules. To simplify it, I turned the blogger directory into a package by simply adding to that directory an empty file called __init__.py. I also moved Blogger.py (the main functionality module) and its unit test module, testBlogger.py, to a subdirectory of blogger called src. I made that subdirectory a package too by adding to it another empty __init__.py file. Now each fixture module can do:

from fit.ColumnFixture import ColumnFixture
from blogger.src.Blogger import get_blog

There's one caveat here: the parent directory of the blogger directory -- in my case C:\eclipse\workspace -- needs to be somewhere on the Python module search path. In FitNesse it's easy to solve this issue by adding C:\eclipse\workspace to the classpath via this variable definition which will go on the main suite page:

!path C:\eclipse\workspace

When invoking the Python interpreter from a command line, one way of making sure that C:\eclipse\workspace is in the module search path is to add it to the PYTHONPATH environment variable, for example in a .bash_profile file. For our example though, the fixture modules are always invoked within the FitNesse/PyFIT framework, so we don't need to worry about PYTHONPATH.

Now to the SetUp page functionality. If you create this page as a sub-page of a suite, then every test page in that suite will automatically have SetUp prepended to it by FitNesse. I created a SetUp page (its URL is http://localhost/FitNesse.BlogMgmtSuite.SetUp) with the following content:

!|BloggerFixtures.Setup|
|setup?|
|true|

I also created the corresponding Setup.py fixture, with the following code:

from fit.ColumnFixture import ColumnFixture
from blogger.src.Blogger import get_blog

blog_manager = None

class Setup(ColumnFixture):
    _typeDict = {
        "setup": "Boolean"
    }

    def setup(self):
        global blog_manager
        blog_manager = get_blog()
        return (blog_manager != None)

def get_blog_manager():
    global blog_manager
    return blog_manager

The setup method invokes the get_blog function from the Blogger module in order to retrieve the common Blogger instance used by all the fixtures in our test suite. I did this because I wanted to encapsulate the common object creation in one fixture class (Setup), then have all the other fixture classes call the get_blog_manager function from the Setup module. Another benefit is that only the Setup module needs to know about the physical location of the Blogger module, via the line:

from blogger.src.Blogger import get_blog

All other fixture classes need only do:

from Setup import get_blog_manager

since they are in the same directory with Setup.py.
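
For example, a hypothetical version of the DeleteAllEntries fixture mentioned further below could be as simple as this sketch (the column name and type mapping are invented; the Blogger methods are the ones used throughout this series):

from fit.ColumnFixture import ColumnFixture
from Setup import get_blog_manager

class DeleteAllEntries(ColumnFixture):
    _typeDict = {
        "entries_left": "Int"
    }

    def entries_left(self):
        # reuse the shared Blogger instance created by the Setup fixture
        blog_manager = get_blog_manager()
        blog_manager.delete_all_entries()
        return blog_manager.get_num_entries()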

A SetUp page can also be used to pass values to the application via methods in the Setup class. Assume we need to pass the path to a configuration file. One way of accomplishing this is to define a FitNesse variable in the SetUp page, like this:

!define CONFIG_PATH {C:\config}

The FitNesse syntax for referencing a variable is ${variable}, so we can pass it as an argument to our Setup fixture like this:

!|BloggerFixtures.Setup|${CONFIG_PATH}|
|setup?|
|true|

When the page is rendered by FitNesse, ${CONFIG_PATH} is automatically replaced with its value, so on the rendered page we'll see:

variable defined: CONFIG_PATH=C:\config

BloggerFixtures.Setup C:\config
setup?
true

In the Setup fixture class, we can get the value of the argument like this:

config_path = self.getArgs()[0]
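
Putting the pieces together, the setup method could consume the argument along these lines (a get_blog that accepts a configuration path is hypothetical -- the current get_blog takes no arguments):

class Setup(ColumnFixture):
    _typeDict = {
        "setup": "Boolean"
    }

    def setup(self):
        global blog_manager
        # getArgs() returns the cells that follow the fixture name
        # in the first row of the table
        config_path = self.getArgs()[0]
        blog_manager = get_blog(config_path)   # hypothetical signature
        return (blog_manager != None)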

One other thing we could do in the SetUp page is to include the DeleteAllEntries fixture, so that we can be sure that each test page will start with a clean slate in terms of the blog entries.

I moved the old code from parts 1 and 2 of the tutorial here. You can see the new code here.
