Wednesday 13 May 2009

Testing with Perl Catalyst

One of the things I really like about Catalyst is how easy it makes writing tests for your applications. There is documentation about testing in the Catalyst tutorial and a more general book on Perl Testing. Going into more detail, I would classify these tests into four levels: Model unit, Controller unit, Integration, and End-To-End.

Firstly, Model unit tests. The Model encapsulates business logic and data access, typically via the DBIx::Class Object-Relational Mapper talking to a database. Strictly speaking, these tests exist outside of Catalyst, although you may need to pick up database access credentials from your Catalyst application configuration file (conf/myapp.yml or similar) using Config::JFDI, which replicates what Catalyst::Plugin::ConfigLoader does but works outside Catalyst.
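
For example, a minimal sketch of reading the configuration outside Catalyst with Config::JFDI (the application name and the Model::MyModel config key are assumptions for illustration):
use strict;
use warnings;
use Config::JFDI;
# reads myapp.yml / myapp_local.yml the same way ConfigLoader would
my $config = Config::JFDI->new( name => 'MyApp' )->get;
# the DBIC connection details, assuming they live under the model's config key
my $connect_info = $config->{'Model::MyModel'}{connect_info};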

Usually, I'd expect to start off with tests named 00_setup_database.t and zz_teardown_database.t. These create a known database before unit testing and then tear it down after. You can use a utility like DBIx::Class::Fixtures to do this.
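
As a simpler alternative to full DBIx::Class::Fixtures, here is a minimal sketch of 00_setup_database.t that deploys the schema into a scratch SQLite database and inserts the TEST01 client that the schema test below expects; the SQLite path and the column names are assumptions and would need to match your test configuration.
use strict;
use warnings;
use Test::More tests => 2;
BEGIN { use_ok 'MyApp::Schema' }
my $schema = MyApp::Schema->connect('dbi:SQLite:t/var/test.db');
$schema->deploy({ add_drop_table => 1 });   # create the tables from the DBIC schema
$schema->populate('Client', [
    [qw/ client_code name /],
    [ 'TEST01', 'A Test Client' ],
]);
ok( $schema->resultset('Client')->count, 'fixture data loaded' );
# zz_teardown_database.t can simply remove t/var/test.db again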
Then I'd have 10_schema.t (and 11_schema2.t if I am using more than one schema) to test that I can connect to a database schema, that the sources (tables) are as expected and that I can retrieve a sample datum.
use strict;
use warnings;
use Test::More tests => 5;
BEGIN { use_ok 'MyApp::Model::MyModel' }
BEGIN { use_ok 'MyApp::Schema' }
# pull the connection details out of the Catalyst model configuration
my $connect_info = MyApp::Model::MyModel->config->{connect_info};
diag "connecting schema to " . $connect_info->[0];
my $schema = MyApp::Schema->connect( @$connect_info );
# the schema should know about at least one source (table)
my @sources = $schema->sources();
ok( scalar @sources > 0, 'found schema sources: ' . join(', ', @sources) );
# retrieve a known fixture row and check its contents
my $cl = $schema->resultset('Client')->find({ client_code => 'TEST01' });
ok( $cl, 'find client with code TEST01' );
is( $cl->name, 'A Test Client', 'client name as expected' );
To this I'd add 20_user.t, 21_client.t (and so on), one per business object. Often these relate directly to a database table, but sometimes they are a virtual class spanning several tables with tricky logic built in, for example to calculate the matching availability of resources over time. With a DBIx::Class schema you can do this and hide the implementation details inside the class (or other classes it calls) using a chained resultset approach. Back in the unit test, you test the public interface each schema class exposes: for example, create a client record, then check that subsidiary records have been created in the other tables. Once you obtain a DBIC $schema handle, as above, the rest is straightforward.
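
As a sketch of what such a test might look like, here is a hypothetical 21_client.t; the 'accounts' relationship and the rule that creating a Client also sets up a subsidiary account row are purely illustrative assumptions about the business logic.
use strict;
use warnings;
use Test::More tests => 2;
use MyApp::Model::MyModel;
use MyApp::Schema;
my $connect_info = MyApp::Model::MyModel->config->{connect_info};
my $schema = MyApp::Schema->connect( @$connect_info );
# exercise the public interface: creating a client...
my $client = $schema->resultset('Client')->create({
    client_code => 'TEST02',
    name        => 'Another Test Client',
});
ok( $client, 'created client TEST02' );
# ...should also have created a subsidiary account record (hypothetical rule)
ok( $client->search_related('accounts')->count >= 1,
    'subsidiary account record created' );
$client->delete;   # clean up so the test can be re-run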

Secondly, Controller unit tests. In the Catalyst world you use Catalyst::Test to achieve this. It sets up a fake Catalyst context so you can test actions. The manual page and the tutorial above explain this clearly so I won't go into detail. You add a unit test for each public action URL to ensure it dispatches to handlers as expected.

use Test::More tests => 2; 
BEGIN { use_ok 'Catalyst::Test', 'MyApp' } 
ok( request('/web/login')->is_success, 'Request should succeed' );
Thirdly, Integration testing. By this I mean running through some typical business workflow scenarios. In effect, this expresses the functional requirements and specification of the system as tests that exercise the system as a whole (database, controller and model code, subsidiary library code, view templates).

For a web application (bear in mind Catalyst does not have to be used purely for writing web apps) you can use Test::WWW::Mechanize::Catalyst. This drives your application through fake HTTP requests so you can test a sequence of actions as if you were going through a web server. So you might log in, retrieve a list of records, view the detail for one of them, update a record, then check that the list of records reflects the change. After the WWW::Mechanize steps you can then check the database to ensure the records are as expected from the system functional specification (a sketch of this follows the example below).

use Test::More;   
eval "use Test::WWW::Mechanize::Catalyst 'MyApp'";  
plan $@ ? ( skip_all => 'Test::WWW::Mechanize::Catalyst required' )      
  : ( tests => 36 ); 
ok( my $mech = Test::WWW::Mechanize::Catalyst->new, 'Created mech object' );   
# logout screen redirects to login screen  
$mech->get_ok( 'http://localhost/web/logout', 'logout' );  
$mech->content_contains('Log on using', 'logout redirects to login screen'); 
diag "check login authentication is required to access extranet screens";  
# turn off automatic redirect follow so we can check response code  
$mech->requests_redirectable([]); 
# 1. check redirect code and location
$mech->get( 'http://localhost/web/extranet/list/booking' ); 
is( $mech->response->code, 302, 'unauthed user is redirected away from page requiring auth' );
like( $mech->response->header('location'), qr|/web/login|, 'redirect is to login page' );
# 2. check all protected paths 
for (qw| index home booking |) {
  my $path = '/web/extranet/'.$_;
  $mech->get( 'http://localhost'.$path );
  is( $mech->response->code, 302, 'unauth redirect for '.$path );
}
# allow automatic redirect again  
$mech->requests_redirectable([qw/ GET HEAD POST /]); 
# 3. business workflow tests 
# get login screen  
$mech->get_ok( 'http://localhost/web/login', 'get login' );  
$mech->content_contains('Log on using', 'login contains Log on using');  
# login
$mech->submit_form( fields => { username => 'cust', password => 'cust' } );  
# check logged in successfully  
$mech->content_contains('Welcome to the', 'successfully logged in screen');  
diag "screen tests";  
$mech->get_ok( 'http://localhost/web/extranet/home', 'get home' );  
$mech->content_contains('Welcome to the', 'home contains welcome'); 
$mech->get_ok( 'http://localhost/web/extranet/list/booking', 'list/booking' ); 
$mech->content_contains('25843011', 'booking list contains booking 25843011');
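
To close the loop on checking the database afterwards, a short follow-up test might look like the sketch below; the Booking source and its booking_ref column are assumptions for illustration.
use strict;
use warnings;
use Test::More tests => 1;
use MyApp::Model::MyModel;
use MyApp::Schema;
my $connect_info = MyApp::Model::MyModel->config->{connect_info};
my $schema = MyApp::Schema->connect( @$connect_info );
# the booking listed through the web screens should also be visible directly in the database
ok( $schema->resultset('Booking')->find({ booking_ref => '25843011' }),
    'booking 25843011 present in the database' );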

Having written a range of unit tests, run them all under Devel::Cover to check how well you have covered your code, for example by running prove with HARNESS_PERL_SWITCHES=-MDevel::Cover set and then running cover to produce a report.

Fourthly, End-To-End testing. This uses a set of Integration tests to exercise the whole system in use: feeding in data and user actions, then checking the output data and files to make sure they are consistent. In the past I worked on a meter reading and billing system, and as part of a major release one of the tests was a manual one: set up a fixture database, type a set of pre-determined meter readings into the system from real remote handsets communicating over a WAN, process some batch runs on the central server, perform a few edits from the central server's Oracle screens, then generate the output billing records. You would expect the financial totals to balance and selected bills to reflect the combination of the meter readings over a time period. Most systems are not that complex, but the principle is to use real kit and real-life scenarios. You might run several versions and releases of real web browsers against a copy of your live environment on a staging server, then check the resulting screens, any PDFs generated, and the totals in your database.

That's it for today. Happy testing!

Cheers, Peter
