Puffin Users Guide

Abstract

Puffin is a web application testing framework. It allows you to test any web application as though it were a black box, with unknown internals. To test an application using puffin, you must configure the framework to your web application (through a single, XML-based configuration file) and construct one or more test plans. A test plan represents a sequence of one or more tasks which, in turn, are each made up of one or more test actions, each of which represents a single request to your web application. A test plan can be in the form of a simple list of raw test actions, or a more complex series of test action groupings (called tasks) with specified dependencies and repetitions. This user guide walks you through the configuration of the puffin framework and the construction of a series of test plans.

Introduction

Web applications are no longer simple form-based series of CGI scripts. Web applications today are complex, data-driven systems in which business processes involve several complex steps, each building on the last. With complexity comes the need for rigorous testing.

There are two forms of testing that go into complete end-to-end testing of an application: unit testing and functional testing. A unit test is a test case that demonstrates that a single component (or unit) executes its tasks successfully in a representative sample of possible scenarios. Unit testing starts with the system in a known state and typically ends in the same state.

A functional test tests the complete system from end to end, focusing not on the individual components that make up the system, but on their combination and cooperation to meet the business need for which you constructed the web application. Functional testing includes, by necessity, some level of system testing, in which the interactions of all components are tested, but its main focus is on testing whether or not the application meets business needs. It is not enough, for example, that a single dynamic web page successfully saves a given record; the process by which the user created that record must also be sound -- even if it involved the use of several pages.

There are several tools, both commercial and open-source, that assist in unit testing. Puffin is not one of those tools. Puffin helps you perform functional testing and does so by allowing you to create arbitrarily complex tests of your web application in the context of the business needs.

This user's guide shows you how to configure the puffin framework to your web application and then demonstrates the construction and execution of a series of test plans of various complexity.

Configuration: System Properties

Before you can execute test plans against your web application, you must configure the puffin framework for your system. There are two steps in configuring puffin. You must configure the overall system properties and then you must configure the possible test actions for your web application. In this section, we will cover all the system properties and their possible values.

All configuration values for the puffin framework are stored in a single XML file, called puffinConfig.xml by default (you can instruct puffin to use any file name for your config file using the --config.file=[filepathname] command line argument). We'll look at the test action section of the puffin config file in the next section. Here is the system properties section of the config file customized for our demo application, Puffin Excursions, Ltd.:

<puffin>
<!-- System configuration settings. -->
<system>
<server host="www.mydomain.com" port="80"/>
<frameworkLogging defaultPriority="WARN">
<handler type="StreamHandler">
<param name="msgFormat"><![CDATA[%(asctime)s %(name)s %(message)s]]></param> </handler>
</frameworkLogging>
<reportLogging>
<handler type="FileHandler">
<param name="msgFormat" eval="0"><![CDATA[%(message)s]]></param>
<param name="fileName" eval="0">puffinResults.log</param>
<param name="mode" eval="0"><![CDATA[a+]]></param>
</handler>
</reportLogging>
<failureAlertLogging>
<handler type="NTEventLogHandler">
<param name="msgFormat"><![CDATA[%(asctime)s %(message)s]]></param>
</handler>
<handler type="SMTPHandler">
<param name="msgFormat" eval="0"><![CDATA[%(message)s]]></param>
<param name="mailHost" eval="0">mail.weissinger.org</param>
<param name="fromAddr" eval="0"><![CDATA[keyton@weissinger.org]]></param>
<param name="toAddr" eval="0"><![CDATA[keyton@weissinger.org]]></param>
<param name="toAddr" eval="0"><![CDATA[keytonw@yahoo.com]]></param>
<param name="subject" eval="0">Puffin Test Report</param>
</handler>
</failureAlertLogging>
<systemFlags>
<flag name="security" value="1"/>
</systemFlags>
<resultsProcessor reportDetail="ALL"/>
<defaultResponseAnalyzerList>
<responseAnalyzer src="puffin" type="TimerResponseAnalyzer">
<param name="timeLimitSecs">10</param>
</responseAnalyzer>
<responseAnalyzer src="puffin" type="StatusResponseAnalyzer">
<param name="httpStatus">200</param>
<param name="httpStatus">302</param>
</responseAnalyzer>
</defaultResponseAnalyzerList>
<testPlanManager src="puffin" type="TestPlanManager"/>
<!-- <preTestDataLoad>
<param name='USER' eval='0'>user01</param>
<param name='PASSWORD' eval='0'>password</param>
</preTestDataLoad> -->
<autoInputs>
<input name="Cookie" type="HEADER" processor="DICT" defaultValue=""
antiActions="login" systemFlagDepends="security">
<param name="key" eval="0"><![CDATA[SECURITY_COOKIE]]></param>
</input>
</autoInputs>
<autoOutputs/>
</system>
...


We will now go into detail for each system property. Please note that if a given attribute or element value is listed as a series of values separated by the pipe character ("|"), then that means that any of the values are valid for that attribute or element.

Server

The first system property identifies the server on which your web application executes and the port on which it listens. This is self-explanatory. Here is the XML element of interest:

        <server host='www.mydomain.com' port='80'/>

System-Wide Flags

Next, there is a group of framework-wide flags. We will learn more about these system flags when we discuss test actions and their inputs and outputs. For now, you need only know that each flag represents a setting for the entire framework that can be set to true or false (1 or 0). In the examples, we will use a flag to specify whether or not security is on (requiring us to send a security token to the server on each call), but you could use these flags for any setting you wish. As mentioned, we will learn more about system flags when we discuss test actions below. For now, here is the format for a system flag:

<systemFlags>
   <flag name='flagName' value='0|1'/>
</systemFlags>

You can use as many system flags as you like. For example, the following snippet of XML represents the use of two system flags, one for security and one for a fictitious 'mySetting' flag:

<systemFlags>
<flag name='security' value='1'/>
<flag name='mySetting' value='0'/>
</systemFlags>

Logging and Results Processing

New with version 0.8.5 of the puffin testing framework, I've introduced Vinay Sajip's streamlined log4j-like logging system as the communication mechanism for the framework. No longer limited to simple print statements to the console or file writes, puffin now allows for very flexible logging of information from the framework itself, test plan reports, and -- another new feature in 0.8.5 -- failed task reports.

Vinay's logging project handles logging through logging channels that can be configured in many different ways (see his documentation at BLAH.com). The details of the logging system are outside the scope of this document. All you need to know is that you can configure a specific priority at which messages actually get logged (from DEBUG for the really chatty low-level stuff to FATAL for system-level error messages) and that you can configure each logging channel to log its messages via a wide variety of logging handlers. These handlers allow you to log to the command line, the file system, SMTP, etc. You can configure a logging channel to use one or more handlers, and puffin allows for this. We'll see this in a moment.

The Puffin framework utilizes three logging channels. The first handles logging messages for the framework itself. If you want to see my logging messages (useful if you run into problems), you can set the logging level to DEBUG -- otherwise, it defaults to WARN (more on this in a moment). The second handles logging of the test plan's execution report in total. This report details the entire test execution from beginning to end. To be fair, this is a slight deviation from what the logging system is for, but using Vinay's logging system here adds significant flexibility and I wanted that flexibility... The final logging channel handles alerts from those tasks in the test plan that fail for whatever reason. Separating the report channel from the failure alert channel allows you to record the results of your entire report in one way, but to get notices of failed tasks in another way.

Configuring these logging channels is simple and is almost exactly the same for all three logging channels. Here is the basic format:

<frameworkLogging defaultPriority="WARN">
   <handler type="StreamHandler">
      <param name="msgFormat"><![CDATA[%(asctime)s %(name)s %(message)s]]></param>
   </handler>
</frameworkLogging>

The defaultPriority attribute is used only for <frameworkLogging>, but otherwise the above is also the format for the report and failed task alert logging channels. The really important element here is the <handler> child node of the logging channel element. You can add as many handlers as you want. For details on the various handlers, look at Vinay's documentation. Currently, puffin allows the following handler types:

·    SMTPHandler

·    NTEventLogHandler

·    FileHandler

·    StreamHandler

Want more? Send me an email (keyton@weissinger.org) and I'll throw them in. The parameters are straightforward. Here are examples of all four different handler types:

<handler type="StreamHandler">
   <param name="msgFormat"><![CDATA[%(asctime)s %(name)s %(message)s]]></param>
</handler>
<handler type="FileHandler">
   <param name="msgFormat" eval="0"><![CDATA[%(message)s]]></param>
   <param name="fileName" eval="0">puffinResults.log</param>
   <param name="mode" eval="0"><![CDATA[a+]]></param>
</handler>
<handler type="NTEventLogHandler">
   <param name="msgFormat"><![CDATA[%(asctime)s %(message)s]]></param>
</handler>
<handler type="SMTPHandler">
   <param name="msgFormat" eval="0"><![CDATA[%(message)s]]></param>
   <param name="mailHost" eval="0">mail.mydomain.org</param>
   <param name="fromAddr" eval="0"><![CDATA[keyton@weissinger.org]]></param>
   <param name="toAddr" eval="0"><![CDATA[keyton@weissinger.org]]></param>
   <param name="toAddr" eval="0"><![CDATA[keytonw@yahoo.com]]></param>
   <param name="subject" eval="0">Puffin Test Report</param>
</handler>


The message format describes how puffin will format the logging message when it sends it to the logging channel. Note that the "message" in the case of the report or failure alert is part of the details from the execution of the test plan. See resultsprocessor.py for more details.

NOTE: To use the NTEventLogHandler, you must have the Win32 extensions installed.

How do you set the priority for the framework logging channel itself? There are two ways: Either change the value of the defaultPriority attribute for the frameworkLogging element or set the command line argument "--logging=[logging priority]." Setting the priority on the command line overrides the setting for defaultPriority. This allows you to set the logging for the framework on the fly. Here are the possible priority values:

·    FATAL

·    ERROR

·    WARN

·    INFO

·    DEBUG
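
For example, assuming the framework's launch script is named puffin.py (a hypothetical name -- substitute however you actually start puffin), the following command would force DEBUG-level framework logging for a single run, regardless of the defaultPriority value in the config file:

   python puffin.py --config.file=puffinConfig.xml --logging=DEBUG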

NOTE: Sometimes using a log4j-type logging system can be a bit confusing for beginners. Just remember this: if you set the priority to FATAL, then only FATAL messages will get to the logging system and be recorded on the console, in the log file, or wherever. If you set it to DEBUG, then EVERY message gets logged. So the higher the logging priority, the fewer the messages the system will actually process.

Detail Levels

As described above, the results of a test plan's execution are communicated through two logging channels: the report details are logged using the report logging channel and failure reports are logged using the failed task alert channel. The <resultsProcessor> system element indicates to the framework what level of detail you wish to log. The reportDetail attribute of the <resultsProcessor> element has the following possible values: SUMMARY, LIGHT, RESULTS, and ALL. The following table describes the information shown per detail level:

INFO SHOWN                                    SUMMARY   LIGHT   RESULTS   ALL
Task Info                                        X        X        X       X
Test Action Info                                          X        X       X
Response Analysis Info                                             X       X
Headers, Inputs, Outputs, Raw Response Doc                                 X

The test plan's execution report rolls all the information for the entire run into a single unified report. The failed task alerts are individual logging messages for each failed task in the test plan's execution. A common strategy is to write the full report to a file, but to have failed task alerts mailed to you.

Response Analyzers

OK. We have some base system settings and a way to communicate test results (with reports and failure alerts). Now we need to get into the details of the actual test execution. As I mentioned before, a test plan consists of one or more tasks, each of which, in turn, is made up of one or more test actions. Each test action represents a call to your web application and can have zero or more inputs that puffin will build into the test action's web request before the call is made and/or zero or more outputs that puffin will process once the response comes back from the web application. We will talk about the mechanics of this in a moment. For now, we will talk about the response itself.

The whole point of puffin is to make sure that a given set of test actions, when executed in a given order against your web application, each results in the "correct" response being sent back from the server. Each test action's response is analyzed by puffin for 'correctness.' What constitutes a correct response is largely up to you. Puffin allows you to analyze this response using one or more ResponseAnalyzers. A ResponseAnalyzer simply looks at the response that comes back from the server and determines whether the test action was a success or a failure. This result is communicated via an AnalysisResult object that allows for some more detail on the analysis, but the gist is a success or failure (see responseanalyzer.py for more details on both classes).

<responseAnalyzer>

Puffin allows you to configure one or more response analyzers that will be used for all test actions (called default response analyzers) and/or one or more response analyzers for each specific test action. Here is what the responseAnalyzer element will look like in your puffin config file:

<responseAnalyzer src='puffin' type='StatusResponseAnalyzer' runOnFirstExecution="1">
   <param name='httpStatus'>200</param>
   <param name='httpStatus'>302</param>
</responseAnalyzer>

You can use a response analyzer provided with puffin (see below) or one of your own. If you use your own, then you must set the value of the 'src' attribute of the <responseAnalyzer> element to 'custom,' extend the base ResponseAnalyzer class, and set its name as the value of the 'type' attribute. As in the case of the various logging handlers, the puffin framework will send any parameters into the constructor for the specified response analyzer as a paramDictionary. The response analyzers that come with puffin are in the responseanalyzer.py file. Your custom analyzers should be added to the extensions.py file.
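
For illustration only, a custom analyzer added to extensions.py might look like the following sketch. The analyze method name, the paramDictionary attribute, and the 'mustContain' parameter are assumptions made for this example -- check responseanalyzer.py for the real base-class hooks:

class BodyContainsResponseAnalyzer(ResponseAnalyzer):
   """Succeed only if a configured string appears in the raw response."""
   def analyze(self, actionResponse, tokenDictionary):
      # 'mustContain' would arrive via the usual <param> elements.
      target = self.paramDictionary.getParamValue('mustContain')
      return AnalysisResult(target in actionResponse.getRawResponseDoc())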

Notice the attribute 'runOnFirstExecution.' There will be times when you will not want Puffin to validate a test action's response the first time that test action is executed in a test plan (for example, when checking that inventory is decremented on a shopping site, you may want to run a test action presenting the list one time just to get the starting number and then validate that the number decrements in subsequent calls). This attribute (which defaults to TRUE (1) if not present), indicates that Puffin should or should not use this response analyzer on the first execution of a given test action. This works for both default and test action specific response analyzers.

Default Response Analyzers

You can set up as many default analyzers as you wish. The <system> sub-element that you use to configure default response analyzers is <defaultResponseAnalyzerList> and it looks like this:

<defaultResponseAnalyzerList>
   <responseAnalyzer src='puffin' type='TimerResponseAnalyzer'>
      <param name='timeLimitSecs'>10</param>
   </responseAnalyzer>
   <responseAnalyzer src='puffin' type='StatusResponseAnalyzer'>
      <param name='httpStatus'>200</param>
      <param name='httpStatus'>302</param>
   </responseAnalyzer>
</defaultResponseAnalyzerList>

The above states that every test action configured for puffin that uses the defaults will use the TimerResponseAnalyzer and the StatusResponseAnalyzer. As mentioned above, Puffin allows you to use as many response analyzers as you like. A response must be successful through the analysis of ALL specified response analyzers for the test action to be successful.
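
Conceptually, the decision for each test action reduces to the following illustrative pseudocode (the analyze and wasSuccessful names are assumptions for this sketch, not puffin's actual API; see responseanalyzer.py for the real classes):

   success = 1
   for analyzer in analyzerList:            # defaults plus any action-specific ones
      result = analyzer.analyze(response)   # returns an AnalysisResult
      if not result.wasSuccessful():        # assumed accessor name
         success = 0
         break                              # stop at the first failure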

Puffin Core Platform Response Analyzers

The puffin framework provides three core response analyzers:

·    EvalResponseAnalyzer -- This simple response analyzer will evaluate an expression and based on the result will deem the test action a success or failure. You can include any previously initialized test token in the expression as well.

·    TimerResponseAnalyzer -- times the execution of a test action. If it takes longer than the value specified in the timeLimitSecs parameter, the test action will be considered a failure.

·    StatusResponseAnalyzer -- Checks the HTTP codes that come back with your response. If it is not 200 or 302 (this is configurable, obviously), then your test action is a failure. Simply add as many <param name='httpStatus'>###</param> parameter sub-elements, each with a different status number, and the response analyzer will take it from there.

We will see later how you can also set response analyzers specifically for a given test action (rather than using these defaults). You can also specify, for a given test action, whether puffin should use its specific response analyzers AND the default analyzers or just the ones for the specific test action. More detail on this subject can be found below in the section on configuration of test actions.

The TimerResponseAnalyzer and StatusResponseAnalyzer are pretty self-explanatory. Simply add them to the list of response analyzers (for either the defaults or for a specific test action) and you're good to go. The EvalResponseAnalyzer is far more powerful and flexible and requires a bit more explanation.

EvalResponseAnalyzer

The EvalResponseAnalyzer allows you to provide an expression that, upon evaluation, must result in a TRUE value for the test action to be considered to have succeeded. The easiest way to understand this powerful response analyzer is to see some examples:

<responseAnalyzer src="puffin" type="EvalResponseAnalyzer">
   <param name='evalExpression'><![CDATA[$$$TOKEN1$$$ > $$$TOKEN2$$$]]></param>
</responseAnalyzer>

The above response analyzer will perform the following as its analysis:

1) Replace $$$TOKEN1$$$ and $$$TOKEN2$$$ with their corresponding values.

2) Use Python's eval() built-in function to evaluate the result.

3) The result of that evaluation is the result of the analysis.

So, if TOKEN1 has a value of 1000 in the current token dictionary and TOKEN2 has a value of 2000 in the current token dictionary, the above evaluation formula would resolve into "1000 > 2000" which eval() will resolve to FALSE (0). This means that this test action would be deemed a failure using this response analyzer.

Notice the step in which puffin replaces the token with its value from the current token dictionary. This is a very powerful mechanism as it allows you to create fairly complex formulae that are dynamically generated at run time. Basically any string you prefix and postfix with '$$$' will be looked up in the current token dictionary and will have its value retrieved from the token dictionary and substituted in place of the token name (sans the $$$ prefix and postfix, of course).
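
The substitution step itself is conceptually simple. Here is a minimal sketch of the idea (illustrative only -- puffin's actual implementation lives in responseanalyzer.py, and a real TokenDictionary is used in place of the plain dict shown here):

   import re

   def substituteTokens(expression, tokens):
      """Replace each $$$NAME$$$ with its value from the token dictionary."""
      def lookup(match):
         return str(tokens[match.group(1)])
      return re.sub(r'\$\$\$(\w+)\$\$\$', lookup, expression)

   # substituteTokens('$$$TOKEN1$$$ > $$$TOKEN2$$$', {'TOKEN1': 1000, 'TOKEN2': 2000})
   # yields '1000 > 2000', which eval() then resolves to FALSE (0).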

You can create as complex an evaluation formula as you like. You can also use strings. For example, the following is perfectly acceptable:

<responseAnalyzer src="puffin" type="EvalResponseAnalyzer">
   <param name='evalExpression'><![CDATA['$$$TOKEN1$$$' + 'W' == 'KeytonW']]></param>
</responseAnalyzer>

As long as TOKEN1 resolves to "Keyton," then the test action would be considered a success according to this response analyzer.

NOTE: To evaluate strings in this fashion, you must enclose them in quotes.

But WAIT! This response analyzer isn't "analyzing" the response. You are correct. However, you can use it to analyze a response somewhat indirectly. For example, consider the following test action:

<testAction name='login'>
   <path>/start.cgi</path>
   <!-- INPUTS -->
   <!-- OUTPUTS -->
   <output name="SESSION_ID" processor="extractRegex">
      <param name="expr" eval="0"><![CDATA[href="itemList\.cgi\?session_id=(\d*)&]]></param>
   </output>
   <responseAnalyzerList>
      <responseAnalyzer type='EvalResponseAnalyzer'>
         <param name='evalExpression'><![CDATA[$$$SESSION_ID$$$>1000]]></param>
      </responseAnalyzer>
   </responseAnalyzerList>
   <output name="ITEM_NUMBER" processor='generateRandomNumber'>
      <param name='max'>4</param>
   </output>
</testAction>

In the case of the above test action, the SESSION_ID output is processed first (see below for more on outputs) -- BEFORE analysis using the EvalResponseAnalyzer (note order of elements). Then the response analyzer checks the value of the SESSION_ID token against the number 1000 and then the final output is processed. In this way, you can analyze any information resulting from the execution of a test action. We will talk more about setting up response analyzers which are specific to test actions later when we discuss test action configuration and will discuss outputs at the same time.

As you can see, the EvalResponseAnalyzer is a very powerful part of puffin. You will use it often in your test plans.

Test Plan Manager

Continuing in the theme of extensibility, the next system property entry is for the test plan manager. This component of the puffin framework, as its name implies, handles loading the test plan(s) for a given execution of the framework. It handles both simple and complex test plan files (we'll learn about these later in this document) and then, after building a unified test plan if necessary, executes each test action in the unified plan, taking dependencies and repetitions into account. The test plan manager that comes with the puffin framework is fairly robust and flexible. However, in the continued interest of maximizing extensibility, you can extend the framework with your own test plan manager instead of using the default one. Here is the default entry, using the test plan manager provided with the framework:

        <testPlanManager src='puffin' type='TestPlanManager'/>

The default test plan manager is found in the testplan.py file. Note that although the default test plan manager does not require any parameters, your custom version can. (NOTE: TODO: The test plan manager extension mechanism is not quite there yet, so I won't talk any more about it. The default test plan manager handles everything the puffin test plans can handle right now, though, so this is ok short term. It is on my short todo list, but I've not yet gotten a round tuit....)

Pre-Testing Data Loading

Throughout the execution of a test, puffin uses an internal hash of name-value pairs. This hash is called a TokenDictionary, and while it has a few features beyond the typical dictionary object, it can be considered functionally equivalent. The main job of the token dictionary is to store values for use as inputs in test actions. For example, if your web application requires that a specific cookie be sent to identify the user's security session, then one key-value pair in the token dictionary might represent the security cookie. That way you have a convenient way to "carry around" these special values for use later in your test plan.

Most often the puffin framework will populate these key-value pairs by extracting values from the response from a test action and giving them a name you specify (more on this in a moment). However, sometimes, you may want to "prime the pump" as it were and populate one or more key-value pairs before you execute your test plan. The <preTestDataLoad> system property allows you to do just that. Here is the format:

<preTestDataLoad>
   <param name='USER' eval='0'>joe</param>
   <param name='PASSWORD' eval='0'>schmoe</param>
</preTestDataLoad>
 

You can initialize as many key-value pairs as you wish. The format is simple. For each key-value pair, you add a <param> sub-element with the following format:

           <param name='keyName' eval='0'>value</param>

Note the second attribute, eval. This allows you to trigger puffin to process the value of the parameter as though it were python code. For example, the following would result in the myList key having a value of a list of letters:

           <param name='myList' eval='1'>['a','b','c']</param>

All we had to do was set the eval attribute to true (1), and puffin will treat the value as a python expression rather than a simple string.

Auto-Inputs and Auto-Outputs

As alluded to earlier, a test action takes zero or more inputs and can generate zero or more outputs. These inputs and outputs are typically specific to a given test action, but in some instances you may want to always send a given input or extract a given output every time. These are called auto-inputs and auto-outputs, respectively, and their configuration resembles the following:

<autoInputs>
   <input name="Cookie" type="HEADER" processor="DICT" defaultValue="" 
         antiActions="login" systemFlagDepends="security">
      <param name="key" eval="0"><![CDATA[SECURITY_COOKIE]]></param>
   </input>
</autoInputs>

We will cover their details below when I cover test actions, next.

Configuration: Test Actions

Once you have set all the appropriate puffin framework system properties, the next step is to configure all the possible test actions for your web application. I've used the phrase "test action" several times in this document without much explanation. A test action represents a single call to your web application. It has a name by which you can refer to it in your test plans, a path which represents how to call it for your web application, a few puffin-specific properties, a set of zero or more inputs, and a set of zero or more outputs. You configure test actions in the puffin config file. Here is the most basic example of a configured test action:

<testAction name='login'>
   <path>/start.cgi</path>
     <!-- INPUTS -->
   <input name="list" type="GET" processor="VALUE">
      <param name="value" eval="0">Pets</param>
   </input>
      <!-- OUTPUTS -->
   <output name="SESSION_ID" processor="extractRegex">
      <param name="expr" eval="0"><![CDATA[href="itemList\.cgi\?session_id=(\d*)&]]></param>
   </output>
   <responseAnalyzerList>
      <responseAnalyzer type='EvalResponseAnalyzer'>
         <param name='evalExpression'><![CDATA[$$$SESSION_ID$$$>1000]]></param>
      </responseAnalyzer>
   </responseAnalyzerList>
   <output name="ITEM_NUMBER" processor='generateRandomNumber'>
      <param name='max'>4</param>
   </output>
</testAction>

Test Actions Element

All of your configured test action elements (see below) should be children of the <testActions> element in the puffin configuration file (with one type of exception that we will cover in a moment):

<testActions>
</testActions>

There is one (optional) attribute on the <testActions> element:

<testActions appContextAlias="/puffindemoapp">
</testActions>

The appContextAlias attribute on the <testActions> element allows you to set up an application alias that will be added to every path for every test action. You will learn about the path of a test action below, but for now suffice it to say that the path represents a web path for a test action's server call (if that test action involves a server call, but that's a different story). So in the above example attribute value, "/puffindemoapp" will be added to every path for every test action that has one. If a test action's path is /start.cgi, Puffin will add the application context alias prefix before calling that test action, yielding /puffindemoapp/start.cgi.

This saves you from having to add this prefix to every path, or from changing every test action should the script alias change.

In the next section, you will learn about how to set up each individual test action. After that, you will learn about including test action files in your puffin config file -- allowing multiple developers to configure multiple test actions in different files without stepping on each other. Puffin uses only the appContextAlias in the <testActions> element in the puffin config file -- not in any included test action file.

Test Action Element

The testAction element represents the information for the web application call itself. There is one such element for every possible test action for your web application. However -- and this is an important distinction -- there is ONLY one test action configured for each server call, even if you can call it several different ways. We'll talk about this more in a moment when we discuss inputs and outputs. For now, remember that you should have only a single test action element per web application "page" path (JSP, ASP, CGI, etc.) and that test action should contain all possible information for that page.

Here are the attributes for the test action element itself:

·    name -- The name of the test action as used in a test plan file. NOTE: Each test action name in a given puffin config file (or included test action file -- see below) must be unique within that config file. If more than one test action has the same name, only the first is recognized. Puffin ignores all subsequent test actions with that same name.

·    stopPlanOnFail -- (OPTIONAL) This boolean value (0 or 1) indicates whether puffin should stop an entire test plan on a failure of this test action. Default is false (0).

·    noSvrCall -- (OPTIONAL) This boolean value (0 or 1) indicates whether this test action involves an actual call to the web server. You can use test actions without a server call to process inputs or outputs without calling the server (see the sketch just below).
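
For example, a test action that makes no server call and exists only to generate a token might look like the following sketch (the action name is made up; generateRandomNumber is one of the core output processors covered later in this guide):

<testAction name='makeItemNumber' noSvrCall='1'>
   <output name='ITEM_NUMBER' processor='generateRandomNumber'>
      <param name='max'>4</param>
   </output>
</testAction>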

<responseAnalyzerList>

Earlier, when we talked about configuring default response analyzers, I mentioned that you could also specify response analyzers for Puffin to use for specific test actions. The format of the configuration is almost identical, except for the sub-element to use. For the system configuration, that element is <defaultResponseAnalyzerList> and it has no attributes. For configuring response analyzers for specific test actions, you use <responseAnalyzerList>, which has a single attribute, 'useDefaults', that is not allowed on the <defaultResponseAnalyzerList> element. As you might guess, 'useDefaults' specifies whether puffin should use only the response analyzers set up for this test action via the <responseAnalyzerList> element or whether it should also use the defaults.

Earlier, when we discussed the EvalResponseAnalyzer, I introduced the fact that you could process some of the outputs before executing your response analysis. We will talk about that more below when we discuss outputs. For now, it's useful to see the process. As you can see in the following steps, the execution of your test action specific response analyzers will occur only after the execution of the default response analyzers, if you are going to use them:

1) Process inputs.

2) Make web call, retrieve response.

3) Process default response analyzers if they are to be used for this test action.

4) Process any outputs occurring BEFORE the responseAnalyzerList for this test action.

5) Process test action specific response analyzers if they exist.

6) Process any outputs occurring AFTER the responseAnalyzerList for this test action.

Note that this process is short-circuited at the first failure. If one of the defaults results in failure, for example, then none of the test action specific response analyzers will even be processed.

If you do not have any <responseAnalyzerList> element for a test action, puffin will use the defaults. If you have the element and set 'useDefaults' to TRUE (1), then Puffin will first run the test action's response through all the response analyzers set up for the defaults and then through each response analyzer set up for just this test action. Finally, if you use the <responseAnalyzerList> element for your test action configuration and set useDefaults to FALSE (0) or leave out the attribute altogether, then Puffin will NOT use any of the default response analyzers to validate the response for that test action.
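
For example, the following test action fragment runs the default analyzers first and then one extra, action-specific status check (the status value shown is just an illustration):

<responseAnalyzerList useDefaults='1'>
   <responseAnalyzer src='puffin' type='StatusResponseAnalyzer'>
      <param name='httpStatus'>200</param>
   </responseAnalyzer>
</responseAnalyzerList>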

<path>

The first sub-element is the server path for the test action. The path includes everything in the URL except the host name and port. For example, if you have an active server page that has a complete URL of

      http://www.mydomain.com:8080/apps/selection/myPage1.asp

Then the <path> element value should be:

      <path>/apps/selection/myPage1.asp</path>

Note that no querystring information should be included in the path. Also, if you set the noSvrCall attribute to true (1), then this element is ignored.

<input>

A test action input represents a discrete piece of information that is sent to the server upon making the application call for this test action. An input can be in any of the following three forms:

·    HTTP GET -- A value that is appended to the querystring when the test action's server call is made. For example, if you have an input called "userid" whose type is "GET" then the token "userid=[value]" is appended to the end of the test action's path before puffin makes the server call.

·    HTTP POST -- A value that is appended to the body of the HTTP request before puffin executes the test action's server call. Form data is typically communicated (though it could be GET-based) through the use of POST.

·    Header -- A value that is appended as a header in the HTTP request for the test action's server call execution.

Here is a complete <input> sub-element:

<input name='userid' type='POST' processor='DICT' defaultValue=''>
   <param name='key' eval='0'>USER</param>
</input>

We'll cover the attributes of the <input> element itself first (note there are two more for autoInputs -- discussed later in this document):

·    name -- This is the name of the input. Puffin will use it as the name of the value to be sent as a POST, GET, or header. It is a required attribute. NOTE: You can use a $$$[TOKEN_NAME]$$$ token for an input name, in case you don't know at test writing time what you will want to call your input. See the section on the EvalResponseAnalyzer, above, for more information on how this works.

·    type -- (OPTIONAL: Default is GET) This is the type of input. Valid values: GET, POST, and HEADER. Default is GET.

·    processor -- (OPTIONAL: Default is VALUE) This attribute tells puffin how to get the value for this input. Valid values: DICT, SCRIPT, FILELIST, LIST, TOKENLIST, SCRIPTLIST, VALUE. Default is VALUE.

·    defaultValue -- (OPTIONAL) The default value to send. Default is None.

·    src -- (OPTIONAL) The source of code for a script or scriptlist-based input. Default is 'puffin.'

We'll discuss the various processors in a moment, but first let's quickly cover the <param> sub-element of inputs. Parameters for inputs work exactly like parameters for everything else in puffin:

            <param name='key' eval='0'>USER</param>

where:

·    name -- The name of this parameter.

·    eval -- (OPTIONAL) Whether or not to evaluate the parameter value as a python expression. Default is FALSE (0).

Parameters are sent into the constructor for the input as a paramDictionary, just like for other places where parameters are required. As we will see in a moment, when we cover input processor types, parameters are only required sometimes.

Token Expressions as Input Names

As mentioned above, you can use a token expression, in the format '$$$[TOKEN_NAME]$$$' as the name of an input. In this manner, you can dynamically generate the actual name of your input variable. Here is an example:

<input name='$$$USER_ID$$$' type='POST' processor='DICT' defaultValue=''>
   <param name='key' eval='0'>USER</param>
</input>

When the above input token is processed, Puffin will first look up the value of the USER_ID token in the current token dictionary and replace $$$USER_ID$$$ with it as the name for this input.

This can be very useful when using Puffin to "fill" a dynamically created form element etc.

Input Processors

Puffin supports the following types of input processors:

VALUE

This is the simplest type of input value processor and is the default processor type. It takes a simple value as a parameter and uses that value as the value for the input. Here is an example:

<input name='userID' type='GET' processor='VALUE'>
   <param name='value' eval='0'>-2</param>
</input>

As you can see, all that is required is a 'value' parameter whose value can be simple or an evaluated python expression.

NOTES: I have plans for an Extending Puffin document to be written in the future. In that document I will expand on writing your own input processors.

DICT

This is the token dictionary we discussed earlier. It is basically a hash of named values. This type of input processor takes a parameter ('key') value and uses that to look up a value in the currently active token dictionary. This value is then used as the input. Example:

   <input name='userid' type='POST' processor='DICT' defaultValue=''>
      <param name='key' eval='0'>USER</param>
   </input>

This will look up a value for the key, USER, and then use it as a value for the 'userid' input, wrapping that up in the body of a request (HTTP POST).

LIST Type Inputs

Often you have access to all the possible values for a given input ahead of the time you will process the input. Here are some scenarios like this:
-- You have the values by themselves and can place them as parameters into your config file. (LIST)
-- You have a file containing the possible values. (FILELIST)
-- You have a custom script that will generate your list of possible values. (SCRIPTLIST)
-- You have a token at test run time containing a list of possible values. (TOKENLIST)

Puffin allows for all four of these scenarios. Each LIST-type input processor works the same way. Each time you process the input, it gives you the next value in its list. For example, if you have a list of names ['keyton', 'bob', 'chris'], and process a LIST-type input processor with those values three times, you will get -- in order -- 'keyton' then 'bob' and finally 'chris'. If you call it AGAIN, the LIST-type processor restarts and you get 'keyton' again. All LIST-type input processors work this way.

Also, though most of the parameters that each input processor takes are unique to the type of input processor, all take a 'currentItemStorage' parameter that is the name of a token that will contain the current list value. The default currentItemStorage parameter value is 'CURRENT_INPUT_LIST_ITEM'.

Using this currentItemStorage token, you can retrieve the current list value without re-processing the input and moving the list index up one item.

LIST

Often you know the possible input values ahead of time and there are not enough to warrant storage in a separate file. In this case, you can use a LIST input processor. All that is required for LIST input processors is one or more 'listItem' parameter values. Here is an example:

   <input name='foo' type='GET' processor='LIST'>
      <param name='currentItemStorage' eval='0'>MY_CURRENT_LIST_ITEM</param>
      <param name='listItem' eval='0'>Keyton</param>
      <param name='listItem' eval='0'>Bob</param>
      <param name='listItem' eval='0'>Chris</param>
   </input>

The first time this input is processed, the input value will be Keyton. The second time, it will be Bob. The third time it will be Chris. And then it will start again at Keyton.

FILELIST

You can store all possible values (one per line) in a file and use a FILELIST input processor. All that is required for a FILELIST input processor is a file with an individual input value on each line. Here is an example:

   <input name='foo' type='GET' processor='FILELIST'>
      <param name='currentItemStorage' eval='0'>CURRENT_FILE_LIST_ITEM</param>
      <param name='fileName' eval='0'>fooData.txt</param>
   </input>

The only requirement is the parameter with name "fileName." This value represents the path and name of your value file. Here is an example of a valid value file:

   blahVal1
   blahVal2
   blahVal3
   blahVal4
   blahVal5
   blahVal6
   blahVal7
   blahVal8

Each line is a separate value. Note that this generation of values happens when the input is initially constructed. All values are set before the test action is run for the first time.

SCRIPTLIST

A SCRIPTLIST input processor behaves similarly to the FILELIST input processor in that it generates a list of possible values at the initial instantiation of the input for which it is generating values. The difference is that this list of values is generated from the execution of a script. Here is an example:

<input name='bar' type='GET' processor='SCRIPTLIST' src='custom'>
   <param name='currentItemStorage' eval='0'>CURRENT_SCRIPT_LIST_ITEM</param>
   <param name='scriptName' eval='0'>generateBarValueList</param>
</input>

As when you specify a script name as the processor type (see below), all that is required is a 'src' attribute on the <input> element (either 'puffin' for scripts provided with puffin or 'custom' for your custom scripts) and a scriptName parameter. Here is a (trivial) example script for the SCRIPTLIST input processor:

def generateBarValueList():
   """Sample token list input processor. Generate a list of user names."""
   return ['barVal1', 'barVal2', 'barVal3']

All the script must do is generate a list of values. Note that because the list of values is generated at the time of instantiation of the input, no tokenDictionary or paramDictionary is sent in. (TODO: This should be changed to allow for this....) As usual, all you must do to extend the framework is set 'src' to 'custom' and add your own list-generating script to extensions.py.

TOKENLIST

There are some output extractors that return a list of items into a token. To iterate over the values in one of these tokens, use a TOKENLIST input processor. The only parameter required for the TOKENLIST input processor is the 'listTokenName' parameter. This parameter, as you can probably guess, contains the name of the token containing the list of values over which you wish to iterate. Here is an example of a TOKENLIST input processor node:

   <input name='foo' type='GET' processor='TOKENLIST'>
      <param name='currentItemStorage' eval='0'>CURRENT_TOKEN_LIST_ITEM</param>
      <param name='listTokenName' eval='0'>CATEGORY_TYPE_LIST</param>
   </input>
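
Where does a list token such as CATEGORY_TYPE_LIST come from? Typically from a list-returning output extractor such as extractRegexList (covered below under output processors). Here is a sketch, assuming extractRegexList accepts the same 'expr' parameter as extractRegex and that the response contains markup like category=pets:

   <output name='CATEGORY_TYPE_LIST' processor='extractRegexList'>
      <param name='expr' eval='0'><![CDATA[category=(\w*)]]></param>
   </output>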

[SCRIPT_NAME]

You can also submit the name of a script as the value for the processor attribute. As its name implies, this input processor involves the execution of a script. Either this is a script provided with the puffin framework (attribute 'src' equals 'puffin') or one provided by you as an extension in the extensions.py file (attribute 'src' equals 'custom'). The processor attribute tells the framework the name of the script to call. All script-based input processors (whether they are part of puffin or custom) must take as arguments the current token dictionary and a parameter dictionary. Here is a sample script-based input processor method:

def generateRandomFirstName(tokenDictionary, paramDictionary):
   """Generate a random first name using the current time."""
   from time import time   # time() supplies the changing digits
   firstNameStub = 'FIRST'
   return firstNameStub + `time()`.split('.')[0][-3:]

As you can see, it takes a tokenDictionary (the currently active one for the running test plan) and the paramDictionary. To create your own custom script-based input processor, just write a method like this, add it to the extensions.py module, and set the input information accordingly:

<input name='myVal' type='POST' processor='generateMyValue' src='custom'>
   <param name='myParam' eval='0'>myParamValue</param>
</input>
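
The matching entry in extensions.py might look like the following sketch (generateMyValue and myParam are simply the hypothetical names from the element above; the returned string is arbitrary):

def generateMyValue(tokenDictionary, paramDictionary):
   """Build the value for the 'myVal' input from the configured parameter."""
   # getParamValue is the paramDictionary accessor used elsewhere in this guide.
   return 'myPrefix-' + paramDictionary.getParamValue('myParam')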

<output>

Now that you understand how to set the inputs for a test action, we must tackle how to get data back from the execution of a test action. An output can represent a single value extracted from the HTTP response sent back from the server upon execution of a given test action. An output can ALSO represent a value you generate at a given time after the execution of a test action. This can be a little confusing. Just think about it like this: you can get an output value either by extracting it from the test action's response or by simply triggering its creation (using a script or similar) upon the execution of a test action.

The most important thing is what puffin does with the output value. All output values -- regardless of where they come from -- get placed into the current token dictionary for later use.

There are several ways by which puffin can generate a value for an output:

·    VALUE (DEFAULT) -- Puffin will return the value of the 'value' parameter. Use this if you wish to generate a fixed value upon execution of a given test action. This is the default.

·    DICT -- The DICT output processor works exactly like the DICT input processor. See input DICT processor, above, for more details.

·    EXTRACTNEXTLISTITEM -- Like the LIST-type input processors, the EXTRACTNEXTLISTITEM output processor retrieves the next value from a token's list value. Each time you process this output it will give you the next value. If it reaches the end of the list, it will start over with the first item.
(NOTE: This is a good example where there is little difference between an input and an output. I've considered making them the same, but have not done so for a variety of reasons. I can be convinced, though...)

·    Extractor script (either one from the puffin core or an extension extractor script that you create and place in the extensions module). This extractor script can generate a value pretty much any way you want. Puffin comes with ten output processors as part of the core platform. These are discussed below.

Here's a complete <output> sub-element using the VALUE processor:

<output name='USER_ID' processor='VALUE' defaultValue=''>
   <param name='value' eval='0'>kweissinger</param>
</output>

Here are the attributes for the <output> element itself (note there are two more for autoOutputs -- discussed later in this document):

·    name -- This is the name of the key to be used to store the value puffin will extract for this output. This becomes the key in the token dictionary.

·    processor -- This is the mechanism by which puffin will extract the value for this output.

·    defaultValue -- (OPTIONAL) The default value to use if extractDefault is true. Ignored otherwise.

·    src -- (OPTIONAL) The source for a SCRIPT-based output processor. (NOTE: It is confusing to have both SOURCE and SRC -- TODO: FIXIT).

Parameters for outputs work exactly the same way as parameters for inputs:

            <param name='value' eval='0'>USER</param>

where:

·    name -- The name of this parameter.

·    eval -- (OPTIONAL) Whether or not to evaluate the parameter value as a python expression. Default is false (0).

Parameters are sent into the constructor for the output as a paramDictionary, just like for other places where parameters are required. As we will see in a moment, when we cover output processor types, parameters are only required sometimes.

Output Processors

The puffin framework generates values for outputs using a variety of methods:

VALUE

The default output processor expects a simple 'value' parameter that will hold a value to be placed into the token dictionary for the output:

<output name='BUSINESS_ID' processor='VALUE'>
   <param name='value'>1234</param>
</output>

The above output will result in the value 1234 being stored in the current token dictionary using the dictionary key BUSINESS_ID.

EXTRACTNEXTLISTITEM

As briefly described above, the EXTRACTNEXTLISTITEM output processor allows you to iterate through the values stored in a token in the current token dictionary. The EXTRACTNEXTLISTITEM output processor works EXACTLY like the TOKENLIST input processor except that it does not take a 'currentItemStorage' parameter (as the output token provides this behaviour automatically). It takes only one required parameter, 'listTokenName', which is the name of the token containing a list of values. Here is an example:

<output name="SELECTED_ITEM_NUMBER" processor="EXTRACTNEXTLISTITEM">
   <param name="listTokenName">ITEM_NUMBER_LIST</param>
</output>

Each time the above output is processed, it will retrieve the next value from the list value stored in the ITEM_NUMBER_LIST token in the current token dictionary.

[SCRIPT_NAME]

All other output processors result from puffin's calling a given script. You can either call one of the puffin core output processors (described below), in which case the src attribute should be 'puffin' (or absent, as 'puffin' is the default value of the src attribute), or one of your own placed in the extensions.py file (in which case your src attribute needs to be 'custom').

Here is an example of a puffin core output processor:

<output name='AVAIL_ROOMTYPE_ID' processor='extractXpath'>
   <param name='xpathExpr' eval='0'><![CDATA[roomType[not(dates/dateAvail/@num < 1)]]]></param>
   <param name='xpathType' eval='0'>NODE_ATTRIB_VALUE</param>
   <param name='index' eval='0'>0</param>
   <param name='attribName' eval='0'>id</param>
</output>

The above element will result in puffin calling its own extractXpath output processor to extract a value from the response. Here is the code (from tokenprocessing.py) for this output processor:

def extractXpath(actionResponse, tokenDictionary, paramDictionary, defaultValue=None):
   """Extract a specific value from the output using an XPath expression.
   Keyword Arguments:
      actionResponse -- The action response from the currently executing test action.
      tokenDictionary -- The current token dictionary from the current test plan.
      paramDictionary -- The <param> elements for this output processor.
      defaultValue -- (OPTIONAL) A default value to return if no other value can be
         found. None will be returned if no default value is provided.
   Parameters:
      xpathExpr -- The actual XPath expression; this can be straight text or a CDATA element.
      xpathType -- What the XPath expression results in:
         ATTRIB_VALUE -- The value of a single attribute. No more parsing required.
         NODE_TEXT_VALUE -- The value must be retrieved from a node in the node list
            returned from the execution of the XPath expression.
         NODE_ATTRIB_VALUE -- The XPath expression's execution results in a node list;
            you must retrieve a specific node from the node list and from that node
            retrieve a specific attribute's value.
      attribName -- For NODE_ATTRIB_VALUE, this indicates the attribute whose value you
         wish to extract.
      index -- For NODE_ATTRIB_VALUE and NODE_TEXT_VALUE, this indicates the specific node
         from the resulting node list whose value you wish to extract. If not present,
         this defaults to 0."""
   # NOTE: tokenprocessing.py imports xml.dom.minidom and an XPath Evaluate
   # function at module level.
   outputValue = None
   # Retrieve important param values:
   xpathExpr = paramDictionary.getParamValue('xpathExpr')
   xpathType = paramDictionary.getParamValue('xpathType').upper()
   if xpathExpr and xpathType:
      try:
         # Generate a DOM on which to execute XPath expressions.
         responseDOM = xml.dom.minidom.parseString(actionResponse.getRawResponseDoc().lstrip())

         # Evaluate the XPath expression against the response document.
         outputValueItem = Evaluate(xpathExpr, responseDOM.documentElement)
         if xpathType == 'ATTRIB_VALUE':
            outputValue = outputValueItem[0].nodeValue
         else:
            index = int(paramDictionary.getParamValue('index', default=0))
            if xpathType == 'NODE_TEXT_VALUE':
               outputValue = outputValueItem[index].childNodes[0].nodeValue
            elif xpathType == 'NODE_ATTRIB_VALUE':
               attribName = paramDictionary.getParamValue('attribName')
               if attribName:
                  outputValue = outputValueItem[index].getAttribute(attribName)
      except:
         outputValue = None
   return outputValue or defaultValue

Here is an example of an output that will call a custom script (note it has no parameters):

         <output name='AVAIL_ROOMTYPE_ID' processor='extractHotelDateComplete' src='custom'/>

This would call a custom output processor script similar to the following:

def extractHotelDateComplete(actionResponse, tokenDictionary, paramDictionary):
   """Generate a date object for use in later test actions. Use
   the hotel date parts from earlier extractions."""
   # Assumes a DateTime constructor (e.g. from the mx.DateTime package) and
   # puffin's debug logging helper are available to extensions.py.
   hotelDateYear = int(tokenDictionary['HOTELDATE_YEAR'])
   hotelDateMonth = int(tokenDictionary['HOTELDATE_MONTH'])
   hotelDateMonth = hotelDateMonth + 1 # Month is 0-based for our system.
   hotelDateDay = int(tokenDictionary['HOTELDATE_DAY'])
   hotelDateComplete = DateTime(hotelDateYear, hotelDateMonth, hotelDateDay)
   debug('hotel date complete: ' + `hotelDateComplete`)
   return hotelDateComplete

Note that we did not use the response doc in this example.

Core Output Processors

Puffin provides ten core output processors:

·    extractHeader -- Retrieves a value from an HTTP response header.

·    extractXpath -- Executes an XPath expression against the HTTP response document and returns the resulting match.

·    extractRegex -- Executes a regular expression against the HTTP response document and returns the resulting match.

·    extractRegexList -- Executes a regular expression against the HTTP response document and returns a list of the resulting matches.

·    generateRandomNumber -- Generates a random number for the output token.

·    selectRandomString -- Selects a random string from the list value of a specified token or from a list of one or more strings.

·    generateRandomString -- Generates a string of random characters either based on a char type and a length or on a template. You can use this output processor to generate such things as garbage data (including non-alphanumeric) or even fake email addresses and the like.

·    getCmdExecResult -- Allows you to retrieve the result of any command line script or command and store the result into an output token.

·    executeMySQLQuery -- Given a database name and a query, this output processor will execute the query against the database specified and place the results (if there are any) into an output token. This processor also stores the number of rows affected in another token and the names of columns in the returned rows in a third token. This is the only output token processor to use more than one token name.

·    extractCalculationResult -- Evaluates an expression (similar to that for the EvalResponseAnalyzer -- see above) and returns the result.
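
Several of these are demonstrated in their own sections below (extractHeader, extractXpath, extractRegex). As one more small example, here is the generateRandomNumber output used by the login test action earlier in this document ('max' is the only parameter that example demonstrates; treat anything else as undocumented here):

<output name='ITEM_NUMBER' processor='generateRandomNumber'>
   <param name='max'>4</param>
</output>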

extractHeader

An extractHeader output processor, as its name implies, extracts a value from one of the HTTP headers returned in the HTTP response.

Here is an example output element:

<output name='SECURITY_COOKIE' processor='extractHeader' defaultValue=''>
   <param name='headerName' eval='0'>set-cookie</param>
   <param name='headerPrefix' eval='0'>WebLogic</param>
</output>

This example output processor will extract a value (the value of the set-cookie header with the prefix 'WebLogic') and return it for storage into the token dictionary with the key, 'SECURITY_COOKIE.' Later inputs can then retrieve the value of this header and send it back to the server as an input, if needed.

The parameters for the extractHeader output processor are:

·    headerName -- The name of the specific header whose value you wish to extract.

·    headerPrefix -- The prefix of the header value, in case more than one value exists with the same header name. (NOTE: Currently this is required, but it should not be long term. I have it in there because my interest is in a set-cookie header and I need some way to discern among multiple possible headers of this type. TODO: Fix this.)

extractXpath

Often, the result of a test action's execution is an XML document of some kind. This makes extraction of an output value using XPath relatively painless. There are three scenarios in which XPath-based output processing is useful (which one applies is communicated through the xpathType parameter):

·    ATTRIB_VALUE -- The XPath expression returns the value of a single attribute directly. No further parsing is required.

·    NODE_VALUE -- The value must be retrieved from a specific node (see the index parameter) in the node list returned from the execution of the XPath expression.

·    NODE_ATTRIB_VALUE -- The XPath expression's execution results in a node list; you must retrieve a specific node from the list and, from that node, a specific attribute's value.

Here is an example of an XPath-based output processor used in an output element:

<output name='AVAIL_ROOMTYPE_ID' processor='extractXpath'>
   <param name='xpathExpr' eval='0'><![CDATA[roomType[not(dates/dateAvail/@num < 1)]]]></param>
   <param name='xpathType' eval='0'>NODE_ATTRIB_VALUE</param>
   <param name='index' eval='0'>0</param>
   <param name='attribName' eval='0'>id</param>
</output>

The xpathExpr and xpathType parameters are both required. The index parameter is required for both the NODE_VALUE and NODE_ATTRIB_VALUE extraction types. The attribName parameter is required only for the NODE_ATTRIB_VALUE extraction type.

You need to be careful that your XPath expression returns something that can be used with the xpathType you specify for the output. Otherwise, it is pretty straightforward.
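To make this concrete, here is a hypothetical response fragment (the element and attribute names are invented to match the example above). Against it, the output element above would skip the sold-out roomType (its dateAvail num is less than 1) and store '14', the id of the first fully available roomType, in the AVAIL_ROOMTYPE_ID token:

<roomTypes>
   <roomType id='12'>
      <dates><dateAvail num='0'/></dates>
   </roomType>
   <roomType id='14'>
      <dates><dateAvail num='3'/></dates>
   </roomType>
</roomTypes>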

(NOTE: TODO: More information needed here.....)

extractRegex

The extractXpath extractor is an excellent way to extract output tokens if you are testing a web application whose responses are composed of well-formed XML. However, this is rarely the case. More often you need to extract a value from a plain HTML document. This is where the extractRegex output extractor comes in. Here is an example of its use:

<output name="BUSINESS_ID" processor="extractRegex">
   <param name="expr" eval="0"><![CDATA[Business ID:<a href="(\d*)" ]]></param>
</output>

Pretty simple: an extractRegex extractor takes only a single parameter:

            <param name="expr" eval="0"><![CDATA[(E.C.PTION)]]></param>

The regular expression you use can be of arbitrary complexity. All that is required is that some portion of it be marked for return using parentheses (to match literal parentheses in your regular expression, simply escape them with the backslash character, "\").

Here is an example:

            <param name="expr" eval="0"><![CDATA[href="itemList\.cgi\?session_id=(\d*)]]></param>

The above will match the following phrase (for example):

      href="itemList.cgi?session_id=123456

But it will return only the part matched by the parenthesized group, (\d*). In our example above, puffin would return only:

      123456

Regular expressions are one of the most powerful tools available to Python and other scripting languages. I couldn't do justice to the subject here, even if this were the best place. For more information on how Puffin's regular expression matching works, look at the Python re module documentation.
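If you want to try an expression before putting it into an output element, you can experiment directly with the re module (a minimal Python 2 sketch, matching the vintage of the Puffin code base; Puffin's matching presumably behaves like re.search here):

import re

expr = r'href="itemList\.cgi\?session_id=(\d*)'
sample = 'href="itemList.cgi?session_id=123456'

# search scans the string for the first match; group(1) is the
# parenthesized portion that Puffin would store in the token.
match = re.search(expr, sample)
if match:
   print match.group(1)   # prints: 123456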

extractRegexList

Often you have a web page with several of the same items in it and you wish to retrieve them all into a list. Suppose you have the following web page:

<html>
<body>
<center>
MENU
<p/>
<a href="itemList.cgi?session_id=14561020049151&list=Pets">Pets</a><br/>
<a href="itemList.cgi?session_id=14561020049151&list=Books">Books</a><br/>
<a href="itemList.cgi?session_id=14561020049151&list=Gear">Gear</a><br/>
<a href="itemList.cgi?session_id=14561020049151&list=Posters">Posters</a><br/>
<p/>
<a href="showCart.cgi?session_id=14561020049151">View Cart</a>
<br/>
<a href="logout.cgi?session_id=14561020049151">Log out</a>
</center>
</body>
</html>

You want to extract all four categories (Pets, Books, Gear, and Posters). The following regular expression will extract them all:


<a href.*">(.*)</a><br/>

The above will return all four categories when executed against the above HTML document. You can use the extractRegexList output processor to extract all of these values into a list. The only parameter required for the extractRegexList output processor is the 'expr' parameter. Here is an example of its use:

<output name="CATEGORY_LIST" processor="extractRegexList">
   <param name="expr" eval="0"><![CDATA[<a href.*">(.*)</a><br/>]]></param>
</output>

This output processor will extract all category names into a list value stored in the CATEGORY_LIST token.
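The behavior is presumably equivalent to Python's re.findall. Here is a minimal Python 2 sketch against a trimmed version of the menu page above:

import re

# The four menu lines from the sample page above.
html = '''<a href="itemList.cgi?session_id=14561020049151&list=Pets">Pets</a><br/>
<a href="itemList.cgi?session_id=14561020049151&list=Books">Books</a><br/>
<a href="itemList.cgi?session_id=14561020049151&list=Gear">Gear</a><br/>
<a href="itemList.cgi?session_id=14561020049151&list=Posters">Posters</a><br/>'''

# findall returns the parenthesized group for every match.
print re.findall(r'<a href.*">(.*)</a><br/>', html)
# prints: ['Pets', 'Books', 'Gear', 'Posters']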

generateRandomNumber

The generateRandomNumber output processor, as its name implies, creates a random number and returns it. You can provide a maximum number (which defaults to 10) and an optional seed number.

<output name='ITEM_SELECTION' processor='generateRandomNumber' defaultValue=''>
   <param name='max' eval='0'>100</param>
   <param name='seed' eval='0'>23</param>
</output>

The above generates a random number between 0 and 99, based on the random seed of 23, and stores it in the ITEM_SELECTION token in the token dictionary. You can also use a puffin token expression for either parameter:

<output name='ITEM_SELECTION' processor='generateRandomNumber' defaultValue=''>
   <param name='max' eval='0'>$$$LIST_NUMBER_MAX$$$</param>
   <param name='seed' eval='0'>$$$SEED$$$</param>
</output>

This will create a random number between 0 and the current value of the LIST_NUMBER_MAX token in the token dictionary using the value of the SEED token as the random seed. NOTE: The tokens used for both max and seed parameters must be integers or an error will occur.

selectRandomString

The selectRandomString output processor returns one string selected at random. You have two choices: you can provide a 'stringListToken' parameter or a series of 'string' parameters. The stringListToken parameter value refers to a specific token name whose token value is a list of values. The output processor will select a random entry from within that list of strings. Here is an example:

<output name='RANDOM_NAME' processor='selectRandomString' defaultValue=''>
   <param name='stringListToken' eval='0'>CATEGORY_LIST</param>
</output>

The above will extract a random string from the list of categories in the CATEGORY_LIST token.

If you use a series of 'string' parameters, the output processor will then select a value at random from the various 'string' parameter values. Here is an example:

<output name='RANDOM_NAME' processor='selectRandomString' defaultValue=''>
   <param name='string' eval='0'>Tom</param>
   <param name='string' eval='0'>Gertrude</param>
   <param name='string' eval='0'>Rolph</param>
</output>

The above generates either Tom, Gertrude or Rolph, selected randomly and placed into the token RANDOM_NAME. You can also use a puffin token expression for string parameter values:

<output name='RANDOM_NAME' processor='selectRandomString' defaultValue=''>
   <param name='string' eval='0'><![CDATA[$$$FIRST_CUSTOMER$$$]]></param>
   <param name='string' eval='0'><![CDATA[$$$SECOND_CUSTOMER$$$]]></param>
   <param name='string' eval='0'><![CDATA[$$$THIRD_CUSTOMER$$$]]></param>
</output>

This works exactly as you would expect. It retrieves a random string from the values of the FIRST_CUSTOMER, SECOND_CUSTOMER, and THIRD_CUSTOMER tokens in the current token dictionary.

generateRandomString

The generateRandomString output processor generates a random string of characters. You can either generate a string of a given length using a given character type, or generate a random string based on a pattern template string you send in.

Like every output processor, it receives the standard keyword arguments: actionResponse (the action response from the currently executing test action), tokenDictionary (the current token dictionary from the current test plan), paramDictionary (the <param> elements for this extractor), and defaultValue (a default value to return if no other value can be found).

The parameters for the generateRandomString output processor are:

·    type -- The type of character to use in generating a random string. (Optional -- defaults to '*'.) The options are:

      # -- Number 0-9
      _ -- Letter a-z or A-Z
      ? -- Number or Letter
      % -- Any non-alphanumeric character (!@#$^%&, etc.)
      * -- Any character

·    length -- The length of random string to generate. (Optional -- defaults to 10.)

·    pattern -- A template upon which to generate a random string. The template can use any of the above type characters. For example, the pattern "_____@_____.com" (five '_' characters on either side of the '@') would generate a random email address: a random five-letter name, followed by '@', a random five-letter domain name, and '.com'.

If a pattern parameter is sent, both type and length are ignored.
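For example, the following output element would use the pattern parameter to store a random email address in a RANDOM_EMAIL token (the token name is ours; the parameter names are those documented above):

<output name='RANDOM_EMAIL' processor='generateRandomString' defaultValue=''>
   <param name='pattern' eval='0'>_____@_____.com</param>
</output>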

getCmdExecResult

The getCmdExecResult output processor executes a command line command and returns its result. It is meant to allow other scripts to be run from within Puffin and their results captured. For example, if you have a Perl script you like that generates random names, you could use this processor to call it.

NOTE: The return can only be simple data; it will end up as a string in Puffin.

NOTE: Do not start unending processes or daemons this way. They will hang Puffin.

The example included with Puffin simply calls a Python script, but this is mostly because I like Python. You could just as easily run a Perl or shell script or even something like a 'ping' command -- anything, so long as it comes back with some value and ends.

It receives the same standard keyword arguments described above under generateRandomString.

The only parameter for the getCmdExecResult output processor is:

·    commandLine -- The string to use on the command line.
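Here is a sketch of its use (the script name generateName.py is hypothetical; any command that prints a value and exits will do):

<output name='RANDOM_NAME' processor='getCmdExecResult' defaultValue=''>
   <param name='commandLine' eval='0'>python generateName.py</param>
</output>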

executeMySQLQuery

WARNING: This processor has not yet been tested very well. Play with it some before trusting it!

The executeMySQLQuery output processor executes a query against a MySQL database. It is fairly simple and needs more work, but it is a start.

You can execute a DDL or a DML script. A query can return no rows of data, a single row of data, multiple rows of data, or a single value; this processor allows for all of these possibilities.

NOTE: It is unique among processors in that, in addition to the main output token, it stores results in two further tokens, both namable by you: one holding the number of rows affected by the query's execution and one holding the column names for the rows the query returns.

It receives the same standard keyword arguments described above under generateRandomString.

The connection-specific parameters are:

·    host -- string, host to connect to or NULL pointer (localhost)

·    user -- string, user to connect as or NULL pointer (your username)

·    passwd -- string, password to use or NULL pointer (no password)

·    db -- (REQUIRED) string, database to use or NULL (no DB selected)

·    port -- integer, TCP/IP port to connect to or default MySQL port

·    unix_socket -- string, location of unix_socket to use or use TCP

·    client_flags -- integer, flags to use or 0 (see MySQL docs)

·    connect_time -- number of seconds to wait before the connection attempt fails

·    compress -- if set, compression is enabled

·    init_command -- command which is run once the connection is created

·    read_default_file -- see the MySQL documentation for mysql_options()

·    read_default_group -- see the MySQL documentation for mysql_options()

·    queryStatement -- (REQUIRED) The actual statement to execute.

The return-value-specific parameters are:

·    numRowsAffectedTokenName -- (OPTIONAL; default NUM_ROWS_AFFECTED) The name of the token into which you want Puffin to store the number of rows affected by executing your query.

·    colNameListTokenName -- (OPTIONAL; default COL_NAMES_STRING) The name of the token into which Puffin will place a comma-delimited string of the column names for the rows your execution returns.

·    rowIndex -- (OPTIONAL; default -1, meaning ALL) The index of the row you want Puffin to return if you only want one. Index numbering starts at 0.

·    colIndex -- (OPTIONAL; default -1, meaning ALL) The index of the column you want Puffin to return if you only want one. Index numbering starts at 0.

NOTE: This processor handles individual rows as follows. If the result of your query is a set of rows like the following:

      1   Keyton      Weissinger
      2   Sam         Smith
      3   Bill        Tallman

you will receive a list of strings constructed by joining the field values with NULL characters (chr(0)), which are invisible when printed:

      1KeytonWeissinger
      2SamSmith
      3BillTallman

UNLESS you enter both a rowIndex AND a colIndex, in which case you will receive a single value.
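Here is a sketch of a possible output element (the database, table, and token names are hypothetical):

<output name='FIRST_CUSTOMER' processor='executeMySQLQuery' defaultValue=''>
   <param name='db' eval='0'>puffinDemo</param>
   <param name='queryStatement' eval='0'>SELECT id, firstName, lastName FROM customer</param>
   <param name='rowIndex' eval='0'>0</param>
   <param name='numRowsAffectedTokenName' eval='0'>CUSTOMER_COUNT</param>
</output>

This would store the first returned row in the FIRST_CUSTOMER token and the number of rows in the CUSTOMER_COUNT token.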

extractCalculationResult

The extractCalculationResult is a very powerful output processor because it lets you dynamically generate an expression of arbitrary complexity (with or without token expressions), evaluate that expression, and place the resulting value into a token. Here is an example of this output processor:

<output name='CHECKOUT_TOTAL' processor='extractCalculationResult' defaultValue=''>
   <param name='calculationFormula' eval='0'><![CDATA[($$$CURRENT_TOTAL$$$ * $$$SALES_TAX$$$)]]></param>
</output>

As you might guess, puffin first processes the token expressions, then uses Python's eval() to evaluate the formula's result and then places the resulting value into the CHECKOUT_TOTAL token.

You don't have to make this numerical, either; the following example concatenates two strings:

<output name='WIKI_NAME' processor='extractCalculationResult' defaultValue=''>
   <param name='calculationFormula' eval='0'><![CDATA['$$$FIRST_NAME$$$' + '$$$LAST_NAME$$$']]></param>
</output>

Just as when you use strings with the EvalResponseAnalyzer, when handling strings using the extractCalculationResult, you must enclose the strings in quotes.

As with all such expressions in puffin, it must simply be a valid Python expression, which is pretty intuitive -- even if you don't know Python.

The extractCalculationResult is a very powerful way to do things like check a running total for a shopping cart or the like.

VALUE (DEFAULT)

The VALUE extractor type allows you to simply place a value into the token dictionary at the time of execution for a test action. This is especially useful for non-server call test actions (see below) in which you just want to load a value into the token dictionary. Here is an example output using the VALUE extraction type:

<output name="[KEY_NAME]" processor="VALUE">
   <param name="value">[KEY_VALUE]</param>
</output>

The above places the value KEY_VALUE into the token dictionary under the key KEY_NAME.

Non-Server Call Test Actions

Sometimes you want to process inputs and/or outputs without making a call to the server. These test actions allow you to generate or extract values at a specific point in the test plan, without the need for different types of test actions (one for server calls, one for loading data values, etc.).

Here's how to do it. All you need to do is add a 'noSvrCall' attribute to the <testAction> element in the puffin configuration file and set its value to true (1), like this:

<testAction name='loginLoad' stopPlanOnFail='0' noSvrCall='1'>
   <path/>
      <!-- OUTPUTS -->
   <output name='USER' processor='getNextLoginInfo' src='custom'>
      <param name='returnVal' eval='0'>USER</param>
   </output>
   <output name='PASSWORD' processor='getNextLoginInfo' src='custom'>
      <param name='returnVal' eval='0'>PASSWORD</param>
   </output>
   <output name='ITEMS_REMAINING' processor='getNextLoginInfo' src='custom'>
      <param name='returnVal' eval='0'>ITEMS_REMAINING</param>
   </output>
</testAction>

Note that there is no value for the path sub-element for this test action (as it would be ignored anyway).

When this test action is executed in a test plan, no server call is made, but the outputs are processed. This allows us to load data into the token dictionary without making a server call. These data can then be used as inputs in subsequent test actions.

Note that if all you wish to do is add a value to the token dictionary, the following is sufficient:

<testAction name='loginLoad' noSvrCall='1'>
   <path/>
   <output name='USER'>
      <param name='value'>kweissinger</param>
   </output>
</testAction>

Note that if you do not specify a source or an extraction type, VALUE is assumed for both. See above for more information on outputs.

<autoInputs> and <autoOutputs>

Now that you understand how both inputs and outputs work, you are prepared for the discussion I promised earlier on the autoInput and autoOutput elements in the system properties configuration.

As their names imply, these are simply automatically generated inputs and outputs that puffin will add to every single test action. This allows you, for example, to add a security input to every test action automatically, without needing to set that input as part of every single test action in the configuration file.

The only differences between autoInputs and regular inputs (and between autoOutputs and regular outputs) are two additional attributes:

·    antiActions -- This is a comma-delimited list of test action names to which the auto input or auto output should NOT be added.

·    systemFlagDepends -- This is a comma-delimited list of system flags (configured in the <system> section of the puffin config file) that must be TRUE before this auto input or auto output is added to all the test actions. This is useful when you want to easily switch something like security on or off and have the auto input for security added or not added.

NOTE: Puffin processes the antiActions and systemFlagDepends attributes only for autoInputs and autoOutputs (i.e. NOT for regular inputs or outputs).

NOTE: All autoOutputs are processed BEFORE any Response Analyzers are executed.
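As a sketch only: assuming the children of <autoInputs> follow the same <input> format as regular inputs (described earlier in this document), an auto input carrying these two attributes might look like the following. The element content and all names here are illustrative, not from the Puffin distribution:

<autoInputs>
   <input name='SECURITY_COOKIE' antiActions='login,loginLoad' systemFlagDepends='useSecurity'>
      <!-- same child elements as a regular input -->
   </input>
</autoInputs>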

Hints for Test Action Configuration

When you are just getting started, configuring test actions can be a daunting challenge. In this one XML file, you must configure every possible input and output for every single test action in your web application. This is, admittedly, a slow, painful process. I'm working long term on some ways to make it simpler (see Future Enhancements later in this document). However, it is VERY important to note that by configuring every possible input and output here in the puffin config file, you are able to generate test plans of almost arbitrary complexity using any combination of inputs and outputs for the various test actions, allowing for near-infinite flexibility in your testing capabilities.

Test Action Include Files

When you start configuring Puffin for your web application, you will quickly notice that a LARGE number of test actions accumulates in the Puffin config file. In many cases, more than one developer works on the web application at a time, and it is convenient for each developer to be able to tweak the configuration of his or her specific test actions. Test action include files allow you to do just that.

All that is required is to include a <testActionIncludeFile/> child element in the <testActions> element of your Puffin config file:

<testActions>
   <testActionIncludeFile fileName="supplementalTestActions.xml"/>   
</testActions>

Here's what the file "supplementalTestActions.xml" looks like (minus some test action details):

<includedTestActions>
   <testActionIncludeFile fileName="supplementalTestActions2.xml"/>

   <testAction name='resetInventoryCount'>
      .
      .
      .
   </testAction>
</includedTestActions>

At initialization, Puffin will go to each included file (yes, recursively) and pull from it all the included test action configurations for use in your test plans.

Using this feature, one developer can be responsible for the main puffin config file shared across the team, and the others can each maintain their own include file for their test actions. You then simply run puffin against the main config file, which pulls in everyone's test action include files.

NOTE: Remember from earlier that the appContextAlias attribute can only be used in the original <testActions> element -- NOT in any of the <includedTestActions> elements.

Test Plan Construction

You now have your puffin framework configured for your web application. Now it's time to create test plans and start executing...

The puffin framework allows for two different test plan formats:

·    simple format -- basically a sequential list of test actions, one per line in a file.

·    complex format -- a more flexible test plan format that allows for dependency checking (execute this test action only if this other one succeeded, and the like), repetitions (execute this test action X number of times), and sub-plans (including plans in other plans, etc.).

In this section we will briefly cover the construction of simple test plans and their limitations.

Simple Plans

As mentioned above, a simple test plan is just a list of test action names, one per line in a file. Here is an example:

SIMPLE_test.plan:
# This is a sample test plan.
login
getUserRights
itemList
itemSelect
checkOut
 

That is all there is to a simple test plan. You would create a text file containing the above and save it as, for example, 'test.plan'. We will cover execution in just a moment...

Here are some things to note about the format of simple test plans:

·    Puffin considers any line starting with a '#' character as a comment and ignores it.

·    Each test action name must correspond to a specific <testAction> element in the puffin config file.

·    You can place only a single test action name per line. Don't worry about white space on either side of the test action name; it will be stripped off.

Often, all that is needed to test your web application is a simple test plan. However, they have several limitations which may force you to use the more complex format described next:

·    Although you can stop a test plan's execution on failure (using the stopPlanOnFail attribute for the test action), you cannot base execution of one test action on the successful execution of another test action. There is no concept of dependencies for simple test plans.

·    To repeat the execution of a test action, you must place it into a simple test plan multiple times. There is no concept of repetitions.

·    Complex plans allow you to group multiple test actions into a "task." There is no concept of tasks in simple test plans.

·    You cannot include simple plans inside other simple plans. Complex plans allow this, so you can build complex plans out of common functionality.

Complex Plans

You have determined that you need greater flexibility in your test plans than the simple format provides. Chief among the differences between the simple format and the complex format is the use of XML to describe more information for the plan. Also, the complex format introduces the concept of a task, which contains one or more test actions. We will see this in detail below.

The best way to discuss complex plans is to dive in. Here is a sample complex test plan for puffin:

COMPLEX_testPlan1.xml:
<plan>
   <task name='loginLoad'>
      <testAction name='loginLoad'/>
   </task>
   <task name='login'>
      <testAction name='login'/>
   </task>
   <task name='getUserRights' depends='login'>
      <testAction name='getUserRights'/>
   </task>
   <task name='itemList' depends='login,getUserRights'>
      <testAction name='itemList'/>
   </task>
   <task name='itemSelect' depends='login,getUserRights,itemList' repeat='3'>
      <testAction name='itemSelect'/>
   </task>
   <task name='checkOut' depends='login,getUserRights,itemList,itemSelect'>
      <testAction name='checkOut'/>
   </task>
</plan>
 

Wow. That looks WAY more complex than the simple plan for the same series of test actions. However, you can readily see that the complex test plan format provides us with:

·    Dependencies. For example, the itemSelect task will not be attempted unless the login, getUserRights, and itemList actions ALL execute successfully. If any of those fail, then puffin does not even attempt to execute the itemSelect test action.

·    Repetitions. The above test plan allows us to specify that puffin is to execute the itemSelect task three times. Using a simple plan, we would have had to add itemSelect to our test plan three times.

If dependencies and repetitions were the only extras the complex format offered, it would be of limited value. So let's look at some other capabilities. Let's start by wrapping login and getUserRights together into a combined set of test actions. Here is the second version of our complex test plan:

COMPLEX_testPlan2.xml:
<plan>
   <task name='userStart'>
      <testAction name='loginLoad'/>
      <testAction name='login'/>
      <testAction name='getUserRights'/>
   </task>
   <task name='itemList' depends='userStart'>
      <testAction name='itemList'/>
   </task>
   <task name='itemSelect' depends='userStart,itemList' repeat='3'>
      <testAction name='itemSelect'/>
   </task>
   <task name='checkOut' depends='userStart,itemList,itemSelect'>
      <testAction name='checkOut'/>
   </task>
</plan>
 

Now we have a single task, userStart, that consists of the loginLoad, login, and getUserRights test actions. This grouping of test actions is a very powerful part of why the complex format allows for greater flexibility. Now we can set up a dependency for later tasks on userStart rather than on the separate login and getUserRights actions.

To continue this simplification, we will go ahead and do the same grouping-into-a-task for the other three tasks:

COMPLEX_testPlan3.xml:
<plan>
   <task name='userStart'>
      <testAction name='loginLoad'/>
      <testAction name='login'/>
      <testAction name='getUserRights'/>
   </task>
   <task name='shop' depends='userStart' repeat='3'>
      <testAction name='itemList'/>
      <testAction name='itemSelect'/>
      <testAction name='checkOut'/>
   </task>
</plan>
 

Now we have a single task, shop, that wraps getting an item list, selecting an item, and checking out. Note that we lost some granularity in our repetitions (we are now repeating all three test actions three times as a group, rather than just the itemSelect test action). You will need to gauge how important such granularity is to your web application.

Finally, one very powerful capability of the complex test plan format is the ability to include plans within other plans. This is very useful if you have commonly used functions. That userStart task looks like a good candidate. Let's break it out into a separate plan that we can include in other plans:

userStartPlan.xml:
<plan>
   <task name='userStart'>
      <testAction name='loginLoad'/>
      <testAction name='login'/>
      <testAction name='getUserRights'/>
   </task>
</plan>
 

And now we will include it in our test plan:

COMPLEX_testPlan4.xml:
<plan>
   <subPlan>userStartPlan.xml</subPlan>
   <task name='shop' depends='userStart' repeat='3'>
      <testAction name='itemList'/>
      <testAction name='itemSelect'/>
      <testAction name='checkOut'/>
   </task>
</plan>
 
  

Now we have a subplan for the userStart test actions that we can include in other test plans without much trouble.

Dynamic Includes

Earlier in this document I described how, by configuring all possible inputs and outputs for your web application's test actions, you gain nearly infinite flexibility for your test plans: you can execute test plans with different combinations of inputs and outputs.

This is very handy and allows you a great deal of flexibility in your test plans. Here is how it works. Suppose our getUserRights test action allows for the following inputs:

·    foo

·    bar

·    userID

·    include

And the following outputs:

·    clubID

·    rights

·    currentSysDate

And suppose that you only want to use some of the inputs and outputs (not all) in executing the getUserRights test action. Here is the format:

COMPLEX_testPlan5.xml:
<plan>
   <task name='userStart'>
      <testAction name='login'/>
      <testAction name='getUserRights' inputs='userID,include' outputs='rights'/>
   </task>
</plan>
 

As you can see, all that is required to process only some of the inputs or outputs is a comma-delimited string containing the list of those inputs or outputs you wish to include.

Some caveats to the input/output limitation mechanism:

·    Auto-inputs and Auto-outputs are ALWAYS processed, whether they are in your comma-delimited attribute value or not.

·    The inputs and outputs not named in your attributes are NEVER processed for this test action. So if any of them have side effects (a progress indicator for list input processors, for example), be aware (and don't rely on side effects anyway...).

Complex Test Plans: Questions and Answers

Q: Can you mix and match simple and complex plans?

A: A qualified yes. I have tested this to some extent, but I don't certify it.

Q: Can you recursively nest subplans?

A: Yes. You can go to an arbitrary depth of sub-planning. However, you cannot cyclically nest subplans. This will error out.

Q: Is there any overhead associated with complex test plans?

A: Yes, but only at start up when the XML is pulled together.

Q: Is there any way to specify only some includes using the simple plan format?

A: No. Not at this time.

Iterative Test Plans

Often, you will want to repeat a test plan until some condition becomes true. You have two ways of doing this: you can force puffin to continue executing either until a given token in the token dictionary becomes true (with a value of 1) or until the first task fails. The first is the more complicated.

Repeating Plan Execution Until Condition

Here are the steps to setting up a repeating test plan that puffin will repeatedly execute until a given token in the token dictionary becomes true (1):

1) Add a 'stopToken' attribute to the <plan> element of the test plan. The value of this attribute names a token we will use later:

COMPLEX_testPlan6.xml:
<plan stopToken='TEST_COMPLETE'>
   <subPlan>userStartPlan.xml</subPlan>
   <task name='shop' depends='userStart' repeat='3'>
      <testAction name='itemList'/>
      <testAction name='itemSelect'/>
      <testAction name='checkOut'/>
   </task>
</plan>
 
  
2) Configure a new test action. This test action should be the last in the test plan. It will update our stop token at the end of each test plan iteration. Here is the puffin configuration file entry for this test action:
<testAction name='updatePlanStatus' stopPlanOnFail='0' noSvrCall='1'>
   <path/>
   <!-- OUTPUTS -->
   <output name='TEST_COMPLETE' processor='extractCalculationResult'>
      <param name='calculationFormula'>$$$ITEMS_REMAINING$$$ == 0</param>
   </output>
</testAction>

There are several features to notice in this test action:

a) This is a non-server call test action. Note the inclusion of the noSvrCall attribute.

b) The test action is called 'updatePlanStatus,' but the name really doesn't matter. The important thing is the output token.

c) The key used for our output token is the same one we will use for our stop token, TEST_COMPLETE.

d) The value for the TEST_COMPLETE token is generated using the puffin-core output processor (from the tokenprocessing module) called extractCalculationResult. This output processor takes a formula involving another token (in the above case, ITEMS_REMAINING), evaluates it, and 'extracts' the result as the value for our output token.

3) Add our new test action at the very end of the test plan:

COMPLEX_testPlan7.xml:
<plan stopToken='TEST_COMPLETE'>
   <subPlan>userStartPlan.xml</subPlan>
   <task name='shop' depends='userStart' repeat='3'>
      <testAction name='itemList'/>
      <testAction name='itemSelect'/>
      <testAction name='checkOut'/>
   </task>
   <task name='updatePlanStatus' depends='userStart'>
      <testAction name='updatePlanStatus'/>
   </task>
</plan>
 
  

Now, puffin will continue to execute our test plan until the TEST_COMPLETE token is set to true by the updatePlanStatus task. At the beginning of each iteration, puffin checks the TEST_COMPLETE token in the token dictionary. If it is still false (0), puffin executes the test plan again. It will continue executing until TEST_COMPLETE is true (1).

See the section on outputs earlier in this document for details on the extractCalculationResult output processor.

But where is ITEMS_REMAINING set? Remember the non-server test action, loginLoad, that puffin executes at the beginning of every test plan? Here it is again:

<testAction name='loginLoad' stopPlanOnFail='0' noSvrCall='1'>
   <path/>
   <!-- OUTPUTS -->
   <output name='ITEMS_REMAINING' processor='getNextItem' src='custom'/>
</testAction>
 

Note the output, ITEMS_REMAINING. This is a fairly typical way to handle this type of repeating test. You have a set of records to process, and for each you want to execute a given test plan. The loginLoad task loads the next record (this could have been a scriptList, but it is simpler in this case). It also sets the ITEMS_REMAINING output token value to the number of records remaining. At the end of each test plan execution, the updatePlanStatus task uses the extractCalculationResult output processor to evaluate the formula ($$$ITEMS_REMAINING$$$ == 0) and place the result into the TEST_COMPLETE output token. Before each test plan execution iteration, puffin checks this token (because it was set as our stopToken). If it is true, puffin does not execute the test plan again.

Repeating Plan Execution Until Task Failure

Having puffin repeat execution of the test plan until a failure occurs is MUCH simpler than watching a particular token. All that is required is to set the stopToken to an asterisk, like this:

COMPLEX_testPlan8.xml:
<plan stopToken='*'>
   <subPlan>userStartPlan.xml</subPlan>
   <task name='shop' depends='userStart' repeat='3'>
      <testAction name='itemList'/>
      <testAction name='itemSelect'/>
      <testAction name='checkOut'/>
   </task>
</plan>
 

That's all there is to it. With the stopToken set to an asterisk, puffin will merrily repeat the test plan's execution again and again until the first task fails for any of its response analyzers.

NOTE: By "again and again" I mean just that. If you have no server-call test action whose response is analyzed with a response analyzer that can return a failure, then the test plan will continue until you break out.

Execution of Test Plans

You have a test plan or two all prepared. Now it's time to execute. If your test plan file is called 'test.plan' and your config file is called 'puffinConfig.xml', then all you have to do is make sure your test plan file is in the same folder as puffin and run puffin on the command line with no arguments:

      python puffin.py

That's it. It will chug for a moment and then deliver to you the results from the report writer for every task, test action, and response analyzer.

OK, that's not quite the whole story. If you are running puffin on a Linux or UNIX system, you can add a shebang line to the beginning of the puffin.py file (there is not one by default) and make the file executable; then you can simply type:

      puffin.py
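A typical shebang line (an assumption on our part; any path to a Python interpreter works) is:

      #!/usr/bin/env python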

Furthermore, if you are running on a Windows platform and have associated the '.py' extension with Python, then you can simply type:

      puffin

I will assume this setup in the text below. If I can make this easier on Linux systems, please email me with how. I'm very interested in serving both platforms' communities, but am less familiar with Linux.

However, you can also specify a test plan file (or a config file; see earlier in this document for details) with the --test.plan=[testfilenamepath] argument. For example, suppose you have a test plan called myTest.xml and that it is in your user folder (c:/user or /user). You would then use the following:

      puffin --test.plan=c:/user/myTest.xml

or

      puffin --test.plan=/user/myTest.xml

You can also specify more than one test plan at a time by providing a comma-delimited list of test plan file names like this:

      puffin --test.plan=/user/myTest1.xml,myTest2.xml,myTest3.xml

This will trigger the execution of the test plans, myTest1.xml, myTest2.xml, and myTest3.xml -- in the order specified.

(For more detail on command line arguments see Appendix A elsewhere in this document.)

Future Enhancements

Though still in its infancy, the puffin testing framework allows for a fairly robust set of testing capabilities. Still, here are a few places where I KNOW it needs work. This list constitutes my "would like to haves" and does not represent a schedule or a promise....

·    Ability to use your own test plan manager. This is all wired in, but not live. I need to fix it, basically. I want to give people the ability to write test plan managers that will manage ultra-complex (or at least ultra-specialized) test plan formats.

·    Currently you can have only a single value for a given token name in the token dictionary. I don't like this, but haven't found a good enough reason to rip the guts of this functionality out to rework it.

·    Integration with bugzilla or other bug tracking software. If a problem arises, drop an issue into the tracking software. Just an off-the-cuff idea, but I like it.

·    More documentation -- specifically on extending the framework.

·    LONGTERM: GUI for puffin configuration and test plan creation.

·    LONGTERM: Incorporation of client side application testing (DHTML/JavaScript).

·    LONGTERM: Automatic generation of some test action baseline using proxy server-based "test action recorder" concept...

·    MORE bullet-proof. Goes without saying....

Conclusion

As with everything I do (books, articles, software, etc), my goal is to have fun, learn something, and make someone's life better. Give me feedback. I want to make this product useful. Help me by giving some comments:

Keyton Weissinger

keyton@weissinger.org

Appendix A: Command Line Parameters for Puffin

Puffin recognizes the following command line arguments:

--config.file=[puffin config file location]

            The name and location of your puffin configuration file. Defaults to puffinConfig.xml in the current folder.

            Usage Example:

            puffin --config.file=/usr/conf/puffin.conf

--test.plan=[test plan file(s) location(s)]

            The name and location of your test plan file or files. Defaults to test.plan in the current folder.

            If you include multiple test plan files, then they are combined in the order listed before execution.

            Usage Example:

            puffin --test.plan=/usr/conf/myTest.plan

            puffin --test.plan=test1.plan,test2.plan,test3.plan

--contributors

            The contributors to puffin.

--logging

            Dictates the level of the framework logging. Default is WARN.

--help

            This help text. ('-h' and '-help' also work.)

--license

            The license file describing the use/distribution of puffin.

--version

            The current version of this puffin installation.