diff --git a/AUTHORS b/AUTHORS
new file mode 100644
index 0000000..7de9f8d
--- /dev/null
+++ b/AUTHORS
@@ -0,0 +1,8 @@
+Please add your name to this list with your first commit request
+
+Karanbir Singh
+Fabian Arrotin
+Athmane Madjoudj
+Steve Barnes
+Marian Marinov
+
diff --git a/Authors b/Authors
deleted file mode 100644
index 7de9f8d..0000000
--- a/Authors
+++ /dev/null
@@ -1,8 +0,0 @@
-Please add your name to this list with your first commit request
-
-Karanbir Singh
-Fabian Arrotin
-Athmane Madjoudj
-Steve Barnes
-Marian Marinov
-
diff --git a/README b/README
new file mode 100644
index 0000000..b8a1100
--- /dev/null
+++ b/README
@@ -0,0 +1 @@
+See the doc directory for additional information on test writing.
diff --git a/WritingTests b/WritingTests
deleted file mode 100644
index e294b6c..0000000
--- a/WritingTests
+++ /dev/null
@@ -1,22 +0,0 @@
-
-This file provides some basic guidance on things to consider when writing test scripts for the QA process:
-
-- scripts should exit with either zero to signal success, or a non-zero value to signal failure. A failure exit code causes the entire test script execution process to stop (which is fine - we need to see what failed)
-
-- you can use any language to write test scripts, but at present, the largest collection of helper functions are available to Bash/sh scripts. If the language you intend to use isn't available by default, the first thing you test script should do is yum install .
-
-- several helper functions are available in test/0_lib/*. Please use these in preference to directly calling any of the commands they implement. You don't have to source the file in your test scripts, just call/use the functions as required. The helper functions are there (for example) to promote consistency in debugging output and help avoid timing related test failures eg, daemons not being given sufficient time to start before testing for their existence.
Please review the contents of 0_lib/ so you're familiar with what's on offer.
-
-- if you're using Bash, the first thing you should ideally do is make a call to t_Log, passing in $0 and a description of what the test is doing, something like:
-
-    t_Log "Running $0 - Postfix SMTP test."
-
-- test scripts are processed in alphabetical order, so it's sensible to install any required packages in a 0-install-blah.sh script.
-
-- anything starting with a _ is ignored, and so are files named `readme` (case insensitive). If you need a file to store config values or any kind of metadata that's used in your test script, it probably makes sense to put it in a file starting with _ and then sourcing/including it in your test script.
-
-- all test scripts should be chmod +x in order to be executed. Equally, removing execute permissions from a script will prevent it from being run (or prefixing it with an _, both approaches work)
-
-- please include a suitable #Author comment line in your test scripts, so we know who to contact in the event of questions/changes/issues.
-
-- try and keep stdout/debugging messages generated by your test scripts to a minimum.
diff --git a/doc/TODO b/doc/TODO
new file mode 100644
index 0000000..5528ad0
--- /dev/null
+++ b/doc/TODO
@@ -0,0 +1,9 @@
+- At the moment the tests are not distro / arch specific; we might need a way
+  to clearly mark a test as -only-for-c4- or -only-for-c5- etc (especially
+  when there are kickstart files involved - there are some non-trivial changes
+  in that area moving towards c6)
+
+- We also need a way to dictate whether a role_ test should be run on real iron,
+  a virtual machine host, or a virtual machine (and maybe even the type of virt
+  being used!)
+
diff --git a/doc/WritingTests b/doc/WritingTests
new file mode 100644
index 0000000..98716a2
--- /dev/null
+++ b/doc/WritingTests
@@ -0,0 +1,182 @@
+Greetings! :-)
+
+This file is a continuation of the CentOS wiki page on writing tests for the QA process.
+For background information on writing tests for the t_functional QA process, please refer to:
+
+    http://wiki.centos.org/TestScripts
+
+As a newcomer, you should read this document from start to finish. Questions/comments/suggestions should be voiced in the #centos-devel channel on Freenode IRC, or via email on the centos-devel@centos.org mailing list.
+
+=== Introduction ===
+
+What are the QA scripts?
+------------------------
+
+Small, self-contained test scripts that provide "component testing" of CentOS RPMs. These scripts verify that packages install correctly via yum, and ensure that whatever a package installs works as expected.
+
+What are they used for?
+-----------------------
+
+Quality assurance - making sure that each package installs and functions correctly on a given architecture. The CentOS QA process directly benefits from having a set of repeatable, automated tests to run against each distinct build and package as and when it's created.
+
+When do they get used?
+----------------------
+
+As part of the QA process for every CentOS build prior to its testing/release, and every time a CentOS supplied package is updated.
+
+Where are they stored?
+----------------------
+
+In a publicly available repository hosted on gitorious.org.
+
+What's in the repository?
+-------------------------
+
+Here's a breakdown:
+
+tests/ : contains all test scripts
+
+tests/0_lib/ : contains all the common functions and shared code for the tests. All files in this directory are 'sourced'
+    before any of the tests are run, which also means it can only contain Bash code (no subdirectories allowed).
+
+tests/0_common/ : contains tests that are run before any other test, and immediately after the 0_lib/ code is sourced.
+    These should be tests that check system sanity and the environment.
+    These tests should also not leave behind any state or content residue that would impact the package and role
+    specific tests that are run afterwards.
+
+tests/p_<name>/ : Each of the p_<name> directories contains the tests for that specific package. The <name> needs to be
+    the output of rpm -q --qf "%{name}\n" for the srpm.
+
+tests/r_<rolename>/ : Each of the r_<rolename> directories should contain the tests specific to a role, eg: 'lamp'. The
+    test harness looks at a file called 'package_deps' inside each of the role directories and runs the role tests if
+    any package listed in that file has been changed / built etc.
+
+    Role tests can be run with specific kickstarts. At the moment each role can have one kickstart file. It must be
+    called ks_<rolename>.cfg and it must be in the tests/r_<rolename>/ directory.
+
+What language are tests written in?
+-----------------------------------
+
+As of June 2011, all of the test scripts are written in Bash. You're free to write test scripts in any language that's installable via yum - Python/Perl/Ruby etc. The only proviso is that, as a first step, you make a call (using a simple Bash script) to:
+
+    t_InstallPackage <package name>
+
+to install whatever package(s) need to be available for your subsequent, non-Bash test scripts to execute against. In short, at least some part of your test scripts will need to be in Bash.
+
+What's t_InstallPackage?
+------------------------
+
+To promote a manageable level of consistency across the test suite, a handful of useful functions and variables have been consolidated into a small (but expanding) Bash library. Standard test script tasks such as logging, service control, package installation etc should (ideally) all be performed via calls to functions provided by the helper library. Unless there's a sound (and documented) reason for not doing so, use of the library should be preferred at all times.
+
+Is the Bash library documented anywhere?
+----------------------------------------
+
+All of the functions available in the Bash library are fully documented using comments contained within the library itself. There's nothing particularly complicated or cryptic in the implementations, so you should be able to work out what's going on fairly easily. If the usage of anything in the library isn't obviously self-evident, please let us know. The library itself is by no means comprehensive and simply serves as the basis for writing consistent test scripts.
+
+How do I use the library?
+-------------------------
+
+You should include the following statement in your test script:
+
+    source ../0_lib/functions.sh
+
+What return value/exit status should I return?
+----------------------------------------------
+
+Your scripts should exit with a status code of 0 to indicate success, and 1 to indicate failure.
+
+What environment is best for writing tests?
+-------------------------------------------
+
+Something with a working copy of git, a text editor to write tests in, and (preferably) a virtual machine environment so you can run tests, roll back, and repeat as often as needed.
+
+What should I be testing for?
+-----------------------------
+
+First and foremost, that the packages you've chosen to test install correctly. This should be the first thing your script does, via a call to t_InstallPackage. t_InstallPackage will (as the name suggests) attempt to install the requested packages from the local QA repository via a call to yum. If the yum install process fails, t_InstallPackage will exit with a fail status code, and the test harness halts.
+
+Assuming the package(s) install correctly, your scripts should then exercise the package's binaries in whatever way you see fit - start with something simple; perhaps just calling a binary with the --help switch and checking that the exit status is correct (or grep'ing the output for expected words).
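A minimal sketch of such a simple check (illustrative only - grep stands in for the package binary under test, and stand-in definitions of t_Log and t_CheckExitStatus are included here so the sketch runs outside the harness; in the real suite those helpers come from tests/0_lib/functions.sh):

```shell
#!/bin/bash
# Stand-ins for the helper library functions, so this sketch is self-contained.
t_Log() { echo "[+] $(date) -> $*"; }
t_CheckExitStatus() {
    if [ "$1" -eq 0 ]; then
        t_Log 'PASS'
    else
        t_Log 'FAIL'
        exit 1
    fi
}

t_Log "Running $0 - checking grep runs and returns a zero exit status"

# Exercise the binary in the simplest possible way
grep --help > /dev/null 2>&1
t_CheckExitStatus $?
```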
Once you're comfortable with how the test scripts work, try something more inventive/advanced :-)
+
+Is there an execution order?
+----------------------------
+
+Tests are executed in alphabetical order. Any files named 'readme' (case insensitive) or starting with '_' are ignored. On that basis, if you have any shared variables or config values that need a home, you could put them in a file named (for example) '_config' and refer to it from within your test scripts. You are of course free to keep everything inside a single file, but if it's a common value shared amongst your test scripts for a given package, it might make sense to separate things into a stand-alone file; whatever you believe is the most manageable arrangement.
+
+How much debugging output should I provide?
+-------------------------------------------
+
+You're free to produce as much debugging output as you feel is necessary to convey the actions your script is performing. If your script returns an exit status indicating failure, it's (obviously) a lot easier to decipher what went wrong if your script is emitting clear and concise messages. As a first step, each of your test scripts should make a call to t_Log (or similar, if you're not using Bash), including the name of your script and a short description of what you're testing for. For example:
+
+    t_Log "Running $0 - checking procinfo runs and returns a zero exit status."
+
+What should I name my tests?
+----------------------------
+
+Scripts are processed in alphabetical order and grouped together into folders on a per-package basis. Package test folders should be named p_XXX, where XXX matches the output of:
+
+    rpm -q --qf "%{name}\n"
+
+Following the same approach, files within each package test folder are processed in alphabetical order, so (for example) tests that start with '0_' are processed before those starting with '5_' and so on.
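One gotcha worth noting: the order really is alphabetical, not numeric, so a '10_' prefix sorts before '5_'. A quick sketch (file names are illustrative; zero-pad the prefix, eg '05_', if you need more than ten numbered slots):

```shell
# Alphabetical (lexicographic) comparison works character by character,
# so '1' < '5' puts 10_advanced.sh ahead of 5_basic.sh.
printf '%s\n' 0_install.sh 5_basic.sh 10_advanced.sh | LC_ALL=C sort
# prints: 0_install.sh, 10_advanced.sh, 5_basic.sh
```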
You should install any packages that your test requires in low-numbered scripts, and then test against those packages in incrementally higher-numbered scripts. If that makes no sense, see the practical example at the end of this document.
+
+How do I test my tests?
+-----------------------
+
+In order to test your scripts in "stand-alone" mode, you'll need to run the following command (assuming you're in the t_functional directory):
+
+    source tests/0_lib/functions.sh
+
+You can try executing the runtests.sh script found in t_functional, but some of the tests in 0_common will fail owing to the repo.centos.qa hosts being unreachable outside of the CentOS QA environment. You're welcome to remove the execute permissions from '00_qa_repo_config.sh' and '30_dns_works.sh' if you want to run the rest of the test suite.
+
+What comments should I include in my tests?
+-------------------------------------------
+
+Start your tests with a comment block which includes your name and e-mail address. After that, make a call to t_Log, passing in $0 (or the non-Bash equivalent for your script's file name). Something like:
+
+    t_Log "Running $0 - Postfix SMTP socket connect + 220 banner response test."
+
+
+=== A Practical Example ===
+
+We'll now assemble all of the above information into a practical example, to help get you started. For the purposes of this example, we're going to stick to Bash - adapt as required based on your language of choice.
+
+Note, before running any tests, you should add the following entry to your /etc/hosts:
+
+Firstly, get yourself a copy of the current testing repository. This is available via:
+
+    https://gitorious.org/testautomation/t_functional
+
+If you're not familiar with how git works, spend some time searching around the web for a couple of git tutorials to help you get comfortable with the concepts, terminology and execution.
+
+Once you've got a working tree, it's time to pick a package.
For the purposes of this example, we'll use AIDE (http://aide.sourceforge.net/) - the Advanced Intrusion Detection Engine.
+
+The first thing to do is create a folder for your package, using the standard `mkdir':
+
+    cd t_functional/tests
+    mkdir p_aide
+
+Now that we have a home for our tests, we need to set about getting the package installed. Repeating the advice provided earlier, all test scripts are executed in alphabetical order, so we'll put a call to t_InstallPackage in a file named '0-install-aide.sh'. Using your preferred editor:
+
+    #!/bin/bash
+
+    t_Log "$0 - installing AIDE"
+    t_InstallPackage aide
+
+That's all we need. Breaking this down, we start our script with a logging statement via t_Log. Nothing particularly special/complex going on there. Following on, we get our package installed via a call to a library-provided function - t_InstallPackage. You don't need to check the return values from either of these functions: t_InstallPackage evaluates the exit status from yum and, if there's a problem, will abort the test run.
+
+Now to write a (very) simple test script to exercise the AIDE binary. Back in your editor, create a new file called '5-aide-basic-test.sh':
+
+    #!/bin/bash
+
+    t_Log "$0 - basic AIDE initialisation test"
+
+    AIDE=`which aide`
+    [ -x "$AIDE" ] || { t_Log "FAIL: AIDE binary doesn't exist or isn't executable"; exit $FAIL; }
+
+    # Perform an initialisation of the AIDE database
+    $AIDE --init
+
+    # Check for a 0 exit status
+    t_CheckExitStatus $?
+
+Again, nothing particularly complex here. The only thing probably worth explaining is the call to `t_CheckExitStatus', which is just a convenience wrapper around an evaluation of $? against 0. Using t_CheckExitStatus is the preferred means of evaluating exit codes from previously called functions. If the exit status isn't 0, a failure message is logged and the test harness halts.
+
+Remember that both scripts need to be chmod +x in order to be executed - the harness skips anything without execute permissions.
+
+That's it!
:-)
+
diff --git a/readme b/readme
deleted file mode 100644
index f6adbd8..0000000
--- a/readme
+++ /dev/null
@@ -1,56 +0,0 @@
-This Git repo contains all the functional tests run for various packages and roles in the CentOS build + release process
-
-filesystem layout:
-
-tests/ : contains all test scripts
-
-tests/0_lib/ : contains all the common functions and shared code for
-               the tests all files in that directory are 'sourced'
-               before any of the tests are run, which also means it
-               can only contain bash code ( no subdir allowed )
-
-tests/0_common/ : Contain's tests that are run before any other test,
-                  and immediately after the 0_lib/ code is sourced.
-                  These should be tests that check system sanity and
-                  environment. These tests should also not leave behind
-                  any state or content residue that would impact package
-                  and role specific tests that are run after
-
-tests/p_/ : Each of the p_ directories would contain tests
-            for that specific package. The needs to be -
-            rpm --qf "%{name}\n" for the srpm.
-
-            All package tests are run on a machine which has a minimal
-            install. Its not possible, at this time, to have a kickstart
-            attached with the package tests.
-
-tests/r_ : Each of the r_ directories should contain the tests
-           specific to a role. eg: 'lamp'. The test harness looks at
-           a file called 'package_deps' inside each of the role directories
-           and runs the role tests if any package listed in that file
-           has been changed / built etc.
-
-           Role tests can be run with specific kickstarts. At the moment
-           each role can have 1 kickstart file. It must be called
-           ks_.cfg and it must be in the tests/r_/ directory
-
-notes...
-- each of the directories are parsed in alphabetical order, so its possible
-  to set some sort of a run order by using _
-
-- if tests are written in any language other than bash, its upto the test
-  to install the required environment ( including python, ruby, perl..
)
-
-all files named 'readme', or starting with an underscore are ignored
-  by the test harness ( so one can use the _ to host metadata
-  or any config that might be needed for a test.
-
-ToDo:
-- At the moment the tests are not distro / arch specific, we might need a way
-  to clearly mark a test as -only-for-c4- or -only-for-c5- etc ( specially
-  when there are kickstart files involved, there are some non trivial changes
-  in that area moving towards c6 )
-
-- We also need a way to dictate if a role_ should be run on realiron,
-  virtual machine host, virtual machine ( and maybe even the type of virt
-  being used! )
diff --git a/runtests.sh b/runtests.sh
index 9fa697e..a6cfa1d 100755
--- a/runtests.sh
+++ b/runtests.sh
@@ -3,9 +3,6 @@
 # Author: Steve Barnes (steve@echo.id.au)
 # Description: this script sources our library functions and starts a test run.
 
-export readonly PASS=0
-export readonly FAIL=1
-
 echo -e "\n[+] `date` -> CentOS QA $0 starting."
 
 LIB_FUNCTIONS='./tests/0_lib/functions.sh'
diff --git a/tests/0_lib/functions.sh b/tests/0_lib/functions.sh
index e943566..266503d 100755
--- a/tests/0_lib/functions.sh
+++ b/tests/0_lib/functions.sh
@@ -1,5 +1,9 @@
 #!/bin/bash
 
+# Human-friendly symbols
+readonly PASS=0 FAIL=1
+export PASS FAIL
+
 # Description: call this function whenever you need to log output (preferred to calling echo)
 # Arguments: log string to display
 function t_Log
@@ -36,7 +40,7 @@ function t_RemovePackage
 }
 
 # Description: call this to process a list of folders containing test scripts
-# Arguments: a list of folder paths to process (see example in runtests.sh)
+# Arguments: a file handle from which to read the names of paths to process.
 function t_Process
 {
     exec 7< $@
@@ -66,9 +70,9 @@ function t_CheckDeps
 }
 
 # Description: perform a service control and sleep for a few seconds to let
-#              the dust settle. Failing to do this means tests that check for an
-#              open network port or response banner will probably fail for no
-#              apparent reason.
+#              the dust settle. Using this function avoids a race condition wherein
+#              subsequent tests execute (and typically fail) before a service has had a
+#              chance to fully start/open a network port etc.
 function t_ServiceControl
 {
     /sbin/service $1 $2