| Thursday, November 7, 2002: 10:30 AM |
|T1 Test Management|
Pressure-Cooker Testing: What to Do When the Squeeze Is On
Geoff Horne, iSQA
All things are possible in the face of adversity, even an under-resourced testing project with an immovable deadline. Many testing projects start out with high ideals, then descend into mad panic when the realities begin to set in. Usually by this stage it’s too late to back out of commitments, yet delivering a product that doesn’t meet customer and business expectations is not an option. Geoff Horne offers some useful insights for taking a resource- and time-challenged project and turning it into a successful endeavor that still delivers a quality solution.
• Assemble an appropriate test team even when there are few testers around
• Put together a test schedule that’s realistic and achievable
• Learn to set and reset expectations
|T2 Test Techniques|
Traps That Can Kill a Review Program (And How to Avoid Them)
Esther Derby, Esther Derby Associates, Inc.
Technical reviews have been around for a long time, and they’re generally recognized as a “good thing” for building quality software and reducing the cost of rework. Yet many software companies start to do reviews only to have the review program falter. So the question remains: How can you succeed with a review program? Management support and good training for review leaders is a good place to start. But it’s the details of implementation that truly determine whether reviews will stick, or they’ll fall by the wayside. Esther Derby offers her insights based on observations from both successful and failed review programs.
• Set the criteria for what to review and how much time to invest in reviews
• Learn to recognize material that isn’t reviewable and what to do about it
• Recognize and avoid political traps that can kill review programs
|T3 Test Automation|
A Test Automation Harness for the PocketPC
Ravindra Velhal, Intel Corporation
The emergence of the handheld platform is an exciting opportunity to reapply quality and usability paradigms. It gives us the chance to establish new, industrywide quality benchmarks for handheld applications that may propel society beyond the traditional human-machine interface. Handheld-based computing has its potential — and its limits. But in moving from desktop-centered quality assurance to handheld-centered applications, there will be changes that affect software testing techniques. And we must be prepared. This session covers the basics of handheld automation, details what’s needed before designing test automation, and demonstrates a repository for tests designed for the PocketPC.
• Find out how things are different when the same interface/technology is ported from the desktop to a handheld device
• Explore the differences in the usability paradigms
• Obtain an overview and see a demonstration of the test harness designed for the PocketPC
|T4 Web/eBusiness Testing|
Getting Things Done: Practical Web Application/eCommerce Stress Testing
Robert Sabourin, AmiBug.Com Inc.
Web and eCommerce applications are still the rising, often unreachable, stars of the testing world. Your team’s ability to effectively stress test Web applications — before your customers do — is critical. This double-track session shows you the tools that support stress testing, including several that cost absolutely nothing. It also walks you through a variety of approaches to stress testing that are available during all phases of development. This journey allows you to develop a plan to automate your stress testing, as well as know how and when to implement it as part of the software development process.
• Obtain a Web application stress testing overview: Who, What, When, Where, Why, and How
• Analyze real-world examples of load testing for Web applications
• Walk away with tools and techniques to improve your stress testing capabilities
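The kind of stress-test driver this session surveys can be sketched in a few lines: simulated users fire a workload concurrently while latencies and errors are collected. This is an illustrative toy, not one of the session's tools; the dummy workload stands in for a real HTTP request against the application under test.

```python
import statistics
import threading
import time


def stress_test(workload, users=10, requests_per_user=20):
    """Run `workload` concurrently from several simulated users and
    collect per-request latencies and errors (a toy load driver)."""
    latencies, errors = [], []
    lock = threading.Lock()

    def user():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            try:
                workload()
                elapsed = time.perf_counter() - start
                with lock:
                    latencies.append(elapsed)
            except Exception as exc:
                with lock:
                    errors.append(exc)

    threads = [threading.Thread(target=user) for _ in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    return {
        "requests": len(latencies),
        "errors": len(errors),
        "mean_s": statistics.mean(latencies) if latencies else None,
        "p95_s": sorted(latencies)[max(int(len(latencies) * 0.95) - 1, 0)]
        if latencies else None,
    }


# Dummy workload; in practice this would be an HTTP call to the page under test.
report = stress_test(lambda: time.sleep(0.001), users=5, requests_per_user=10)
```

Replacing the lambda with a real request function turns this into a crude but free stress tool, in the spirit of the zero-cost options the session mentions.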
|T5 Advanced Topics|
Creating Five-Star Test Metrics on a One-Star Budget
Rick Tennant, SBC Communications, Inc.
Creating concise and useful metrics can be a challenge for any testing organization. Rick Tennant outlines how one organization used a simple, low-cost method to collect data and produce relevant test metrics. These metrics were the vital signs that helped guide the project team, while keeping senior management informed about the project’s quality and status. He also shows you how to turn these critical metrics into succinct yet meaningful reports.
• Define useful metrics that can be easily collected from test planning and test defect information
• Use your metrics to manage a testing effort and evaluate individual performance
• Launch into overall software process improvement using test metrics as a springboard
| Thursday, November 7, 2002: 11:30 AM |
|T6 Test Management|
How to Successfully Communicate the State of Testing in Your Organization
Robert L. Galen, EMC Corporation
Many QA, process improvement, and test engineering personnel feel their company’s management doesn’t sufficiently understand, support, or value their contributions. And you know what? They’re right! But what’s the root cause of this lack of recognition? Robert Galen believes it’s our inability to effectively communicate … to promote our teams and our abilities. We expect that our work will speak for itself, or that our value proposition, accompanied by metrics and data, will make our case for us. But as a discipline, we need to improve our salesmanship when it comes to our contribution. This session imparts the public relations (PR) skills you need to employ so that your key stakeholders will better understand your role and its importance.
• Obtain broad techniques to improve your PR
• Learn to leverage unexpected PR opportunities such as defects, quality assessments, and hallway encounters to sell the value of testing
• Find out why attitude makes a tremendous difference in perceptions and PR
|T7 Test Automation|
Keyword Testing at the Object Level
Brian Qualters, TurboTesting Concepts
It’s time to put a new spin on the technique of keyword testing using a data-driven engine. Brian Qualters shows you how to place your focus not on the action or process to be completed, but rather on the type of object being manipulated. This redirected focus lets you avoid the pitfalls and resource requirements encountered when you move on to test another application. He demonstrates how this modified approach can be integrated into manual test case creation to tremendously improve efficiency.
• Learn to avoid some of the maintenance issues of automation
• Make scripts easily maintainable
• Keep the process unobtrusive to the manual test effort
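The object-level idea the abstract describes can be sketched as a keyword engine whose handlers are registered per object type rather than per action; porting to a new application then means supplying new object handlers, not rewriting the engine. All class and keyword names below are illustrative, not the presenter's implementation:

```python
# Object-level keyword engine: the dispatch key is the object TYPE,
# and each handler class supplies the actions that type supports.
HANDLERS = {}


def handles(object_type):
    """Class decorator registering a handler for one object type."""
    def register(cls):
        HANDLERS[object_type] = cls()
        return cls
    return register


@handles("textbox")
class TextBox:
    def set(self, target, value, ui):
        ui[target] = value

    def verify(self, target, value, ui):
        assert ui.get(target) == value, f"{target} != {value!r}"


@handles("button")
class Button:
    def click(self, target, value, ui):
        ui.setdefault("clicks", []).append(target)


def run(test_table, ui):
    """Each data-driven row: (object_type, action, target, value)."""
    for object_type, action, target, value in test_table:
        handler = HANDLERS[object_type]
        getattr(handler, action)(target, value, ui)


# The `ui` dict stands in for a real GUI-automation session.
state = {}
run([
    ("textbox", "set", "username", "tester"),
    ("textbox", "verify", "username", "tester"),
    ("button", "click", "login", None),
], state)
```

Because the test table is plain data, the same rows can double as manual test steps, which is the efficiency gain the abstract points to.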
|T8 Advanced Topics|
Applying Orthogonal Defect Classification Principles to Software Testing
Suzanne Garner, Cisco Systems
Test escape analysis and corrective action tracking (TEACAT) is a method used to collect and utilize information about the causes of test escapes to prevent customer-found defects and improve internal test, development, and release processes. The TEACAT approach provides testers and test managers with the primary causes of defect escapes from their organizations into the field. Suzanne Garner takes you through the test escape analysis process at Cisco and shows you how test-specific ODC fields can be employed to provide customer focus to test process improvement activities, and ensure that test gaps are closed.
• Explore the principles and values of test escape analysis
• Discover how automated tools support this process
• Learn the three main test escape analysis fields and the values that can be assigned to each
| Thursday, November 7, 2002: 1:30 PM |
|T9 Test Management|
Conversations I Never Expected to Have as a Test Manager
Johanna Rothman, Rothman Consulting Group, Inc.
There are times in a test manager’s career when the work situation becomes surreal. If you’ve been in situations where you think you must be dreaming, sometimes it helps to look at things from the other person’s perspective. As we mature in our jobs, we can examine these situations and see how to better answer the questions we have about unexpected communications. In this session we’ll look at some typical conversations and discuss alternative ways to help everyone find the true reality, then better deal with the situation. From her years of experience as a consultant and her personal encounters, Johanna Rothman shares her insights and gets you involved in discovering what’s really being said in these strange conversations.
• Learn to recognize when it’s actually a communication problem
• Examine various solutions to typical test manager communication issues
• Maintain your sense of humor in the face of all adversity
|T10 Test Techniques|
Hand-Over Tests in the Integration Process
Kemal Balioglu, Siemens EMIS
Integration of software components, especially in complex software systems, often fails in test, resulting in disharmony between the development and test team members. In today’s global environment, however, where software components are developed and tested at several locations, there’s an even greater probability for integration testing issues to occur. Hand-over tests are a technique designed to improve the integration process, because these integration tests are performed by the tester and the developer … together. This process ensures that both sides have the same understanding of requirements, success, and failure. Plus the developer gets instant feedback on existing problems and can investigate them immediately.
• Learn how to prepare for and execute hand-over tests
• Find out how hand-over tests are often a solution for the most critical integration testing issues
• Get recommendations for implementation of hand-over tests based on real-world experiences
|T11 Test Automation|
Selecting and Implementing a Test Management Tool
Rutesh Shah, Arsin Corporation
Tool selection is always tricky with its endless choices from vendors, boundless feature lists, and myriad requirements from demanding team members. You’ve probably read a lot about selecting a test automation tool, but what about a test management tool? Emerging tools are more feature-rich than ever, and in many organizations they’ve become a permanent fixture in the QA environment. This session gives you a road map for the selection and implementation of a test management tool. Rutesh Shah presents a case study of a tool’s implementation in the QA organization of a large banking firm. Learn how to use a test management tool to derive process efficiency, resource utilization, and testing status metrics.
• Determine the desired characteristics of a test management tool
• Get tips to avoid having your test management tool become shelfware
• Learn ways to get your organization ready to adopt this enhanced process
|T12 Test Process Improvement|
Making a Difference with Test Assessments
Sigrid Eldh, Ericsson AB
Test assessments are a powerful way to understand the current status of your testing. These assessments provide an independent view of where you are and they guide you to where you’re going. They also highlight what your team needs to do to reach its testing goals. From her experiences performing test assessments, Sigrid Eldh covers all aspects of her assessment approach including processes, management issues, automation, and test deliverables. You too can use assessments as a tool for your own success because they confront these contextual issues and give you valuable feedback that can be implemented immediately.
• Get a step-by-step explanation of how to improve your testing using assessments
• Learn about the positive side benefits of using assessments
• Find out how to get immediate benefits from a test assessment
|T13 Advanced Topics|
eXtreme Programming’s Unit Test Fixtures: Experience from the Field
Stan Bell, McKesson
Are you interested in adopting eXtreme Programming’s (XP) unit test fixtures and related methods? Stan Bell shares his team’s experiences with the Visual Basic version of the xUnit unit test framework. He then explains the methodology employed in the development shop, i.e., how engineers and QA analysts interacted prior to the application of this technique versus after. He points out the challenges, pitfalls, and successes encountered during the adoption process, and reports on the much-improved defect detection and correction rates that occurred post-adoption.
• Gain a basic understanding of the XP code/unit test method
• Learn the benefits of using a unit test fixture on components — and find out where they don’t help
• Examine the process improvements realized with this method
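The xUnit fixture pattern the session covers looks essentially the same in every language; here is a minimal sketch in Python's built-in unittest (the session itself used a Visual Basic xUnit port). The `ShoppingCart` component is invented for illustration:

```python
import unittest


class ShoppingCart:
    """Tiny component under test (illustrative only)."""

    def __init__(self):
        self._items = []

    def add(self, name, price):
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)


class CartFixture(unittest.TestCase):
    # setUp IS the fixture: a known state rebuilt before every test,
    # so tests stay independent of one another.
    def setUp(self):
        self.cart = ShoppingCart()
        self.cart.add("book", 10.0)

    def test_total_includes_existing_item(self):
        self.assertEqual(self.cart.total(), 10.0)

    def test_add_accumulates(self):
        self.cart.add("pen", 2.5)
        self.assertEqual(self.cart.total(), 12.5)
```

Run with `python -m unittest` from the project directory; each test method gets a fresh cart, which is what makes fixtures so effective at localizing defects.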
| Thursday, November 7, 2002: 2:30 PM |
|T14 Test Techniques|
Feature Risk Analysis: What Do I Test First? And Last?
Steve Tolman, PowerQuest Corporation
Feature risk analysis is a quick, valuable method designed to determine which features need the most testing, and which need the least. Steve Tolman shows you how to gain a basic, yet very usable, understanding of how to prioritize the testing of features in any given test cycle. Intended for off-the-shelf software, this method delivers a definable and variable starting and ending point for testing.
• Arrive at a realistic quantification and qualification of test work for off-the-shelf software
• Develop itemized and grouped test items
• Determine the proper starting and ending points for testing
|T15 Test Automation|
Test Lab Stability Through Health Check Test Automation
John Rappa, Verizon
New application code is installed on Sunday. Your test team arrives on Monday to run test scripts and certify the release. Unfortunately, one environmental problem leads to another and suddenly it’s Friday before you run your first test script against the new code. Does this sound familiar? One way to buck this trend is to run daily health checks on the test environment. By running daily health checks, you’ll minimize the time required to test new application code installs. Plus, you’ll improve your test environment stability, reduce the number of variables to examine when a test fails, and reduce tension between your development and test teams.
• Find out how to create health checks on your test environment
• Select which health checks you should automate to improve stability
• Measure your success through the daily running of health checks
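A daily health-check runner of the kind this session advocates can be very small: each check reports its name, a pass/fail flag, and a detail string, and the runner separates out the failures. The specific checks below (disk space, port reachability) are illustrative examples, not the presenter's checklist:

```python
import shutil
import socket

# Each check is a zero-argument callable returning (name, ok, detail).


def check_disk_space(path="/", min_free_bytes=1_000_000):
    """Fail if the test environment is running out of disk."""
    free = shutil.disk_usage(path).free
    return ("disk_space", free >= min_free_bytes, f"{free} bytes free")


def check_port(host, port, timeout=2.0):
    """Fail if a service the tests depend on is unreachable."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (f"port {host}:{port}", True, "reachable")
    except OSError as exc:
        return (f"port {host}:{port}", False, str(exc))


def run_health_checks(checks):
    """Run every check; return all results plus the failing subset."""
    results = [check() for check in checks]
    failures = [r for r in results if not r[1]]
    return results, failures


# Example: verify local disk space before the test day starts.
results, failures = run_health_checks([check_disk_space])
```

Scheduled each morning (cron, Windows Task Scheduler), a script like this tells the team before the first test script runs whether the environment, rather than the new code, is the problem.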
|T16 Test Process Improvement|
Improvement Is a Journey: A Software Test Improvement Road Map
Karen Rosengren, IBM
With the wide array of software testing practices out there, how do you know where to start? Karen Rosengren shows you how a group of IBM testers developed a road map for implementing practices that takes into consideration things such as the skills required to implement them and how the practices relate to one another. She also explains IBM’s Software Testing Improvement Road Map (STIR), which defines the levels of testing practices from “basic” to “engineered”.
• Examine how one group organized a set of good practices to increase the potential of successful deployment
• Get started evaluating where your team is and what the next improvement actions should be
• Determine how to implement practices in steps and keep them on the path to continuous improvement
|T17 Advanced Topics|
Automated Testing for Programmable Logic Control Systems
Reginald B. Howard, Argonne National Laboratory
Developing real-time, automated testing for mission-critical programmable logic controller (PLC)-based control systems has been a challenge for many scientists and engineers. Some have elected to use customized software and hardware as a solution, but that can be expensive and time consuming to develop. Reginald Howard shows you a way to integrate a suite of commercially available, off-the-shelf tools and hardware to develop a scalable, Windows-based testing platform that’s capable of performing an array of different tests including, but not limited to, black box, destructive, regression, and system security testing. He describes the use of the Jelinski-Moranda statistical model for determining expected results from automated tests.
• Learn how to apply automated testing to a real-time application
• Build your automated test platform using standard, commercial software and hardware
• Use a statistical model to develop estimates of expected test results
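For readers unfamiliar with the Jelinski-Moranda model the last bullet refers to: it assumes a program starts with N faults, each contributing an equal amount φ to the failure rate, so the hazard before the i-th failure is φ(N − i + 1). A minimal sketch (the parameter names are the model's standard ones, not the presenter's code):

```python
def jm_intensity(N, phi, i):
    """Jelinski-Moranda failure intensity before the i-th failure:
    proportional to the number of faults still in the code."""
    return phi * (N - i + 1)


def expected_time_to_next_failure(N, phi, failures_so_far):
    """Mean time to the next failure under the model; infinite once
    all N faults have been found and removed."""
    remaining = N - failures_so_far
    if remaining <= 0:
        return float("inf")
    return 1.0 / (phi * remaining)
```

With N and φ estimated from observed inter-failure times, these expressions give the expected test outcomes a harness can compare its actual results against.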