SyneRyder - journal

How To Run A Beta Test... Or Not?

by Kohan Ikin // 13th September 2004, 10:20pm



I had high hopes for this beta test. It was a 0.1 upgrade release of a product I'd been selling for 3 years. Most of the bugs had already been squashed, so I intended to spend most of my time surveying testers and soliciting ideas for improvements. I thought I could build word-of-mouth by using a large beta testing team. The expectation was that this would be a very quick and simple beta.

Oh man. How wrong could I have been?

Several aspects of the test went okay, but the changes I made to how I run my beta tests caused problems. These are some of the lessons I learned from the latest test.

Build a huge database of beta candidates

One of the things namesuppressed has always done right is build a database of beta testing candidates. It tracks contact info, demographic data, hardware and software, and other metadata. I'm glad for that - it's clear you need a lot of candidates just to find a good testing team to choose from. Asking candidates to complete an application form also helps to weed out testers who are only interested in freebies.

35% of candidates in the database were deemed ineligible for the test. Eligibility criteria included compatible software/hardware, a history of answering emails, and a history of providing feedback. That left 65% of the database to choose from.
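
To give a rough idea of what that filtering looks like, here's a short Python sketch. The field names and candidates are invented for the example - they're not the actual schema of the namesuppressed database:

    # Rough sketch of the eligibility filter idea. The field names are
    # illustrative only, not the real candidate database schema.
    candidates = [
        {"email": "alice@example.com", "has_compatible_system": True,
         "replied_to_emails": True, "gave_feedback": True},
        {"email": "bob@example.com", "has_compatible_system": False,
         "replied_to_emails": True, "gave_feedback": False},
    ]

    def is_eligible(c):
        # Eligibility = compatible software/hardware, a history of
        # answering emails, and a history of providing feedback.
        return (c["has_compatible_system"]
                and c["replied_to_emails"]
                and c["gave_feedback"])

    eligible = [c for c in candidates if is_eligible(c)]
    print("Eligible: %d of %d (%.0f%%)"
          % (len(eligible), len(candidates),
             100.0 * len(eligible) / len(candidates)))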

Of the invitations I sent to potential candidates:

Never roll your own if you don't have to

I decided to use the free mailing list software provided by my webhost, even though it had some drawbacks. I thought I could overcome those drawbacks by writing my own software to compensate for them - in fact, I already sell software that does this. I figured I'd make a couple of modifications to the program I sell and all would be fine....

Uh, no. The extra software and modifications I wrote caused major problems: people started getting two copies of every email, HTML emails were mangled, and some were modified in ways that triggered spam filters. Hotmail users received no emails at all until the problem was fixed - and then they were surprised by a sudden flood of messages. Fixing those problems was one of the most stressful parts of the whole beta.

I should have just started a free discussion group on Yahoo Groups, or purchased an account with Topica. Both are proven solutions. By writing my own software, I wasted a lot of time that could have been better spent. But on the bright side, I fixed some bugs in my own software.
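
I'm not going to pretend my code looked anything like this, but as an illustration: even a basic send loop that de-duplicates addresses and sticks to plain text might have avoided two of those problems. A minimal Python sketch, with placeholder addresses and the actual SMTP call commented out:

    import smtplib
    from email.message import EmailMessage

    # Placeholder subscriber list - in practice this would come from the
    # mailing list database. Note the duplicate address.
    subscribers = ["alice@example.com", "bob@example.com", "alice@example.com"]

    # De-duplicate (case-insensitively) so nobody gets two copies.
    unique = sorted({addr.strip().lower() for addr in subscribers})

    for addr in unique:
        msg = EmailMessage()
        msg["Subject"] = "[betatest] Beta 2 now available"
        msg["From"] = "beta@example.com"
        msg["To"] = addr
        # Plain text only - no rewriting of HTML that might trip spam filters.
        msg.set_content("The second beta is now available for download.")
        # with smtplib.SMTP("localhost") as smtp:
        #     smtp.send_message(msg)
        print("Would send to", addr)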

Make your testers opt-in themselves

Before the test began, I asked all testers personally if they would like to join an email discussion group for the beta test. Amazingly, everyone agreed - something even I hadn't expected. However, things soon deteriorated. As soon as the test began, some testers unsubscribed from the list straight away. More complained later and unsubscribed (or asked to be unsubscribed).

Part of the problem was that some testers didn't recognize the emails when they first started receiving them. Even though the list was "confirmed opt-in" (we had written confirmation from each tester's email address), it would have been better if testers had performed some action themselves to opt in (eg clicking a weblink) - that would have helped them realize they were joining an email list. Explaining how to identify the emails would also help (eg "The subject line of all beta test emails will begin with [betatest]" or something similar).
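
If I automate this next time, the opt-in might look something like the sketch below: generate a one-off token per tester, email them a confirmation link, and only subscribe them once the link is visited. The URL, addresses and storage here are placeholders, not how my list actually works:

    import secrets

    pending = {}        # token -> email address, awaiting confirmation
    subscribed = set()  # confirmed members of the beta list

    def invite(email):
        # Generate a one-off token and mail the tester a confirmation
        # link; they aren't subscribed until they click it.
        token = secrets.token_urlsafe(16)
        pending[token] = email
        link = "http://example.com/beta/confirm?token=" + token  # placeholder URL
        print("Email %s: click %s to join the [betatest] list" % (email, link))

    def confirm(token):
        # Called when the confirmation link is visited.
        email = pending.pop(token, None)
        if email:
            subscribed.add(email)
        return email

    invite("alice@example.com")  # placeholder address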

Expect the unforeseen

It's very rare for a beta test to go exactly as you plan. The whole idea of a beta test is to locate problems you couldn't find yourself. That approach needs to be applied to the beta testing process too.

Organize several beta testing groups

Anyone who has run beta tests before will know that it can be hard to keep enthusiasm up for the duration of the testing period. The best way to counter this is to bring in new groups of testers at regular intervals (every second beta, say). I didn't have enough candidates to try that this time. It does mean, however, that I can show you a graph of enthusiasm levels, measured by the frequency of messages throughout the testing period:

[Chart: frequency of beta tester feedback over the testing period]

Notice that enthusiasm is highest at the very start and is maintained for about a week. It is renewed slightly for the second beta, but doesn't last much beyond that. This is not a criticism of the beta testers in any way - it's just something to be expected.
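
For the curious, the chart is nothing fancier than message counts bucketed by week. Assuming the list archive is kept as a standard mbox file (the filename below is a placeholder), something like this would produce the same kind of graph:

    import mailbox
    from collections import Counter
    from email.utils import parsedate_to_datetime

    # Assumes a standard mbox archive of the list; the path is a placeholder.
    archive = mailbox.mbox("betatest-archive.mbox")

    per_week = Counter()
    for message in archive:
        date = message["Date"]
        if not date:
            continue
        try:
            dt = parsedate_to_datetime(date)
        except (TypeError, ValueError):
            continue  # skip malformed Date headers
        year, week, _ = dt.isocalendar()
        per_week[(year, week)] += 1

    for (year, week), count in sorted(per_week.items()):
        print("%d week %02d: %3d messages" % (year, week, count))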

Be exceptionally clear about expectations and etiquette

Our testers were confused about the purpose of our beta-tester email list. Here's what some of them thought it was for:

Announcements Only: Some testers thought the list would only include announcements of new beta downloads. We explained in the beta invitations that it was a discussion group they could participate in, but apparently it wasn't clear to everyone.

Bug Reports Only: Some testers thought the list was only for making bug reports. The idea was that telling everyone about the bugs would reduce duplicate bug reports. It's a nice idea, but when you have hundreds of messages it's difficult for everyone to keep track: you either get lots of duplicate reports anyway, or lots of people who don't report their bugs because "someone else probably found that bug already". (There's a small sketch of a better approach after this list.)

Socializing: Lots of testers thought the list was so they could talk to other testers about anything they wanted. Actually, we encouraged this idea, thinking it would make testers feel comfortable. It didn't quite work - some felt comfortable, others felt alienated.

Tutorials: Some testers thought the purpose was to share images created using the program we were testing, and to teach others how to create those images. I hadn't anticipated that, and it was too late to accommodate it. Tutorial lists involve lots of large attachments, which group members with dialup connections or small email inboxes couldn't handle. Also, some people like tutorial lists while others really dislike them, which caused a rift in the beta group.

So, what was our list really meant to be? A combination of the above: we announced all new betas on the list, expected bugs to be reported there, and expected some off-topic chat... even the occasional picture post. But we didn't make this clear in our initial beta test invitations, so no one knew what to expect or where the boundaries were. I guess that's because we didn't know where to set the boundaries either.
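
On the duplicate bug reports point: a crude automated similarity check against bugs already filed scales much better than expecting testers to remember hundreds of list messages. A Python sketch using only the standard difflib module, with invented bug summaries:

    from difflib import SequenceMatcher

    # Invented bug summaries, purely for illustration.
    known_bugs = [
        "Crash when applying filter to 16-bit images",
        "Preview window ignores the softness slider",
    ]

    def looks_like_duplicate(summary, threshold=0.6):
        # Flag an incoming report if its summary closely resembles one
        # already filed; return the existing bug it matches, if any.
        for known in known_bugs:
            ratio = SequenceMatcher(None, summary.lower(), known.lower()).ratio()
            if ratio >= threshold:
                return known
        return None

    print(looks_like_duplicate("Crash when applying filters to 16 bit images"))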

Manage Conflicts Within The Testing Group

On the surface, everything was fine - the test group seemed friendly, very active and quite productive. Behind the scenes, things were falling apart. I received angry and upset emails from people frustrated by the volume of email, people frustrated at the level of "off topic" discussion, people who felt shy or intimidated by other testers, and even people who just didn't get on with the other testers. Some statistics:

[Chart: beta tester emotions]

I'm not sure what I needed to do to fix this. Certainly I needed a better understanding of how to manage virtual communities, and how to develop a stable culture within the group. Perhaps I needed to set up two communities with different rules.

Take a final beta test survey

I was disappointed with the lack of response to my final beta tester survey. Testers were told that completing the survey was a necessary part of testing, right from the beta invitations we sent. Even so, we got a low survey response rate:

The surveys are extremely important. They provide measurable feedback on pricing, product features and general opinions of the product. We use the data to calculate maximal profit curves and select features to add in later versions. It's crucial information, so dropping the surveys isn't an option.
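
To make the "maximal profit curve" idea concrete: if the survey asks the highest price each tester would pay, then expected revenue at any given price is that price multiplied by the number of testers willing to pay it (and for a downloadable plugin, revenue is close enough to profit). A toy Python example with made-up numbers:

    # Toy profit curve from survey data. The willingness-to-pay figures
    # are invented, not real survey results.
    willing_to_pay = [15, 20, 20, 25, 30, 35, 50]  # one answer per respondent

    def revenue_at(price):
        # Expected revenue = price x number of respondents who would pay it.
        buyers = sum(1 for wtp in willing_to_pay if wtp >= price)
        return price * buyers

    for price in sorted(set(willing_to_pay)):
        buyers = sum(1 for wtp in willing_to_pay if wtp >= price)
        print("at $%d: %d buyers, revenue $%d" % (price, buyers, price * buyers))

    best = max(set(willing_to_pay), key=revenue_at)
    print("Revenue-maximising price in this toy data: $%d" % best)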

Another reason for the surveys is to elicit feedback from quiet testers. There are always some testers who never send bug reports or talk on the mailing list. Many of them will respond to an anonymous web survey though. The feedback is useful and often brutally honest - exactly what you want.

So, how do we increase the response rate?

In summary: what should we do in future?

Project Statistics

Name: Softener 1.20
Duration: 4 weeks [31 days, 120 hours work]
Development Platforms: Windows 98SE, Windows XP Professional
Release Platforms: Windows 95/98/98SE/ME/2000/XP
Lines Of Code: 3239 (Softener = 1661, nsPSPlugin = 1578)

Development Tools:
Textpad 4.6.2, Borland C++ Builder 3, FilterMeister 0.4.21, Ghost Installer 3.7, Ghost Installer 4.1, PADGen, Resource Hacker, XVI Hex Editor, Microsoft Virtual PC 2004, MySQL 3.23.44, MySQL Admin, MySQL Front 2.5, Beyond Compare 2, Jasc Paint Shop Pro 7, Jasc Paint Shop Pro 8, Jasc Paint Shop Pro 9 Beta, Jasc Paint Shop Pro Studio Beta, Adobe Photoshop 6 Tryout, Adobe Photoshop CS Tryout, Megalux Ultimate FX 1.3, Microsoft Wordpad, ezmlm, namesuppressed WebScriber, other bespoke namesuppressed software

