How To Form Enabler Stories for Testing in SAFe

As a SAFe Program Consultant, I am often asked how to write user stories for verification and validation (or simply, testing). It is actually simpler than writing development user stories. Here are a few points to answer these concerns, based on actual SAFe implementations:

1. Understand that testing is a compliance enabler. In SAFe, we have the concept of regular user stories, which cover the functional parts of the deliverable product, and enabler stories, which cover the things we need to do that do not directly change the product itself, such as testing. There are four types of enablers according to SAFe: exploration enablers (e.g. research spikes), infrastructure enablers (e.g. building your CI servers), architecture enablers (e.g. architecture planning and design), and compliance enablers, which include testing. Compliance enablers are requirements set by product management or the government prior to delivery. Other examples include documentation, design reviews, proper licensing, contract signing, getting the necessary permits, etc.

2. List the ongoing development features. A feature is the smallest chunk of value that is deliverable to the customer, and the purpose of separating the testing effort is, normally, to obtain unbiased testing from the user’s point of view. Expect more user stories than enabler test stories in the team backlog, because lower-level testing should already be part of the user story Definition of Done. A feature might have multiple enabler stories: design review, testing, documentation, etc. Since you are only listing features that are planned for development, this list should already be in order of priority / business value.

3. Create features for tasks that cannot be mapped to ongoing features. Remember that tests are expected to be written before the actual user story development, so mapping should be to ongoing features only. Otherwise, your team will be reopening features that have long been closed. But these cases do exist, and to handle them, you can get product management to create features for technical debt. Don’t forget to provide input for the benefit hypothesis so the work doesn’t keep getting pushed back in the iteration or program increment. This will play a crucial part in step 5.

Feature: “Test Creation for Legacy Features”
Benefit Hypothesis: “Increase test coverage to speed up feedback in case of feature breakage.”

4. Write the enabler stories; they do not require the user voice format. We often write user stories in the format “As a… I want to… so that…”; this is called the user voice format. But since we are dealing with enabler test stories here, we can save ourselves some time formulating the user voice, especially since our “as a…” will be the same most of the time. No one’s stopping you from using the format, but it is not required either. However, you do need to include the acceptance criteria addressed by the enabler story to keep the team aligned with its purpose. See the example below:

Enabler story: “Set up a new test line so we can reduce the regression run time in half.”

5. Prioritize and sequence the work with your Product Management. Since you now have new features that already carry priority assignments, you need to realign with your PM. Negotiate if needed, and make sure that the PO/PM is well informed of the effects if certain items are not done or are delayed. Your benefit hypothesis will be useful here because it states why the enabler story is important. This step can even be done prior to step 4 when the feature size is easily predictable; otherwise, breaking the feature down into enabler stories first can help in prioritization.
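If your team tracks the backlog in a tool or script, the relationship described above (features carrying a benefit hypothesis and a business value, each with its mapped enabler stories) can be modeled quite simply. The following is a minimal, hypothetical sketch; the class names, field names, and sample values are invented for illustration and do not come from any SAFe tooling:

```python
from dataclasses import dataclass, field

@dataclass
class EnablerStory:
    description: str
    acceptance_criteria: list[str]  # keeps the team aligned with the story's purpose

@dataclass
class Feature:
    name: str
    benefit_hypothesis: str   # why the work matters; used when negotiating with the PM
    business_value: int       # higher = more valuable; assigned by Product Management
    enablers: list[EnablerStory] = field(default_factory=list)

# A hypothetical team backlog with enabler stories mapped to their features.
backlog = [
    Feature("Checkout flow", "Reduce cart abandonment", 8,
            [EnablerStory("Automated regression suite for checkout",
                          ["Covers all payment paths"])]),
    Feature("Test Creation for Legacy Features",
            "Increase test coverage to speed up feedback in case of feature breakage", 5,
            [EnablerStory("Set up a new test line",
                          ["Regression run time reduced by half"])]),
]

# Order the backlog by business value as a starting point for the PM discussion.
for feature in sorted(backlog, key=lambda f: f.business_value, reverse=True):
    print(f"{feature.business_value:>2}  {feature.name}  enablers={len(feature.enablers)}")
```

The sorted list is only a first cut; the actual sequence still gets negotiated with the PO/PM, which is where the benefit hypothesis carries the argument.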

Frequently asked questions:

Does this mean that the test team is unlikely to finish anything within an iteration in SAFe? No. This is why PI planning is needed: it results in a program board that makes visible to everyone the iteration in which a feature will most likely be done. Test planning should be based on this.

Is there even a test team in SAFe? There is no dedicated test team in SAFe; there is only the system team. They are responsible for maintaining the CI/CD infrastructure and some high-level testing, most of which should be automated.

Are all of these mandatory in implementing SAFe? Enablers are an essential part of SAFe; they ensure that the product does not get stuck in a rut. However, SAFe is a framework that you can adapt to what works for your team or organization. Just make sure you stay aligned with the Agile principles!