The 6-Month Strategy to Prevent Your Automation Project from Becoming a Total Disaster

Written by
Ben Fellows
Published on
August 15, 2024

I’ve spent the last five years selling test automation. The vast majority of companies I talk with have launched test automation initiatives in the past, only to abandon them.

What I’ve come to realize is that technical leaders don’t necessarily know how to manage complex test automation initiatives. They are so busy getting the app off the ground that they simply assign an automation engineer to the work, and six months later they have a pile of garbage that offers no one any value. Then the engineer leaves, and you are left looking at a worthless codebase that gets forgotten about.

Dramatic, I know, but I wanted to paint the picture so you understand how quickly these projects go off the rails. Below are some basic questions to ask your team, along with the answers you should expect.

Before You Greenlight Automation

Before your automation engineer begins writing a single line of code, there are several key questions they should be able to answer. This pre-automation checklist will help set clear expectations and provide a solid foundation for your initiative.

Framework Selection Question:

What framework are they going to use? 

Expected Answer: In today's landscape, you should expect to hear names like Playwright, Cypress, or Selenium. While proprietary tools exist, they often build upon these frameworks and can be more limiting than helpful. Your automation engineer should be able to justify their choice based on your specific needs.

Architectural Philosophy Question:

What architectural philosophy are they going to use? 

Expected Answer: The most common approaches are the Page Object Model or a Data-Driven Model. Your engineer should explain their choice and how it aligns with your project structure.
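To make the Page Object Model concrete, here is a minimal sketch in Python. The `FakePage` class is a hypothetical stand-in for a real driver (such as a Playwright `Page`), and the selectors are invented; only the pattern matters: selectors and interaction details live in one class, so tests read like user actions and a UI change touches a single file.

```python
class FakePage:
    """Hypothetical stand-in for a browser driver; records interactions."""
    def __init__(self):
        self.filled = {}
        self.clicked = []

    def fill(self, selector, value):
        self.filled[selector] = value

    def click(self, selector):
        self.clicked.append(selector)


class LoginPage:
    # Selectors live in one place; tests never mention them directly.
    USERNAME = "#username"
    PASSWORD = "#password"
    SUBMIT = "button[type=submit]"

    def __init__(self, page):
        self.page = page

    def log_in(self, username, password):
        # An intent-revealing method hides the mechanical steps.
        self.page.fill(self.USERNAME, username)
        self.page.fill(self.PASSWORD, password)
        self.page.click(self.SUBMIT)


page = FakePage()
LoginPage(page).log_in("alice", "s3cret")
print(page.clicked)  # -> ['button[type=submit]']
```

If the login form's markup changes, only `LoginPage` changes; every test that calls `log_in` keeps working untouched.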

Guiding Principles Question:

What are some of the key principles that will guide them? 

Expected Answer: Look for principles such as:

  • Each test should focus on a single, specific concern to ensure clarity and maintainability.
  • Ensure tests are designed for parallel execution to speed up the testing process and improve efficiency.
  • Design the automation framework with scalability in mind to accommodate future growth and complexity.
  • Consider following the DRY (Don't Repeat Yourself) principle to avoid redundant code.
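The first and last principles above can be sketched in a few lines. The cart model and prices here are invented for illustration: each test asserts exactly one concern, and the shared setup is pulled into a DRY helper instead of being repeated.

```python
def create_cart(items):
    """Shared DRY helper: one place to build test state (illustrative model)."""
    return {"items": list(items), "total": sum(price for _, price in items)}

def test_cart_total():
    # One concern per test: the total is the sum of item prices.
    cart = create_cart([("book", 10), ("pen", 2)])
    assert cart["total"] == 12

def test_cart_item_count():
    # A separate concern gets its own separate test.
    cart = create_cart([("book", 10), ("pen", 2)])
    assert len(cart["items"]) == 2

test_cart_total()
test_cart_item_count()
print("ok")
```

Because each test stands alone and shares no mutable state, they are also safe to run in parallel, which serves the second and third principles as the suite grows.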

CI/CD Integration Question:

How will the automation be integrated into the CI/CD pipeline? 

Expected Answer: The engineer should provide a concrete integration plan. It should spell out how automated tests will be triggered (for example, on each code commit or at specific stages within the pipeline) so the team gets continuous feedback, and where those tests will run, whether on local machines, dedicated testing servers, or cloud-based environments, with an environment that mirrors production as closely as possible.

The plan must also include a robust mechanism for reporting results: detailed reports that highlight test outcomes, integrated into the CI/CD tools in use (Jenkins, GitLab, CircleCI, etc.) so results are easily accessible to all stakeholders, with clear pass/fail indicators and actionable insights that help the development team resolve issues promptly.

Finally, the engineer should describe how failures will be managed, including the process for analyzing and triaging failed tests and whether a failure halts deployment or lets it continue under specific conditions, as well as how the integration will scale as the number of tests and the complexity of the application grow. This level of detail is what makes test automation a seamless, reliable part of the pipeline rather than an afterthought.
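As a small illustration of the halt-or-continue decision, here is a sketch of a CI gate step that parses a JUnit-style XML report and decides whether the pipeline should proceed. The report shape follows the common `<testsuite failures="N">` convention; the sample data and the failure threshold are assumptions, not any specific CI product's API.

```python
import xml.etree.ElementTree as ET

# Hypothetical report a test runner might emit (JUnit-style convention).
SAMPLE_REPORT = """
<testsuite name="smoke" tests="3" failures="1">
  <testcase name="login"/>
  <testcase name="checkout"><failure message="timeout"/></testcase>
  <testcase name="search"/>
</testsuite>
"""

def gate(report_xml, allow_failures=0):
    """Return True if the pipeline should proceed to deployment."""
    suite = ET.fromstring(report_xml)
    failures = int(suite.get("failures", "0"))
    # Halt deployment when failures exceed the agreed threshold.
    return failures <= allow_failures

print(gate(SAMPLE_REPORT))  # -> False: one failure, deployment halts
```

In a real pipeline this step would exit nonzero on `False`, which is how most CI tools are told to stop; the threshold makes the "continue with specific conditions" policy explicit and reviewable.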

Test Data Strategy Question:

What is going to be the strategy for test data? 

Expected Answer: The engineer should articulate a well-thought-out strategy for managing test data that aligns with the application’s specific requirements. This strategy might include approaches such as data generation, where synthetic data is created to mimic real-world scenarios; data seeding, which involves populating the database with a predefined set of data to ensure consistency across test runs; or data masking, where sensitive information is obfuscated to maintain privacy while still providing realistic data for testing.

The chosen approach should ensure that tests are both reliable and repeatable, with the ability to reset the test environment to a known state before each execution. Additionally, the strategy should address how to manage large datasets, handle edge cases, and ensure that test data is representative of actual usage patterns, thereby increasing the accuracy and relevance of the tests.
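Two of the approaches above, data generation and data masking, can be sketched briefly. The field names and the `example.com` users are invented; the point is that generated data is deterministic (same seed, same user, so runs are repeatable) and masked data stays realistic without exposing the original value.

```python
import hashlib
import random

def user_factory(seed):
    """Deterministic synthetic data: the same seed always yields the same user."""
    rng = random.Random(seed)
    n = rng.randint(1000, 9999)
    return {"name": f"user{n}", "email": f"user{n}@example.com"}

def mask_email(email):
    """Obfuscate the local part with a stable hash; the domain stays readable."""
    local, domain = email.split("@", 1)
    digest = hashlib.sha256(local.encode()).hexdigest()[:8]
    return f"{digest}@{domain}"

print(user_factory(42) == user_factory(42))  # -> True: repeatable test runs
print(mask_email("alice@corp.com"))          # local part replaced by a hash
```

Determinism is what makes the "reset to a known state" requirement achievable: re-seeding with the same inputs reproduces the same environment before each execution.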

Development Roadmap Question:

What is the general development roadmap? 

Expected Answer: The engineer should present a clear, high-level development roadmap that outlines the key milestones and deliverables over the next six months. This roadmap should include specific phases of the automation initiative, such as initial framework setup, the creation of foundational test suites, and the integration of these suites into the CI/CD pipeline. It should also highlight key deliverables, like the completion of a smoke/sanity test suite within the first 30 days, followed by deeper feature testing and expansion of the test coverage in subsequent months. The timeline should account for iterative improvements, with checkpoints for reviewing progress, refining the approach, and addressing any challenges that arise. This roadmap is crucial for setting expectations, ensuring alignment with broader project goals, and providing a clear path for achieving a robust and scalable automation framework.

The First 30 Days: Laying the Groundwork

In the first month, your automation engineer should focus on setting up the framework and creating a foundational suite of tests.

Expectation: A smoke/sanity suite of approximately 50-60 end-to-end (E2E) tests should be completed. This initial suite should cover the critical paths of your application, ensuring that core functionality remains intact. At this stage, the focus should be on breadth rather than depth, covering essential features.

The suite should be highly reliable and not flaky. A smoke/sanity suite should almost never fail—if it does, it indicates a significant issue with the code. 

Developers should be running this suite regularly. Additionally, it should be integrated into your CI/CD pipeline. If your engineer is particularly skilled, they may also use this suite to gather performance-related metrics from production after each release.

IT IS AT THIS POINT THAT YOU SHOULD MOST SERIOUSLY CONSIDER ABANDONING THE PROJECT.

The reality is that most test automation engineers are not great at what they do. You now have 30 days of evidence. 

At least 50% of companies should probably call it quits here. If your engineer is nowhere near the goals above, is still failing to answer the questions from the initial evaluation, or is blaming developers, pull the plug.

It won’t magically get better. 

30-90 Days: The Feature Deep Dives

As the initiative progresses into its second and third months, the focus should shift to more in-depth testing of critical features.

Expectations:

  1. Deep dive into critical features
  2. Implementation of database seeding
  3. Mocking of certain dependencies, if necessary

During this phase, your automation engineer should be expanding the test suite to cover more complex scenarios. They should also be implementing more sophisticated test data management techniques, such as database seeding, to ensure consistent and reliable test execution.

The biggest risk at this point is that your engineer doesn’t understand how to use test data to set state. They will write thousands of unnecessary lines of code if they don’t know how to do this. There are a variety of strategies to use data to handle state. 

Whether it is database seeding, data factories, data imaging, or even just using the API, make sure they have a strategy. I’ll write another blog on this topic.

90-180 Days: Scaling and Maintenance

The latter half of the six-month period is where many automation initiatives face their biggest challenges.

Expectations:

  1. Address maintenance challenges while expanding the suite
  2. Keep documentation up to date
  3. Deal with consistently failing test cases

As the test suite grows, maintenance becomes a significant concern. Your automation engineer should be implementing strategies to keep the codebase manageable, such as refactoring common code into reusable functions or libraries.

Documentation is crucial at this stage. As the complexity of the automation framework increases, clear and up-to-date documentation ensures that other team members can understand and contribute to the effort.

Failing test cases that aren't being fixed need to be addressed. This might involve updating the tests, fixing the underlying issues in the application, or in some cases, retiring tests that are no longer relevant.

I can't even count the number of times I've seen a suite run with 40+ failures, and the organization just says, 'Oh, we just ignore those.' Please have some standards.

Keys to Ongoing Success

As you move beyond the six-month mark, keep these factors in mind for continued success:

  1. Regular Review and Refinement: Schedule periodic reviews of the automation strategy to ensure it continues to align with business goals.
  2. Continuous Learning: Encourage your automation engineer to stay updated with the latest tools and best practices in the field.
  3. Cross-team Collaboration: Foster communication between the automation team, developers, and manual testers to ensure a holistic approach to quality assurance.
  4. ROI Tracking: Implement metrics to measure the return on investment of your automation efforts, such as time saved, increased test coverage, or faster issue detection.
  5. Scalability Planning: As your application grows, ensure your automation framework can scale accordingly.
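For the ROI tracking point, even a back-of-the-envelope calculation beats tracking nothing. All of the numbers below are invented inputs; the formula is simply time saved minus maintenance cost.

```python
def automation_roi(runs_per_month, manual_minutes_per_run,
                   maintenance_hours_per_month, hourly_rate):
    """Rough monthly ROI in dollars: manual time avoided minus upkeep cost."""
    saved = runs_per_month * manual_minutes_per_run / 60 * hourly_rate
    cost = maintenance_hours_per_month * hourly_rate
    return round(saved - cost, 2)

# 120 suite runs a month, each replacing 90 minutes of manual testing,
# against 40 hours/month of suite maintenance at $75/hour:
print(automation_roi(120, 90, 40, 75))  # -> 10500.0
```

Plug in your own numbers; if the result is persistently negative, that is a signal to revisit scope rather than a reason to stop measuring.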

Avoiding Common Pitfalls

To prevent wasting resources on your automation initiative:

  1. Set Clear Goals: Define what success looks like for your automation efforts and track progress against these goals.
  2. Avoid Over-Automation: Not everything needs to be automated. Focus on high-value, frequently executed test cases.
  3. Maintain Balance: While automation is powerful, it shouldn't completely replace manual testing. Some scenarios still require human insight.
  4. Invest in Training: Ensure your team has the skills necessary to maintain and expand the automation framework.
  5. Plan for Maintenance: Build time into the schedule for maintaining existing tests, not just creating new ones.

By following this roadmap and keeping these principles in mind, you can maximize the value of your automation initiative. 

Remember, successful test automation is not just about writing scripts; it's about creating a sustainable, efficient, and effective quality assurance process that supports your overall software development lifecycle. 

With proper planning, clear expectations, and regular check-ins, you can avoid the common pitfalls that lead to wasted resources and instead build a robust automation framework that delivers real value to your organization.

Free Quality Training
Enhance your software quality for free with our QA training and evaluation. Sign up now to boost your team's skills and product excellence!