Following the surveys in 2011, 2015/16 and 2020, the Software Testing in Practice and Research survey was conducted for the fourth time in 2024. Data on the state of the practice in quality assurance is thus available for a span of almost 15 years, and the 2024 results allow us to look back and trace trends and lines of development. The current data also lets us look ahead: Will AI change the tasks in quality assurance? What challenges lie ahead?
Survey on software testing in practice and research
In September 2024, the German Testing Board (GTB), under the scientific direction of Bremerhaven University of Applied Sciences, Cologne University of Applied Sciences and Aschaffenburg University of Applied Sciences, conducted the largest long-term survey on software testing in practice and research in the German-speaking world for the fourth time. With almost 800 participants, a considerable number of responses was achieved.
While 2020 saw a significantly higher proportion of participants from development, the distribution of roles in 2024 is similar to 2011 and 2015/16, with a slight majority working in quality assurance. Respondents tended to come from larger companies, with a focus on the finance and insurance sector, the public sector and the automotive industry.
Have the projects become agile?

Fig. 1: Results of the survey on software testing in practice and research, own illustration.
It seems that the contest between agile and phase-oriented projects has been decided in favor of agility. While the majority of projects were still phase-oriented in 2011, the picture had completely reversed by 2020. Even though many respondents stated in 2020 that their projects were agile, the available data gave the impression that key agile practices were not yet anchored in the projects' mindset. For this reason, the 2024 question was expanded to include hybrid process models alongside agile and phase-oriented ones. The 2024 results confirm the impression from 2020 (see Fig. 1): agile and hybrid approaches are on a par, while purely phase-oriented process models hardly play a role anymore.
The vast majority of respondents from the operational area see development-related agile practices as supporting quality assurance. Test automation (82%) leads, followed by continuous integration (70%), code review (68%) and clean code (66%). Fewer than 50% consider organizational practices such as retrospectives, pair programming, stand-up meetings or collective code ownership conducive to quality assurance. This continues the trend from 2020: the focus lies on development-related practices, and an agile mindset is not yet firmly established everywhere.
Has the degree of test automation increased?
Against the backdrop of ever shorter development cycles in iterative process models, test automation plays an increasingly important role in discussions. In the survey, too, test automation is very important to 82% of respondents. Nevertheless, this is not reflected in the actual degree of automation. In 2024, three quarters of respondents had automated at least 75% of their unit tests; at higher test levels, this applies to only a third. The degree of test automation at the respective test levels increased significantly between 2011 and 2020 but appears to have stagnated since (see Fig. 2).
The automation of specific test types such as regression, load and performance, or security testing raises many questions. Even though the degree of automation for these test types has increased compared to 2020, a quarter of respondents state that they do not automate load and performance tests at all (see Fig. 3). Presumably, in many agile teams load and performance tests or security tests are not integrated into the daily CI/CD pipeline and are therefore less visible to respondents. In addition, 63% of respondents see the provision of test environments for non-functional requirements as a major challenge.

Fig. 2: Results of the survey on software testing in practice and research, own illustration.

Fig. 3: Results of the survey on software testing in practice and research, own illustration.
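Where load and performance tests are missing from the daily pipeline, even a minimal automated check can make performance regressions visible early. A sketch of such a check in Python; the `process_order` function and the 50 ms budget are hypothetical stand-ins, not taken from the survey:

```python
import time

def process_order(order_id: int) -> str:
    """Hypothetical unit under test; stands in for real business logic."""
    return f"order-{order_id}-processed"

def test_processing_stays_within_budget() -> None:
    # Performance smoke test: run the operation many times and check
    # the average duration against a (self-chosen) 50 ms budget.
    runs = 1000
    start = time.perf_counter()
    for i in range(runs):
        process_order(i)
    avg_ms = (time.perf_counter() - start) / runs * 1000
    assert avg_ms < 50, f"average {avg_ms:.2f} ms exceeds 50 ms budget"

test_processing_stays_within_budget()
```

Such a check is no substitute for a real load test, but because it runs in every CI build it keeps non-functional behavior visible to the whole team.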
Are systematic test procedures used?
The analysis of the survey data shows that systematic test design techniques play a comparatively minor role in deriving test cases. Only around 40% of respondents mostly use boundary value analysis or equivalence class partitioning, less than a quarter mostly use state transition testing or decision tables, and only a third mostly use white-box techniques. The figures are thus on a par with 2015/16 and well below our expectations. It will be interesting to see whether explicit knowledge of test design techniques declines further with the increasing use of AI.
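To illustrate what these techniques look like in practice, here is a minimal sketch of equivalence class partitioning and boundary value analysis; the `validate_age` function and its valid range of 18 to 67 are hypothetical examples, not taken from the survey:

```python
def validate_age(age: int) -> bool:
    """Accept ages in the (hypothetical) valid range 18-67."""
    return 18 <= age <= 67

# Equivalence class partitioning: below range, in range, above range.
# One representative test case per class suffices.
assert validate_age(10) is False   # invalid class: too young
assert validate_age(40) is True    # valid class
assert validate_age(80) is False   # invalid class: too old

# Boundary value analysis: test directly on and next to each boundary,
# where off-by-one defects typically hide.
assert validate_age(17) is False
assert validate_age(18) is True
assert validate_age(67) is True
assert validate_age(68) is False
```

The value of these techniques is that they derive a small, justified set of test cases from the specification instead of relying on ad-hoc intuition.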
Will AI change quality assurance?

Fig. 4: Results of the survey on software testing in practice and research, own illustration.
Many companies currently have high expectations of AI and are launching projects for AI-supported process automation. The survey shows that AI is already in use, particularly in software development: half of the respondents from the operational area state that they already use AI for programming in their current projects or have concrete plans to do so. For software testing tasks the proportion is significantly lower, at around a third, but here too respondents expect adoption in the near future.
Looking at the potential of AI, it is foreseeable that the tasks in quality assurance will change. Not surprisingly, expectations are higher in management than in operations (see Fig. 4). Interestingly, all respondents see significantly less potential for AI in the organizational and more creative tasks of software development.
What are the future challenges?
It is not surprising that many respondents see the use of AI as one of the major challenges. Half of those working in operations consider themselves ill-prepared for it; even in the more optimistic management, a third state that they are poorly prepared. Operational employees see an increased need for further training over the next three to five years: 72% would like training on testing with AI, and 56% see a need for training on testing AI systems. Researchers likewise see an increased need for research into the use of AI in quality assurance over the next few years.
In addition to the use of AI, the survey data points to further challenges. Across all four surveys, two thirds of respondents rated customer satisfaction as very high. A closer look at the 2024 data, however, reveals that satisfaction with the functionality of the software is rated significantly above this overall value; the projects are doing well here. The aspects of security and performance, by contrast, are rated significantly below it (see Fig. 5). Accordingly, operational staff see an increased need for further training in IT security testing (66%), load/performance testing (35%) and test automation (54%).
Further information on the Software Testing in Practice and Research survey is available at https://softwaretest-umfrage.de.
About the authors
Mario Winter is Professor of Software Development and Project Management at the TH Cologne and a member of the GTB.
Frank Simon is head of the Business Development working group at the GTB and conducts research in the area of Research & Innovation.
Karin Vosseberg is Professor of Systems Integration with a focus on quality assurance at Bremerhaven University of Applied Sciences and a member of the ASQF Executive Committee.
Timea Illes-Seifert is a professor at the Faculty of Engineering and Computer Science at Aschaffenburg University of Applied Sciences and a member of the GTB.
Annette Simon is head of the GTB’s marketing working group and GTB project manager for the software test survey.