# Manual testing is still important in the age of AI

Over the last few days I’ve seen a number of attempts to use AI to push mobile app testing automation into areas typically covered by manual (human) testing. This worries me, and I want to explain why.

## No automated/AI solution for exploratory testing

There are a number of types of testing that manual testers perform to catch the recurrence of previous bugs and to uncover newly introduced bugs. These testing strategies include:

* Testing common customer user journeys
    * Ensuring users can use the features of the app as intended.
    * This includes following the “happy path”, and “unhappy paths” like triggering validation issues and connection issues.
* Regression testing
    * Re-testing previously fixed bugs to ensure they haven’t re-occurred.
* Monkey testing
    * Purposely performing unconventional actions to see if they cause bugs.
    * This might be pressing the screen in random places.
    * It could also be more focused and intentional, like pressing a button twice in quick succession.
* Exploratory testing
    * Without following a set plan, the tester probes unconventional paths through the app, attempting to uncover bugs.
    * The tester uses their knowledge of the system to try to uncover unexpected states that could lead to bugs.

While newer AI agent systems can cover some of the above testing strategies, they don’t cover exploratory testing. Exploratory testing can’t be defined in advance; it comes from a human poking and prodding the app, following their instinct and experience to uncover edge cases and bugs that the developer, who has [[#Software engineers shouldn’t write all the tests|written the automated tests]], couldn’t have expected.

## Our users are humans, our testers should be too

If we remove all of our human-based testing and rely solely on automated testing, we will overlook bugs that come from the very human-ness of our users.

### Humans have fingers!
At risk of stating the obvious, humans have fingers! Fingers don’t tap on a screen with single-pixel precision. They intend to tap one thing, but can accidentally hit another. Their fingers obscure the very thing they are trying to press! That might mean the cool animation that confirms you hit a button isn’t actually visible, because your finger is obscuring it when it happens. You need a human, [[#Simulators and emulators shouldn’t completely replace real devices|pressing the screen of a real device]], to spot these issues.

### Testing gestures

Swipe gestures are very difficult to test with UI automation, and it’s (currently) almost impossible to emulate the variety and complexity of how real users interact with gestures. Here is a selection of ways a manual human tester can test gestures that would be very difficult to replicate with automated systems:

* Incomplete swipe gestures
* Catching a cancelled gesture mid-cancellation, and completing it
* Jerky (non-smooth) swipe gestures
* Multi-touch gestures (e.g. pinch to zoom)
* Performing two gestures/actions at the same time (e.g. a partial swipe-navigation gesture while rotating the device)
* Accessibility feature testing

## Simulators and emulators shouldn’t completely replace real devices

These automation and AI agent tests mostly run on a simulator or emulator. It’s obviously a lot easier to have an automated system interact with simulators/emulators than to run these tests on real devices. This presents a risk of missing issues that only appear on real hardware, or that require a specific device state that only happens on real hardware. Some issues that might only surface on a real device include:

* Bluetooth issues
* Paired-device issues (a watch, for example)
* Location issues
* Other permission issues
* Interactions with other installed apps

## Software engineers shouldn’t write all the tests

Who will be writing these automated tests?
Who will be writing the natural language prompts for the AI agents? It will be the software engineers; likely the very engineers who wrote the code being tested. **This is bad.**

Why is it bad? Because software engineers aren’t testers. Because if the engineer could think of all the things that needed testing, all of the edge cases and unhappy paths, then they wouldn’t submit code that contained bugs in the first place… But they do.

![[Developer_Test_vs_QA_Test.gif|Developer Test vs QA Test]]

Obviously software engineers will write a lot of tests, but if they are responsible for conducting 100% of the testing for a codebase, that introduces a big risk.

### But we have beta testers!

It could be argued that running a beta testing program, with real users, keeps a human in the testing process. However, while beta testing is a worthwhile way to gather feedback, it is no replacement for an experienced manual testing team. There are many reasons why someone might sign up to beta test an app: there is no guarantee of their skill or knowledge level, and they may not even report the bugs they encounter.

### Testing is a skilled profession

Manual testing is a skill, just like software development is a skill. Manual testers are professionals who understand how to find the edge cases and weird user journeys that software engineers don’t consider. You can’t replace them with AI, SWE-written tests, or dogfood users, and expect to maintain the same level of software quality.

## Manual testing needs to remain part of a cohesive testing strategy

I’m not advocating against automated testing. It can obviously scale in a way that manual human testing can’t. However, I believe that if we over-index on automated testing, we will miss the very bugs that are the hardest to find and replicate, leading to a poorer experience for users.
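As a closing aside, the gap between random and intentional input is easy to demonstrate. Below is a minimal Python sketch of the monkey-testing strategy described earlier, run against a toy app containing the double-press bug mentioned above. Everything here is hypothetical for illustration: `FakeApp` is a stand-in model, not a real UI driver or testing framework.

```python
import random


class FakeApp:
    """Toy stand-in for an app under test (hypothetical, for illustration).

    It contains one bug: two consecutive taps on the submit
    button (no debounce) put the app into a crashed state.
    """

    def __init__(self):
        self._last_on_submit = False
        self.crashed = False

    def tap(self, x, y):
        # The submit button occupies a small region of a 100x100 screen.
        on_submit = 40 <= x <= 60 and 80 <= y <= 100
        if on_submit and self._last_on_submit:
            self.crashed = True  # the rapid double-press bug fires
        self._last_on_submit = on_submit


def monkey_test(app, max_taps=100_000, seed=1):
    """Tap random points until the app crashes or the budget runs out.

    Returns the number of taps it took to trigger the crash,
    or None if the budget was exhausted without finding the bug.
    """
    rng = random.Random(seed)
    for i in range(1, max_taps + 1):
        app.tap(rng.randrange(100), rng.randrange(100))
        if app.crashed:
            return i
    return None


print(monkey_test(FakeApp()))
```

A human tester who suspects a missing debounce finds this bug in one deliberate double press; the monkey typically needs hundreds of random taps, and only notices at all because the failure happens to be a crash rather than a subtly wrong state. That, in miniature, is why randomness and automation complement a skilled tester rather than replace one.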