⇛Automation cost: This is the cumulative cost of acquiring and maintaining the tool, plus associated costs such as script development and execution. The cost of automation must never exceed the cost of the manual testing it replaces, so a break-even analysis can strengthen the go/no-go decision for tool adoption.
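The break-even idea above can be sketched as a small calculation. This is an illustrative model only: the cost figures and the assumption of a fixed per-run cost are hypothetical, not taken from any specific tool.

```python
import math

def break_even_runs(tool_cost, script_dev_cost, auto_run_cost, manual_run_cost):
    """Smallest number of test runs after which cumulative automation cost
    matches or drops below cumulative manual cost. Returns None if the
    per-run automation cost is not lower than the manual cost (automation
    never pays off under this simple model)."""
    fixed = tool_cost + script_dev_cost          # one-time investment
    saving_per_run = manual_run_cost - auto_run_cost
    if saving_per_run <= 0:
        return None
    return math.ceil(fixed / saving_per_run)

# Hypothetical figures: $5,000 licence, $3,000 scripting effort,
# $50 per automated run vs. $450 per manual run.
runs = break_even_runs(tool_cost=5000, script_dev_cost=3000,
                       auto_run_cost=50, manual_run_cost=450)
print(runs)  # 20 -> automation breaks even after 20 runs
```

If the expected number of regression cycles over the tool's lifetime falls short of this figure, manual testing remains the cheaper option.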
⇛Customization effort: This addresses the effort required to adapt test scripts to multiple operating systems, OS versions, and device models. Depending on the type of application under test (native, hybrid, or web), customization effort can sway a tool's adoption decision either way. Web applications may require little to no customization, whereas native applications demand more, because GUI and screen properties (the order of screens, traversal across screens, and the use of hardware buttons for screen transitions) vary across phone platforms.
⇛Percentage of automation: For a given set of application features, this metric quantifies how much of the test suite is automated, which feeds into the return-on-investment calculation used to justify automation.
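As a minimal sketch, the two metrics above can be computed as follows. The ROI formula (net savings over investment) and all numbers are illustrative assumptions, not a prescribed standard.

```python
def automation_percentage(automated_cases, total_cases):
    """Share of the test suite that is automated, as a percentage."""
    return 100.0 * automated_cases / total_cases

def roi(manual_cost_saved, automation_cost):
    """Simple ROI ratio: net savings divided by the automation investment."""
    return (manual_cost_saved - automation_cost) / automation_cost

# Hypothetical project: 180 of 240 test cases automated.
print(automation_percentage(180, 240))              # 75.0
# Hypothetical costs: $30,000 of manual effort saved for $12,000 invested.
print(roi(manual_cost_saved=30000, automation_cost=12000))  # 1.5
```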
⇛Content accuracy: This measures the tool's ability to verify application content (image, text, audio, and video) accurately, ensuring the quality of the final product. For example, the tool should recognize text on a selected screen irrespective of the color, background, theme, or font used.
⇛Multitasking support: A tool that can connect to multiple devices and execute test scripts on them concurrently saves testing time and reduces the number of licenses required for parallel execution. In such scenarios, tool performance must be tracked and monitored continuously, because degraded performance under concurrent load can hurt test productivity and inflate effort.
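Concurrent execution across devices can be sketched with a thread pool. The device identifiers and the `run_script` function below are stand-ins for whatever the chosen tool's API actually provides; this only illustrates the fan-out pattern.

```python
from concurrent.futures import ThreadPoolExecutor

def run_script(device_id):
    # Placeholder for invoking the automation tool against one device;
    # a real tool call (and its result parsing) would go here.
    return f"{device_id}: PASS"

# Hypothetical device pool for parallel execution.
devices = ["android-10", "android-13", "ios-16"]

with ThreadPoolExecutor(max_workers=len(devices)) as pool:
    results = list(pool.map(run_script, devices))

print(results)  # one result per device, gathered concurrently
```

One worker per device keeps the mapping simple; capping `max_workers` below the device count is a way to throttle load if tool performance degrades.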
1. Requirements and feature mapping: This comprises typical requirements-gathering activities, in which business and technical requirements are articulated and documented to serve as a reference checklist. Tool functionality can then be mapped against the checklist items, filtering out tools that do not meet the evaluation criteria.
2. Tool feature score: Once candidate tools are identified through the requirements analysis, their features can be evaluated and compared to determine a feature-grading score for each tool. The score helps narrow the field to a final shortlist, and trial versions of the shortlisted tools can be downloaded for a pilot or Proof of Concept (PoC) exercise.
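A feature-grading score is commonly computed as a weighted sum. The feature names, weights, and 1-5 ratings below are hypothetical examples of how two candidate tools might be compared; the article does not prescribe specific criteria or weights.

```python
# Illustrative evaluation criteria and weights (must sum to 1.0).
WEIGHTS = {
    "os_coverage": 0.30,
    "content_accuracy": 0.25,
    "multi_device": 0.25,
    "cost": 0.20,
}

def feature_score(ratings):
    """Weighted score for one tool; ratings map feature -> grade (1-5)."""
    return sum(WEIGHTS[feature] * grade for feature, grade in ratings.items())

tool_a = {"os_coverage": 4, "content_accuracy": 5, "multi_device": 3, "cost": 4}
tool_b = {"os_coverage": 5, "content_accuracy": 3, "multi_device": 4, "cost": 3}

print(feature_score(tool_a))  # 4.0
print(feature_score(tool_b))  # 3.85
```

Tools whose score falls below an agreed threshold drop out; the rest go forward to the PoC.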
3. Proof of Concept (PoC): An iterative PoC involving all the shortlisted tools is the final step of the selection process. Sample test scenarios with the most comprehensive coverage should be executed using each tool, and the one that best suits the project requirements is selected.
I love to write about upcoming technology. I am an IT professional with 9+ years of experience at world-renowned MNCs such as IBM, CSC, and Dell. Domain: IT infrastructure, enterprise server administration, and IT service delivery. Education: BCA and MCA. MCP, MCSA, ITIL V3, and ITIL Intermediate certified professional.