By Venkata Ramana Lanka
Digital transformation is real, challenging enterprises to innovate and adapt to stay competitive. Disruption from innovative financial technology (FinTech) companies, increasing adoption of digital technology by customers, better and smarter devices, new products built on Internet of Things (IoT) integration, the ability to deliver targeted solutions through machine learning and artificial intelligence, and a focus on customer experience are all part of digital transformation.
While change is great, it raises a critical question: how does this affect the way software is currently tested? Nothing could be more wrong than believing that testing can continue to rely on the techniques and strategies previously used for legacy and web-based applications. In short, digital testing strategy is different, and traditional testing fails to meet the demands of this disruption.
Reduce Production Failures
Testing for customer experience reduces production failures. Customer experience is not limited to designing a great-looking user interface (UI); it covers the entire gamut of customer touch points and reflects the total of the interactions customers have with the enterprise throughout the customer lifecycle. It is not unrealistic to say that the customer experience journey starts before someone becomes your customer and continues even after they are no longer one.
Test design strategy for customer experience testing should closely align with key business objectives. Using a technique called Business Objectives Led Testing (BOLT), the strategy is to design and run the tests most likely to reveal failures that affect key business objectives. The objective of this strategy is to extend test scope by visualizing potential product failures from the user's perspective once the software reaches end users. In short, this strategy aims at validating fitness for use. Take testing an interactive voice response system (IVRS) as an example; let's look at the difference between traditional test scenarios and test scenarios designed using BOLT.
Typical test scenario and key test steps (a minimal automation sketch follows the steps):
Self-service – Confirm pending invoice details and reconcile with agent
Connect to IVRS
Select option “2” – know pending invoice details
Select option “4” – speak to an agent
Reconcile pending invoice details
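As a rough illustration, the traditional scenario above might be automated along the lines below. This is a minimal sketch only; the IvrsSession class and its methods are hypothetical stand-ins for whatever telephony or IVR test harness a team actually uses.

```python
# Sketch of the traditional IVRS scenario as an automated check.
# IvrsSession and its methods are hypothetical placeholders for a real
# telephony/IVR test harness; only the flow mirrors the steps above.

class IvrsSession:
    """Hypothetical IVR test double that records menu selections."""

    def __init__(self):
        self.selections = []
        self.connected = False

    def connect(self):
        self.connected = True

    def select(self, option):
        self.selections.append(option)

    def pending_invoice(self):
        # A real harness would read this back from the IVR prompt or the agent.
        return {"invoice_id": "INV-1001", "amount_due": 120.00}


def test_pending_invoice_reconciliation():
    ivrs = IvrsSession()
    ivrs.connect()              # Connect to IVRS
    ivrs.select("2")            # Know pending invoice details
    invoice = ivrs.pending_invoice()
    ivrs.select("4")            # Speak to an agent
    expected = {"invoice_id": "INV-1001", "amount_due": 120.00}
    assert invoice == expected  # Reconcile pending invoice details
```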
While the above test scenario is validated using positive and negative tests, and will ultimately be certified as passing, a pass does not guarantee that end users will perceive the functionality as fit for use. Though the test has passed, the functionality can still be a failure from the customer's perspective. This is the limitation of traditional test design.
If the same test is designed with business objectives in focus, the difference is evident. Better still, this work is done while the testing phase is in progress, so it requires no additional effort.
Test scenario based on BOLT:
Step 1 – List key business objectives
Enhanced IVRS experience
Right first time
Increased customer touch points
Increased customer conversion and retention ratios
Step 2 – List potential issues impacting business objectives (examples shown below; a sketch turning some of these into measurable checks appears below)
High “average wait time” to connect to the IVRS
High “on-hold time”
Confusing “IVRS menus”
Multiple call transfers between the IVRS and agents/managers
“Data duplication” (repeating the same information over and over) during call transfer
“Poor selection of music and product advertisements” while on hold (calls to a call center are not always happy events; a customer calling about a service deficiency will not appreciate product advertisements while waiting to speak to an agent)
Step 3 – Suggest remedial measures
Implement a concierge service to set wait-time expectations
Offer the customer a callback option if they prefer
Make music/product advertisements contextual to the customer's need (different options for enquiries, complaints, and support)
Maintain data persistence between the IVRS and the human agent to avoid repetition of customer information
Implement a single window to avoid call transfers
Send an automated email/SMS confirming the resolution to the customer after the call
These small but proactive steps not only increase the test team’s value to the overall project but also help avoid potential customer pain points once the system is deployed on production.
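To make issues like those listed in Step 2 testable, experience metrics can be asserted alongside the functional checks. The sketch below is illustrative only: the CallMetrics structure and the thresholds are assumptions, not figures from this article.

```python
# Sketch: turning BOLT-identified experience issues into measurable checks.
# CallMetrics and the threshold values are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class CallMetrics:
    wait_time_seconds: float   # time to reach the IVRS / an agent
    hold_time_seconds: float   # cumulative on-hold time
    transfers: int             # IVRS <-> agent/manager transfers
    data_reentered: bool       # customer repeated information after a transfer


# Example thresholds tied to the business objectives (assumed values).
MAX_WAIT_SECONDS = 30
MAX_HOLD_SECONDS = 60
MAX_TRANSFERS = 1


def assert_experience_objectives(metrics: CallMetrics) -> None:
    assert metrics.wait_time_seconds <= MAX_WAIT_SECONDS, "average wait time too high"
    assert metrics.hold_time_seconds <= MAX_HOLD_SECONDS, "on-hold time too high"
    assert metrics.transfers <= MAX_TRANSFERS, "too many call transfers"
    assert not metrics.data_reentered, "customer data not persisted across transfer"


def test_pending_invoice_call_experience():
    # In a real suite these numbers would come from the call under test.
    metrics = CallMetrics(wait_time_seconds=22, hold_time_seconds=40,
                          transfers=1, data_reentered=False)
    assert_experience_objectives(metrics)
```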
Design to Succeed
It is important that test teams design for social media, mobile, analytics, and cloud (SMAC) integration testing.
While integration testing is done at a system level, integration of business processes across social media, mobile, data analytics, and cloud (SMAC) is either not considered or inadequately considered, resulting in products that lack fitness for use. To design a scalable and robust test strategy for SMAC, test teams should focus on identifying unique yet important end-to-end customer journeys across omnichannel touch points, IVR systems, content management systems, and an array of social media platforms and mobile devices. Cloud testing should factor in specific aspects such as on-demand self-service, resource pooling, elasticity, and measured service.
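One way to keep such journeys explicit and repeatable is to describe each journey as data and drive it through channel adapters. The sketch below is only a shape for that idea; the channel names, steps, and adapters are assumptions for illustration.

```python
# Sketch: describing an end-to-end SMAC journey as data so one scenario can
# be driven across channels. Channel names and steps are illustrative.

JOURNEY_SOCIAL_TO_PURCHASE = [
    {"channel": "social", "action": "click_promoted_post"},
    {"channel": "mobile", "action": "browse_product"},
    {"channel": "cloud",  "action": "sync_cart"},
    {"channel": "web",    "action": "checkout"},
    {"channel": "email",  "action": "receive_confirmation"},
]


def run_journey(journey, adapters):
    """Execute each step through the adapter registered for its channel."""
    for step in journey:
        adapter = adapters[step["channel"]]
        result = adapter(step["action"])
        assert result["ok"], f"step failed: {step}"


def fake_adapter(action):
    # Minimal stand-in so the sketch runs end to end.
    return {"ok": True, "action": action}


if __name__ == "__main__":
    adapters = dict.fromkeys(["social", "mobile", "cloud", "web", "email"], fake_adapter)
    run_journey(JOURNEY_SOCIAL_TO_PURCHASE, adapters)
    print("journey completed across all channels")
```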
Continuous Approach
Another strategy is to implement continuous testing and monitoring, DevOps, gamification, and impact-based testing.
The digital landscape is huge, and changes to requirements are the norm. Traditional test strategy can fail to meet the demanding nature of digital disruption. Though a few commercial testing tools exist, they are most often not integrated with the lifecycle and do not provide seamless automation across it. In short, commercial testing tools are often fragmented and do not scale to meet evolving needs. To overcome this critical limitation, test teams should develop and implement solution accelerators that enable lifecycle automation.
At a bare minimum, this should include automated impact-based assessment and selection of the most impacted regression test packs, automated test data provisioning, unattended execution of automated tests, and automated dashboards that continuously measure and provide live data on test progress. Implementing a gamified dashboard takes this to the next level by introducing healthy competition among team members to increase personal productivity without diluting test coverage.
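As one possible shape for impact-based selection, a team might maintain a mapping from application modules to regression packs and pick packs based on what changed in a build. The module names, path convention, and pack names below are assumptions for illustration, not a prescribed implementation.

```python
# Sketch: impact-based selection of regression test packs.
# The module-to-pack mapping and changed-file list are illustrative assumptions.

MODULE_TO_PACKS = {
    "billing":  ["regression_billing", "regression_invoicing"],
    "payments": ["regression_payments", "regression_billing"],
    "profile":  ["regression_account"],
}


def module_of(changed_file: str) -> str:
    # Assumes a src/<module>/... path convention.
    parts = changed_file.split("/")
    return parts[1] if len(parts) > 1 and parts[0] == "src" else "unknown"


def select_packs(changed_files):
    """Return the regression packs impacted by the changed files."""
    packs = set()
    for path in changed_files:
        packs.update(MODULE_TO_PACKS.get(module_of(path), []))
    return sorted(packs) or ["regression_smoke"]  # fall back to a smoke pack


if __name__ == "__main__":
    changed = ["src/billing/invoice.py", "src/profile/address.py"]
    print(select_packs(changed))
    # ['regression_account', 'regression_billing', 'regression_invoicing']
```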
Performance Reports
Application testing teams should include periodic performance testing, virtualization, automated vulnerability assessment, and regulatory compliance testing.
Understanding user profiles, tastes and preferences, traffic volumes, and the devices customers use to interact with the ecosystem is essential to drafting the right performance test scenarios. Virtualization of the test environment and APIs, using stubs and drivers where needed, is a critical part of performance and functional testing. Planning the switch from simulated-device testing to testing on real devices in the cloud is also part of the test strategy.
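Service virtualization can be as simple as a stub that answers in place of a downstream API so performance and functional tests are not blocked by unavailable dependencies. The endpoint, port, and payload below are assumptions; the standard library keeps the sketch dependency-free.

```python
# Sketch: a minimal HTTP stub standing in for a downstream API during
# performance/functional testing. The /invoices endpoint and payload are
# illustrative assumptions.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/invoices"):
            body = json.dumps({"invoice_id": "INV-1001", "amount_due": 120.00})
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body.encode("utf-8"))
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep test output quiet


if __name__ == "__main__":
    # Point the application under test at http://localhost:8081 instead of
    # the real downstream service.
    HTTPServer(("localhost", 8081), StubHandler).serve_forever()
```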
An automated, tool-agnostic framework for running vulnerability scans across different code bases and application UIs should be a strategic part of every planned release. This ensures the code is clean and free of security vulnerabilities. Testing should also ensure that personally identifiable information is never available to unauthorized entities. Compliance testing against the requirements of different regulatory bodies is another critical factor that must be incorporated into the test strategy.
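A small check like the one sketched below can back up the PII requirement by scanning responses or logs for obvious identifiers. The patterns cover only a few common cases and are a narrow illustration, not a complete PII taxonomy.

```python
# Sketch: scanning an API response or log line for obvious PII leakage.
# The patterns are illustrative and intentionally incomplete.

import re

PII_PATTERNS = {
    "email": re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


def find_pii(text: str):
    """Return the names of PII patterns found in the given text."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]


def test_unauthenticated_response_has_no_pii():
    # In a real suite this body would come from an unauthenticated API call.
    response_body = '{"status": "ok", "message": "profile updated"}'
    assert find_pii(response_body) == []
```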
Digital Disruption is Here
Digital disruption is now a reality, yet few organizations have a clearly defined digital testing strategy. Not knowing where or what to focus on, combined with thinking limited to traditional test strategies, results in serious failures once the code moves to production.
While digital testing does not require a revolutionary approach, there are clearly focus areas that deserve attention. Testing for customer experience, adopting lifecycle automation, implementing automated impact-based testing, and enabling continuous automated testing are great starting points for equipping testing teams to handle digital disruption. Continuous monitoring of quality, both product quality and people quality, using a gamified digital dashboard takes your test team's efficiency and reliability to the next level of effectiveness.
Focusing on and building capabilities in cognitive testing, machine learning, IoT, and predictive analytics enables test teams to scale further and prepare to test a world of interconnected communication between machines and people.
Venkata Ramana Lanka (LRV), director of quality assurance for Virtusa, is an accomplished software test management professional with an extensive background in software testing, test automation framework design, building domain-specific testing solution accelerators, leading large software testing teams, and supporting presales initiatives. LRV is a hands-on manager with proven ability to direct and improve quality initiatives, reduce defects, and improve overall efficiency and productivity. He has in-depth and proven expertise in developing and implementing test strategies and operational procedures. LRV has extensive experience working with multiple commercial and open source test tools. As a quality assurance thought leader, LRV has written multiple white papers and articles, some of which have been published in various media. Further, he has spoken and presented papers at various software testing conferences.
November 2017, Software Magazine