Blogs

22 Jan 2018

Digital Asset Management (DAM) is a process for the structured organization, storage, and retrieval of rich media assets, along with the management of permissions and rights. In simpler terms, DAM systems help organizations create, edit, share, and organize their digital assets. Digital assets include images, audio, video, animations, documents, and other multimedia content. In this blog post, we examine some of the challenges in testing DAM systems.

Metadata is king of content

Metadata is data that provides information about other data. It plays a vital role in managing and finding digital assets, especially when those assets are dispersed across the system. As the volume of digital assets grows, adding relevant metadata to every individual asset or group of assets becomes critical for easy retrieval. While metadata can improve how efficiently an organization locates assets, ensuring that a keyword search actually returns the expected asset becomes paramount. Our experience has led us to list a few challenges that functional testers should be aware of while assessing the system for readiness:

- Common metadata shared across many assets tends to produce inappropriate and inaccurate search results
- A poor understanding of the metadata that end users actually rely on can lead to inaccurate searches
- For localization, understanding how end users search with contextual keywords in their own language is a challenge
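To make the first challenge concrete, here is a small, purely illustrative metadata record; the field names are hypothetical, not from any particular DAM product:

{
  "asset_id": "IMG-00421",
  "type": "image",
  "tags": ["photo", "banner"],
  "campaign": "summer-sale-2018",
  "locale": "de-DE",
  "alt_text": "Sommerschlussverkauf Bannerbild"
}

A search for the generic tag "banner" could return thousands of assets, while "summer-sale-2018" narrows the result set to one campaign. Search test cases should cover both kinds of keywords, including localized ones.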
Large data sets and response time matter

With ever-growing data, the need of the hour is to store all user-generated data appropriately. End users manage assets in a DAM system that vary both in the number of assets and in the volume (size) of assets. Testing the scalability and performance of the system with large data sets therefore plays a vital role in determining the consistency of the application. Replicating end-user behaviour to simulate workflows with realistic variety, volume, and velocity of data is a challenging task for testers.

Personas and end-user workflows

A "persona" is an imaginary representation of an actual end user. Personas play a vital role in design decisions and are therefore identified in the early stages of product development. Identifying personas and user workflows helps team members share a common understanding of the end-user groups who use the product. Test cases designed around various personas and workflows can help identify potential challenges an end user might face.

Omni-channel and compatibility

With digitization comes the need for broader omni-channel support. While new product features are continuously released to outsmart the competition, it is imperative that the product remains compatible with all distribution channels and their versions. Take the example of users on mobile data: the user experience is affected not only by the functionality of the application but also by screen sizes, network types (cellular and Wi-Fi), OS versions, localization and internationalization, and the application's use of device resources such as RAM and battery. Based on the requirements, the application must be tested across different devices, platforms, and environments. A typical test approach should include cross-platform and cross-browser compatibility testing as well as data-synchronization testing to ensure that the application performs consistently.

User roles and permissions

Ensuring that end-user data is stored securely and that access to information is restricted is paramount for any digital asset management system. Treating users as people, roles as functions, and permissions as access rights to those functions, only the intended users of the system should be able to reach a given function. As more platforms move towards multi-tenant architectures, feature bleed is an area for testing to watch. A feature built to one tenant's requirement should be accessible to that tenant only; for example, an update made for one tenant may not be available, or even desirable, for another. Every change to the system therefore has to be tested carefully so that it does not break working features for any user role. Since each user has different privileges, access-control and multi-privilege tests are necessary to ensure that one tenant's data is not accessible to another (a sketch of such a test appears below).
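Here is a minimal sketch of a tenant-isolation check, assuming Java 11+ and a hypothetical REST endpoint with bearer-token authentication; the URL, asset ID, and token are placeholders, not part of any real DAM API:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// A user from tenant B requests an asset that belongs to tenant A and
// must receive 403/404, never the asset itself.
public class TenantIsolationCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Hypothetical endpoint and token: replace with real values.
        String tenantAAsset = "https://dam.example.com/api/assets/12345";
        String tenantBToken = System.getenv("TENANT_B_TOKEN");

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(tenantAAsset))
                .header("Authorization", "Bearer " + tenantBToken)
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        int status = response.statusCode();
        if (status == 403 || status == 404) {
            System.out.println("PASS: cross-tenant access denied (" + status + ")");
        } else {
            System.out.println("FAIL: unexpected status " + status
                    + ", possible feature/data bleed");
        }
    }
}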
Conclusion

A test approach designed with the above considerations is fundamental to the successful delivery of Digital Asset Management platforms. With asset-factory coverage of 100+ asset types and 500+ prebuilt test cases, ZenQ has supported clients across the world with their DAM implementations and continuous testing needs.

15 Jan 2018

Test automation came into prominence with the agile development movement, and with CI/CD and DevOps its importance has only increased. Tools and frameworks have also matured over time, resulting in the emergence of promising automation tools across multiple languages and platforms. Having such a large set of tools to choose from makes it daunting to identify the best fit for your needs. If you face the dilemma of selecting the best automation tool for your needs, your search stops here. Read on as we delve into some important criteria you can use to select the right test automation tools and frameworks. Throughout this article, the word "tool" is used to cover both tools and frameworks.

1. Platform/technology stack

One of the most important criteria, and one that is often overlooked, is the platform/technology stack. If your development team is very familiar with a particular stack, it makes sense to select a tool in the same stack: you want your developers to leverage the automated tests and even contribute to them, and maintenance becomes easier when the tests need to be corrected or enhanced. If you have a JavaScript-based development platform (React/Angular + Node.js, etc.), choose a JavaScript framework such as Nightwatch. On the other hand, if you have a more mature architecture using Java/C# on the server side, with middleware/ESB, choose that language.

2. System complexity

Evaluate how complex your system is and what layers it has. The more moving parts your system has, the more "complete" your automation tool needs to be. For example, if you have a front-end JS layer plus server-side business logic, an API layer, an ESB, a DB layer, and so on, you have to make sure you can extend your tests to all these layers (or at least assemble a set of tools that play well with each other).

3. Business case

ROI calculation is key to setting up and running a long-term test automation initiative. Consider the number of tests you will have, multiply it by the effort needed to execute them manually, and do not ignore the frequency of such testing. Get quotes from test automation providers for developing and maintaining an automated test suite, and do the ROI calculation; sometimes you may be surprised to find that manual testing is cheaper and faster! You also need to weigh the cost of open-source tools against commercial tools in terms of speed of development, integration with existing tools, reporting and management capabilities, and so on.

4. Product roadmap

Don't only look at your current application and tests. Consider your product roadmap: how it will change, what features you will add, and at what frequency. Talk to your developers to understand whether the current development platform can support all the new features or whether you need to migrate to a different platform. Maintenance of automated tests increases significantly if the application is scheduled to change a lot; at the very least, ensure your automation framework makes it easy to change the flows in the automation code.

5. Reporting

Discuss and document what kinds of reports and dashboards you need. Most test automation solutions provide execution reports, but look for a dashboard with the ability to see trends, and consider integration with test management and defect management tools. Include logging of the stack trace during automation execution so that QA and developers can debug repeated failures; a sketch of such a listener follows.
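One lightweight way to capture failure details, sketched here assuming TestNG 7+ as the runner (where ITestListener methods have default implementations); the class name and log destination are illustrative:

import org.testng.ITestListener;
import org.testng.ITestResult;

// Logs the full stack trace of every failed test so that repeated
// failures can be debugged from the execution output alone.
public class FailureLogger implements ITestListener {
    @Override
    public void onTestFailure(ITestResult result) {
        System.err.println("FAILED: " + result.getMethod().getMethodName());
        Throwable cause = result.getThrowable();
        if (cause != null) {
            cause.printStackTrace(System.err); // swap in your logging framework here
        }
    }
}

Register the listener in testng.xml via a <listeners> element, or on a test class with the @Listeners annotation.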
6. Product development methodology

Ensure that your test automation development fits the way you develop and release products, fixes, and enhancements. If you run your projects using an agile methodology or have a CI/CD pipeline, test automation is an important safety net. However, without a shared understanding or a "design contract", the automated tests are likely to break far more frequently, and keeping them in sync with the application takes significant effort. That cost can be offset to a large extent by the rapid feedback a robust test suite gives developers.

7. Test automation tools

There are many tools for test automation today, all promising a "silver bullet" experience. From code-oriented tools such as Selenium, HP UFT, and the Parasoft tools to packaged test automation products, there is a wide variety of choice. Most test automation products use either a keyword-driven model or a parametrized record-and-playback model and promise that non-technical QA teams can use them easily. Nevertheless, do a detailed feasibility test against the complex workflows in your applications; even with these products, implementation and maintenance can be a significant challenge because the underlying code may not be available for editing.

8. Multiple codebases for your tests

Depending on your application architecture, you may have UI tests based on modern JavaScript frameworks, web services/APIs, multiple protocols, iOS/Android apps, and so on. Your automation suite will then have multiple codebases for different parts of your application. In this scenario, ensure that your frameworks and tools can coexist and be run from a single control structure.

9. Integrated testing

Most business applications are not stand-alone entities: they take input from and send output to other applications, and a business flow may not be complete unless data is validated across those applications. This means your tests should run not just against one application but across multiple applications operating on different platforms.

10. Long-term maintenance strategy

Tests quickly become redundant if they don't keep pace with your applications, and automated tests are especially susceptible to such entropy. Have a maintenance strategy, include time to rewrite some of the tests, and be prepared to migrate the automated tests to a different tool or platform if need be.

By taking all the above criteria into consideration, you will be well informed and in a position to make the right decision on "THE" test automation tool that best fits your needs. To make it easier for you, we have prepared a checklist that will let you check off the important criteria for your tool selection and help you in your decision-making journey. Happy Shopping!!

19 Dec 2017

Uncovering the test strategy

The Internet of Things (IoT) is the new buzzword, with predictions ranging from "it's just another connected network" to "this is the best thing since sliced bread". Every new idea causes change; RADICAL ideas have a greater impact, and REVOLUTIONARY ideas disrupt the course of humanity. IoT, as a revolutionary idea, is coming of age and holds the promise to disrupt the industrial ecosystem. It is already being touted as the next industrial revolution.

Before we start thinking about what it means to the testing industry, let us agree on a definition, so that we are on the same page:

"The network of physical objects that contain embedded technology to communicate and sense or interact with their internal states or the external environment." - Gartner

With this definition, and from what we see in reality, there are two different types of IoT:

- The Industrial IoT
- The Consumer IoT

The Industrial IoT covers areas that organizations can use to improve the efficiency and/or effectiveness of producing and delivering their products and services, including:

- Smart factories
- Utility grids
- Intelligent machines/robots
- Smart cities
- Waiter-less Japanese restaurants [hmm, no tips?]

The consumer face of IoT focuses on providing a richer experience to individuals and families. Some examples include:

- Smart appliances (Whirlpool integrating Amazon Dash into its washing machines, or Parrot's plant-watering sensors)
- Self-driving/autonomous cars
- Smart utility meters
- Wearables

Many of these are already here, but this is only the beginning. As more devices get connected, more sophisticated software is developed to collect, process, and store data, and better algorithms are designed to make faster decisions, we will see a substantial increase in the use of IoT in our daily lives. All this means that many physical things will become digital and bring their own set of challenges. Failed connections, loss of power, hardware failures, software errors, and more could cause a myriad of issues, from minor inconveniences to large-scale losses, such as:

- Security threats leading to privacy and financial loss
- Slow performance due to overworked networks and latency
- Low fault tolerance and high-maintenance devices
- New interfaces
- Incomplete sensor and other recorded data

Some breaches that have already occurred include:

- The Target (US retail giant) data breach through malicious hacking of PoS systems
- Remotely taking control of a connected car
- Home-automation devices getting hacked: lights flipped or IP cameras turned on without the owner's knowledge

What has this got to do with testing?

Testing IoT systems means that all the skills now applied individually to hardware and software systems must be brought together under one roof to develop and deliver an IoT test solution. This can be a challenge, as not all organizations have the processes, skills, and tools to get the job done effectively.
The following considerations need to be kept in mind while designing IoT test cases:

1. Connected devices - hardware, firmware, and software are all equal points of failure
2. Multiple protocols - different devices are connected using a variety of protocols, and each interconnection has the potential to disrupt the system
3. Device fragmentation - this includes OS, architecture, and other differences between the connected devices
4. Power considerations - many field systems have low power budgets; simply connecting them can drain power rapidly, so software must be written with this in mind

What should a typical IoT test strategy cover?

- Embedded system testing - testing the device as a white box
- Web testing - testing the device as another application on the web
- Performance testing - internal and network communications
- Security testing - authentication, privacy, and control levels (autonomy vs. controlled)
- UI testing - user interface, remote administration, and usability
- Architecture-based testing - interfaces, OS, and device fragmentation
- Exploratory testing - increased use of SBET (session-based exploratory testing) and testing tours
- Interoperability testing - supported protocols, encryption, and data transfer (see the connectivity sketch below)
- Automation and tools - multi-layered automation, with specific tools for specific areas
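As a flavour of what protocol-level test code can look like, here is a minimal round-trip check using the Eclipse Paho MQTT client; the broker URL and topic are illustrative, and a real interoperability suite would repeat this across every supported protocol and QoS level:

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;

// Publish a sensor reading and subscribe to it again, verifying that
// the broker round-trips the payload intact.
public class MqttRoundTripCheck {
    public static void main(String[] args) throws Exception {
        String broker = "tcp://test.mosquitto.org:1883"; // illustrative public broker
        String topic = "iot-test/sensor-42/temperature";

        MqttClient client = new MqttClient(broker, MqttClient.generateClientId());
        client.connect();

        client.subscribe(topic, (t, message) ->
                System.out.println("Received on " + t + ": "
                        + new String(message.getPayload())));

        MqttMessage reading = new MqttMessage("21.5".getBytes());
        reading.setQos(1); // at-least-once delivery
        client.publish(topic, reading);

        Thread.sleep(2000); // allow the round trip to complete
        client.disconnect();
        client.close();
    }
}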
It is recommended that companies looking to test IoT systems build a deep understanding of embedded systems and hardware, hardware and software APIs, and multi-protocol testing, together with a high level of white-hat hacking capability. Going by current trends, however, what we may see instead is the emergence of an ecosystem of organizations with complementary skills that partner to develop and test complex IoT systems. It is only a matter of time before we see mature organisations delivering complex IoT testing solutions.

23 Nov 2017

Performance testing is a crucial part of testing any software application: it checks the application for speed, stability, and scalability. This blog touches upon the stages of the performance testing life cycle and the best practices for each stage, which help testing professionals design and deliver efficient tests and reports.

Test planning

The test plan is one of the crucial steps of performance testing, enabling a smooth transition between all performance testing activities throughout the project life cycle. Deviation from the test plan can lead to conflicts in deadlines and deliverables, so it is important to have an effective plan in place.

- Provide a test schedule for smoke tests and baseline/benchmark tests in the test document
- Call out all statements that are derived from assumptions
- Get the test plan reviewed by senior management and approved by the client before proceeding to testing
- Set client expectations early to avoid any confusion

Test design and script development

Want to reduce the scripting load? Listed below are a few best practices:

- Acquire user accounts with exactly the same permission level as the end users; testing with admin accounts, or accounts with additional features, may create problems when validating the scripts against live accounts
- Always keep a copy of the initial/raw version of the script to refer back to whenever needed
- Correlate all values that appear to be dynamic, such as Unix timestamps
- Parameterize all user input data in the flow; a .CSV file format is recommended
- Declare the URL and think time as global variables, to reduce scripting effort whenever the URL changes
- Always implement context checks and error handling for every page of the application
- Validate scripts with multiple iterations and multiple user accounts
- Always follow naming conventions in scripts for better readability

Test execution

Now that we have seen how to develop a test plan and design efficient scripts, it is time to execute them.

- Gather data requirements in advance and request the expected number of accounts from the client
- Ensure sufficient privileges for the validated user accounts
- Use random think time in the script to emulate realistic end-user behaviour (see the sketch below)
- Disable logging during load tests to limit disk writes on load generators
- Generate load from load generator machines rather than the controller/master whenever possible, as the controller collects results from the load generators and renders run-time data during the test
- Include details such as the project name, number of virtual users, and date in scenario names (example: LoadTest01_ZenQ_50Users_01Jan14)
- Validate load generator connectivity before starting the test
- Conduct a smoke test before executing load tests to validate the scripts for multiple users

Final delivery and report submission

Now is the time to generate reports from the tests you have run and present them to the client.

- Save the final scripts in a designated folder with an additional backup (VSS, SVN, or Google Drive)
- Organize the folder structure for all project artefacts as follows:
  - 01_RawResults holds the raw results files (example: .lrr for LoadRunner, .jtl for JMeter)
  - 02_Reports contains the test reports for each particular test
  - 03_TestDocs holds the test plan document, user-flow documents, and final test reports
- In load test reports, add legends to graphs for better readability
- Ensure that all graphs show data points starting from zero, with scales corresponding to the data collected
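Referring back to the think-time practice above, here is a minimal sketch of randomized think time as it might appear in a Java-based load script; the 5-second base and the plus-or-minus 30% spread are illustrative, and LoadRunner and JMeter offer equivalent built-in timers:

import java.util.concurrent.ThreadLocalRandom;

// Randomized think time: pause for the base duration +/- 30%, so that
// virtual users do not all hit the server in lockstep.
public class ThinkTime {
    public static void pause(long baseMillis) throws InterruptedException {
        double factor = ThreadLocalRandom.current().nextDouble(0.7, 1.3);
        Thread.sleep(Math.round(baseMillis * factor));
    }

    public static void main(String[] args) throws InterruptedException {
        pause(5000); // think for roughly 3.5 to 6.5 seconds between pages
    }
}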
We have seen the stages of the performance testing life cycle and the best practices for each stage. Using these practices, we have been able to design efficient tests, execute them, and deliver high-quality test reports to our clients.

12 Oct 2017

Executing automated tests simultaneously in a distributed fashion is a great way to optimize execution time. This article illustrates how to use the Appium test automation tool on multiple physical devices simultaneously. Appium uses TestNG and Selenium Grid for parallel test execution on multiple mobile devices. Note: currently, parallel test execution on mobile devices works only on the Android platform.

What do you need?

- Selenium standalone server JAR
- Hub configuration JSON file
- Node configuration JSON files
- Android devices (two or more, with API level > 19)

Appium

Appium is an open-source test automation tool, developed and supported by Sauce Labs, for automating native and hybrid mobile apps. It uses the JSON wire protocol internally to interact with iOS and Android native apps through Selenium WebDriver.

Hub setup

Launch a command prompt, navigate to the location of the Selenium standalone JAR file, and run the following command to launch the Selenium hub with its JSON configuration:

java -jar selenium-server-standalone-2.50.1.jar -role hub -hubConfig path\to\hubconfig.json

Below is the JSON configuration for the hub:

{
  "host": null,
  "port": 4444,
  "newSessionWaitTimeout": -1,
  "servlets": [],
  "prioritizer": null,
  "capabilityMatcher": "org.openqa.grid.internal.utils.DefaultCapabilityMatcher",
  "throwOnCapabilityNotPresent": true,
  "nodePolling": 5000,
  "cleanUpCycle": 5000,
  "timeout": 300000,
  "maxSession": 5
}

Node setup

Start Appium servers as nodes using node configuration JSON files. Each node holds the information for one Android device (Android version, Appium URL, and so on).

Node 1

The following is the JSON file for node 1; here "url" is the HTTP endpoint the Appium server listens on for this node (e.g. http://0.0.0.0:4723/wd/hub).

{
  "capabilities": [
    {
      "browserName": "ANDROID",
      "device": "GT-I9300",
      "version": "4.3",
      "maxInstances": 1,
      "platform": "ANDROID"
    }
  ],
  "configuration": {
    "cleanUpCycle": 2000,
    "timeout": 10000,
    "proxy": "org.openqa.grid.selenium.proxy.DefaultRemoteProxy",
    "url": "http://0.0.0.0:4723/wd/hub",
    "maxSession": 4,
    "port": 4723,
    "host": "0.0.0.0",
    "register": true,
    "registerCycle": 5000,
    "hubPort": 4444,
    "hubHost": "localhost"
  }
}

Launch a new command prompt and run:

appium --nodeconfig path\to\nodeconfig1.json -p 4723 -bp 5723

Note: -p is the main port Appium listens on (e.g. 4723), and -bp is the bootstrap port Appium uses to talk to the device (-bp applies only to Android).

Node 2

The following is the JSON file for node 2:

{
  "capabilities": [
    {
      "browserName": "ANDROID",
      "device": "Nexus 5",
      "version": "4.4.2",
      "maxInstances": 1,
      "platform": "ANDROID"
    }
  ],
  "configuration": {
    "cleanUpCycle": 2000,
    "timeout": 10000,
    "proxy": "org.openqa.grid.selenium.proxy.DefaultRemoteProxy",
    "url": "http://0.0.0.0:4724/wd/hub",
    "maxSession": 4,
    "port": 4724,
    "host": "0.0.0.0",
    "register": true,
    "registerCycle": 5000,
    "hubPort": 4444,
    "hubHost": "localhost"
  }
}

Similarly, for node 2, launch a new command prompt and run:

appium --nodeconfig path\to\nodeconfig2.json -p 4724 -bp 5724

Parallel execution with TestNG

Create a new Java class and a new method inside it. Annotate the method with @Test and @Parameters, and provide default values with the @Optional annotation (these can be overridden from testng.xml). The following parameters are passed:
- app: the .apk file location
- deviceName: the name of the device (e.g. LG Nexus 5)
- deviceVersion: the Android OS version (e.g. 4.4)
- udid: the unique device identifier

The following sample Java class performs simple actions on a hybrid app on Android devices:

package com.testSuite;

import java.net.MalformedURLException;
import java.net.URL;
import java.util.concurrent.TimeUnit;

import io.appium.java_client.android.AndroidDriver;
import org.openqa.selenium.By;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.testng.annotations.*;

public class SampleGridTest {

    private AndroidDriver driver;
    private final DesiredCapabilities capabilities = new DesiredCapabilities();
    private final String app = "D:\\sampleApp\\net.one97.xxxxx.apk";

    @Parameters({"deviceName", "version", "udid", "url"})
    @Test
    public void appiumGridTest(@Optional("Galaxy SIII") String deviceName,
                               @Optional("4.3") String version,
                               @Optional("xxxxxxxxxxx") String udid,
                               @Optional("http://0.0.0.0:4723/wd/hub") String url)
            throws MalformedURLException {
        capabilities.setCapability("app", app);
        capabilities.setCapability("deviceName", deviceName);
        capabilities.setCapability("deviceVersion", version);
        capabilities.setCapability("udid", udid);

        driver = new AndroidDriver(new URL(url), capabilities);
        try {
            driver.manage().timeouts().implicitlyWait(20, TimeUnit.SECONDS);

            // Dismiss the help overlay and the "No, thanks" prompt if present
            WebElement swipeField =
                    driver.findElement(By.id("net.one97.xxxxx:id/help_overlay"));
            if (swipeField.isDisplayed()) {
                swipeField.click();
            }
            WebElement noThanksOption = driver.findElement(By.name("No, thanks"));
            if (noThanksOption.isDisplayed()) {
                noThanksOption.click();
            }

            // Click through the Account, Wallet, Login and Home buttons
            driver.findElement(By.name("Account")).click();
            driver.findElement(By.name("Wallet")).click();
            driver.findElement(By.name("Login to xxxxx")).click();
            driver.findElement(By.name("Home")).click();
        } finally {
            driver.quit(); // always release the session, even on failure
        }
    }
}

Create a testng.xml file in the project root folder, copy the following content into it, and update the values to match your device specifications.
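The original testng.xml was not preserved with this post; the sketch below is a reconstruction of what it would typically contain, with parallel="tests" so that each <test> drives one device. The device names, versions, and udids are placeholders:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="Appium Grid Suite" parallel="tests" thread-count="2">
  <test name="Device1">
    <parameter name="deviceName" value="GT-I9300"/>
    <parameter name="version" value="4.3"/>
    <parameter name="udid" value="device1-udid-here"/>
    <parameter name="url" value="http://0.0.0.0:4723/wd/hub"/>
    <classes>
      <class name="com.testSuite.SampleGridTest"/>
    </classes>
  </test>
  <test name="Device2">
    <parameter name="deviceName" value="Nexus 5"/>
    <parameter name="version" value="4.4.2"/>
    <parameter name="udid" value="device2-udid-here"/>
    <parameter name="url" value="http://0.0.0.0:4724/wd/hub"/>
    <classes>
      <class name="com.testSuite.SampleGridTest"/>
    </classes>
  </test>
</suite>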
Right-click on the testng.xml file, click Run As, and select TestNG Suite.

Appium, an open-source test automation tool, is designed to meet mobile automation needs on the iOS and Android platforms. ZenQ's teams are well versed in executing Appium tests in parallel on multiple Android devices, and our clients have experienced increased efficiency and reduced turnaround time in test automation.

21 Sep 2017

There is a lot of buzz around the TestRail test management system, owing to its efficiency in managing releases, test cases, and test results. TestRail also makes it easy to track individual tests, milestones, and projects with its dashboard and activity reports. This helps teams manage and track software testing efforts and organize large sets of test cases, which makes it an efficient QA management tool. Below is an attempt to map a typical QA process onto TestRail.

Mapping the QA process to TestRail

Consider a typical QA process: a new build is released for testing, a plan and test cases are created, execution happens on test environments, test results are updated, and a summary/quality report is sent at the end of the test cycle. The steps to map this process onto TestRail are given below.

Step 1: A new test build is released for each release, with a few new features and bug fixes implemented.
Action in TestRail: Create a milestone with the release name, say Release 1.2.1.

Step 2: A high-level plan is created with the strategy, schedule, and deliverables.
Action in TestRail: No change in TestRail.

Step 3: Test cases are created for the new implementations in the release (features and bug fixes).
Action in TestRail: Test cases can be organized in folders called test suites; test cases for new features are added to the respective test suites.

Step 4: Test cases are executed on pre-defined environments for the new features, and bug fixes are retested.
Action in TestRail: Add a test plan with the build name, say Build 1.2.1.1, and associate it with milestone 1.2.1. Add two test runs, say 'Release test cases' and 'Regression test cases', and add the respective test cases to each run. Add the environments as configurations on each test run.

Step 5: Test results are updated; bugs are followed up until fixed.
Action in TestRail: Open each test case to be executed, click 'Start Progress', execute the test case, and click 'Stop' when execution is complete. (Results can also be pushed automatically; see the sketch at the end of this post.)

Step 6: A quality report is sent with the outstanding bugs and ZenQ's recommendation on the build's readiness for release.
Action in TestRail: Reports can be generated as defined in the 'Reports' section.

By following the above steps in TestRail, a QA manager can effectively and efficiently manage and track test cases.
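For teams that also run automated tests, results can be pushed into TestRail programmatically. Here is a minimal sketch using TestRail's REST API v2 add_result_for_case endpoint, assuming Java 11+; the host, run ID, case ID, and credentials are placeholders:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

// Pushes a "passed" result for one test case in one run via TestRail's REST API.
public class TestRailResultPush {
    public static void main(String[] args) throws Exception {
        String host = "https://example.testrail.io";   // placeholder instance
        int runId = 12, caseId = 345;                  // placeholder IDs
        String auth = Base64.getEncoder()
                .encodeToString("user@example.com:api-key".getBytes());

        // status_id 1 means "Passed" in TestRail's default status set
        String body = "{\"status_id\": 1, \"comment\": \"Passed by automation\"}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(host + "/index.php?/api/v2/add_result_for_case/"
                        + runId + "/" + caseId))
                .header("Authorization", "Basic " + auth)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}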