Thursday, September 30, 2021

Ad Hoc Testing Explained

 

What is Ad Hoc Testing?

Ad Hoc Testing is testing performed at random, without any plan or documentation. It is also referred to as Random Testing or Monkey Testing. The testing steps and scenarios depend entirely on the tester, and defects are found through random checking.

Ad hoc testing does not follow any structured approach and is performed at random on any part of the application. Its main aim is to find defects through random checking. Ad hoc testing is often driven by the technique of Error Guessing, where testers with enough domain knowledge and experience "guess" the most likely sources of errors.

This testing requires no documentation, planning, or formal process. Because it aims to find defects through a random approach without documentation, defects are not mapped to test cases. As a result, it can be very difficult to reproduce a defect, since there are no test steps or requirements mapped to it.

Types of ad hoc testing

Buddy Testing:

Two "buddies" work together on identifying defects in the same module; usually one is from the development team and the other from the testing team. Buddy testing helps testers develop better test cases and lets the development team make design changes early. It usually takes place after unit testing is complete.

Pair testing:

Two testers are assigned the same modules, share ideas, and work on the same machine to find defects. One person executes the tests while the other takes notes on the findings, so the roles during testing are tester and scribe.

Monkey Testing:

The product or application is tested randomly, without test cases, with the goal of breaking the system.
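
Below is a minimal sketch of what monkey testing can look like in practice: a loop that throws random inputs at the code under test and flags any unexpected crash. The `calculate_discount` function and the set of random value types are hypothetical stand-ins, not part of any specific tool.

```python
import random
import string

def calculate_discount(price, coupon_code):
    """Hypothetical function under test; rejects invalid prices."""
    if not isinstance(price, (int, float)) or isinstance(price, bool) or price < 0:
        raise ValueError("invalid price")
    return price * (0.9 if coupon_code == "SAVE10" else 1.0)

def random_value():
    """Produce a random value of a random type."""
    return random.choice([
        random.randint(-1000, 1000),
        random.uniform(-1e6, 1e6),
        "".join(random.choices(string.printable, k=random.randint(0, 20))),
        None,
        [],
    ])

if __name__ == "__main__":
    for _ in range(1000):
        price, coupon = random_value(), random_value()
        try:
            calculate_discount(price, coupon)
        except ValueError:
            pass  # expected rejection of bad input
        except Exception as exc:
            # Anything else is a potential defect; log the inputs so the
            # random finding can actually be reproduced later.
            print(f"Unexpected {type(exc).__name__} for price={price!r}, coupon={coupon!r}")
```

Logging the generated inputs, as the last line does, is what makes an otherwise unreproducible random finding traceable, which directly addresses the reproducibility drawback discussed later.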

When to execute Ad hoc Testing?

Ad hoc testing can be done at any point in the project, whether at the beginning, middle, or end of testing. It is especially useful when time is very limited but detailed testing is still required. Usually it is performed after formal test execution, and it is effective only if the tester has thorough knowledge of the system under test.


Ad Hoc Testing does have its own advantages:

A totally informal approach, it provides an opportunity for discovery, allowing the tester to find missing cases and scenarios that were overlooked when the test cases were written.

  • The tester can really immerse himself or herself in the role of the end-user, performing tests free of boundaries or preconceived ideas.
  • The approach can be implemented easily, without any documents or planning.

That said, while Ad Hoc Testing is certainly useful, a tester shouldn’t rely on it alone. In a project following the Scrum methodology, for example, a tester who focuses only on the stated requirements and performs Ad Hoc testing on the rest of the modules will likely ignore some important areas and miss other very important scenarios.

When using an Ad Hoc approach, a tester may attempt to cover all scenarios and areas but will likely still miss a number of them. There is always a risk that the tester performs the same or similar tests multiple times while other important functionality is broken and ends up not being tested at all, because Ad Hoc Testing does not require all the major risk areas to be covered.

Drawbacks:

  • As this type of testing follows no structured approach and no documentation is mandatory, the main disadvantage is that the tester has to remember all the scenarios that were tested.
  • The tester may not be able to recreate bugs in subsequent attempts if someone asks for an issue to be reproduced.

Ad hoc testing gives testers knowledge of applications across a variety of domains, and because the entire application can be covered in a short time, it gives the tester confidence to prepare more ad hoc scenarios. Formal test scenarios are written from the requirements, whereas ad hoc scenarios emerge from a first round of ad hoc testing on the application, and they often find more bugs than formal test execution alone.


Adhoc Testing vs Exploratory Testing

Ad hoc Testing: The tester may refer to existing test cases and just pick a few at random to test the application. It is more of a trial-and-error approach to finding a bug. If you find one, there is already a documented test case to mark as failed.

Exploratory testing: This is a formal testing process that doesn’t rely on test cases or test-planning documents. Instead, testers go through the application and learn about its functionality as they test. They then use exploratory test charters to direct, record, and keep track of observations from the exploratory test sessions.

Ad hoc Testing | Exploratory Testing
Ad hoc testing is considered to be more informal in nature. | Exploratory testing is a formal form of testing.
Ad hoc testing requires enough domain knowledge and experience. | Exploratory testing is a side-by-side process of exploring and learning the application.
Ad hoc testing doesn’t need any plan for performing any activity. | Before performing exploratory testing, the tester needs to spend some time identifying the areas of attack (where bugs are most likely), including the scope of testing and the goals to be completed in the specified time.
There is no time limit for performing ad hoc testing; the major focus is on finding bugs. | It is a time-boxed approach, which helps with managing and tracking. It includes dedicated testing sessions with no interruption from email, phone, messages, etc.
No documentation is required, either prior to conducting the test or after an error or bug is found while testing. | Maintaining proper documentation is mandatory for recording and tracking observations from the exploratory test sessions.
Ad hoc testing can effectively be executed only once, as no documentation is maintained and an error cannot be reproduced. | Exploratory testing involves learning from the obtained test results and creating new solutions to resolve the issues.
Ad hoc testing works on a negative testing approach. | Exploratory testing works on both negative and positive testing; however, the focus lies more on a positive testing approach.

Performing Testing on the Basis of Test Plan

Test cases serve as a guide for the testers. The testing steps, areas and scenarios are defined, and the tester is supposed to follow the outlined approach to perform testing. If the test plan is efficient, it covers most of the major functionality and scenarios and there is a low risk of missing critical bugs.

On the other hand, a test plan can limit the tester’s boundaries. There is less opportunity to find bugs that exist outside the defined scenarios, and time constraints may limit the tester’s ability to execute the complete test suite.

So, while Ad Hoc Testing is not sufficient on its own, combining the Ad Hoc approach with a solid test plan and Exploratory testing strengthens the results. By executing the test plan while also devoting resources to Ad Hoc testing, a test team gains better coverage and lowers the risk of missing critical bugs. Defects found through Ad Hoc testing can also be folded into future test plans so that those defect-prone areas and scenarios are covered in later releases.

Additionally, when time constraints limit the test team’s ability to execute the complete test suite, the major functionality can still be defined and documented. The tester can then use these guidelines while testing to ensure that the major areas and functionalities have been covered, and afterwards Ad Hoc testing can continue on these and other areas.

Conclusion:
The advantage of Ad hoc testing is that it checks the completeness of testing and finds defects that planned testing misses. The defect-catching scenarios it uncovers can be added as additional test cases to the planned suite.
Ad hoc Testing also saves a lot of time because it doesn’t require elaborate test planning, documentation, or test case design.

At Webomates, we have applied this technique across multiple domains and successfully delivered quality defect reports. We specialize in taking software builds from many different creators, across domains and on 18 different Browser/OS/Mobile platforms, and providing them with defects in 24 hours or less. If you are interested in a demo, click here: Webomates CQ

Wednesday, September 29, 2021

Benefits of Intelligent Automation

 Artificial Intelligence is a technique that enables a computer system to exhibit cognitive abilities and emulate human behavior based on pattern recognition, analysis, and learning derived from available data with the aid of predetermined rules and algorithms.

Machine learning and deep learning are two terms that come up every time Artificial Intelligence is discussed. People tend to use them interchangeably; however, there is a fundamental difference between them.

Understanding the fundamental difference between AI, ML, and DL


Artificial intelligence is the superset of machine learning and deep learning.

Machine learning is a subset of AI that enables computer systems to learn and make decisions without explicit human intervention. It relies on pattern recognition and predefined algorithms to understand, learn, process, infer, and predict, based on past data and new information. Its prime focus is to aid decision-making; as ML improves, AI improves.

Deep Learning is a subset of machine learning, also called scalable machine learning. It supports machine learning algorithms by extracting patterns from zettabytes of unstructured and unprocessed data in data sets.

What makes intelligent automation important in software testing

Test automation promised to revolutionize the world of testing when it was first perceived and implemented. It delivered on that promise by improving overall testing speed and results. However, as technologies and processes further evolved, there was a need for improving the testing process too.

If you want to understand the journey of the testing process from manual to AI era, then read our blog “Evolution of software testing”.

Automation eased the testing load, but it could not “think”. For instance, test automation can execute thousands of test cases and provide test results, but human intervention is needed when it comes to deciding which tests to run. Adding the dimension of intelligence can add analysis and decision-making capability to test automation.

Intelligent automation works on data such as test results, testing metrics, and test coverage analysis, which AI/ML algorithms can extract and use to identify and implement an improved test strategy for more efficient testing.
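
As a concrete illustration of this idea, here is a minimal, hedged sketch of data-driven test selection: past execution records are used to rank tests by historical failure rate and by whether they touch the modules changed in the current build. The record fields and scoring weights are illustrative assumptions, not the algorithm of any particular tool.

```python
from collections import defaultdict

# Hypothetical execution history: (test, module it covers, pass/fail).
history = [
    {"test": "login_smoke",    "module": "auth",     "passed": False},
    {"test": "login_smoke",    "module": "auth",     "passed": True},
    {"test": "checkout_flow",  "module": "payments", "passed": False},
    {"test": "profile_update", "module": "account",  "passed": True},
]

def prioritize(history, changed_modules):
    """Rank tests: frequent failers and tests covering changed code go first."""
    runs, failures, module_of = defaultdict(int), defaultdict(int), {}
    for record in history:
        runs[record["test"]] += 1
        failures[record["test"]] += 0 if record["passed"] else 1
        module_of[record["test"]] = record["module"]

    def score(test):
        failure_rate = failures[test] / runs[test]
        touches_change = 1.0 if module_of[test] in changed_modules else 0.0
        return 0.6 * failure_rate + 0.4 * touches_change  # weights are arbitrary

    return sorted(runs, key=score, reverse=True)

print(prioritize(history, changed_modules={"payments"}))
# -> ['checkout_flow', 'login_smoke', 'profile_update']
```

A production-grade system would replace the hand-written score with a trained model and far richer features (coverage, code churn, defect history), but the overall flow of learning from past results and then picking and ordering the tests is the same.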

As per the Gartner study, “By 2022, 40% of application development (AD) projects will use AI-enabled test set optimizers that build, maintain, run and optimize test assets”

Let us explore further how intelligently automating the testing process helps in improving overall QA operations.

  • Higher level of test reliability with improved accuracy
    In the era of DevOps with frequent and shorter development cycles, continuous testing is conducted for every minor/major change or a new feature.  While test automation has helped a lot in reducing the testing burden, adding AI to automation can enhance the overall testing process, since it keeps evolving based on new information and analysis of past data. It also aids the teams in identifying the tests for better test coverage.
    With intelligent automation tools doing a large portion of recurring tedious tasks, the developers and testers can focus on other aspects like exploratory testing and finding better automation solutions.

  • Improved risk profiling and mitigation with enhanced test result analysis
    Intelligent automation brings the ability to do risk profiling to testing.
    Intelligent automation and analytics help the testing and development teams gain better insight into the impact of code changes and the risks associated with those changes. Appropriate actions can be taken based on these insights, and issues can be intercepted much earlier.

  • Deeper insights in test results and predictive analysis
    Test reporting and analysis are critical parts of software testing. They help teams understand the gaps in the current test strategy and, consequently, define better strategies for the next test cycle.
    AI-infused tools can analyze and interpret test results, spot flaws, and suggest workarounds. With every test cycle they update their knowledge base from the test result analysis and apply that knowledge to improve testing, detecting even minor changes and predicting test outcomes.
    Improved defect traceability and prediction is a game-changer when it comes to optimizing test strategies.

  • Boosts efficiency by transforming DevOps with benefits of AI Ops and QA Ops
    To keep pace with dynamic software testing demands, DevOps has to be augmented with the power of artificial intelligence. QA Ops has gained importance in the past few years, and enabling it further through intelligent automation will ensure faster time to market with better quality.

  • Faster delivery with improved results
    Intelligent automation plays an important role in accelerating releases since it optimizes the whole testing process based on a comprehensive analysis of previous test results. Continuous testing for frequent changes can be time-consuming, but AI/ML expedites the whole process by identifying the right set of tests to be executed, thus saving a significant amount of time and resources.

Maximizing the benefits of test automation using Webomates Intelligent Automation solutions

Webomates provides intelligent automation solutions with intelligent analytics. It leverages the power of data processing, analysis, reasoning, and machine learning to provide an end-to-end testing solution for your business.

  • Self-healing test cases
    Agile development leads to frequent application updates. A good testing tool should be able to do the following:
      • Trace the changes to user stories/epics/requirements and update the tests accordingly.
      • Keep track of these changes while testing to ensure that nothing is broken.
      • In case of any issues, the involved teams should be notified on a priority basis and appropriate action should be taken.
      • Update the tests regularly based on the changes due to defect rectifications.

Webomates applies AI and ML algorithms in its self-healing test automation framework to dynamically understand the changes made to the application and modify the testing scope accordingly.
Webomates’ AI codeless engine modifies (heals) the test cases and scripts and re-executes them within the same test cycle. Healed test suites lead to faster testing and development, speeding up the entire release process.
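
To make the general idea of self-healing concrete, here is a minimal sketch of one common building block: a locator with ordered fallbacks, where the next strategy is tried when the primary locator breaks after a UI change. The locators below are hypothetical, and real self-healing engines (AI-based ones included) derive and maintain such fallbacks automatically rather than from a hand-written table.

```python
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By

# Ordered fallback locators per logical element name (illustrative values).
LOCATORS = {
    "submit_button": [
        (By.ID, "submit-btn"),                               # preferred, may break on redesign
        (By.CSS_SELECTOR, "form button[type='submit']"),     # structural fallback
        (By.XPATH, "//button[normalize-space()='Submit']"),  # text-based fallback
    ],
}

def find_with_healing(driver, logical_name):
    """Try each locator in turn; promote a working fallback for future runs."""
    last_error = None
    for index, (strategy, value) in enumerate(LOCATORS[logical_name]):
        try:
            element = driver.find_element(strategy, value)
            if index > 0:
                # A fallback worked: move it to the front so later runs stay "healed".
                LOCATORS[logical_name].insert(0, LOCATORS[logical_name].pop(index))
            return element
        except NoSuchElementException as exc:
            last_error = exc
    raise last_error
```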

  • Defect reporting, triaging and tracing
    As stated in the previous section, tracking a defect and tracing its source is an important analysis activity, and it takes resources, time, and effort. Artificial Intelligence can help here by understanding and learning from software behavior.
    Webomates CQ provides a detailed test report with triaged defects. The QA and development teams can access these reports, enabling them to intervene on time and take appropriate action.
    Webomates’ Intelligent Analytics improves your testing process by providing a continuous feedback loop from defects back to requirements.
    Our ingenious AI Test Package Analyzer identifies all the test cases which are impacted due to a defect and traces them to impacted user stories/epics/requirements to identify the exact origin of the defect. This aids in understanding the root cause of the issue.
    The results of exploratory testing are analyzed by our test package analyzer. In case a module gets a high number of defects during exploratory testing, then it needs to be re-examined and more test cases need to be generated to cover all the possibilities.

  • Defect prediction based on test results
    With defect rectification and tracing sorted, imagine if the testing tool can predict potential issues and suggest corrective actions. That is exactly what Webomates’ AI Defect Predictor and Creator does.
    AI Defect Predictor helps in overcoming the challenges posed by false failures in automation. Consider an example: for 300 automated test cases with a failure rate of roughly 40%, the usual triaging time to identify false failures is around 12 hours. Using our tool, this time is reduced to just 3-4 hours. The tool not only differentiates true failures from false failures but also creates defects for the true failures using the AI engine.

  • Delivering value for money
    There are multiple options for automated testing available in the market. Many service providers offer AI as a part of their testing package. It is important to make the right choice from a business, financial and technical perspective.
    Webomates CQ is a financially and technically suitable option with the ability to scale up or down as per the customer requirement. We have a capable team of analysts and engineers to aid you along with the power of intelligent automation.

If this has piqued your interest and you want to know more, then please click here and schedule a demo. Partner with us and reach out at info@webomates.com.

If you liked this blog, then please like/follow us at Webomates or Aseem.

Sunday, September 26, 2021

Smoke Testing vs Sanity Testing : A comparison

Software development and testing form an ecosystem that runs on the collective efforts of developers and testers. Every new addition or modification to the application has to be carefully tested before it is released to customers or end-users. To have a robust, secure, and fully functional end application, it has to undergo a series of tests. Multiple testing techniques are involved in the whole process; however, smoke and sanity testing are the first ones to be planned.

Let us examine Smoke and Sanity testing in this article.

Smoke testing essentially checks for the stability of a software build. It can be deemed as a preliminary check before the build goes for testing.

Sanity testing is performed after a stable build has been received and tested. It can be deemed a post-build check to make sure that all bugs have been fixed and that all functionality is working as expected.

Let us first take a quick look at the details of both types of testing, and then we can move on to identifying the differences between them. There is a thin line between the two, and people tend to confuse one for the other.

What is smoke testing?


Smoke testing, also known as build verification testing, is performed on initial builds before they are released for extensive testing. The idea is to catch issues, if any, at the preliminary stage so that the QA team gets a stable build for testing, saving a significant amount of QA effort and time.

Smoke testing is non-exhaustive and focuses on checking the workflow of the software by testing its critical functionalities. The test cases for smoke testing can be picked from an existing set of test cases. The build is rejected if it fails the smoke tests.

Note that it is just a checkpoint in the testing process; in no way does it replace comprehensive testing.

Smoke tests can be either manual or automated.
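
For illustration, here is a minimal automated smoke suite in pytest. The base URL and endpoints are hypothetical placeholders for an application's most critical flows; a real smoke suite would cover whichever handful of paths must work before deeper testing makes sense.

```python
import pytest
import requests

BASE_URL = "https://staging.example.com"  # hypothetical build under test

@pytest.mark.smoke
def test_application_is_up():
    response = requests.get(f"{BASE_URL}/health", timeout=10)
    assert response.status_code == 200

@pytest.mark.smoke
def test_login_page_loads():
    response = requests.get(f"{BASE_URL}/login", timeout=10)
    assert response.status_code == 200
    assert "login" in response.text.lower()

# Typically wired into CI so a failing smoke run rejects the build, e.g.:
#   pytest -m smoke --maxfail=1
```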

What is sanity testing?


Sanity testing is performed on a stable build that contains minor changes in code or functionality, to ensure that none of the existing functionality is broken by those changes. For this, a subset of regression tests is run on the build.


It is performed to verify the correctness of the application in light of the changes made. Random inputs and tests are used to check the functioning of the software, with the prime focus on the changes made.

Once the build successfully passes the sanity test, it is then subjected to further testing.

Sanity tests can be either manual or automated.
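
As a sketch of how a sanity run can be scoped to just the changed area, the tests below use pytest markers; only those tagged for the (hypothetical) checkout change are executed, instead of the whole regression suite.

```python
import pytest

@pytest.mark.sanity
@pytest.mark.checkout
def test_discount_is_applied_to_cart_total():
    cart_total, discount = 100.0, 0.10
    assert round(cart_total * (1 - discount), 2) == 90.0

@pytest.mark.regression
@pytest.mark.reports
def test_monthly_report_totals():
    # Unrelated to the checkout change; left out of the sanity run.
    assert sum([10, 20, 30]) == 60

# Sanity run scoped to the changed area:
#   pytest -m "sanity and checkout"
```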

Differences between Smoke testing and Sanity testing

This table summarizes the differences between these two tests for quick reading.


Criteria | Smoke testing | Sanity testing
Test Build Type | Initial Build | Stable Build
Objective | Stability | Correctness
Scope | Wide (the application as a whole is tested for functionality, without a deep dive) | Narrow (focus on one part, performing regression tests)
Focus | Critical functionalities | New/updated functionalities
Responsibility | Developers and/or testers | Testers
Superset | Acceptance testing, Regression testing | Regression testing, Exploratory testing
Documentation Needed | Yes (uses existing test cases) | Preferred (release notes or released-changes document)

Is it sane to compare Smoke to Sanity testing?


You must be wondering why we are even comparing these two when, on a superficial level, both of them perform checks before the “big” testing cycle commences.

It may be noted that they are different at many levels, albeit they appear to be similar.

Smoke testing takes care of build stability before any other comprehensive testing (including sanity testing) can be performed. It is the first measure that should ideally be taken in the software testing cycle, because testing an unstable build is a plain waste of resources in terms of time, effort, and money. However, smoke testing is not limited to just the beginning of the cycle; it is an entry pass for every new phase of testing and can be done at the integration, system, or acceptance level too.

Sanity testing, on the other hand, is performed on a stable build that has already passed the acid test of smoke testing and other checks.

Both can be performed either manually or with the help of automation tools. In some cases, where time is of the essence, sanity tests are combined with smoke tests, so it is natural for developers and testers to end up using these terms interchangeably. It is therefore important to understand the difference between the two, to ensure that no gaps are left in the testing process because of this confusion.

To summarize, smoke tests can be deemed a general, overall health check-up of the application, whereas sanity tests are a more targeted health check-up.

Conclusion

Webomates CQ helps in performing effective smoke testing using various continuous testing methodologies that run over automation and AI automation channels. It provides better accuracy in testing, and results are generated within a short period, approximately 15 minutes to 1 hour. It also provides CI/CD integration that can be linked with any build framework to give quick results.

Webomates CQ also provides services like Overnight Regression, which is conducted on specific modules, whether fully developed or work-in-progress. The advantage of using our product is that it takes less time than a full regression cycle: testing can be completed within 8-12 hours, and the development team gets a detailed execution report along with the defect report.

Webomates’ tools also help generate and automate new test scenarios within hours using AI. This reduces manual effort during sanity testing.


At Webomates, we continuously work to evolve our platform and processes to provide guaranteed execution, which takes the testing experience to an entirely different level, ensuring a higher degree of customer satisfaction. If you are interested in learning more about Webomates’ CQ service, please click here and schedule a demo, or reach out to us at info@webomates.com





Thursday, September 23, 2021

Top Benefits of API Testing

An application programming interface (API) is a computing interface that defines interactions between multiple software intermediaries. It defines the kinds of calls or requests that can be made, how to make them, the data formats that should be used, the conventions to follow, etc. (Source: Wikipedia)

APIs not only help in abstracting the underlying implementation but also improve the modularity and scalability of software applications. APIs encapsulate business and functional logic, act as a gateway to sensitive data, and expose only the objects and actions that are needed.

API testing deals with testing the business logic of an application, which typically lives in the business layer and handles all the transactions between the user interface and the underlying data. It is considered part of integration testing and involves verification of the functionality, performance, and robustness of APIs.


Besides checking for the functionality, API testing also tests for error condition handling, response handling in terms of time and data, performance issues, security issues, etc. 

API testing also includes contract testing, which, in simpler terms, verifies the compatibility and interactions between different services. The contract is between a client/consumer and an API/service provider.

Clearly, an API’s usage is not limited to just one application; in some cases APIs are shared across many applications and used for third-party integrations. Hence, developing and testing them thoroughly is extremely critical.

The following section highlights why thorough API testing entails many benefits in the long run.

Top Benefits of API testing


These benefits are discussed in detail below.

Technology independent and ease of test automation

  • APIs exchange data in a structured format via XML or JSON. This exchange is independent of the coding language, giving the flexibility to choose any language for test automation (a small example follows this list).
  • API testing can be effectively automated because the endpoints and request parameters are unlikely to change unless there is a major change in business logic. Automation reduces manual effort during regression testing and results in significant time savings.
  • API tests require less scripting effort than automated GUI tests. As a result, the testing process is faster with better coverage, which saves time and resources and ultimately reduces project costs.
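
The sketch below shows what such a language-independent, GUI-free API check can look like. The endpoint, fields, and base URL are hypothetical assumptions used only for illustration.

```python
import requests

BASE_URL = "https://api.example.com"  # hypothetical service under test

def test_get_user_returns_expected_fields():
    response = requests.get(f"{BASE_URL}/api/users/42", timeout=10)
    assert response.status_code == 200

    body = response.json()
    # Contract-style assertions on the JSON structure and types, not on any UI.
    assert body["id"] == 42
    assert isinstance(body["email"], str)
    assert set(body).issuperset({"id", "name", "email"})
```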

Reduced risks and faster time to market

  • API testing need not depend on the GUI or other sub-modules. It can be conducted independently at an earlier stage to check the core business logic, and any issues can be reported for immediate action without waiting for other modules to be completed.
  • Because APIs represent specific business logic, it is easier for teams to isolate a buggy module and rectify it. Bugs reported early can be fixed and retested independently, which reduces the time between builds and release cycles and results in faster product releases.
  • The amount and variety of input-data combinations naturally lead to testing limit conditions that otherwise might not be identified or tested. This exposes vulnerabilities, if any, at an earlier stage, even before the GUI has been developed. These vulnerabilities can then be rectified on a priority basis, closing any potential loopholes for breaches.
  • When multiple APIs from different sources are involved in developing an application, the interface handshake may or may not be firm. API testing dives deep into these integration challenges and handles them at earlier stages, ensuring that the end-user experience is not affected by issues that could have been handled at the API level.

Improved test coverage


API test automation helps cover a high number of test cases, both functional and non-functional. For a complete check of scenarios, API testing needs to run both positive and negative tests. Since API testing is a data-driven approach, various combinations of data inputs can be used in such test cases, which gives good test coverage overall.

Good test coverage helps in identifying and managing defects at a larger scale. As a result, only minor bugs, if any, make their way to production, resulting in a higher-quality product.
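
A small, hedged example of this data-driven style: one parametrized test exercises both positive and negative input combinations against a hypothetical order-creation endpoint. The payloads and expected status codes are assumptions for illustration.

```python
import pytest
import requests

BASE_URL = "https://api.example.com"  # hypothetical service under test

@pytest.mark.parametrize("payload, expected_status", [
    ({"item_id": 1, "quantity": 2},  201),  # positive: valid order
    ({"item_id": 1, "quantity": 0},  400),  # negative: zero quantity
    ({"item_id": 1, "quantity": -5}, 400),  # negative: negative quantity
    ({"quantity": 2},                400),  # negative: missing item_id
])
def test_create_order(payload, expected_status):
    response = requests.post(f"{BASE_URL}/api/orders", json=payload, timeout=10)
    assert response.status_code == expected_status
```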

Ease of test maintenance

  • API tests are usually deemed stable, and major changes are needed mainly when business logic changes. The frequency and amount of change are comparatively small, which means less rework on test cases when changes do occur. This is in sharp contrast to GUI testing, which requires rework at many levels for any change.
  • API tests can be reused during testing, reducing the overall time required for testing.

Also, since the number of APIs involved in an application can be large, it is a good idea to categorize the test cases for better test case management. We have talked about this in our earlier blog “Do’s and Don’ts of API testing”.

Conclusion

APIs evolve as business and functional requirements change, which makes it even more important to test them on a continuous basis.

Webomates tests APIs using both manual and automated testing. Webomates’ API testing also focuses on performance and security testing, to make sure the application is secure and has a strong backbone.

You can take a quick look at the following table to see which tools are already integrated into our Testing-as-a-Service platform.

Manual API testing | Automated API Testing
Postman | Jersey RESTful Web Services
Rest Assured | Postman
Swagger | Rest Assured
JMeter | JMeter
Any REST client or dev tool | Swagger
— | API Fortress

At Webomates, we continuously work to evolve our platform and processes in order to provide guaranteed execution, which takes testing experience to an entirely different level, thus ensuring a higher degree of customer satisfaction.

Webomates offers regression testing as a service that combines test-case-based testing and exploratory testing. Test-case-based testing establishes a baseline and uses a multi-execution-channel approach (AI automation, automation, manual, and crowdsourcing) to reach a true pass / true fail verdict. Exploratory testing expands the scope of the tests to take quality to the next level. The service guarantees both, and even covers updating the test cases of modified features during execution, within 24 hours.

For more details on the API automation testing services offered by Webomates, schedule a demo now!

If you are interested in learning more about Webomates’ CQ service please click here and schedule a demo, or reach out to us at info@webomates.com
