Performance Testing – Assuring the Speed, Scalability and Stability of Applications

We expect more from software today than ever before, which is the primary reason performance testing has become so critical. Performance testing should be part of every organization's IT process: every modern application, regardless of expected usage volume, should undergo standard performance testing. These tests expose faulty assumptions about how applications handle high volume, confirm that system scaling works as anticipated, and uncover load-related defects. Because it identifies defects that appear only under high load, performance testing helps improve applications at any scale.

Surprisingly, organizations continue to ignore the significance of performance testing, often deploying applications with little or no understanding of their performance. This mentality has changed little in recent years, as failures of high-profile software applications continue to occur.

In short, performance testing should be a top priority for any organization before it releases software or an application.

Why Performance Testing?

Performance testing is used to check how well an application can deal with user traffic. By running a repeatable load scenario against an application or site, testers can identify breaking points and assess expected behaviour. In particular, performance testing is performed to measure the reliability, speed, scalability, responsiveness and stability of the software.

Code changes from a team that is continually shipping new features and bug fixes can influence how an application looks and functions on different devices and browsers, and can change how rapidly the application loads across machines.

This is why performance testing is so crucial to a well-rounded QA strategy: verifying an application's performance and ensuring users experience acceptable load times and site speed is foundational to high-quality software.
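
As a minimal illustration of running a repeated scenario against an application, here is a Python sketch that fires concurrent requests at a URL and reports response times. The URL, user count and request count are illustrative assumptions, and a real engagement would use a dedicated load-testing tool:

    # Minimal load-test sketch: N concurrent "users" hitting one URL and
    # reporting response-time percentiles. Values below are illustrative.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "https://example.com/"   # illustrative target
    USERS = 20                     # simulated concurrent users
    REQUESTS_PER_USER = 5

    def one_user(_):
        timings = []
        for _ in range(REQUESTS_PER_USER):
            start = time.perf_counter()
            with urllib.request.urlopen(URL, timeout=10) as resp:
                resp.read()
            timings.append(time.perf_counter() - start)
        return timings

    if __name__ == "__main__":
        with ThreadPoolExecutor(max_workers=USERS) as pool:
            results = pool.map(one_user, range(USERS))
        timings = sorted(t for user in results for t in user)
        print(f"requests: {len(timings)}")
        print(f"median:   {timings[len(timings) // 2] * 1000:.0f} ms")
        print(f"p95:      {timings[int(len(timings) * 0.95)] * 1000:.0f} ms")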

Importance of Performance Testing

1.     Quick functional flows matter

Every end user of software expects each transaction to complete quickly. Performance testing plays a crucial role in verifying this.

2.     Capacity Management

A performance test indicates whether the hardware or production configuration needs improvement before new software is released to a live environment.

3.     Software Health Check-up

A performance test helps check the health of any software and gives inputs for further fine-tuning.

4.     Quality Assurance

A performance test also inspects the quality of the code written during the development life cycle, and helps identify whether the development team needs additional training to produce better-tuned code.

Now that you clearly know the importance of Performance testing, finding the bottleneck should be your next goal.

In a complex system built from many pieces, such as application servers, networks and database servers, there is a high chance of running into a problem. Let us discuss the possible bottlenecks.

What are Bottlenecks?

Performance bottlenecks can cause an otherwise functional computer or server to slow to a crawl. The term "bottleneck" applies both to an overloaded network and to the state of a computing device in which one component cannot keep pace with another, slowing down overall performance. Solving a bottleneck usually returns the system to operable performance levels, but fixing it first requires identifying the underperforming component.

Here are four common causes of bottlenecks:

CPU Utilization

According to Microsoft, “processor bottlenecks occur when the processor is so busy that it cannot respond to requests for time.”  Simply put, these bottlenecks are a result of an overloaded CPU that is unable to perform tasks in a timely manner.

CPU bottlenecks appear in two forms:

  • a processor running at more than 80 percent utilization for a prolonged period, and
  • an excessively long processor queue

CPU bottlenecks often originate from insufficient system memory and constant interruptions from I/O devices. Resolving these issues typically involves increasing CPU power, adding more RAM, and improving the efficiency of the application code.
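
As a rough illustration of the 80 percent guideline above, here is a minimal Python sketch that flags a suspected CPU bottleneck. It assumes the third-party psutil library (pip install psutil) and a Unix-like system for the load-average check; the thresholds are illustrative:

    # Minimal sketch: detecting a sustained CPU bottleneck with psutil.
    # Thresholds and sampling window are illustrative.
    import os
    import psutil

    SAMPLE_SECONDS = 5      # sampling window per check
    BUSY_THRESHOLD = 80.0   # percent, per the guideline above

    def cpu_looks_bottlenecked():
        # Average CPU utilization over the sampling window
        usage = psutil.cpu_percent(interval=SAMPLE_SECONDS)
        # 1-minute load average vs. core count approximates the run queue
        # (os.getloadavg is available on Unix-like systems)
        load_1min, _, _ = os.getloadavg()
        cores = psutil.cpu_count(logical=True)
        return usage > BUSY_THRESHOLD or load_1min > cores

    if __name__ == "__main__":
        print("CPU bottleneck suspected:", cpu_looks_bottlenecked())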

Network Utilization

Network bottlenecks occur when the communication between two devices lacks the bandwidth or processing capacity to complete tasks quickly. According to Microsoft, "network bottlenecks occur when there is an overloaded server, an overburdened network communication device, and when the network itself loses integrity". Solving network usage issues normally involves adding or upgrading servers, and upgrading network hardware such as hubs, routers and access points.
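
One simple way to gather evidence of a network bottleneck is to sample connection latency over time. Here is a minimal Python sketch using only the standard library; the host, port and sample count are illustrative:

    # Minimal sketch: measuring TCP connect latency to spot network
    # slowdowns. Host, port and sample count are illustrative.
    import socket
    import time

    def connect_latency_ms(host, port, timeout=3.0):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=timeout):
            pass  # we only care about the handshake time
        return (time.perf_counter() - start) * 1000.0

    if __name__ == "__main__":
        samples = sorted(connect_latency_ms("example.com", 443) for _ in range(5))
        print(f"median connect latency: {samples[len(samples) // 2]:.1f} ms")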

Software Limitation

Performance problems often originate within the software itself. A program may be designed to handle only a limited number of tasks at once, making it impossible for it to use extra CPU or RAM resources even when they are available. Furthermore, a program may not be written to use multiple CPU threads, and so only uses a single core on a multicore processor.

These issues are resolved by rewriting or patching the software.
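
To illustrate the single-core limitation, the following minimal Python sketch contrasts a CPU-bound task run serially on one core with the same work spread across cores; the busy-work function is purely illustrative:

    # Minimal sketch: a CPU-bound task run serially (one core) vs. in
    # parallel across cores. The busy-work function is illustrative.
    import time
    from concurrent.futures import ProcessPoolExecutor

    def busy_work(n):
        # CPU-bound busy work: sum of squares
        return sum(i * i for i in range(n))

    def run_serial(jobs):
        return [busy_work(n) for n in jobs]    # uses a single core

    def run_parallel(jobs):
        with ProcessPoolExecutor() as pool:    # one worker per core by default
            return list(pool.map(busy_work, jobs))

    if __name__ == "__main__":
        jobs = [2_000_000] * 8
        for label, runner in (("serial", run_serial), ("parallel", run_parallel)):
            start = time.perf_counter()
            runner(jobs)
            print(f"{label}: {time.perf_counter() - start:.2f}s")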

Disk Usage

The slowest component inside a PC or server is generally the long-term storage (HDDs and SSDs), which makes it an almost inevitable bottleneck. Even the fastest long-term storage solutions have physical speed limits, making this one of the most troublesome bottleneck causes to investigate. In most cases, disk speed can be improved by reducing fragmentation and increasing the rate of data caching in RAM. At the physical level, insufficient bandwidth can be addressed by moving to faster storage devices and expanding RAID configurations.
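
One way to confirm that storage is the limiting factor is to watch disk throughput while the application is under load. A minimal sketch, again assuming the third-party psutil library:

    # Minimal sketch: sampling disk throughput with psutil to see how close
    # the workload is to the device's practical limits.
    import time
    import psutil

    def disk_throughput_mb_s(seconds=5.0):
        before = psutil.disk_io_counters()
        time.sleep(seconds)
        after = psutil.disk_io_counters()
        read_mb = (after.read_bytes - before.read_bytes) / 1e6 / seconds
        write_mb = (after.write_bytes - before.write_bytes) / 1e6 / seconds
        return read_mb, write_mb

    if __name__ == "__main__":
        read_mb, write_mb = disk_throughput_mb_s()
        print(f"read: {read_mb:.1f} MB/s, write: {write_mb:.1f} MB/s")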

High-level activities during Performance Testing

Test Coverage

Test coverage for performance testing should span all key functionalities. The scenarios must be representative of different parameters, and you can automate key functionalities by assembling many scenarios. User data must be modelled realistically, as there will be many users using the system in their own context.

Non-Functional Requirements

Functional and non-functional requirements hold equal importance in performance testing. Functional requirements are far more specific, covering input data types, algorithms, and the functionality to be tested. The real challenge is identifying the less specific non-functional requirements, such as stability, capacity, usability, responsiveness, and interoperability.

Performance Test Analysis

Analysing performance test results is the most challenging and most important task in performance testing. It requires detailed knowledge and good judgement to interpret the reports and tools, and the tests must be updated regularly as the situation changes.

Conclusion

Proactive performance testing efforts give customers early feedback and help baseline application performance. This in turn drastically reduces the cost of fixing performance bottlenecks at later stages of development: it is always easier and less costly to redesign an application early in development than at a much later stage.

This also ensures that performance bottlenecks such as concurrency, CPU or memory utilization and responsiveness are addressed early in the application life cycle.

Nitor excels at evaluating the performance of applications across technologies and domains, and has well-defined processes and strategies for baselining application performance.

The Nitor TCoE has expert performance testers capable of executing performance engagements in close coordination with various stakeholders. Nitor performance testers are highly skilled in carrying out performance testing activities using open-source tools or the Microsoft tool set.

For more information, please contact marketing@nitroinfotech.com

What, Why & How of White Box Testing!

White Box testing, a quality assurance methodology in software testing, focuses on evaluating the internal workings of a system. A White Box tester must gain knowledge of the system's internals and understand how the system is implemented. Once the tester understands the internal structure, he/she can use that knowledge to develop test cases that exercise the data flow, control flow, information flow, error handling, exceptions, and even the coding practices implemented in the system.

Why White Box testing?

White Box testing is carried out to check:

  • Security holes in the code
  • Broken or incomplete paths in the code
  • The flow of the structure described in the specification document
  • All conditional loops in the code, to verify the complete functionality of the application
  • The code line by line or section by section, to provide 100% coverage

White Box Testing Tools and Techniques

White Box testing is a methodical approach to testing the internals of a system, following the set of activities below:

  • Understand the system: A White Box tester must carefully go through the requirement document and understand both the functional requirements, such as how the processing of data is handled, and the non-functional requirements, such as responsiveness.
  • Analyze the system: A White Box tester analyzes the technical design and implementation of the system by reviewing the technical design and architecture documents.
  • Test Design: During test design, a White Box tester uses the understanding of the system's functional, non-functional and technical requirements to create effective test designs, applying static and dynamic White Box test design techniques.
  • Test Implementation: In this stage, a White Box tester uses proven White Box testing frameworks to implement the White Box test cases (a minimal example follows after this list).
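
As an illustration, here is a minimal Python sketch of White Box test cases written for pytest. The apply_discount function is a hypothetical stand-in for real production code; the point is that each test is derived from the code's internal branch structure, so every path is exercised:

    # Minimal sketch: White Box (branch-coverage) tests for pytest.
    # apply_discount is a hypothetical stand-in for real code.
    import pytest

    def apply_discount(price, is_member):
        if price < 0:
            raise ValueError("price must be non-negative")  # error-handling path
        if is_member:
            return price * 0.9   # member branch
        return price             # non-member branch

    def test_member_branch():
        assert apply_discount(100.0, is_member=True) == 90.0

    def test_non_member_branch():
        assert apply_discount(100.0, is_member=False) == 100.0

    def test_error_path():
        with pytest.raises(ValueError):
            apply_discount(-1.0, is_member=True)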

Benefits of White Box testing:

Some important benefits of White Box testing are listed below:

  • One of the major benefits of White Box testing is the time and cost saved by finding defects that would otherwise have to wait for a Black Box tester to discover them, which can happen only after the implementation is ready and deployed in the test environment.
  • White Box testing not only shows the presence of the defect but also helps in locating the lines of code that caused the defect.
  • White Box testing helps in optimization of code.
  • White Box testing detects errors that can crop up due to “hidden” code.
  • Deciding which types of input data will test the application effectively is straightforward, as the tester knows the internal coding structure.
  • It facilitates in removing the extra lines of code which can otherwise cause hidden defects.

Top 11 Essential Considerations for Performing ETL Testing

ETL testing is a crucial part of data warehouse systems. It involves performing end-to-end testing of a data warehouse application. Each important phase of the ETL testing process is described below.

1 Requirements Testing: The objective of requirements testing is to ensure that all defined business requirements meet business user expectations. During requirements testing, the testing team should analyze the business requirements for testability and completeness. The following pointers should be considered during requirements testing:

  • Verification of logical data model with design documents.
  • Verification of many-to-many attribute relationships
  • Verification of the type of keys used
  • All transformation rules must be clearly specified
  • Target data type must be specified in data model or design document
  • Purpose and overview of the reports must be clearly specified
  • Report design should be available
  • All report details such as grouping, parameters to be used, filters should be specified
  • Technical definitions, such as data definitions and details of the tables and fields to be used in reports, must be documented
  • All details for header, footer and column heading must be clearly specified
  • Data sources and parameter name and value must be clearly specified
  • Technical mapping, in terms of the report name, table name, column name and description of each report, must be documented

2 Data Model Testing: The objective of this testing is to ensure that the physical model is in accordance with the logical data model. The following activities should be performed during this testing:

  • Verification of logical data model as per design documents
  • Verification of all the entity relationships as mentioned in design document
  • All the attributes, keys must be defined clearly
  • Ensure that the model captures all requirements
  • Ensure that the design and actual physical model are in sync
  • Ensure naming conventions are followed
  • Perform schema verification (see the sketch after this list)
  • Ensure that the table structure, keys and relationship are implemented in the physical model as per the logical model.
  • Validation of Indexes and Partitioning
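
As an example of schema verification, here is a minimal Python sketch that checks the physical schema against the documented logical model. It uses sqlite3 from the standard library for self-containedness; the table and column names are illustrative assumptions:

    # Minimal sketch: verify that the physical schema matches the documented
    # logical model. Table and column names are illustrative.
    import sqlite3

    EXPECTED_SCHEMA = {
        "dim_customer": {"customer_key", "customer_id", "name"},
        "fact_sales": {"sale_id", "customer_key", "amount", "sale_date"},
    }

    def verify_schema(conn):
        problems = []
        for table, expected in EXPECTED_SCHEMA.items():
            rows = conn.execute(f"PRAGMA table_info({table})").fetchall()
            actual = {row[1] for row in rows}  # index 1 is the column name
            if not rows:
                problems.append(f"missing table: {table}")
            elif actual != expected:
                problems.append(f"{table}: expected {expected}, got {actual}")
        return problems

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE dim_customer (customer_key, customer_id, name)")
        conn.execute("CREATE TABLE fact_sales (sale_id, customer_key, amount, sale_date)")
        print(verify_schema(conn) or "schema OK")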

3 Unit Testing: The objective of unit testing is to validate whether the implemented component functions as per the design specifications and business requirements. It involves testing business transformation rules, error conditions, and the mapping of fields at the staging and core levels. The following pointers should be considered during unit testing:

  • All transformation logic should work as designed from source till target
  • Surrogate keys have been generated properly
  • NULL values have been populated where expected
  • Rejects have occurred where expected and log for rejects is created with sufficient details
  • Auditing is done properly
  • All source data that is expected to be loaded into the target actually is loaded; compare counts between source and target
  • All fields are loaded with full contents, i.e. no data field is truncated during transformation
  • Data integrity constraints are implemented (a sketch of such checks follows below)
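
A minimal Python sketch of two of these checks, surrogate-key uniqueness and expected NULL population. sqlite3 stands in for the warehouse, and the table and column names are illustrative assumptions:

    # Minimal sketch of ETL unit checks: surrogate-key uniqueness and
    # expected NULL population. Names are illustrative.
    import sqlite3

    def check_surrogate_keys(conn, table, key_col):
        # Surrogate keys must be unique and non-NULL
        dupes = conn.execute(
            f"SELECT {key_col}, COUNT(*) FROM {table} "
            f"GROUP BY {key_col} HAVING COUNT(*) > 1 OR {key_col} IS NULL"
        ).fetchall()
        assert not dupes, f"bad surrogate keys in {table}: {dupes}"

    def check_not_null(conn, table, col):
        # Columns that must always be populated
        nulls = conn.execute(
            f"SELECT COUNT(*) FROM {table} WHERE {col} IS NULL"
        ).fetchone()[0]
        assert nulls == 0, f"{nulls} unexpected NULLs in {table}.{col}"

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE dim_customer (customer_key, name)")
        conn.executemany("INSERT INTO dim_customer VALUES (?, ?)",
                         [(1, "Ada"), (2, "Grace")])
        check_surrogate_keys(conn, "dim_customer", "customer_key")
        check_not_null(conn, "dim_customer", "name")
        print("unit checks passed")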

4 System Integration Testing: Once unit testing is done and all of its exit criteria are met, the next phase is integration testing. The objective of integration testing is to ensure that all integrated components work as expected. The data warehouse application must be compatible with upstream and downstream flows, and all ETL components should execute with the correct schedule and dependencies. The following pointers should be considered during integration testing:

  • ETL packages with Initial Load
  • ETL packages with Incremental Load
  • Executing ETL packages in sequential manner
  • Handling of rejected records
  • Exception handling verification
  • Error logging

5 Data Validation Testing: The objective of this testing is to ensure that the data flowing through the ETL phases is correct and cleansed as per the applied business rules. The following pointers should be considered during data validation testing:

  • Data comparison between source and target
  • Data flow as per business logic
  • Data type mismatch
  • Source to target row count validation (scripted in the sketch after this list)
  • Data duplication
  • Data correctness
  • Data completeness
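
For example, source-to-target row count validation and duplicate detection can be scripted as below. This is a minimal sketch; sqlite3 and the table and column names are illustrative stand-ins for the real source and target systems:

    # Minimal sketch: source-to-target count validation and duplicate
    # detection. Table and column names are illustrative.
    import sqlite3

    def row_count(conn, table):
        return conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]

    def duplicate_rows(conn, table, business_key):
        return conn.execute(
            f"SELECT {business_key}, COUNT(*) FROM {table} "
            f"GROUP BY {business_key} HAVING COUNT(*) > 1"
        ).fetchall()

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE src_orders (order_id)")
        conn.execute("CREATE TABLE tgt_orders (order_id)")
        conn.executemany("INSERT INTO src_orders VALUES (?)", [(1,), (2,), (3,)])
        conn.executemany("INSERT INTO tgt_orders VALUES (?)", [(1,), (2,), (3,)])
        assert row_count(conn, "src_orders") == row_count(conn, "tgt_orders")
        assert not duplicate_rows(conn, "tgt_orders", "order_id")
        print("validation checks passed")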

6 Security Testing: The objective of this testing is to ensure that only authorized users can access the reports, as per their assigned privileges. While performing security testing, the following aspects should be considered:

  • Unauthorized user access
  • Role based access to the reports

7 Report Testing: The objective of report testing is to ensure that the BI reports meet all the functional requirements defined in the business requirement document. While performing functional testing, the following aspects should be considered:

  • Report drill down, drill up and drill through
  • Report navigation and embedded links
  • Filters
  • Sorting
  • Export functionality
  • Report dashboard
  • Dependent reports

  • Verify the report runs with a broad variety of parameter values and in whatever way users will receive the report (e.g. a subscription runs and deploys the report as desired)

  • Verify that the expected data is returned
  • Verify that the performance of the report is within an acceptable range
  • Report data validation (Correctness, Completeness and integrity)
  • Verify required security implementation
  • Automating processes whenever possible will save tremendous amounts of time
  • Verify that the business rules have been met

8 Regression Testing: The objective of regression testing is to keep the existing functionality intact each time new code is developed for a new feature or existing code is changed to correct application defects. Prior to regression testing, an impact analysis must be carried out in coordination with the developers in order to determine the impacted functional areas of the application. Ideally, 100% regression is recommended for each drop/build. If builds are too frequent and there is a time limitation on test execution, regression should be planned and executed based on the priority of the test cases.

9 Performance Testing: The objective of performance testing is to ensure that reports, and the data on them, load as per the defined non-functional requirements. In performance testing, different types of tests are conducted, such as load tests, stress tests and volume tests. While executing performance testing, the following aspects should be considered:

  • Compare the SQL query execution time on the report UI and against the backend data (a timing sketch follows after this list)
  • Concurrent access of the reports with multiple users
  • Report rendering with multiple filters applied
  • Load a high volume of production-like data to check whether the ETL process completes within the expected timeframe
  • Validate the OLAP system performance by browsing the cube with multiple options
  • Analyze the maximum user load, at peak and off-peak times, that can access and process BI reports
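
As an example of the first point, backend query execution time can be measured directly and compared with the report UI's load time. A minimal sketch, in which sqlite3 and the query are illustrative stand-ins for the warehouse and the report query:

    # Minimal sketch: timing backend SQL execution so it can be compared
    # with the report UI's load time. Query and data are illustrative.
    import sqlite3
    import time

    def time_query(conn, sql, runs=5):
        timings = []
        for _ in range(runs):
            start = time.perf_counter()
            conn.execute(sql).fetchall()   # fetch fully, as a report would
            timings.append(time.perf_counter() - start)
        return min(timings), sum(timings) / len(timings)

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE fact_sales (amount REAL)")
        conn.executemany("INSERT INTO fact_sales VALUES (?)",
                         [(float(i),) for i in range(100_000)])
        best, avg = time_query(conn, "SELECT SUM(amount) FROM fact_sales")
        print(f"best {best * 1000:.1f} ms, avg {avg * 1000:.1f} ms")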

10 Test Data Generation: As test data is very important in ETL testing, appropriate test data needs to be generated. Depending on the volume of data required, test data can be generated using a test data generation tool or SQL scripts. As a best practice, the generated test data should resemble production data.

Data masking for test data generation – Data masking is the process of protecting sensitive personal information. Data is scrambled in such a way that sensitive information is hidden yet remains usable for testing without being exposed. A few data masking techniques, illustrated in the sketch after this list:

  • Randomization: Generate random data within the specified data range
  • Substitution: The data in the columns is replaced, completely or partially, with artificial records
  • Scrambling: The data type and size of the fields stay intact, but the records are scrambled
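
A minimal Python sketch of the three techniques; the field names, formats and value ranges are illustrative assumptions:

    # Minimal sketch of the three masking techniques above. Field names,
    # formats and ranges are illustrative.
    import random
    import string

    def randomize_age(low=18, high=90):
        # Randomization: random value within the specified range
        return random.randint(low, high)

    def substitute_name(_original):
        # Substitution: replace the real value with an artificial record
        return "Customer_" + "".join(random.choices(string.ascii_uppercase, k=6))

    def scramble(value):
        # Scrambling: same type and length, characters shuffled
        chars = list(value)
        random.shuffle(chars)
        return "".join(chars)

    if __name__ == "__main__":
        row = {"name": "Jane Smith", "age": 42, "card": "4111111111111111"}
        masked = {
            "name": substitute_name(row["name"]),
            "age": randomize_age(),
            "card": scramble(row["card"]),
        }
        print(masked)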

11 User Acceptance Testing: The objective of UAT is to ensure that all business requirements and rules are met from the business user's perspective, and that the system is acceptable to the customer.

BI Testing Diagram