Performance Testing – Assuring Speed, Scalability and Stability of Applications

Today we expect more from software than we used to, and this is the primary reason Performance testing has become so critical. Performance testing is a part of any organization's IT system, and it is a given that modern applications, regardless of usage volume, should undergo standard Performance testing. These tests reveal faulty assumptions about how applications handle high volume, confirm that framework scaling works as anticipated, and identify load-related defects. Performance testing's capacity to catch defects that appear only under high load helps improve applications at any scale.

It is surprising that organizations keep ignoring the significance of Performance testing, often deploying applications with little or no understanding of their performance. This mentality has changed little over recent years, and failures of high-profile software applications remain a regular occurrence.

In short, Performance testing should be a top priority for any organization before releasing software or an application.

Why Performance Testing?

Performance testing is used to check how well an application deals with user traffic. By running repeated scenarios against an application or site, it is possible to find breaking points and assess expected behaviour. In particular, Performance testing is performed to determine the reliability, speed, scalability, responsiveness, and stability of the software.

A team that is continually adding new features and bug fixes produces code changes that can influence how an application looks and functions on different devices and browsers, and can change how rapidly the application loads across machines.

This is why performance testing is so crucial to a well-rounded QA strategy: checking an application's performance and ensuring that consumers experience acceptable load times and site speed is foundational to high-quality software.

Importance of Performance Testing

1.     Quick functional flows matter

Every end user expects each transaction he or she makes to complete quickly. Performance testing plays a crucial role in verifying exactly this.

2.     Capacity Management

A performance test indicates whether the hardware or production configuration needs improvement before new software is released to a live environment.

3.     Software Health Check-up

A performance test helps check the health of any software, and gives inputs for additional fine-tuning.

4.     Quality Assurance

A performance test also reflects the quality of the code written during the development life cycle. It is a crucial input for identifying whether the development team needs additional training to write more fine-tuned code.

Now that you clearly know the importance of Performance testing, finding the bottleneck should be your next goal.

In a complex system built from many pieces, such as application servers, the network, and database servers, there is a high chance of running into a problem. Let us discuss the possible bottlenecks.

What are Bottlenecks?

Performance bottlenecks can cause an otherwise functional computer or server to slow to a crawl. The term “bottleneck” applies both to an overloaded network and to the state of a computing device in which two components cannot match each other's pace, slowing overall performance. Solving a bottleneck usually returns the system to operable performance levels; however, fixing a bottleneck first requires identifying the underperforming component.

Here are four common causes of bottlenecks

CPU Utilization

According to Microsoft, “processor bottlenecks occur when the processor is so busy that it cannot respond to requests for time.”  Simply put, these bottlenecks are a result of an overloaded CPU that is unable to perform tasks in a timely manner.

CPU bottlenecks appear in two forms:

  • a processor running at more than 80 percent capacity for a prolonged period, and
  • an excessively long processor queue

CPU bottlenecks regularly originate from insufficient system memory and constant interruptions from I/O devices. Resolving these issues typically involves adding CPU capacity, adding more RAM, and improving the efficiency of the code.
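
As a rough illustration of the first symptom, here is a minimal monitoring sketch; it assumes the third-party psutil package is installed, and the threshold and sampling window are arbitrary choices rather than fixed rules:

```python
# Minimal sketch: flag sustained high CPU utilization (assumes `pip install psutil`).
import psutil

THRESHOLD = 80.0   # percent, per the "80 percent for a prolonged period" rule of thumb
SAMPLES = 30       # number of 1-second samples to observe

def cpu_looks_bottlenecked() -> bool:
    """Return True if CPU stays above THRESHOLD for the whole observation window."""
    readings = [psutil.cpu_percent(interval=1) for _ in range(SAMPLES)]
    return min(readings) > THRESHOLD

if __name__ == "__main__":
    if cpu_looks_bottlenecked():
        print("Possible CPU bottleneck: utilization stayed above 80% for 30s")
    else:
        print("CPU utilization within normal range")
```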

Network Utilization

Network bottlenecks occur when the communication between two devices lacks the bandwidth or processing capacity to complete a task rapidly. According to Microsoft, “network bottlenecks occur when there is an overloaded server, an overburdened network communication device, and when the network itself loses integrity”. Solving network usage issues normally involves adding or upgrading servers, and upgrading network hardware such as hubs, routers, and access points.
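
To make the idea concrete, here is a minimal monitoring sketch, again assuming psutil is available; the link capacity constant is a stand-in value you would replace with the real figure for your network:

```python
# Minimal sketch: estimate network throughput and compare it with an assumed
# link capacity to spot saturation (assumes `pip install psutil`).
import time
import psutil

LINK_CAPACITY_MBPS = 100.0  # assumption: nominal capacity of the link under test

def current_throughput_mbps(window: float = 1.0) -> float:
    """Measure bytes sent + received over `window` seconds, in megabits/second."""
    before = psutil.net_io_counters()
    time.sleep(window)
    after = psutil.net_io_counters()
    delta = (after.bytes_sent - before.bytes_sent) + (after.bytes_recv - before.bytes_recv)
    return delta * 8 / 1_000_000 / window

if __name__ == "__main__":
    mbps = current_throughput_mbps()
    print(f"~{mbps:.1f} Mbps ({mbps / LINK_CAPACITY_MBPS:.0%} of assumed capacity)")
```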

Software Limitation

Performance problems often originate within the software itself. At times a program is designed to handle only a limited number of tasks at once, which makes it impossible for the program to use extra CPU or RAM resources even when they are available. Furthermore, a program may not be written to work with multiple CPU threads, and thus uses only a single core on a multicore processor.

These issues are resolved by rewriting and fixing the software.
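
As a sketch of such a rewrite, the example below contrasts a single-core design with one that spreads the same hypothetical workload across all available cores using Python's standard library:

```python
# Minimal sketch: rewriting a single-core loop to use all CPU cores.
# Purely illustrative; heavy_task is a stand-in for real application work.
from concurrent.futures import ProcessPoolExecutor

def heavy_task(n: int) -> int:
    """Stand-in for CPU-bound work the program must perform."""
    return sum(i * i for i in range(n))

def run_single_core(jobs):
    # Original design: tasks run one at a time, leaving other cores idle.
    return [heavy_task(n) for n in jobs]

def run_multi_core(jobs):
    # Rewritten design: tasks are distributed across all available cores.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(heavy_task, jobs))

if __name__ == "__main__":  # guard required for process pools on Windows
    jobs = [2_000_000] * 8
    assert run_single_core(jobs) == run_multi_core(jobs)
```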

Disk Usage

The slowest component inside a PC or server is generally long-term storage, which includes HDDs and SSDs, making it a near-inevitable bottleneck. Even the fastest long-term storage has physical speed limits, which makes this one of the most troublesome bottlenecks to investigate. In most cases, disk usage speed can be improved by reducing fragmentation and increasing data caching rates in RAM. At the physical level, insufficient bandwidth can be addressed by moving to faster storage devices and expanding RAID configurations.
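
As a small illustration of the caching remedy, the sketch below keeps recently read files in RAM so repeated reads never touch the disk; the cache size and file path are arbitrary assumptions:

```python
# Minimal sketch: cache repeated disk reads in RAM so the slow storage layer
# is hit only once per file (the file path shown is hypothetical).
from functools import lru_cache

@lru_cache(maxsize=128)  # keep up to 128 files' contents in RAM
def read_file(path: str) -> bytes:
    with open(path, "rb") as f:
        return f.read()

# First call reads from disk; subsequent calls for the same path are served
# from memory, reducing pressure on the disk bottleneck.
# data = read_file("/var/data/large_lookup_table.bin")
```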

High-level activities during Performance Testing

Test Coverage

Test coverage is about exercising all functionalities while conducting performance testing. The scenarios must be representative of different parameters, and you can automate key functionalities by assembling many scenarios. User data must be projected realistically, since many users will be using the system in their own context.

Non-Functional Requirements

Functional as well as non-functional requirements hold equal importance in performance testing. Functional requirements are far more specific, covering input data types, algorithms, and the functionality to be tested. The real challenge is identifying the less specific non-functional requirements, such as stability, capacity, usability, responsiveness, and interoperability.

Performance Test Analysis

Analysing performance test results is the most challenging and important task in performance testing. It requires detailed knowledge and good judgment to interpret reports and tools, and the tests must be updated regularly as the situation changes.
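
For instance, much of the analysis boils down to summarizing raw response times into percentiles. The sketch below assumes a hypothetical results.csv containing one response time in milliseconds per line:

```python
# Minimal sketch: summarize load-test response times into percentiles.
# Assumes a hypothetical results.csv with one response time (ms) per line.
import statistics

def summarize(times_ms):
    times = sorted(times_ms)
    q = statistics.quantiles(times, n=100)   # 99 percentile cut points
    return {
        "count": len(times),
        "mean_ms": statistics.fmean(times),
        "p90_ms": q[89],
        "p95_ms": q[94],
        "max_ms": times[-1],
    }

if __name__ == "__main__":
    with open("results.csv") as f:
        samples = [float(line) for line in f if line.strip()]
    for name, value in summarize(samples).items():
        print(f"{name}: {value:.1f}" if isinstance(value, float) else f"{name}: {value}")
```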

Conclusion

Proactive Performance testing efforts help customers get early feedback and assist in baselining application performance. This in turn drastically reduces the cost of fixing performance bottlenecks at later stages of development. It is always easier and less costly to redesign an application early in development than at a much later stage.

This also ensures that performance bottlenecks such as concurrency, CPU or memory utilization, and responsiveness are addressed early in the application life cycle.

Nitor excels at evaluating the performance of applications across different technologies and domains. It has well-defined processes and strategies for baselining application performance.

Nitor's TCoE has expert performance testers who are capable of executing performance engagements in close coordination with various stakeholders. Nitor performance testers are highly skilled in carrying out performance testing activities using open-source tools or the Microsoft tool set.

For more information, please contact marketing@nitroinfotech.com

User-aligned, context-driven testing

Often, we take best practices as a straightforward formula for success, and testing is no exception. However, following them blindly can end in dissatisfied end users, poor product quality, and a large investment of resources (cost, human effort, time).

Some of the symptoms of this best practice problem are:

  • Difference in expected vs actual product
  • Many issues at the acceptance and production level
  • Extension of deadlines
  • Poor quality

This gap stems from blind acceptance of best practices without understanding the context of the testing. Every best practice has its own drawbacks, so to achieve the greatest quality, testers sometimes have to adopt unconventional ways of testing, even setting the best practices aside. This applies across software testing services: agile testing, cross-browser testing, accessibility testing, quality assurance testing, compliance testing, performance testing, GUI testing, etc.

You may ask, then, what is it that a good tester should follow?

The answer is very simple: end user alignment and understanding the business context of the product.

With our 9+ years of testing expertise, Nitor has arrived at a proven success formula for higher quality (the definition of quality is, again, contextual to individuals).

For understanding the end user, Nitor follows a systematic approach which includes:

  • Collecting the demographics of the end user – including age, gender, education, country, location, and purpose of use
  • Defining user-centric test scenarios
  • Getting the understanding and scenarios verified by the customer (and, if allowed, even by end customers)
  • Test execution while adopting the end user's mindset

For understanding the business context of a product, we follow an approach which includes:

  • Understanding the nature of the product
  • Understanding which problem the product is trying to address
  • Understanding the integrations and dependencies
  • Understanding where the product fits within the customer's entire product range

This gives a better grasp of customer expectations and user alignment, ensuring that the number of defects decreases while the defects found are of higher value.

By using this user-aligned, context-driven testing approach, you get:

  • Better coverage
  • Time savings
  • Early detection of quality defects
  • A better delivered product

Top 5 security mistakes made by developers

As the world moves into the digital era, security is increasingly the primary concern of organizations across the globe.

Looking at current market trends, security testing is a grey area, and managing data, cost, and trust is a headache for businesses. World quality reports from various reputed organizations have found that 87% of organizations consider security important, and on a scale of 1 to 7, security scores 6.4 as a business priority.

Furthermore, hacking knows no boundaries. Going forward, attacks will become more sophisticated, but the methods will remain much the same, and the same mistakes will keep popping up, because we are all human, and humans make mistakes.

While performing security assessments, we found that certain gaps exist which can increase the chances of attacks on a particular application. These attacks can be avoided with a few precautionary measures on the development side. This article sheds light on some of the common mistakes made by developers.

  • Missing security during the design and requirements stages:

One of the software testing principles says to “start testing early” in the software development life cycle. The fact is that most attacks today target insecurely developed applications. Therefore, when planning an application, it is essential to implement security mechanisms, identify security-sensitive areas, and minimize security risks. Building a secure framework will not only help the developers but will also relieve the tester from catching security breaches at a later stage of development. In addition, it will definitely cut down the number of vulnerabilities introduced into the application.

  • OWASP Top 10 vulnerabilities being neglected:

In the programming world, neglecting the Open Web Application Security Project (OWASP) Top 10 vulnerabilities is probably the single biggest category of insecurity.

The OWASP Top Ten is a powerful awareness document for web application security. It is a sound practice to apply the OWASP Top Ten guidelines both to legacy pages and to new functionality as it is completed.

Even though the OWASP Top Ten is not the pinnacle of security testing, it is a good start, especially for organizations just beginning to implement security testing.

  • Lack of Security Awareness:

Leaving all security testing until the end of the SDLC, and allowing unauthorized entities to gain access to an app because developers were never taught to code securely, is the biggest mistake an organization can make. Moreover, most of the attacks in 2014-15 targeted victims through social engineering techniques. Hence, security awareness for coders as well as end users is mandatory.

  • Failing to Validate user Input and Output:

While a product is in development, validation of user input on both the client and server side is necessary. Secure coding helps eliminate critical data breach issues after release. Blacklisting and whitelisting user input/requests helps fight SQL injection (SQLi). Implementing validation might be time consuming, but it should be part of your standard coding practice and should never be ignored.
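
As a minimal sketch of these two defenses, the example below combines allow-list (whitelist) validation with a parameterized query; the table and column names are hypothetical:

```python
# Minimal sketch: allow-list input validation plus a parameterized query,
# the standard defense against SQL injection. Table/column names are hypothetical.
import re
import sqlite3

USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")  # allow-list: safe characters only

def find_user(conn: sqlite3.Connection, username: str):
    if not USERNAME_RE.fullmatch(username):
        raise ValueError("invalid username")  # reject before touching the database
    # The ? placeholder keeps user input out of the SQL text entirely.
    cur = conn.execute("SELECT id, username FROM users WHERE username = ?", (username,))
    return cur.fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
conn.execute("INSERT INTO users (username) VALUES ('alice')")
print(find_user(conn, "alice"))        # (1, 'alice')
# find_user(conn, "alice' OR '1'='1")  # rejected by the allow-list
```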

  • Underestimating the Threat:

Some websites hold no assets of value, for example credit cards or other confidential information. However, developers may not realize that such a site can still let an attacker successfully install malware. In these cases, the attacker is looking to exploit the trust users place in websites like this to increase the chances of infecting clients. A regular visitor to a neighborhood website may not think twice about installing a video codec when asked to do so by a popup.

Trust, therefore, is an important asset, and it is easily lost through a compromise like this.

All the issues listed above should be taken into consideration, because everyone involved in designing web applications has to understand these essential web security principles.

I hope I have managed to tickle your brain a little with this post and introduce a healthy dose of security vulnerability awareness among developers. As is rightly said, “Prevention is better than cure”.

A broad understanding of Mobile Compatibility Testing

With the volume of mobile applications hitting the market and new devices being launched rapidly, it is no longer enough to consider compatibility testing only for browser-based apps. Checking the compatibility of native and cross-platform apps, alongside cross-browser and platform compatibility testing, has become the need of the hour.

This blog elaborates upon the 7 major compatibility aspects (for mobile, web, and hybrid apps) that need to be assessed for mobile apps:

  • OS Compatibility
  • Screen Resolution Compatibility
  • Device Compatibility
  • Network Compatibility
  • Memory Compatibility
  • Processor Compatibility
  • Browser Compatibility

Now, let’s understand each one of these in detail.

OS Compatibility:

As Android and iOS both have at least one major release every year, apps under test need to be verified on at least the latest three versions: Marshmallow, Lollipop, and KitKat for Android, and iOS 9, iOS 8, and iOS 7 for iPhones.

Screen Resolution Compatibility:

Since devices come with different screen sizes and resolutions, a tester must evaluate apps at the specific targeted screen sizes and resolutions so that the app works efficiently and yields higher user satisfaction. For example, for the iPhone 5: 1136×640; for the 4S: 640×960; and for the 3GS: 320×480. For Android devices: small screens 426dp x 320dp, normal screens 470dp x 320dp, large screens 640dp x 480dp, and extra-large screens 960dp x 720dp.

Device Compatibility:

As every app cannot be tested on every device in the market, as testers we have to ensure that all the agreed devices have been used for testing the app. To do this, we may choose to get these details directly from the customer as a targeted devices list. If not, we can share the list of devices we are targeting so that, in case the customer needs any other device covered, the issue can be raised in advance.

Network Compatibility:

Mobile apps can be consumed across the globe. However, if an app is targeted at a particular geographical area, its performance and functionality need to be evaluated over all the network options available there, such as 2G, 3G, 4G, Wi-Fi, and broadband connections. For native apps, cross-platform apps, and widgets, the testing should start from installation from the respective app store.

Memory Compatibility:

With camera upgrades and increasing digitization, data volumes have grown significantly across the globe. To keep up, devices with large RAM and ROM are being offered in the market. Compatibility testing must ensure that the app works under both extremes of memory. For the first test, we need devices with limited RAM and ROM; for the second, we occupy the RAM with parallel executions and fill the ROM by dumping data.

Processor Compatibility:

Modern processors in handheld devices are as competent as those in desktops and laptops. Because of this, evaluating an app's performance against different processors becomes a key consideration for a tester.

Browser Compatibility:

Usually, when we talk about compatibility, we consider only browser compatibility. In fact, it applies to mobile web apps alone, yet it should not be underestimated, as there are more than 15 reputed mobile browsers. To cover this, you may follow one of two approaches:

  • Browser usage statistics: if you do not have a list of targeted browsers, use browser usage statistics from a reputed source and test the mobile app against the top 5.
  • Customer requirement: test on all the browsers that the customer has requested.

In a nutshell, to give an idea of compatibility, here is a sample compatibility matrix for a web application targeted at the iPhone and iPad.

[Figure: sample compatibility matrix for iPhone and iPad]

What, Why & How of White Box Testing!

White Box testing, a quality assurance methodology in software testing, focuses on evaluating the internal workings of a system. A White Box tester should gain knowledge of the system's internals and understand how the system is implemented. Once the tester understands the internal structure, he or she can use that knowledge to develop test cases for data flow, control flow, information flow, error handling, exceptions, and even the coding practices implemented in the system.

Why White Box testing?

White Box testing is carried out to check:

  • Security holes in the code
  • Broken or incomplete paths in the code
  • Flow of the structure mentioned in the specification document
  • All conditional loops in the code to verify the complete functionality of the application
  • The code, line by line or section by section, to provide thorough coverage

White Box Testing Tools and Techniques

White Box testing is a methodical approach to testing the internals of a system, following the set of activities below:

  • Understand the system: A White Box tester must carefully go through the requirement document and understand both the functional requirements, such as how data processing is handled, and the non-functional requirements, such as responsiveness.
  • Analyze the system: A White Box tester analyzes the technical design and implementation of the system by reviewing the technical design and architecture documents.
  • Test Design: During test design, a White Box tester uses the understanding of the system's functional, non-functional, and technical requirements to create effective test designs, using static and dynamic White Box test design techniques (a minimal sketch follows this list).
  • Test Implementation: In this stage, a White Box tester makes use of proven White Box testing frameworks to implement the White Box test cases.
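
The following is a minimal, hypothetical sketch of white box test design: because the tester knows the function's internal branches, one test case is written per path:

```python
# Minimal sketch: white box test design for a hypothetical function.
# Knowing the internal branches lets the tester write one case per path.
import unittest

def shipping_fee(weight_kg: float) -> float:
    if weight_kg <= 0:
        raise ValueError("weight must be positive")   # error-handling path
    if weight_kg <= 1.0:
        return 5.0                                    # light-parcel branch
    return 5.0 + (weight_kg - 1.0) * 2.0              # heavy-parcel branch

class ShippingFeeBranchTests(unittest.TestCase):
    def test_error_branch(self):
        with self.assertRaises(ValueError):
            shipping_fee(0)

    def test_light_parcel_branch(self):
        self.assertEqual(shipping_fee(0.5), 5.0)

    def test_heavy_parcel_branch(self):
        self.assertEqual(shipping_fee(3.0), 9.0)

if __name__ == "__main__":
    unittest.main()
```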

Benefits of White Box testing:

Some  important benefits of White Box testing are listed below:

  • One of the major benefits of White Box testing is the time and cost saved by finding defects that would otherwise require waiting for a Black Box tester to find them, which can happen only after the implementation is ready and deployed in the test environment.
  • White Box testing not only shows the presence of the defect but also helps in locating the lines of code that caused the defect.
  • White Box testing helps in optimization of code.
  • White Box testing detects errors that can crop up due to “hidden” code.
  • Deciding on which type of input/data should be utilized in testing the application effectively can be easily deduced, as the tester has the knowledge of the internal coding structure.
  • It facilitates in removing the extra lines of code which can otherwise cause hidden defects.

EDI Testing using BizTalk Integration Engine

To implement interoperability, enterprises are adopting integration gateways that allow seamless data exchange within or between organizations. The BizTalk integration engine is industry-leading integration software that helps organizations implement interoperability. BizTalk web services, including BizTalk Server and BizTalk EDI applications, are being extensively discussed in the industry today.

Data exchange occurs in various formats. Communication formats common in healthcare organizations include Health Level 7 (HL7 files), Electronic Data Interchange (EDI files), and custom formats (txt/XML files), according to the organization's needs. Testing these various formats with the BizTalk integration engine is the challenge; moreover, different formats require different testing approaches. This blog explores EDI testing using the BizTalk integration engine.

Typical challenges associated with  this are:

  1. Test data generation
  2. Data integrity
  3. Test environment availability

To carry out EDI testing, two complementary approaches define the testing strategy:

  1. EDI Testing – format testing, data testing
  2. Integration Testing – BizTalk integration engine implementation

1 EDI Data Testing

The testing strategy should include the testing types mentioned below:

2 Application Integration Testing – BizTalk

The Application Integration Testing strategy focuses on business requirements and end-to-end testing of the applications/systems involved in the integration. Listed below are high-level points that this testing should cover:

  1. BizTalk provides separate test environment functionality, which allows QA to conduct testing independently in a test environment.
  2. Testing scenarios should focus on business rule validation and verification.
  3. Data exchange between/among systems
  4. Test scenarios across BAM (Business Activity Monitoring)
  5. Test manual retry of orchestrations for suspended messages
  6. Test interruptible orchestrations by sending an interruption message
  7. BPM scenarios focusing on data patterns – in our case EDI 837I, 837P, and EDI 834
  8. Security testing as per the compliance rules laid out by HIPAA

To ensure thorough testing coverage, the recommended strategy is a “hybrid” one combining EDI testing and Application Integration testing. Application Integration testing ensures that data exchange is correct and there is no impact on business processing, while EDI testing ensures that no data is lost and no junk/unwanted data is processed during the exchange.
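
As one concrete example of an EDI-level data check, the sketch below verifies X12 segment-count integrity: the SE segment must declare the number of segments from ST through SE inclusive. The sample message is hypothetical and heavily trimmed:

```python
# Minimal sketch of one EDI-level data check: in an X12 transaction set, the
# SE segment's first element must equal the number of segments from ST through
# SE inclusive. The sample message below is hypothetical and heavily trimmed.
def validate_segment_count(x12: str, terminator: str = "~") -> bool:
    segments = [s for s in x12.strip().split(terminator) if s]
    start = next(i for i, s in enumerate(segments) if s.startswith("ST*"))
    end = next(i for i, s in enumerate(segments) if s.startswith("SE*"))
    declared = int(segments[end].split("*")[1])
    actual = end - start + 1
    return declared == actual

sample = "ISA*...~GS*...~ST*837*0001~BHT*0019*00*123~SE*3*0001~GE*...~IEA*...~"
print(validate_segment_count(sample))  # True: ST, BHT, SE = 3 segments
```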

Why Test Automation in an Agile Project?

Test automation is a critical component of agile testing, as there are frequent releases and short sprint cycles. In an agile project, a new feature is added and integrated with existing features in every sprint. Hence, it is very important to run regression tests across all features to ensure that all components work as expected, and this is realistically possible only through test automation.

Need for Test Automation in an Agile Project

  • Short release cycles: In the Agile methodology, every sprint has a few features that need to be delivered to the customer within a limited timeframe.
  • Maximum test coverage: It is very important to test every feature of the application to ensure that it works according to the defined business requirements.
  • Continuous Integration: In Agile, it is important to automate the testing process from build deployment to reporting, as this helps share execution results quickly.
  • Quick turnaround time: In an Agile project, it is very important to share quick feedback on build stability, so that the team can take decisions based on the feedback shared by the QA team.
  • Reusability: Execute the same test cases on multiple platforms and browsers.
  • Higher ROI: Test automation helps reduce manual test execution cycle timelines; as a result, ROI increases.
  • Maintain an automated regression pack at the product level.

Test Automation Approach in an Agile Project:

Adapt Changes: Understand the new enhancements or features to be implemented in every sprint and plan for the test case design or update accordingly.

Implement Changes: Implement the requested enhancements or changes and integrate the same in the existing product.

Maintain Test Cases: Document the test cases for new enhancements and features based on the application. Also, conduct the impact analysis due to new enhancements and identify the regression test cases to test the impacted areas.

Test Case Automation: Automate the test cases for new features or changes and integrate the same with the existing automated suite. Also, make sure that every identified defect has automated test cases.

Daily/Weekly Automated Test Scripts Execution: Execute the automated test scripts based on the identified regression scope. Test scripts will be executed in defined execution cycles, which may be daily or weekly.

The parameters below should be considered while implementing test automation in an Agile project (a minimal sketch of cross-browser reuse follows the list):

  • Business critical functionality: Identify the key functional test cases for test automation
  • Functions that are used frequently by many users
  • Configuration testing in which tests will be run with different configurations
  • Test cases which will be run several times with different test data or conditions
  • Test cases which will be run on multiple platforms and browsers
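
As an illustration of reusability across browsers, here is a minimal sketch using pytest and Selenium (both assumed to be installed, with browser drivers on the PATH); the URL and expected title are placeholders:

```python
# Minimal sketch: one automated test case reused across browsers with pytest.
# Assumes `pip install pytest selenium` and browser drivers on the PATH;
# the URL and title below are hypothetical.
import pytest
from selenium import webdriver

BROWSERS = {
    "chrome": webdriver.Chrome,
    "firefox": webdriver.Firefox,
}

@pytest.fixture(params=sorted(BROWSERS))
def browser(request):
    driver = BROWSERS[request.param]()  # launch the browser for this run
    yield driver
    driver.quit()

def test_home_page_title(browser):
    browser.get("https://example.com/")
    assert "Example" in browser.title
```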

Benefits of Test Automation in Agile Projects:

  • Improve productivity and speed in sprints: Automated testing frees the tester to concentrate on exploratory testing, since automated tests can run in parallel with manual testing.
  • Reusability: The same scripts can be run multiple times on multiple platforms and reused in subsequent sprints.
  • Reduce the cost of testing: Manual testing efforts reduce significantly.
  • Reduce time spent in regression testing: Automated regression testing is much faster than the manual process.

Top 11 Essential Considerations for Performing ETL Testing

ETL Testing is a crucial part of data warehouse systems. It involves performing end-to-end testing of a data warehouse application. Below is the description of each important phase of the ETL testing process.

1 Requirements Testing: The objective of requirements testing is to ensure that all defined business requirements match the business users' expectations. During requirements testing, the testing team should analyze the business requirements for testability and completeness. The pointers listed below should be considered during requirements testing:

  • Verification of the logical data model against design documents
  • Verification of many-to-many attribute relationships
  • Verification of the types of keys used
  • All transformation rules must be clearly specified
  • Target data types must be specified in the data model or design document
  • The purpose and overview of the reports must be clearly specified
  • Report designs should be available
  • All report details, such as grouping, parameters to be used, and filters, should be specified
  • Technical definitions, such as data definitions and details about the tables and fields to be used in reports
  • All details for headers, footers, and column headings must be clearly specified
  • Data sources and parameter names and values must be clearly specified
  • Verification of technical mapping in terms of report name, table name, column name, and a description of each report must be documented

2 Data Model Testing: The objective of this testing is to ensure that the physical model is in accordance with the logical data model. The activities below should be performed during this testing:

  • Verification of the logical data model as per design documents
  • Verification of all entity relationships as mentioned in the design document
  • All attributes and keys must be clearly defined
  • Ensure that the model captures all requirements
  • Ensure that the design and the actual physical model are in sync
  • Ensure naming conventions are followed
  • Perform schema verification
  • Ensure that the table structure, keys, and relationships are implemented in the physical model as per the logical model
  • Validation of indexes and partitioning

3 Unit Testing: The objective of unit testing is to validate whether the implemented component functions as per the design specifications and business requirements. It involves testing business transformation rules, error conditions, and the mapping of fields at the staging and core levels. The pointers listed below should be considered during unit testing (a minimal sketch of the count check follows the list):

  • All transformation logic works as designed from source to target
  • Surrogate keys have been generated properly
  • NULL values have been populated where expected
  • Rejects have occurred where expected, and a reject log is created with sufficient detail
  • Auditing is done properly
  • All source data that is expected to be loaded into the target actually is loaded: compare counts between source and target
  • All fields are loaded with full contents, i.e. no data field is truncated while transforming
  • Data integrity constraints are implemented
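
As a minimal sketch of the source-to-target count check above, the example below uses sqlite3 with hypothetical staging and warehouse table names:

```python
# Minimal sketch: compare row counts between a source (staging) table and a
# target (warehouse) table. Table names here are hypothetical and must be
# trusted values, since they are interpolated into the SQL text.
import sqlite3

def counts_match(conn: sqlite3.Connection, source: str, target: str) -> bool:
    src = conn.execute(f"SELECT COUNT(*) FROM {source}").fetchone()[0]
    tgt = conn.execute(f"SELECT COUNT(*) FROM {target}").fetchone()[0]
    return src == tgt

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stg_orders (id INTEGER);
    CREATE TABLE dw_orders (id INTEGER);
    INSERT INTO stg_orders VALUES (1), (2), (3);
    INSERT INTO dw_orders VALUES (1), (2), (3);
""")
assert counts_match(conn, "stg_orders", "dw_orders")
```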

4 System Integration Testing: Once unit testing is done and all of its exit criteria are met, the next phase is integration testing. The objective of integration testing is to ensure that all integrated components work as expected. The data warehouse application must be compatible with upstream and downstream flows, and all ETL components should execute on the correct schedule and with the correct dependencies. The pointers listed below should be considered during integration testing:

  • ETL packages with Initial Load
  • ETL packages with Incremental Load
  • Executing ETL packages in sequential manner
  • Handling of rejected records
  • Exception handling verification
  • Error logging

5 Data Validation Testing: The objective of this testing is to ensure that the data flowing through the ETL phases is correct and cleansed as per the applied business rules. The pointers listed below should be considered during data validation testing:

  • Data comparison between source and target
  • Data flow as per business logic
  • Data type mismatch
  • Source to target row count validation
  • Data duplication
  • Data correctness
  • Data completeness

6 Security Testing: The objective of this testing is to ensure that only authorized users can access the reports, as per their assigned privileges. While performing security testing, the aspects below should be considered:

  • Unauthorized user access
  • Role based access to the reports

7 Report Testing: The objective of report testing is to ensure that BI reports meet all the functional requirements defined in the business requirement document. While performing report testing, the aspects below should be considered:

  • Report drill down, drill up and drill through
  • Report navigation and embedded links
  • Filters
  • Sorting
  • Export functionality
  • Report dashboard
  • Dependent reports

  • Verify the report runs with a broad variety of parameter values and in whatever way users will receive the report (e.g. a subscription runs and deploys the report as desired)
  • Verify that the expected data is returned
  • Verify that the performance of the report is within an acceptable range
  • Report data validation (correctness, completeness, and integrity)
  • Verify the required security implementation
  • Automating processes wherever possible will save a tremendous amount of time
  • Verify that the business rules have been met

8 Regression Testing: The objective of regression testing is to keep existing functionality intact each time new code is developed for a new feature or existing code is changed to correct defects. Prior to regression testing, an impact analysis must be carried out in coordination with the developers to determine the impacted functional areas of the application. Ideally, 100% regression is recommended for each drop/build. If builds are too frequent and test execution time is limited, regression should be planned based on the priority of the test cases.

9 Performance Testing: The objective of performance testing is to ensure that reports, and the data on them, load as per the defined non-functional requirements. Performance testing includes different types of tests, such as load tests, stress tests, and volume tests. While executing performance testing, the aspects below should be considered:

  • Compare the SQL query execution time on the report UI and against the backend data
  • Concurrent access to the reports by multiple users
  • Report rendering with multiple filters applied
  • Load a high volume of production-like data and check whether the ETL process completes in the expected timeframe
  • Validate OLAP system performance by browsing the cube with multiple options
  • Analyze the maximum user load, at peak and off-peak times, that can access and process BI reports

10 Test Data Generation: As test data is very important in ETL testing, appropriate test data needs to be generated. Depending on the volume of data required, test data can be generated using a test data generation tool or SQL scripts. As a best practice, the generated test data should resemble production data.

Data masking for test data generation: Data masking is the process of protecting personally sensitive information. Data is scrambled in such a way that sensitive information is hidden yet remains usable for testing without being exposed. A few data masking techniques (a small sketch follows the list):

  • Randomization: generate random data within the specified data range
  • Substitution: the data in selected columns is replaced, completely or partially, with artificial records
  • Scrambling: the data type and size of the fields stay intact, but the records are scrambled
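
As a small illustration, the sketch below applies two of these techniques, substitution and scrambling, to a hypothetical record:

```python
# Minimal sketch of two masking techniques from the list above; the record
# layout is hypothetical.
import random

def substitute_email(email: str) -> str:
    """Substitution: replace the local part with an artificial value."""
    _, _, domain = email.partition("@")
    return f"user{random.randint(1000, 9999)}@{domain}"

def scramble(value: str) -> str:
    """Scrambling: keep type and length intact, shuffle the characters."""
    chars = list(value)
    random.shuffle(chars)
    return "".join(chars)

record = {"name": "Alice Smith", "email": "alice@example.com", "card": "4111111111111111"}
masked = {"name": scramble(record["name"]),
          "email": substitute_email(record["email"]),
          "card": scramble(record["card"])}
print(masked)
```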

11 User Acceptance Testing: The objective of UAT is to ensure that all business requirements and rules are met from the business user's perspective, and that the system is acceptable to the customer.

[Figure: BI testing diagram]

Consolidation of Computer System Validation SOPs

It is critical for every regulated company to comply with the data integrity and product quality requirements set by the FDA, and to do so cost-effectively; this is usually called cost-effective validation. It results in a reduced number of 483 observations and, most importantly, keeps the product in the market.

These days, enhanced product quality needs, technological considerations, and the drive to control production costs spur medical and pharmaceutical companies to adopt a multi-site development model. In a highly regulated industry, this raises many questions about data security and integrity. The overall product development and operations model involves USFDA and other regional regulatory compliance, and governing and maintaining all the SOPs involved in this process carries a huge cost.

It becomes a major challenge for all medical device and pharma companies to consolidate their Computer System Validation SOPs. A focus on integrating, optimizing, and standardizing Computer System Validation processes across the globe is required to achieve a high standard of regulatory compliance and to succeed in the current challenging market scenario.