This blog is a series of articles about performance/load/stress testing. It covers all the phases of a performance testing cycle, from requirement gathering to bottleneck identification and performance re-engineering. We will also try to cover multiple tools (JMeter, LoadRunner, RPT, VSTS, SoapUI, etc.) and technologies (like SOA, .NET, Java, etc.). More posts will be uploaded soon :)


Saturday, December 20, 2014

JMeter Tips and Tricks



JMeter Best Practices



Here are some JMeter tips that will make your life easier while working with the tool.

Generic Tips 


  • Use the latest version of JMeter, as each new version includes performance enhancements.
  • If JMeter crashes while running a load test with an "out of memory" error (the error can be seen in the jmeter.log file located in the 'bin' folder):
    • To resolve this you may need to tweak the heap size settings (depending on the configuration of the load-generating machine).
    • The heap setting can be modified in jmeter.bat (for Windows) or jmeter.sh (for Linux), e.g. "set HEAP=-Xms512m -Xmx1024m"
Below is the list of protocols supported by JMeter:
  • Web: HTTP and HTTPS sites, both 'web 1.0' and web 2.0 (AJAX, Flex and Flex-WS-AMF)
  • Web services: SOAP / XML-RPC
  • Database via JDBC drivers
  • Directory: LDAP
  • Messaging-oriented services via JMS
  • Mail: POP3, IMAP, SMTP
  • FTP service

Tips relevant while preparing the scripts


  • Add a 'View Results Tree' listener during recording as a child of the 'HTTP Proxy Server' (renamed 'HTTP(S) Test Script Recorder' in newer versions).

  This lets you see the page view of each response received from the server during script recording.




  • Add a numeric prefix to sampler names - helps in script debugging.

   In long scripts this simplifies mapping a problem request in the test plan to the failing request sampler in the Tree view.
Open the jmeter.properties file and set the property "proxy.number.requests" to true.
The properties file is present in the 'bin' folder of JMeter.
This is the text you have to modify in the properties file:
# Add numeric prefix to Sampler names (default false)
proxy.number.requests=true


  • Add a filter for the types of requests to be captured during recording. [This will reduce the size of your scripts and simplify debugging.]

JMeter allows us to define which types of requests to capture (like .jsp, .asp, .php, .html or the like).
For example, to capture .jsp requests, enter ".*\.jsp" as an "Include Pattern".
If you are not sure which requests to capture, you can instead specify the types of requests that should not be captured (like .jpg, .jpeg, .gif).
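To illustrate how such include/exclude patterns behave, here is a small Python sketch. It only mimics the recorder's filtering logic (JMeter itself matches the patterns as regular expressions against the full request URL); the patterns and the helper function are illustrative, not JMeter's actual implementation.

```python
import re

# Example patterns in JMeter's regex style -- chosen for illustration only
INCLUDE = [r".*\.jsp", r".*\.html"]                           # dynamic pages to capture
EXCLUDE = [r".*\.jpg", r".*\.jpeg", r".*\.gif", r".*\.css"]   # static assets to skip

def should_record(url: str) -> bool:
    """A URL is recorded if it matches an include pattern (when any are
    defined) and matches no exclude pattern."""
    if INCLUDE and not any(re.fullmatch(p, url) for p in INCLUDE):
        return False
    return not any(re.fullmatch(p, url) for p in EXCLUDE)
```

A quick check: a .jsp page passes the filter, while images and stylesheets are dropped, keeping the recorded script small.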


  • Make sure that "Retrieve all embedded resources" is checked in the HTTP Request sampler while running the test, so that proper load is simulated on the servers.

If this option is not checked, JMeter will not request the secondary objects in the web page (images, .js, .css etc.) during test execution.


  • Go through the scoping rules to make sure that your scenario runs as expected.

For example, a Timer is executed before its parent sampler.


Tips relevant while running the tests


  • While simulating a large number of users, invoke multiple instances of JMeter (the number of threads you can run from a single instance depends on the configuration of the machines used to run JMeter).


  • To reduce the resource consumption of the load generator machine, JMeter should be run in non-GUI mode.

A sample command to run JMeter in non-GUI mode is
jmeter -n -t test.jmx -l test.jtl
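The non-GUI command can also be scripted, which helps when starting the multiple instances suggested earlier in this section. Below is a minimal Python sketch; the results file names and the assumption that `jmeter` is on the PATH are illustrative, not prescriptive.

```python
import subprocess  # only needed if you actually launch the processes

def jmeter_cmd(test_plan: str, results_file: str) -> list:
    # -n: non-GUI mode, -t: test plan (.jmx), -l: results log (.jtl)
    return ["jmeter", "-n", "-t", test_plan, "-l", results_file]

def launch_instances(test_plan: str, count: int) -> list:
    """Build one command per JMeter instance, each writing its own .jtl file."""
    return [jmeter_cmd(test_plan, f"results_{i}.jtl") for i in range(1, count + 1)]

# To actually start the instances (requires jmeter on the PATH):
# procs = [subprocess.Popen(cmd) for cmd in launch_instances("test.jmx", 3)]
```

Merging the per-instance .jtl files afterwards is left to your reporting step.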

  • Disable the "View Results Tree" listener to reduce memory consumption while running the test.



  • To capture error responses during test execution:

Either add a "View Results Tree" listener with "Errors" checked,
or
use the "Save Responses to a file" listener with its errors-only option checked.


  • Do not include multiple listeners (including graph listeners) in the test plan.

Save results to a .jtl file during test execution and view the graphs etc. by loading the .jtl into the test plan once the test is complete.

Friday, April 4, 2014

Types of Performance Testing





Performance testing basically revolves around three "S"s: Speed, Stability and Scalability.

Speed: Speed can be simply termed as how fast the application is performing.
To test speed we perform Load tests, Volume tests and Spike tests.

Stability: An application is said to be stable if
a) the application's performance does not degrade with time, and
b) in case of a breakdown (which should happen only exceptionally), no data is lost and the application easily recovers to its original state without any code changes.
To test stability we perform Endurance (soak) tests and Stress tests.

Scalability: Scalability is the load-handling capacity of the application at an increased user load.

Scalability tests answer this question:
will my application be able to handle the increased user load after, say, 5 or 10 years?
Scalability tests provide the statistics that help in projecting the server hardware required in the future to handle the increased user load.

To check scalability, Capacity tests are performed.


========================================================================

Now let us talk about the various terms mentioned above.

Load Test: Load tests are performed at the normal (expected*) user load to get a clear picture of the application's performance once it is made available to the target audience.

* The normal or expected user load is the load projected by the business team (it should be realistic).

Volume Tests: Volume tests are related to the volume of data (i.e. the number of records fetched, displayed or processed).

  Below are some of the situations where Volume tests should be performed:
  • Batch process testing: the result for a batch process could be something like "the daily batch takes 15 minutes to process 10,000 records and 1 hour to process 30,000 records". (Here the decrease in the rate of record processing may be due to a performance bottleneck and may require a root cause analysis.)
  • Generating/displaying a report: the statistics to capture would be how much time the application/process takes to download/display the report with 1,000 records, 5,000 records, etc. (based on the expected volume of records).
  • Database synch process: consider a component that fetches data from a front-end system's DB and inserts it into the corresponding fields of the back-end database.
           How often this synch should be performed can be answered based on statistics like how much time it takes to synchronize, say, 100, 500, 1,000 or 5,000 records.
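The batch-process example above boils down to comparing processing rates. A short Python sketch of the arithmetic, using the figures from the bullet:

```python
def throughput_per_min(records: int, minutes: float) -> float:
    """Processing rate of a batch run, in records per minute."""
    return records / minutes

# Figures from the batch-process example above
small_run = throughput_per_min(10_000, 15)   # ~666.7 records/min
large_run = throughput_per_min(30_000, 60)   # 500.0 records/min

# A falling rate at higher volume hints at a bottleneck worth a root cause analysis
degradation = (small_run - large_run) / small_run * 100   # 25% slower per record
```

A drop like this 25% is exactly the kind of signal a volume test is meant to surface.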


Spike Tests: Spike tests are performed to check the application's performance under short spikes of user load increased beyond the expected values.
These tests make sure the application will be able to handle sudden bursts of users.


Endurance (Soak) Tests: When load tests are run for a long duration (anywhere from 5 hours to 100 hours or more), they are called soak tests.
Once the application goes live it is expected to perform consistently for many months or years; soak tests are important because they provide the statistics to say whether the application's performance will degrade with time.


Stress Test: Stress tests are performed to test the application's performance at user loads much beyond the expected user load. These tests provide statistics about the breakpoint of the application and how its performance degrades under exceptionally high user load.
The expectation is that the application degrades gracefully even at very high user loads.

The best case is that even during exceptionally high user load, rather than crashing, the application gives proper error messages (e.g. "we are currently experiencing high load, please try again after some time") and keeps responding to some minimum number of users.


Capacity test: The dictionary meaning of capacity is "the maximum amount that something can contain".
Hence capacity tests are aimed at assessing the load-handling capacity of the current hardware and capturing resource consumption statistics in such a way that the hardware requirements for an increased user load can be projected.

Let us take one example of how hardware requirements can be projected.

Assume the current server's hard disk is specified to handle 400 IOps and the application consumes 200 IOps when 100 transactions are performed per second, implying 2 IOps per transaction.
This leads to the conclusion that to sustain 400 transactions/sec (800 IOps) we would require one more hard disk of the same capacity, i.e. 400 IOps.
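The projection above is simple linear scaling; here is a minimal Python sketch of the arithmetic (the assumption that disk I/O scales linearly with transaction rate is the simplification the example itself makes):

```python
def projected_iops(current_iops: float, current_tps: float, target_tps: float) -> float:
    """Scale the measured disk I/O linearly with the transaction rate."""
    iops_per_txn = current_iops / current_tps     # 200 / 100 = 2 IOps per transaction
    return iops_per_txn * target_tps

# Figures from the example above
need = projected_iops(current_iops=200, current_tps=100, target_tps=400)  # 800 IOps
disks = -(-need // 400)   # ceiling division by one disk's capacity (400 IOps) -> 2 disks
```

With one disk already in place, the projection says one more 400-IOps disk is required.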




Wednesday, February 26, 2014

Requirement Gathering



Capturing the client's needs for a performance testing project



Step 1: It is very important to know the application's communication protocol, because it is a key factor in finalizing the tool to be used for the activity.

Most web-based applications are built to work with HTTP/S; nowadays SOA is also getting very popular, so you may need to go for web service (API) testing.

Other things worth knowing (these will be very important in case of bottleneck identification/performance re-engineering):


  • The technology used for application development (like .NET, PHP, Java/J2EE etc.)
  • The database used (like MS SQL Server, Oracle, MySQL, PostgreSQL) and the platform it is deployed on (Windows, Linux etc.)
  • The application/web server (like IIS, Apache, Tomcat, WebLogic, WebSphere) and the platform it is deployed on (Windows, Linux etc.)
  • Other components used in the application architecture (like load balancers, MongoDB, Solr search etc.)
  • Whether encryption is used for sending and/or receiving the request/response.
  • Whether a Captcha is used in any of the in-scope workflows (a properly implemented Captcha cannot be automated, so you need to ask for it to be removed/disabled during testing).
  • Any batch process/cron/scheduler that is to be considered.


Do ask for the versions of the components wherever applicable.
Do not assume; ask the development team and get a confirmation.

For HTTP/S or web service testing, the popular tools (the tools I am aware of :) ) are Apache JMeter (open source), HP LoadRunner (commercial, very costly), Neotys NeoLoad (commercial, medium cost), IBM RPT i.e. Rational Performance Tester (commercial, medium cost), and VSTS i.e. Microsoft Visual Studio (commercial; the cost is often offset because the development team may already have the licenses).
Other tools used in the industry are OpenSTA (open source) and Grinder (open source).

Step 2: Understand the client's aim in getting this activity done. This will help you bridge the gap between what the client expects and what you deliver.

The image below depicts the understanding gaps.

Gaps in Requirement Gathering

Broadly, there are three possible situations (please add a comment if you can share more):

a) The application is live and some issue is hampering its performance on the live (production) environment.

Go for a Performance Bottleneck Identification and Performance Benchmarking exercise.

The aim should be to make sure that the scenario where the issue occurs is replicated while testing.
Here you need to be very careful in analyzing the problem present on the live servers:
is it during the high-user-load period, is there a conflict between two activities accessing the same resources (like DB tables), or is it due to some batch process (crons, schedulers)?


Quick fix:
If the situation is very grave, ask the team owning the servers to monitor hardware resource consumption (CPU, memory, disk I/O) on the server machines.
Ask them to increase the hardware temporarily to overcome the urgent situation (on a cloud-based environment this is very easy, while with physical servers they can even go for rented servers until application tuning completes).

b) The application is ready for go-live and the client wants to check how much load it will be able to handle [to make the go-live / no-go-live decision].

Go for a Performance Benchmarking exercise.
This is a very simple activity where you need to measure the current load-handling capacity (based on response time SLAs) and the breakpoint of the application.

The most important thing will be to capture the expected user load; we will discuss the user load model later.

c) A product is being developed and the client wants to make sure there will be no performance issues when they sell it to their customers [this is mostly true for applications with high transaction volumes, as in the e-commerce, travel or telecom domains].

Go for a Performance Bottleneck Identification and Performance Re-Engineering exercise (and finally do provide the performance benchmarks).
In this situation you need to have sound technical knowledge, or support from technical experts, to identify the issues and resolve them.



Step 3: The next step is to identify the business scenarios to be considered for performance testing. (This helps you get the flows prioritized, as testing the whole application is almost impossible.)
Below are some of the factors to keep in mind while deciding whether a flow should be considered:

a) Business flows on which a high number of users are expected to work simultaneously.

b) Business flows where the volume (number of records processed or displayed) is high, like report generation.

c) Business flows having high visibility (the dashboard seen by the CEO of the company should not be slow).

Step 4: Prepare a user load model.
A properly prepared user load model will give you an exact idea of how users will use the live application; it is also a checkpoint to verify that the scenarios identified in Step 3 are properly decided.

For starters, you can use this very simplified table to prepare the workload model.

Sample Workload Model
 * % user distribution = (expected active users during peak hour for that flow / total expected users) * 100
e.g. % user distribution for Login and Logout = (100/302) * 100 = 33.11%
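The formula can be applied to a whole table of flows at once. Here is a small Python sketch; only the Login and Logout figure (100 of 302 users) comes from the example, while the other flow names and numbers are hypothetical.

```python
def user_distribution(peak_users_per_flow: dict) -> dict:
    """% user distribution = (peak active users for the flow / total users) * 100."""
    total = sum(peak_users_per_flow.values())
    return {flow: users / total * 100 for flow, users in peak_users_per_flow.items()}

# Hypothetical workload: 100 + 150 + 52 = 302 total expected users
flows = {"Login and Logout": 100, "Search": 150, "Checkout": 52}
shares = user_distribution(flows)   # Login and Logout -> ~33.11%
```

The percentages always sum to 100, which is a handy sanity check on the workload table.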

This matrix gives you the real-life usage scenario. The importance of capturing these statistics correctly is very high because:

If the statistics captured are much higher than the actual usage, you may end up procuring very large server hardware of which only a small portion gets consumed after go-live.
On the other side, if the targets set are too low, users will start complaining of slowness on the first day itself (and if the gap is too large, your site may even go down).


An important factor to keep in mind is the database size: the relevant tables should have at least the expected number of records stored in them.

How to capture the statistics for the user load model

The marketing/product team should know the target audience as well as the targeted reach of the application in the product roadmap [from them you can get information like "we are expecting to reach 6,000 potential users in the next 3 months"],
and the business analysts can help you get the expected breakdown of users across the flows.

Saturday, February 22, 2014

Purpose of Performance Testing



What is the aim of Performance Testing?

Confused about the jargon?

Normally people are of the view that performance/load testing is about:
a) running a load testing tool to see the response times, and the work is done, or
b) (the sophisticated lot :) ) capturing a lot of statistics that require an Einstein's mind to analyse.

The real purpose of a performance tester is to enhance the end user's experience and to balance it with the business; we need to keep in mind that we have to work towards "providing the best user experience with minimum hardware (server configurations etc.)".

A brilliant example of how user experience can do wonders is Google Search. Before Google arrived, people used to "search" the internet; now they "Google it".
Google has become synonymous with searching because of its user experience.

Do a search (search for Jitin Chadha) on Google and do the same search on Bing or other search engines.
Most of the engines will respond in almost the same time as Google (you may even get faster results on other engines).

Based on this comparison of search engine speeds, the question arises: "is faster the better?" We will discuss this in the next post. :)


                                                                                            
To Be Continued...

Is Faster the better?




How fast is fast enough?


No doubt faster is better, but how fast is fast enough depends on the perception of the person using it.

Perception, or expectations, can be manipulated. Take an example:
when we see a sign board saying "Work In Progress", it implies "inconvenience for us",
but the feeling becomes positive when the sign board says "Please be patient, we are working towards a world-class experience".

There will always be some activities that take time (like transferring a file or fetching a report with lots of records). We have three choices to handle the situation:

a) Let the user wait until the activity completes.
b) Show an innovative message to the user (with a status bar, in the file-transfer example) and ask him to wait until the activity completes.
c) Run the process in the background and let the user do something else until the process completes [showing relevant messages as well].



Series Continues :) ......