Running distributed JMeter test

Reasoning

For running a significant load test (with hundreds or even thousands of concurrent users) it might be necessary to use the capacity of more than one machine. This instruction explains how to run a distributed load test where the local machine runs only the controlling JMeter GUI and all the load generation is done by remote JMeter servers (a kind of slave nodes).

The overall process diagram looks like this:

Installation

    1. Download the most recent binary from http://jmeter.apache.org/download_jmeter.cgi to all your machines: your client (e.g. your local computer) and the servers which will generate the load itself
    2. cd to jmeter root and run:
  • sudo chmod -R +x bin/

* Note that you need Java installed on all the machines for JMeter to work. The JMeter site also suggests running the same versions of Java and JMeter on all the systems, as mixing versions may not work correctly.

Variant 1 (via config)

Client configuration

On the client (i.e. the controlling JMeter instance on your machine) edit <jmeter root>/bin/jmeter.properties and add:

remote_hosts=127.0.0.1:55501,127.0.0.1:55502

# should be all localhost entries; each port must match the one used

# for "server_port" on the corresponding remote server

 

client.rmi.localport=55512

# if you don't specify the port it's assigned randomly, which makes it slightly more

# difficult to know which port to forward with -R

 

mode=Batch

# Optional change.

# Batch mode returns samples in batches (every 100 samples or every minute by default).

# The default mode is StrippedBatch, which also batches samples but additionally

# strips the response data from each SampleResult; Batch does not.

 

num_sample_threshold=250

# Optional change.

# means every 250 samples or every minute. The default is 100.

Server configuration

On the server edit bin/jmeter.properties and add:

# Server 1

server_port=55501

server.rmi.localport=55511

 

# Server 2

server_port=55502

server.rmi.localport=55522

 

# Server N

server_port=<…>

server.rmi.localport=<…>

Note that you specify an exact server_port only to be consistent and sure about which port to use in the SSH tunnel afterwards. If you don't specify it, port 1099 is used by default.

That works fine on remote servers but must be changed for local use, since you cannot repeat identical ports in remote_hosts values (e.g. this would be incorrect: remote_hosts=127.0.0.1:1099,127.0.0.1:1099).

Likewise, you set server.rmi.localport to an exact value only in order to know it beforehand. JMeter prints this port number anyway when you start the server:

$ ~/apache-jmeter-2.13# bin/jmeter-server -Djava.rmi.server.hostname=127.0.0.1

Created remote object: UnicastServerRef [liveRef: [endpoint:[127.0.0.1:55511](local),objID:[…

Note the endpoint:[127.0.0.1:55511] which contains the port number.
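If you script the server startup, that advertised port can be recovered from the log line. A minimal sketch (the function name and regular expression are ours, not part of JMeter):

```python
import re

def endpoint_port(log_line: str) -> int:
    # Pull the advertised RMI port out of a jmeter-server startup line.
    match = re.search(r"endpoint:\[[\d.]+:(\d+)\]", log_line)
    if match is None:
        raise ValueError("no endpoint found in log line")
    return int(match.group(1))

line = "Created remote object: UnicastServerRef [liveRef: [endpoint:[127.0.0.1:55511](local),objID:["
print(endpoint_port(line))  # 55511
```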

Starting jMeter instances

# Issue on Server 1

bash bin/jmeter-server -Djava.rmi.server.hostname=127.0.0.1

 

# Issue on Server 2

bash bin/jmeter-server -Djava.rmi.server.hostname=127.0.0.1

 

# Issue on local controlling jMeter server

bash bin/jmeter.sh -Djava.rmi.server.hostname=127.0.0.1

# note that here we start the JMeter GUI, which is also fine for a distributed load test

Launching SSH tunnels

Run locally to create ssh tunnels (different terminals):

# Terminal 1

ssh -vN -L 55501:127.0.0.1:55501 -L 55511:127.0.0.1:55511 -R 55512:127.0.0.1:55512 V338

 

# Terminal 2

ssh -vN -L 55502:127.0.0.1:55502 -L 55522:127.0.0.1:55522 -R 55512:127.0.0.1:55512 V364

where

      1. both -L parts are required for the remote JMeter servers to operate normally
      2. the -R part is only required to get the data back from the remote servers (i.e. to see it in the local GUI). Without it the load test still runs on the server, but there is no “feedback”
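Since every tunnel command follows the same pattern, it can be generated from the three port values. A quick sketch (the helper name is ours; it only assembles the command string, it does not run ssh):

```python
def tunnel_command(server_port: int, server_local_port: int,
                   client_local_port: int, host: str) -> str:
    # -L forwards both server-side ports; -R reverse-forwards the client RMI port
    # so the remote server can report results back to the controlling GUI.
    return (
        f"ssh -vN"
        f" -L {server_port}:127.0.0.1:{server_port}"
        f" -L {server_local_port}:127.0.0.1:{server_local_port}"
        f" -R {client_local_port}:127.0.0.1:{client_local_port}"
        f" {host}"
    )

print(tunnel_command(55501, 55511, 55512, "V338"))
```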

Variant 2 (via CLI params only)

Servers configuration and launch

On all the servers (note: on the servers only, not on your controlling JMeter client) cd to the jmeter root and run:

$ bin/jmeter-server -Djava.rmi.server.hostname=127.0.0.1 -Jserver_port=[port1] -Jserver.rmi.localport=[port2]

Note that you can choose the ports freely, but make sure they are not already in use.

E.g. we could issue these commands on servers:

# Server 1

$ bin/jmeter-server -Djava.rmi.server.hostname=127.0.0.1 -Jserver_port=10101 -Jserver.rmi.localport=10102

 

# Server 2

$ bin/jmeter-server -Djava.rmi.server.hostname=127.0.0.1 -Jserver_port=20101 -Jserver.rmi.localport=20102

 

# Server N (note the “N” in the port numbers below; substitute the actual server number)

$ bin/jmeter-server -Djava.rmi.server.hostname=127.0.0.1 -Jserver_port=N0101 -Jserver.rmi.localport=N0102
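The N0101/N0102 scheme is just a convention for picking non-colliding ports per server; expressed as a sketch (the helper name is ours):

```python
def ports_for_server(n: int) -> tuple:
    # server_port follows N0101, server.rmi.localport follows N0102,
    # so each server gets a unique, predictable pair of ports.
    return int(f"{n}0101"), int(f"{n}0102")

print(ports_for_server(1))  # (10101, 10102)
print(ports_for_server(2))  # (20101, 20102)
```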

Client configuration and launch

Launch your jMeter controlling instance (e.g. on your local computer).

Important:

      • make sure you specify all the JMeter servers/ports separated by commas
      • “client.rmi.localport” can be any port of your choosing, as long as it is free

$ bin/jmeter.sh -Djava.rmi.server.hostname=127.0.0.1 \

    -Jremote_hosts="127.0.0.1:[server_port on server #1],127.0.0.1:[server_port on server #2]" \

    -Jclient.rmi.localport=[port]

In our example (for server #1 and server #2) that would be:

$ bin/jmeter.sh -Djava.rmi.server.hostname=127.0.0.1 \

    -Jremote_hosts="127.0.0.1:10101,127.0.0.1:20101" \

    -Jclient.rmi.localport=10100
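The -Jremote_hosts value is simply the tunnelled server ports joined with commas, all on 127.0.0.1 because each server is reached through a local SSH tunnel. A sketch (the helper name is ours):

```python
def remote_hosts(server_ports: list) -> str:
    # Every entry is localhost because each remote server_port is
    # forwarded to the local machine through its SSH tunnel.
    return ",".join(f"127.0.0.1:{p}" for p in server_ports)

print(remote_hosts([10101, 20101]))  # 127.0.0.1:10101,127.0.0.1:20101
```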

Starting SSH tunnels

Finally, start ssh tunnels with corresponding servers.

For each server issue:

ssh -vN \

    -L [server_port]:127.0.0.1:[server_port] \

    -L [server.rmi.localport]:127.0.0.1:[server.rmi.localport] \

    -R [client.rmi.localport]:127.0.0.1:[client.rmi.localport] \

    user@server

In our example we’re issuing:

# Terminal 1

ssh -vN -L 10101:127.0.0.1:10101 -L 10102:127.0.0.1:10102 -R 10100:127.0.0.1:10100 V364

 

# Terminal 2

ssh -vN -L 20101:127.0.0.1:20101 -L 20102:127.0.0.1:20102 -R 10100:127.0.0.1:10100 V338

Start testing

In the JMeter GUI click “Run -> Remote start all” to start the test on the remote servers.


jMeter Graphs

Default Listeners for Vaimo Load Tests

By default the following out-of-the-box JMeter listeners are used for each load test:

Listeners powered by plugins

It’s possible to create different types of graphs using plugins that are available for JMeter (the latest version of JMeterPlugins-Extras needs to be installed first).

Check the information below for plugins details:

  • Default Listeners for Vaimo Load Tests
  • Listeners powered by plugins
    • Each transaction response time (As a Histogram diagram)
    • Average transaction duration
    • Median transaction duration
    • 95% transaction duration
    • Min transaction time
    • Max transaction time
    • Threads (indicating concurrency) running at any point during the test (presented as a line graph)

Each transaction response time (As a Histogram diagram)

Add: Listener > Response Times Distribution to your overall plan

By the end of the test you will have the response times of all transactions (grouped by name) as a histogram diagram:

From the ‘Rows’ tab you can choose which transactions to display individually.

Category Page: 

Cart Page:

Product Page:

Average transaction duration

 Add: Listener > Aggregate Graph to your overall plan

By the end of the test you can configure the graph to show the average response time of all transactions (grouped by name) as a bar diagram, by checking ‘Columns to Display: Average’:

Click on the ‘Graph’ tab and you can see the graph

Median transaction duration

 Add: Listener > Aggregate Graph to your overall plan

By the end of the test you can configure the graph to show the median response time of all transactions (grouped by name) as a bar diagram, by checking ‘Columns to Display: Median’:

Click on the ‘Graph’ tab and you can see the graph

95% transaction duration

 Add: Listener > Aggregate Graph to your overall plan

By the end of the test you can configure the graph to show the 95% response time of all transactions (grouped by name) as a bar diagram, by checking ‘Columns to Display: 95%’:

Click on the ‘Graph’ tab and you can see the graph

Min transaction time

 Add: Listener > Aggregate Graph to your overall plan

By the end of the test you can configure the graph to show the minimum response time of all transactions (grouped by name) as a bar diagram, by checking ‘Columns to Display: Min’:

Click on the ‘Graph’ tab and you can see the graph

Max transaction time

 Add: Listener > Aggregate Graph to your overall plan

By the end of the test you can configure the graph to show the Maximum response time of all transactions (grouped by name) as a bar diagram, by checking ‘Columns to Display: Max’:

Click on the ‘Graph’ tab and you can see the graph

Threads (indicating concurrency) running at any point during the test (presented as a line graph)

 Add: Listener > jp@gc – Active Threads Over Time to your overall plan

You can configure the graph to show all threads aggregated together (or displayed grouped by transaction name) plotted over execution time of the test plan:

How to push data back to NewRelic

Since JMeter is not a real browser, it does not execute any JavaScript after receiving the response from the web server. NewRelic front-end timing and browser session tracking depend on JavaScript execution to send the required data asynchronously back to NewRelic, so when we use JMeter we cannot use the features in the Browser section of NewRelic (we are interested in the transaction breakdown in the Browser section). To overcome this we need to extract the required parameters from the page response, fake some metrics, and then make a separate request to the NewRelic server to push the data.

Step-by-step guide

    1. In order for the NewRelic agent to inject its variables into the page source we need a User-Agent that is more modern than JMeter’s default one, since the NewRelic agent on the server does not inject this data for old User-Agents. What has worked for me is simply setting my normal Chrome User-Agent in the request headers:
    2. Now we are sure that the NewRelic agent injects some JSON data into the page that we can parse:
    3. As you can see, the NREUM JSON object now contains some data, which we can parse using JMeter’s ‘Regular Expression Extractor’ like this:
    4. Now that we have a JSON object we can use JMeter’s ‘JSON Path Extractor’ to simply fetch the value of each variable in the JSON:
    5. At this step we have extracted all NewRelic variables from the page and we can push them to NewRelic. The request should be a GET request to bam.nr-data.net/1/${nrLicence}, where nrLicence is already extracted from the JSON, and the rest of the parameters are as follows:
  • a: ${nrAppId}
    ApplicationId, already extracted from the JSON.
  • pl: ${__javaScript(new Date().getTime();)}
    JMeter JavaScript code to get the current time; used as a reference for calculating the other front-end timings, so it is not important that it is accurate.
  • v: 768.2acc9fa
    JavaScript code version. If it changes, simply check what is sent to NewRelic in a normal page view.
  • to: ${nrTransactionName}
    Hashed value of the transaction name (Magento’s module/controller/action), already extracted from the JSON.
  • ap: ${nrApplicationTime}
    Application time in milliseconds, already extracted from the JSON.
  • be: hardcoded integer of milliseconds
    Back-end time including network, DNS and content download; since it is not possible (or important) to calculate the exact value, this can be hardcoded to anything.
  • fe: hardcoded integer of milliseconds
    Whole front-end time; since it is not possible (or important) to calculate the exact value, this can be hardcoded to anything.
  • dc: hardcoded integer of milliseconds
    ?
  • f: []
    ?
  • perf: hardcoded JSON
    A JSON object with the browser performance timing values normally taken from performance.timing; since JMeter does not render anything on the front end, this should be hardcoded.
  • at: ${nrAtts}
    Obfuscated value of any custom attributes injected into the page, already extracted from the JSON.
  • jsonp: NREUM.setToken
    6. We are nearly done now; we just need to make sure we have a JSON extractor PostProcessor inside every request sampler, and a request sampler making the GET request to NewRelic right after it:
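To make the beacon request concrete, here is a sketch of how the GET URL is assembled from the parameters above (the helper name and all values are illustrative placeholders, not real NewRelic data):

```python
from urllib.parse import urlencode

def nr_beacon_url(licence: str, params: dict) -> str:
    # GET request to the NewRelic beacon endpoint described above.
    return f"https://bam.nr-data.net/1/{licence}?{urlencode(params)}"

# All values below are placeholders standing in for the extracted variables.
url = nr_beacon_url("abc123", {
    "a": "1234567",       # ${nrAppId}, extracted from the injected NREUM JSON
    "v": "768.2acc9fa",   # JS code version
    "to": "fakeTxnHash",  # ${nrTransactionName}
    "ap": "250",          # ${nrApplicationTime}, ms
    "be": "300",          # hardcoded back-end time, ms
    "fe": "900",          # hardcoded front-end time, ms
})
print(url)
```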

How to push data back to Google Analytics

Since JMeter is not a real browser, it does not execute any JavaScript after receiving the response from the web server. Google Analytics session tracking and visitor counting depend on JavaScript execution to send the required data asynchronously back to Google, so when we use JMeter we do not see any data in Google Analytics about our load test (we are interested in the number of visitors on the site). To overcome this we need to extract the required parameters from the page response, fake some metrics, and then make a separate request to the Google Analytics server to push the data.

Step-by-step guide

    1. GA tracks visitors based on a unique user identifier, which can be anything as long as it is unique and each user has one. In JMeter each thread is viewed as a unique visitor, so if we have threads #1 to #20 putting load on the site with different scenarios, we can use the thread number as an identifier and push it to GA. To do this we need some simple Java coding in a ‘BeanShell PreProcessor’ component in JMeter. My simple code to generate a unique id for a thread looks like this: 
       
  • import java.text.*;
    import java.io.*;
    import java.util.*;

    try {
        int threadNo = ctx.getThreadNum() + 1;
        int threadGroupBase = 123456789;
        int uniqueId = threadNo + threadGroupBase;
        vars.put("uniqueId", Integer.toString(uniqueId));
    } catch (Exception ex) {
        log.error(ex.getMessage());
        System.err.println(ex.getMessage());
    } catch (Throwable thex) {
        System.err.println(thex.getMessage());
    }


  • As you can see it’s quite simple: ctx.getThreadNum() asks the JMeter engine to give us the current thread number, and the rest just adds it to a base value (see the note below) and puts it inside a variable named ‘uniqueId’, which is accessible during the thread’s lifetime and only to that thread.
    Note that the base value should be unique across thread groups (e.g. a random value) but equal for all the requests within a particular thread group.
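For clarity, the same id logic sketched outside BeanShell (the function name is ours; ctx.getThreadNum() is 0-based, hence the +1):

```python
def unique_id(thread_num: int, thread_group_base: int = 123456789) -> str:
    # Mirror the BeanShell snippet: offset the 1-based thread number by a
    # per-thread-group base so ids never collide across thread groups.
    return str(thread_num + 1 + thread_group_base)

print(unique_id(0))   # first thread  -> "123456790"
print(unique_id(19))  # 20th thread   -> "123456809"
```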

    2. Apart from that we need the page title to send across to GA, which can be fetched from the source using this RegEx extractor:
    3. At this step we have extracted the variables from the page and we can push them to GA. The request should be a GET request to www.google-analytics.com/collect?cid=${uniqueId}, where ‘uniqueId’ was generated in step 1, and the rest of the parameters are:
  • t: pageview
    Indicates that this is a page view hit.
  • dt: ${documentTitle}
    Page title, already extracted with the RegEx.
  • dl: ${BaseUrl}/${csvUrl}
    Visited page URL.
  • tid: UA-#######-#
    Hardcoded to the Tracking ID / Property ID, which can be found in the Google Analytics account.
  • gtm: GTM-ABCDEF#
    GTM container ID. Hardcoded to the Google Tag Manager id.
  • v: 1
    Measurement Protocol version; hardcoded to 1.
    4. We are nearly done now; we just need to make sure we have a RegEx extractor PostProcessor inside every request sampler, and a request sampler making the GET request to GA right after it:
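To make the hit concrete, here is a sketch of how the GET URL to the collect endpoint is assembled (the helper name and parameter values are illustrative placeholders):

```python
from urllib.parse import urlencode

def ga_collect_url(unique_id: str, params: dict) -> str:
    # GET request to the Google Analytics Measurement Protocol endpoint
    # described above; cid carries the per-thread unique visitor id.
    query = urlencode({"cid": unique_id, **params})
    return f"https://www.google-analytics.com/collect?{query}"

# Placeholder values standing in for the extracted variables.
url = ga_collect_url("123456790", {
    "v": "1",                      # Measurement Protocol version
    "t": "pageview",
    "dt": "Home page",             # ${documentTitle}
    "dl": "https://example.com/",  # ${BaseUrl}/${csvUrl}
    "tid": "UA-0000000-0",         # placeholder Tracking ID
})
print(url)
```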

Useful Resources

Track Non-JavaScript Visits In Google Analytics

http://www.simoahava.com/analytics/track-non-javascript-visits-google-analytics/

Magento Theme validation

Overview

Validate the use of local.xml in custom templates and that there is no excessive file overwriting in the template. Validate proper theme inheritance. Validate proper use of JS and CSS includes. Avoid having too much logic in templates, and avoid loading objects from templates.

Templates

  • Use the “_” symbol as a word separator in .phtml file names, e.g.: price_msrp_item.phtml. All other separators like “-” or CamelCaseFormat are not allowed.
  • Short tags (<? ?> , <?= ?>) in .phtml files are not allowed.
  • Retrieving collection in .phtml files is not allowed (and models as well, especially with load method calls)
  • Commented-out code is not allowed (neither <!-- --> nor /* */ or //)
  • Debug statements like var_dump, print_r, Zend_Debug::dump(), exit or die are not allowed
  • Each .phtml file must have PHPDOC with the current class name:

/**

* @var $this Mage_Authorizenet_Block_Directpost_Iframe

*/

or 

/**

* @see Mage_Authorizenet_Block_Directpost_Form

*/

  • “return” statements in .phtml files are not allowed
  • Complex PHP logic is not allowed in .phtml files
  • All calls in .phtml files must be to public functions, i.e.: <?php $this->getProductName() instead of $this->_getProductName()
  • Use the alternative syntax for if/for/foreach/while and other statements, i.e. <?php if (….): ?> … <?php endif; ?> instead of <?php if (….) { … } ?>
  • Inline css/js is not allowed in .phtml files

Layouts

  • Copy-pasting core layout files is not allowed (i.e. re-declaration of catalog.xml or page.xml in the custom theme)
  • All js/css files must be included only via a layout file. Otherwise it’s not possible to merge them.
  • Use proper theme inheritance
  • Avoid using local.xml if possible. Each module should have its own layout update file.
  • Block names in layout files must be separated by dots, i.e.: <reference name="product.info.options.wrapper">

Translations

  • Each module has its own translation file.
  • Use Translator.translate() for JS translations

Start Automated Code Analysis for Your Project

We’re a big company with many teams and many developers in each team.

We work on many projects at the same time, sharing existing code and creating new code. Some of that code is legacy, some follows best practices, and some should be optimised and rewritten.

It brings us to the idea of checking the quality of the code continuously, and there are tools that help developers do that on a regular basis.

Automated Code Analysis Tool

We present you an Automated Code Analysis Tool we built. It’s built using

  • Bitbucket as the holder of all code and project repositories, plus the build repository we create to send to Scrutinizer
  • AJA/Composer as the builder of the build repository
  • A dedicated Jenkins job server that triggers building the build repository on a regular basis
  • Scrutinizer, which does the rest of the magic: it analyzes the code, reveals issues, reports them, tracks progress on issue fixes, and a lot more.

As a bonus, Scrutinizer creates a badge for your project you can use as an indicator of your project code health.

Call for action – start using Scrutinizer

Would you like to have your Magento 1 project checked by our tool? Easily done. 

You just have to follow two simple steps below:

  1. Sign up at Scrutinizer with your Vaimo email address (https://scrutinizer-ci.com/)
  2. Ask Eugene Ivashin for further help. He will do the following for you:
    • Add your Scrutinizer account to the Vaimo organization
    • Create a job at our dedicated Jenkins job server that will build your build repository on a regular basis
    • Set up a project in Scrutinizer that will check your built project on a regular basis.

Before applying to use Code Analysis Tool for your project, make sure that vaimo/vaimo-composer-utils package in your composer.json is at least 0.16.1. If it’s older, please update your AJA project.

 

We have created a global configuration file in Scrutinizer that defines all checks and code analysis rules for every Magento 1 project we maintain. It’s quite a big list of checks, plus Scrutinizer recognizes the Magento 1 structure and code style. Even though you will have access to all Vaimo projects in Scrutinizer, you shouldn’t modify this configuration, as we aim to have the same criteria for all our projects and will overwrite any custom profile with the global configuration.

Consider reading about the PHP Analyzer that Scrutinizer uses to perform the checks; there is a lot of valuable information at https://scrutinizer-ci.com/docs/tools/php/php-analyzer/.