Implementing A/B Tests with Adobe Target & AngularJS Decorators

Web analytics tools are used to understand the behaviour of website visitors, and A/B testing is a technique that uses such tools to optimise a site. The tools facilitate this by giving you a means to measure and analyse site traffic and conversion.

Adobe Target is a real-time metrics-collection and reporting tool, and one of the most widely-used client-side analytics platforms available. In this blog, I’m going to talk about how to create an A/B test using Adobe Target and AngularJS, where the ‘B’ version is swapped-in using Angular decorators.

What is an A/B Test?

Shine’s Fernando Maquedano wrote a great post about this in 2015, but in short, an A/B test aims to identify the ‘better’ of two versions of a website, where ‘better’ is defined as the one that produces higher metrics. These metrics could be stronger sales, larger conversion rates, or anything else that is important to you. We identify the winner by comparing changes to a page against the default page design.

When you have well-defined metrics for success, A/B testing is a great way to maximise the impact of a site. The only caveat I would add is that the most effective way to A/B test is to only make limited alterations within each test iteration. Otherwise, it becomes too hard to determine what might have caused your metrics to change.

How do website visits become an A/B report in Adobe Target?

In a nutshell, a reference to a script file called at.js is inserted into a page that is returned from your server. This script contains the reporting and analytics code, and while the page is loading, the script is executed.

This execution is very quick and doesn’t significantly affect load time. An mbox GET request is then made to the Adobe Target server with the configured parameters. After that, the qualified audience is determined according to rules describing to whom the content and experiences in an activity are available.

Having determined which experiences the visitor should have according to the URL and mbox parameters, Adobe Target sends the content back to the page. Following that, a web beacon embedded in the page transmits analytics data back to the server.
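As a concrete sketch of this request/response cycle, at.js exposes getOffer and applyOffer. In the snippet below the Target API object is injected as a parameter so the flow can be exercised with a stub outside a browser; the mbox name is an assumption, not something mandated by Target:

```javascript
// Sketch of the mbox request/apply cycle, assuming the at.js
// getOffer/applyOffer API. `targetApi` is injected (in the browser it
// would be window.adobe.target); the mbox name is an assumed example.
function fetchExperience(targetApi, mboxName) {
    targetApi.getOffer({
        mbox: mboxName,
        success: function(offer) {
            // Apply the returned content (experience A or B) to the page
            targetApi.applyOffer({ mbox: mboxName, offer: offer });
        },
        error: function(status, error) {
            console.error('Target request failed:', status, error);
        }
    });
}
```

In a real page you would call something like `fetchExperience(window.adobe.target, 'target-global-mbox')` once at.js has loaded.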

All collected data is kept in a report suite on the server. This data can be accessed through a web browser. After the test ends, an A/B test report is run to decide whether the metrics for page B are higher than page A, and thus whether it had a greater influence on visitors.

Creating an A/B Test

Before conducting an A/B test, you need to know what the number of daily visitors to your site is. This feeds into the calculation of how long the test will need to run if the results are to be statistically meaningful. Other parameters for this calculation include:

  • Confidence level: Always recommended to be 95% or above.

  • Statistical power: High statistical power is preferred, as it represents a greater chance of detecting a real difference between conversion rates, and a lower chance of producing false positives. 80% is a commonly adopted value for statistical power.

  • Baseline conversion rate: The existing conversion rate of version A, which can usually be reasonably estimated based on prior experience.

  • Minimum Reliably Detectable Lift: If you want to detect a small lift in the A/B test result, but still have a high probability of that lift being real (as opposed to just caused by random fluctuations), you must have a large sample size, either because you have lots of users each day, or because you’ve run the test for a long time. Minimum Reliably Detectable Lift lets you factor in the potential business impact of a change to find a balancing point between measuring a small lift and having to conduct a test with a longer duration. For example: suppose the business mandates that a minimum lift of 11% would be required to use option B, and that this meant that a sample size of 20000 visitors would be required for your test. This means you need 10000 visitors for the A case, and 10000 for the B case. If your site attracts 1000 visitors on a daily basis, you’re going to have to run the test for twenty days to accurately measure for this lift. Alternatively, if the business doesn’t want to wait 20 days, they’ll need to raise their minimum lift to something more than 11%.
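To make the arithmetic above concrete, here is a minimal sketch of the duration calculation (the function name and the 50/50 split between experiences are assumptions of this sketch):

```javascript
// Minimal sketch: given a required total sample size and average daily
// traffic, estimate how many days the test must run. Assumes visitors
// are split 50/50 between experiences A and B, as in the example above.
function testDurationDays(totalSampleSize, dailyVisitors) {
    return Math.ceil(totalSampleSize / dailyVisitors);
}

console.log(testDurationDays(20000, 1000)); // 20 days, as in the worked example
```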

Configuring an A/B Test

To configure an A/B Test in Adobe Target, we first click ‘Create Activity’ and choose ‘A/B Test’:


We then specify the URL that determines the site to be used in the test, and click ‘Next’:


At this stage, we name our test. We click ‘Experience B’ and identify the part of the page where we want to have an alternative version. In this example, we choose the ‘Application Development’ container and click ‘Swap with HTML Offer’ from the drop-down-list:


This will trigger a pop-up window:


We create a new folder and name it with our test Id (track Id). Inside the folder, we create an HTML Offer for the B test. The ‘Code’ field is where our generated bundle file, containing the new feature for the B test, will go, but we’ll leave it empty for now. We then choose offer B and click ‘Done’.

When we are satisfied with our configuration, we click ‘Next’ to go to the ‘Targeting’ step:


Under ‘Audience’, “All Visitors” means all site visitors will be targeted for involvement in this activity. We can limit this to some other percentage, or create an activity-only audience. Of that percentage, we can then specify the proportion that should actually be allocated to Experience A and Experience B.

After setting these values, we click ‘Next’ to be taken to the final step:


This is where we specify the start date. As we have already calculated the number of days needed to complete the test, the end date will be 20 days after we start. We configure the activity to use Adobe Analytics as the reporting source. This requires us to link our Adobe Experience Cloud account with both Adobe Analytics and Adobe Target, and to select where we want to save the report suite.

Note that in the ‘Goal Metric’ section, we choose the metric we created in Analytics (creating that metric is outside the scope of this blog). Finally, we save and close the activity.

Implementing the B Test with AngularJS

Now we finally embark on the most exciting part of the process: how to implement the B part of the test.

In order to check which tests (or which version of the tests) are loaded or live, we first register the tests in a new property on the window object before the actual implementation of the new feature:

export default function setupTest(testId, variant = 'B') {
    const experiment = {
        testId: testId,
        variant: `experience${variant}`,
    };
    window.experiments = window.experiments || [];
    window.experiments.push(experiment);
}

If you’re using TypeScript, we can declare the new property directly on the Window interface definition:

interface Window {
  experiments: object[];
}

For the B test, besides the testId and variant registration, a CSS class name with testId and variant combination needs to be attached to the body tag. This is so that new styles can be applied when the test is loaded:

const attachClassValue = (testId, variant) => {
  window.document.body.classList.add(testId + variant);
};
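Putting the registration and class-attachment steps together, a bootstrap for the B test might look like the sketch below. The test id is a made-up example, and `window` is replaced by a stub object so the snippet can run outside a browser:

```javascript
// Self-contained sketch combining the two steps above; 'tst123' is a
// hypothetical test id. `win` stands in for the browser's window object.
const win = {
    experiments: undefined,
    document: { body: { classList: { classes: [], add(c) { this.classes.push(c); } } } }
};

function registerTest(win, testId, variant = 'B') {
    // Register the experiment so other scripts can see which tests are live
    win.experiments = win.experiments || [];
    win.experiments.push({ testId: testId, variant: `experience${variant}` });
    // Attach the class used to scope the B test's new styles
    win.document.body.classList.add(testId + variant);
}

registerTest(win, 'tst123');
// win.experiments now holds the registration, and the body carries 'tst123B'
```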

When changing a feature from its original behaviour, AngularJS decorators are handy as they allow a service, directive or filter to be modified before it’s used. In this case, we’re going to change a directive.

First, we get the module where the directive that needs to be changed lives:

const module = angular.module("MyModule");

In this case, we’ll extend the scope of the original directive:

module.config([
    "$provide",
    function($provide) {
        $provide.decorator("myHeaderDirective", [
            "$delegate",
            function($delegate) {
                const directive = $delegate[0];

                angular.extend(directive.scope, {
                    discount: "=",
                    content: "=",
                    onClick: "&"
                });

                return $delegate;
            }
        ]);
    }
]);

Now we’ll create a template that shares the same name as the template in the directive so that the new variables can be passed into the template, and the new data assigned when the directive is used:

<div class="mainHeader">
  <div class="mainHeader-header__bottom-highlight">{{discount}}</div>
  <div class="mainHeader-header__bottom">{{content}}</div>
</div>

In the parent template, we can then do:

<my-header
  ng-repeat-start='header in headers'
  discount='header.discount'
  content='header.content'
  on-click='onFeatureHeaderToggle({$header: header})'>
</my-header>
<div ng-repeat-end></div>

Sometimes extra functions need to be added in the link function of the directive. For example:

const originLink = directive.link;
const myLink = function(scope, element, attrs, ctrl) {
    scope.calculateDiscount = calculateDiscount;

    function calculateDiscount(InclGST, discount) {
        return Number.parseFloat(InclGST - InclGST * discount).toFixed(3);
    }

    if (originLink) {
        originLink(scope, element, attrs, ctrl);
    }
};

directive.compile = function() {
    return function() {
        myLink.apply($delegate[0], arguments);
    };
};
delete $delegate[0].link;

The directive controller can be extended in the same way, or even replaced by a completely new one:

module.config(function($provide) {
    $provide.decorator("myHeaderDirective", function($delegate, $controller) {
        const directive = $delegate[0];
        const controller = directive.controller;

        directive.controller = function($scope) {
            angular.extend(this, $controller(controller, { $scope: $scope }));
            // New function goes here
        };

        return $delegate;
    });
});

Upon completion of the test implementation, all files need to be bundled into an HTML file. Here is an example of how you’d do that as part of a Webpack configuration:

plugins: [
    new HtmlWebpackPlugin({
        cache: false,
        inlineSource: '.(js|css)$',
        filename: 'myHeader.bundle.html',
        templateContent: '',
        inject: 'head', // will place the scripts in the head element
        minify: {
            removeAttributeQuotes: true,
            collapseWhitespace: true,
            html5: true,
            minifyCSS: true,
            removeComments: true,
            removeEmptyAttributes: true
        }
    })
]

Large dependencies should be excluded from the output bundle if they already ship with the main application, because they will be available on the page when test B runs regardless:

externals: {
    jquery: 'jQuery',
    lodash: '_'
}

The last step is to copy the bundle file and paste it into the HTML Offer created for test B in Adobe Target.


Before I conclude, here are a couple of handy things I discovered whilst learning how to do this:

  • When getting an element either with an id or class comprised of variables or rendered inside a ng-repeat directive, it must be placed inside an angular.element(document).ready() function call. This ensures it will wait for Angular to finish rendering DOM elements first.
  • When a new library is introduced in the B test, the best way to access it is via a CDN. This is because Adobe Target has a size limit of 256KB when pasting in the code for an HTML offer. This is especially relevant if you’re using npm or yarn to bring in external libraries, as they’ll all end up in the bundle.
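One hedged way to follow the CDN advice above is to inject the script tag at runtime. The document object is passed in as a parameter so the helper can be exercised with a stub, and the URL in the usage note is a placeholder, not a real recommendation:

```javascript
// Sketch: append a <script> tag pointing at a CDN-hosted library, keeping
// the library itself out of the 256KB HTML offer. `doc` is injected for
// testability; in the browser it would be the real document.
function loadFromCdn(doc, src) {
    const script = doc.createElement('script');
    script.src = src;
    doc.head.appendChild(script);
    return script;
}
```

In the browser you’d call something like `loadFromCdn(document, 'https://cdn.example.com/lodash.min.js')` before the B-test code runs.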


In this blog, I’ve covered how an A/B test works in Adobe Target. This has included how to set the test up in Target, and how to modify directives and controllers with AngularJS decorators to change behaviour for a B test. In my next blog, I’ll show how you can do the same thing with Angular 2+.