Adobe Marketing Cloud Data Workbench Controlled Experiments


Contents

Data Workbench Controlled Experiments
  How Does Site Identify Visitors?
  How Do Controlled Experiments Work?
  What Should I Test?
  What Are the Requirements?
  How Do I Implement a Controlled Experiment?
Enabling Controlled Experimentation
  Modifying the ExpFile Parameter
  Modifying the ExpCookieURL Parameter (Optional)
  Modifying the ExpPartialMatch Parameter (Optional)
Creating a Controlled Experiment
  Defining the Objective
  Forming a Hypothesis
  Determining the Number of Visitors Needed
  Creating the Test Content
  Configuring and Deploying the Experiment
Validating the Experiment
Viewing the Results
  Configuring the Dataset
  Modifying Log Processing.cfg
  Modifying Transformation.cfg
  Viewing the Experiment Results
  Evaluating the Experiment
Experiment Design Spreadsheet

Last updated 2/8/2017

Data Workbench Controlled Experiments

Controlled experiments are tests that enable you to compare results obtained from an experimental sample group with those from a standard control group. Site enables you to implement, measure, and analyze controlled experiments and their results as they relate to different aspects of your website. Doing so enables you to test hypotheses about improving website performance before spending significant time or money fully implementing the proposed changes.

Note: Site experiments can be analyzed only in datasets where the only method of visitor identification in use is the Sensor-set persistent cookie method. Sensors running on J2EE servers (JBoss, Tomcat, WebLogic, and WebSphere) do not support controlled experimentation. For more information, see the following section.

Using Site, you can implement A/B, A/B/A, and multivariate controlled experiments that gather enough test data to support a statistically sound, detailed evaluation of your hypothesis, without impacting current website performance.

How Does Site Identify Visitors?

A typical configuration of Site uses cookies to uniquely identify visitors to your website and track their behavior over time. The first time that a particular browser (considered a visitor) makes a request of your website, Sensor works with your web server to set a persistent cookie (by default cs(cookie)(v1st)), which is interpreted internally within the system as x-trackingid. This cookie is set only once, on the very first request that the visitor makes to your website. It is then collected from that visitor each time that browser makes a request (either a page or an embedded object request) of your website in the future. Accepting a persistent cookie is at the browser's discretion.

If a user chooses to block persistent cookies, their page view requests are still logged, but the measurement data from those requests is not correlated to a particular visitor or their sessions on the website unless you implement an alternate method of visitor identification, such as applying the Hash transformation to the IP and UserAgent fields.

During a controlled experiment, users who do not accept cookies could be placed in different experiment groups from one click to the next. This becomes an issue only if you perform your test analysis with the Broken Session Filter turned off in Insight, which Adobe does not recommend. For more information about the Broken Session Filter, see the Insight User Guide.

If a visitor clears the cookie during an experiment, the visitor is assigned a new cookie and potentially could be assigned to a different group. Because Adobe identifies the visitor as new, the experiment is not invalidated.

How Do Controlled Experiments Work?

In an experiment, you can define any number of test groups in addition to the control group.

When an experiment is running, all visitors to your website become part of the experiment, either as part of a test group or of the control group, as soon as they access any page involved in the experiment. Visitors are allocated to your experiment groups randomly, in proportions defined during the experiment configuration.

Controlled experiments are implemented using the Sensor software that is installed on each of the content servers in your web cluster. As the content servers receive requests, Sensor randomly selects visitors for your test groups and redirects their page requests to the experimental content. When Sensor selects a visitor to view the test content, the address bar continues to list the originally requested URI, but the visitor is routed to the test URI. Because this process takes place internally in the server application, users are not aware that they are being tested, which is an important consideration for unbiased experimentation. Sensor passes the test URIs, not the original URI displayed to the user, to the log files for use in analysis.

The results of the experiments can be analyzed easily using Insight to determine whether the experimental hypothesis that you were testing is correct.

Note: Adobe strongly recommends that controlled experiments be coordinated and performed with input from the individuals in your organization who are responsible for configuring and maintaining your analysis datasets.

What Should I Test?

Test results must be clear and meaningful so that you can feel confident making large-dollar decisions based on those results.
Although you can test various page layouts with Sensor and Site, Adobe suggests that you focus on testing high-value, strategic business initiatives, or new or redesigned website functionality, that address the goals you have set for your website as well as for your business. You can test for such issues as best-price guarantees, personalization functionality, market offers (for example, packages or bundles), creative design, and application processes.

The following concepts are most important when developing your controlled experiment:

Understand the right changes to make. This requires some research into how your website functions and the business processes underlying the front-end website. You want to make changes that provide the most impact and can be tested easily.

Small changes can have significant impact. Not all of the changes that you test need to be drastic to have a significant impact on your business. Always be open to making small, but very important, changes.

Supported Methodologies

Many types of experiments with many different goals can be performed using Site. The following list provides a few examples:

Altering pages, content, and website processes to improve conversion rates.
Changing marketing campaigns, promotions, cross-sells, and up-sells to increase revenue.
Varying page load times to understand customer quality of service and the actual value of infrastructure performance.

To reach these goals, Site supports the following types of methodologies for controlled experimentation and testing:

Page Replacement: Replace static URL X with static URL Y. This methodology is of limited use in a dynamic environment.

Dynamic URI Replacement: A variant of Page Replacement that replaces static page X with dynamic page Y to render dynamic content.

Object Replacement: Replace fixed object X with fixed object Y.

Content Replacement: Replace content set X (multiple objects, pages, tables, and so on) with content set Y.

Experiment Variable Replacement: Replace JavaScript object /writecookie_x.js with JavaScript object /writecookie_y.js to write a cookie that can be used by a back-end system to serve particular content.

Note: Controlled experiments are based on URI replacement, not query string replacement. The URI is the portion of a URL such as the following: http://www.omniture.com/index.asp?id=1

For example, in your controlled experiment you could specify that the control group URI index.asp be replaced with the test group URI index2.asp to determine which page design would result in more value.

What Are the Requirements?

To perform a controlled experiment on your website using Site, you must meet the following requirements:

Sensor must be installed and working properly on each web or application server that supports the website content that you are testing.

Note: Sensors running on J2EE servers (JBoss, Tomcat, WebLogic, and WebSphere) do not support controlled experimentation.

You must have a process in place for pushing content to all of your web or application servers, such as a content management system.

How Do I Implement a Controlled Experiment?

Controlled experimentation with Site and Sensor is designed to be simple, feasible, and actionable. The rest of this guide describes how to fully implement a controlled experiment:

Enabling Controlled Experimentation
Creating a Controlled Experiment
Validating the Experiment
Viewing the Results

Enabling Controlled Experimentation

To enable controlled experimentation, someone with administrator access to your web or application servers must modify the ExpFile parameter in the Sensor configuration file (usually named txlogd.conf) on each web or application server in your web cluster on which a Sensor is installed. In addition, two other parameters in this file can be modified to implement a testing tool (ExpCookieURL parameter) or to remap large sections of your website (ExpPartialMatch parameter). This chapter provides more information about these parameters.

To edit the txlogd.conf file

If you have administrator access, complete the following steps. If you do not have administrator access, contact your system architect to request the changes, providing them with the following steps.

1. Navigate to the Sensor installation folder on a web or application server in your web cluster on which a Sensor is installed.
2. Open the Sensor configuration file (usually named txlogd.conf) in a text editor and edit the file as indicated in Modifying the ExpFile Parameter and, optionally, in Modifying the ExpCookieURL Parameter (Optional) and Modifying the ExpPartialMatch Parameter (Optional).
3. Save and close the file.
4. Repeat this procedure for each web or application server in your web cluster on which a Sensor is installed.

Modifying the ExpFile Parameter

The ExpFile parameter points to the location of the experiment configuration file, which defines your experiment. Setting this parameter enables you to run experiments. For steps to create the experiment configuration file, see Configuring and Deploying the Experiment. The following is an example of the ExpFile parameter:

ExpFile /home/experiment.txt

This tab-delimited text file (.txt) can be located anywhere in the Sensor folder and can have any convenient name.
Make sure you record the directory and file name that you specify, because you need to save your experiment configuration file (described later in this guide) using that name and in that directory.

Note: If this parameter is not set identically on each machine in your web cluster on which a Sensor is installed, controlled experimentation does not work.

This entry can be preconfigured and remain in the Sensor configuration file on an ongoing basis with no adverse effect. If the specified experiment configuration file is not found by Sensor, or if it is blank (that is, it exists but has no content), Sensor does not conduct the experiment, logs an error event on the HTTP server, and continues to operate normally in all other respects.

Modifying the ExpCookieURL Parameter (Optional)

The ExpCookieURL parameter can be used to test that your controlled experiment is working properly.

This parameter defines the URL of a virtual page that, when requested, places you into a specified experiment and group and then redirects you to the root of your website. From that point through the end of the experiment, you are part of the specified experiment and group. The default page for this parameter is setcookie.htm, but you can use any valid virtual URL.

Note: This must be a virtual URL, not a real page or piece of existing content. There should be no file on the web server at the specified path with the specified name.

The following is an example of the ExpCookieURL parameter:

ExpCookieURL /setcookie.htm

Modifying the ExpPartialMatch Parameter (Optional)

If you want to enable your controlled experiments to remap your entire website, or an entire subdirectory of your website, to another location, you can set the ExpPartialMatch parameter in the txlogd.conf file to on. The default is off. The following is an example of the ExpPartialMatch parameter:

ExpPartialMatch off

Be very careful when setting this parameter to on, because it can result in an inadvertent remapping of your entire website.
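The effect of this parameter on URI matching can be modeled with a short sketch. In the experiment configuration file (described later in this guide), an Original URL entry ending in a dollar sign ($) requires an exact URI match and always applies; an entry without $ acts as a prefix match on an entire sub-tree and is honored only when ExpPartialMatch is on. This is an illustrative model, not Sensor's actual code:

```python
def uri_matches(entry: str, request_uri: str, partial_match: bool) -> bool:
    """Model of experiment-file Original URL matching.
    Query strings are ignored by the experiment, so strip them first."""
    uri = request_uri.split("?", 1)[0]
    if entry.endswith("$"):
        # A trailing $ demands an exact match and works regardless of
        # the ExpPartialMatch setting.
        return uri == entry[:-1]
    # Entries without $ are prefix matches, honored only when
    # ExpPartialMatch is on.
    return partial_match and uri.startswith(entry)

# Exact-match entry; the query string does not affect matching:
assert uri_matches("/product/product_view.asp$",
                   "/product/product_view.asp?productid=53982", False)
# Prefix entry remaps the whole /product sub-tree, but only when
# ExpPartialMatch is on:
assert uri_matches("/product", "/product/list.asp", True)
assert not uri_matches("/product", "/product/list.asp", False)
```

This illustrates why enabling ExpPartialMatch deserves caution: a short prefix entry such as / would match every page on the site.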

Creating a Controlled Experiment

The overall objective of running a controlled experiment on a website is to determine whether, and to what degree, a defined change or set of changes produces an effect on the users of the website. Defining a controlled experiment involves the following steps:

1. Defining the Objective
2. Forming a Hypothesis
3. Determining the Number of Visitors Needed
4. Creating the Test Content
5. Configuring and Deploying the Experiment

Defining the Objective

When defining your objective, consider the purpose of the website or website process you are analyzing: What is its primary function? Who is its target audience?

Common website objectives include converting more visitors into customers or increasing the average amount of revenue gained from all visitors within an experiment over the duration of that experiment. Common website process objectives include improving the steps or pages in a process that cause visitors to abandon the process, removing unnecessary and confusing options that tend to stop visitors from reaching the end of the process, or consolidating or expanding the process to eliminate or add steps or pages.

Make sure to think carefully about what, specifically, you want to understand about your website. Carefully planning your experiment makes the results much more meaningful to your business.

Objective: To increase the number of visitors to our website who request a demo of our product using the Request a Demo graphical link, as shown in the following image:

Forming a Hypothesis

A hypothesis is an assumption, which can be taken as true for the purpose of argument, that provides a tentative explanation that can be tested by further investigation. Try to think about your hypothesis in terms of alternative pages, images, or processes that could be substituted easily for existing ones.

Your hypothesis must be able to produce a result with statistical significance. This can be achieved by increasing the percentage of visitors included in the test, or by running the test for a longer period of time. At this point, you also should define your visitor-based success metrics, either as part of the hypothesis or as an additional metric.

Hypothesis: Moving the Request a Demo graphical link closer to the top of the page results in a Visitor Conversion increase of at least 1.5%.

In our example hypothesis, we have defined the success criterion for this experiment as an increase in Visitor Conversion of at least 1.5%.
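To make "a result with statistical significance" concrete, here is a hedged sketch of the standard two-proportion z-test that could be applied to visitor conversion counts once the experiment has run. The visitor and conversion counts are hypothetical, and this is generic statistics, not a calculation that Site performs for you:

```python
from math import sqrt

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z-statistic for the difference between two conversion rates,
    using the pooled-proportion standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical outcome: control converts 200 of 5000 visitors (4.0%),
# test converts 280 of 5000 (5.6%).
z = two_proportion_z(200, 5000, 280, 5000)
# |z| > 1.96 indicates the lift is significant at the 95% confidence level.
```

With these numbers z is roughly 3.7, well above 1.96, so the observed 1.6-point lift would be unlikely to arise by chance.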


Determining the Number of Visitors Needed

To create a statistically significant experiment, you must determine how long you need to run the experiment so that it includes enough visitors to effectively evaluate the results of the changes to your website. If you need help determining the minimum length of your experiment, you can use the experiment design spreadsheet provided by Adobe as a tool to help you design your experiment. This file, named VS Controlled Experiment Design.xls by default, functions not only as a worksheet but also as a record of your decisions about the experiment. For more information about this file, see Experiment Design Spreadsheet.

Note: The experiment design spreadsheet can provide useful statistical inferences only when the metric in question is defined as a percentage of visitors that meet some criteria. That is, it is useful only when testing a visitor-based metric hypothesis.

Creating the Test Content

Before you configure the experiment, you should create the alternate content that you want to use in the experiment. The control group is sent to the original URI, while the test group is sent to the new, alternate URI.

To avoid confusion, do not reuse test group file names. For example, if you run an experiment using a test group file named test2.asp, do not use test2.asp as the name for the test file in your next experiment.

For the hypothesis that moving the Request a Demo graphical link on your home page impacts Visitor Conversion, we create the alternate home page containing the Request a Demo graphical link in the new position. The following section describes how you then specify that the control group URI index.asp be replaced with the test group URI index2.asp for a certain percentage of visitors.
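As a rough stand-in for the spreadsheet, the minimum number of visitors per group for a visitor-based conversion metric can be estimated with the standard two-proportion sample-size formula. The 4.0% baseline conversion rate below is hypothetical (the 1.5-point lift echoes the example hypothesis), and this is not the calculation performed by VS Controlled Experiment Design.xls:

```python
from math import ceil, sqrt

def visitors_per_group(p_control: float, p_test: float,
                       z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate visitors needed per group to detect the difference
    between two conversion rates (defaults: 95% confidence, 80% power)."""
    p_bar = (p_control + p_test) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p_control * (1 - p_control)
                                 + p_test * (1 - p_test))) ** 2
    return ceil(numerator / (p_control - p_test) ** 2)

# Hypothetical baseline of 4.0% conversion, hoping to detect a lift
# to 5.5% (a 1.5-point increase):
n = visitors_per_group(0.040, 0.055)
```

For these inputs n comes out to roughly three thousand visitors per group; comparing that with your expected daily traffic gives a minimum run length, which is the same trade-off the spreadsheet's confidence-level note describes.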
Configuring and Deploying the Experiment

After you have defined your objective, hypothesis, and experiment details, and created your test content, you must configure Sensor to deploy the controlled experiment. You do this by completing the following two steps:

1. Configuring the Experiment Configuration File
2. Deploying the Configuration File and Test Content

Configuring the Experiment Configuration File

To configure the experiment, you must complete the experiment configuration spreadsheet provided by Adobe (named TestExperiment.xls by default). This file configures Sensor to perform the experiment and is the Excel version of the text file that you specified in Modifying the ExpFile Parameter.

This file can contain information about multiple experiments, which can run at the same time or at different times and use different groups and percentages, but these experiments are not correlated in any way. Users are placed into a group for each experiment listed in the file that is configured to be running at that time.

Note: Each experiment is independent of all other experiments. Changes you make to one experiment do not affect any other experiment, and although visitors may be in multiple experiments, the results do not relate to one another. If you think a correlation exists between the changes in multiple experiments, you must create a new experiment that tests these changes together.

To configure your experiment

Complete this file before the experiment begins, and do not modify the information while the experiment is running.

Note: An experiment is immediately invalidated if its definition changes after the experiment has begun.

1. If you have administrator access to your web or application servers, navigate to the Sensor installation folder on any Sensor machine in your web cluster to access the TestExperiment.xls file. If you do not have administrator access, contact your Adobe account manager to request the TestExperiment.xls file.

2. Open the TestExperiment.xls file (you can rename this file if desired) and complete the following fields:

Experiment: A descriptive name for the experiment. Each experiment name must be unique and cannot contain spaces. Experiment names are used when displaying the results of experiments in Insight: they appear as the first half of the element names in the controlled experiment dimension, and the second half is the group name from the Group field in this file. Each element is named with the experiment name followed by the group name, in the format ExperimentName.GroupName. For example: New_Homepage.Control

Start: The date and time that you want the experiment to begin. If you do not enter values, the experiment begins immediately after the file is deployed. Format: MM/DD/YYYY H:MM

Note: If you leave the start and stop times empty, the experiment runs indefinitely.

You can predefine start and stop times well in advance; you could, for example, configure all of your experiments for the next year at once. Start and stop times are based on the system time of the web server, so if that clock changes for any reason, your experiment may start or stop unexpectedly. If you would like to add an experiment as a configuration file entry but do not want the experiment to run in the near future, you can comment out the experiment information using the number sign (#) or define start and stop times in the past.

Stop: The date and time that you want the experiment to end. When the stop date and time occurs, Sensor stops sending the cookie values identified as a test group to the test URIs and sends all cookies to the control group URIs. Format: MM/DD/YYYY H:MM

Note: See the notes for the Start field.

Group: A descriptive name for each group of visitors in the experiment. Group names cannot contain spaces. Group names are used when displaying the results of experiments in Insight. For more information, see the Experiment field description. A control group can be implicitly or explicitly defined based on the value entered in the Percentage field.

Note: To meet the number of visitors needed during the defined time period for the experiment to be statistically valid, you may need to either decrease the confidence level or increase the time period. For example, if your time frame is five days, your confidence level is 98%, and the number of visitors needed exceeds the number expected for that time period, you need to either increase the time period or decrease the confidence level until the number of visitors expected exceeds the number needed to run a statistically valid experiment.

Percentage: The percentage of website visitors to include in each defined group. These values can be expressed as either percentages or decimal values, but all values must be expressed the same way (all greater than one, or all less than one). For example: 33.3% and 66.7%, or .99 and .01. If the sum for all groups is less than 100, the undefined excess defaults to a control group.

Original URL: The URI of the content to be remapped, followed by $. This value is case-sensitive. Format: index.asp$

A dollar sign ($) at the end of an Original URI denotes that an exact match of the file name is required. For example, the expression /product/product_view.asp$ matches that exact page only, while /product matches any page in the /product directory and could be used to remap that entire sub-tree. Original URL entries that do not end with the $ character are ignored by the experiment unless the ExpPartialMatch parameter has been set to on. For more information about this parameter, see Modifying the ExpPartialMatch Parameter (Optional).

Note: The controlled experiment functionality ignores any query strings appended to the URI stem. For example, the page /product/product_view.asp?productid=53982 is not a valid URI, but the page /product/product_view.asp is a valid URI.

Remapped URL: The URI of the alternate content. Format: index2.asp

Note: See the notes for the Original URL field.

The following is an example of a completed TestExperiment.xls spreadsheet:

Note: Do not modify the column positions in the spreadsheet.

This example indicates that the New_Homepage experiment starts on June 1, 2006, ends on June 30, 2006, and contains a control group with 50% of the visitors and a test group with 50% of the visitors, who see different content for one URI.

Note: Although the sample file above has an explicit control group defined, it is not necessary to explicitly define a control group; the experiment creates one automatically. If the sum of the percentages for all groups in an experiment is less than 100%, an implicit control group is assigned to users that do not fall into one of the explicit groups.

3. To insert comments that provide additional information about specific experiments, begin the cell with a number sign (#) and follow with your comments. Comments can be inserted anywhere in the file.

4. After you have completed the variables in the experiment configuration spreadsheet, save the changes, then save the file in tab-delimited text format (*.txt) using the name that you specified in the ExpFile parameter in the Sensor configuration file. See Modifying the ExpFile Parameter. The following is an example of an experiment configuration text file:
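The original guide shows this example as a screenshot. As a hedged, illustrative reconstruction only: based on the field descriptions above, a tab-delimited experiment configuration file for the New_Homepage example might look something like the following. The column order and the control row's self-remapping are assumptions drawn from the field list, not a verified Sensor file format, and the columns must be separated by real tab characters (which is why Adobe recommends generating the file from the Excel spreadsheet rather than editing it by hand):

```
# New_Homepage: test moving the Request a Demo link higher on the page
New_Homepage	06/01/2006 0:00	06/30/2006 0:00	Control	50%	index.asp$	index.asp
New_Homepage	06/01/2006 0:00	06/30/2006 0:00	index2	50%	index.asp$	index2.asp
```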

Note: Because of the tabs required in this file, do not edit the experiment configuration text file by hand. If you need to make changes, make them in the experiment configuration Excel file and re-save the file as a tab-delimited text file.

If you have defined Start and Stop times, there is no reason to ever delete an experiment from the experiment configuration file. Keeping all of your experiments listed in the file is actually a good way to keep a record of how you defined each of your experiments.

Deploying the Configuration File and Test Content

You must deploy the experiment configuration file to each machine in your web cluster that is running a Sensor and serving the pages involved in the experiment. You can do so using either a manual procedure or your existing content management system.

To deploy your test content

On each application or web server running a Sensor that is serving pages involved in the experiment, use your existing publishing process to deploy the test content to the appropriate location. For example, if you want to publish the test group page index2.asp to the test folder for your website (mysite.com), you would publish the file to www.mysite.com/test.

Note: Do not link to any of your test files directly from a page on your website. Doing so invalidates your test results and your index scores.

To deploy your experiment

On each application or web server running a Sensor that is serving pages involved in the experiment, place the experiment configuration text file in the directory that you specified in the ExpFile parameter in the Sensor configuration file. See Modifying the ExpFile Parameter. Sensor randomly selects website visitors for each group based on the percentages that you defined in the file and serves the test or control group content to them as appropriate.
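The random, percentage-based selection described above, including the implicit control group for any unallocated remainder, can be sketched as a simple model. The group names and percentages here are hypothetical, and this is not Sensor's implementation (Sensor keys the assignment to the visitor's persistent cookie so that each visitor stays in one group):

```python
import random

def assign_group(groups: dict, rng: random.Random) -> str:
    """Assign a new visitor to a group. `groups` maps group name to its
    fraction of traffic; any unallocated remainder becomes the
    implicit control group."""
    r = rng.random()
    cumulative = 0.0
    for name, fraction in groups.items():
        cumulative += fraction
        if r < cumulative:
            return name
    return "Control"  # implicit control group for the remainder

rng = random.Random(42)            # fixed seed so the sketch is repeatable
groups = {"Test": 0.25}            # hypothetical: 25% test, rest control
counts = {"Test": 0, "Control": 0}
for _ in range(10_000):
    counts[assign_group(groups, rng)] += 1
# counts["Test"] lands near 2,500 of 10,000 visitors
```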

Validating the Experiment

After you have deployed your experiment, validate that it is working properly. As discussed in Modifying the ExpCookieURL Parameter (Optional), the page specified in the ExpCookieURL parameter in the Sensor configuration file can be used to place yourself in a specific experiment group. The default virtual page is /setcookie.htm, but you must use the value that you set in the ExpCookieURL parameter.

Requesting the Test Page

To test a specific experiment group for your website, your browser must be configured to accept cookies, and you must not already have a cookie for the website. Each time you want to test a new group, make sure to clear your cookies for the website.

To place yourself into a specific group within a specific experiment, request the test page with a query string in the following form:

http://<site name><ExpCookieURL page>?<Experiment Name>=<Group Name>

For example:

http://www.omniture.com/setcookie.htm?new_homepage=index2

When the virtual URL request is sent to the server, Sensor identifies you as a member of the specified group within the specified experiment and then redirects you to the root of the website. You then can navigate to the appropriate location on the website to validate whether the correct content displays for that experiment and group.

If you were to type the example URL above into your browser, the browser would display the home page of the website and place you into the index2 group within the New_Homepage experiment. When visitors in the index2 group request the home page, the Request a Demo graphical link displays higher on the page, as in the following graphic:
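The URL format can be captured in a small helper for scripted validation runs. The function name is ours, and the default cookie page must match the value of the ExpCookieURL parameter in your Sensor configuration file.

```python
# Hypothetical helper for composing the validation URL. The cookie
# page (/setcookie.htm here) must match the ExpCookieURL parameter
# in your Sensor configuration file.

def validation_url(site, experiment, group, cookie_page="/setcookie.htm"):
    """Build the URL that places this browser into the given group
    (clear the site's cookies first)."""
    return "http://%s%s?%s=%s" % (site, cookie_page, experiment, group)

url = validation_url("www.omniture.com", "new_homepage", "index2")
print(url)  # http://www.omniture.com/setcookie.htm?new_homepage=index2
```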


Viewing the Results

You can view the results of an experiment in a visualization within Insight.

Configuring the Dataset

Before you can view controlled experiment data in Insight, you must modify your dataset configuration files to include a field and an extended dimension that contain information about your experiments. This enables you to view the results of an experiment in a table within Insight. To do so, complete the following steps:

Modifying Log Processing.cfg
Modifying Transformation.cfg

Note: Editing these configuration files causes your dataset to begin reprocessing immediately. Schedule this for a time that has the least effect on the users of your system. For more information about reprocessing a dataset, see the Server Products Installation and Administration Guide as well as the Dataset Configuration Guide.

Modifying Log Processing.cfg

You must add the x-experiment field to the Log Processing.cfg file. This field is used to create an extended dimension. See Modifying Transformation.cfg.

To modify Log Processing.cfg

1. In Insight, open the Profile Manager by right-clicking within a workspace and clicking Admin > Profile Manager, or by opening the Profile Management workspace on the Admin tab.
2. In the Profile Manager, click Dataset to show its contents.
3. Right-click the check mark next to Log Processing.cfg and click Make Local. A check mark for this file appears in the User column.
4. Right-click the newly created check mark and click Open > in Insight. The Log Processing.cfg window appears.
5. Click Fields to show its contents.
6. Right-click the last field in the current list and click Add new > Field.
7. Type x-experiment in the newly created field, as shown in the following example:

8. Right-click (modified) at the top of the window and click Save.
9. In the Profile Manager, right-click the check mark for Log Processing.cfg in the User column, then click Save to > <profile name> to save the locally made changes to the working profile.

Note: The dataset begins reprocessing immediately.

For more information about Log Processing and Fields, see the Dataset Configuration Guide.

Modifying Transformation.cfg

Now that the x-experiment field is available, you must create an extended dimension to include it in your dataset, which allows you to view your results in Insight. To do so, add a new dimension to the Transformation.cfg file.

If you plan to run multiple experiments, also add a new Split transformation to the Transformation.cfg file. This Split transformation separates the different experiment and group names so that the information is easier to interpret. To avoid reprocessing your data again if you need to add more experiments at a later date, Adobe recommends that you add the Split transformation even if you are not currently planning to run multiple experiments.

The following procedure includes the creation of both the new Split transformation and the extended dimension. If you do not want to add the Split transformation, skip steps 5-7.

To modify Transformation.cfg

1. In Insight, open the Profile Manager by right-clicking within a workspace and clicking Admin > Profile Manager, or by opening the Profile Management workspace on the Admin tab.
2. In the Profile Manager, click Dataset to show its contents.
3. Right-click the check mark next to Transformation.cfg and click Make Local. A check mark for this file appears in the User column.
4. Right-click the newly created check mark and click Open > in Insight. The Transformation.cfg window appears.
5. Click Transformation to show its contents.
6. Right-click Transformations and click Add new > Split.
7. Complete the new split on comma transformation as shown in the following example:

Note: You can enter any value in the Name field.

8. Right-click Extended Dimensions and click Add new > ManyToMany.
9. Complete the new dimension as shown in the following example:

Note: You can enter any value in the Name field. If you did not include the Split transformation, you must type x-experiment in the Input field.

10. Right-click (modified) at the top of the window and click Save.
11. In the Profile Manager, right-click the check mark for Transformation.cfg in the User column, then click Save to > <profile name> to save the locally made changes to the working profile.

Note: The dataset begins retransforming immediately.

For more information about Transformation.cfg and extended dimensions, see the Dataset Configuration Guide.
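To illustrate what the split-on-comma transformation accomplishes, the sketch below assumes the x-experiment field carries one comma-separated Experiment.Group token per active experiment; confirm the actual value format in your own log data.

```python
# Assumption: the x-experiment field carries one Experiment.Group
# token per active experiment, separated by commas. Confirm the
# actual value format in your own log data.

def split_experiments(x_experiment):
    """Split a raw x-experiment value into the individual
    Experiment.Group elements a many-to-many dimension holds."""
    return [token.strip() for token in x_experiment.split(",") if token.strip()]

print(split_experiments("New_Homepage.index2,Checkout_Test.Control"))
# ['New_Homepage.index2', 'Checkout_Test.Control']
```

Splitting the raw value this way is what lets one session contribute to several experiment elements at once, which is why the extended dimension is a ManyToMany.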

Viewing the Experiment Results

After you have added the new field to Log Processing.cfg and created the new Split transformation and extended dimension, you can view the new extended dimension as soon as the Fast Input stage of data reprocessing has finished. By default, this dimension displays the number of sessions for each of your experiment groups.

To view the experiment dimension

Within any workspace in Insight, open a table with the experiment dimension that you created. The dimension's elements, which represent each experiment you are currently running and each group within each experiment, display with the current number of sessions for each group. Each element is named using the experiment name followed by the group name:

Experiment Name.Group Name

For example: New Homepage.Control

The following table shows the Controlled Experiment Groups dimension that was created in Transformation.cfg and each of the experiments and their groups. The New Homepage experiment is shown at the bottom of the table with its two groups: Control and index2.

You now can use the experiment dimension and any relevant metrics to explore and interpret the experiment results, as well as create reports detailing those results.

Evaluating the Experiment

After running the experiment until the required minimum number of visitors have participated, you have sufficient statistical confidence to evaluate the results. Using Insight, compare whichever metrics or key performance indicators were defined as part of the hypothesis to determine whether the experiment was a success (that is, whether the hypothesis was validated with the specified confidence). In our example experiment, the hypothesis is proven correct if Visitor Conversion increases by at least 1.5%, which is the success criterion we defined earlier.
The following workspace example shows that the Conversion for the index2 test group was actually 1.8% higher than for the control group, proving our hypothesis.
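A comparison like the one above can be checked with a standard two-proportion z-test. The visitor and conversion counts below are invented for illustration; substitute the real control and test figures reported by Insight.

```python
import math

# The visitor counts below are invented for illustration; substitute
# the real control and test figures reported by Insight.

def conversion_lift(control_conv, control_n, test_conv, test_n):
    """Return (lift in percentage points, z statistic) for the
    difference between two conversion proportions."""
    p1 = control_conv / control_n
    p2 = test_conv / test_n
    pooled = (control_conv + test_conv) / (control_n + test_n)
    se = math.sqrt(pooled * (1 - pooled) * (1 / control_n + 1 / test_n))
    return (p2 - p1) * 100.0, (p2 - p1) / se

lift, z = conversion_lift(control_conv=600, control_n=15000,
                          test_conv=870, test_n=15000)
print(round(lift, 2), round(z, 2))  # lift of 1.8 points; z well above 1.96
```

A z statistic above roughly 1.96 corresponds to 95% confidence (two-sided) that the difference is real rather than random variation.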

This section covers the following topics:

Summarizing the Experiment Results
Taking Action Based on the Results
Monitoring Your Actions

Summarizing the Experiment Results

Using Insight, you can create detailed reports to summarize and illustrate the results of your experiment. You then can use your reports, as shown in the following example, to make recommendations based on the results, backed up by the visual information the reports provide:

Taking Action Based on the Results

After the results are clear, you are ready to act on them by making production-level changes to the tested pages, applying the same changes to other areas of your website, and completely documenting the test, its results, and the changes that you have made.

Monitoring Your Actions

After the controlled experiment is complete and you have implemented the appropriate changes, continue to monitor the changes that you made, for example by viewing validation metrics, creating control charts, and providing dashboard metrics. Be prepared to re-test your hypothesis if you suspect that the changes you tested and made are not bearing out the original results.
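A control chart of the kind mentioned above can be sketched with 3-sigma limits. The daily conversion figures here are invented sample data; use your own pre-change baseline and post-change metric series.

```python
import statistics

# Invented daily conversion rates (%) for a pre-change baseline;
# use your own metric series in practice.

def control_limits(baseline):
    """Center line and 3-sigma control limits from a baseline period."""
    mean = statistics.mean(baseline)
    sd = statistics.pstdev(baseline)
    return mean - 3 * sd, mean, mean + 3 * sd

baseline = [5.8, 6.1, 5.9, 6.0, 6.2, 5.7, 6.1]
lcl, center, ucl = control_limits(baseline)

# Flag post-change days that fall outside the limits.
out_of_control = [x for x in [6.0, 6.3, 4.9] if not lcl <= x <= ucl]
print(out_of_control)  # [4.9]
```

A day outside the limits signals that the metric has drifted from its expected range and the change may deserve re-testing.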

Experiment Design Spreadsheet

If you need help designing your experiment, you can use the experiment design spreadsheet (named VS Controlled Experiment Design.xls by default) provided by Adobe. This file functions not only as a worksheet but also as a record of your decisions about the experiment.

The experiment design spreadsheet can provide useful statistical inferences only when the metric in question is defined as a percentage of visitors that meet some criteria. That is, it is useful only when testing a visitor-based metric hypothesis.

To design your experiment using the experiment design file

1. If you have administrator access to your web or application servers, navigate to the Sensor installation folder on any Sensor machine in your web cluster. If you do not have administrator access, contact your Adobe account manager to request the file.

2. Open the VS Controlled Experiment Design.xls file. (You can rename this file if desired.)

The spreadsheet on the following page is an example of how you would complete the spreadsheet when preparing to test the example hypothesis used throughout this guide.

3. Enter text or values for all of the fields shown in blue in this file, which are described in the following table. The calculated fields are defined in the second table.

Fields to Define

Experiment Title: A descriptive name for your experiment.

Experiment Description: A textual description of the experiment.

Metric Being Studied: The name of the metric on which the experiment is based. Example: Visitor Conversion

Metric Definition: The definition of the metric on which the experiment is based. Format: Visitors[X]/Visitors. Example: Visitors[URI='conversionpage.asp']/Visitors

Intended Start Time: The date and time you want the experiment to begin.

Intended End Time: The date and time you want the experiment to end.

Applicable Selections: (Optional) The dimension name and element set or range by which you want to further segment the dataset.

Experiment URIs: The URIs involved in your hypothesis. You define the current URIs for the control group and the alternate URIs that you have created or will create for the test group(s).

Expected Metrics for Application Selections: Heading for the metric values that you expect for your website.

Average Visitors per Day: The average number of visitors to your website per day.

Visitor Conversion: The average visitor conversion rate for your website.

Experiment Will Determine if the metric name for the Test Groups is: Heading for how the metric values should be compared.

Greater Than The Value For the Control Group?: Set this field to True if you want the ability to conclude that the test group's metric increased during the experiment, or to False to reduce the number of visitors needed to draw conclusions. Adobe recommends that you set it to True.

Less Than The Value For the Control Group?: Set this field to True if you want the ability to conclude that the test group's metric decreased during the experiment. Adobe recommends that you set it to True.

By at Least (Detection Level): The percentage by which you want the metric for the test group to be higher or lower than that for the control group.

With a Confidence Level of at Least: The desired confidence level for the test group values. The confidence level, which controls the number of false positives, measures the probability that the stated expectation is true.

and a Power Level of: The desired power level for the test group values. The power level controls the number of false negatives.

% of Visitors: Heading for the percent of visitors values.

Test Group: Percent of visitors you want to include in the test group. You can adjust this number until the value in the Total (Usually 100%) field in the Visitors section is equal to or greater than the value in the Minimum Visitors Required (Test+Control Groups) field, both of which are described in the following table.

Control Group: Percent of visitors you want to include in the control group.

Other Design Notes: Any notes that you want to save for future reference.

The remaining fields are calculated based on the values that you entered and are described in the following table.

Calculated Fields

Expected Metrics for Application Selections: Heading for the metric values that you expect for your website.

Expected Visitors per Period: This field normally is calculated automatically by the spreadsheet. It relies on the assumption that on most days the website receives many more new visitors than return visitors. If this is not the case, overwrite this cell's calculation with the actual number of visitors expected during the experiment.

Calculated Z Score for Type I Error: The Z score for a false positive result. This is an intermediate statistical calculation.

Calculated Z Score for Type II Error: The Z score for a false negative result. This is an intermediate statistical calculation.

Minimum Visitors Required (Test+Control Groups): Minimum number of visitors needed in your experiment to meet your specified confidence level, power level, and Z score, expressed as a percentage of the value in the Expected Visitors per Period field.

Minimum Visitors Required (Test+Control Groups): Minimum number of visitors needed in your experiment to meet your specified confidence level, power level, and Z score. This value must be less than or equal to the value in the Total (Usually 100%) field in the Visitors section.

Minimum Experiment Time (Days): Minimum number of days you need to run your experiment to meet your specified confidence level, power level, and Z score. This calculated number is subject to the same issues discussed for the Expected Visitors per Period field. For a website with many returning visitors, this field is the expected number of days it takes to see a number of unique visitors equal to the value in the Minimum Visitors Required field.

Visitors: Heading for the visitors values.

Test Group: Number of visitors needed in the test group.

Control Group: Number of visitors needed in the control group.

Total (Usually 100%): Total number of visitors needed for the experiment. This value must be equal to or greater than the value in the Minimum Visitors Required (Test+Control Groups) field.

Test Group Accuracy (at Target Confidence Level): Percentage indicating that there is a chance equal to the specified confidence level that the measured value of the metric calculated for the test group will be within this percentage of its true value.

Control Group Accuracy (at Target Confidence Level): Percentage indicating the same for the control group.

Z Score (at Target Accuracy): Number of standard deviations a given value is from the test mean.

Actual Confidence Level (at Target Interval): The confidence level achieved for the experiment. The confidence level measures the probability that the stated expectation is true.

Actual Interval (at Target Confidence Level): The confidence interval achieved for the experiment, which provides an estimated range of values that is likely to include an unknown population parameter. This range is calculated from the sample data.

Compare the value in the Minimum Visitors Required (Test+Control Groups) field to the value in the Total field in the Visitors section.

For your experiment to be statistically valid, the value in the Total (Usually 100%) field must be equal to or greater than the value in the Minimum Visitors Required (Test+Control Groups) field. Given the inputs provided, the example worksheet shows that 10,475 visitors must participate in this experiment to achieve the entered 95% confidence level (the minimum suggested confidence for any controlled experiment, although you can increase it). The experiment as currently designed includes 30,000 visitors, which is well over the required minimum. If you keep the number of days the same, you could increase the confidence level as long as the total number of visitors continues to meet or exceed the required minimum.

4. Save the file for your records, and then use the information from the file to configure the experiment using the experiment configuration spreadsheet. For more information about that spreadsheet, see Configuring and Deploying the Experiment.
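The minimum-visitors figure follows the general shape of the textbook two-proportion sample-size calculation. The sketch below uses that standard formula, not the spreadsheet's exact internals, and the baseline rate and detection level are illustrative inputs, so its result will differ from the worksheet's 10,475.

```python
import math
from statistics import NormalDist

# Standard two-proportion sample-size approximation -- the general
# idea behind the spreadsheet's minimum-visitors figure, not its
# exact internal formula. The inputs below are illustrative.

def min_visitors_per_group(baseline_rate, detection_pp,
                           confidence=0.95, power=0.80, two_sided=True):
    """Approximate visitors needed per group to detect a change of
    detection_pp percentage points in a conversion rate."""
    alpha = 1.0 - confidence
    z_alpha = NormalDist().inv_cdf(1.0 - alpha / (2 if two_sided else 1))
    z_beta = NormalDist().inv_cdf(power)  # Z scores for Type I / Type II error
    p1 = baseline_rate
    p2 = baseline_rate + detection_pp / 100.0
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

n = min_visitors_per_group(baseline_rate=0.04, detection_pp=1.5)
print(n, "per group,", 2 * n, "in total")
```

Raising the confidence level or the power level, or shrinking the detection level, all increase the required number of visitors, which is the same trade-off you see when adjusting the blue fields in the worksheet.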