SE Labs Test Plan for Q Endpoint Protection : Enterprise, Small Business, and Consumer


Keywords: anti-malware; compliance; assessment; testing; test plan; template; endpoint; security; SE Labs

SE Labs and AMTSO
Preparation Date: July 20, 2017
Documentation Source Dates: June 2017
Version 1.0

SE Labs Test Plan for Q3 2017 Endpoint Protection: Enterprise, Small Business, and Consumer

Sponsored and Authored by:
SE Labs (Simon Edwards)
AMTSO (John Hawes, Scott Jeffreys)

Abstract: This Test Plan has been prepared jointly by SE Labs and AMTSO as part of the AMTSO Operational Pilot Implementation of the Draft 5 Standards. The Plan details the SE Labs testing activities in Endpoint Protection for the period July through September 2017. This document has been developed using Test Plan Template Version 1.1 from June 2017. Wherever conflicts might exist between this Template and the Standards, the Testing Protocol Standards will provide the prevailing rule.

Table of Contents

1. Introduction
2. Scope
3. Methodology and Strategy
4. Participation
5. Environment
6. Schedule
7. Control Procedures
8. Dependencies
9. Scoring Process
10. Dispute Process
11. Attestations

1. Introduction

SE Labs tests a variety of endpoint security products from a range of well-known vendors in an effort to judge which are the most effective. Each enterprise, small business, or consumer class product is exposed to the same threats, which are a mixture of targeted attacks using well-established techniques, public email, and web-based threats that are known or found to be live on the internet at the time of the test. The Test Reports indicate how effective the products were at detecting and/or protecting against those threats in real time. The most recent quarterly reports for each class of application are listed below.

https://selabs.uk/download/enterprise/apr-june-2017-enterprise.pdf
https://selabs.uk/download/small_business/apr-june-2017-smb.pdf
https://selabs.uk/download/consumers/apr-june-2017-consumer.pdf

2. Scope

The SE Labs Endpoint Test examines applications from the enterprise, small business, and consumer sectors, with the following companies and software versions composing the latest tests.

3. Methodology and Strategy

Test Framework - The test framework collects threats, verifies that they will work against unprotected targets, and exposes protected targets to the verified threats to determine the effectiveness of the protection mechanisms.

Threat Management System (TMS) - The Threat Management System is a database of attacks including live malicious URLs; malware attached to email messages; and a range of other attacks generated in the lab using a variety of tools and techniques. Threats are fed to the Threat Verification Network (TVN).

Threat Verification Network (TVN) - When threats arrive at the Threat Verification Network they are sent to Vulnerable Target Systems in a realistic way. For example, a target would load the URL for an exploit-based web threat into a web browser and visit the page, while its email client would download, process and open email messages with malicious attachments, downloading and handling the attachment as if a naïve user was in control. Replay systems are used to ensure consistency when using threats that are likely to exhibit random behaviors, and to make it simpler for other labs to replicate the attacks.

Target Systems (TS) - Target Systems (TS) are identical to the Vulnerable Target Systems used on the Threat Verification Network, except that they also have endpoint protection software installed.
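The TMS-to-TVN-to-TS flow described above can be sketched as a minimal model. This is purely illustrative: all class, function, and threat names here are invented for the example and do not represent SE Labs tooling.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    kind: str          # e.g. "web-exploit", "email-attachment"
    effective: bool    # does it compromise an unprotected target?

def verify(threats):
    """Threat Verification Network: keep only threats proven to
    compromise a Vulnerable Target System."""
    return [t for t in threats if t.effective]

def expose(threats, product_blocks):
    """Expose a protected Target System to each verified threat.
    product_blocks is a predicate standing in for the endpoint
    product under test."""
    return {t.name: ("blocked" if product_blocks(t) else "compromised")
            for t in verify(threats)}

# Hypothetical threat feed; "dead-link-02" fails verification and
# therefore never reaches a protected Target System.
feed = [Threat("exploit-kit-01", "web-exploit", True),
        Threat("dead-link-02", "web-exploit", False),
        Threat("macro-doc-03", "email-attachment", True)]

# A toy product that blocks only email-borne threats.
results = expose(feed, lambda t: t.kind == "email-attachment")
print(results)
# → {'exploit-kit-01': 'compromised', 'macro-doc-03': 'blocked'}
```

The key property the sketch captures is that only threats verified against unprotected systems count against the products under test.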
Threat Selection - All of the following threats are considered valid for inclusion in the test, although the distribution of the different types will vary according to the test's specific purpose:

- Public exploit-based web threats (exploitation attacks)
- Public direct-download web threats (social engineering attacks)
- Public email attachment threats (exploitation and social-engineering attacks)
- Private exploit-based web threats (exploitation attacks)
- Private direct-download web threats (social engineering attacks)
- Private email attachment threats (exploitation and social-engineering attacks)

Public threats are sourced directly from attacking systems on the internet at the time of the test and can be considered live attacks that were attacking members of the public at the time of the test run. Multiple versions of the same prevalent threats may be used in a single test run, but different domain names will always be used in each case. Private threats are generated in

the lab according to threat intelligence gathered from a variety of sources and can be considered as similar to more targeted attacks that are in common use at the time of the test run. All threats are identified, collected and analyzed independently of security vendors directly or indirectly involved in the test. The full threat sample selection will be confirmed by the Threat Verification Network as being malicious. False positive samples will be popular and non-malicious website URLs, as well as applications downloaded directly from their source websites where possible.

4. Participation

The business tests all contain voluntary vendors with whom we have negotiated contracts. As such, there are no specific invitations. The SE Labs Statements of Work are commercial documents that are not available for public disclosure. Rate schedules and services are available from SE Labs upon request. The consumer test comprises around one-third voluntary participants, with the remaining two-thirds being neutral. There are no involuntary participants in the consumer tests.

The general process for participation in this, or any, test with SE Labs follows:

- SE Labs or Vendor approach one another in person, by email or on the phone.
- Both parties will then:
  o Discuss the desired testing methodology and configuration
  o Agree terms and sign contracts
  o Begin work, or both parties do not reach agreement
  o SE Labs reserves the right that it may or may not test the product
  o Dispute processes proceed
  o Report publication

Please contact us at info@selabs.uk. We will be happy to arrange a phone call to discuss our methodology and the suitability of your product for inclusion.

Opt-Out Policy: No opt-out policy has been specified.

Funding: The products are chosen for this test by SE Labs. This test will not be sponsored. This means that no security vendor has control over the report's content or its publication. The anticipated test date range will be between July 1, 2017 and August 15, 2017.
SE Labs does not charge directly for testing products in public tests such as this one, but does charge for private tests.

5. Environment

Physical Configuration: The Target Systems are identical Windows PCs, specified below. Each system has unrestricted internet access and is isolated from other Target Systems using Virtual Local Area Networks (VLANs). Each system runs Windows 7 (64-bit), updated with security patches available up to Service Pack 1. The general Target System specification includes

Windows PCs with an Intel Core i3-4160 3.6GHz processor, 4GB RAM, and a 500GB 7200rpm SATA hard disk. Popular but vulnerable third-party applications installed include Adobe Flash Player, Adobe Reader, Apple QuickTime and Oracle Java (32-bit). If a security product requires an updated file from Microsoft, the tester will install the necessary file.

A web session replay system will be used when exposing systems to web-based threats. This provides an accurate simulation of a live internet connection and allows each product to experience exactly the same threat. All products have real-time and unrestricted access to the internet.

Sample Relevance (SE Labs criteria defining legitimate sample selection) - Non-malicious website URLs and application files are used to check for false positive detection. The number of these URLs and files will match the number of malware samples used. Candidates for legitimate sample testing include newly released applications, ranging from free software to the latest commercial releases. Potentially unwanted programs, which are not clearly malicious but which exhibit dubious privacy policies and behaviors, will be excluded from the test.

Curation Process: Malicious URLs and legitimate applications and URLs were independently located and verified by SE Labs. Targeted attacks were selected and verified by SE Labs. They were created and managed by Metasploit Framework Edition using default settings. The choice of exploits was advised by public information about ongoing attacks; one notable source was the 2016 Data Breach Investigations Report from Verizon. Details regarding Test Threat Selection and Management follow.

Sample numbers and sources - The Target Systems will be exposed to a selection of threats. These are weighted heavily (~75 per cent) towards public web-based threats, specifically direct and automatically exploiting drive-by attacks.
A smaller set of the samples will include public threats attached to emails and private, targeted attacks delivered by web exploitation or as email attachments. There may also be some threats found via alternative routes, such as instant messaging (IM) or peer-to-peer (P2P) networks.

Sample verification - Threats will be verified using Vulnerable Target Systems. Threat verification occurs throughout the test period, with live public threats being used shortly after they are verified as being effective against the Vulnerable Target Systems on the Threat Verification Network. In cases where a threat is initially verified to be effective, but is found not to be effective during testing (e.g. its C&C server becomes unavailable), the threat sample will be excluded from the test results of each product.

Attack stage - Threats will be introduced to the system in as realistic a manner as possible. This means that threats found as email attachments will be sent to target systems as attachments to email messages. Web-based threats are downloaded directly from their original sources. These downloads occur through a proxy system that includes a session replay service to ensure consistency. Public threats that run on the Target System are allowed 10 minutes to exhibit autonomous malicious behavior. This may include initiating connections to systems on the internet or making changes to the system to establish persistence.
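The exclusion rule in the sample verification step above (a sample that goes dead mid-test is dropped from every product's results, so all products are scored on the same sample set) can be sketched as follows. The function and sample names are invented for illustration.

```python
def exclude_dead_samples(per_product_results, dead_samples):
    """Drop samples that stopped working mid-test (e.g. their C&C
    server became unavailable) from every product's results, so the
    remaining sample set is identical across products."""
    return {product: {sample: verdict
                      for sample, verdict in results.items()
                      if sample not in dead_samples}
            for product, results in per_product_results.items()}

# Hypothetical raw verdicts; "t2" lost its C&C server during testing.
raw = {"ProductA": {"t1": "blocked", "t2": "compromised", "t3": "blocked"},
       "ProductB": {"t1": "compromised", "t2": "blocked", "t3": "blocked"}}

clean = exclude_dead_samples(raw, dead_samples={"t2"})
print(clean)
# → {'ProductA': {'t1': 'blocked', 't3': 'blocked'},
#    'ProductB': {'t1': 'compromised', 't3': 'blocked'}}
```

Dropping the sample from all products, rather than only where it failed, is what keeps the comparison fair: no product is penalized or credited for a threat that was not consistently live.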

Distribution of Test Data: Malicious and legitimate data will be provided to partner organizations once the full test is complete. SE Labs does not share data on one partner with other partners. We do not currently partner with organizations that do not engage in our testing. Any security vendor that has its product tested without permission may, at SE Labs' discretion, gain access to small subsets of data used in the test. A small administration fee is applicable.

6. Schedule

Schedule Summary for SE Labs Q3 2017 Endpoint Protection Test:

Project Index  Test Activity                                            Start Date Range                      Dependencies
1              Test Commencement                                        July 1, 2017 - July 17, 2017
2              Confirm Vendor Configuration Feedback                    July 17, 2017 - August 14, 2017
3              Milestone 1: Preliminary Results                         August 18, 2017                       (1), (2)
4              Feedback and Dispute Resolution Time; Retests as Needed  August 21, 2017 - September 11, 2017  (3)
6              Milestone 2: Issue Final Report; End Date for Test       September 25, 2017                    (4)

7. Control Procedures

Connectivity Validation: Automatic submission of data to vendors is disabled where possible, unless this reduces the immediate effectiveness of the product. A means for confirming whether a product's cloud connectivity or other features are functioning can be provided by the Vendor.

Logging: Products run with the default settings. Additional logging may be enabled if requested by the vendor of the product in question. Vendors of business software are invited to make configuration recommendations.

Updates: All products are updated fully using the latest definitions, patches and any other available updates. These updates are made immediately prior to each exposure to a threat or legitimate application. Products may be upgraded to the latest version if the version changes during the test period.

8. Dependencies

Participant Actions: Contact SE Labs for inclusion in the test cycle.

9. Scoring Process

The following occurrences during the attack stage will be recorded, and all contribute to the product effectiveness measure:

- The point of detection (e.g. before/after execution).
- Detection categorization, where possible (e.g. URL reputation, signature or heuristics).
- Details of the threat, as reported by the product (e.g. threat name; attack type).
- Unsuccessful detection of threats.
- Legitimate files allowed to run without problems.
- Legitimate files acted on in non-optimal ways (e.g. accusations of malicious behavior; blocking of installation) and at what stage (e.g. before/after execution).
- User alerts/interaction prompts, such as:
  o Pop-up information messages (even if not interactive).
  o Requests for action (take the default option, or follow the testing policy of a naïve user if no default is provided).
  o Default suggestions.
  o Time-out details (e.g. record if an alert/request for action disappears/takes a default action after n seconds of no user response).
- When an initial attack or attacker succeeds in downloading further malicious files, such downloads will be recorded along with the product's behavior (if any). This additional data will be presented alongside the main results, clearly labeled as representing a second attack. For statistical purposes, detection rates of these files will not be automatically added to the overall totals for each product (although doing so after the event will be possible).
- Any anomalies (e.g. strange or inconsistent behavior by the product).

Measuring Product Effectiveness: Each Target System is monitored to detect a product's ability to detect, block or neutralize threats that are allowed to execute. Third-party software records each Target System's state before, during and after the threat exposure stage.
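The recorded occurrences listed above can be modeled as a simple per-sample record. This is a sketch only: the field names are invented here to mirror the bullet list, not taken from SE Labs tooling.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Observation:
    """One recorded exposure of a Target System to one sample."""
    sample: str
    detected: bool
    point: Optional[str] = None        # e.g. "before-execution"
    category: Optional[str] = None     # e.g. "URL reputation"
    threat_name: Optional[str] = None  # as reported by the product
    prompts: List[str] = field(default_factory=list)   # user alerts
    anomalies: List[str] = field(default_factory=list) # odd behavior

obs = Observation(sample="exploit-kit-01", detected=True,
                  point="before-execution", category="URL reputation",
                  prompts=["pop-up: threat blocked"])
print(obs.detected, obs.point)
# → True before-execution
```

A record like this captures both successful and unsuccessful detections, so effectiveness and false-positive handling can be scored from the same structure.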
These results show the extent of an attacker's interaction with the target and the level of remediation provided by the product being tested. The same level of monitoring occurs when introducing legitimate URLs and files to assess false positive detection rates.

Awards: SE Labs provides badges, such as AAA and AA, based on the Test Scoring results. Partners can use the SE Labs awards logos for marketing purposes.

10. Dispute Process

Partners can dispute results.

11. Attestations

Please confirm that you intend to meet each of the following guidelines to the best of your

ability.

1. You (Tester) intend to provide public notification on the AMTSO website that it has met its obligation for public notification of a Public Test, regardless of whether a potential Participant is in actual receipt of such notification prior to the Commencement Date of a Test.
2. You (Tester) may charge for Participation in a Test, but will not charge additional fees for Participants to be Voluntary.
3. You (Tester) confirm that any material conflict of interest or other information that could materially impact the reliability of the Test will be disclosed.
4. You (Tester) confirm that all products included in this Test will be analyzed fairly and equally.
5. You (Tester) will disclose any anticipated imbalance or inequity in your test design to all Participants in your Test.
6. You (Tester) will provide disclosure as to how the test was funded.

I hereby affirm that this Test Plan complies with the AMTSO Testing Standards.

Signature: /s/ Simon Edwards
Name: Simon Edwards
Test Lab: SE Labs
AMTSO test ID: AMTSO-OP1-TP103