IBM BigFix Inventory Version 9.2.4 Scalability Guide. Version 1. IBM


Scalability Guide

This edition applies to version 9.2.4 of IBM BigFix Inventory (product number 5725-F57) and to all subsequent releases and modifications until otherwise indicated in new editions.

Copyright IBM Corporation 2002, 2016. US Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents

Scalability Guidelines  1
Introduction  1
Scanning and uploading scan data  1
Extract, Transform, Load (ETL)  2
Decision flow  3
Planning and installing BigFix Inventory  6
Hardware requirements  6
Network connection and storage throughput  7
Dividing the infrastructure into scan groups  7
Good practices for running scans and imports  8
Plan the scanning schedule  8
Avoid scanning when it is not needed  8
Limit the number of computer properties that are to be gathered during scans  9
Limit the number of BigFix Inventory computer groups  9
Recommendations for the service provider environments  9
Ensure that scans and imports are scheduled to run at night  9
Run the initial import  9
Review import logs  10
Maintain frequent imports  10
Enabling the cryptographic hash collection  11
Disable collection of usage data  11
Change the mode of sorting bundling options on the IBM Software Classification pane  12
Make room for end-of-scan-cycle activities  12
Configuring the application and its database for medium and large environments  12
Increasing Java heap size  13
Configuring the transaction logs size  13
Configuring the transaction log location for DB2  14
Configuring swappiness in Linux hosting DB2 database server  14
Configuring the DB2_COMPATIBILITY_VECTOR variable for improved UI performance  15
Shrinking MS SQL Server transaction log  15
Configuring the transaction log location for MS SQL Server  15
Rebuilding database indexes  15
Optimizing the tempdb database in Microsoft SQL Server  16
Backing up and restoring the database  16
Backing up the DB2 database  16
Backing up the SQL Server database  17
Restoring the DB2 database  18
Restoring the SQL Server database  19
Preventive actions  20
Limiting the number of scanned signature extensions  21
Disabling the calculation of extended software aggregates  22
Recovering from accumulated scans  22
Shortening the retention period gradually to avoid problems with growing database size  23
IBM PVU considerations  24
Web user interface considerations  24
REST API considerations  24
Using relays to increase the performance of IBM BigFix  25
Reducing the BigFix server load  25
Appendix. Executive summary  27
Notices  29
Trademarks  31
Terms and conditions for product documentation  31

Scalability Guidelines

This guide is intended to help system administrators plan the infrastructure of IBM BigFix Inventory and to provide recommendations for configuring the application server to achieve optimal performance. It explains how to divide computers into scan groups, schedule software scans, and run data imports. It also provides information about other actions that can be undertaken to avoid low performance.

Introduction

IBM BigFix clients report data to the BigFix server, which stores the data in its file system or database. The BigFix Inventory server periodically connects to the BigFix server and its database, downloads the stored data, and processes it. The process of transferring data from the BigFix server to the BigFix Inventory server is called Extract, Transform, Load (ETL). By properly scheduling scans and distributing them over the computers in your infrastructure, you can reduce the length of the ETL process and improve its performance.

Scanning and uploading scan data

To evaluate whether particular software is installed on an endpoint, you must run a scanner. It collects information about files with particular extensions, package data, and software identification tags. It also gathers information about the running processes to measure software usage. The software scan data must be transferred to the BigFix server, from which it can later be imported to BigFix Inventory.

To discover software that is installed on a particular endpoint and collect its usage, you must first install a scanner by running the Install Scanner fixlet. After the scanner is successfully installed, the Initiate Software Scan fixlet becomes relevant on the target endpoint. The following types of scans are available:

Catalog-based scan
In this type of scan, the BigFix Inventory server creates scanner catalogs that are sent to the endpoints. The catalogs do not include signatures that can be found based on the list of file extensions or entries that are irrelevant for a particular operating system. Based on these catalogs, the scanner discovers exact matches and sends its findings to the BigFix server. This data is then transferred to the BigFix Inventory server.

File system scan
In this type of scan, the scanner uses a list of file extensions to create a list of all files with those extensions on an endpoint.

Package data scan
In this type of scan, the scanner searches the system registry (Windows) or package management system (Linux, UNIX) to gather information about packages that are installed on the endpoints. Then, it returns the findings to the BigFix server, where the discovered packages are compared with the software catalog. If a particular package matches an entry in the catalog, the software is discovered.

Application usage statistics
In this type of scan, the scanner gathers information about processes that are running on the target endpoints.

Software identification tags scan
In this type of scan, the scanner searches for software identification tags that are delivered with software products.

Resource utilization scan
In this type of scan, the scanner searches for software license metric (SLM) tags that contain information about internal product metrics. These metrics are periodically logged from software products enabled for SLM (standardized in ISO 19770-4) and can be used to track their usage. The scanner returns its findings to the BigFix server, where the tags are processed. Based on the information that they contain, the maximum usage of license metrics over the last 30 days and the trend values are calculated. The scan collects information about license metrics that are relevant for licensing models other than PVU or RVU MAPC (these two metrics do not need to be logged separately). Since the volume of the collected data might be large, do not run this scan unless you want to monitor metrics other than PVU and RVU MAPC.

You should run the catalog-based, file system, package data, and software identification tags scans on a regular basis because they are responsible for software discovery. The application usage statistics scan gathers usage data and can be disabled if you are not interested in this information.

When the status of the Initiate Software Scan fixlet shows complete (100%), it indicates that the scan was successfully initiated. It does not mean that the relevant data was already gathered. After the scan finishes, the Upload Software Scan Results fixlet becomes relevant on the targeted endpoint, which means that the relevant data was gathered on the endpoints. When you run this fixlet, the scan data is uploaded to the BigFix server. It is then imported to BigFix Inventory during the Extract, Transform, Load (ETL) process.

Extract, Transform, Load (ETL)

Extract, Transform, Load (ETL) combines three database functions that transfer data from one database to another. The first stage, Extract, involves reading and extracting data from various source systems. The second stage, Transform, converts the data from its original format into the format that meets the requirements of the target database. The last stage, Load, saves the new data into the target database, thus finishing the process of transferring the data.
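The three generic ETL stages described above can be illustrated with a minimal sketch. All function and field names here are hypothetical, chosen only to show the extract-transform-load shape; BigFix Inventory's real ETL is internal to the product.

```python
# Minimal, illustrative sketch of the three ETL stages.
# The record fields ("host", "file_count") are invented for this example.

def extract(source_rows):
    """Extract: read raw records from the source system."""
    return list(source_rows)

def transform(raw_rows):
    """Transform: convert each record into the target format."""
    return [{"computer": r["host"].lower(), "files": int(r["file_count"])}
            for r in raw_rows]

def load(target_db, rows):
    """Load: save the transformed records into the target store."""
    target_db.extend(rows)
    return len(rows)

source = [{"host": "SRV01", "file_count": "15000"},
          {"host": "SRV02", "file_count": "9000"}]
target = []
loaded = load(target, transform(extract(source)))
print(loaded)  # 2
```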
In BigFix Inventory, the Extract stage involves extracting data from the BigFix server. The data includes information about the infrastructure, installed agents, and detected software. ETL also checks whether a new software catalog is available, gathers information about the software scan and files that are present on the endpoints, and collects data from VM managers. The extracted data is then transformed into a single format that can be loaded into the BigFix Inventory database. This stage also involves matching scan data with the software catalog, calculating processor value units (PVUs), processing the capacity scan, and converting information that is contained in the XML files. After the data is extracted and transformed, it is loaded into the database and can be used by BigFix Inventory.

The heaviest load on the BigFix Inventory server occurs during ETL, when the following actions are performed:
- A large number of small files is retrieved from the BigFix server (Extract).
- Many small and medium files that contain information about installed software packages and process usage data are parsed (Transform).
- The database is populated with the parsed data (Load).

At the same time, BigFix Inventory prunes large volumes of old data that exceed its data retention period. Performance of the ETL process depends on the number of scan files, usage analyses, and package analyses that are processed during a single import. The main bottleneck is storage performance, because many small files must be read, processed, and written to the BigFix Inventory database in a short time. By properly scheduling scans and distributing them over the computers in your infrastructure, you can reduce the length of the ETL process and improve its performance.

An important factor that influences the duration of the ETL process is the number of updates on the file system since the last scan. Operations such as security updates or significant system upgrades can cause ETL to run longer, because it has to process information about all modified files. For example, regular updates released by Microsoft on Tuesdays would significantly lengthen the Wednesday import in environments with many Windows platforms.

[Figure: Extract, Transform, and Load. Scan data, usage data, capacity data, package data, software identification tags, and VM manager information (XML files) flow from BigFix clients on Windows, Linux, and UNIX through relays to the BigFix server and its database (1. Extract). On the BigFix Inventory server, information from the XML files is processed, transformed into a single format, matched with the software catalog, and PVU and RVU values are calculated (2. Transform). The data is then loaded into the BigFix Inventory database tables (3. Load).]

Decision flow

To avoid running into performance issues, you should divide the computers in your infrastructure into scan groups and properly set the scan schedule. You should start by creating a benchmark scan group on which you can try different configurations to achieve an optimal import time. After the import time is satisfactory for the benchmark group, you can divide the rest of your infrastructure into analogous scan groups.

Start by creating a single scan group that will be your benchmark. The size of the scan group might vary depending on the size of your infrastructure.
However, the recommendation is to avoid creating a group larger than 20 000 endpoints.
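Given that ceiling, a small helper can estimate how many equally sized scan groups an environment needs. This is a sketch of the sizing arithmetic only; the 20 000-endpoint default comes from the recommendation above.

```python
import math

def scan_groups_needed(endpoints, max_group_size=20000):
    """Smallest number of equally sized scan groups such that no
    group exceeds the recommended ceiling (20 000 by default)."""
    return max(1, math.ceil(endpoints / max_group_size))

print(scan_groups_needed(15000))  # 1
print(scan_groups_needed(42000))  # 3
print(scan_groups_needed(60000))  # 3
```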

Scan the computers in this scan group. When the scan finishes, upload its results to the BigFix server and run an import. Check the import time and decide whether its duration is satisfactory. For information about running imports, see section Good practices for running scans and imports on page 8. If you are not satisfied with the import time, check the import log and try one of the following actions:
- If the import of raw file system scan data or package data takes longer than one third of the ETL time and the volume of the data is large (a few million entries), create a smaller group. For additional information, see section Dividing the infrastructure into scan groups on page 7.
- If the import of raw file system scan data or package data takes longer than one third of the ETL time but the volume of the data is low, fine-tune the hardware. For information about processor and RAM requirements as well as network latency and storage throughput, see section Planning and installing BigFix Inventory on page 6.
- If processing of usage data takes an excessive amount of time and you are not interested in collecting usage data, disable gathering of usage data. For more information, see section Disable collection of usage data on page 11.

After you adjust the first scan group, run the software scan again, upload its results to the BigFix server, and run an import. When you achieve an import time that is satisfactory, decide whether you want to have a shorter scan cycle. For example, if you have an environment that consists of 42 000 endpoints and you created seven scan groups of 6000 endpoints each, your scan cycle lasts seven days. To shorten the scan cycle, you can try increasing the number of computers in a scan group, for example, to 7000. This allows you to shorten the scan cycle to six days.
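The scan-cycle arithmetic in the example above can be sketched as a one-line calculation (assuming one scan group is scanned per day, as in the example):

```python
import math

def scan_cycle_days(endpoints, group_size):
    """Scan cycle length in days when one scan group runs per day."""
    return math.ceil(endpoints / group_size)

print(scan_cycle_days(42000, 6000))  # 7
print(scan_cycle_days(42000, 7000))  # 6
```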
After you increase the scan group size, observe the import time to ensure that its performance remains at an acceptable level. When you are satisfied with the performance of the benchmark scan group, create the remaining groups. Schedule scans so that they fit into your preferred scan cycle. Then, schedule the import of data from BigFix. Observe the import time. If it is not satisfactory, adjust the configuration as you did in the benchmark scan group. When you achieve suitable performance, plan for end-of-cycle activities. Use the following diagram to get an overview of the actions and decisions that you will have to undertake to achieve optimal performance of BigFix Inventory.

[Figure: Decision flow. Installation: plan and install BigFix Inventory. Configuration: create a scan group (up to 20 000 computers), initiate the scan and upload scan results, run an import and check its time. If the import time is not satisfactory, fine-tune hardware (if possible), create a smaller scan group, or disable gathering of usage data (if you do not need it), then repeat. If you want a shorter scan cycle, increase the size of the scan group. Otherwise, create the remaining scan groups, schedule the scans to fit into the scan cycle, and schedule daily imports. If the import time is still not satisfactory, repeat the tuning steps; when it is, plan for end-of-cycle activities.]

Planning and installing BigFix Inventory

Your deployment architecture depends on the number of endpoints that you want to have in your audit reports. For information about the BigFix requirements, see Server requirements available in the documentation.

Hardware requirements

If you already have the BigFix server in your environment, plan the infrastructure for the BigFix Inventory server. The BigFix Inventory server stores its data in a dedicated database, either DB2 or MS SQL Server. The following tables are applicable for environments in which scans are run weekly, imports are run daily, and 60 applications are installed per endpoint (on average).

Table 1. Processor and RAM requirements for BigFix Inventory installed with Microsoft SQL Server

Environment size | Topology | Processor | Memory
Small environment (up to 5 000 endpoints) | 1 server: IBM BigFix, BigFix Inventory, and SQL Server | 2-3 GHz, 4 cores | 8 GB
Medium environment (5 000 - 50 000 endpoints*) | 2/3 servers**: IBM BigFix | 2-3 GHz, 4 cores | 16 GB
 | BigFix Inventory and SQL Server | 2-3 GHz, 4 cores | 12-24 GB
Large environment (50 000 - 150 000 endpoints*) | 2/3 servers**: IBM BigFix | 2-3 GHz, 4-16 cores | 16-32 GB
 | BigFix Inventory and SQL Server | 2-3 GHz, 8-16 cores | 32-64 GB
Very large environment (more than 150 000 endpoints*) | 2/3 servers**: IBM BigFix | 2-3 GHz, 16 cores | 32-64 GB
 | BigFix Inventory and SQL Server | 2-3 GHz, 8-16 cores | 64-96 GB***

Table 2. Processor and RAM requirements for BigFix Inventory installed with DB2

Environment size | Topology | Processor | Memory
Small environment (up to 5 000 endpoints) | 1 server: IBM BigFix, BigFix Inventory, and DB2 | 2-3 GHz, 4 cores | 8 GB
Medium environment (5 000 - 50 000 endpoints*) | 2/3 servers**: IBM BigFix | 2-3 GHz, 4 cores | 16 GB
 | BigFix Inventory and DB2 | 2-3 GHz, 4 cores | 12-24 GB
Large environment (50 000 - 150 000 endpoints*) | 2/3 servers**: IBM BigFix | 2-3 GHz, 4-16 cores | 16-32 GB
 | BigFix Inventory and DB2 | 2-3 GHz, 8-16 cores | 32-64 GB

Table 2. Processor and RAM requirements for BigFix Inventory installed with DB2 (continued)

Environment size | Topology | Processor | Memory
Very large environment (more than 150 000 endpoints*) | 2/3 servers**: IBM BigFix | 2-3 GHz, 16 cores | 32-64 GB
 | BigFix Inventory and DB2 | 2-3 GHz, 8-16 cores | 64-96 GB

* For environments with up to 35 000 endpoints, there is no requirement to create scan groups. If you have more than 35 000 endpoints in your infrastructure, you must create scan groups. For more information, see section Dividing the infrastructure into scan groups.
** A distributed environment, where BigFix Inventory is separated from the database, is advisable.
*** SQL Server must be throttled to 3/4 of the RAM capacity.

Medium-size environments
You can use virtual environments for this deployment size, but it is advisable to have dedicated resources for processor, memory, and virtual disk allocation. The virtual disk that is allocated for the virtual machine should have dedicated RAID storage, with dedicated input-output bandwidth for that virtual machine.

Large and very large environments
For large deployments, use dedicated hardware. For optimum performance, use a database server that is dedicated to BigFix Inventory and is not shared with BigFix or other applications. Additionally, you might want to designate a separate disk, attached to the computer where the application database is installed, to store the database transaction logs. You might need to do some fine-tuning based on the provided recommendations.

Network connection and storage throughput

The Extract, Transform, Load (ETL) process extracts a large amount of scan data from the BigFix server, processes it on the BigFix Inventory server, and saves it in the DB2 or MS SQL database.
The following two factors affect the time of the import to the BigFix Inventory server:

Gigabit network connection
Because of the nature of the ETL imports, you are advised to have at least a gigabit network connection between the BigFix, BigFix Inventory, and database servers.

Disk storage throughput
For large deployments, you are advised to have dedicated storage, especially for the database server. The minimum expected disk speed for writing data is approximately 400 MB/second.

Dividing the infrastructure into scan groups

It is critical for BigFix Inventory performance that you properly divide your environment into scan groups and then accurately schedule scans in those scan groups. If the configuration is not well-balanced, you might experience long import times. For environments larger than 35 000 endpoints, divide your endpoints into separate scan groups. The system administrator can then set a different scanning schedule for every scan group in your environment.

Example: If you have 60 000 endpoints, you can create six scan groups (every group containing 10 000 endpoints). The first scan group has the scanning schedule set to Monday, the second to Tuesday, and so on. Using this configuration, every endpoint is scanned once a week. At the same time, the BigFix server receives data only from 1/6 of your environment daily, and for every daily import the BigFix Inventory server needs to process data from only 10 000 endpoints (instead of 60 000 endpoints). This configuration shortens the BigFix Inventory import time.

The image below presents a scan schedule for an infrastructure that is divided into six scan groups. You might achieve such a schedule after you implement the recommendations that are contained in this guide. The assumption is that both software scans and imports of scan data to BigFix Inventory are scheduled to take place at night, while uploads of scan data from the endpoints to the BigFix server occur during the day. If you have a powerful server computer and a longer import time is acceptable, you can create fewer scan groups with a greater number of endpoints in the BigFix console. Remember to monitor the import log to analyze the amount of data that is processed and the time it takes to process it. For information about how to create scan groups, see the topic Computer groups that is available in the BigFix documentation.

Good practices for running scans and imports

After you enable the BigFix Inventory site in your BigFix console, carefully plan the scanning activities and their schedule for your deployment.

Plan the scanning schedule

After you find the optimal size of the scan group, set the scanning schedule, that is, the frequency of the software scan on an endpoint. The most common scanning schedule is weekly, so that every endpoint is scanned once a week. If your environment has more than 100 000 endpoints, consider performing scans less frequently, for example monthly. If scans are running daily, take system updates into account, because when many files are modified, the next data import runs longer.

Avoid scanning when it is not needed

The frequency of scans depends both on how often software products change on the endpoints in your environment and on your reporting needs.
If you have systems in your environment that have dynamically changing software, you can group such systems into a scan group (or groups) and set more frequent scans, for example once a week. The remaining scan groups that contain computers with a more stable set of software can be scanned less frequently, for example once a month.
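The one-group-per-weekday pattern from the 60 000-endpoint example earlier (six scan groups, the first scheduled on Monday, the second on Tuesday, and so on) can be sketched as a simple assignment. The group names here are hypothetical placeholders, not names the product generates.

```python
DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

def weekly_schedule(group_names):
    """Assign each scan group a distinct weekday so that every
    endpoint is scanned exactly once per week."""
    if len(group_names) > len(DAYS):
        raise ValueError("more scan groups than days in the weekly cycle")
    return dict(zip(group_names, DAYS))

groups = [f"ScanGroup{i}" for i in range(1, 7)]  # six groups of 10 000
schedule = weekly_schedule(groups)
print(schedule["ScanGroup1"])  # Mon
print(schedule["ScanGroup6"])  # Sat
```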

Limit the number of computer properties that are to be gathered during scans

By default, the BigFix Inventory server includes four primary computer properties from the BigFix server that is configured as the data source: computer name, DNS name, IP address, and operating system. Imports can be substantially longer if you specify more properties to be extracted from the BigFix database and copied into the BigFix Inventory database during each import. As a good practice, limit the number of computer properties to 10 (or fewer).

Limit the number of BigFix Inventory computer groups

Create as small a number of computer groups as possible. The data import phase (ETL) gets longer with a growing number of computer groups. If the size of your environment requires that you create many computer groups despite the recommendations, consider skipping the calculation of extended software aggregates. By skipping these calculations, you can noticeably reduce the length of data imports in very large environments. For more information, see Disabling the calculation of extended software aggregates.

Recommendations for the service provider environments

The service provider functionality is used to create separate subcapacity reports for different sets of computers. You can create such a report for each computer group. By default, the calculation of PVU and RVU MAPC consumption is disabled for new computer groups. After you enable it, a computer group becomes a subcapacity computer group and can have its own subcapacity report. This, however, impacts performance due to the increased number of subcapacity calculations. If you decide to create more than 10 subcapacity computer groups, you must adjust the use of your CPU resources and tune the default configuration of BigFix Inventory. Among the most important processes for subcapacity computer groups are aggregation and reaggregation, which are responsible for the PVU and RVU calculations.
The performance of these processes depends on the maximum number of threads that can be run for the calculations, which by default is 2. By increasing the number of available threads, you improve performance. For many subcapacity computer groups, it is advisable to increase the number of available threads by specifying the values of the maxaggregationthreads and maxreaggregationthreads parameters. In general, you should provide two processor cores for each thread on your database server. For example, if you can provide 12 processor cores for the aggregation process itself, increase the number of threads to 6 by specifying maxaggregationthreads=6. You can specify the values of these parameters by using the REST API. For more information, see Configuration of the administration server settings.

Ensure that scans and imports are scheduled to run at night

Some actions in the BigFix Inventory user interface cannot be processed when an import is running. Thus, try to schedule imports when the application administrator and the Software Asset Manager are not using BigFix Inventory, or after they have finished their daily work.

Run the initial import

It is a good practice to run the first (initial) import before you schedule any software scans and activate any analyses. Examples of when imports can be run:
- The first import uploads the software catalog from the installation directory to the application and extracts the basic data about the endpoints from the BigFix server.
- The second import can be run after the scan data from the first scan group is available in the BigFix server.

The third import should be started after the scans from the second scan group are finished, and so on.

Review import logs

Review the following INFO messages in the import log to check how much data was transferred during an ETL.

Infrastructure
Computer items: The total number of computers in your environment. A computer is a system with a BigFix agent that provides data to BigFix Inventory.

Software and hardware
SAM::ScanFile items: The number of files that have input data for the following items: file system scan information (SAM::FileFact items), catalog-based scan information (SAM::CitFact items), and software identification tag scan information (SAM::IsotagFact items).
SAM::FileFactDelta items: The total count of information pieces about files that changed between the last two full file system scans.
SAM::FileFact items: The total count of information pieces about files from all computers in your environment (contained in the processed scan files).
SAM::CitFact items: The total count of information pieces from catalog-based scans (contained in the processed scan files).
SAM::IsotagFact items: The total count of information pieces from software identification tag scans (contained in the processed scan files).

Installed packages
SAM::PackageFact items: The total count of information pieces about Windows packages that have been gathered by the package data scan.
SAM::UnixPackageFact items: The total count of information pieces about UNIX packages that have been gathered by the package data scan.

Software usage
SAM::AppUsagePropertyValue items: The total number of processes that were captured during scans on the systems in your infrastructure.
Example:

INFO: Computer items: 15000
INFO: SAM::AppUsagePropertyValue items: 4250
INFO: SAM::ScanFile items: 30000
INFO: Delta changes applied on model SAM::FileFact: 0 rows
INFO: Number of computers processing delta file scan data: 0
INFO: SAM::FileFactDelta items: 0
INFO: Number of computers processing full file scan data: 16
INFO: Inserting new 28423 rows into SAM::FileFact
INFO: SAM::FileFact items: 15735838
INFO: SAM::IsotagFact items: 0
INFO: SAM::CitFact items: 149496
INFO: SAM::PackageFact items: 406687
INFO: SAM::UnixPackageFact items: 1922564

Maintain frequent imports

After the installation, imports are scheduled to run once a day. Do not change this configuration. However, you might want to change the hour when the import starts. If your import takes longer than 24 hours, you can:
Improve the scan groups configuration.
Preserve the current daily import configuration, because BigFix Inventory handles overlapping imports gracefully. If an import is running, no other import is started.
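The INFO lines in the example above follow a regular "<name> items: <count>" pattern, so the transferred volumes can be extracted and compared between imports. A minimal sketch; the parsing helper is illustrative, not a BigFix Inventory tool:

```python
import re

# Match only the "<name> items: <count>" INFO lines from an import log;
# other INFO lines (delta rows, computer counts) are ignored.
ITEM_RE = re.compile(r"INFO: (?P<name>.+?) items: (?P<count>\d+)")

def parse_import_log(text: str) -> dict:
    """Map item names (e.g. 'SAM::FileFact') to their imported counts."""
    return {m.group("name"): int(m.group("count"))
            for m in ITEM_RE.finditer(text)}

log = """INFO: Computer items: 15000
INFO: SAM::ScanFile items: 30000
INFO: SAM::FileFact items: 15735838"""
print(parse_import_log(log))
```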

Enabling the cryptographic hash collection

Each change to the configuration of the cryptographic hash collection (enabling, disabling, adding new types) significantly lengthens the first data import that follows the change. Because of multiple modifications on the file system, the new configuration triggers a complete data import instead of a delta one, in which only modifications are imported. This first data import might take up to three times longer, and the subsequent ones about 10% longer, than data imports without file hashes. The impact on subsequent data imports is considered moderate. Before enabling the collection of file hashes, it is recommended to divide your environment into scan groups to distribute the load of the imported data. If the extended data import (three times longer) is acceptable, enable the collection of file hashes for all scan groups, and collect the data according to schedule. If the extended data import does not meet your expectations, rearrange your scan groups into smaller ones with fewer endpoints to lower the amount of data included in a single data import. After the first import is completed for all scan groups, you can go back to the previous setup. In general, scan groups are highly recommended for environments with more than 35,000 endpoints. For smaller environments, you must decide whether the extended data import is acceptable, or whether you want to distribute it among several smaller ones by using scan groups.

Example
In an environment with 60,000 endpoints divided into 6 scan groups (each with 10,000 endpoints), where each scan group is scanned on a different day, the file hashes will be collected in 6 days. The initial import for each scan group after enabling the collection might be three times longer. Subsequent imports will take about 10% longer.
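The arithmetic in the example above can be sketched as follows, assuming one scan group is scanned per day, a first import of up to 3x the baseline per group, and subsequent imports about 10% longer. The multipliers come from the text; the helper functions are illustrative:

```python
# Sketch: estimate the hash-enabling rollout described in the example above.
# Assumes one scan group is scanned per day (as in the example).
def rollout_days(endpoints: int, group_size: int) -> int:
    """Days until file hashes are collected for every endpoint."""
    return -(-endpoints // group_size)  # ceiling division

def first_import_hours(baseline_hours: float) -> float:
    return baseline_hours * 3    # worst case right after enabling hashes

def later_import_hours(baseline_hours: float) -> float:
    return baseline_hours * 1.1  # subsequent imports: about 10% longer

# The 6 scan groups of 10,000 endpoints from the example.
print(rollout_days(60_000, 10_000))
```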
Impact of file hashes on the BigFix Inventory database size

For both DB2 and SQL Server databases, the collection of file hashes (MD5 and SHA-256) is expected to increase disk space consumption by about 20%.

Impact of file hashes on the BigFix client

File hashes are calculated during the software scan and the results are gathered on the endpoint. The size of scan results will increase by about 5%. For an average endpoint with 30 matched and 800 unmatched raw data files, an additional 0.5 MB of disk space might be consumed. To reduce the impact on the import time and the size of the database, consider limiting the number of scanned signature extensions.

Disable collection of usage data

Software usage data is gathered by the Application Usage Statistics analysis. If the analysis is activated, usage data is gathered from all endpoints in your infrastructure. However, the data is uploaded to the BigFix server only for the endpoints on which you run software scans. For the remaining endpoints, the data is stored on the endpoint until you run the software scan.

About this task
If you do not need usage data or the deployment phase is not finished, do not activate the analysis. It can be activated later, if needed. If the analysis is already activated but you decide that processing usage data takes too much time, or you are not interested in usage statistics, disable the analysis.

Procedure
1. Log in to the BigFix console.
2. In the navigation tree, open IBM BigFix Inventory 9 > Analyses.
3. In the upper-right pane, right-click Application Usage Statistics, and click Deactivate.

Change the mode of sorting bundling options on the IBM Software Classification pane

By default, the bundling options that are displayed when you reassign a software component on the IBM Software Classification pane are sorted by confidence. You might want to change this mode of sorting bundling options if the IBM Software Classification pane is running slowly or the BigFix Inventory server is under a heavy load. If you set the value of the blockuibundlingcomputations parameter to true, the bundling options are sorted alphabetically and are displayed more quickly because no additional computation is added to the server workload.

Procedure
To set the parameter value to true, open a REST add-on in your web browser, for example Advanced REST client, and run the following REST API query:
PUT http://bfi_server_host_name:port_number/rest/configs?token=token&name=blockuibundlingcomputations&value=true
Example:
PUT http://localhost:9981/api/sam/configs?token=7adc3efb175e2bc0f4484bdd2efca54a8fa04623&name=blockuibundlingcomputations&value=true
To run the query with the curl tool, enter the following command:
curl -X PUT "http://bfi_server_host_name:port_number/rest/configs?token=token&name=blockuibundlingcomputations&value=true"
Example:
curl -X PUT "http://localhost:9981/api/sam/configs?token=7adc3efb175e2bc0f4484bdd2efca54a8fa04623&name=blockuibundlingcomputations&value=true"

Make room for end-of-scan-cycle activities

Plan to have a data export to other integrated solutions (for example, SmartCloud Control Desk through IBM Tivoli Integration Composer) at the end of a 1- or 2-week cycle.
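The same PUT call can be prepared from a script instead of a REST browser add-on. A minimal sketch using only Python's standard library; the host, port, and token are placeholders, and the /api/sam/configs path follows the example above and should be verified against your installation:

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_config_request(host: str, port: int, token: str) -> Request:
    """Build the PUT request that sets blockuibundlingcomputations=true."""
    query = urlencode({
        "token": token,                          # placeholder API token
        "name": "blockuibundlingcomputations",   # parameter from the text
        "value": "true",
    })
    url = f"http://{host}:{port}/api/sam/configs?{query}"
    return Request(url, method="PUT")

# Placeholder host/port/token; send with urllib.request.urlopen(req).
req = build_config_request("localhost", 9981,
                           "7adc3efb175e2bc0f4484bdd2efca54a8fa04623")
print(req.get_method(), req.full_url)
```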
Configuring the application and its database for medium and large environments

To avoid performance issues in medium and large environments, configure the location of the transaction log and adjust the log size. If you are using MS SQL as the BigFix Inventory database, you might want to shrink the transaction log or update query optimization statistics.

Component: BigFix Inventory server
Configuration tasks:
Increasing Java heap size on page 13

Component: DB2
Configuration tasks:
Configuring the transaction logs size on page 13
Configuring the transaction log location for DB2 on page 14
Configuring swappiness in Linux hosting the DB2 database server on page 14
Configuring the DB2_COMPATIBILITY_VECTOR variable for improved UI performance on page 15

Component: Microsoft SQL Server
Configuration tasks:
Shrinking the MS SQL Server transaction log on page 15
Configuring the transaction log location for MS SQL Server on page 15
Rebuilding database indexes on page 15
Optimizing the tempdb database in Microsoft SQL Server on page 16

Note: In Microsoft SQL Server, the transaction log grows automatically; no further action is required.

Increasing Java heap size

The default settings for the Java heap size might not be sufficient for medium and large environments. If your environment consists of more than 5,000 endpoints, increase the memory available to Java client processes by increasing the Java heap size.

Procedure
1. Go to the <INSTALL_DIR>/wlp/usr/servers/server1/ directory and edit the jvm.options file.
2. Set the maximum Java heap size (Xmx) to one of the following values, depending on the size of your environment:
For medium environments (5,000 to 50,000 endpoints), set the heap size to 6144m.
For large environments (over 50,000 endpoints), set the heap size to 8192m.
3. Restart the BigFix Inventory server.

Configuring the transaction logs size

If your environment consists of many endpoints, increase the transaction logs size to improve performance.

About this task
The transaction logs size can be configured through the LOGFILSIZ DB2 parameter, which defines the size of a single log file. To calculate the value for this parameter, first calculate the total disk space that is required for transaction logs in your environment, and then derive from it the size of a single log file. The required amount of disk space depends on the number of endpoints in your environment and the number of endpoints in the biggest scan group for which data is processed during the import.

Important: Use the provided formula to calculate the size of transaction logs that are generated during the import of data. More space might be required for transaction logs that are generated when you remove a data source.
Procedure
1. Use the following formula to calculate the disk space that is needed for transaction logs:
<number of computers> x 1.2 MB + <number of computers in the biggest scan group> x 1.2 MB + 17 GB
2. To obtain the size of a single transaction log file that can be specified in the LOGFILSIZ DB2 parameter, multiply the result (in GB) by 1852.
Note: The factor 1852 expresses the relation between the primary and secondary log files and is necessary to calculate the size of a single transaction log file (LOGFILSIZ). It was calculated assuming the default number of log files (LOGPRIMARY = 25 and LOGSECOND = 110).
3. Run the following command to update the transaction log size in your database. Substitute value with the size of a single transaction log.

db2 update database configuration for TEMADB using logfilsiz value
4. For the changes to take effect, restart the database. Run the following commands:
db2 deactivate db TEMADB
db2stop
db2start
db2 activate db TEMADB
5. Restart the BigFix Inventory server.
a. To stop the server, run the following command: /etc/init.d/bfiserver stop
b. To start the server, run the following command: /etc/init.d/bfiserver start

Example
Calculating the single transaction log size for 100,000 endpoints with 15,000 endpoints in the biggest scan group:
100,000 x 1.2 MB + 15,000 x 1.2 MB + 17 GB = 155 GB
155 x 1852 = 287060
287060 is the value to be specified in the LOGFILSIZ parameter.

Configuring the transaction log location for DB2

To increase database performance, move the DB2 transaction log to a file system that is separate from the DB2 file system.

About this task
Medium environments: Strongly advised
Large environments: Required
Very large environments: Required

Procedure
1. To move the DB2 transaction log to a file system that is separate from the DB2 file system, update the DB2 NEWLOGPATH parameter for your BigFix Inventory database:
UPDATE DATABASE CONFIGURATION FOR TEMADB USING NEWLOGPATH value
Where value is a directory on a separate disk (different from the disk where the DB2 database is installed) where you want to keep the transaction logs. This configuration is strongly advised.
2. For the changes to take effect, restart the database. Run the following commands:
DEACTIVATE DB TEMADB
DB2STOP
DB2START
ACTIVATE DB TEMADB
3. Restart the BigFix Inventory server.
a. To stop the server, run the following command: /etc/init.d/bfiserver stop
b. To start the server, run the following command: /etc/init.d/bfiserver start

Configuring swappiness in Linux hosting the DB2 database server

Swappiness determines how quickly processes are moved from RAM to hard disk to free memory. It can assume a value from 0 to 100. A low value means that your Linux system swaps out processes rarely, while a
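The LOGFILSIZ calculation above can be sketched as a small helper. The 1.2 MB per computer, the 17 GB base, and the 1852 factor (derived from LOGPRIMARY = 25 and LOGSECOND = 110) come from the text; the function itself is illustrative:

```python
# Sketch of the LOGFILSIZ sizing formula from the procedure above.
def logfilsiz(computers: int, biggest_scan_group: int) -> int:
    """Return the value to specify in the DB2 LOGFILSIZ parameter."""
    # Total transaction-log space in GB: 1.2 MB per computer, plus
    # 1.2 MB per computer in the biggest scan group, plus a 17 GB base.
    total_gb = (computers * 1.2 + biggest_scan_group * 1.2) / 1000 + 17
    # Convert to a single log file size using the documented 1852 factor.
    return round(total_gb * 1852)

# Worked example from the text: 100,000 endpoints, biggest group 15,000.
print(logfilsiz(100_000, 15_000))  # matches the documented 287060
```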

high value means that processes are written to disk immediately. Swapping out runtime processes should be avoided on the DB2 server on Linux, so it is advisable to set the swappiness kernel parameter to a low value or zero.

Procedure
1. Log in to the Linux system as root.
2. Set the swappiness parameter to a low value or 0.
Option A:
a. Open the /etc/sysctl.conf file in a text editor and enter the vm.swappiness value of your choice. Example: vm.swappiness = 0
b. Restart the operating system to load the changes.
Option B: To change the value while the operating system is running, run the following command: sysctl -w vm.swappiness=0

Configuring the DB2_COMPATIBILITY_VECTOR variable for improved UI performance

For environments with 5,000 or more clients in your infrastructure, it is advisable to set the value of the DB2_COMPATIBILITY_VECTOR registry variable to MYS. This change might result in significantly faster UI response times on some BigFix Inventory installations.

Procedure
For information about how to modify this registry variable, see DB2_COMPATIBILITY_VECTOR registry variable in IBM Knowledge Center.

Shrinking the MS SQL Server transaction log

Shrink the transaction log once a month. If the environment is large, it is advisable to shrink the log after every data import. For more information, see How to: Shrink a File (SQL Server Management Studio).

Configuring the transaction log location for MS SQL Server

The transaction log in MS SQL Server records all transactions and the database changes caused by each transaction. For information about how to configure the database transaction log, see Move the Database Transaction Log to Another Drive.

Rebuilding database indexes

Use the following SQL statement to generate ALTER INDEX statements for all BigFix Inventory database tables:

USE TEMADB
SELECT 'ALTER INDEX ALL ON ' + t.[table_schema] + '.' + t.[table_name] + ' REBUILD;'
FROM INFORMATION_SCHEMA.TABLES t

Use the following script to rebuild all indexes and update statistics in one run:

USE TEMADB
GO
IF EXISTS (SELECT * FROM dbo.imports WHERE success IS NULL)

BEGIN
  PRINT N'CANNOT RUN index rebuild. BFI import is running!'
  PRINT N'Wait until BFI import finishes'
END
ELSE
BEGIN
  DECLARE table_cursor CURSOR FOR
    SELECT table_schema, table_name
    FROM INFORMATION_SCHEMA.TABLES
    WHERE table_type = 'BASE TABLE'
  OPEN table_cursor
  DECLARE @tablename sysname
  DECLARE @tableschema sysname
  FETCH NEXT FROM table_cursor INTO @tableschema, @tablename
  WHILE @@fetch_status != -1
  BEGIN
    PRINT N'START alter index all on ' + @tableschema + N'.' + @tablename + N' rebuild';
    EXECUTE (N'alter index all on ' + @tableschema + N'.' + @tablename + N' rebuild')
    PRINT N'END alter index all on ' + @tableschema + N'.' + @tablename + N' rebuild';
    FETCH NEXT FROM table_cursor INTO @tableschema, @tablename
  END
  CLOSE table_cursor
  DEALLOCATE table_cursor
  PRINT N'START sp_updatestats';
  EXECUTE sp_updatestats
  PRINT N'END sp_updatestats';
END
GO

Optimizing the tempdb database in Microsoft SQL Server

tempdb is a system database in SQL Server whose main functions are to store temporary tables, cursors, stored procedures, and other internal objects that are created by the database engine. By default, the database size is set to 8 MB and it can grow by 10% automatically. In large environments, its size can grow to as much as 15 GB. It is therefore important to optimize the tempdb database, because its location and size can negatively affect the performance of the BigFix Inventory server. For information about how to set the database size and how to determine the optimal number of files, see the TechNet article Optimizing tempdb Performance.

Backing up and restoring the database

Perform regular backups of the data that is stored in the database. It is advisable to back up the database before updating the software catalog or upgrading the server to facilitate recovery in case of failure.

Backing up the DB2 database

You can save your database to a backup file.

Procedure
1. Stop the BigFix Inventory server.
2.
Check which applications connect to the database, and then close all active connections:
a. List all applications that connect to the database:

db2 list applications for database TEMADB
b. Each connection has a handle number. Copy it and use it in the following command to close the connection:
db2 force application "( <handle_number> )"
3. Optional: If you activated the database before, deactivate it:
db2 deactivate db TEMADB
4. Back up the database to a specified directory:
db2 backup database TEMADB to <PATH>

Backing up the SQL Server database

You can make a copy of your database by saving it to a backup file. If you want, you can then move the backup to another computer and restore it in a different BigFix Inventory instance.

Before you begin
You can back up and restore the database only within one version of BigFix Inventory.
BigFix Inventory and Microsoft SQL Server Management Studio must be installed.
Stop the BFIserver service. Open the command prompt and run: net stop BFIserver

Procedure
1. Log in to the computer that hosts the database that you want to back up.
2. Open Microsoft SQL Server Management Studio.
3. In the left navigation bar, expand Databases.
4. Right-click the database that you want to back up, and then click Tasks > Back Up.
5. Review the details of the backup, and then click OK to create the backup.

6. Click OK.

Results
If the database was backed up successfully, you can find the .bak file in the location that you specified in step 5.

What to do next
If you want to move the database to a different BigFix Inventory instance, copy the backup file to the target computer and then restore the database.

Restoring the DB2 database

You can restore a damaged or corrupted database from a backup file.

Procedure
1. Stop the BigFix Inventory server.
2. Check which applications connect to the database, and then close all active connections:
a. List all applications that connect to the database:
db2 list applications for database TEMADB

b. Each connection has a handle number. Copy it and use it in the following command to close the connection:
db2 force application "( <handle_number> )"
3. Optional: If you activated the database before, deactivate it:
db2 deactivate db TEMADB
4. Restore the database from a backup file:
db2 restore db TEMADB from <PATH> taken at <TIMESTAMP> REPLACE EXISTING
Example:
db2 restore db TEMADB from /home/db2inst1/ taken at 20131105055846 REPLACE EXISTING

Restoring the SQL Server database

If you encounter any problems with your database, or if you want to move it between different instances of BigFix Inventory, you can use a backup file to restore the database.

Before you begin
You can back up and restore the database only within one version of BigFix Inventory.
Ensure that you are logged in to Microsoft SQL Server Management Studio as the user who created the temadb database. If you log in as a different user, the restore will fail.
BigFix Inventory and Microsoft SQL Server Management Studio must be installed.
Stop the BFIserver service. Open the command prompt and run: net stop BFIserver

Procedure
1. Log in to the computer on which you want to restore the database.
2. Open Microsoft SQL Server Management Studio.
3. In the left navigation bar, right-click Databases, and then click Restore Database.
4. In the Source section, select Device and click the button with three dots.

5. In the pop-up window that opens, click Add and browse for your backup file. Click OK.
6. In the left navigation menu, click Options.
7. In the pane on the right, select Overwrite the existing database (WITH REPLACE) and Close existing connections to destination database.
8. Click OK.

Results
You restored the temadb database from a backup file.

What to do next
Ensure that you upload to BigFix Inventory the latest software catalog, or the one that was used just before the database was restored.

Preventive actions

Turn off scans if the BigFix Inventory server is to be unavailable for a few days because of routine maintenance or scheduled backups. If imports of data from BigFix to BigFix Inventory are not running, unprocessed scan data accumulates on the BigFix server. After you turn on the BigFix Inventory server, a large amount of data must be processed, leading to a long import time. To avoid prolonged imports, turn off scans for the period when the BigFix Inventory server is not running.

Limiting the number of scanned signature extensions

The scanner scans the entire infrastructure for files with particular extensions. For some extensions, the discovered files are matched against the software catalog before the scan results are uploaded to the BigFix server. This ensures that only information about files that produce matches is uploaded. For other extensions, the scan results are not matched against the software catalog on the side of the endpoint; they are all uploaded to the BigFix server. Thus, you avoid rescanning the entire infrastructure when you import a new catalog or add a custom signature: the new catalog is matched against the information that is already available on the server. However, this behavior might cause large amounts of information about files that do not produce matches to be uploaded to the server, which might in turn lead to performance issues during the import. To reduce the amount of information that is uploaded to the server, limit the list of file extensions that are not matched against the software catalog on the side of the endpoint.

Procedure
1. Stop the BigFix Inventory server.
Linux
a. Run the following command: /etc/init.d/bfiserver stop
Windows
a. Click Start > Administrative Tools > Services.
b. Right-click the IBM BigFix Inventory 9.2.4.0 service, and then click Stop.
2. To limit the number of extensions that are not matched against the software catalog on the side of the endpoint, remove the extensions that you want the scanner to omit from the following files:
file_names_all.txt
file_names_unix.txt
file_names_windows.txt
They are in the following directory:
Linux: BFI_install_dir/wlp/usr/servers/server1/apps/tema.war/WEB-INF/domains/sam/config
Windows: BFI_install_dir\wlp\usr\servers\server1\apps\tema.war\WEB-INF\domains\sam\config
Note: Do not remove file extensions that you used to create custom signatures. They are likely to produce matches with the software catalog, so they can be uploaded to the BigFix server.
3.
Start the BigFix Inventory server.
Linux
a. Run the following command: /etc/init.d/bfiserver start
Windows
a. Click Start > Administrative Tools > Services.
b. Right-click the IBM BigFix Inventory 9.2.4.0 service, and then click Start.
4. Run an import. During this import, performance might be lower because the software catalog is imported.
Important: After the import, some software items might not be visible on the reports. This is expected behavior. Complete the remaining steps for the software inventory to be properly reported.
5. Wait for the scheduled software scan. Alternatively, if you have infrequent software scans, stop the current scan and start a new one. This allows you to use the optimized list of file extensions sooner.
a. Log in to the BigFix console and in the left navigation tree, click Actions.
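Step 2 of the procedure above amounts to filtering entries out of the extension list files. A minimal sketch, assuming one entry per line; the file names (file_names_all.txt and so on) are the real configuration files, but the sample entries and the helper function are hypothetical:

```python
# Sketch: drop the extensions you want the scanner to omit from an
# extension list file, keeping everything else unchanged. Illustrative
# helper; the real files' contents may differ in format.
def remove_extensions(lines: list[str], omit: set[str]) -> list[str]:
    """Keep only the entries that are not listed in `omit`."""
    return [line for line in lines if line.strip().lower() not in omit]

# Hypothetical excerpt; keep extensions used by custom signatures
# (for example *.jar here) so they still produce matches.
content = ["*.exe", "*.jar", "*.tmp", "*.bak"]
print(remove_extensions(content, {"*.tmp", "*.bak"}))
```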