Informatica ILM Nearline for use with SAP NetWeaver BW (Version 6.1) Administrator Guide


Informatica ILM Nearline Administrator Guide
Version 6.1
February 2013

Copyright (c) Informatica Corporation. All rights reserved.

This software and documentation contain proprietary information of Informatica Corporation and are provided under a license agreement containing restrictions on use and disclosure and are also protected by copyright law. Reverse engineering of the software is prohibited. No part of this document may be reproduced or transmitted in any form, by any means (electronic, photocopying, recording or otherwise) without prior consent of Informatica Corporation. This Software may be protected by U.S. and/or international Patents and other Patents Pending. Use, duplication, or disclosure of the Software by the U.S. Government is subject to the restrictions set forth in the applicable software license agreement and as provided in DFARS (a) and (a) (1995), DFARS (1)(ii) (OCT 1988), FAR (a) (1995), FAR, or FAR (ALT III), as applicable.

The information in this product or documentation is subject to change without notice. If you find any problems in this product or documentation, please report them to us in writing.

Informatica, Informatica Platform, Informatica Data Services, PowerCenter, PowerCenterRT, PowerCenter Connect, PowerCenter Data Analyzer, PowerExchange, PowerMart, Metadata Manager, Informatica Data Quality, Informatica Data Explorer, Informatica B2B Data Transformation, Informatica B2B Data Exchange, Informatica On Demand, Informatica Identity Resolution, Informatica Application Information Lifecycle Management, Informatica Complex Event Processing, Ultra Messaging and Informatica Master Data Management are trademarks or registered trademarks of Informatica Corporation in the United States and in jurisdictions throughout the world. All other company and product names may be trade names or trademarks of their respective owners.

Portions of this software and/or documentation are subject to copyright held by third parties, including without limitation: Copyright DataDirect Technologies. All rights reserved. Copyright Sun Microsystems. All rights reserved. Copyright RSA Security Inc. All Rights Reserved. Copyright Ordinal Technology Corp. All rights reserved. Copyright Aandacht c.v. All rights reserved. Copyright Genivia, Inc. All rights reserved. Copyright Isomorphic Software. All rights reserved. Copyright Meta Integration Technology, Inc. All rights reserved. Copyright Intalio. All rights reserved. Copyright Oracle. All rights reserved. Copyright Adobe Systems Incorporated. All rights reserved. Copyright DataArt, Inc. All rights reserved. Copyright ComponentSource. All rights reserved. Copyright Microsoft Corporation. All rights reserved. Copyright Rogue Wave Software, Inc. All rights reserved. Copyright Teradata Corporation. All rights reserved. Copyright Yahoo! Inc. All rights reserved. Copyright Glyph & Cog, LLC. All rights reserved. Copyright Thinkmap, Inc. All rights reserved. Copyright Clearpace Software Limited. All rights reserved. Copyright Information Builders, Inc. All rights reserved. Copyright OSS Nokalva, Inc. All rights reserved. Copyright Edifecs, Inc. All rights reserved. Copyright Cleo Communications, Inc. All rights reserved. Copyright International Organization for Standardization. All rights reserved. Copyright ej-technologies GmbH. All rights reserved. Copyright Jaspersoft Corporation. All rights reserved. Copyright International Business Machines Corporation. All rights reserved. Copyright yWorks GmbH. All rights reserved. Copyright Lucent Technologies. All rights reserved. Copyright (c) University of Toronto. All rights reserved. Copyright Daniel Veillard. All rights reserved. Copyright Unicode, Inc. Copyright IBM Corp. All rights reserved. Copyright MicroQuill Software Publishing, Inc. All rights reserved. Copyright PassMark Software Pty Ltd. All rights reserved. Copyright LogiXML, Inc. All rights reserved. Copyright Lorenzi Davide. All rights reserved. Copyright Red Hat, Inc. All rights reserved. Copyright The Board of Trustees of the Leland Stanford Junior University. All rights reserved. Copyright EMC Corporation. All rights reserved. Copyright Flexera Software. All rights reserved. Copyright Jinfonet Software. All rights reserved.

This product includes software developed by the Apache Software Foundation and other software which is licensed under the Apache License, Version 2.0 (the "License"). Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

This product includes software which was developed by Mozilla; software copyright The JBoss Group, LLC, all rights reserved; software copyright by Bruno Lowagie and Paulo Soares; and other software which is licensed under the GNU Lesser General Public License Agreement. The materials are provided free of charge by Informatica, "as-is", without warranty of any kind, either express or implied, including but not limited to the implied warranties of merchantability and fitness for a particular purpose.

The product includes ACE(TM) and TAO(TM) software copyrighted by Douglas C. Schmidt and his research group at Washington University, University of California, Irvine, and Vanderbilt University, all rights reserved. This product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit (copyright The OpenSSL Project. All Rights Reserved); redistribution of this software is subject to the terms of the OpenSSL license. This product includes Curl software which is Copyright Daniel Stenberg, <daniel@haxx.se>. All Rights Reserved. Permission to use, copy, modify, and distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies. The product includes software copyright MetaStuff, Ltd. All Rights Reserved. The product includes software copyright The Dojo Foundation. All Rights Reserved. This product includes ICU software which is copyright International Business Machines Corporation and others. All rights reserved. Permissions and limitations regarding these software packages are subject to the terms of their respective licenses.

This product includes software copyright Per Bothner. All rights reserved. Your right to use such materials is set forth in the Kawa software license. This product includes OSSP UUID software which is Copyright 2002 Ralf S. Engelschall, Copyright 2002 The OSSP Project, Copyright 2002 Cable & Wireless Deutschland. This product includes software developed by Boost or under the Boost software license. This product includes software copyright University of Cambridge. This product includes software copyright 2007 The Eclipse Foundation. All Rights Reserved. Permissions and limitations regarding these software packages are subject to the terms of their respective licenses.

This product includes software licensed under the Academic Free License, the Common Development and Distribution License, the Common Public License, the Sun Binary Code License Agreement Supplemental License Terms, the BSD License, the MIT License, and the Artistic License. This product includes software copyright Joe Walnes and the XStream Committers. All rights reserved. This product includes software developed by the Indiana University Extreme! Lab.

This Software is protected by U.S. Patent Numbers 5,794,246; 6,014,670; 6,016,501; 6,029,178; 6,032,158; 6,035,307; 6,044,374; 6,092,086; 6,208,990; 6,339,775; 6,640,226; 6,789,096; 6,820,077; 6,823,373; 6,850,947; 6,895,471; 7,117,215; 7,162,643; 7,243,110; 7,254,590; 7,281,001; 7,421,458; 7,496,588; 7,523,121; 7,584,422; ; 7,720,842; 7,721,270; and 7,774,791; international Patents; and other Patents Pending.

DISCLAIMER: Informatica Corporation provides this documentation "as is" without warranty of any kind, either express or implied, including, but not limited to, the implied warranties of noninfringement, merchantability, or use for a particular purpose. Informatica Corporation does not warrant that this software or documentation is error free. The information provided in this software or documentation may include technical inaccuracies or typographical errors. The information in this software and documentation is subject to change at any time without notice.

NOTICES

This Informatica product (the "Software") includes certain drivers (the "DataDirect Drivers") from DataDirect Technologies, an operating company of Progress Software Corporation ("DataDirect"), which are subject to the following terms and conditions:

1. THE DATADIRECT DRIVERS ARE PROVIDED AS IS WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT.
2. IN NO EVENT WILL DATADIRECT OR ITS THIRD PARTY SUPPLIERS BE LIABLE TO THE END-USER CUSTOMER FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, CONSEQUENTIAL OR OTHER DAMAGES ARISING OUT OF THE USE OF THE ODBC DRIVERS, WHETHER OR NOT INFORMED OF THE POSSIBILITIES OF DAMAGES IN ADVANCE. THESE LIMITATIONS APPLY TO ALL CAUSES OF ACTION, INCLUDING, WITHOUT LIMITATION, BREACH OF CONTRACT, BREACH OF WARRANTY, NEGLIGENCE, STRICT LIABILITY, MISREPRESENTATION AND OTHER TORTS.

Part Number: INL-ADG

Table of Contents

1 NEARLINE SERVICE OPTIONS AND COMMANDS
  1.1 SYNTAX
  1.2 OPTIONS
  1.3 COMMANDS
  1.4 SPECIAL TASK FILE COMMANDS
2 NEARLINE SERVICE LOG FILE
  2.1 LOG FILE SETTINGS
  2.2 LOG FILE ENTRIES
      Queue Status Messages
3 ANALYZING REQUEST STATE INFORMATION
  3.1 THE SAPBI_REQUEST_DETAIL VIEW
  3.2 CONNECTING TO THE FILE ARCHIVE SERVICE USING THE FILE ARCHIVE SQL TOOL
  3.3 LINKING EXTERNAL DATA TO MICROSOFT EXCEL USING A DATABASE QUERY
  3.4 INVESTIGATING REQUEST DATA
  3.5 ASSOCIATING AN SCT FILE WITH AN SAP ARCHIVE REQUEST
  3.6 DETERMINING THE CURRENT SIZE OF NEARLINE DATA
  3.7 RETRIEVING DAP LISTS FROM SAP SYSTEMS
4 TIMING INFORMATION
  4.1 THE TIMING FILE
  4.2 IMPORTING THE TIMING FILE INTO EXCEL
5 TRACING OBJECT TRANSACTIONS
  5.1 TRACING FILTER SYNTAX
      5.1.1 Alternative Transaction Codes
  5.2 OUTPUT FILES
      System Calls
  5.3 TRACING EXAMPLES
6 MONITORING THE NEARLINE STORE WITH SYSMON
  SYSMON SYNTAX
  DESCRIPTION
      LOG FILE
      DISK STATUS
      RUNNING SSA PROCESSES LIST
      SCAN ERROR IN THE LOG FILE
      NETSTAT COMMAND
      SSA ADMINISTRATOR VIEW
      SSASERVICE
      JOB MONITOR VIEW
      VMSTAT
7 VERIFYING METADATA AND SCT FILE CONSISTENCY
  SSAVALIDATE SYNTAX
  DESCRIPTION
      Validate Metadata and Report Only
      Validate Metadata and Fix
      Process CSV Corrections
      Test SCT Files
  OUTPUT FILES
      Summary Report
      Log File
  EXAMPLES
8 MONITORING LICENSE USAGE AND COMPRESSION
  THE LICENSE MONITORING UTILITY
      The License Monitoring Utility Report
  THE COMPRESSION RATE UTILITY
9 FILE CLEANUP AND ARCHIVING
  9.1 THE FILE OPERATION INTERFACE
  9.2 FILE OPERATION INTERFACE EXAMPLES
  USING FILE OPERATIONS TO DELETE LOG FILES
  ACCESSING THE FILE OPERATION INTERFACE FROM SAP
  DELETING ORPHAN SCT FILES
  AUTOMATIC CLEANUP OF THE TASKS DIRECTORY
      Transactional Cleanup
      Scheduled Cleanup
  AUTOMATIC NDL FILE DELETION
10 SCT FILE MIGRATION
  DEFINING A MIGRATETO TASK
  SCT FILE SELECTION RULES
      SELECT statement
      Retention Period Syntax (//TODAY-n//TABLE:x)
  EXAMPLES
11 IMMEDIATE SCT FILE EXPORT TO EXTERNAL STORAGE
  UPDATING THE SSA.INI FILE
  SAMPLE SERVICE AND XAM SECTIONS IN THE SSA.INI FILE
12 MISCELLANEOUS
  VIEWING ILM NEARLINE VERSION INFORMATION
      SAP Components
      Nearline Service and File Archive Service Components
      Viewing File Archive Service Version Information from a Command Line
  ILM NEARLINE PROCESSES
  EXTRA FILE ARCHIVE SERVICE AGENT PROCESSES
  CHANGING THE PASSWORD FOR THE FILE ARCHIVE REPOSITORY
  THE HANDLING OF SPECIAL CHARACTERS
  TROUBLESHOOTING HTTP COMMUNICATION
  SESSION POOL MANAGEMENT
  12.8 RESETTING THE NEARLINE SERVICE NUMBER RANGE
  OLAP CACHE INVALIDATION
  THE SEED DATABASE FOR THE META.N00 (FILE ARCHIVE REPOSITORY) FILE
  CHANGING THE SQL QUERY USED BY THE DELETE ALL DATA FUNCTION
13 APPENDICES
  APPENDIX A: SAPBI_REQUEST_DETAIL VIEW STRUCTURE
  APPENDIX B: DDL FOR THE TIMING FILE
  APPENDIX C: NDL FOR THE TIMING FILE
  APPENDIX D: REGULAR EXPRESSIONS
      Summary of Regular Expression Constructs
      Backslashes, Escapes and Quoting
      Character Classes
      Line terminators
  APPENDIX E: SAND_NLIC.PROPERTIES
  APPENDIX F: NEARLINE SERVICE TRANSACTION CODES
  APPENDIX G: SAMPLE /SAND/Z4VERSION OUTPUT

Preface

Informatica Resources

Informatica Customer Portal

As an Informatica customer, you can access the Informatica Customer Portal site. The site contains product information, user group information, newsletters, access to the Informatica customer support case management system (ATLAS), the Informatica How-To Library, the Informatica Knowledge Base, the Informatica Multimedia Knowledge Base, Informatica Product Documentation, and access to the Informatica user community.

Informatica Documentation

The Informatica Documentation team makes every effort to create accurate, usable documentation. If you have questions, comments, or ideas about this documentation, contact the Informatica Documentation team by email. We will use your feedback to improve our documentation. Let us know if we can contact you regarding your comments. The Documentation team updates documentation as needed. To get the latest documentation for your product, navigate to Product Documentation on the Informatica web site.

Informatica Web Site

You can access the Informatica corporate web site. The site contains information about Informatica, its background, upcoming events, and sales offices. You will also find product and partner information. The services area of the site includes important information about technical support, training and education, and implementation services.

Informatica How-To Library

As an Informatica customer, you can access the Informatica How-To Library. The How-To Library is a collection of resources to help you learn more about Informatica products and features. It includes articles and interactive demonstrations that provide solutions to common problems, compare features and behaviors, and guide you through performing specific real-world tasks.

Informatica Knowledge Base

As an Informatica customer, you can access the Informatica Knowledge Base. Use the Knowledge Base to search for documented solutions to known technical issues about Informatica products. You can also find answers to frequently asked questions, technical white papers, and technical tips. If you have questions, comments, or ideas about the Knowledge Base, contact the Informatica Knowledge Base team by email at KB_Feedback@informatica.com.

Informatica Multimedia Knowledge Base

As an Informatica customer, you can access the Informatica Multimedia Knowledge Base. The Multimedia Knowledge Base is a collection of instructional multimedia files that help you learn about common concepts and guide you through performing specific tasks.

If you have questions, comments, or ideas about the Multimedia Knowledge Base, contact the Informatica Knowledge Base team by email.

Informatica Global Customer Support

You can contact a Customer Support Center by telephone or through Online Support. Online Support requires a user name and password, which you can request from Informatica. Telephone support is available for the following regions:

North America / South America: toll-free numbers for North America, Brazil, and Mexico.
Europe / Middle East / Africa: toll-free numbers for the United Kingdom, France, Netherlands, Germany, Switzerland, Spain, Portugal, and Italy.
Asia / Australia: toll-free numbers for Australia and New Zealand.
Standard-rate numbers are also available for India, France, Belgium, Germany, the Netherlands, and the United Kingdom.

1 Nearline Service Options and Commands

This section describes the Nearline Service command line options and commands.

1.1 Syntax

snic [-h|-help] [-a|-advanced] [-i|-interactive] [-v|-verbose]
     [{-j|-jvmargs} "JVM-args"] [{"-Dproperty=value"}...] [-heap size] command

where command is one of the following:

start
stop [immediate]
call [-c host:port] call-name [data-file]
checkconfig
service {-install|-uninstall|-start|-stop|-pause}
status
test [-http] [-keepfiles] [tasksetpath xml-file]
version
viewlog log-file
[command] task-name
sh [-x] shell-command [arguments]

1.2 Options

-h or -help
Displays the standard Nearline Service command line syntax.

-a or -advanced
Displays the advanced Nearline Service command line syntax (the extra commands are related to Java functionality and debugging).

-i or -interactive
Used in conjunction with the start command, the -interactive option causes the Nearline Service to write its output to the terminal. By default, the Nearline Service runs in nohup mode.

-v or -verbose
Writes extra diagnostic information for other Nearline Service commands. For example, specifying -verbose with the start command returns additional Java startup information:

$ snic -verbose start
COMMAND=java -Xmx1024m -Dsand.properties=/raid5nb/snic/bin/sand_nlic.properties -Djava.library.path=/raid5nb/snic/lib -Dsand.log.to.console=false -jar /raid5nb/snic/lib/sand_nlic.jar --start
DEFS=-Xmx1024m -Dsand.properties=/raid5nb/snic/bin/sand_nlic.properties -Djava.library.path=/raid5nb/snic/lib -Dsand.log.to.console=false
COMMAND_ARGS=--start
JARFILE=/raid5nb/snic/lib/sand_nlic.jar
BACKGROUND=1
...

{-j|-jvmargs} "JVM-args"
Passes the specified arguments ("JVM-args") to the Java Virtual Machine (JVM). For more information about the JVM parameters, see the Java documentation. Note that all of the JVM arguments must be contained in double quotation marks (" ").

{"-Dproperty=value"}
Changes the value (value) of the specified Nearline Service parameter (property) for the current session only. Multiple sand_nlic.properties file parameters can be overridden in this manner. For example:

snic "-Dsand.data.chunksize= " "-Dsand.load.namedpipe.timeout=25000" "-Dsand.timing.roll.count=50" start

Note that each -D specification must be contained in double quotation marks (" ").

-heap size
The maximum heap size (in megabytes) that Java uses when running the Nearline Service. For example:

snic -heap 1024 start

sets the heap to 1024 MB. The heap value can also be defined permanently in the sand_nlic.properties file using the sand.java.maxmemory parameter. If that parameter value is changed while the Nearline Service is running, the server/service must be stopped and then restarted for the new heap size to take effect.

1.3 Commands

start
Starts the Nearline Service manually. By default, the Nearline Service is started in the background. If the Nearline Service is in interactive mode (-i or -interactive), a message indicates that the Nearline Service started successfully once it has finished starting all of the required components and connected to the metadata. The message is also written to the Nearline Service log file.

stop
Stops the Nearline Service manually. If the Nearline Service is in interactive mode, the shutdown sequence is displayed, concluding with a message that indicates the Nearline Service stopped. The shutdown sequence messages are also written to the Nearline Service log file. Note that the stop command waits for all processes started by the Nearline Service to end gracefully before the Nearline Service itself is stopped. In some cases, this wait can be lengthy, as some processes, such as the File Archive Repository Service, perform internal housekeeping on shutdown. To force a rapid shutdown of the Nearline Service and its processes, use the stop immediate command.

stop immediate
Stops the Nearline Service immediately. All processes started by the Nearline Service are terminated at once, even if the processes are in the middle of performing tasks. For this reason, the stop immediate command should only be used when absolutely necessary. When the Nearline Service is shutting down via this command, an "ACTION_REJECTED" error message is sent to SAP users whose activities are affected by the shutdown. For example:

Figure 1: Nearline Service Stop Immediate Messages

Note that it may take considerably longer than usual for the File Archive Repository Service to start up again the first time after it was terminated as a result of the stop immediate command.

call [-c host:port] call-name [data-file]
Sends a call or task file request (call-name) via the HTTP interface. Include the full path to the file. To specify a host and HTTP port other than the ones defined for the local host, include the -c host:port option. If required, include the name of the data file associated with the request.

checkconfig
Checks the sand_nlic.properties, ssa.ini, nucleus.ini, and odbc.ini configuration files, displays all values that are changed from the default, and flags any nonstandard or deprecated parameters. For example:

Checking changes in configuration files (host name and TCP/IP ports changes are omitted)
File:/raid5nb_a/snic/bin/sand_nlic.properties
sand.java.max_heap_size=1024 <----- nonstandard property (no default)
sand.log.level=fine <----- changed property (default=info)
File:/raid5nb_a/dna/ssa.ini
[SAPBI]CTRL_STATUS=null <----- nonstandard property (no default)
File:/raid5nb_a/dna/nucleus.ini
No change, all default values are used by SNIC Server
File:/raid5nb_a/dna/odbc.ini
No change, all default values are used by SNIC Server

service {-install|-uninstall|-start|-stop}
On Windows systems only:
The -install option installs the Nearline Service, if the Nearline Service was not set to run as a service. If the Nearline Service was installed on a user account, the service is installed on that same account. If the Nearline Service was installed on the local system account, installation of the service includes a prompt for the specific <domain>\<user> account to use for the service logon.
The -uninstall option uninstalls the Nearline Service, if the service is installed.
The -start option manually starts the Nearline Service if the service is not running.
The -stop option manually stops the Nearline Service if it is running.

status
Reports whether the Nearline Service is running to stdout, and writes information about the contents of running queues to the Nearline Service log file. If the Nearline Service is running, there are running/waiting queues, and the sand.log.level parameter is set to INFO or higher, a message like the following is returned (details will vary):

Running
Queue information
Running queue: 1
[0] ARC_ _8_PNO (sessionid ARC_ ) (long task) (session concurrency limited)
Waiting queue: 3
Server information
CTRL_CONCURRENCY: 5
MAX_JOBS: 3
Memory (in bytes)
Maximum:
Total:
Available:
Opened writers: 2
session: ARC_ , id: ARC_ _4_OFW
session: ARC_ , id: ARC_ _4_OFW
UTC date and time: :44:53

With the same conditions, but with sand.log.level set to FINEST or lower, there is expanded information for the waiting queue:

...
Queue information
Running queue: 1
[0] ARC_ _8_PNO (sessionid ARC_ ) (long task) (session concurrency limited)
Waiting queue: 3
[0] ARC_ _9_PNO (sessionid ARC_ ) (long task) (session concurrency delayed)
[1] ARC_ _10_PNO (sessionid ARC_ ) (long task) (session concurrency delayed) (bad sequence delayed)

[2] ARC_ _11_PNO (sessionid ARC_ ) (long task) (bad sequence delayed)
...

Using the examples from above, the Nearline Service status information is described below:

Running
    The current state of the Nearline Service.

Queue information
    The Running and Waiting queues.

Running queue: 1
    The number of running tasks.

[0] ARC_ _8_PNO (sessionid ARC_ ) (long task) (session concurrency limited)
    Running task details:
    [0] = item number
    ARC_ _8_PNO = call ID
    (sessionid ARC_ ) = session ID
    Possible details:
    (long task) identifies a long-running task (PNO, OCU, CWR)
    (session concurrency limited) identifies a task that must not run if a task is already running in this session (any BI_CALL)
    (MAXJOBS limited) identifies a task limited by the MAXJOBS parameter (OFW)
    (memory consumer) identifies a task that needs memory to execute (FNP in HTTP mode)

Waiting queue: 3
    The number of waiting tasks.

[0] ARC_ _9_PNO (sessionid ARC_ ) (long task) (session concurrency delayed)
    Waiting task details:
    [0] = item number
    ARC_ _9_PNO = call ID

    (sessionid ARC_ ) = session ID
    Possible details:
    (long task delayed) waiting because the long task limit was reached
    (long task) identifies a long task (PNO, OCU, CWR)
    (MAXJOBS delayed) waiting because the MAXJOBS limit was reached
    (memory consumer delayed) waiting because the memory limit was reached
    (session delayed) waiting because the whole session was delayed due to one of its tasks being delayed
    (session concurrency delayed) waiting because another task in the same session is already running and session concurrency is limited
    (bad sequence delayed) waiting because this sequence ID is not the next one in the session

Server information
    Information about the Nearline Service.

CTRL_CONCURRENCY: 5
    ssa.ini parameter (the maximum number of parallel tasks that can run on the system).

MAXJOBS: 3
    ssa.ini parameter (the maximum number of jobs that can run simultaneously).

Memory (in bytes)
    Information about the Nearline Service Java heap.

Maximum:
    The maximum amount of Java heap memory that can be allocated by the Nearline Service (sand.java.maxmemory parameter).

Total:
    The amount of Java heap memory currently allocated by the Nearline Service.

Available:
    The amount of Java heap memory currently available.

Opened writers: 2
    The number of opened writers.

session: ARC_ , id: ARC_ _4_OFW
session: ARC_ , id: ARC_ _4_OFW
    The names of the opened writers.

UTC date and time: :44:53
    The current UTC server time.

If the Nearline Service is not running, the following message is returned after the banner (an internal Java message is included in parentheses):

Not able to connect to SNIC Server (connection refused: connect)
Please verify if SNIC Server is started

Other situations where the server cannot be reached return a Java error message, which can be passed on to Informatica Global Customer Support if the cause of the error is not apparent.

The information written to the Nearline Service log file provides internal details about the queues and the associated queries running in parallel. For example:

INFO Queue information
INFO Running queue:
INFO [0] ARC_ _8_PNO (sessionid ARC_ ) (long task) (session concurrency limited)
INFO Waiting queue:
FINEST [0] ARC_ _9_PNO (sessionid ARC_ ) (long task) (session concurrency delayed)
FINEST [1] ARC_ _10_PNO (sessionid ARC_ ) (long task) (session concurrency delayed) (bad sequence delayed)
FINEST [2] ARC_ _11_PNO (sessionid ARC_ ) (long task) (bad sequence delayed)
INFO Server information
INFO CTRL_CONCURRENCY:
INFO MAX_JOBS:
INFO Memory (in bytes)
INFO Maximum:
INFO Total:
INFO Available:
INFO Opened writers:
INFO session: ARC_ , id: ARC_ _4_OFW
INFO session: ARC_ , id: ARC_ _4_OFW

test [-http] [-keepfiles] [tasksetpath xml-file]
By default, this command runs a small test generation program that tests the interface between the File Archive Service and the Nearline Service. The test data is stored in the test subdirectory of the SAND_NLIC directory.

To run only a basic HTTP test, specify the -http option. To retain the generated test files (for example, for debugging purposes), specify the -keepfiles option. To run a different test generation program, specify the path (tasksetpath) and the XML file name (xml-file), separated by a space.

version
Returns the version information for the Nearline Service, including the full startup banner.

viewlog log-file
If a Nearline Service log file was generated while the sand.log.xml.format property was set to TRUE in the sand_nlic.properties file, the viewlog command displays the specified log file in the standard format, rather than in XML format.

[command] task-name
The specified task is executed in file-based mode (as opposed to HTTP mode). A task file named task-name.call.xml must exist in the ${SAND_NLIC}/conf directory. For example, the following command:

snic command mytask

requires the following XML control file for execution:

${SAND_NLIC}/conf/mytask.call.xml

If the task-name parameter is specified by itself (without the command keyword), the specified task is executed in HTTP mode instead of file-based mode. Again, a task file named task-name.call.xml must exist in the ${SAND_NLIC}/conf directory. There are two special built-in tasks for the Nearline Service named "apply_properties" and "read_properties". For more information, see 1.4 Special Task File Commands.

sh [-x] shell-command [arguments]
You can execute a shell command (shell-command) and its given parameters (arguments), if applicable, through the Nearline Service using the sh command. If the -x flag is included, the shell command execution is echoed to the screen.
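For example, the following invocation (the target directory is chosen purely for illustration) lists the contents of the Nearline Service logs directory and echoes the command as it runs:

snic sh -x ls -l ${SAND_NLIC}/logs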

1.4 Special Task File Commands

There are special task file commands available for re-reading the Nearline Service configuration parameters:

apply_properties
Re-reads the Nearline Service configuration parameters for changes made after the Nearline Service was started. Updated runtime parameter values are used in the current Nearline Service session, but updated static parameter values are not used until the Nearline Service is restarted. (Runtime parameters are identified in the Nearline Service properties table in Appendix E: sand_nlic.properties.)

read_properties
Re-reads the Nearline Service configuration parameters for changes made after the Nearline Service was started and displays those changes only. Updated static and runtime parameter values are not used until the Nearline Service is restarted.

These tasks require that the associated "task-name.call.xml" files (that is, "apply_properties.call.xml" and "read_properties.call.xml") are present in the ${SAND_NLIC}/conf directory. These files are included with the Nearline Service installation; if they have been moved or deleted accidentally, they must be restored. The tasks are executed in HTTP mode.

For example, the following command forces the Nearline Service to re-read its configuration parameters:

snic apply_properties

In the sample output below, three Nearline Service properties were changed: sand.log.entry.maxlen, sand.log.roll.count, and sand.sap.systems. Since sand.log.entry.maxlen and sand.sap.systems are runtime properties, the apply_properties command forces the Nearline Service to use those changes right away. However, sand.log.roll.count is a static property, so that change is only applied the next time the Nearline Service is restarted.

*** HTTP Request follows ***
POST / HTTP/1.0
User-Agent: SAPHTTP
Host: localhost:18600
Message-Length: 227
Content-Type: text/plain
Content-Length: 227

<?xml version="1.0" encoding="iso "?>
<!DOCTYPE SERVICE_CALL SYSTEM "sand_binli.dtd">
<SERVICE_CALL>
  <READ_PROPERTIES IS_APPLY_CHANGES="TRUE">
    <PROPERTY_STORE STORE_NAME="ALL"/>
  </READ_PROPERTIES>
</SERVICE_CALL>

*** HTTP Response follows ***
HTTP/ OK
Content-Type: text/plain
Date: Wed, 23 Jun :20:21 GMT
data-length: 0
content-type: text/plain
date: Wed, 23 Jun :20:21 GMT
message-length: 621
Content-Type: text/plain

<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE SERVICE_ANSWER SYSTEM "sand_binli.dtd">
<SERVICE_ANSWER RESULT="SUCCESS">
  <R_READ_PROPERTIES>
    <PROPERTY_STORE FILENAME="/raid5nb/snic/bin/sand_nlic.properties" STORE_NAME="sand_nlic.properties">

      <PROPERTY_CHANGE NAME="sand.log.entry.maxlen" NEW_VALUE="512" OLD_VALUE="1024" DEFAULT_VALUE="1024" IS_UPDATED="true" IS_RUNTIME="true"/>
      <PROPERTY_CHANGE NAME="sand.log.roll.count" NEW_VALUE="40" OLD_VALUE="30" DEFAULT_VALUE="20" IS_UPDATED="true" IS_RUNTIME="false"/>
      <PROPERTY_CHANGE NAME="sand.sap.systems" NEW_VALUE="*.*.*" OLD_VALUE="Z_DA3.DA3.sand101" DEFAULT_VALUE="*.*.*" IS_UPDATED="true" IS_RUNTIME="true"/>
    </PROPERTY_STORE>
    <PROPERTY_STORE FILENAME="/raid5nb/dna/ssa.ini" STORE_NAME="ssa.ini"/>
    <PROPERTY_STORE FILENAME="/raid5nb/dna/nucleus.ini" STORE_NAME="nucleus.ini"/>
  </R_READ_PROPERTIES>
</SERVICE_ANSWER>

2 Nearline Service Log File

The Nearline Service maintains a log file of its activities. By default, the log file is named sand_nls.0.log. The file is located in the ${SAND_NLIC}/logs or %SAND_NLIC%\logs directory. The logged activities are timestamped and listed in sequence. Use the log file to analyze performance and for troubleshooting. Normally, the Nearline Service log file has a size limit of 10 MB. Once that limit is reached, the log file is renamed and a new log file is created. You can have up to 19 old log files in addition to the current log. You can customize the log file configuration.

2.1 Log File Settings

Use the sand_nlic.properties file to customize the characteristics of the Nearline Service log file. The following parameters control logging behavior. For more information, see Appendix E: sand_nlic.properties.

sand.http.debug
    Enables or disables the reporting of HTTP requests in the log file. Default is FALSE.
sand.log.file
    Defines the Nearline Service log file name. Default is sand_nls.n.log.
sand.log.level
    Determines the level of Nearline Service messages to log. Default is INFO.
sand.log.roll.count
    Determines the maximum number of Nearline Service log files to retain. Default is 20.
sand.log.roll.size
    Determines the maximum size for a Nearline Service log file before a new log file is started. Default is 10 MB.
sand.log.to.console
    Enables or disables printing log entries to the console. Default is TRUE.
sand.log.to.file
    Enables or disables writing log entries to the Nearline Service log file. Default is TRUE.
sand.log.xml.format
    Enables or disables writing the Nearline Service log file in XML form. Default is FALSE.
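For example, a sand_nlic.properties fragment like the following (the values are illustrative, not defaults) raises the log level to FINEST for debugging, keeps 40 generations of log files, and turns off console output:

sand.log.level=FINEST
sand.log.roll.count=40
sand.log.to.console=FALSE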

2.2 Log File Entries

Each entry in the Nearline Service log file has the following format:

<seq num> <thread num> <log level> <timestamp> <message>

where:

<seq num> is a unique sequence number for the log file entry. This value starts at 0 during the Nearline Service startup, and increments by 1 for each succeeding entry.

<thread num> is an internal Nearline Service thread number.

<log level> is the log level for the message. It can be one of the following, in order of priority from highest to lowest:
    SEVERE (errors)
    WARNING (warnings)
    INFO (high-level information)
    FINE (low-level information)
    FINEST (debug information)
You can use the sand.log.level parameter in the sand_nlic.properties file to set the log level written to the log file. Lower priority log level settings include the higher priority levels. For example, the default log level, INFO, also includes WARNING and SEVERE messages in the log, in addition to INFO messages. The FINEST log level includes all messages in the log file.

<timestamp> is the date and time when the entry was logged.

<message> is the log message returned from the Nearline Service.

The following text shows a sample log message:

INFO *** SNIC Server stopped ***

In this entry, the <log level> component is INFO and the <message> component is "*** SNIC Server stopped ***".

The following text shows a sample log message with consecutive entries:

INFO Executing OpenForWrite (EMP/PERSONAL/001)
WARNING Writer with id open_for_write is already open. Ignoring call to OpenForWrite

The first entry is an INFO message recording the OpenForWrite call; the second entry is a WARNING message reporting that the writer is already open and the call is ignored.

The following text shows a sample log message that involves an error condition with log level SEVERE:

SEVERE Error Detail: java.lang.Exception: Invalid sequence. Request '123456' for table "TEST_NLIC"."EMP/PERSONAL/001" must be 'closed' or 'active' to deactivate. Current state is 'inactive'
com.sand.nlic.bi2.session.RequestX.deactivate(RequestX.java:1094)
com.sand.nlic.bi2.tag.AlterRequest.execute(AlterRequest.java:87)
com.sand.nlic.bi2.tag.BiCall.execute(BiCall.java:96)
com.sand.nlic.task.Executor.run(Executor.java:94)
java.lang.Thread.run(Thread.java:534)

24 table "TEST_NLIC"."EMP/PER SONAL/001" must be 'closed' or 'active' to deactivate. Current state is 'inactive' com.sand.nlic.bi2.session.requestx.deactivate(re questx.java:1094) com.sand.nlic.bi2.tag.alte rrequest.execute(alterre quest.java:87) com.sand.nlic.bi2.tag.bic all.execute(bicall.java:96 ) com.sand.nlic.task.execu tor.run(executor.java:94) java.lang.thread.run(thr ead.java:534) Queue Status Messages The queue status message provides information about currently running and waiting-to-run tasks from the running task queue and the waiting task queue. A queue status message in the log shows only the current number of tasks in each queue. To get details about each of the tasks in the running queues, execute the snic status command. Queue status messages are logged when a new call goes from a waiting queue to a running queue or when a call is removed from a running queue. For example, after the call has finished. The following text is an example of a queue status message: INFO Queue status. Running: 1, Waiting: 6 In the sample message, there is one running task and six tasks in the waiting queue. Notes on Running and Waiting Tasks There is no limit on the number of tasks that can go into the waiting queue. Conversely, a number of factors affect what can go into the running queue. The following factors affect what goes in to the running queue: The CTRL_CONCURRENCY parameter limits the number of tasks running at the same time to limit the number of concurrent threads. The number of archiving processes is limited by the MAXJOBS parameter. The number of long tasks (OCU, PNO, CWR) cannot be more than CTRL_CONCURRENCY 1. In HTTP mode, data transfers (FNP) are limited by available memory. 15

- One extra running slot is reserved for Connection (CON).
- One extra running slot is reserved for Get Service Properties (GSP).
- One extra running slot is reserved for priority tasks, such as the priority service calls READ_PROPERTIES and LICENSE_INFO, and further monitoring calls.

A task that cannot go into the running queue due to any of the restrictions above is placed in the waiting queue. For more information about the transaction types, see Appendix F: Nearline Service Transaction Codes.

3 Analyzing Request State Information

3.1 The SAPBI_REQUEST_DETAIL View

The movement of data from SAP BW to the nearline store is tracked using logical objects called requests. During the migration process, the requests undergo a series of state changes as the data is verified for integrity. The SAP Nearline Add-On component maintains a group of tables in the File Archive repository to preserve these requests and to track their state changes in real time. The SAPBI_REQUEST_DETAIL view provides a summary of the nearline storage requests by associating the request tables with the File Archive Service internal metadata tables. The view has the following structure:

SAPBW_REPOSITORY.SAPBI_REQUEST_DETAIL

REQUESTID
    Unique identifier from SAP BW that tracks the request.
STATE
    Current state of the request. Requests can have the following states: created, copying, closed, cancelled, active, inactive, invalid.
CREATED
    Timestamp when the request was created.
CHANGED
    Timestamp when the request last changed state.
REQUEST_SET
    Filter statement that uniquely identifies the data tracked by the request.

TABLE_NAME
    SAP table name.
NBROWS
    Number of rows of data migrated in the request.
SCT_PATH
    Path of the SCT file that stores the data in compressed form.
SCT_FILE
    File name of the SCT file.
INPUT_KBYTES
    (deprecated)
SCT_FILESIZE_KBYTES
    SCT file size in kilobytes.
COMPRESSION
    (deprecated)
INPUT_FILES
    Number of input files delivered for the request.
KBYTES_PER_INPUT_FILE
    Average size of an input file in kilobytes.
INPUT_ROW_WIDTH_BYTES
    Average size of an input row in bytes.

Note that the view provides information at the level of the individual SCT file. If data from a single request resides in more than one file, there will be more than one entry in the view for the request. To access the data, you must connect to and query the repository with an SQL tool. From a UNIX shell you can use the File Archive SQL Tool, which provides an interactive SQL prompt. Additionally, you can use third-party tools via the File Archive Service ODBC driver to query a running File Archive Service system.
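For example, once connected with one of these tools (see section 3.2 below), a query along the following lines (a sketch, assuming standard SQL aggregate support in the tool you use) summarizes the requests by state:

SELECT STATE, COUNT(*) AS SCT_FILES, SUM(NBROWS) AS TOTAL_ROWS
FROM SAPBW_REPOSITORY.SAPBI_REQUEST_DETAIL
GROUP BY STATE;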

3.2 Connecting to the File Archive Service Using the File Archive SQL Tool

The following example demonstrates how to connect to the File Archive Service system with the File Archive SQL Tool, using the Nearline Service administrator's settings.

Note: In a Windows installation for a specified user, the database and connection names will have the user name and domain appended ("adm_<user>_<domain>").

Warning: The following command should only be used to view data from the repository. Any changes to the repository should only be made with the approval of Informatica Global Customer Support. Unapproved changes can damage the repository and can also cause loss of, or inconsistencies in, the SAP-nearlined data.

$> ssasql ssa adm DBA
SESSION 1: SSA@adm
SQL:1> select * from SAPBW_REPOSITORY.SAPBI_REQUEST_DETAIL;

2 rows selected

REQUESTID STATE CREATED CHANGED REQUEST_SET TABLE_NAME NBROWS SCT_PATH SCT_FILE SCT_FILESIZE INPUT_FILES KBYTES_PER_I INPUT_ROW_W
TEST1 active "/BIC/DTVALIDFR" <= ' ' /BIC/ONCBY 3020 /raid5b/snic/sct TEST1_ sct

rows fetched

Alternatively, on a Windows system with TCP/IP access to the machine where the File Archive Service runs, you can display the view in Microsoft Excel as a linked table via the File Archive Service ODBC driver.

3.3 Linking External Data to Microsoft Excel Using a Database Query

After the DSN is set up, you can use Microsoft Excel to link to the request detail data in the nearline store.

1. From the Microsoft Excel Data menu, choose Get External Data > New Database Query.

Figure 2: Microsoft Excel New Database Query

2. Select the adm data source created in the previous step.

Figure 3: Microsoft Excel Choose Data Source

3. In the list of tables, select the SAPBI_REQUEST_DETAIL table.

Figure 4: Microsoft Excel Query Wizard Select Table

4. Include all columns from the SAPBI_REQUEST_DETAIL table in the query.

Figure 5: Microsoft Excel Query Wizard Choose Columns

5. You can filter the data to select only the records of interest, for example by selecting a specific REQUESTID. To see all the data, leave the fields on the Filter Data screen blank, as shown in the following example.

Figure 6: Microsoft Excel Query Wizard Filter Data

6. Specify the sort order for the data.

Figure 7: Microsoft Excel Query Wizard Sort Order

7. Select the Return Data to Microsoft Excel option. To save this query for later use, click Save Query.

8. Click Finish to extract the data.

Figure 8: Microsoft Excel Query Wizard Finish

3.4 Investigating Request Data

The request data should appear in the Excel spreadsheet as in the following figure. In this example, there is information about one request, with the ID TEST1. Note that in an SAP BW 7 implementation, the request ID is numeric.

Figure 9: Microsoft Excel Request ID

3.5 Associating an SCT File with an SAP Archive Request

The naming convention for an SCT file is as follows:

[NLSID]_[SYSID]_[SEQNO]_[STEPNO]_[STEPID]_[DT].sct

where:

[NLSID] = SAP NLS Request ID
[SYSID] = SAP System ID
[SEQNO] = SAP Number Range ID (Sequence Number)
[STEPNO] = Nearline Request Step Number
[STEPID] = Nearline Request Step Acronym
[DT] = Creation Date and Time

Examples:

202_LSTA1_ _ _OFW_ sct
209_LSTA1_ _ _OFW_ sct
211_LSTA1_ _ _OFW_ sct

In the examples above, the SAP NLS request IDs are 202, 209, and 211 for the respective SCT files. Multiple SCT files can belong to a single SAP NLS request ID and should be seen as a group. They will either be all active or all orphaned, but never a mix.
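Because the SAP NLS request ID is the first underscore-delimited field of the file name, a quick shell sketch like the following (the directory path is illustrative) lists the distinct request IDs present in an SCT directory:

ls /raid5b/snic/sct/*.sct | xargs -n1 basename | cut -d_ -f1 | sort -u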

Use the SE16 transaction to search for the SAP NLS request ID in table RSDAARCHREQ. The following figure shows transaction SE16:

Figure 10: Data Browser Table Selection

Use the SAP NLS request ID number found in the name of the SCT file as the search criterion. The following figure shows the search criteria:

Figure 11: Data Browser Request ID Entry

The result contains the name of the object that was archived in the DAPNAME column and the SAP archive request ID in the REQUID_SID column. The following figure shows a sample search result:

Figure 12: Associated Archived Objects

SCT files associated with an SAP NLS request ID that is not present in this table are considered orphaned. This usually happens when a DAP is deleted from the SAP system, or if a given archive request is deleted from the system (not yet supported).

The following figure shows the final two columns of the RSDAARCHREQ table, which contain the SAP NLS request ID (NLSREQUID_SID) and the SAP reload request ID (ARESREQUID_SID):

Figure 13: Data Browser Request IDs and Reload Request IDs

A value other than 0 in the ARESREQUID_SID column means that the SAP archive request has been or is being reloaded. In all cases, the status of the SAP archive request REQUID_SID must be verified in the RSDAARCHREQ_V table:

Figure 14: Verify Archive Requests

The SCT files are considered orphaned if the status (REQSTAT) of their associated SAP archive request (REQUID_SID) is equal to 80 (reloaded) or 99 (invalidated). Otherwise, the SCT files are considered active (values under 80):

Figure 15: Request Status

For this example, the following table indicates the states of the archive requests and the associated SCT files of the InfoProvider ZEDU07D (the DAPNAME in table RSDAARCHREQ):

SAP Archive Request ID | SAP NLS Request ID | SAP Reload Request ID | State of Archive Request | State of SCT Files
                       |                    |                       | (reloaded)               | Orphaned
                       |                    |                       | (invalidated)            | Orphaned
                       |                    |                       | (completed)              | Active

You can verify the state of the requests in the workbench for the same InfoProvider:

Figure 16: Request Status Verification

3.6 Determining the Current Size of Nearline Data

To see information about the size of the data held in the File Archive repository, use the SA38 or SE38 transaction to run the /SAND/0_UTIL_ARCHIVE_SIZE program.

Figure 17: Running the /SAND/0_UTIL_ARCHIVE_SIZE Program

The following figure shows an example of information about the nearline data:

Figure 18: /SAND/0_UTIL_ARCHIVE_SIZE Output

Note that /SAND/0_UTIL_ARCHIVE_SIZE calculates kilobytes, megabytes, gigabytes, terabytes, and so on, using base 1000 (for example, 1 GB = 10^9 bytes), as is standard among storage vendors, rather than base 1024.

3.7 Retrieving DAP Lists from SAP Systems

To view the list of currently defined DAPs in an SAP system, use transaction SE16 to access the RSDADAP table.

Figure 19: Specifying RSDADAP on Transaction SE16

To view only the active DAPs, enter "A" in the Object Version (OBJVERS) field and click Execute (F8).

Figure 20: The RSDADAP Table Selection Screen

A list of the active DAPs appears.

Figure 21: A List of Active DAPs

To view the details of a specific DAP, select the DAP in the list. A checkmark appears next to the DAP name. Then, click Display Details (F2).

Figure 22: Details for a Selected DAP

Note that the timestamp field (TIMESTMP) specifies when the DAP was activated. If the timestamp of the associated InfoProvider, which shares the same name, is later than that of the DAP, this could indicate that the DAP should be reactivated.

4 Timing Information

4.1 The Timing File

The timing file collects timing information for objects in the SAP Nearline system. The file is stored in the logs directory with the default name sand_nls_timing.0.csv. Use the sand.timing.file property to configure the timing file name.

Timing files are rolled. When the current timing file reaches a maximum size, the file is renamed and a new file is created in its place. By default, the maximum size is 10 MB. If the total number of timing files has reached the maximum when the current file is rolled, the oldest timing file is deleted. By default, the maximum number of files is 10. Rolled timing files are distinguished by their generation number, which is part of the file name. For example, sand_nls_timing.7.csv has generation number 7. The current timing file always has the generation number 0 until it is rolled. Use the sand.timing.roll.size property to configure the maximum timing file size. Use the sand.timing.roll.count property to configure the maximum number of timing files.

When generating an event, a number of filters determine whether the event is logged in the timing file. These filters relate to columns in the timing file, as described below. There are three filters: SOURCE, EVENT, and ACTION. The keyword ALL is valid for each of these filters, meaning the filter will match all values. You can also specify a comma-separated list of acceptable values. When deciding whether to write to the timing file, the event is compared to the filter by means of a case-insensitive string comparison. For an event to be logged, it must match all filters. The default values for the filters are as follows:

"sand.timing.source.filter","all"
"sand.timing.event.filter","all"
"sand.timing.action.filter","call,statechange"

Any character combination is valid as a filter; however, it only makes sense to specify filter values that can occur in the field to which the filter is applied. The only possible values for the ACTION filter presently are Call and StateChange, so specifying both as above is equivalent to specifying ALL. A good practice is to set the source and event filters to ALL, then perform a test run and analyze these fields in the timing file itself to determine whether you wish to filter the output.
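For example, to record only StateChange events while leaving the source and event filters at their default of ALL, the action filter could be narrowed as follows (shown in key=value properties form as an illustrative sketch; StateChange is one of the two documented ACTION values):

sand.timing.action.filter=StateChange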

The table/file structure for the timing file is as follows:

SAPBW_REPOSITORY.SAPBI_TIMING

EVENT_TIMESTAMP
    Timestamp at which the event occurred.
SAP_SYSTEMID
    The SAP system ID (this value is blank for ABAP versions older than v30).
SAP_USERID
    The SAP user ID (this value is blank for ABAP versions older than v30).
SESSIONID
    A unique identifier for the session.
ACTION
    The action that describes this event, for example Call or StateChange.
EVENT
    A description of the event.
SOURCE
    The source of the event, typically the component responsible for generating it.
EVENT_TIME_SEC
    The time (seconds.ms) that the event took to complete.
SOURCE_TIME_SEC
    The time (seconds.ms) since the source last generated an event.
SOURCE_IDENTIFIER
    A unique identifier for the source, or null.
EVENT_SUBJECT
    The resource that the event is acting on (for example, the table name), or null.
COUNTER
    A counter to return information such as the number of rows fetched.
DETAIL
    A string that can contain, for example, the SQL statement associated with the event (16 KB max).
DETAIL_1, DETAIL_2, DETAIL_3
    As above.

Note that the column headers are stored in a separate file named sand_nls_timing_header.csv in the same directory as the timing files.

4.2 Importing the Timing File into Excel

A timing file is presented in CSV format, and so can be opened directly with Microsoft Excel. In the example presented here, the column headers (see above) have been added to the Excel spreadsheet. By default, Call and StateChange events are written to the file. In the example below, timing information regarding state changes of the RequestX object with the name /BIC/ONCBY is inspected.

The Sort feature of Microsoft Excel can be used to view only the StateChange objects, by sorting on the ACTION column.

1. From the Microsoft Excel Data menu, select Sort.

Figure 23: Microsoft Excel Sort Option

2. On the Sort dialog box, specify ACTION, then EVENT TIMESTAMP, as the fields to sort by.

Figure 24: Microsoft Excel Sort Dialog

3. On the spreadsheet, select and delete the rows where ACTION = Call, so that only the StateChange actions are shown.

Figure 25: Delete CALL Actions

Note that the StateChange entries provide details of the sub-events of a task, which show the time spent in the Nearline Service and the File Archive Service. The Call entries provide the overall time spent in a particular task, including time spent in the wait queue.

4. Select Sort on the Data menu, as in Step 1.

5. On the Sort dialog box, specify SOURCE IDENTIFIER, then EVENT TIMESTAMP, as the fields to sort by.

6. To see subtotals for EVENT TIME SEC and SOURCE TIME SEC, go to the Data menu and select Subtotals.

Figure 26: Microsoft Excel Subtotals

7. Apply the Sum function to the EVENT TIME SEC and SOURCE TIME SEC fields only (make sure all other fields are deselected).

Figure 27: Microsoft Excel Subtotal Options

This produces a subtotal of the event time for the TEST1 request (note that in the illustration, some fields have been removed for clarity). The EVENT TIME is the elapsed time in seconds in the nearline store for each state change EVENT. The subtotal for this field (26.2 seconds in the example) is the time required by the system to bring the request from the "created" to the "active" state.

Figure 28: EVENT TIME Subtotals

The SOURCE TIME is the time in seconds since the last state change. In the example, moving the Source object from the "created" to the "copying" state took 3.2 seconds. Thus the subtotal for SOURCE TIME (88.2 seconds) is equivalent to the elapsed time between creation and the final state shown.

Figure 29: SOURCE TIME Subtotals

This can be verified using the timestamp field, as in the example below (the first and last timestamps differ by 88 seconds).

Figure 30: EVENT TIMESTAMP Verification

In this example, timing information was collected for a RequestX object. Timing information for data retrieval can be collected in a similar manner by filtering on the CursorX object and fetch/close events.

5 Tracing Object Transactions

Optionally, object transactions between SAP and the Nearline Service can be traced, with the possibility of filtering on nearline object names and/or transaction types. In addition, the tracing output can be limited to either control information (XML files) or data (DAT, NDL files), or can include both types of output.

There are two parameters in the sand_nlic.properties file that govern object tracing behavior:

sand.transaction.trace.directory: the location where trace files will be written (if omitted, the files will go into a folder named "tracing" in the tasks directory)
sand.transaction.trace.filter: the tracing filter string that sets limits on what will be traced and what output will be produced

By default, object tracing does not occur. It must be enabled by specifying a filter definition for the sand.transaction.trace.filter parameter. The simplest filter definition is the following:

sand.transaction.trace.filter=*:*

which means "trace all objects, all transaction types, and generate all possible output files". In other words, do not filter tracing at all. This kind of unrestricted tracing can produce output that collectively takes up a lot of space, so care must be taken when using it. It may be preferable to define a filter that limits tracing to specific objects or transactions, and to avoid generating DAT files unless absolutely necessary. The syntax for the tracing filter is described in the next section.

Note that trace files must be deleted manually. Since these files, especially the DAT files, can occupy a substantial amount of disk space, it is a good idea to check the available space and, if necessary, clean up old files before activating object tracing.

5.1 Tracing Filter Syntax

The tracing filter syntax is the following (using BNF notation):

{{<object-name>|*}:{{<transaction-type>|*} [ONLY_TRANS|ONLY_DATA]},...};...

or, expressed visually, as a syntax diagram containing the same elements: object-name or *, a colon, transaction-type or *, an optional ONLY_TRANS or ONLY_DATA restriction, and the comma and semicolon separators.

where:

- object-name specifies a particular SAP object to trace. Alternatively, specify * (asterisk) to trace all objects.
- : (colon) separates the object-name parameter from the transaction-type parameter.
- transaction-type specifies a particular Nearline Service transaction type to trace. The possible transaction codes (for example, "CON" for CONNECT, "CTA" for CREATE_TABLE, and so on) are listed in Appendix F: Nearline Service Transaction Codes. Alternatively, specify * (asterisk) to trace all transaction types. There are a few special transaction names that represent a collection of transaction codes, which can save users some time when defining the tracing filter string. These special transactions are listed in section 5.1.1.
- A space character must separate the transaction-type or * parameter from the filter restriction, if present. This is the only whitespace allowed between parameters in the filter string.
- [ONLY_TRANS | ONLY_DATA] is the optional filter restriction parameter. The value can be one of the following:
  - ONLY_TRANS (trace transactions only)
  - ONLY_DATA (trace data only)
  If neither value is present, both transactions and data are traced (the default).
- , (comma) separates multiple transaction definitions. There must be no whitespace before or after the comma.
- ; (semicolon) separates multiple filter definitions. There must be no whitespace before or after the semicolon.

Tracing filter examples can be found in section 5.3 below.
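As a quick illustration of the syntax elements above (full examples follow in section 5.3), the following minimal sketch uses assumed object names: all transactions for /BIC/ZOBJ1 are traced with control files only, while for /BIC/ZOBJ2 only the CON and CTA transactions are traced, producing both control and data files:

sand.transaction.trace.filter=/BIC/ZOBJ1:* ONLY_TRANS;/BIC/ZOBJ2:CON,CTA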

5.1.1 Alternative Transaction Codes

There are several transaction specifiers that can be used as shorthand for multiple transaction codes, described below.

- ARCHIVE: includes all of the following archive-related transaction types for the same session: CRE (CREATE_REQUEST), OFW (OPEN_FOR_WRITE), PNO (PUT_NEXT_OBJECT), and CWR (CLOSE_WRITER).
- LOOKUP: includes all of the following lookup-related transaction types for the same session: OCL (OPEN_CURSOR_FOR_LOOKUP), FNP (FETCH_NEXT_PACKAGE), and CCU (CLOSE_CURSOR).
- QUERY: includes all of the following query-related transaction types for the same session: OCU (OPEN_CURSOR), FNP (FETCH_NEXT_PACKAGE), and CCU (CLOSE_CURSOR).
- SYSTEM: includes all SAP system calls for the same session.

For instance, specifying QUERY as the transaction type in the filter is equivalent to specifying the transactions OCU, FNP, and CCU separately for the same object.

5.2 Output Files

All tracing files are created in the location specified by the sand.transaction.trace.directory parameter in the sand_nlic.properties file, or else (by default) in the tasks/tracing directory. For calls from SAP, each session has a subfolder named according to the session ID, with all of the tracing files for the session contained in that folder. The path structure for each session's output files is the following:

<sand.transaction.trace.directory or tasks/tracing folder>/<session ID>/

The files written to this folder depend on the filter output restriction:

- ONLY_TRANS: <call ID>.xml, <call ID>.ANSWER.xml
- ONLY_DATA: <call ID>.ndl, <call ID>.dat

If there is no filter restriction on the output files, the output folder will contain all of the XML, NDL, and DAT files listed above (if applicable to the transaction type).

Files are named according to the transaction CALL_ID (refer to the list above). If a given file name already exists in the output folder, a new file with the same name will be prefixed with an incremental sequence number starting at 1 in parentheses:

(n)<file name>

For instance, if the file "b_close_writer.xml" already exists, the next output files for the same CALL_ID in the same session will have the following names:

(1)b_close_writer.xml
(2)b_close_writer.xml
(3)b_close_writer.xml

System Calls

In contrast to calls from SAP, most system calls (traced if the transaction type parameter is set to * or SYSTEM in the tracing filter) have their output files placed in the SERVICE_CALL folder:

<sand.transaction.trace.directory or tasks/tracing folder>/SERVICE_CALL/

System call trace files consist of call and answer XML files. For example:

FILE_OPERATION.xml
FILE_OPERATION_ANSWER.xml
READ_PROPERTIES.xml
READ_PROPERTIES_ANSWER.xml

Some system calls (for example, calls bypassing authentication) use a connection to the File Archive Service, and thus are associated with a session. The trace files belonging to these system calls are written to the same location as non-system call trace files in the same session:

<sand.transaction.trace.directory or tasks/tracing folder>/<session ID>/

5.3 Tracing Examples

In the following example, the tracing parameters in the sand_nlic.properties file are as follows:

#sand.transaction.trace.directory=
sand.transaction.trace.filter=/BIC/ONZEDU99I:ARCHIVE ONLY_TRANS

Since the sand.transaction.trace.directory parameter is not set, trace files will be stored in the default ${SAND_NLIC}/tasks/tracing directory. The filter defined for sand.transaction.trace.filter specifies the following:

- tracing is limited to the SAP object /BIC/ONZEDU99I
- only ARCHIVE (that is, CRE, OFW, PNO, and CWR) transactions will be traced
- only transaction call and answer XML files will be written to the trace directory

Note that the tracing filter used above is equivalent to the following string:

sand.transaction.trace.filter=/BIC/ONZEDU99I:CRE ONLY_TRANS,OFW ONLY_TRANS,PNO ONLY_TRANS,CWR ONLY_TRANS

In the next example, an explicit trace directory (/opt/sand/trace) is specified, and a more complex tracing filter string is used:

sand.transaction.trace.directory=/opt/sand/trace
sand.transaction.trace.filter=/BIC/ZCC1:* ONLY_TRANS;/BIC/ZCC2:ATA,CTA,DTA;/BIC/ZCC3:SYSTEM,LOOKUP,CWR ONLY_DATA,PNO ONLY_TRANS

The tracing filter can be broken down into three separate object filters:

/BIC/ZCC1:* ONLY_TRANS
/BIC/ZCC2:ATA,CTA,DTA
/BIC/ZCC3:SYSTEM,LOOKUP,CWR ONLY_DATA,PNO ONLY_TRANS

The first object filter (/BIC/ZCC1:* ONLY_TRANS) specifies the following:

- the SAP object /BIC/ZCC1 will be traced
- all transactions associated with /BIC/ZCC1 will be traced
- only transaction call and answer XML files associated with /BIC/ZCC1 will be written to the trace directory

The second object filter (/BIC/ZCC2:ATA,CTA,DTA) specifies the following:

- the SAP object /BIC/ZCC2 will be traced
- all ATA, CTA, and DTA transactions associated with /BIC/ZCC2 will be traced
- all XML, NDL, and DAT files associated with the /BIC/ZCC2 transactions listed above will be written to the trace directory

The last object filter (/BIC/ZCC3:SYSTEM,LOOKUP,CWR ONLY_DATA,PNO ONLY_TRANS) specifies the following:

- the SAP object /BIC/ZCC3 will be traced
- the following transactions associated with /BIC/ZCC3 will be traced: SYSTEM, LOOKUP (OCL, FNP, and CCU), CWR, and PNO
- the following files will be written to the trace directory:
  - all files associated with SYSTEM transactions
  - all files associated with OCL, FNP, and CCU transactions
  - all NDL/DAT files associated with CWR transactions
  - all XML files associated with PNO transactions

Note that, in the above example, the files generated in the /opt/sand/trace directory will be contained in subfolders named after the session IDs from when the traced transactions occurred (most SYSTEM-related transactions will be contained in a separate directory named "SERVICE_CALL"). For instance, /opt/sand/trace might contain folders and files similar to the following:

LSTA1_
  LSTA1_ _ _FNP.xml
  LSTA1_ _ _FNP.answer.xml
  LSTA1_ _ _FNP.xml
  LSTA1_ _ _FNP.answer.xml
LSTA1_
  LSTA1_ _ _PNO.xml
  LSTA1_ _ _PNO.answer.xml
  LSTA1_ _ _PNO.dat
SERVICE_CALL
  FILE_OPERATION.xml
  FILE_OPERATION_ANSWER.xml
  READ_PROPERTIES.xml
  READ_PROPERTIES_ANSWER.xml
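Because trace files must be deleted manually (see the note at the beginning of this chapter), it can be useful to clear out old trace output periodically. The following is a minimal sketch, assuming the default trace location and a 14-day retention window (both are assumptions; adjust the path and the age to suit your site):

# Delete trace files older than 14 days from the default tracing directory.
find "${SAND_NLIC}/tasks/tracing" -type f -mtime +14 -exec rm -f {} \;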

6 Monitoring the Nearline Store with Sysmon

The File Archive Service System Monitor utility (sysmon) can provide snapshot views of the nearline store. The utility writes the following information to a log file:

- the status of system hard disks
- the number of File Archive Service and File Archive Repository Service port connections
- Nearline Service and File Archive Service log file errors (in the last 600 seconds, or 10 minutes)
- current File Archive Service clients and sessions
- running File Archive Service Administration Console, File Archive Service Agents, processes, tasks, and jobs
- vmstat/WMI output

6.1 Sysmon Syntax

sysmon [ <flags> ]

Flags (can be specified in any order):

-h  Display the online help screen.
-f n  The number of log files to retain. By default, 20 logs are kept.
-l log-dir  The directory where log files will be written. By default, the log directory is $SAND_NLIC/logs (%SAND_NLIC%/logs in Windows).
-p n  The amount of time (in seconds) to pause between snapshot information output. By default, sysmon writes snapshot information to a log file every 10 seconds.
-s system-name  The Nearline system name, used to specify a particular system if multiple systems are running on the same machine. The system-name value can also be the user ID or the directory where the Nearline Service-related executables reside. By default, snapshot views of all running Nearline systems are generated.
-t temp-dir  The directory where temporary sysmon files will be written. By default, the temporary directory is $SAND_NLIC/tmp (%SAND_NLIC%/tmp in Windows).

-v install-dir  The directory where File Archive Service is installed. By default, the install directory is $SSA_INI_DIR (%SSA_INI_DIR% in Windows).

Example

sysmon -l /snic/pd1/logs -t /snic/pd1/tmp -s VB5 -v /snic/pd1/dna

6.2 Description

The sysmon utility writes point-in-time information to a log file. It can be started manually, but it is intended to be run as a cron job in UNIX/Linux, or as a scheduled task in Windows. Each log file written by sysmon represents a snapshot of the system state at the time of execution.

The number of log files to retain, the directory path where they will be written, and the interval between snapshots can be customized via command line flags. Other parameters that can be set on the sysmon command line include the Nearline system name, the directory where temporary files will be written, and the directory where the File Archive Service is installed. If the File Archive Service is installed in the default location ($SSA_INI_DIR), then all of the sysmon parameters are optional and can be omitted.

The log files created by sysmon are named "sysmon_n.log" (where n is the rolling log file number), and are located in the path specified by the -l parameter (by default, $SAND_NLIC/logs). Within the SAP environment, the logs can be viewed through transaction AL11. The latest sysmon log file is always named "sysmon_0.log". When a new log file is created, if the current number of logs is already at the limit set by the -f parameter (20 by default), the oldest log file is deleted. The remaining log files are renamed with the index number incremented by one for each, and the new log file is generated with an index number of 0.

For example, if the maximum number of logs is set to 4, and the current log files are the following:

sysmon_0.log
sysmon_1.log
sysmon_2.log
sysmon_3.log

the next generated log file will cause "sysmon_3.log" to be deleted, and the rest of the logs will have their log number incremented: "sysmon_2.log" becomes "sysmon_3.log", "sysmon_1.log" becomes "sysmon_2.log", and "sysmon_0.log" becomes "sysmon_1.log". The new "sysmon_0.log" file, containing the latest system information, is then written.

6.3 Log File

Each log file generated by the sysmon utility contains multiple sections, each describing a specific area of the Nearline system:

- DISK STATUS
- RUNNING SSA PROCESSES LIST

- SCAN ERROR IN THE LOG FILE
- NETSTAT COMMAND
- SSA ADMINISTRATOR VIEW
- SSASERVICE JOB MONITOR VIEW
- VMSTAT

Note that each section displays a "STATUS" below the section heading. If any issues are detected in the context of that section, the status will read "ERROR". For example, if the command or tool for generating the content of the section could not execute, the status will be "ERROR", and the associated error message(s) will be displayed. Otherwise, if the output for a log file section was generated without any problems, the status will be "SUCCESS".

6.3.1 DISK STATUS

The DISK STATUS section contains information about the hard disks mounted in the Nearline system, including the number of blocks on the disk, the amount of free space, and the percentage of the disk that is in use. The STATUS line will read "WARNING" if any of the disks are using 80% or more of their capacity. These disks will be listed immediately below this line, in descending order of percentage used. For example:

DISK STATUS
STATUS: WARNING
/dev/lvlgcide2 97% full
/dev/hd2 95% full
/dev/hd4 85% full
Filesystem 1024-blocks Free %Used Iused %Iused Mounted on
/dev/hd % % /
/dev/hd % % /usr
/dev/hd9var % % /var
/dev/hd % % /tmp
/dev/fwdump % 11 1% /var/adm/ras/platform
/dev/hd % % /home
/proc /proc
/dev/hd10opt % % /opt
/dev/lvlgcide % % /snic
/dev/lvtempdata % 108 1% /tempspace_

6.3.2 RUNNING SSA PROCESSES LIST

The RUNNING SSA PROCESSES LIST section displays information about the File Archive Service processes that are currently running. The output (which includes such things as user ID, process ID, start time, and command line) is equivalent to executing the system command ps -f.

For example:

RUNNING SSA PROCESSES LIST
STATUS: SUCCESS
sand :46:25-0:02 /snic/pd1/dna/ssaagent
sand :46:25-0:56 /snic/pd1/dna/ssaserver
sand :46:25-61:53 /snic/pd1/dna/ssaeng meta adm
sand :46:25-0:02 /snic/pd1/dna/ssaagent
Number of registered ssaagents match number described in ssa.ini file (2)

6.3.3 SCAN ERROR IN THE LOG FILE

The SCAN ERROR IN THE LOG FILE section greps the File Archive Service and the Nearline Service log files and lists all the error and/or warning messages written to these files in the last 600 seconds (10 minutes). For example:

SCAN ERROR IN THE LOG FILE
STATUS: ERROR
FILE:/raid5nbb/vassiliy_db/snic2_2/logs/server/ssa _ log >> ERROR(1) Tue Nov 3 15:08:06 EST 2009 Thrd:0x1b21 EXECUTE DIRECT< SOCKET: 2 > META
FILE:/raid5nbb/vassiliy_db/snic2_2/logs/server/ssa _ log [SSA Metadata Database][SQLExecDirect] Syntax error: 'create<sd>;'.
FILE:/raid5nbb/vassiliy_db/snic2_2/logs/server/ssa _ log >> ERROR(1) Tue Nov 3 15:08:06 EST 2009 Thrd:0x1b21 SCT_SERVER<->COM thread SCT_SERVER
FILE:/raid5nbb/vassiliy_db/snic2_2/logs/server/ssa _ log Request 'EXECUTE DIRECT' < SOCKET: 2 QUERYID: SSA SQL: "create sd ;" > failed with error: #200 "[SSA Metadata Database][SQLExecDirect] Syntax error: 'create<sd>;'

6.3.4 NETSTAT COMMAND

The NETSTAT COMMAND section lists the number of connections on the File Archive Service and File Archive Repository Service ports. For example:

NETSTAT command
STATUS: SUCCESS
Number of SSAserver connections on PORT:
Number of Metadata connections on PORT:

6.3.5 SSA ADMINISTRATOR VIEW

The SSA ADMINISTRATOR VIEW section lists the currently running File Archive Service Agent processes, the File Archive Service Administration Console, client connections, and client sessions, as well as Agent tasks that are currently executing or are queued for execution, as returned by the File Archive Service Administration Console. For example:

SSA ADMINISTRATOR VIEW
STATUS: SUCCESS
The admin: <host: AIX53, AID: >, was successfully registered
The admin: <host: AIX53, AID: >, was registered in READ ONLY mode
Ready>Agent(s), count 2:
Host AID Priority State Start Time
AIX READY Tue Nov 3 14:46:26 EST 2009
AIX READY Tue Nov 3 14:46:26 EST 2009
Ready>Administrator(s), count 1:
Host AID Mode
AIX READ ONLY
Ready>Client list, count 1:
ID User Elapsed Active # queries # active Output size Output rows
DBA
Ready>Session info, count 0.
Ready>Task(s), count

6.3.6 SSASERVICE JOB MONITOR VIEW

The SSASERVICE JOB MONITOR VIEW section displays information about current and completed SSAService activities, as returned by the File Archive Service Monitor CLI. For example:

SSASERVICE JOB MONITOR VIEW
STATUS: SUCCESS
Service/Jobs/Steps PID UID PSR CPU MEM
SSASERVICE qwerty

6.3.7 VMSTAT

The VMSTAT section displays the virtual memory statistics returned by the UNIX/Linux vmstat command, or from the Windows Management Instrumentation (WMI) service via scripting.

Example (vmstat):

VMSTAT
STATUS: SUCCESS
System configuration: lcpu=8 mem=7743mb
kthr memory page faults cpu
r b avm fre re pi po fr sr cy in sy cs us sy id wa
System configuration: lcpu=8 mem=7743mb
kthr memory page faults cpu
r b avm fre re pi po fr sr cy in sy cs us sy id wa
System configuration: lcpu=8 mem=7743mb
kthr memory page faults cpu
r b avm fre re pi po fr sr cy in sy cs us sy id wa

Example (WMI):

VMSTAT
STATUS: SUCCESS
CPU 0 USER:0 % IDLE:100 %
Physical Memory
Total : 768 Mb
Available : Mb
System Cache: Mb
Commit Charge
Total: Mb
Limit: 2,721.5 Mb
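As noted in the Description section, sysmon is intended to be scheduled rather than started by hand. The following is a minimal sketch of a UNIX crontab entry; the installation paths are taken from the example in section 6.1 and are assumptions, so adjust them for your site:

# Start sysmon every day at 06:00 with explicit log, temp, and install directories.
0 6 * * * /snic/pd1/dna/sysmon -l /snic/pd1/logs -t /snic/pd1/tmp -v /snic/pd1/dna

On Windows, an equivalent scheduled task can be created with the Task Scheduler.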

7 Verifying Metadata and SCT File Consistency

The File Archive Service Metadata Validation program (ssavalidate) detects and reports discrepancies in the File Archive repository, optionally fixing the problems automatically (where possible). Over time, especially if the File Archive repository is migrated to a different system, inconsistencies can develop in the metadata as SCT files are accidentally moved or deleted from their original locations. The ssavalidate tool is designed to find such metadata issues and, if possible, correct them. In addition to metadata analysis, the ssavalidate tool can also be used to test the logical structure of registered SCT files for inconsistencies and to evaluate the integrity of the encoded data.

Important: Using the -a flag, this program can operate on an active system (for instance, while data archiving is being performed), so it can be scheduled to provide administrators with consistency reports on a regular basis. But note that if the program is run without the -a flag on an active system, it might report false errors in report-only mode, and can disrupt archiving processes in fix mode. Conversely, running ssavalidate with the -a flag on a non-active system could result in some issues not being reported. Therefore, the -a flag should be included in the ssavalidate invocation only for active systems.

7.1 Ssavalidate Syntax

The ssavalidate syntax is as follows:

ssavalidate [ <flags> ]

Flags (can be specified in any order):

-h  display the online help
-a  run the consistency check on an active system
-c  test the integrity of the registered SCT files in the -s locations
-d  delete orphan SCT files only
-f  fix metadata and delete orphan SCT files automatically
-l <CSV file>  fix Nearline Service metadata using CSV information generated by the consistency check program
-m n  specify the number of threads used for SCT file integrity checking (the default is 1)
-r n  delete only orphan SCT files older than n hours (the default is 0.5, which is 30 minutes)

-s <dir>[,<dir>]  SCT file path(s). Multiple comma-separated paths can be specified; if a path contains spaces, it must be enclosed in quotation marks. The default path is "%SSA_INI_DIR%\..\sct" (Windows) or "$SSA_INI_DIR/../sct" (UNIX).
-t <dir>  working directory for log and runtime files. The default path is "%SSA_INI_DIR%\..\tmp" (Windows) or "$SSA_INI_DIR/../tmp" (UNIX).

Default: validate metadata consistency and report only (equivalent to "ssavalidate -m 1 -r 0.5 -s <default> -t <default>")

Example

ssavalidate -a -c -m 4 -s $SAND_NLIC/sct/ -t $SAND_NLIC/logs/

This command will operate on an active system, looking for metadata issues and orphan SCT files; then it will test the integrity (-c) of any SCT files listed in the metadata that exist in the specified file paths (-s), using four threads in parallel (-m). The results are displayed on-screen and in a summary report file ("report_<timestamp>.log") in the temporary folder (-t).

7.2 Description

There are three modes of operation for the ssavalidate utility:

- Check the metadata for inconsistencies and report the findings only
- Check the metadata for inconsistencies and automatically fix them where possible; list the issues that must be corrected manually
- Implement the inconsistency fixes detailed in a CSV file generated by the consistency check program

Additionally, registered SCT files can be tested to ensure that they have not been damaged.

While administrators can schedule ssavalidate to run in report-only mode every day, it really only needs to run after data archiving processes have finished, or after changes that affect the Nearline system (upgrades, porting, and so on). Since the ssavalidate -c option (test SCT file integrity) can require a lot of time to run, it should probably be executed at most once per month.

If the summary produced by a report-only metadata validation does not show any data-loss situations (which would require that Informatica Global Customer Support be contacted), ssavalidate can be run subsequently in fix-metadata mode (-f) to correct inconsistencies, if any were detected.

CSV mode is typically used in disaster recovery situations, or after system refreshes and ports, but it can be run periodically in conjunction with the consistency check program to verify nearline system consistency. Note that the consistency check program can only run on a non-active system. For more information about using the consistency check program and ssavalidate together, see the "Checking Consistency" chapter in the ILM Nearline Backup and Restore Guide.

7.2.1 Validate Metadata and Report Only

The report-only mode is the default, used if the -d and -f flags are omitted from the program invocation. In this mode, the metadata is checked for inconsistencies, with the results of the analysis displayed on the console and also written to a text report (see below).

If "orphan" SCT files are found on disk, the standard output will conclude with the following message:

METADATA VALIDATION IS FINISHED. ORPHAN SCT FILES DETECTED. RUN PROGRAM WITH -d or -f FLAG TO FIX

If metadata errors, as well as orphan SCT files, are found, the standard output will conclude with the following message:

METADATA VALIDATION IS FINISHED. PROBLEMS DETECTED. RUN PROGRAM WITH -f FLAG TO FIX

Otherwise, if no problems were encountered, the following message will appear:

METADATA VALIDATION IS FINISHED. NO PROBLEMS DETECTED

Note that no actual changes are made to the metadata in this mode.

7.2.2 Validate Metadata and Fix

The fix-metadata mode, activated by including the -f flag, analyzes the metadata for inconsistencies and looks for orphan SCT files, then attempts to fix these issues automatically. As with the report-only mode, the results of the analysis are displayed on the console, as well as written to a log (see below). To skip metadata analysis/correction and only delete orphan SCT files, include the -d flag instead of -f.

If fixable metadata errors are found, the standard output will conclude with the following message:

METADATA SYNCHRONIZATION IS FINISHED. EXISTING ERRORS FIXED

If one or more registered SCT files could not be found on disk, the following message will be returned:

METADATA SYNCHRONIZATION IS FINISHED, BUT SOME MANUAL CORRECTION IS REQUIRED

In this case, the report file should be examined to determine which fixes require manual implementation. Note that any fixable metadata errors will have been corrected automatically. Otherwise, if there were no problems to fix, the standard output will conclude with the following message:

METADATA VALIDATION IS FINISHED. NO PROBLEMS DETECTED

7.2.3 Process CSV Corrections

CSV mode, enabled via the -l flag, is actually the second part of a two-part process. Before ssavalidate can operate in this mode, a CSV file must have been generated by the consistency check program. The consistency check program verifies the consistency of data and object structures between SAP BW and the Nearline Service, returning (in CSV format) the appropriate corrections for any discovered metadata inconsistencies. The CSV file produced by the consistency check program should then be processed by ssavalidate to implement the corrections.

The results of the CSV processing are displayed on the console, as well as written to a log (see below). If no errors are encountered during the CSV processing, the following message should be displayed:

SNIC SERVER METADATA FIXED

Note that after the CSV changes are implemented, ssavalidate will continue with either standard metadata validation (the default) or will validate and fix the metadata (if the -f option was also specified). See the previous sections for information about the messages returned by those operations. For more information, see the ILM Nearline Backup and Restore Guide.

7.2.4 Test SCT Files

If the -c flag is included in the invocation, ssavalidate will also test the integrity of registered SCT files in the directories specified by the -s flag (orphan SCT files are not checked). The SCT file integrity testing includes internal metadata verification and checksum validation of encoded data. When the SCT file testing is complete, if any files failed the integrity checks, the following message will be returned:

Error(s) detected while testing SCT file integrity. Check report file for details

In this situation, the list of files that failed the tests will be written to the summary report (see below). If errors are detected in an SCT file and no backup is available, that file will most likely have to be re-created, as there is currently no way to correct such errors.

Note that SCT file testing requires considerably more time than metadata validation, so if there are many SCT files to test, the ssavalidate operation can be time-consuming. A way to speed up SCT file testing on multiprocessor systems is through multithreaded operation. The ssavalidate -m flag specifies the number of parallel threads that will be used for SCT file testing. As a rule of thumb, there can be as many threads as there are machine processors. For example, -m 4 specifies four threads, which means that four SCT files can be tested concurrently if there are at least four CPUs in the system.

7.3 Output Files

Ssavalidate produces two output files when it runs:

- a summary report
- a log file

7.3.1 Summary Report

Each time ssavalidate finishes executing, it creates a text file containing a summary of the operation in the directory specified by the -t parameter. This report includes the following information:

- the date and time when ssavalidate ran
- statistics on the number of nearline tables and SCT files affected by the operation
- the SCT file locations on disk specified for the operation
- a list of all metadata records that were validated
- a list of all SCT files on disk that were validated
- a list of SCT files, if any, that failed an integrity check
- statistics for the number of manual corrections required, fixed errors, and warnings
- a summary of the missing SCT files on disk
- a summary of all metadata errors that were corrected automatically
- a summary of warnings relating to the operation

The list of issues where manual correction is required on the part of the user is located in the summary section of the report, under the heading "MANUAL CORRECTIONS REQUIRED". The list of metadata issues that were fixed automatically is located in the summary section of the report, under the heading "ERRORS FIXED AUTOMATICALLY". The list of warnings (for example, orphan SCT files that were not removed automatically) is located in the summary section of the report, under the heading "WARNINGS". The list of SCT files that were found to be damaged is located in the section of the report called "SCT FILE INTEGRITY CHECK".

The summary report is named "report_yyyy_mm_dd_hh_mm_ss.log", where yyyy_mm_dd_hh_mm_ss is the file's creation timestamp. For example:

report_2009_04_20_15_19_56.log

7.3.2 Log File

While ssavalidate is running, it writes trace information to a log file in the directory specified by the -t parameter. This diagnostic information includes timestamped function calls, detected errors, and automatic fixes. If required, the log file can be analyzed to troubleshoot a problem.

The log file is named "ssavalidate_yyyy_mm_dd_hh_mm_ss.log", where yyyy_mm_dd_hh_mm_ss is the file's creation timestamp. For example:

ssavalidate_2009_04_20_15_19_54.log

7.4 Examples

ssavalidate

The ssavalidate command, without any parameters, validates the metadata and looks for orphan SCT files in the default directory ("%SSA_INI_DIR%\..\sct" in Windows, or "$SSA_INI_DIR/../sct" in UNIX). Detected issues are written to a summary report only. The report, as well as the log file and all temporary files, are written to the default temp directory ("%SSA_INI_DIR%\..\tmp" in Windows, or "$SSA_INI_DIR/../tmp" in UNIX).

ssavalidate -a -s d:\sand\sct -t "C:\SCT\temp files"

The above command validates the metadata and looks for orphan SCT files in the "d:\sand\sct" directory. Detected issues are written to a summary report only, which, along with the log and temporary files, is written to the "C:\SCT\temp files" directory. The command can run on an active system.

ssavalidate -d -s d:\sand\sct

The above command looks for orphan SCT files in the "d:\sand\sct" directory and deletes any that are found.

ssavalidate -f -r 24 -s d:\sand\sct,\\alpha\public

The above command validates the metadata and looks for orphan SCT files in the "d:\sand\sct" directory and the shared directory "Public" on the computer named "ALPHA". Metadata errors are fixed automatically if possible, and orphan SCT files older than 24 hours are deleted. The report, log file, and temporary files are created in the default location.

ssavalidate -c -m 2

In addition to validating the metadata and looking for orphan SCT files in the default directory, the above command also checks the integrity of all registered SCT files in the same directory. The number of threads for the SCT file checking is set to two, which means that (assuming there are at least two processors in the current machine) two SCT files can be tested simultaneously, reducing the amount of time required for this kind of validation. Detected issues, including a list of damaged SCT files, are written to a summary report in the default temp directory.

ssavalidate -a -c -m 4 -f -r 168 -s d:\sand\sct,"c:\sct\new files",c:\tmp\sct -t C:\tmp

The above command can operate on a system while data archiving processes are running. It does the following:

- looks for metadata issues and orphan SCT files
- automatically fixes metadata issues if possible
- deletes orphan SCT files older than 168 hours (7 days) in the "d:\sand\sct", "c:\sct\new files", and "c:\tmp\sct" paths
- tests the registered SCT files (four at a time, if the current machine has at least four CPUs) in the "d:\sand\sct", "c:\sct\new files", and "c:\tmp\sct" paths
- writes a summary report, log file, and all temporary files to the "C:\tmp" directory

ssavalidate -f -l scc.csv -r 720 -s d:\sand\sct -t "C:\SCT\temp files"

This command implements corrections from a CSV file (scc.csv) that was previously generated by the File Archive Service consistency check program. It also looks for metadata issues (attempting to fix them automatically) and orphan SCT files. Any orphan SCT file older than 720 hours (30 days) in the "d:\sand\sct" directory will be deleted. The report, as well as the log file and all temporary files, are written to the "C:\SCT\temp files" directory.
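For scheduled, report-only monitoring, the documented console messages from section 7.2.1 can be checked by a small wrapper. The following is a minimal sketch; the paths are taken from the example in section 7.1 and are assumptions, and it assumes the concluding messages are written to standard output. Note that "NO PROBLEMS DETECTED" must be tested before "PROBLEMS DETECTED", which it contains as a substring:

#!/bin/sh
# Run a report-only validation on an active system and classify the result.
OUT=`ssavalidate -a -s "$SAND_NLIC/sct/" -t "$SAND_NLIC/logs/"`
case "$OUT" in
  *"NO PROBLEMS DETECTED"*)      echo "OK: no problems detected" ;;
  *"ORPHAN SCT FILES DETECTED"*) echo "Orphan SCT files found: consider -d or -f" ;;
  *"PROBLEMS DETECTED"*)         echo "Metadata problems: review the summary report" ;;
esac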

8 Monitoring License Usage and Compression

8.1 The License Monitoring Utility

You can use a utility to analyze and monitor File Archive Service license usage at the nearline level. The License Monitoring utility reports information related to the license, such as the compressed and uncompressed data size per ILM Nearline table, per active SCT file, and for the overall system.

To run the License Monitoring utility:

1. From area menu /SAND/0_MAINP, select Administration > Display File Archive Service license information. The License Monitoring utility options will appear.

Figure 31: License Monitoring Utility Startup Screen

Note: Documentation for the License Monitoring utility can be viewed by clicking the Information button.

2. Specify the nearline connection on which to run the report, enter the number of seconds to wait for the report to be generated before timing out (by default, 300 seconds), and then click the Execute button. The license size report will be generated.

The License Monitoring Utility Report

The initial license monitoring report screen consists of two parts. The upper part shows general statistics and information related to licensing for the overall system (Figure 32).

Figure 32: License Monitoring Utility Report (General)

The lower part of the initial license report screen shows license size information, broken down by table (Figure 33). Note that if a table displays blank values across most of its fields, there are no registered SCT files for that table, and therefore no license size information to display.

Figure 33: License Monitoring Utility Report (Tables)

The fields in this part of the report are as follows:

- Table ID (the internal table identifier stored in the File Archive repository)
- File Archive Service Schema (the table's schema)
- File Archive Service Table
- Number of Columns (the number of columns in the table)
- SCT Kbytes (the total size of all SCT files for this table)
- Total Number of Rows in the File Archive Service Table (the total number of rows across all SCT files for this table)
- File Size Kbytes (the total uncompressed size of the data in this table)
- Compression Rate (the rate of compression for this table)
- Number of Requests (the number of active nearline requests associated with this table)
- Number of Files (the number of SCT files for this table)

In the toolbar, there are standard SAP function buttons that allow the table report section to be manipulated or viewed in different ways, printed, and saved to file/clipboard:

- Details (Ctrl+Shift+F3): display the license size details for the selected table.
- Sort in Ascending Order (Ctrl+F4): sort the table on the selected column in ascending order.
- Sort in Descending Order (Ctrl+Shift+F4): sort the table on the selected column in descending order.
- Set Filter (Ctrl+F5): set a filter on the displayed details.
- Total (Ctrl+F6): display the sum for the selected column.
- Print Preview (Ctrl+Shift+F10): display a preview of the table that can be subsequently printed.
- Change Layout (Ctrl+F8): change the table layout.
- Select Layout (Ctrl+F9): select the table layout from the list of saved layouts.
- Save Layout (Ctrl+F10): save the current table layout.
- Local File (Ctrl+Shift+F9): save the table to a file format or the clipboard.
- Refresh (Shift+F5): refresh the information in the report.

License Size Details per Table

To view license size information for a specific table, click the associated Table ID.

Figure 34: Table IDs

The license size details for the table, broken down by SCT file, will appear.

Figure 35: SCT File License Size Info for a Specific Table

The fields in this report are the following:

- File ID (the internal SCT file identifier stored in the File Archive repository)
- File Name (the SCT file name)
- Path (the location of the SCT file)
- NLS Request ID (the SAP NLS request identifier)
- Total Number of Rows in SCT File (the number of records in the SCT file)
- SCT KBytes (the size of the SCT file in kilobytes)

Additional fields will be displayed if the Detailed Info button at the bottom of the screen has been clicked (see section "Detailed Information Mode" for more information):

- Number of Columns (the number of columns in the SCT file)
- Number of Domains (the number of domains used in the SCT file)
- Max Length of a Row (the length of the longest record in the SCT file)
- Compression Rate (the compression rate for the SCT file)

At the bottom of the file details report, there are standard SAP function buttons that allow the report to be manipulated or viewed in different ways, printed, and saved to file/clipboard:

- Details (F2): display the license size details for the selected SCT file.
- Sort in Ascending Order (Ctrl+F4): sort the table on the selected column in ascending order.
- Sort in Descending Order (Ctrl+Shift+F4): sort the table on the selected column in descending order.
- Set Filter (Ctrl+F5): set a filter on the displayed details.

- Print Preview (Ctrl+Shift+F10): display a preview of the table that can be subsequently printed.
- Local File (Ctrl+Shift+F9): save the table to a file format or the clipboard.
- Change Layout (Ctrl+F8): change the table layout.
- Close (F12): close the SCT files list.

Detailed Information Mode

In addition to the standard buttons, there is also the Detailed Info button (Ctrl+F11), which turns on the display of extra details for all of the table's SCT files. Because accumulating this extra information (especially calculating the compression rate) can take a substantial amount of time, depending on the number of SCT files for the table, a warning will be displayed when this button is pressed (Figure 36).

Figure 36: SCT File Extra Details Warning

If YES is clicked, once the extra details are amassed, the report will be re-displayed with four additional fields (Number of Columns, Number of Domains, Max Length of a Row, and Compression Rate).

Figure 37: License Monitoring Utility Report for a Specific Table (Extra Details)

As an alternative to accumulating and displaying the extra information for all of the table's SCT files, the File ID of a specific SCT file can be clicked to show the extra information for that file only. The extra columns will appear, but the information will be filled in only for those SCT files whose File ID has been clicked.

Figure 38: Extra Details for a Specific SCT File Only

8.2 The Compression Rate Utility

You can use a utility to view data compression information from the SAP side. The Compression Rate Utility displays the uncompressed request sizes, the associated compressed (SCT) sizes, the number of requests, and the compression rates, overall and broken down by InfoProvider, for active requests, unregistered requests, and undefined requests.

To run the Compression Rate Utility:

1. Go to transaction SA38.
2. In the Program field, enter /SAND/0_UTIL_COMPRESSION and click the Execute button (F8). The Compression Rate Utility output appears.

Figure 39: Compression Rate Utility

The main section lists the overall compression statistics (number of requests, request size, SCT size, compression rate), followed by a table of the request information, split by InfoProvider. The table displays the following for each InfoProvider:

- nearline connection name
- InfoProvider name
- archive request size in kilobytes
- SCT size in kilobytes
- number of requests
- calculated compression rate

Below that table is the unregistered requests information, if applicable.

Figure 40: Unregistered Requests

This section lists the number of requests that have been unregistered and their total request size. The associated table shows the unregistered requests information, broken down by InfoProvider. The table displays the following for each InfoProvider:

- nearline connection name
- InfoProvider name
- request size in kilobytes
- number of requests

Below the unregistered requests table is the "undefined" requests information, if applicable.

Figure 41: Undefined Requests

This section lists the number of requests with FAILED, SKIPPED, and STARTED activity states, and the total size of these requests. The associated table shows the request information, broken down by InfoProvider. The table displays the following for each InfoProvider:

- nearline connection name
- InfoProvider name
- request size in kilobytes
- number of requests

If undefined archive requests are encountered, they can be fixed using the /SAND/0_DELETE_NLS_REQUESTS program. To access the program, select Administration > Delete nearline requests from the area menu, or start transaction /SAND/0_DELETE_REQ. First, address the causes of the failed requests. After that, click the Fix Errors in /SAND/0_REQ_STAT button to display the requests in a list.

Figure 42: Undefined Requests in 0_DELETE_NLS_REQUESTS Program

Select the requests, and then click the FIX REQUESTS button. The request activities will be re-executed. For more information, see the "Repeating Failed Activities" section in the "Deleting Archive Requests at the Nearline Level" chapter of the ILM Nearline User Guide.

Note: A request that is in the STARTED state is considered "undefined" even if it has not yet timed out. Such a request does not necessarily need to be fixed, and it will not show up in the Fix Errors list in any case.

At the bottom of the Compression Rate Utility output is some general information.

Figure 43: Compression Rate Utility General Info

This information includes the following:

- the number of archive request inconsistencies between SAP and the nearline store (this should always be 0)
- the SAP system ID and installation number
- the timestamp when the Compression Rate Utility was executed
- the current user name

9 File Cleanup and Archiving

9.1 The File Operation Interface

To communicate requests to the nearline store, control files are placed in the tasks directory. Files with the extension .call.xml are processed by the system. During processing, the files are renamed to .processing.xml and, on completion, .done.xml. Additionally, an answer to the call is produced on completion, in the form of a file having the same name but with an .answer.xml suffix. For example:

mycall.call.xml -> mycall.processing.xml -> mycall.done.xml + mycall.answer.xml

During the process of moving data to the nearline store, a number of components are used. For support purposes, the log file of each component is retained by the system. There are services to aid in the management of these log files, which may either be archived as tarred, gzipped files, or deleted from the system. The interface for the management of these files is described below, followed by various examples.

<!ELEMENT SERVICE_CALL (FILE_OPERATION)>
<!ELEMENT SERVICE_ANSWER (R_FILE_OPERATION)>
<!ATTLIST SERVICE_ANSWER RESULT (SUCCESS | FAILED) #REQUIRED>
<!ELEMENT FILE_OPERATION (SOURCE, FILTER?, TARGET?)>
<!ATTLIST FILE_OPERATION TYPE (MOVE | DELETE | ARCHIVE | LIST) #REQUIRED>

A file operation enables one or more files to be specified and DELETEd, MOVEd, or ARCHIVEd to a tarred, gzipped file. This element contains:

<!ELEMENT SOURCE (FILENAME*)>
<!ATTLIST SOURCE PATH CDATA #REQUIRED>

The source of the operation is required. It consists of a source path and zero or more FILENAMEs to process. If a filter is used, the FILENAMEs are ignored.

<!ELEMENT FILTER (#PCDATA)>
<!ATTLIST FILTER
  DAYSOLD CDATA #IMPLIED
  BEFORE CDATA #IMPLIED
  AFTER CDATA #IMPLIED
  LARGER CDATA #IMPLIED
  SMALLER CDATA #IMPLIED
>

The next tag to consider is the FILTER tag. The data of a FILTER tag is a regular expression (as described in Appendix D: Regular Expressions) to apply to the filename. If the regular expression matches, the file matches the filter. The use of a regular expression makes building the mask somewhat more complicated, but much more powerful, than a typical file mask used by a UNIX shell.

In addition to the regular expression matching, there are four other restrictions that can be considered; they can be specified in five different ways (DAYSOLD is an ease-of-use alternative to BEFORE). Note that a file must match all specified attributes in order to be included in the operation. The attributes are as follows:

- BEFORE="Date": Files with a modification time before this date will be considered to match, if they also match the REGEX filter and all other specified attributes.
- AFTER="Date": As above, but files modified after this date.
- DAYSOLD="number of days": An ease-of-use specifier that is translated to BEFORE using the current date. DAYSOLD overrides BEFORE, if both are specified.
- LARGER="Size in bytes": Files larger than this value in bytes will be considered to match, if they also match the REGEX filter and all other specified attributes.
- SMALLER="Size in bytes": As above, but smaller files.

When specifying literal dates, a date picture parameter can be included in the sand_nlic.properties file (see Appendix E: sand_nlic.properties). By default, this is as follows:

sand.nlic.datepic=yyyy-MM-dd

For valid datepics (patterns), refer to the documentation for the Java java.text.SimpleDateFormat class.

<!ELEMENT TARGET (FILENAME?)>
<!ATTLIST TARGET PATH CDATA #REQUIRED>

In the case of a MOVE or ARCHIVE operation, the TARGET directive can be used to indicate where the files are to be stored. The FILENAME is optional, but should be used when ARCHIVE'ing to specify the name of the archive.

Notes on archiving:

1. When archiving, files that are archived are deleted from their original location.
2. If archiving or deletion leaves a directory empty, the directory is also removed.
3. Extra path information is retained in the archive.
4. If the target path (specified in TARGET) does not exist, it will be created (this also applies to MOVE operations).

5. On AIX systems, use the "tar -tvif" command to view the contents of an archive, rather than the "tar -tvf" command. The latter command can produce false checksum errors.

The answer tag for a file operation is as follows:

<!ELEMENT R_FILE_OPERATION ((SOURCE, TARGET?) | BI_EXCEPTION)>
<!ATTLIST R_FILE_OPERATION TYPE (MOVE | DELETE | ARCHIVE) #REQUIRED>

The answer tag reports on the files that were processed by the operation. The files are listed in the SOURCE tag, with a separate FILENAME entry for each file.

9.2 File Operation Interface Examples

The following examples illustrate how to use the file operation interface.

This task recursively lists log files older than three days from the logs directory:

<?xml version="1.0" encoding="iso-8859-1"?>
<!DOCTYPE SERVICE_CALL SYSTEM "sand_binli.dtd">
<SERVICE_CALL>
  <FILE_OPERATION TYPE="LIST">
    <SOURCE PATH="c:\opt\2.3\logs" />
    <FILTER RECURSIVE="TRUE" DAYSOLD="3"><![CDATA[(.*Log$|.*log$)]]></FILTER>
  </FILE_OPERATION>
</SERVICE_CALL>

This task recursively archives log files older than three days from the logs directory into the binli-logs.tar.gz file:

<?xml version="1.0" encoding="iso-8859-1"?>
<!DOCTYPE SERVICE_CALL SYSTEM "sand_binli.dtd">
<SERVICE_CALL>
  <FILE_OPERATION TYPE="ARCHIVE">
    <SOURCE PATH="c:\opt\2.3\logs" />
    <FILTER RECURSIVE="TRUE" DAYSOLD="3"><![CDATA[(.*Log$|.*log$)]]></FILTER>
    <TARGET PATH="c:\opt\2.3\archive">
      <FILENAME>binli-logs.tar.gz</FILENAME>
    </TARGET>
  </FILE_OPERATION>
</SERVICE_CALL>

This task recursively deletes ALL files older than three days from the logs directory:

<?xml version="1.0" encoding="iso-8859-1"?>
<!DOCTYPE SERVICE_CALL SYSTEM "sand_binli.dtd">
<SERVICE_CALL>
  <FILE_OPERATION TYPE="DELETE">
    <SOURCE PATH="c:\opt\2.3\logs" />
    <FILTER DAYSOLD="3" RECURSIVE="TRUE"><![CDATA[.*$]]></FILTER>
  </FILE_OPERATION>
</SERVICE_CALL>

This task explicitly deletes a few files from a directory:

<?xml version="1.0" encoding="iso-8859-1"?>
<!DOCTYPE SERVICE_CALL SYSTEM "sand_binli.dtd">
<SERVICE_CALL>
  <FILE_OPERATION TYPE="DELETE">
    <SOURCE PATH="c:\opt\2.3\logs">
      <FILENAME>a</FILENAME>
      <FILENAME>b</FILENAME>
    </SOURCE>
  </FILE_OPERATION>
</SERVICE_CALL>

9.3 Using File Operations to Delete Log Files

The following file operations can be used to clean up and archive system log files. It is suggested that a crontab script be used to copy a call file with the following contents to the tasks directory (including the communication ok file). This might be done every 30 days, for example. The crontab could then also collect the binli-logs.tar.gz file and give it a unique name to prevent it from being overwritten.

Sample crontab script:

#!/bin/sh
# Remove the file_archive.answer file we will wait on
rm ${SAND_NLIC}/tasks/file_archive.answer.xml
# Copy the process file into the tasks folder
cp /home/sand/file_archive.call.xml ${SAND_NLIC}/tasks
touch ${SAND_NLIC}/tasks/file_archive.call.xml.ok
while [ ! -f ${SAND_NLIC}/tasks/file_archive.answer.xml ] ; do sleep 5 ; done
# Move the tar.gz file to a dated version
cp ${SAND_NLIC}/archive/latest-logs.tar.gz ${SAND_NLIC}/archive/`date '+%m%d%y'`.tar.gz
# Delete other log files
cp /home/sand/file_delete.call.xml ${SAND_NLIC}/tasks

This script is only provided as an example. A more sophisticated script (including e-mailing of result files, for example) might also be devised.

1. Archive all log files older than 30 days from the system (file_archive.call.xml):

<?xml version="1.0" encoding="iso-8859-1"?>
<!DOCTYPE SERVICE_CALL SYSTEM "sand_binli.dtd">
<SERVICE_CALL>
  <FILE_OPERATION TYPE="ARCHIVE">
    <SOURCE PATH="/opt/sand/logs" />
    <FILTER RECURSIVE="TRUE"

DAYSOLD="30"><![CDATA[(.*Log$|.*log$)]]></FILTER>
    <TARGET PATH="/opt/sand/archive">
      <FILENAME>latest-logs.tar.gz</FILENAME>
    </TARGET>
  </FILE_OPERATION>
</SERVICE_CALL>

2. Delete all other files older than 30 days from the logs directory in the system (file_delete.call.xml):

<?xml version="1.0" encoding="iso-8859-1"?>
<!DOCTYPE SERVICE_CALL SYSTEM "sand_binli.dtd">
<SERVICE_CALL>
  <FILE_OPERATION TYPE="DELETE">
    <SOURCE PATH="/opt/sand/logs" />
    <FILTER DAYSOLD="30" RECURSIVE="TRUE"><![CDATA[.*$]]></FILTER>
  </FILE_OPERATION>
</SERVICE_CALL>

9.4 Accessing the File Operation Interface from SAP

The installation includes a report called /SAND/0_UTIL_FILE, which can be used to access the File Operation interface from within SAP. To access the report, select area menu Administration > Maintain Nearline Service directories or start transaction /SAND/0_UTIL_FILE.

Note: If authorization checking is enabled, the /SAND/0_UTIL_FILE program requires either the YSAND_NLS004 or ZSAND_NLS004 authorization profile in the calling user's account; otherwise, only the list function will be available. For more information, see the "Setting Up Authorization Checking" chapter in the ILM Nearline Configuration Guide.

Figure 44: The /SAND/0_UTIL_FILE program

The parameters of the /SAND/0_UTIL_FILE report correspond to those in the XML file described in section 9.1 above. The Nearline Connection field should contain the name of the connection to the nearline store. The Service Type list includes List files, Delete, Archive, and Collect. The

Source Directory list is limited to backup, logs, tasks, temporary, or timing. The Pattern field allows for specification of a filter or mask to designate the files that will be affected by the operation (for information about the syntax of expressions that may be included in the Pattern field, consult the online help). The Target Directory can be backup, logs, or temporary. When the auto Filename option is enabled, the name of the resulting archive file is generated automatically, and will include the service type and timestamp followed by a random number.

9.5 Deleting Orphan SCT Files

The ssavalidate command can be run to produce a report on the state of the File Archive repository, including orphan SCT files. Orphan SCT files are produced by the deletion of archive requests in the File Archive repository during a reload, an archive request invalidation, a DAP deletion, or a not-yet-supported archive request deletion. Orphan SCT files are left behind to ensure that a proper backup can be taken of the system during normal operations (online backup).

To delete orphan SCT files, the ssavalidate program can be run with the following arguments:

ssavalidate -d -s $SAND_NLIC/sct/ -t $SAND_NLIC/logs/

Note that this command must not be run while backups of the nearline store are being taken.

9.6 Automatic Cleanup of the Tasks Directory

There are two different ways to automate the cleanup of files in the tasks directory: one method enables cleanup at the transaction level, upon task completion; the other involves scheduled housekeeping at regular intervals. These are described in the next sections. Note that the two methods can be used together.

Transactional Cleanup

You can use the Delete Time and Delete State parameters in the /SAND/0_NLS_CFG table to configure the automatic deletion of files from the tasks directory when certain transactions have completed. To access the parameters, in area menu /SAND/0_MAINP select Configuration > Configure ILM Nearline, or start transaction /SAND/0_CONFIG.

Delete Time

- Never: No files will be deleted automatically. Note: Files can still be deleted through scheduled housekeeping even if the "Never" option is enabled. Refer to the next section for more details.
- Immediate (default): WRITER-related files are deleted on the CWR (CLOSE_WRITER) call; all other files are deleted immediately after they are read.
- Close: CURSOR- and WRITER-related files are deleted on the CCU (CLOSE_CURSOR) and CWR (CLOSE_WRITER) calls, respectively; all other files are deleted immediately after they are read.

Delete State

- On Success (default): Only delete the files for successfully completed tasks.
- Force: Delete files for tasks causing errors, as well as files for successfully completed tasks.

Unless Delete Time is set to Never, all files related to the following commands are deleted immediately after the command is executed:

- ARE (ALTER_REQUEST)
- ATA (ALTER_TABLE)
- CON (CONNECT)
- CRE (CREATE_REQUEST)
- CTA (CREATE_TABLE)
- DTA (DROP_TABLE)
- EXE (EXECUTE)
- GSP (GET_SERVICE_PROPERTIES)

To maintain manual control over file deletion, specify Never for the Delete Time parameter. Note that in this case, care should be taken to ensure that old files do not accumulate in the tasks directory. This approach is recommended when debugging or monitoring issues, as it enables verification of system operation.

To automatically delete CURSOR and WRITER files when they will no longer be used, specify Close for the Delete Time parameter. This ensures that large dat files will be deleted at the package level as soon as they are no longer needed. This setting is useful when Delete State is set to On Success, since when an error occurs, all dat files in the associated package will still be

available. If Delete Time is set to Close, the files will only be deleted at the end of a nearline request. This setting should only be used if enough disk space is available in the tasks directory.

To delete all files, specify Immediate for the Delete Time parameter. Because the File Archive Service uses the dat files to create compressed data files, the files for a single WRITER activity will be deleted together. If Immediate is combined with On Success (recommended), only the files of the failed subtasks will remain in the tasks directory.

Scheduled Cleanup

In addition to automatic file cleanup after task completion, there is also the option of regularly scheduled housekeeping of the tasks directory to clean out files older than a specified age. This scheduled cleanup can be configured via two parameters in the sand_nlic.properties configuration file:

- sand.nlic.cleanup_old_files_interval (default: 86400000, or 1 day): The amount of time (in milliseconds) between automatic cleanups of the tasks directory. Setting this property to 0 disables the automated cleanup.
- sand.nlic.cleanup_old_files_retention_period (default: 10368000000, or 120 days): The maximum amount of time (in milliseconds) to retain files in the tasks directory. Certain files older than this value will be deleted during the periodic, automated cleanup of the tasks directory. Files with the following extensions will be deleted via this automated cleanup: *.answer.xml, *.csv, *.dat, *.done.xml, *.fifo, *._fifo, *.ndl, *.results.xml, *.task.xml. There is a minimum retention period of 172,800,000 milliseconds (2 days) to prevent active task files from being deleted accidentally. If a lower value is specified, a warning will be issued and the minimum value will be used instead.

By default, this automated housekeeping checks the tasks directory each day for files older than 120 days.
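For instance, the following is a minimal sketch of a sand_nlic.properties setting (the values shown are illustrative, not the defaults): the tasks directory would be checked once a week, and eligible files older than 30 days would be deleted. Both values are in milliseconds, and 30 days is above the enforced 2-day minimum:

# Check weekly (7 x 86,400,000 ms); delete eligible files older than 30 days.
sand.nlic.cleanup_old_files_interval=604800000
sand.nlic.cleanup_old_files_retention_period=2592000000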

9.7 Automatic NDL File Deletion

During archive transactions between SAP and the Nearline Service, loader specification files (*.ndl) are automatically generated in the tasks directory. These NDL files define the parameters for SCT file creation and are used by the File Archive Loader. Unlike other file types in the tasks directory, NDL files are not covered by the automatic file deletion parameters set in the /SAND/0_NLS_CFG table. Instead, the sand.ndl.delete_files parameter in the sand_nlic.properties file configures the automatic NDL file deletion options. By default, NDL files are always deleted by the Nearline Service at the end of a transaction, but this can be changed so that NDL files are never deleted, or are deleted only if there are no errors in the transaction.

The following are the possible NDL file deletion options in the sand_nlic.properties file:

- sand.ndl.delete_files=always (default): The Nearline Service always deletes NDL files at the end of a transaction.
- sand.ndl.delete_files=never: The Nearline Service never deletes NDL files.
- sand.ndl.delete_files=onsuccess: The Nearline Service deletes NDL files only if the transaction concluded with no errors.

10 SCT File Migration

Over time, it may be desirable to move certain categories of SCT files to other locations. For example, SCT files older than two years, or SCT files that are no longer queried very often, can be moved to a "slower" machine. Automated SCT file migration is supported through ad hoc or scheduled "MigrateTo" tasks defined with the File Archive Service add_task program.

10.1 Defining a MigrateTo Task

To define a MigrateTo task that will move specified SCT files from one location to another, execute the add_task -m command using the syntax below:

add_task -m <optional flags> -l <destination path> -s <script file> ssa adm dba[/password]

where:

-m specifies a MigrateTo task

-l <destination path> specifies the target location to which the SCT files will be moved

-s <script file> specifies a text file containing the rules for selecting the SCT files for the migration task. There are two ways to select SCT files for migration:

Retention Period: Specify a list of Compacted Tables and an end date; the SCT files associated with the specified Compacted Tables that were created before or on the end date will be migrated.

SELECT Statement: Specify a SELECT statement that queries a Compacted Table; the SCT files affected by the query will be selected for migration. For instance, if the query specifies all records that have a column1 value between 0 and 100, every SCT file registered with the Compacted Table that contains records satisfying this condition will be moved when the task is executed. Note that the actual results of the query are unimportant; the task is concerned only with the list of SCT files involved.

Refer to the 10.2 SCT File Selection Rules section below for details about the script file contents.

Optional Flags:

-k Keeps the SCT files that are being migrated. When the MigrateTo task has completed, the SCT files that were migrated will be found in both the original location and the new location. However, note that the original path of a migrated SCT file is completely removed from the File Archive repository.

-z <task frequency> -D n [ -H hh:mm:ss ] The frequency at which this task will be performed. The <task frequency> argument can be one of the following values: daily, weekly, monthly.

The -D flag, which must be uppercase, is required if -z is included in the command. The -D n argument specifies 1-7 for the day of the week ("weekly" task), or 1-31 for the day of the month ("monthly" task), on which the task will recur. Note that weekday number 1 could be Sunday or Monday, depending on the locale settings.

The -H flag, which must also be uppercase, is optional. The -H hh:mm:ss argument specifies the exact time at which the job will be started during each recurrence of the task. The default is midnight (00:00:00).

If -z is omitted, the task is considered "ad hoc" and will be performed one time only, immediately after the MigrateTo task is created.

10.2 SCT File Selection Rules

The script file specified by the MigrateTo task contains either:

A SELECT statement to execute against the Compacted Table

-or-

The string "//TODAY-n//TABLE:x" to designate a retention period

SELECT Statement

The SCT files touched by the query will be migrated to the new location when the task executes. Note that, under the covers, the SELECT statement may be optimized to provide more precise selection for third-party storage migration (using OpenText or XAM, for example).

Retention Period Syntax (//TODAY-n//TABLE:x)

The //TODAY-n component, where n is an integer greater than or equal to 0, specifies an end date; applicable SCT files that were created before or on this date will be selected for migration. The string is read as "current date minus n days"; so, for example, //TODAY-0 is today, //TODAY-1 is yesterday, //TODAY-2 is two days ago, and so on.

The //TABLE:x component lists one or more Compacted Tables whose registered SCT files will be migrated, if their creation date is prior to or on the specified end date. Multiple tables in the list must be separated by commas. A wildcard character (*) can be used to signify all tables in the database or in a particular schema. As well, the SCT files associated with a Compacted Table can be excluded from the migration by prefixing the table or schema name with an exclamation point (!).

The following are possible //TABLE:x strings:

//TABLE:* (all Compacted Tables in the database)
//TABLE:schema1.table1,schema1.table2,schema5.table9 (tables schema1.table1, schema1.table2, and schema5.table9)
//TABLE:*,!schema5.table9 (all Compacted Tables except for schema5.table9)
//TABLE:schema1.* (all Compacted Tables in schema schema1)
//TABLE:schema1.*,!schema1.table2 (all Compacted Tables in schema schema1 except for table table2)
//TABLE:*,!schema1.* (all Compacted Tables in the database except for the tables in schema schema1)

Note that, apart from within the list of Compacted Tables, there must not be any whitespace in the "//TODAY-n//TABLE:x" string.

10.3 Examples

add_task -m -l /alpha/sct -s /beta/work/migrate1.txt ssa adm dba/dba

migrate1.txt:

SELECT * FROM s1.transact1 WHERE tdate BETWEEN '<start date>' AND '<end date>'

The MigrateTo task above defines an ad hoc job that will be started right away. The specified script contains a SELECT statement that filters on a date range. The affected SCT files will be moved to the "/alpha/sct" folder.

add_task -m -k -z monthly -D 1 -H 00:00:00 -l C:\work\sct -s C:\temp\migrate2.txt ssa adm dba/dba

migrate2.txt:

//TODAY-7//TABLE:s1.*,!s1.temp1,prod1.worktab

The MigrateTo task above defines a monthly job that will be performed on the first day of each month at midnight. The specified script selects the SCT files created on or before seven days ago ("//TODAY-7") that are associated with all tables in the s1 schema (except table temp1) and with table prod1.worktab. Because the -k flag is included, the migrated SCT files also remain in their original location.
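
As a further sketch grounded in the syntax above (the destination path and script file name here are hypothetical), the following defines a weekly task that migrates the SCT files of every Compacted Table except schema5.table9, provided they were created at least two years (730 days) ago:

add_task -m -z weekly -D 1 -H 02:30:00 -l /gamma/cold_sct -s /beta/work/migrate3.txt ssa adm dba/dba

migrate3.txt:

//TODAY-730//TABLE:*,!schema5.table9

The job recurs on weekday 1 at 02:30:00; whether weekday 1 is Sunday or Monday depends on the locale settings, as noted above.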

11 Immediate SCT File Export to External Storage

You can export SCT files to external storage, such as Centera. To do this, you must update the ssa.ini file with the URL of the external storage and connection information.

Note: Do not follow the migration information and tasks in the SCT File Migration chapter if you choose to export SCT files to an external source.

11.1 Updating the ssa.ini File

Complete the following steps to update the ssa.ini file:

1. Find the ssa.ini file in the <installation directory>/dna folder.

2. Open the ssa.ini file and scroll to the [Service] section.

3. Add the following parameter to the [Service] section:

MOVESCT=<URL of the external storage>

For example, to export SCT files to Centera, enter the following statement:

MOVESCT=snia-xam://1/

4. Add a new section for connection attributes:

[XAM_CONNECTION <n>]
XRI=<connection details>

Where n is the connection index number in the URL you entered in step 3. For example, to export SCT files to Centera, enter the following statement:

[XAM_CONNECTION 1]
XRI=centera_vim!<Centera server address>?c:\ssa2_5\ilmcentera.pea

11.2 Sample Service and XAM Sections in the ssa.ini File

Refer to the following sample ssa.ini file as an example of the list of parameters in the [Service] and [XAM_CONNECTION] sections.

[SERVICE]
SSA_DATABASE=adm
SSA_CONNECTION=ssa
SSA_UID=DBA
SSA_PWD=
TIMEOUT=5
SSASERVICELOG=C:\snic_mr2\logs\service
SSAJOBLOG=C:\snic_mr2\logs\service
JOBRETRY=2
LOADBUFFER=20000
MAXJOBS=3
LOADPARAM=j:2,k:2,z:2
LOADWHITESPACE=0x09,0x0A,0x0B,0x0C,0x0D,0x20
NULL=\x7F
MOVESCT=snia-xam://1/

[XAM_CONNECTION 1]
XRI=centera_vim!<Centera server address>?c:\ssa2_5\ilmcentera.pea
LOG_LEVEL=5
LOG_VERBOSITY=0X7FFFFFF
LOG_PATH=xam.log

12 Miscellaneous

12.1 Viewing ILM Nearline Version Information

It is possible to view version information for ILM Nearline software components from within SAP BW. The specific method for retrieving the version information depends on the ILM Nearline software under consideration, which can generally be divided as follows:

components installed in the SAP system (SAP Nearline Add-On, migration and sizing tools, versioning program)
components that reside outside the SAP system (Nearline Service, File Archive Service)

SAP Components

In terms of transport packages, Informatica products can be identified by the following data element names:

Product                          Data Element Name      Versioning Program
Informatica version              /SAND/Z400_wxx_yyzz    /SAND/Z400_VERSION
Migration Tool for BW3 to BW7    /SAND/Z401_wxx_yyzz    /SAND/Z401_VERSION
SAP Nearline Add-On for BW 3.5   /SAND/Z402_wxx_yyzz    /SAND/Z402_VERSION
Sizing Tool                      /SAND/Z403_wxx_yyzz    /SAND/Z403_VERSION
SAP Nearline Add-On for BW 7     /SAND/Z404_wxx_yyzz    /SAND/Z404_VERSION

where wxx_yyzz indicates the component version:

w is the major version number
xx is the minor version number
yy is the maintenance release number
zz is the patch number

For example, /SAND/Z404_310_1202 is read as SAP Nearline Add-On for BW 7 version 3.10, maintenance release number 12, patch number 02.

Each product in the table above also has an associated versioning program that, when run, simply returns the version number of the product. For instance, if the program /SAND/Z404_VERSION is executed and it is associated with data element /SAND/Z404_310_1202, it will return the product version (in this case, version 3.10, maintenance release 12, patch 02 of the SAP Nearline Add-On for BW 7 product).

The /SAND/Z4VERSION Program

There is a global versioning program, called /SAND/Z4VERSION, which returns version information for all Informatica software installed in the SAP system, as well as general information about the SAP system (name, version), installed SAP components, SAP notes of component BW-WHM-DST*, and all transport orders involving ILM Nearline objects. To get additional information about a particular SAP note or transport order, double-click the relevant line in the generated report.

The report produced by the /SAND/Z4VERSION program has the following structure and information:

<date of the report>
<program name and description>
<SAP components and their corresponding version numbers (or "NOT PRESENT" for each component that has no associated versioning program)>
<SAP system name>
<SAP version>
<other installed SAP components: release number, level, highest support package, and description>
<SAP notes: number, version, short text, component, processing status, implementation status, and processor>
<number of transport orders>
<transport order details>

An example of a report generated by /SAND/Z4VERSION can be found in Appendix G: Sample /SAND/Z4VERSION Output.

Saving the /SAND/Z4VERSION Report

The output of the /SAND/Z4VERSION program can be saved to a file by selecting System > List > Save > Local File from the menu. In the dialog box that appears, choose unconverted, and save the file to the desired path. For troubleshooting purposes, Informatica Global Customer Support may request that a /SAND/Z4VERSION report be generated, saved, and sent in for analysis.

Nearline Service and File Archive Service Components

To view version information for components that reside outside the SAP system, in the RSDAP transaction, click the button next to the Nearline Connection name on either the General Settings or Nearline Storage tab. To view the version information in the RSA1 transaction, click the button at the bottom of the Archiving tab.

You can also view this information by double-clicking in the report generated by the /SAND/0_UTIL_TEST_CONNECTION program. You can access the program from area menu /SAND/0_MAIN (Verification > Test connection to Nearline Service), from area menu /SAND/0_MAINP (Configuration > Verification > Test connection to Nearline Service), or by starting transaction /SAND/0_TEST_CONNECT.

In both cases, a Status Information window will display the list of installed components available through the specified Nearline Connection. The list also includes some system information, such as the host name, environment details (SAP system ID, connection name), and the timestamp for when the version information was retrieved.

Figure 45: Nearline Connection Components

Version Information Fields

The version information contains the following fields:

COMPONENT
The name of the component. For most components, this is the file name.

SHORT DESCRIPTION OF STATUS (or STATUS)
The current status of the component, represented by traffic lights. The possible status indicators are:

(green) The component is available for use.
(yellow) The component is available for use, but problems are expected.
(red) The component is available, but cannot be used.
(off) The component is unknown. Some items in the component list actually represent system information (for example, Host or Environment), so this is their normal status. The off status could also mean that a component was not properly installed.
(no status) The status is initializing; it is not yet defined.

To get further information about a component's status, place the mouse cursor over the traffic lights for that component. A tooltip will appear with additional details about the status.

RELEASE
The version number of the component, usually displayed in this form: <major version>.<minor version>.<build number>. The version number of the overall package has a different form: xxxyyy, where xxx is the major version number and yyy is the minor version number.

PATCH LEVEL (or LEVEL)
The revision number for the specified component. If there have been no revisions for the component (that is, the revision number is 0), this field will be empty.

SUPPORT PACKAGE (or SUPP. PKG.)
The support package number for the specified release.

SHORT DESCRIPTION OF COMPONENT
The full name or internal description of the component.

Exporting Version Information

To save the ILM Nearline version information to an HTML document, click the download button and select HTML Download. It is recommended that the output be saved in its own directory, as some graphics files are generated along with the HTML document.

The exported version information will look similar to this:

Figure 46: Exported Version Information

Viewing File Archive Service Version Information from a Command Line

Version information for individual File Archive Service programs can be displayed on the command line by invoking a program with the "--version" flag. For example, to view information about the File Archive Service, execute the following:

ssaservice --version

The resulting message provides the following information:

the component name: File Archive Service
the version number in square brackets: [2,5,2107,0]
the name of the program: Workflow Service

Alternatively, in many cases the same version information can also be displayed, along with copyright/patent information and the program's online help, by invoking the program with the "-h" option. For instance, the following command:

ssaservice -h

will return output similar to this:

usage: ssaservice [options]
options are:
-h = print this screen
-m = send messages when JOBs finished
-u = check MD version and convert it to new one
-v = send ssaservice messages on screen
start = start service
install = install service
remove = uninstall service
to stop service = net stop ssaservice

12.2 ILM Nearline Processes

When ILM Nearline is operating, the following processes should always be active:

Nearline Service (Java component, sand_nlic.jar)
File Archive Service
File Archive Service Agent (the number of agents will be the value of NUM_AGENT in the ssa.ini configuration file plus the value of the sand.ssaagent.extra parameter in the sand_nlic.properties file)
File Archive Repository Service
Workflow Management Service (ssaservice)

The list of running processes can be verified using the ps command on UNIX, or via the Task Manager program on Windows. There are also third-party process-monitoring tools that can be configured to alert users if a required process is not running.

Additional processes can appear during different nearline activities:

DAP Creation
File Archive SQL Tool

Archive Process
Workflow Management Service (ssaservice)
File Archive Loader
File Archive Service Administration Console

Query or Verification Phase
Workflow Management Service (ssaservice)
File Archive SQL Tool
File Archive Administration Utility
SCT File Query Result Set Merge Tool (ssam)

Reload
Workflow Management Service (ssaservice)
File Archive SQL Tool
File Archive Service Administration Console
File Archive Administration Utility
SCT File Query Result Set Merge Tool (ssam)

The processes that run during a given activity are summarized in the following table (an X indicates that the process runs during the corresponding activity):

Process                                       General  DAP Creation  Archive  Query/Verif.  Reload
Nearline Service (sand_nlic.jar)                 X
File Archive Service                             X
File Archive Service Agent                       X
File Archive Repository Service                  X
Workflow Management Service (ssaservice)         X                      X          X           X
File Archive SQL Tool                                        X                     X           X
File Archive Loader                                                     X
File Archive Service Administration Console                             X                      X
File Archive Administration Utility                                                X           X
SCT File Query Result Set Merge Tool (ssam)                                        X           X
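
On UNIX, the check for the always-active processes can be scripted. The following is a minimal sketch; the process names are taken from this chapter, but the exact command lines vary by installation, so the pattern should be adjusted as needed:

# List candidate ILM Nearline processes (sand_nlic runs inside a Java process).
ps -ef | grep -E 'sand_nlic|ssaeng|ssaservice' | grep -v grep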

12.3 Extra File Archive Service Agent Processes

By default, the installation sets the number of File Archive Service Agent processes that will run to 2. This setting is written to the ssa.ini file as follows:

[SAPBI]
Num_Agents=2

In addition to these standard Agent processes, the Nearline Service also starts one extra Agent by default, configured to handle administrative requests only. The main purpose of this special Agent is to allow the processing of archive requests and SCT registration on a system where the standard Agents are overwhelmed with query requests. The number of extra Agents to start can be configured via the sand.ssaagent.extra parameter in the sand_nlic.properties file. The default value of 1 is sufficient for most cases, but it can be set higher if required. Setting the parameter to 0 disables the extra Agent.
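
For example, to run four standard Agents and two extra administrative Agents (a sketch with illustrative values, not the defaults), the two settings would be:

In ssa.ini:

[SAPBI]
Num_Agents=4

In sand_nlic.properties:

sand.ssaagent.extra=2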

12.4 Changing the Password for the File Archive Repository

During the installation, you configure the password to access the File Archive repository. To change the password, complete the following tasks:

1. Use the following command to stop the Nearline Service:
$ ./snic stop

2. Use the following command from $SAND_NLS to set the environment:
$ . conf/env.setup

3. Use the following command to start the File Archive Repository Service:
$ ssaeng meta meta &

4. Use one of the following commands to change the password:
On Windows: ChangeDBAPassword.bat <old password> <new password>
On UNIX: ChangeDBAPassword.sh <old password> <new password>

5. Use the following commands to stop the File Archive Repository Service:
$ ssasql meta meta dba/<new password>
SQL> shutdown;
SQL> .exit

6. Start the Nearline Service.

The new password is used to access the File Archive repository.

12.5 The Handling of Special Characters

The following unprintable characters have special internal meaning to ILM Nearline software, and are therefore treated differently during a load process:

0x00 (the NUL character)
0x7F (the DEL character)

The 0x00 character represents a string terminator in standard string functions. Having this character as part of the data can cause issues, so it is not loaded at all.

Internally, the 0x7F character by itself represents a null value. During an archive job, if a data field contains only the 0x7F character, the record will be rejected, an error will appear, and the job will fail. On the other hand, if a string begins with the 0x7F character and other characters follow, it is processed as a standard string by default.
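
The following standalone sketch (a hypothetical pre-check, not part of ILM Nearline) illustrates the two rules above by scanning a tab-delimited extract for fields that would be dropped or rejected:

# Hypothetical pre-check: flag fields that contain 0x00 (never loaded)
# or consist solely of 0x7F (the record would be rejected).
with open("extract.dat", "rb") as f:
    for lineno, line in enumerate(f, start=1):
        for field in line.rstrip(b"\r\n").split(b"\t"):
            if b"\x00" in field:
                print(f"line {lineno}: field contains NUL (0x00); it would not be loaded")
            elif field == b"\x7f":
                print(f"line {lineno}: bare 0x7F field; the record would be rejected")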

12.6 Troubleshooting HTTP Communication

If the HTTP service is not running, HTTP communication between the Nearline Service and SAP will be blocked. The status of the HTTP service can be checked using transaction SMICM. From transaction SMICM, click the Services button (Shift+F1). The configured services will appear.

Figure 47: Transaction SMICM

Figure 48: Active Services

The HTTP service will be listed by SAP host; a green check mark must be present in the activity (Actv.) column, indicating that the service is up and running. If the HTTP service is not active for the host (absence of a green check mark), the service must be activated for HTTP communication to work. To activate an HTTP service, select the service for a particular host and select Activate from the Service menu.

Additional HTTP troubleshooting:

The following command should be executed on each application server of the SAP instance where HTTP is to be used, via the command line, by the user running the SAP instance:

telnet <IP address or server name> <port>

The <IP address or server name> and <port> defined in the URL to Nearline Service parameter of the ILM Nearline configuration in the SAP system must be used.

When the telnet command is executed, it connects directly to the Nearline Service port. A message like the following should appear:

> telnet <IP address> <port>
Trying <IP address>...
Connected to SAP1 (<IP address>).
Escape character is '^]'.
HTTP/1.1 400 Bad Request
Date: <date> GMT
Content-Type: text/plain
BAD REQUEST: missing METHOD in HTTP header. received from host: /<IP address>
Connection closed by foreign host.

At the same time, a message similar to the following should appear in the Nearline Service log file (sand_nls.0.log):

WARNING Error reading HTTP request. BAD REQUEST: missing METHOD in HTTP header. received from host: /<IP address>

If these messages are displayed, the telnet command is successfully connecting to the Nearline Service from the SAP host. If a message such as the following appears instead, the IP address used cannot be resolved or a firewall is blocking the port:

Trying <IP address>...
telnet: connect to address <IP address>: Connection refused
telnet: Unable to connect to remote host: Connection refused
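
If telnet is not available on the application server, a similar reachability check can be performed with curl (a sketch; any HTTP-level response or error from the Nearline Service port indicates that the port is reachable, whereas "Connection refused" points to the same resolution or firewall problems described above):

curl -v http://<IP address or server name>:<port>/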

12.7 Session Pool Management

Since the SAP BW Nearline interface does not implement an explicit disconnect function, the Nearline Service manages sessions in the following way (keep in mind that every Nearline Service task will have its own session):

There is a maximum number of concurrent sessions allowed in memory (called the "session pool"), as defined by the sand.session.poolsize parameter (1000 by default). When a certain percentage of this pool is filled (80% by default, but configurable via the sand.session.max_pool_filling_rate parameter), the session manager looks for existing sessions to remove from memory. Qualified sessions are either serialized (written to disk), or else deleted if a specified session lifespan is exceeded (86,400,000 milliseconds, or 24 hours, is the default for the sand.session.lifespan parameter).

A session is serialized if it meets the following criteria:

the session is older than the amount of time specified by the sand.session.max_elapse_in_pool parameter (900,000 milliseconds, or 15 minutes, by default)
the session does not contain any active writer
the session does not contain any active reader

However, if the pool becomes 100% full and there are no sessions older than the sand.session.max_elapse_in_pool value that do not have active readers/writers, the oldest sessions without active readers/writers will be serialized to disk until the pool capacity goes down to 95%.

Serialized sessions are written to the directory given by the sand.session.serialization_directory parameter, which is initially set to %SAND_NLIC%/tmp/sessions. Automated cleanup of this directory occurs at regular intervals (every 43,200,000 milliseconds, or 12 hours, by default; configurable via the sand.session.disk_cleanup_interval parameter). During these cleanups, only serialized sessions older than the sand.session.lifespan amount are deleted.

After a session is serialized, it can be brought back into memory as part of the session pool if the session becomes active again. However, this should not occur very often; in most cases, serialized sessions are never re-used. When it does occur, a warning will be written to the Nearline Service log file:

Session has been unserialized. If this warning recurs, please increase the 'sand.session.poolsize' parameter in file '{SAND_NLIC}/bin/sand_nlic.properties'.

As this situation is atypical in normal operations, multiple warnings of this type probably indicate that the session pool size has been set too low and should therefore be increased. Note that changes to the sand.session.poolsize parameter take effect right away; it is not necessary to restart the Nearline Service.
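
For reference, the session pool parameters discussed in this section could be set explicitly in sand_nlic.properties as follows (a sketch using the documented default values; whether sand.session.max_pool_filling_rate is expressed as a whole percentage or a fraction is an assumption):

sand.session.poolsize=1000
sand.session.max_pool_filling_rate=80
sand.session.lifespan=86400000
sand.session.max_elapse_in_pool=900000
sand.session.serialization_directory=%SAND_NLIC%/tmp/sessions
sand.session.disk_cleanup_interval=43200000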

12.8 Resetting the Nearline Service Number Range

During the installation, the Nearline Service number range is configured as 1 to 99,999,999. If large numbers of requests are generated by the system each day, it is possible to reach this number range limit. The configuration checker program /SAND/0_CONFIG_CHECK will give a warning if the current number reaches 95% of the upper limit (that is, 95,000,000). If and when this warning occurs, the current number should be reset back to 1 (or any other value).

To reset the current number:

Important! If file-based communication is used and the automatic deletion of files in the tasks directory is not enabled, it is strongly recommended that the tasks directory be cleaned up before resetting the number range, so as to avoid the possibility of conflicting request numbers, which can cause serious problems.

1. In transaction SNRO (Number Range Object Maintenance), specify "/SAND/0NLS" in the Object field and click the Number Ranges button (F7).

Figure 49: SNRO Transaction

The screen changes to the SNUM transaction.

Figure 50: SNUM Transaction

2. Click the Status button. The Maintain Number Range Intervals screen appears.

Figure 51: Maintain Number Range Intervals

3. Change the value in the Current number field to the desired value, and then click the Save button on the toolbar (Ctrl+S) to apply the change. A popup message may appear, warning that custom changes to the number range are not automatically added to a transport.


Informatica PowerExchange for HBase (Version 9.6.0) User Guide Informatica PowerExchange for HBase (Version 9.6.0) User Guide Informatica PowerExchange for HBase User Guide Version 9.6.0 January 2014 Copyright (c) 2013-2014 Informatica Corporation. All rights reserved.

More information

Informatica Developer (Version 9.1.0) Transformation Guide

Informatica Developer (Version 9.1.0) Transformation Guide Informatica Developer (Version 9.1.0) Transformation Guide Informatica Developer Transformation Guide Version 9.1.0 March 2011 Copyright (c) 2009-2011 Informatica. All rights reserved. This software and

More information

Informatica Development Platform HotFix 1. Informatica Connector Toolkit Developer Guide

Informatica Development Platform HotFix 1. Informatica Connector Toolkit Developer Guide Informatica Development Platform 10.1.1 HotFix 1 Informatica Connector Toolkit Developer Guide Informatica Development Platform Informatica Connector Toolkit Developer Guide 10.1.1 HotFix 1 June 2017 Copyright

More information