Best Practices. Version 1.0. November 18,


Best Practices
Version 1.0
November 18,

Global Headquarters
3303 Hillview Avenue
Palo Alto, CA
Tel:
Toll Free:
Fax:

, TIBCO Software Inc. All rights reserved. TIBCO, the TIBCO logo, The Power of Now, and TIBCO Software are trademarks or registered trademarks of TIBCO Software Inc. in the United States and/or other countries. All other product and company names and marks mentioned in this document are the property of their respective owners and are mentioned for identification purposes only. This document (including, without limitation, any product roadmap or statement of direction data) illustrates the planned testing, release and availability dates for TIBCO products and services. This document is provided for informational purposes only and its contents are subject to change without notice. TIBCO makes no warranties, express or implied, in or relating to this document or any information in it, including, without limitation, that this document, or any information in it, is error-free or meets any conditions of merchantability or fitness for a particular purpose. This document may not be reproduced or transmitted in any form or by any means without our prior written permission.

Table of Contents

Revision History
1 Introduction
2 Multiple Enterprise Co-Tenancy v/s Separate Instances
3 Repository Design
  3.1 Mapping to Physical Data Model
4 Import
  4.1 Input Maps
  4.2 Applying Rulebases During Import
  4.3 How to Import Nested Data Mapped to Multiple Repositories?
  4.4 Ensuring Ordering of Data
  4.5 How to Avoid Failures During Import
  4.6 Handling Errors During Import
  4.7 Using SQL Based Datasources
  4.8 Loading Dates
  4.9 Import Control Switches
  4.10 DBLoader v/s Normal Import
5 Import of Meta Data
6 Workflows
  6.1 Workflow Customizations
  6.2 Error Handling
  6.3 Activities Tuning
  6.4 Subflow v/s Spawned Workflow
7 Rulebase
8 Performance Management
  8.1 UI Performance
  8.2 Search Optimization
  8.3 Timing Logs
Database Management
  Table Spaces
  Database Performance
  Database LOB Management
  Other Factors
  Purge
File System Management
Recovering Failed Messages/Events
Deployment

  13.1 Ensuring that UI Initiated Workflows Get Higher Priority / Ensuring that UI Performance is not Impacted by Large Batches
Memory Utilization
Multiple CIM Instances
Failover
Capacity Planning
JMS Best Practices
  EMS
  Websphere MQ
Cache
Web Services
  Synchronous Web Services
Security
  Creating or Modifying Roles
  Security Models
  LDAP Integration
  Data Encryption
  Security Auditing
Synchronization
  Impact on Capacity Planning
  Mass Update
UI Customizations
  Localization of Terms
Debugging
Analytics
Use of MDM Studio
Network Deployment
  Full v/s Incremental
Documentation and Samples
Source Control Tricks
  Copy of Sticky Configuration
Other Topics To Be Included

Revision History

Version | Date     | Author         | Department  | Description
        | /5/2011  | Milind Duraphe | P & T R & D | Initial version
        | /15/2012 | Milind Duraphe | P & T R & D | Initial version

1 Introduction

This document is a collection of best practices contributed by the people who develop the software and implement it in many MDM projects. The information in this document is provided as is; readers are advised to apply their own experience with TIBCO MDM to decide whether a given practice makes sense in their environment. Some of the best practices may contradict others; this is due to the varied target audiences and usages of the software.

2 Multiple Enterprise Co-Tenancy v/s Separate Instances

An enterprise (also called a company) is a logical unit with complete data isolation, except for deliberate sharing of some global objects. (1) MDM allows management of more than one enterprise in the same instance (co-tenancy).

Setup
  Co-tenancy: More than one enterprise in one MDM instance. Database, cache, JMS, etc. are shared by the enterprises.
  Separate instances: Each enterprise is in a separate instance. Database, cache, JMS, etc. are separate for each enterprise.

Maintenance
  Co-tenancy: Software maintenance for all the enterprises is done together.
  Separate instances: Each enterprise can be managed separately.

Configuration
  Co-tenancy: All configurations are shared. This includes single sign-on and role mapping, message prioritization, message listeners, file watchers, customized screens, and anything configured through ConfigValues. A lot of customization can still be enterprise specific, including look and feel, business process rules (including process selection), and workflows and rulebases.
  Separate instances: Configuration for each enterprise is separate.

Data isolation
  Co-tenancy: All data is isolated except global business partners and lookup data sources defined for the TIBCOCIM enterprise. A single data store allows data analysis and aggregation across enterprises using reporting tools.
  Separate instances: All data is isolated.

Performance
  Co-tenancy: Performance requirements of different enterprises can conflict; a large enterprise will take a large share of system resources.
  Separate instances: Performance characteristics of each enterprise can be managed separately.

(1) Global data sources are data sources defined in the TIBCOCIM enterprise; they can be used in rulebases of other enterprises. A global trading partner can be defined in any enterprise by checking the global flag when the partner is defined. Global partners are visible to other enterprises but can be modified only by the enterprise which defined them.

3 Repository Design

3.1 Mapping to Physical Data Model

- Always explicitly enter table names and column names when defining repositories and relationships. If table and column names are entered by the user, they are not auto-generated; this provides a cleaner data model, and you do not have to deal with the MCT_XXX pattern. User-entered names ensure that object names do not change when metadata is moved from one installation to another.
  o If table and column names are entered, any custom code/SQL/triggers/procedures written for the model do not have to be changed for each installation.
  o Generated column names can change when metadata is imported into another installation, depending on the change history of the column; i.e., if a column was deleted and recreated, the column names will be different.
- User-entered names also impose some limitations:
  o Table names are unique within the same database instance. This means you cannot assign the same table name to more than one repository, even if the repositories are defined in different enterprises.
  o Metadata cannot be imported into another enterprise within the same instance, as the import will attempt to create a duplicate table and fail.
- A sparse repository with a lot of optional columns (i.e., null columns) is not an issue; databases handle null columns quite well.
- MDM does not support inheritance, so if your model requires sub-objects which vary only slightly, it is better to model them in the same repository and use a record type to identify the different types of objects.
- Technically, a repository cannot have more than 900 attributes. In practice, however, performance deteriorates beyond about 100 attributes. With the introduction of category-specific attributes in release 8.3, this limit applies to non-category-specific attributes only, and there is no limit on the number of category-specific attributes.
- MDM manages all relationships as peer-to-peer, many-to-many, which means you do not have to define cardinality upfront. For documentation of the model, however, it is better to define cardinality in the repository model. If cardinality is to be enforced, it has to be done using a rulebase.
- Grouping attributes in attribute groups allows logical arrangement of attributes. It also helps with security enforcement using resource security or rulebase constraints; security and data visibility can be defined at the group level.

  o Groups also let you assign data custodians for governance and route workitems using business process rules. For example, when a record changes, the Compare activity in the workflow allows identification of the groups for which data has changed. This information can be used to determine who should approve the change.
  o Groups are also displayed as tabs in the OOB UI, though you can create a UI specification file to merge the groups under one group. (2)
- If your model is deeply nested (a deep hierarchy), first consider whether the nesting can be reduced. If the model cannot be changed, consider the following configuration switches to control the depth for better performance:
  o com.tibco.cim.optimization.recordbundleview.optimaldepth: defines the depth of the bundle to be loaded for view.
  o com.tibco.cim.optimization.recordbundlevalidation.depth: defines the depth of the bundle for validation. If there is no change to any node (at any level) in the hierarchy, validation is not done for children of the modified node at depths beyond this value. Changed data is always validated.
  o tibco.optimization.recordbundle.excluderelationship: specifies which relationships can be ignored for navigation through the bundle.
  o com.tibco.cim.optimization.recordview.skipcustomvalidation: specifies that the custom validation class specified for a record can be bypassed for view.
  o com.tibco.cim.ui.optimization.recordsearch.relationship.depth: depth of the hierarchy available for configuration of the search pane.
  o com.tibco.cim.optimization.recordsearch.relationship.depth: depth of the hierarchy for search; applies to web services.
- If the cardinality is expected to be more than 500, you will have performance issues. Similarly, very large bundles which require traversal through many relationships will have performance issues. Larger cardinality results in performance degradation for all channels, and especially in the UI.
Though a few optimization switches alleviate this problem, it is still advisable to look at the data model and consider changes which will reduce the cardinality. You have the following choices:
  o Create an intermediate group object. For example, if a customer has more than 500 accounts, you can create an account group object to bunch the accounts so that each group stays below the limit.

(2) This feature was introduced in HF 10.

  o Do you need to navigate from parent to child, or from child to parent? If navigation is always in one direction, you can configure the MDM UI to exclude the relationship from parent to child.
  o Consider softlinks. A relationship between records configured using a softlink is not explicitly maintained (no RELATIONSHIP entries), which also means softlinks are NOT version specific. Instead, related records are searched for whenever needed. A softlink is a good option when related records are simply referred to and are not normally updated together. If softlinks are used:
    - No propagation of data can be done across a softlink.
    - Records connected by a softlink cannot be updated or accessed in one transaction (import, record edit, export, record query/get related record).
    - Cross-repository search across such relationships is not possible.
    - The GetRecord activity will not include records related by softlink.
    - Records related through a softlink are not part of the record bundle, i.e., they will not be validated when the parent changes.
- The initial design of the data model should be done using standard database modeling principles. The MDM relationship table is a generic association table:
  o All associations map to unique relationship types.
  o All attributes defined for an association are mapped as relationship attributes.
  o Once this design is achieved, define relationship names and map them.
- More than one relationship of the same type is not possible between any two records. If such relationships are needed, you must create an intermediate association object.
- While mapping the data model to repositories, specializations of objects which result in small tables, each with very little additional information, should be combined into one repository.
- If the MDM out-of-the-box UI will be used, consider the impact of the data model on the UI. A small, highly normalized data model will require a lot of clicks to navigate through the UI.
- MDM versions all changes. When record data changes, a new version of the record is created.
Irrespective of which attribute changed, the new version has the complete data. Whenever the record version changes, all relationships of the record with other records are copied from the previous version to the new version. This is a full set: to get record data or relationships for a record, no previous record history needs to be accessed.
- Classifications assigned to records are simply a special type of relationship (type = 4).
- Multi-value attributes and category-specific attributes (3) are stored in separate tables. You can specify the table names or choose the predefined shared tables. Attributes which will have values for only some records and will not generate a large number of rows (each value of a multi-value/category-specific attribute is a row) can be stored in a shared table. Splitting the information into too many tables, each with a small number of rows, is not recommended. At the same time, attributes which will generate a lot of rows should be stored in separate tables.

(3) Introduced in release 8.3.
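The depth-control switches listed in section 3.1 are set in ConfigValues.xml. A minimal sketch of two of them, assuming the ConfValue/ConfNum entry style used in MDM's ConfigValues.xml; the wrapper attributes and the numeric values shown here are illustrative assumptions, only the propname values come from the list above:

```xml
<!-- Sketch only: limit bundle depth for view and for validation.
     Wrapper elements/attributes and values are assumptions; check your
     installation's ConfigValues.xml for the exact entry format. -->
<ConfValue description="Depth of bundle loaded for record view"
           propname="com.tibco.cim.optimization.recordbundleview.optimaldepth">
  <ConfNum default="5" value="3"/>
</ConfValue>
<ConfValue description="Depth of bundle validated when a node changes"
           propname="com.tibco.cim.optimization.recordbundlevalidation.depth">
  <ConfNum default="5" value="2"/>
</ConfValue>
```

Lower values reduce the amount of the hierarchy loaded or validated per operation, at the cost of shallower views and validations.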

4 Import

4.1 Input Maps

Instead of using input maps for data transformation, keep the input maps simple, with no expressions, and use a rulebase during the import step to transform the data:
- The rulebase designer lets you define sophisticated transforms without requiring knowledge of SQL syntax.
- Expressions entered in an input map are limited to simple expressions which work on one or more attributes.
- Expressions entered in an input map cannot include procedures. To use sequences in an expression, you must implement a SQL function.
- Expressions are specific to the database being used.

4.2 Applying Rulebases During Import (4)

A rulebase used during import may not make assumptions about relationships, as relationships are not established during this validation. If such validations are required, you need to implement them in a separate rulebase and configure an EvaluateRulebase activity.

A rulebase can be applied to incoming data at various stages:
- Prepare for import. During this step, the rulebase can only update the record ID and Ext; any other updates are ignored. The rulebase only has access to the data in the staging step; the record does not yet have an identity and does not have any relationships. The rulebase also cannot compare the record with previous versions.
  o It is recommended that when this step completes, all records have been assigned an ID and Ext.
  o Release 8.3 also introduces the concept of business keys, which allows mapping of an external key to the record ID and Ext.
  o Starting with release 8.3, records can be rejected during this step. The data can be validated in isolation (validations which only use the incoming data) and erroneous records rejected.
- Import. During this step, the record has an identity and can be validated and transformed. Using a rulebase to transform the data at this step, instead of in a separate step, avoids creating another version of the record.
(4) Some of this information applies to release 8.3.

The rulebase can:

  o Compare the record with the previous version, for example to implement tolerance limits on changes.
  o Transform attributes other than ID/Ext; such transformation, and comparison with the previous version, should be done in this step. Records can be rejected.
  o Transformation of the record ID/Ext is not recommended in this step.
  This step also validates the data types and sizes of all attributes and rejects records which do not pass the validation. Note that the record does not have relationships yet, so the rulebase should not attempt to validate the hierarchy.
- EvaluateRulebase. As most validations can be done in the import step, this step is necessary only for validations which require the full hierarchy (i.e., sibling validations, propagations).
  o The EvaluateRulebase activity should be executed after the ExtractRelationship step.
  o Any hierarchy validation and propagation should be done in a separate EvaluateRulebase step after relationships are established.

4.3 How to Import Nested Data Mapped to Multiple Repositories?

To import nested data, it is best to use one or more data sources per repository instead of creating a view which combines the underlying tables and then mapping that view to multiple repositories. Views are not preferred because:
- A view may already impose an ordering on the data.
- Repeated passes over a view are not guaranteed to yield data in the same order, which will result in loss of relationships in some scenarios. For example, if a view is created on several tables and then mapped to multiple repositories, relationships may be lost because MDM is unable to get ordered rows when the data for each repository is processed. (5)

If an import spans a large number of related repositories, the rows for parent repositories are duplicated to maintain a unique link from parent to leaf record in the hierarchy. For example, if each customer record has 5 accounts, each related to 10 addresses, the number of customer rows processed increases to 1 * 5 * 10 = 50.
These numbers increase exponentially with more nesting. Currently there is no workaround for this. (6) Because of this, the summary reported by import can be misleading, and performance degrades. To manage this, nesting of more than 3 levels is not recommended.

(5) This is fixed in release
(6) This issue is fixed in 8.3.

4.4 Ensuring Ordering of Data

Ordering of data is important when an import spans multiple repositories and multiple data sources are joined. When multilevel joins are done, you must help MDM understand the joins and the ordering requirement. This is done by creating text files which list the ordering criterion. More details needed. <<TBD>>

4.5 How to Avoid Failures During Import

- Usage of the CONTAINS column (Related Records): the CONTAINS attribute is deprecated except for specifying the DELETE or DELETEALL commands. Do not use this attribute for creating or modifying relationships. Instead, use input map hierarchies and explicitly map related records to the ID/Ext of the related repositories.
- Don't leave imports in the approval stage for long. Imported records are saved as drafts, and the visibility of drafts is limited to the process which created them. Such records are not visible to other processes, including other imports. The longer records stay in the draft state, the higher the chance of conflicts. Multiple pending imports and conflicts will usually lead to different outcomes depending on which records are approved first. Once records enter conflict, it is quite complicated for the user to manage the conflicts and predict the outcome.
- Changing an input map while an import is running will result in import failure or incorrect results.
- Import of very large batches is not recommended; the optimal size is 300K-500K. Consider splitting batches into smaller chunks. The larger the chunks, the bigger the demands on the cache and database. Beyond a chunk size of 500K, the import becomes a major exercise in hardware setup and tuning. As batch sizes increase, ask whether you really need regular import, or whether DBLoader would be a better option.

4.6 Handling Errors During Import

With release 8.3, errors are reported in an error file which can be opened in Excel to view the errors for each attribute. In addition, the records which are in error are stored in a log file. If you need to extract the data from the log file, you will have to parse it.
In release 8.3 and previous releases, if you use an EvaluateRulebase activity to validate the records after import, the activity will generate a record collection of all rejected records. You can process this record collection like any other record collection. For example, you can send all the records into separate workflows for correction by spawning a workflow for each record in the rejected record collection.

During import, errors are generated at various stages, as follows:

- Upload: the data is first loaded from files into the data source tables. The loading may fail if the data in the file cannot be loaded. In this case, the errors are reported in the error file for the load step.
- Import steps: as described earlier, the import steps (PrepareForImport, Import, and EvaluateRulebase) produce a combined error and log file. (7) The errors can be:
  o Data does not match the target attribute format and cannot be converted correctly. This happens if the data type of the data source attribute/expression differs from the data type of the repository attribute it is mapped to. In this case, the whole record is rejected.
  o Data truncation warning: some data was truncated to fit the target attribute size.
  o Rulebase data validation errors, if a rulebase was specified.
- EvaluateRulebase: the records rejected by rulebase validation are output into a separate record collection, which can be processed further.
- Manage record collection: errors occur when records cannot be bundled correctly. Bundling is needed only if you imported hierarchical data and need to spawn a workflow for each hierarchy. If the same children are part of multiple hierarchies, the bundling will fail; in this case, you need to import in multiple passes.

If a row is mapped to more than one repository, import may reject one of the mapped records. Such a rejection does not reject the parts of the row mapped to other repositories.

4.7 Using SQL Based Datasources

Avoid using SQL based datasources for data which changes often. When the underlying data changes, MDM does not know about the change unless an explicit upload is done. When such datasources are used in rulebases, the data may be cached and will not be updated unless an upload is done. Using views as the basis for SQL based datasources is also not recommended. (8)

4.8 Loading Dates

Date loading is always tricky, as many variations of date formats are supported by the loaders.
It is better to define a date attribute in the datasource as String and then map it to a date repository attribute; of course, you can also specify a date format for the datasource. Different database loaders support different date formats, so it is best to check the documentation and test with your data format to be sure. All dates in a data source should be in the same date format. It is also recommended that all dates in all datasources used in one import have the same date format (some variations work, some don't).

(7) Release 8.3 onwards.
(8) This recommendation is invalidated in 8.3. Release 8.3 is able to order the data in views correctly by capturing the ordering information.

4.9 Import Control Switches

Import has many performance switches which can be combined to fine-tune performance:
- Cyclic relationship check during import (ConfigValues.xml): if you do not expect data to be cyclic, keep this value false; the test is quite expensive. (com.tibco.cim.optimization.import.cyclictest)
- Key mutation check during import (ConfigValues.xml): if keys (ID/Ext), once assigned to a record, are never changed in your data, keep this value false. (com.tibco.cim.optimization.import.mutationtest)
- Duplicate record check during import (ConfigValues.xml): if the same record does not repeat within one import, change this flag to false; note that the default is true. (com.tibco.cim.optimization.import.duprowtest)
- ProcessOption: this input parameter to ImportCatalogRecords allows you to override the defaults.

4.10 DBLoader v/s Normal Import

DBLoader is primarily designed for initial data load. The tool also supports loading changes for existing records, but it remains targeted at technical loads. When the data volume exceeds a few hundred thousand rows, consider DBLoader instead of direct load import:
- Direct load import throughput varies between 300K-1.5M rows per hour, compared to 5M-10M for DBLoader. The throughput varies based on the complexity of the data model, the existing data in the database, the hardware, and the rulebases used. It is common to see import and DBLoader throughput drop as more and more data is loaded. In most cases, some of the lost throughput can be recovered by a DBA, typically by collecting stats.
- Data is not validated; you need to ensure that the data is clean. You can use an ETL tool (e.g., Kettle) and data quality tools (Trillium, TIBCO Patterns) to process and clean the data file before uploading it.
- As loading cannot be interrupted and you do not get a chance to review and approve, make sure the import really should be done.
- DBLoader does generate back-out scripts, but backing out is messy and time consuming. It is almost impossible to back out if the imported data has since been changed by other users or processes.
  o Release 8.3 has an Undo feature which is also supported for DBLoader.
- DBLoader performance depends on the database, especially on the memory assigned and the undo and temp table spaces. It is not uncommon to get an out-of-memory error thrown by the database or by the loader tools themselves. When you see such errors, either reduce the size of the data being loaded or increase the memory assigned to the database.

- DBLoader is designed for bulk data loads when there is no other activity on the server which changes metadata structures or record data. As DBLoader is a bulk data importer, it shortcuts some of the internal processes; for example, change notifications are not generated.
- DBLoader uses bulk inserts and updates, and its performance depends heavily on database performance.
- Creating indexes for data sources (DBLoader only): large loads (500K rows or more) which join more than one datasource can be sped up by creating indexes for the datasources. This is done by creating a file MQ_COMMON_DIR\enterprise\datasource\<datasourcename>.idx.
  o Example 1: if the CID column of data source table DF_33969_37793_TAB is mapped to productid and productidext is not mapped, create the index file as follows: UPPER("CID")
  o Example 2: if the CID column of data source table DF_33969_37793_TAB is mapped to productid and CEXT is mapped to productidext, create the index file as follows: UPPER("CID"),UPPER("CEXT")
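To make Example 2 concrete: the index hint is just a one-line text file dropped into the datasource directory. A minimal sketch, assuming a Unix-style path; MQ_COMMON_DIR is installation specific, so a placeholder default is used here:

```shell
# Sketch: create the index hint file for data source DF_33969_37793_TAB.
# MQ_COMMON_DIR is installation specific; default to a scratch directory here.
MQ_COMMON_DIR="${MQ_COMMON_DIR:-/tmp/mq_common}"
IDX_DIR="$MQ_COMMON_DIR/enterprise/datasource"
mkdir -p "$IDX_DIR"

# Both the ID and Ext columns are mapped, so index both (Example 2 above).
printf 'UPPER("CID"),UPPER("CEXT")\n' > "$IDX_DIR/DF_33969_37793_TAB.idx"
```

DBLoader picks the file up by datasource table name, so the file name must match the datasource table exactly.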

5 Import of Meta Data

Incremental metadata import for a repository model should be used sparingly. If you have no choice other than incremental import, ensure that the metadata contains at least all objects within one repository hierarchy within one transaction. If you import a partial hierarchy, you may not like the outcome.

6 Workflows

6.1 Workflow Customizations

- Java class based transitions perform much better than Java code added directly to a transition. It is fine to experiment with inline transitions using BeanShell, but once you have finalized the transition code, create a Java class and compile it.
- Instead of using ad hoc Java code in a transition to perform a task which is otherwise not possible using the predefined activities, create a custom workflow activity. A custom activity performs much better, is tracked in the event log, and has well defined interfaces. Activity performance stats are reported in JMX and the timing log.
- Don't update any database row in custom code, as this can make the cache go out of sync. The caching algorithm and the objects stored in the cache change often as the cache is tuned; an object which was not cached so far could be cached in the next release or hotfix. Such updates may also create deadlocks.
- Consider using rules to select input values for workflow activities instead of hard-coding them. Using rules makes it easy to change the inputs. For example, the rulebase passed to the EvaluateRulebase activity can be selected using business process rules.
- Don't perform any database updates in transitions, as transitions don't support failover and restart. Also, any updates performed in a transition are not confirmed until the next activity completes.

6.2 Error Handling

The standard error handling method is to define an error transition. You can define an error transition for each activity if a special action has to be taken for a specific activity; in most cases an error transition from Any is sufficient. The Undo activity can be used to undo changes made to master data. If used, undo should be done after the UpdateRecordState activity has been run to reject the records.
6.3 Activities Tuning

- GetRecord: control the depth and the related records returned by specifying the depth and relationship names.
- Upload data source: Oracle direct path upload used to be quite restrictive (the client and server had to have the same hardware, matching software versions, etc.). From Oracle 10.2 onwards, however, direct path load is possible in most setups. You can enable direct path load by configuring ConfigValues.xml. Direct path upload works 5-10 times faster than normal upload and can give a significant improvement for large data source uploads.

6.4 Subflow v/s Spawned Workflow

Subflows do not create new events, which means that context, status, outputs, and errors can be freely shared between subflows and the parent workflow. Spawned workflows are separate events and do not share context with the parent workflow.
- Synchronous subflows are a good choice when the subflow must return some data to the parent workflow and the parent workflow must wait for the subflow to finish. Subflows also allow errors to be propagated to the parent workflow.
- Spawned workflows are a good choice when the parent workflow continues after the child workflow has been initiated and does not need any feedback (fire and forget). Spawned workflows are the only choice when (a) you don't know how many child workflows are to be started, or (b) a large number of child workflows are to be started.
- Asynchronous subflows are not recommended (and will be deprecated). Instead, either use synchronous subflows or spawn a new workflow.
- Subflows allow you to limit the context by explicitly mapping the context from the parent workflow to the input and output parameters of the subflow. This makes the workflow cleaner and free of side effects.
- Subflows should not usually set the status of the event. Remember, a subflow does not have a separate event; it is the event used by the parent workflow, so a status change will be reflected in the parent workflow. It is expected that the subflow will return to the parent workflow, and any status change should be done in the parent workflow.
- Any error generated in a subflow is propagated to the parent workflow if it is not handled (no error transition) in the subflow. If you do have an error transition, the error is handled and not automatically propagated to the parent workflow (the Java try/catch paradigm).
- Subflows can have workitems and suspend the workflow.
- Suspension of a subflow (applies to synchronous subflows only) suspends the parent workflow as well. However, when such workitems time out, the subflow is restarted and will complete. Unless the subflow is suspended again, the parent workflow will assume that the subflow has completed and will resume. If you do not want this behavior, you need to suspend the subflow by calling suspend in the transition (or the Suspend activity in the next release). (9)

(9) Alstom use case.

7 Rulebase

- Implementing security by checking against roles can be optimized by evaluating the role list at the start of the rulebase and setting a flag. Using the flag, instead of re-checking the role list every time it is needed, performs better.
- Use decision tables for simple rules; for access control and attribute visibility they are much more efficient and easier to manage. (10)
- If your rulebase has many conditional sections, consider separating them into smaller rulebases. This provides better performance and modularizes your code. Even though making the rules conditional provides some of the performance benefit, separating them into smaller rulebases and using includes makes it even more efficient.
- Repository lookups (i.e., a dropdown showing records from another repository) can be very expensive if the lookup is done against a large repository. Note that such a query does not use cached data, as it assumes that the data in the target repository could change at any time. Such lookups have been found to be the primary cause of many slow UI service requests. The same applies to any SQL based lookups. To optimize such lookups:
  o Specify sufficient conditions so that you get high selectivity.
  o Do a test run and extract the query from the debug log. Run it through database tools to find which indexes are needed.
  o Create the required indexes. (A note of caution: too many indexes are not recommended, as they will slow down DML operations.)
- Dropdowns with a large number of entries will slow down the application, specifically the UI. If your data is such that a dropdown will be large, consider:
  o Making the dropdown more selective by using context; e.g., can GROUP_ID be used to limit the entries for PARTICIPENT_TYPE?
  o Creating cascaded dropdowns by introducing groups; e.g., make the entries in a city dropdown depend on the selected state, so that you don't have to list all cities. The city entries are populated after the state is selected.
o Redesign to ensure that you don t ever end up with entries more than 100 Dropdowns based on datasources are cached during the execution so they are not a big drag on performance. However if datasource has lot of entries, you will still have UI performance impact. Also, SQL based datasource may provide wrong results if data in underlying table has changed but MDM is not notified of the change by executing UPLOAD. 10 Since release 8.3 Page 20 of 44
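The "evaluate the role list once, set a flag" pattern recommended above can be sketched outside of rulebase syntax as follows. This is a minimal Python illustration only; the real implementation is rulebase XML, and the names (PRIVILEGED_ROLES, build_context, can_edit_attribute) are hypothetical.

```python
# Hypothetical sketch of "evaluate once, cache a flag": compute role
# membership a single time, then reuse the flag in every later
# access-control check instead of re-scanning the role list.

PRIVILEGED_ROLES = {"DataSteward", "Admin"}  # assumed role names

def build_context(user_roles):
    # Evaluated once, like a flag set at the start of a rulebase.
    return {"is_privileged": bool(PRIVILEGED_ROLES & set(user_roles))}

def can_edit_attribute(context):
    # Later rules read the cached flag; no repeated role-list scan.
    return context["is_privileged"]

ctx = build_context(["Viewer", "Admin"])
print(can_edit_attribute(ctx))  # True
```

The same idea applies to any repeated check in a rulebase: hoist the expensive evaluation to the top and branch on the cached result.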

Initialization of data when creating a record should be placed in the initialization rulebase. The initialization rulebase is run only once, before the record is initialized.

Propagation should be designed to work in one direction. If you end up with the following scenario, you may get unpredictable results in some cases (Alstom use case):
o Propagate from repo A to repo B
o Propagate from repo B to repo C
o Propagate from repo C to repo A or B
Essentially, when propagation happens in both directions, the order of propagation may differ depending on the change and the use case.

Use the timing log to see which constraints are slow. This will give you a starting point for analyzing performance issues. Also, use JMX to see how often constraints and rulebases are executed.
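The problematic propagation scenario above amounts to a cycle in the repository-to-repository propagation graph. A quick way to check a planned design for cycles is a depth-first search over the propagation map (a hypothetical sketch; the repository names are illustrative):

```python
def has_cycle(propagations):
    """Detect a cycle in a repo -> [target repos] propagation map via DFS."""
    visiting, done = set(), set()

    def visit(repo):
        if repo in done:
            return False
        if repo in visiting:
            return True  # back edge: propagation loops back on itself
        visiting.add(repo)
        if any(visit(target) for target in propagations.get(repo, [])):
            return True
        visiting.discard(repo)
        done.add(repo)
        return False

    return any(visit(repo) for repo in list(propagations))

# The scenario from the text: A -> B -> C -> A is cyclic.
print(has_cycle({"A": ["B"], "B": ["C"], "C": ["A"]}))  # True
print(has_cycle({"A": ["B"], "B": ["C"]}))              # False
```

Running such a check against the planned propagation design before building it helps ensure propagation stays one-directional.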

8 Performance management

8.1 UI performance

Search screens are customizable by users. However, if too many search attributes are added to a screen, it becomes cluttered and slow to draw. UI performance can degrade drastically and client CPU consumption can reach 100%. Train users to configure only the attributes that matter most of the time instead of adding all fields.

Rulebase optimization is key to record UI performance.

Event log and inbox performance does not depend on data volume; it mostly depends on the default filter, including rows per page. If you see performance degradation, it usually means the database is not performing optimally. You can reduce the rows per page to mitigate the issue.

UI performance degrades if the cache is incorrectly sized. For example, if the cache is small, user and authentication information may be evicted, causing reloads. You can check cache statistics using JMX and see how the various caches are performing.

Control what the user can see, e.g. relationships. The less data to show, the better.

8.2 Search Optimization

Out of the box, you do not get indexes on all the data you would like to search on. It is not possible to create indexes purely from the defined metadata, as defining indexes for optimization is still an art. Once you have determined the common searches your users run, find out which searches are taking time. Capture the debug log for such searches and extract the query. Run the query through database tools to determine which indexes should be created. Note that case-sensitive searches need different indexes than case-insensitive searches; for example, in Oracle you need to create a function-based index to support case-insensitive search.

8.3 Timing Logs

When performance degradation is suspected, timing logs for the various components should be enabled. For most situations, the default configuration will already capture timings for slow components; check the log directory for a timing log file.
Timing logs capture the actions/SQL/activities/rulebases that exceed the time limit defined in the configuration. If a SQL statement is slow, it will be registered in the timing log. If simple SQL statements start to appear in this log, the database is definitely not performing well. If a particular activity shows up in the log, you can focus your efforts on that specific activity.
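As an illustration of how such a timing log might be analyzed, the sketch below aggregates entries to find the slowest components. The "component,elapsed_ms" line format is an assumption for illustration only; check your installation's actual timing log layout.

```python
# Aggregate a timing log to rank the slowest components by total time.
# The "component,elapsed_ms" format is hypothetical.
from collections import defaultdict

def slowest(lines, top=3):
    totals = defaultdict(int)
    for line in lines:
        component, _, ms = line.partition(",")
        if ms.strip().isdigit():
            totals[component.strip()] += int(ms)
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:top]

sample = [
    "sql:SELECT_RECORD,1200",
    "rulebase:validate,300",
    "sql:SELECT_RECORD,900",
    "activity:Import,2500",
]
print(slowest(sample))
```

A ranking like this tells you whether to focus tuning effort on the database (SQL dominates) or on a specific activity or rulebase.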

The timing log can also be loaded into a database table (a sample script is provided under /bin) or viewed through the sample projects in Spotfire.

9 Database Management

9.1 Table Spaces

Large tables should preferably be kept in separate tablespaces, though newer technologies may make this practice redundant. For example, many DBAs claim that with Oracle ASM, keeping large tables in separate tablespaces is not required. Still, keeping a large table in its own tablespace allows the DBA to manage it more efficiently.

9.2 Database Performance

Database performance changes as data is added or deleted. When more than 10% of the data has changed or been added, the database may require DBA attention. The DBA should:
a) Set up a job to generate ADDM/AWR reports, or equivalent, at regular intervals. (For Oracle, Oracle Enterprise Manager may require a separate license.)
b) Set up a job to collect optimizer statistics regularly.
c) Review the reports for recommendations and adjust database parameters accordingly. For example, reports may indicate changes to the memory allocated to the database instance. If the ADDM report is reviewed regularly and acted upon, you are unlikely to have database performance issues in your installation.
d) Regularly purge data using the provided purge program or another tool.
e) If there are many deletes (due to purge), indexes and tables may become fragmented. The statistics report will show this, and you may have to defragment the indexes regularly.
f) If the ADDM report shows that inserts are running slow, this could mean:
   1. Disks or access paths are slow. Even with a fast SAN, disk performance can suffer if database storage options are not configured correctly. Usually, Oracle ASM with a fast SAN resolves most disk-related issues.
   2. A table or index is fragmented, which can happen if your usage involves a lot of data being imported and then deleted using purge. Request a defragmentation.
   3. Too much concurrency; consider a better database configuration or bigger hardware.

9.3 Database LOB Management

Most of the XML documents generated during workflow are stored as LOBs in the GENERALDOCUMENT table. LOBs are special objects and need special attention; for example, Oracle may not release the space assigned for LOB storage, depending on configuration. LOBs are difficult to manage, so plan for them from the beginning. (More details to be added: block size, UNDO_RETENTION.)

9.4 Other factors

As the number of rows in a table increases, partitioning becomes a requirement. If the DBA requests that partitioning be done, contact MDM engineering for consultation.

More indexes mean slower inserts, updates, and deletes. Don't create indexes indiscriminately.

Many levels of relationships (depth) slow down the application in all areas.

10 Purge

Purge for history should be scheduled to run weekly; this keeps history from growing unchecked. Temporary file purge is implemented using a script and should also be scheduled to run weekly. If purge has not been run for a long time, the first purge may take a long time. Purging older record versions should be considered if older versions do not need to be retained.

11 File System Management

From CIM 8.0 onwards, most of the data that was previously stored in files is stored in the database, and file system management has become simple. Most growth happens in the /Temp directory. You should set up the supplied cron job (bin/tibcocrontab.sh) to purge files regularly. The script deletes files that are no longer needed and allows you to customize the retention period.

All files and subdirectories under the MQ_COMMON_DIR/Temp directory can be removed if you are in a space crunch. However, when deleting all files from this directory, make sure that no workflows are running, as recently created temp files may be needed by running workflows.

If you send many messages in or out of MDM through JMS, you will also see /sent and /received directories. These directories keep copies of messages sent or received. You can remove these directories without affecting any process, except that you lose the trace of messages.

Files stored under the /work folder usually should not be deleted. This folder is not expected to contain many files.
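The supplied bin/tibcocrontab.sh script is the supported mechanism for this cleanup. Purely to illustrate the retention idea it implements, here is a minimal sketch of age-based file cleanup (the function name and retention value are assumptions, not part of the product):

```python
# Minimal sketch of retention-based temp-file cleanup: delete files whose
# modification time is older than the retention period. The real mechanism
# is the supplied bin/tibcocrontab.sh script.
import os
import time

def purge_old_files(directory, retention_days):
    cutoff = time.time() - retention_days * 86400
    removed = []
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        # Only plain files are considered; subdirectories are left alone.
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
            removed.append(name)
    return removed
```

Whatever tool is used, the key point from the text stands: keep a retention window so that files still needed by running workflows are never deleted.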

12 Recovering Failed Messages/Events

When a workflow fails due to a subsystem failure, and the workflow was started through a JMS message, MDM will attempt to redeliver the message. (When workflows started through synchronous web services fail, the error is returned to the caller instead of being retried.) If JMS has also failed and the message cannot be redelivered, MDM will write the message to disk. It is advisable to configure the destination directory for failed messages on a local disk; this protects against the inability to write to network disks during a network failure. The location is configured using the following property:

<ConfValue description="The name of the file logging all failure messages. Default is messages-redo.log."
           ishotdeployable="false"
           name="failure Message Log File Name"
           propname="com.tibco.cim.queue.failuremessageslogfile"
           sinceversion="7.0"
           visibility="advanced">
  <ConfString default="messages-redo.log" value="messages-redo.log"/>
</ConfValue>

The messages stored in this file can be resubmitted using the message recovery scripts under /bin (msgrecovery.bat and msgrecovery.sh).

If an event has started and then failed, as long as the event is viewable in the Event Log, you can also use the resubmit action to restart it. However, the resubmit action may not work for events that require the initial data to still be present. For example, a record modify event initiated from the UI or a web service should not be resubmitted, because the records it modified would no longer be in a valid state for the event to work correctly.

13 Deployment

13.1 Ensuring that UI-Initiated Workflows Get Higher Priority / Ensuring that UI Performance Is Not Impacted by Large Batches

When the MDM server is fully loaded, that is, all workflow processing threads are always busy, you may end up in a scenario where workflows initiated by users take a few minutes to get a chance to execute. If such workflows are queued behind a large batch (import), it may take even longer. Most UI users will not care about or notice this delay; however, when users need data approved and confirmed as soon as possible, this may have to be addressed.

The following deployment architecture separates the JMS servers but shares the rest of the components, so that data consistency and availability are not sacrificed:

[Diagram: UI instances connect to their own JMS server, and batch processing instances connect to a separate JMS server; both sets of instances share the common directory and the database.]

The same architecture allows you to guarantee that UI performance is not impacted when there is a large load of imports or backend messages.

For most situations you may not have to resort to this deployment option, as you can also balance processing priority through ConfigValues. Using the configuration (see category = Message prioritization), you can adjust the relative priority of messages sent through different channels.

13.2 Memory Utilization

If local caching is not enabled, a 1 GB heap (plus perm space) is sufficient. However, if CPU utilization is low, you could consider increasing the JVM heap to 2 or 4 GB along with an increased number of workflow and AsyncCall processing listeners. In most cases, a JVM heap of 512 MB is sufficient for a development setup.

13.3 Multiple CIM Instances

Multiple instances with load balancing should be configured whenever possible, until CPU usage hits about 75% at peak load. Also, if CPU utilization on the machine is 30% or less, consider starting another MDM instance.

13.4 Failover

MDM implements a wait-and-retry mechanism to handle subsystem (database, JMS, file, Netrics) failures. This failover is configured based on the error codes returned. If failover is not happening for a certain failure:
o The error is not configured for failover; you need to add the error messages to the ConfigValue.xml file.
o The subsystem version is different (e.g. a new Oracle version) and the error description may have changed.
o The error description may be presented in a different language.

13.5 Capacity Planning

Engineering provides a free service to review capacity requirements and suggest hardware. You are encouraged to use this service; contact Support and they will engage engineering. A cache memory sizing worksheet, a database sizing worksheet, and a CPI sizing worksheet are available.

14 JMS Best Practices

The JMS server plays a very small part in overall application performance; don't bother with excessive tuning of the JMS server.

14.1 EMS

Prefetch should be set to NONE.

14.2 Websphere MQ

For large volumes it is important that logs are sized correctly to avoid runtime errors. See the installation guide for instructions on how to configure log files.

15 Cache

For development and most functional testing, a single installation with cache is good enough; you do not need to set up a central cache server.

Sequence numbers are cached by MDM. Don't change sequence numbers directly (alter or read) using any custom SQL/code. (A sequence number changes when the sequence is used in a SQL statement.) MDM also caches a large number of objects. If database tables are updated using scripts, check with TIBCO Support whether the cache is impacted. If it is, in most cases clearing the cache using the provided scripts or JMX is sufficient; in some cases, a restart of the whole cluster is required.

The most common installation error is incorrect cache configuration:
o Sizing the cache for the expected data volume
o Setting up the cache cluster

Multiple instances of MDM use the cache to exchange job status and for distributed locks. If this information is lost due to an abnormal shutdown of MDM instances or cache instances, the following could happen:
o Distributed locks may not be released
o Distributed locks may be released prematurely
o Jobs which are processing data batches may hang

When abnormal behavior is observed, the whole cluster, including all MDM instances and cache instances, must be shut down and restarted. To avoid such unacceptable situations, it is recommended that you:
o Allow MDM instances to shut down gracefully
o Configure the cache so that the following caches have a replication count of at least one:
  - COUNTERS
  - MDM_LOCK_SPACE (since 8.3)

Memory fragmentation happens after many puts and removes from the cache. Prior to release 8.3, memory fragmentation could build up quickly as data was imported; this is improved in release 8.3, as removes are minimized. When memory fragmentation reaches a high point, the cache slows down and starts to consume significant CPU. If these symptoms are observed, a restart should be scheduled.
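The replication requirement for the job-status and lock caches lends itself to an automated pre-flight check. The sketch below validates a cache configuration against that rule; the dictionary layout and function name are hypothetical, not the real MDM configuration format.

```python
# Sanity check on cache configuration: the caches holding job status and
# distributed locks should have at least one replica, so that an abnormal
# shutdown of a single instance does not lose locks or hang jobs.
# The config dict layout here is an illustrative assumption.
REQUIRED_REPLICATED = {"COUNTERS", "MDM_LOCK_SPACE"}

def missing_replication(cache_config):
    """Return the required caches whose replication count is below one."""
    return sorted(
        name for name in REQUIRED_REPLICATED
        if cache_config.get(name, {}).get("replication_count", 0) < 1
    )

cfg = {
    "COUNTERS": {"replication_count": 1},
    "MDM_LOCK_SPACE": {"replication_count": 0},
}
print(missing_replication(cfg))  # ['MDM_LOCK_SPACE']
```

Running a check like this as part of deployment verification catches the misconfiguration before an abnormal shutdown exposes it.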

16 Web Services

If you specify ID/EXT in the record UI, the cache can be used to find the records (find by primary), which is significantly faster than a search without both values.

Reading a large number of entries (records, workitems) in one call will eventually fail with an out-of-memory error as the payload size increases. Performance will also deteriorate, because the large payload must be transported, and the failures will be sporadic, irregular, and unpredictable. Always build the client to scroll through the result set, e.g. 100 records at a time, to get predictable and reliable performance. To scroll through the result set, you need to set the startcount correctly.

Generated web services do not support single sign-on or automatic creation/modification of users.

16.1 Synchronous Web Services

Synchronous web services are a good choice when the caller must know whether the operation succeeded. They can be coupled with in-memory workflows to provide lightweight execution. Synchronous web services return only when any workflow fired by the web service has completed.

Synchronous web services should not be used if the service and any associated workflow take more than a couple of seconds to execute. For example, import or mass-update workflows which process a batch of records are not good candidates.

Workflows initiated by synchronous web services do not go through the workflow queue, which means they have no wait period and are not assigned to a workflow queue listener. Such workflows are therefore in addition to the number of simultaneous workflows that can be fired through the workflow queue; if the machine is sized based on the number of workflow queue listeners, this is additional load and may affect performance.

Synchronous web services which fire workflows take more time than asynchronous ones, because the web service returns only after workflow completion. During this time, HTTP threads are held for a longer duration and the maximum concurrent HTTP listener count may be reached.
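The scrolling pattern described above can be sketched as a client-side loop that advances startcount by the page size on each call. This is a hypothetical sketch: fetch_page stands in for the generated web-service call and is not a real MDM API, and the 1-based startcount convention is assumed.

```python
# Page through a large result set 100 records at a time, advancing
# startcount each call, instead of fetching everything in one request.
# fetch_page is a stand-in for the generated web-service call.
PAGE_SIZE = 100

def fetch_all(fetch_page):
    start = 1  # assumed 1-based startcount
    results = []
    while True:
        page = fetch_page(startcount=start, maxcount=PAGE_SIZE)
        results.extend(page)
        if len(page) < PAGE_SIZE:
            break  # last (partial or empty) page reached
        start += PAGE_SIZE
    return results

# Simulated backend with 250 records:
data = list(range(250))
def fake_fetch(startcount, maxcount):
    return data[startcount - 1:startcount - 1 + maxcount]

print(len(fetch_all(fake_fetch)))  # 250
```

Keeping the page size fixed and small makes both memory use and response time per call predictable, which is exactly the reliability the text asks for.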


Using the VMware vrealize Orchestrator Client

Using the VMware vrealize Orchestrator Client Using the VMware vrealize Orchestrator Client vrealize Orchestrator 7.0 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by

More information

RTI Monitor. User s Manual

RTI Monitor. User s Manual RTI Monitor User s Manual Version 4.5 2010-2012 Real-Time Innovations, Inc. All rights reserved. Printed in U.S.A. First printing. March 2012. Trademarks Real-Time Innovations, RTI, and Connext are trademarks

More information

High Availability through Warm-Standby Support in Sybase Replication Server A Whitepaper from Sybase, Inc.

High Availability through Warm-Standby Support in Sybase Replication Server A Whitepaper from Sybase, Inc. High Availability through Warm-Standby Support in Sybase Replication Server A Whitepaper from Sybase, Inc. Table of Contents Section I: The Need for Warm Standby...2 The Business Problem...2 Section II:

More information

1Z Upgrade Oracle9i/10g to Oracle Database 11g OCP Exam Summary Syllabus Questions

1Z Upgrade Oracle9i/10g to Oracle Database 11g OCP Exam Summary Syllabus Questions 1Z0-034 Upgrade Oracle9i/10g to Oracle Database 11g OCP Exam Summary Syllabus Questions Table of Contents Introduction to 1Z0-034 Exam on Upgrade Oracle9i/10g to Oracle Database 11g OCP... 2 Oracle 1Z0-034

More information

New Features in Splashtop Center v An Addendum to the Splashtop Center Administrator s Guide v1.7

New Features in Splashtop Center v An Addendum to the Splashtop Center Administrator s Guide v1.7 New Features in Splashtop Center v2.3.10 An Addendum to the Splashtop Center Administrator s Guide v1.7 Table of Contents 1. Introduction... 4 2. Overview of New Features... 5 3. Automatic Domain Users

More information

10. Replication. CSEP 545 Transaction Processing Philip A. Bernstein. Copyright 2003 Philip A. Bernstein. Outline

10. Replication. CSEP 545 Transaction Processing Philip A. Bernstein. Copyright 2003 Philip A. Bernstein. Outline 10. Replication CSEP 545 Transaction Processing Philip A. Bernstein Copyright 2003 Philip A. Bernstein 1 Outline 1. Introduction 2. Primary-Copy Replication 3. Multi-Master Replication 4. Other Approaches

More information

Release Notes1.1 Skelta BPM.NET 2009 March 2010 Release <Version > Date: 20 th May, 2010

Release Notes1.1 Skelta BPM.NET 2009 March 2010 Release <Version > Date: 20 th May, 2010 Skelta BPM.NET 2009 March 2010 Release Date: 20 th May, 2010 Document History Date Version No. Description of creation/change 30 th March, 2010 1.0 Release Notes for March Update

More information

TECHNICAL OVERVIEW OF NEW AND IMPROVED FEATURES OF EMC ISILON ONEFS 7.1.1

TECHNICAL OVERVIEW OF NEW AND IMPROVED FEATURES OF EMC ISILON ONEFS 7.1.1 TECHNICAL OVERVIEW OF NEW AND IMPROVED FEATURES OF EMC ISILON ONEFS 7.1.1 ABSTRACT This introductory white paper provides a technical overview of the new and improved enterprise grade features introduced

More information

Question: 1 What are some of the data-related challenges that create difficulties in making business decisions? Choose three.

Question: 1 What are some of the data-related challenges that create difficulties in making business decisions? Choose three. Question: 1 What are some of the data-related challenges that create difficulties in making business decisions? Choose three. A. Too much irrelevant data for the job role B. A static reporting tool C.

More information

Chapter 12: Indexing and Hashing

Chapter 12: Indexing and Hashing Chapter 12: Indexing and Hashing Basic Concepts Ordered Indices B+-Tree Index Files B-Tree Index Files Static Hashing Dynamic Hashing Comparison of Ordered Indexing and Hashing Index Definition in SQL

More information

Connector for OpenText Content Server Setup and Reference Guide

Connector for OpenText Content Server Setup and Reference Guide Connector for OpenText Content Server Setup and Reference Guide Published: 2018-Oct-09 Contents 1 Content Server Connector Introduction 4 1.1 Products 4 1.2 Supported features 4 2 Content Server Setup

More information

Veritas NetBackup for Lotus Notes Administrator's Guide

Veritas NetBackup for Lotus Notes Administrator's Guide Veritas NetBackup for Lotus Notes Administrator's Guide for UNIX, Windows, and Linux Release 8.0 Veritas NetBackup for Lotus Notes Administrator's Guide Document version: 8.0 Legal Notice Copyright 2016

More information

McAfee Performance Optimizer 2.1.0

McAfee Performance Optimizer 2.1.0 Product Guide McAfee Performance Optimizer 2.1.0 For use with McAfee epolicy Orchestrator COPYRIGHT 2016 Intel Corporation TRADEMARK ATTRIBUTIONS Intel and the Intel logo are registered trademarks of the

More information

One Identity Active Roles 7.2. Replication: Best Practices and Troubleshooting Guide

One Identity Active Roles 7.2. Replication: Best Practices and Troubleshooting Guide One Identity Active Roles 7.2 Replication: Best Practices and Troubleshooting Copyright 2017 One Identity LLC. ALL RIGHTS RESERVED. This guide contains proprietary information protected by copyright. The

More information

Altiris CMDB Solution from Symantec Help. Version 7.0

Altiris CMDB Solution from Symantec Help. Version 7.0 Altiris CMDB Solution from Symantec Help Version 7.0 CMDB Solution Help topics This document includes the following topics: About CMDB Solution CMDB Global Settings page Default values page Default values

More information

IT Best Practices Audit TCS offers a wide range of IT Best Practices Audit content covering 15 subjects and over 2200 topics, including:

IT Best Practices Audit TCS offers a wide range of IT Best Practices Audit content covering 15 subjects and over 2200 topics, including: IT Best Practices Audit TCS offers a wide range of IT Best Practices Audit content covering 15 subjects and over 2200 topics, including: 1. IT Cost Containment 84 topics 2. Cloud Computing Readiness 225

More information

Optimize Your Databases Using Foglight for Oracle s Performance Investigator

Optimize Your Databases Using Foglight for Oracle s Performance Investigator Optimize Your Databases Using Foglight for Oracle s Performance Investigator Solve performance issues faster with deep SQL workload visibility and lock analytics Abstract Get all the information you need

More information

VMware Mirage Web Management Guide. VMware Mirage 5.9.1

VMware Mirage Web Management Guide. VMware Mirage 5.9.1 VMware Mirage Web Management Guide VMware Mirage 5.9.1 VMware Mirage Web Management Guide You can find the most up-to-date technical documentation on the VMware Web site at: https://docs.vmware.com/ The

More information

Alfresco 2.1. Backup and High Availability Guide

Alfresco 2.1. Backup and High Availability Guide Copyright (c) 2007 by Alfresco and others. Information in this document is subject to change without notice. No part of this document may be reproduced or transmitted in any form or by any means, electronic

More information

Database Administration

Database Administration Unified CCE, page 1 Historical Data, page 2 Tool, page 3 Database Sizing Estimator Tool, page 11 Administration & Data Server with Historical Data Server Setup, page 14 Database Size Monitoring, page 15

More information

Chapter 12: Indexing and Hashing. Basic Concepts

Chapter 12: Indexing and Hashing. Basic Concepts Chapter 12: Indexing and Hashing! Basic Concepts! Ordered Indices! B+-Tree Index Files! B-Tree Index Files! Static Hashing! Dynamic Hashing! Comparison of Ordered Indexing and Hashing! Index Definition

More information

PASS4TEST. IT Certification Guaranteed, The Easy Way! We offer free update service for one year

PASS4TEST. IT Certification Guaranteed, The Easy Way!  We offer free update service for one year PASS4TEST IT Certification Guaranteed, The Easy Way! \ We offer free update service for one year Exam : TB0-124 Title : TIBCO MDM 8 Exam Vendors : Tibco Version : DEMO Get Latest & Valid TB0-124 Exam's

More information

Application Development Best Practice for Q Replication Performance

Application Development Best Practice for Q Replication Performance Ya Liu, liuya@cn.ibm.com InfoSphere Data Replication Technical Enablement, CDL, IBM Application Development Best Practice for Q Replication Performance Information Management Agenda Q Replication product

More information

EMC GREENPLUM MANAGEMENT ENABLED BY AGINITY WORKBENCH

EMC GREENPLUM MANAGEMENT ENABLED BY AGINITY WORKBENCH White Paper EMC GREENPLUM MANAGEMENT ENABLED BY AGINITY WORKBENCH A Detailed Review EMC SOLUTIONS GROUP Abstract This white paper discusses the features, benefits, and use of Aginity Workbench for EMC

More information

Tanium Asset User Guide. Version 1.1.0

Tanium Asset User Guide. Version 1.1.0 Tanium Asset User Guide Version 1.1.0 March 07, 2018 The information in this document is subject to change without notice. Further, the information provided in this document is provided as is and is believed

More information

Don t just manage your documents. Mobilize them!

Don t just manage your documents. Mobilize them! Don t just manage your documents Mobilize them! Don t just manage your documents Mobilize them! A simple, secure way to transform how you control your documents across the Internet and in your office.

More information

TIBCO Statistica Release Notes

TIBCO Statistica Release Notes TIBCO Statistica Release Notes Software Release 13.3.1 November 2017 Two-Second Advantage Important Information SOME TIBCO SOFTWARE EMBEDS OR BUNDLES OTHER TIBCO SOFTWARE. USE OF SUCH EMBEDDED OR BUNDLED

More information

Lesson 2: Using the Performance Console

Lesson 2: Using the Performance Console Lesson 2 Lesson 2: Using the Performance Console Using the Performance Console 19-13 Windows XP Professional provides two tools for monitoring resource usage: the System Monitor snap-in and the Performance

More information

Implementing Data Masking and Data Subset with IMS Unload File Sources

Implementing Data Masking and Data Subset with IMS Unload File Sources Implementing Data Masking and Data Subset with IMS Unload File Sources 2013 Informatica Corporation. No part of this document may be reproduced or transmitted in any form, by any means (electronic, photocopying,

More information

Datenbanksysteme II: Caching and File Structures. Ulf Leser

Datenbanksysteme II: Caching and File Structures. Ulf Leser Datenbanksysteme II: Caching and File Structures Ulf Leser Content of this Lecture Caching Overview Accessing data Cache replacement strategies Prefetching File structure Index Files Ulf Leser: Implementation

More information

Synchronization Agent Configuration Guide

Synchronization Agent Configuration Guide SafeNet Authentication Service Synchronization Agent Configuration Guide 1 Document Information Document Part Number 007-012848-001, Rev. E Release Date July 2015 Applicability This version of the SAS

More information

Administrator's Guide

Administrator's Guide Administrator's Guide EPMWARE Version 1.0 EPMWARE, Inc. Published: July, 2015 Information in this document, including URL and other Internet Web site references, is subject to change without notice. Unless

More information

Segregating Data Within Databases for Performance Prepared by Bill Hulsizer

Segregating Data Within Databases for Performance Prepared by Bill Hulsizer Segregating Data Within Databases for Performance Prepared by Bill Hulsizer When designing databases, segregating data within tables is usually important and sometimes very important. The higher the volume

More information

DBArtisan 8.6 New Features Guide. Published: January 13, 2009

DBArtisan 8.6 New Features Guide. Published: January 13, 2009 Published: January 13, 2009 Embarcadero Technologies, Inc. 100 California Street, 12th Floor San Francisco, CA 94111 U.S.A. This is a preliminary document and may be changed substantially prior to final

More information

Oracle Database 10g: Administration I. Course Outline. Oracle Database 10g: Administration I. 20 Jul 2018

Oracle Database 10g: Administration I. Course Outline. Oracle Database 10g: Administration I.  20 Jul 2018 Course Outline Oracle Database 10g: Administration I 20 Jul 2018 Contents 1. Course Objective 2. Pre-Assessment 3. Exercises, Quizzes, Flashcards & Glossary Number of Questions 4. Expert Instructor-Led

More information

IBM Security Identity Manager Version Planning Topics IBM

IBM Security Identity Manager Version Planning Topics IBM IBM Security Identity Manager Version 7.0.1 Planning Topics IBM IBM Security Identity Manager Version 7.0.1 Planning Topics IBM ii IBM Security Identity Manager Version 7.0.1: Planning Topics Table of

More information

Oracle 1Z0-640 Exam Questions & Answers

Oracle 1Z0-640 Exam Questions & Answers Oracle 1Z0-640 Exam Questions & Answers Number: 1z0-640 Passing Score: 800 Time Limit: 120 min File Version: 28.8 http://www.gratisexam.com/ Oracle 1Z0-640 Exam Questions & Answers Exam Name: Siebel7.7

More information

TANDBERG Management Suite - Redundancy Configuration and Overview

TANDBERG Management Suite - Redundancy Configuration and Overview Management Suite - Redundancy Configuration and Overview TMS Software version 11.7 TANDBERG D50396 Rev 2.1.1 This document is not to be reproduced in whole or in part without the permission in writing

More information

VMware vsphere Data Protection Evaluation Guide REVISED APRIL 2015

VMware vsphere Data Protection Evaluation Guide REVISED APRIL 2015 VMware vsphere Data Protection REVISED APRIL 2015 Table of Contents Introduction.... 3 Features and Benefits of vsphere Data Protection... 3 Requirements.... 4 Evaluation Workflow... 5 Overview.... 5 Evaluation

More information

Colligo Engage Outlook App 7.1. Offline Mode - User Guide

Colligo Engage Outlook App 7.1. Offline Mode - User Guide Colligo Engage Outlook App 7.1 Offline Mode - User Guide Contents Colligo Engage Outlook App 1 Benefits 1 Key Features 1 Platforms Supported 1 Installing and Activating Colligo Engage Outlook App 3 Checking

More information

Kintana Object*Migrator System Administration Guide. Version 5.1 Publication Number: OMSysAdmin-1203A

Kintana Object*Migrator System Administration Guide. Version 5.1 Publication Number: OMSysAdmin-1203A Kintana Object*Migrator System Administration Guide Version 5.1 Publication Number: OMSysAdmin-1203A Kintana Object*Migrator, Version 5.1 This manual, and the accompanying software and other documentation,

More information

HPE Storage Optimizer Software Version: 5.4. Best Practices Guide

HPE Storage Optimizer Software Version: 5.4. Best Practices Guide HPE Storage Optimizer Software Version: 5.4 Best Practices Guide Document Release Date: November 2016 Software Release Date: November 2016 Legal Notices Warranty The only warranties for Hewlett Packard

More information

Oracle 1Z Upgrade to Oracle Database 12c. Download Full Version :

Oracle 1Z Upgrade to Oracle Database 12c. Download Full Version : Oracle 1Z0-060 Upgrade to Oracle Database 12c Download Full Version : https://killexams.com/pass4sure/exam-detail/1z0-060 QUESTION: 141 Which statement is true about Enterprise Manager (EM) express in

More information

One Identity Manager 8.0. Administration Guide for Connecting to Azure Active Directory

One Identity Manager 8.0. Administration Guide for Connecting to Azure Active Directory One Identity Manager 8.0 Administration Guide for Connecting to Copyright 2017 One Identity LLC. ALL RIGHTS RESERVED. This guide contains proprietary information protected by copyright. The software described

More information

Windows 2000 / XP / Vista User Guide

Windows 2000 / XP / Vista User Guide Windows 2000 / XP / Vista User Guide Version 5.5.1.0 September 2008 Backup Island v5.5 Copyright Notice The use and copying of this product is subject to a license agreement. Any other use is prohibited.

More information

RELEASE NOTES FOR THE Kinetic - Edge & Fog Processing Module (EFM) RELEASE 1.2.0

RELEASE NOTES FOR THE Kinetic - Edge & Fog Processing Module (EFM) RELEASE 1.2.0 RELEASE NOTES FOR THE Kinetic - Edge & Fog Processing Module (EFM) RELEASE 1.2.0 Revised: November 30, 2017 These release notes provide a high-level product overview for the Cisco Kinetic - Edge & Fog

More information

Data Warehousing & Big Data at OpenWorld for your smartphone

Data Warehousing & Big Data at OpenWorld for your smartphone Data Warehousing & Big Data at OpenWorld for your smartphone Smartphone and tablet apps, helping you get the most from this year s OpenWorld Access to all the most important information Presenter profiles

More information

IBM FileNet Content Manager 5.2. Asynchronous Event Processing Performance Tuning

IBM FileNet Content Manager 5.2. Asynchronous Event Processing Performance Tuning IBM FileNet Content Manager 5.2 April 2013 IBM SWG Industry Solutions/ECM IBM FileNet Content Manager 5.2 Asynchronous Event Processing Performance Tuning Copyright IBM Corporation 2013 Enterprise Content

More information