(12) INTERNATIONAL APPLICATION PUBLISHED UNDER THE PATENT COOPERATION TREATY (PCT)
(19) World Intellectual Property Organization, International Bureau
(43) International Publication Date: 10 September 2010
(10) International Publication Number: WO 2010/ A1
(51) International Patent Classification: G06F 3/06
(21) International Application Number: PCT/EP2010/
(22) International Filing Date: 12 January 2010
(25) Filing Language: English
(26) Publication Language: English
(30) Priority Data: March 2009, EP
(71) Applicant (for all designated States except US): INTERNATIONAL BUSINESS MACHINES CORPORATION [US/US]; New Orchard Road, Armonk, New York (US).
(71) Applicant (for MG only): COMPAGNIE IBM FRANCE [FR/FR]; Tour Descartes, La Defense 5, 2 Avenue Gambetta, F- Courbevoie (FR).
(72) Inventor; and (75) Inventor/Applicant (for US only): SABLONIERE, Pierre [FR/FR]; IBM France, 17 Avenue De L'Europe, F- Bois Colombes (FR).
(74) Agent: LOPEZ, Frederique; Le Plan du Bois, F- La Gaude (FR).
(81) Designated States (unless otherwise indicated, for every kind of national protection available): AE, AG, AL, AM, AO, AT, AU, AZ, BA, BB, BG, BH, BR, BW, BY, BZ, CA, CH, CL, CN, CO, CR, CU, CZ, DE, DK, DM, DO, DZ, EC, EE, EG, ES, FI, GB, GD, GE, GH, GM, GT, HN, HR, HU, ID, IL, IN, IS, JP, KE, KG, KM, KN, KP, KR, KZ, LA, LC, LK, LR, LS, LT, LU, LY, MA, MD, ME, MG, MK, MN, MW, MX, MY, MZ, NA, NG, NI, NO, NZ, OM, PE, PG, PH, PL, PT, RO, RS, RU, SC, SD, SE, SG, SK, SL, SM, ST, SV, SY, TH, TJ, TM, TN, TR, TT, TZ, UA, UG, US, UZ, VC, VN, ZA, ZM, ZW.
(84) Designated States (unless otherwise indicated, for every kind of regional protection available): ARIPO (BW, GH, GM, KE, LS, MW, MZ, NA, SD, SL, SZ, TZ, UG, ZM, ZW), Eurasian (AM, AZ, BY, KG, KZ, MD, RU, TJ, TM), European (AT, BE, BG, CH, CY, CZ, DE, DK, EE, ES, FI, FR, GB, GR, HR, HU, IE, IS, IT, LT, LU, LV, MC, MK, MT, NL, NO, PL, PT, RO, SE, SI, SK, SM, TR), OAPI (BF, BJ, CF, CG, CI, CM, GA, GN, GQ, GW, ML, MR, NE, SN, TD, TG).
Declarations under Rule 4.17: of inventorship (Rule 4.17(iv))
Published: with international search report (Art. 21(3))
(54) Title: METHOD, SYSTEM AND COMPUTER PROGRAM PRODUCT FOR MANAGING THE PLACEMENT OF STORAGE DATA IN A MULTI-TIER VIRTUALIZED STORAGE INFRASTRUCTURE
[Front-page figure: SAN with Data Collector 401, Data Aggregator, SAN Model Metadata, Data Analyzer 404 and Data Migration Actions]
(57) Abstract: A storage management method for use in a SAN-based virtualized multi-tier storage infrastructure in a loosely defined and changing environment. Each physical storage medium is assigned a tier level based on its Read I/O rate access density. The method comprises a top-down method, based on data collected from the virtualization engine compared to the Read I/O capability and space capacity of each discrete virtual storage pool, to determine whether re-tiering situations exist, and a drill-in analysis algorithm, based on relative Read I/O access density, to identify which data workload should be right-tiered among the composite workload hosted in the discrete virtual storage pool.

Method, System and Computer Program Product for Managing the Placement of Storage Data in a Multi-Tier Virtualized Storage Infrastructure

FIELD OF THE INVENTION
The present invention relates to the field of data processing, and in particular to the management of storage and the optimization of data placement in a multi-tier virtualized storage infrastructure.

BACKGROUND OF THE INVENTION
Enterprises face major challenges due to the fast growth of their storage needs, the increased complexity of managing the storage, and the requirement for high availability of storage. Storage Area Network (SAN) technologies enable storage systems to be engineered separately from host computers through the pooling of storage, resulting in improved efficiency. Storage virtualization, a storage management technology which masks the physical storage complexities from the user, may also be used. Block virtualization (sometimes also called block aggregation) provides servers with a logical view of the physical storage, such as disk drives, solid-state disks, and tape drives, on which data is actually stored. The logical view may comprise a number of virtual storage areas into which the available storage space is divided (or aggregated) without regard to the physical layout of the actual storage. The servers no longer see specific physical targets, but instead see logical volumes which can be for their exclusive use. The servers send their data to the virtual storage areas as if they were their direct-attached property. Virtualization may take place at the level of volumes, of individual files, or at the level of blocks that represent specific locations within a disk drive. Block aggregation can be performed within hosts (servers) and/or in storage devices (intelligent disk arrays). In data storage, the problem of accurate data placement among a set of storage tiers is among the most difficult problems to solve.

Tiered storage is the assignment of different categories of data to different types of storage media in order to reduce total storage cost. Categories may be based on levels of protection needed, performance requirements, frequency of use, capacity and other considerations. User requirements for data placement are quite often loosely specified or based on wishes rather than on accurate capacity planning. Furthermore, even if the initial requirements were adequate, applications may undergo drastic data access changes throughout their life cycle. For instance, the roll-out of an internet application where the number of future users is difficult to predict is likely to have an actual data access behavior at a given time very different from initial deployment values and/or planned activity. Over time, this application might benefit from functional enhancements causing upward changes in data access behaviors. Later, selected functions may become unused because their functional perimeter is taken over by a newer application, leading to a downward change in data access patterns. In addition to application behavior uncertainty, data access behaviors may be far from homogeneous within a single application. For instance, a highly active database log and a static parameter table will feature very different data access patterns. All across these life cycle changes, storage administrators are faced with loosely specified and changing environments where user technical input cannot be considered accurate or trustworthy for taking the right data placement decisions. The abundance of storage technologies used in storage tiers (Fiber Channel (FC), Serial AT Attachment (SATA), Solid State Drives (SSD)), combined with their redundancy set-ups (RAID 5, RAID 10, etc.), makes application data placement decisions even more complex in storage tiers where prices per unit of storage capacity may range from 1 to 20 between SATA and SSD. Using the right tiers for application data is now a crucial need for enterprises to reduce their cost while maintaining application performance. A method for managing the allocation of data sets among a plurality of storage devices has been proposed in US 5,345,584. The method, based on data storage factors for data sets and storage devices, is well suited for single data set placement in single storage devices accessed without a local cache layer.

This architecture is today mostly obsolete because modern storage devices host datasets in striped mode across multiple storage devices with a cache layer capable of buffering high numbers of write access instructions. Furthermore, using the total access rate (i.e. the sum of Read activity and Write activity) is grossly inaccurate for characterizing modern storage devices; for instance, a 300 GB Fiber Channel drive may typically support random accesses per second, whereas a write cache layer may buffer 1000 write instructions per second, each of 8 Kbytes (a typical database block size), for 15 minutes, causing the total access rate to become inaccurate. This issue derails any model based on total read and write access activity and capability. A method of hierarchical storage of data in a storage area network (SAN) has been proposed in WO 2007/ by the Assignee, where the SAN comprises a plurality of host data processors coupled to a storage virtualization engine, which is coupled to a plurality of physical storage media. Each physical medium is assigned a tier level. The method is based on the selective relocation of data blocks when their access behaviors exceed tier media threshold values. This method may lead to non-economical solutions for composite workloads including multiple applications consisting of highly demanding applications and low demanding applications. For such workloads, this method would recommend or select two types of storage resources. The first storage resource type would be a "high performance SSD-like" type and the second one a "low performance SATA-drive-like" type, whereas a solution based on Fiber Channel (FC) disks might be sufficient and more economical to support the "average" performance characteristics of the aggregated workload. In essence, using ratios of 1, 2 and 20 for prices per unit of capacity for SATA, FC and SSD storage media would lead to an FC solution being five times cheaper than a combined SSD and SATA solution. The present invention aims to address the aforementioned problems.

SUMMARY OF THE INVENTION
The invention provides a method for managing the placement of data on a virtualized multi-tier storage infrastructure in a loosely defined and changing environment. Each physical storage medium is assigned a tier level based on its Read I/O rate access density. The method comprises a top-down method, based on data collected from the virtualization engine compared to the Read I/O capability and space capacity of each discrete virtual storage pool, to determine whether re-tiering situations exist, and a drill-in analysis algorithm, based on relative Read I/O access density, to identify which data workload should be right-tiered among the composite workload hosted in the discrete virtual storage pool. The method operates at discrete virtual storage pool and virtual storage disk levels and takes advantage of opportunistic complementary workload profiles present in most aggregated composite workloads. This method significantly reduces the amount of re-tiering activity which would be generated by a micro-analysis at block or virtual storage disk level and may provide more economical recommendations. The method, based on a top-down approach, analyzes the behavior of storage resources, detects situations where workload re-tiering is suitable and provides re-tiering (upward or downward) recommendations. The suggested re-tiering/right-tiering actions can be analyzed by storage administrators for validation or automatically passed to the virtualization engine for virtual disk migration. The method also comprises a Write response time component which covers quality of service issues. The method uses alerts based on thresholds defined by the storage administrator. The process comprises a structured and repeatable evaluation of the virtualized storage infrastructure and a process flow leading to data workload re-tiering actions. The process also comprises a structured flow to analyze Write response time quality of service alerts, decide whether re-tiering is required and identify which data workload should be re-tiered. According to the invention, there is provided a method and system as described in the appended independent claims. Further embodiments are defined in the appended dependent claims. The foregoing and other objects, features and advantages of the present invention will now be described by way of preferred embodiments and examples, with reference to the accompanying figures.

BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 shows an example of a Storage Area Network in which the present invention may be implemented;
Figure 2 shows a simple view of block virtualization;
Figure 3 shows components of a virtualization engine in which the present invention may be implemented;
Figure 4 shows components of the Storage Tiering Analyzer for Right Tiering (START) component according to the invention;
Figure 5 illustrates the preferred data service model dimensions used in an embodiment of the right-tiering process;
Figure 6 illustrates storage data service technical and economical domains of usage;
Figures 7A, 7B, 7C and 7D show examples of actual situations of a composite data workload in a technical domain of usage for a storage pool;
Figure 8 illustrates the Read I/O rate density in a three-dimensional model used by the invention;
Figure 9 shows the Read I/O rate density of a data workload composed of two data workloads of different Read I/O rate densities and illustrates the applicable thermal analogy;
Figure 10 shows how the Read I/O rate density of a composite workload is modified when removing one of the composing data workloads;
Figure 11 shows the threshold-based alert system supporting the invention;
Figure 12 provides the process flow supporting the method described in the invention as it relates to Read I/O rate density and space utilization; and
Figure 13 provides the process flow supporting an embodiment of the method as it relates to the analysis of Write I/O response time alerts.

DESCRIPTION OF A PREFERRED EMBODIMENT
The invention proposes using a virtualization engine, which has knowledge of both the data and the location of the data, and an analyzer component to identify situations deserving data re-tiering and to recommend actual data re-tiering actions.

Referring to Figure 1, there is shown a SAN 100 with several host application servers 102 attached. These can be of many different types, typically some number of enterprise servers and some number of user workstations. Also attached to the SAN, via Redundant Array of Inexpensive Disks (RAID) controllers A, B and C, are various levels of physical storage. In the present example, there are three levels of physical storage: Tier 1, which may be, for example, enterprise-level storage such as the IBM System Storage DS8000; Tier 2, which may be mid-range storage such as the IBM System Storage DS5000 equipped with FC disks; and Tier 3, which may be lower-end storage such as the IBM System Storage DS4700 equipped with Serial Advanced Technology Attachment (SATA) drives. Typically, each MDisk corresponds to a single tier and each RAID array 101 belongs to a single tier. Each of the RAID controllers 103 may control RAID storage belonging to different tiers. In addition to different tiers being applied to different physical disk types, different tiers may also be applied to different RAID types; for example, a RAID-5 array may be placed in a higher tier than a RAID-0 array. The SAN is virtualized by means of a storage virtualization engine 104 which sits in the data path for all SAN data and presents Virtual Disks 106a to 106n to the host servers and workstations 102. These virtual disks are made up from the capacity provided across the three tiers of storage devices. The virtualization engine 104 comprises one or more nodes 110 (four shown), which provide virtualization, cache and copy services to the hosts. Typically, the nodes are deployed in pairs and make up a cluster of nodes, with each pair of nodes known as an Input/Output (I/O) group. As storage is attached to the SAN, it is added to various pools of storage, each controlled by a RAID controller 103. Each RAID controller presents a SCSI (Small Computer System Interface) disk to the virtualization engine. The presented disk may be managed by the virtualization engine, and is called a managed disk, or MDisk. These MDisks are split into extents, fixed-size blocks of usable capacity, which are numbered sequentially from the start to the end of each MDisk.

These extents can be concatenated, striped, or any desirable algorithm can be used to produce larger virtual disks (VDisks) which are presented to the hosts by the nodes. The MDisks M1, M2, ..., M9 can be grouped together in Managed Disk Groups, or MDGs 108, typically characterized by factors such as performance, RAID level, reliability, vendor, and so on. According to the preferred embodiment, all MDisks in an MDG represent storage of the same tier level, as shown in Figure 1. There may be multiple MDGs of the same tier in the virtualized storage infrastructure, each being a discrete virtual storage pool. The virtualization engine converts Logical Block Addresses (LBAs) of a virtual disk to extents of the VDisk, and maps extents of the VDisk to MDisk extents. An example of the mapping from a VDisk to MDisks is shown in Figure 2. Each of the extents of VDisk A is mapped to an extent of one of the managed disks M1, M2 or M3. The mapping table, which can be created from metadata stored by each node, shows that some of the managed disk extents are unused. These unused extents are available for use in creating new VDisks, migration, expansion and so on. Typically, virtual disks are created and distributed so that the enterprise-level servers initially use enterprise-level storage, or based on application owner requirements. This may not be fully justified by actual data access characteristics. The invention provides a method to identify better data placement scenarios with a structured right-tiering process. The invention supports a different and cheaper initial data placement for applications. For instance, the initial data placement for all applications could be realized in tier 2 storage media, and the invention would support the re-tiering of part or all of this data based on the actual situation of the overall virtualized storage infrastructure. To accomplish this, in addition to the metadata used to track the mapping of managed disk extents to virtual disks, the access rate to each extent is monitored. As the data is read and written to any given extent, the metadata is updated with an access count. An I/O flow will now be described with reference to Figure 3. As shown in Figure 3, a virtualization engine node 110 comprises the following modules: SCSI Front End 302, Storage Virtualization 310, SCSI Back End 312, Storage Manager 314 and Event Manager 316.

The SCSI Front End layer receives I/O requests from hosts; conducts LUN mapping (i.e. between LBAs and Logical Unit Numbers (LUNs) (or extents) of virtual disks A and C); and converts SCSI Read and Write commands into the node's internal format. The SCSI Back End processes requests to Managed Disks which are sent to it by the Virtualization layer above, and addresses commands to the RAID controllers. The I/O stack may also include other modules (not shown), such as Remote Copy, Flash Copy or Cache. Caches are usually present both at the virtualization engine and RAID controller levels. The node displayed in Figure 3 belongs to an I/O group to which VDisks A and B are assigned. This means that this node presents an interface to VDisks A and B for hosts. Managed Disks 1, 2 and 3 may also correspond to other virtual disks assigned to other nodes. The Event Manager 316 manages metadata 318, which comprises mapping information for each extent as well as tier level data and an access value for the extent. This metadata is also available to the Virtualization layer 310 and the Storage Manager 314. Now consider the receipt from a host of a write request 350 which includes the ID of the virtual disk to which the request refers, and the LBA to which the write should be made. On receipt of the write request, the Front End converts the specified LBA into an extent ID (LUN) of a virtual disk; let us say this is extent 3 of VDisk A (A-3). The virtualization component 310 uses the metadata, shown in the form of a mapping table in Figure 2, to map extent A-3 to extent 6 of MDisk 2 (M2-6). The write request is then passed via the SCSI Back End 312 to the relevant controller for MDisk 2, and the data is written to extent M2-6. The Virtualization layer sends a message 304 to the Event Manager indicating that a write to extent 6 of MDisk 2 has been requested. The Event Manager then updates the metadata in respect of extent M2-6 to indicate that this extent is now full. The Event Manager also updates the access value in the metadata for the extent. This may be done by storing the time at which the write occurred as the access value, or by resetting a count value in the metadata. The Event Manager returns a message 304 to the virtualization component to indicate that the metadata has been updated to reflect the write operation.
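To make this write path concrete, the following Python fragment is a minimal illustrative sketch, not the actual SVC implementation: the extent size, class layout and field names are assumptions introduced for illustration only.

    # Illustrative sketch of the extent mapping and metadata update
    # described above; EXTENT_SIZE_MB and the table layout are assumed.
    import time

    EXTENT_SIZE_MB = 16  # assumed extent size

    class Node:
        def __init__(self, vdisk_to_mdisk):
            # e.g. {("A", 3): ("M2", 6)} maps extent 3 of VDisk A
            # to extent 6 of MDisk 2
            self.map = vdisk_to_mdisk
            self.metadata = {}  # per MDisk extent: full flag + access value

        def write(self, vdisk, lba):
            # Front end: derive the VDisk extent ID from the address
            # (the LBA is treated as a byte offset for simplicity)
            extent = lba // (EXTENT_SIZE_MB * 1024 * 1024)
            # Virtualization layer: map the VDisk extent to an MDisk extent
            mdisk_extent = self.map[(vdisk, extent)]
            # (The back end would now issue the write to the RAID controller.)
            # Event manager: mark the extent full and update its access value
            self.metadata[mdisk_extent] = {"full": True,
                                           "last_access": time.time()}
            return mdisk_extent

    node = Node({("A", 3): ("M2", 6)})
    print(node.write("A", 3 * EXTENT_SIZE_MB * 1024 * 1024))  # -> ('M2', 6)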

The Storage Tiering Analyzer for Right Tiering (START) manager component, which allows right-tiering actions, is now described with reference to Figure 4. START performs the analysis of the SAN activity to identify situations deserving right-tiering actions and prepares the appropriate VDisk migration action list. Firstly, the Data Collector 401 acts as a Storage Resource Manager by periodically collecting topology data contained in the virtualization engine and access activity per LUN and VDisk. This may comprise write and read activity counts, response times and other monitoring data, as well as back end and front end activity data and internal measurements of the virtualization engine such as queue levels. The Data Collector inserts this series of data in its local repository on a periodic basis (a preferred period is typically every 15 minutes) and stores it for a longer period of time (typically 6 months). The Data Aggregator 402 processes SAN data covering a longer period of time (say one day, i.e. 96 samples of 15 minutes each) by accessing the Data Collector repository (with mechanisms such as batch reports) and produces aggregated values comprising minimum, maximum, average, shape factors, and so on, for the VDisks and MDGs managed by the virtualization engine of the SAN. The data produced by the Data Aggregator can be compared to the SAN Model Metadata 403, which contains the I/O processing capability of each of the MDGs. This I/O processing capability may be based on disk array vendor specifications, disk array modeling activity figures (such as produced by the Disk Magic application software), or generally accepted industry technology capability figures for the disks controlled by the RAID controller, their number, their redundancy set-up and the cache hit ratio values at RAID controller level. Other I/O processing capability modeling algorithms may also be used. The data produced by the Data Aggregator can also be compared to the total space capacity of each MDG, which can be stored in the SAN Model Metadata or collected from the virtualization engine. The Data Analyzer component 404 performs these comparisons and raises right-tiering alerts based on thresholds set by the storage administrator. These alerts cover MDGs whose utilizations are not balanced and for which VDisk migration actions should be considered.
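As an illustration of the Data Aggregator and Data Analyzer roles just described, the following Python sketch aggregates periodic samples and raises the two capacity/capability alerts. The field names are assumptions, and the 90% and 75% thresholds are borrowed from the examples given later in the description.

    # Simplified sketch of aggregation plus threshold-based alerting.
    def aggregate(samples):
        # samples: per-interval Read I/O rates (e.g. 96 x 15-minute values)
        return {"min": min(samples), "max": max(samples),
                "avg": sum(samples) / len(samples)}

    def raise_alerts(mdg, capacity_thr=0.90, capability_thr=0.75):
        # mdg carries aggregated activity plus SAN Model Metadata figures
        alerts = []
        if mdg["allocated_mb"] > capacity_thr * mdg["total_mb"]:
            alerts.append("storage capacity almost fully allocated")
        if mdg["agg"]["max"] > capability_thr * mdg["read_io_capability"]:
            alerts.append("Read I/O capability almost fully used")
        return alerts

    mdg = {"allocated_mb": 950_000, "total_mb": 1_000_000,
           "read_io_capability": 4000,
           "agg": aggregate([1200, 1500, 1800])}
    print(raise_alerts(mdg))  # -> ['storage capacity almost fully allocated']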

For any MDG in alert, the Data Analyzer provides a drill-in view of all VDisks hosted by the MDG, sorted by Read Access Rate Density. This view allows an immediate identification of 'hot' VDisks and 'cold' ones. Depending on the type of alert, this drill-in view easily points to the VDisks whose migration to another tier will resolve the MDG alert. By right-tiering these VDisks, the source MDG will see the Read Access rate density value of the composite workload hosted by the MDG becoming closer to the MDG intrinsic capability, making this MDG usage better balanced in regard to its utilization domain. For all MDGs, the Data Analyzer computes the Net Read I/O access density as the ratio of the MDG remaining Read I/O processing capability divided by the MDG remaining space capacity. A workload whose Read I/O access density is equal to the Net Read I/O access density would be considered a complementary workload for this MDG in its current state. The VDisk migration action list, composed of 'hot' or 'cold' VDisks depending on the type of alert, is prepared by the Data Analyzer component and may be passed to the virtualization engine for implementation in the SAN, either automatically or after validation by the storage administrator, as shown by 405. The MDG target to which a particular VDisk should be re-tiered may be determined using the following algorithm, sketched below. First, MDGs whose remaining space capacity or Read I/O processing capability is not sufficient to fit the VDisk footprint (the VDisk footprint being equal to the space and Read I/O requirements of this VDisk) are eliminated as possible targets. Then, the MDG whose Net Read I/O access density is of the closest value to the VDisk Read I/O access density is chosen (i.e. the VDisk workload profile is a workload complementary to the MDG in its current state). This operation is repeated for VDisks in an MDG in alert until the cumulated relative weight of the re-tiered VDisks resolves the alert. This operation is also repeated for all MDGs in alert. Other algorithms may be considered to assist in the alert resolution process.
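The following Python sketch illustrates this target-selection algorithm under assumed data structures: candidates lacking headroom for the VDisk footprint are screened out, then the MDG whose Net Read I/O access density is closest to the VDisk's own density is selected as the complementary host.

    # Sketch of the target-selection algorithm; field names are hypothetical.
    def choose_target_mdg(vdisk, mdgs):
        # Eliminate MDGs that cannot fit the VDisk footprint
        fits = [m for m in mdgs
                if m["free_mb"] >= vdisk["space_mb"]
                and m["free_read_iops"] >= vdisk["read_iops"]]
        if not fits:
            return None
        vd_density = vdisk["read_iops"] / vdisk["space_mb"]
        # Net Read I/O access density = remaining capability / remaining space
        return min(fits, key=lambda m:
                   abs(m["free_read_iops"] / m["free_mb"] - vd_density))

    vdisk = {"space_mb": 10_000, "read_iops": 500}  # density 0.05 IO/sec/MB
    mdgs = [{"name": "tier1", "free_mb": 50_000, "free_read_iops": 10_000},
            {"name": "tier2", "free_mb": 80_000, "free_read_iops": 3_200}]
    print(choose_target_mdg(vdisk, mdgs)["name"])  # -> tier2 (density 0.04)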

Figure 5 illustrates a three-dimensional model used in a particular embodiment of the invention. In an embodiment based on the IBM TotalStorage SAN Volume Controller (SVC), back end storage services are provided by 'Managed Disk Groups' (MDGs) federating a series of Managed Disks (LUNs) hosted on storage arrays and accessed in 'striped mode' by the SVC layer. Front end storage services, as seen by data processing hosts, are provided by VDisks. A composite workload of multiple VDisks, for instance all VDisks hosted in a given MDG, may also be described along this three-dimensional model. Figure 6 illustrates two major domains of utilization of a storage service such as a RAID array, an MDG, a LUN or a VDisk. The first domain is the functional domain of the storage service. It lies within the boundaries of the total space (in Mbytes) of the storage pool, its maximum Read I/O rate processing capability and its maximum acceptable response time as defined by the storage administrator. The second domain is the economical domain of utilization of the storage service. This is a reduced volume inside the previous domain, located close to the boundaries of the maximum Read I/O capability and total storage space, within the acceptable response time limit. Figures 7A-7D provide illustrated examples of workload situations within the two domains of utilization. In Figure 7A, data occupies all the storage capacity, the I/O processing capability is well utilized and the Write I/O response time value is not a problem. There is a good match between data placement and the storage pool. In Figure 7B, the I/O processing capability is almost all utilized, the storage capacity is only very partially allocated and the Write I/O response time value is not a problem. Further capacity allocation is likely to cause I/O constraints. Moving selected data to a storage pool of higher I/O capability would be suitable. In Figure 7C, data occupies almost all the storage capacity, the I/O processing capability is under-utilized and the Write I/O response time value is not a problem. There is an opportunity to utilize a storage pool of lower I/O processing capability which is likely to be more economical. In Figure 7D, the storage capacity is almost completely allocated and the I/O processing capability is well leveraged; however, the Write I/O response time value is too high.

There is a need to assess whether the high response time value constitutes a risk to the workload SLA (typically a batch elapsed time) before deciding any action. Figure 8 introduces the Read I/O rate access density factor, which can be evaluated for a storage device (in terms of capability) or for a data workload such as an application or part of an application (hosted in one VDisk or multiple ones). The following formulas provide additional details.
For MDGs: Maximum Access Density = I/O processing capability / total storage capacity
For applications: Maximum Access Density = actual maximum I/O rate / allocated storage space
For VDisks: Maximum Access Density = actual maximum I/O rate / allocated storage space
The Read I/O rate access density is measured in IO/sec per Megabyte, and its algebra can easily be understood using a thermal analogy where high access density applications are 'hot' storage workloads and low access density applications are 'cold' storage workloads. As illustrated in Figures 9 and 10, the weighted thermal formula applicable to mild water (hot + cold) applies to 'hot' and 'cold' data workloads. An MDG operates within its economical zone if the aggregated workload of all VDisks hosted in the MDG is 'close' to the MDG theoretical access density and if the MDG capacity is almost all utilized. The invention proposes a process aiming at optimizing MDG usage by exchanging workload(s) with other MDGs of different access density.
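A worked sketch of this access density algebra follows, with illustrative units and values: merging or removing workloads obeys the weighted 'mild water' rule, i.e. total Read I/O divided by total space.

    # Sketch of the Read I/O rate access density algebra of Figures 8-10;
    # the numbers are illustrative only.
    def density(read_iops, space_mb):
        return read_iops / space_mb  # IO/sec per MB

    def combined_density(workloads):
        # workloads: list of (read_iops, space_mb) tuples
        total_io = sum(io for io, _ in workloads)
        total_mb = sum(mb for _, mb in workloads)
        return total_io / total_mb

    hot = (900, 1_000)   # 0.9 IO/sec/MB: a 'hot' workload
    cold = (100, 9_000)  # ~0.011 IO/sec/MB: a 'cold' workload
    print(combined_density([hot, cold]))  # -> 0.1: the mix is 'mild'
    # Removing the cold workload leaves only the hot one, so density rises:
    print(combined_density([hot]))        # -> 0.9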

The preferred embodiment of this invention uses the Read I/O rate density to classify MDG capacity among the various tiers. An MDG hosted on a tier 1 RAID controller has the highest Read I/O rate density among all MDGs, whereas an MDG of the lowest Read I/O rate access density will belong to a tier of lower ranking (typically tier 3-5, depending on the tier grouping in the virtualized infrastructure). The preferred embodiment of the invention is implemented by the Data Analyzer component when raising alerts based on thresholds defined by the storage administrator. There are three different alerts, listed hereafter:
1. Storage capacity almost all allocated: in this situation, the Managed Disk Group capacity allocated to VDisks is close (in %) to the MDG storage capacity.
2. I/O capacity almost fully used: in this situation, the maximum Read I/O rate on the back end disks (Managed Disk Group) is close (in %) to the maximum theoretical value.
3. 'High' response time values: in this situation, the number of write instructions retained in the SVC cache is 'important' (in %) when compared to the total number of write instructions. This phenomenon reveals an increase of the write response time which may be causing a breach of SLA target values for batch workloads.
Figure 11 shows these three alert thresholds as they refer to MDG domains of utilization. The driving principles for storage pool optimization are the following:
1. If "Allocated capacity" is close to "Maximum capacity" and "Read I/O activity" is significantly lower than the "Read I/O capability", the "Read I/O capability" is not fully leveraged. Then, application data of lowest access rate density must be removed from the discrete virtual storage pool (i.e. MDG) to free up space to host application data of higher access rate density. The removed application data of lowest access rate density should be dispatched to a storage pool of lower Read access rate density capability. This process is called "down-tiering".
2. If "Read I/O activity" is close to the "Read I/O capability" and "Allocated capacity" is significantly lower than "Maximum capacity", the storage pool capacity is unbalanced and adding more application data is likely to cause an undesired performance constraint. Handling this situation requires removing application data of highest access rate density from the storage pool to free up Read I/O capability. This capacity will be used later to host application data of lower access rate density. The removed application data (of highest access rate density) may need to be dispatched to a storage pool of higher "Read I/O density capability". This process is called "up-tiering".

3. "Write response time" values increase when write cache buffers fill up, and this may put the application service level agreement (SLA) at risk. In this situation, it is necessary to perform a trend analysis to project future "Write response time" values and assess whether the application SLA will be endangered. If this is the case, the related application data (VDisks) must be "up-tiered" to a storage pool of higher write I/O capability. If the SLA is not at risk, the application data placement may be kept unchanged in its current storage pool.
4. If the storage pool is in an intermediate status where the space is not fully allocated or its Read I/O activity is not close to the "Read I/O capability", there is no need to consider any action. Even if a hot workload is present in the MDG, its behavior may be balanced by a cold workload, resulting in an average workload within the MDG capability. This opportunistic situation significantly reduces the hypothetical amount of right-tiering actions which might be unduly recommended by a micro-analysis approach.
5. If "Read I/O activity" is close to the "Read I/O capability" and "Allocated capacity" is almost equal to the "Maximum capacity", the storage pool capacity is well balanced as long as the "Write response time" value stays within the acceptable limits and the two alerts compensate each other.
6. When determining which VDisk(s) should be right-tiered, absolute Read I/O rate VDisk actual values cannot be used 'as is' because of the cache present at the virtualization engine level. This cache allows serving Read I/O requests to front end data processors without incurring back end Read instructions. The method of the present invention uses the relative Read I/O rate activity of each VDisk, compared to the front end aggregated workload hosted in the MDG, to sort VDisks between 'hot' and 'cold' data workloads and take practical re-tiering decisions.
It will be clear to one skilled in the art that the method of the present invention may suitably be embodied in a logical apparatus comprising means to perform the steps of the method, and such logic means may comprise hardware components or firmware components. The implementation of this optimization approach may be supported by means of a microprocessor supporting a process flow as now described with reference to Figure 12. Step 1200 checks if the allocated storage capacity is greater than 90% of the total capacity of the Managed Disk Group, where the threshold value (90%) can be set by the storage administrator according to local policy.

If the result is No, then a test is performed (step 1202) to determine whether the actual Read I/O rate is greater than 75% of the Read I/O capability of the MDG, where the threshold value (75%) can be set by the storage administrator according to local policy.
- If the result is No, meaning that the pool is in an intermediate state, no further action is performed and the process goes to step 1216.
- If the result of test 1202 is Yes, meaning that the aggregated workload is already using a high percentage of the Read I/O capability without all the space being consumed, there is a high probability that adding further workload may saturate the Read I/O capability, causing workload SLAs to suffer. Therefore an up-tiering operation is recommended. Next, on step 1208, the up-tiering is performed by selecting the VDisk(s) of highest access density currently hosted in the MDG and up-tiering them to another MDG for which each VDisk is a good complementary workload. After this VDisk right-tiering operation, the source MDG will see its actual Read Access rate density value decreasing and becoming closer to its intrinsic capability, making this MDG usage better balanced in regard to its utilization domain. The process then goes to step 1216.
Going back to the test performed on step 1200, if the result is Yes, then a test similar to step 1202 is performed.
- If the result is Yes, meaning that the aggregated workload is using a high percentage of the Read I/O capability and most of the space is consumed, the MDG is operating in its economical domain, no further action is performed, and the process stops.
- If the result is No, meaning that the Read I/O capability is underutilized while most of the space is already consumed, the MDG Read I/O capability is likely to stay underutilized. The VDisks in the MDG would be more economically hosted on an MDG of a lower tier. Therefore a down-tiering operation is recommended. Next, on step 1214, the down-tiering is performed by selecting the VDisk(s) of lowest access density in the MDG and down-tiering them to another MDG for which each VDisk is a good complementary workload. After this VDisk right-tiering operation, the source MDG will see its actual Read Access rate density value increasing and becoming closer to its intrinsic capability, making this MDG usage better balanced in regard to its utilization domain. The process then goes to step 1216.
Finally, on step 1216, the available MDG storage capacity is allocated to other workloads of complementary access density profile, and the process loops back to step 1200 to analyze the following MDG. When all MDGs are analyzed, the process waits until the next evaluation period to restart at step 1200 for the first MDG of the list.
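The following Python sketch condenses the Figure 12 decision logic just described, under the 90% and 75% example thresholds; the field names are assumptions, and the real process would of course act on measured MDG data rather than literals.

    # Condensed sketch of the Figure 12 evaluation loop.
    def evaluate_mdg(mdg, cap_thr=0.90, io_thr=0.75):
        space_high = mdg["allocated_mb"] > cap_thr * mdg["total_mb"]
        io_high = mdg["read_iops"] > io_thr * mdg["read_io_capability"]
        if space_high and io_high:
            return "balanced: operating in the economical domain"
        if space_high:   # space nearly full, Read I/O underused
            return "down-tier lowest-density VDisk(s)"
        if io_high:      # Read I/O nearly saturated, space underused
            return "up-tier highest-density VDisk(s)"
        return "intermediate state: no action"

    for mdg in [{"allocated_mb": 95, "total_mb": 100,
                 "read_iops": 10, "read_io_capability": 100},
                {"allocated_mb": 20, "total_mb": 100,
                 "read_iops": 90, "read_io_capability": 100}]:
        print(evaluate_mdg(mdg))
    # -> down-tier lowest-density VDisk(s)
    # -> up-tier highest-density VDisk(s)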

The analysis/alert method can be integrated in a repeatable storage management process as a regular monitoring task. For instance, every day, a system implementation of the method could produce a storage management dashboard reporting, for each MDG, actual values versus capability and capacity, and the Write response time situation, with highlighted alerts when applicable. The dashboard would be accompanied by drill-in views providing the behaviors of the VDisks hosted by each MDG, sorted by Read I/O access rate density, and a list of right-tiering actions which might be evaluated by the storage administrator for passing to the virtualization engine.
Figure 13 shows a flow chart of the analysis/alert method taking care of the Write I/O quality of service aspects. In this figure, the Write I/O response time trigger is replaced by another Write I/O rate indicator. This indicator is based on the ratio between the Front End Write Cache Delay I/O rate and the total Write I/O rate value. Write Cache Delay I/O operations are Write I/O operations retained in the Write cache of the virtualization engine because the back end storage pool cannot accept them due to saturation. When the amount of Write Cache Delay I/O operations reaches a significant percentage of the total Write I/O activity, the front end application is likely to be slowed down and the response time increases. The usage of this indicator as a re-tiering alert is another embodiment of the present invention. On step 1300, a test is performed to check if the Front End Write Cache Delay I/O rate has reached the threshold, where the threshold value is set by the storage administrator according to local policy. If the result is No, then the process goes to step 1320. If the result is Yes, then, in the next step, the VDisks causing the alert are traced back to the application using these VDisks. Next, on step 1303, values for the application batch elapsed time [A] and the batch elapsed time SLA target [T] are collected.

This data is provided externally to the present invention, typically by application performance indicators under IT operations staff responsibility. Next, on step 1304, a new test checks whether the application SLA, typically a batch elapsed time target, is at risk, by means of comparing the A and T values against a safety threshold level. If the result is No, meaning that A is significantly lower than T, then the observed high response time values are not important for the batch duration, no further action is performed (step 1306), and the process goes to step 1320. If the result is Yes, meaning that A is close to T, then on step 1308 a trend analysis of Write I/O response time and Write I/O rate values is performed, using for instance TPC graphics reporting as an embodiment. The process continues with step 1310, where a new test is performed to check whether the total time the application waits for Write I/O operations is increasing or not (this total Write wait time is equal to the sum, over all sampling periods, of the product of the Write I/O response time and the Write I/O rate for all VDisks in alert):
- If the result is No, meaning that the total time the application waits for Write I/O operations during the batch processing does not increase over time, and therefore does not degrade the batch duration SLA, then no further action is performed (step 1312) and the process follows with step 1320.
- If the result is Yes, meaning that the total time the application waits for Write I/O operations during the batch processing is increasing and may cause the batch duration to become at risk, the process goes to step 1314, where trend analysis results are used to extrapolate, for instance with a linear model, future batch duration values. The process continues with step 1316 to check whether the SLA target (T) is at risk in the near future. If the result is No, the process goes to step 1312; otherwise, if the result is Yes, the process goes to step 1318 to up-tier some (or all) of the VDisks creating the application SLA risk to an MDG with a higher I/O capability.
Finally, on step 1320, the available MDG storage capacity is allocated to other workloads of complementary access density profile, and the process loops back to step 1300 to analyze the following MDG. When all MDGs are analyzed, the process waits until the next evaluation period to restart at step 1300 for the first MDG of the list.
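The following Python sketch illustrates this Figure 13 reasoning under assumed data structures and a deliberately simple linear trend estimate: the alert is ignored when the actual batch elapsed time [A] is comfortably below the target [T]; otherwise up-tiering is indicated when the total Write wait time (response time multiplied by rate, summed per sampling period) is trending upward.

    # Sketch of the Write quality-of-service analysis; the slope estimate
    # and safety factor are assumptions, not the patented implementation.
    def write_wait_trend(samples):
        # samples: list of (write_resp_time_s, write_io_rate) per period
        waits = [rt * rate for rt, rate in samples]
        # simple slope estimate between the first and last samples
        return (waits[-1] - waits[0]) / (len(waits) - 1)

    def sla_at_risk(actual_elapsed, target_elapsed, samples, safety=0.9):
        if actual_elapsed < safety * target_elapsed:
            return False                      # A well below T: ignore alert
        return write_wait_trend(samples) > 0  # rising wait time endangers SLA

    samples = [(0.002, 800), (0.004, 900), (0.008, 1000)]
    print(sla_at_risk(actual_elapsed=55, target_elapsed=60, samples=samples))
    # -> True: the VDisks in alert are candidates for up-tiering to an MDG
    #    with a higher write I/O capability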

The analysis/alert methods described in Figures 12 and 13 can also be used to characterize a new workload whose I/O profile is unknown. This workload may be hosted in a 'nursery' MDG for measurement of its I/O behavior over a certain period (for instance one month) to collect sufficient behavioral data. After this period, the application VDisks can be right-tiered based on the space requirement, Read I/O requirement and Read I/O density values provided by the Data Analyzer component. This 'nursery' process may replace, at low cost, the need for the sophisticated storage performance estimation work required before deciding which storage tier should be used and which MDG(s) would be best suited. Future changes in application behavior would then be handled by the regular monitoring task, ensuring alignment of application needs to the storage infrastructure without intervention from costly storage engineers.
In an alternate embodiment, the analysis/alert method of the present invention may be used to relocate application data when a back end disk array connected to the virtualized storage infrastructure requires de-commissioning. In this situation the data available at the Data Analyzer component may be used to decide which storage tier should be used for each of the logical storage units and which discrete storage pool (e.g. MDG) is best suited for each one.
In yet another embodiment, the analysis/alert method of the present invention may be used to relocate application data when a disk array not connected to the virtualized storage infrastructure requires de-commissioning. In this situation the disk array might be connected to the virtualized storage infrastructure and undergo the nursery characterization process before relocating the virtual logical storage units to other discrete virtual storage pools. In an alternative, the process might consist of using existing performance data collected on the disk array and reinstalling the application on the virtualized storage infrastructure using the data provided by the Data Analyzer component.
It will be understood by those skilled in the art that, although the present invention has been described in relation to the preceding example embodiments, the invention is not limited thereto and that there are many possible variations and modifications which fall within the scope of the invention. The scope of the present disclosure includes any novel feature or combination of features disclosed herein.

The applicant hereby gives notice that new claims may be formulated to such features or combinations of features during prosecution of this application or of any further applications derived therefrom. In particular, with reference to the appended claims, features from respective independent claims may be combined in any appropriate manner and not merely in the specific combinations enumerated in the claims. For the avoidance of doubt, the term "comprising", as used herein throughout the description and claims, is not to be construed as meaning 'consisting only of'. It will be understood by those skilled in the art that, although the present invention has been described in relation to the preceding example embodiments using SAN Volume Controller vocabulary, the invention is not limited thereto and there are many possible wordings which can describe an MDG or a VDisk. For instance, an MDG may be referred to as a storage pool, virtual storage pool or discrete virtual storage pool, and a VDisk as a Virtual Storage Logical Unit.

CLAIMS
1. A method for managing storage of data in a network comprising a plurality of host data processors coupled to a plurality of physical storage media through a storage virtualization engine, the storage virtualization engine comprising a mapping unit to map between Virtual Disk(s) (VDisk(s)) and Managed Disks (MDisks), wherein a plurality of Managed Disks of a same tier level are grouped to form discrete virtual storage pool(s) (MDG(s)), the method comprising: storing metadata describing the space capacity and quantifying the Read I/O capability of each discrete virtual storage pool; periodically collecting from the virtualization engine information on storage usage, Read I/O and Write I/O activity of Virtual Disk(s); aggregating the collected information; comparing the aggregated data to the metadata of each discrete virtual storage pool; and generating a list of re-tiering actions for Virtual Disk(s) according to the result of the comparison step, based on threshold attainment.
2. The method of claim 1 wherein the Read and Write I/O information collected may be one of access rate, response times, back end and/or front end activity data, and/or queue levels.
3. The method of claim 1 or 2 wherein the collecting step further comprises the step of storing at various time periods the information collected into a local repository.
4. The method of any one of claims 1 to 3 wherein the aggregated data comprise values of minimum, maximum, average and shape factors for VDisk(s).
5. The method of any one of claims 1 to 4 wherein the comparing step further comprises the step of checking if the allocated storage capacity is greater than a predefined capacity threshold value.

6. The method of claim 5 wherein the predefined capacity threshold value is set to 90% of the total capacity of the discrete virtual storage pool.
7. The method of claim 5 or 6 further comprising the step of checking if the actual Read I/O rate is greater than a predefined capability threshold value.
8. The method of claim 7 wherein the predefined capability threshold value is set to 75% of the Read I/O capability.
9. The method of any one of claims 1 to 8 wherein the comparing step further comprises the step of checking if the Write cache delay I/O rate is greater than a predefined percentage threshold of the actual Write I/O rate value.
10. The method of any one of claims 1 to 9 wherein the threshold values are set up by a storage administrator.
11. The method of any one of claims 1 to 10 wherein the step of generating a list of re-tiering actions further comprises the step of generating a storage pool dashboard comprising virtual storage pool capability, capacity, actual usage and alerts raised.
12. The method of any one of claims 1 to 11 wherein the step of generating a list of re-tiering actions further comprises the step of generating a drill-in view of VDisks sorted by relative Read I/O rate density.
13. A system for managing storage of data in a network comprising a plurality of host data processors coupled to a plurality of physical storage media through a storage virtualization engine, the storage virtualization engine comprising a mapping unit to map between Virtual Disk(s) (VDisk(s)) and Managed Disks (MDisks), a plurality of Managed Disks of a same tier level being grouped to form a discrete virtual storage pool (MDG), the system comprising means for implementing the steps of the method of any one of claims 1 to 12.

14. A computer program comprising instructions for carrying out the steps of the method according to any one of claims 1 to 12 when said computer program is executed on a suitable computer device.
15. A computer readable medium having encoded thereon a computer program according to claim 14.

[Pages 24 to 37: drawing sheets, Figures 1 to 13, not reproduced in this transcription.]
INTERNATIONAL SEARCH REPORT (PCT/EP2010/ )
A. CLASSIFICATION OF SUBJECT MATTER: INV. G06F3/06
B. FIELDS SEARCHED. Minimum documentation searched (classification system followed by classification symbols): G06F. Electronic data base consulted during the international search: EPO-Internal, WPI Data.
C. DOCUMENTS CONSIDERED TO BE RELEVANT:
- US 2007/ A1 (EGUCHI YOSHIAKI [JP]), April 2007; figures 2, 22-24; paragraphs [0074]-[0087] and [0263]-[0290].
- US 2008/ A1 (SUGINO SHOJI [JP] ET AL), 19 June 2008; figures 9, 12, 15, 16; paragraphs [0078]-[0081], [0086] and [0130]-[0131]; relevant to claims 1-15.
Date of the actual completion of the international search: 5 March 2010. Authorized officer: Alliot, Sylvain; European Patent Office, Rijswijk (NL). Form PCT/ISA/210 (second sheet) (April 2005)


More information

(51) Int Cl.: H04L 29/06 ( )

(51) Int Cl.: H04L 29/06 ( ) (19) TEPZZ 94Z96B_T (11) EP 2 9 96 B1 (12) EUROPEAN PATENT SPECIFICATION (4) Date of publication and mention of the grant of the patent: 26.04.17 Bulletin 17/17 (1) Int Cl.: H04L 29/06 (06.01) (21) Application

More information

EP A1 (19) (11) EP A1. (12) EUROPEAN PATENT APPLICATION published in accordance with Art. 153(4) EPC

EP A1 (19) (11) EP A1. (12) EUROPEAN PATENT APPLICATION published in accordance with Art. 153(4) EPC (19) (12) EUROPEAN PATENT APPLICATION published in accordance with Art. 153(4) EPC (11) EP 2 493 239 A1 (43) Date of publication: 29.08.2012 Bulletin 2012/35 (21) Application number: 10829523.9 (22) Date

More information

TEPZZ 8_8997A_T EP A1 (19) (11) EP A1 (12) EUROPEAN PATENT APPLICATION

TEPZZ 8_8997A_T EP A1 (19) (11) EP A1 (12) EUROPEAN PATENT APPLICATION (19) TEPZZ 8_8997A_T (11) EP 2 818 997 A1 (12) EUROPEAN PATENT APPLICATION (43) Date of publication: 31.12.2014 Bulletin 2015/01 (21) Application number: 13174439.3 (51) Int Cl.: G06F 3/0488 (2013.01)

More information

TEPZZ 74_475A_T EP A1 (19) (11) EP A1 (12) EUROPEAN PATENT APPLICATION. (51) Int Cl.: H04L 29/12 ( )

TEPZZ 74_475A_T EP A1 (19) (11) EP A1 (12) EUROPEAN PATENT APPLICATION. (51) Int Cl.: H04L 29/12 ( ) (19) TEPZZ 74_47A_T (11) EP 2 741 47 A1 (12) EUROPEAN PATENT APPLICATION (43) Date of publication: 11.06.14 Bulletin 14/24 (1) Int Cl.: H04L 29/12 (06.01) (21) Application number: 131968.6 (22) Date of

More information

(CN). PCT/CN20 14/ (81) Designated States (unless otherwise indicated, for every kind of national protection available): AE, AG, AL, AM,

(CN). PCT/CN20 14/ (81) Designated States (unless otherwise indicated, for every kind of national protection available): AE, AG, AL, AM, (12) INTERNATIONAL APPLICATION PUBLISHED UNDER THE PATENT COOPERATION TREATY (PCT) (19) World Intellectual Property Organization International Bureau (10) International Publication Number (43) International

More information

*EP A2* EP A2 (19) (11) EP A2 (12) EUROPEAN PATENT APPLICATION. (43) Date of publication: Bulletin 2005/37

*EP A2* EP A2 (19) (11) EP A2 (12) EUROPEAN PATENT APPLICATION. (43) Date of publication: Bulletin 2005/37 (19) Europäisches Patentamt European Patent Office Office européen des brevets *EP007312A2* (11) EP 1 7 312 A2 (12) EUROPEAN PATENT APPLICATION (43) Date of publication: 14.09.0 Bulletin 0/37 (1) Int Cl.

More information

(12) INTERNATIONAL APPLICATION PUBLISHED UNDER THE PATENT COOPERATION TREATY (PCT)

(12) INTERNATIONAL APPLICATION PUBLISHED UNDER THE PATENT COOPERATION TREATY (PCT) (12) INTERNATIONAL APPLICATION PUBLISHED UNDER THE PATENT COOPERATION TREATY (PCT) (19) World Intellectual Property Organization International Bureau 1111111111111111 111111 111111111111111 111 111 11111111111111111111

More information

TEPZZ 85 9Z_A_T EP A1 (19) (11) EP A1 (12) EUROPEAN PATENT APPLICATION

TEPZZ 85 9Z_A_T EP A1 (19) (11) EP A1 (12) EUROPEAN PATENT APPLICATION (19) TEPZZ 8 9Z_A_T (11) EP 2 83 901 A1 (12) EUROPEAN PATENT APPLICATION (43) Date of publication: 01.04.1 Bulletin 1/14 (21) Application number: 141861.1 (1) Int Cl.: G01P 21/00 (06.01) G01C 2/00 (06.01)

More information

30 June 2011 ( ) W / / / / A

30 June 2011 ( ) W / / / / A (12) INTERNATIONAL APPLICATION PUBLISHED UNDER THE PATENT COOPERATION TREATY (PCT) (19) World Intellectual Property Organization International Bureau (10) International Publication Number (43) International

More information

DNSSEC Workshop. Dan York, Internet Society ICANN 53 June 2015

DNSSEC Workshop. Dan York, Internet Society ICANN 53 June 2015 DNSSEC Workshop Dan York, Internet Society ICANN 53 June 2015 First, a word about our host... 2 Program Committee Steve Crocker, Shinkuro, Inc. Mark Elkins, DNS/ZACR Cath Goulding, Nominet Jean Robert

More information

10 December 2009 ( ) WO 2009/ A2

10 December 2009 ( ) WO 2009/ A2 (12) INTERNATIONAL APPLICATION PUBLISHED UNDER THE PATENT COOPERATION TREATY (PCT) (19) World Intellectual Property Organization International Bureau (43) International Publication Date (10) International

More information

WO 2013/ Al. 11 April 2013 ( ) P O P C T

WO 2013/ Al. 11 April 2013 ( ) P O P C T (12) INTERNATIONAL APPLICATION PUBLISHED UNDER THE PATENT COOPERATION TREATY (PCT) (19) World Intellectual Property Organization International Bureau (10) International Publication Number (43) International

More information

Lionbridge ondemand for Adobe Experience Manager

Lionbridge ondemand for Adobe Experience Manager Lionbridge ondemand for Adobe Experience Manager Version 1.1.0 Configuration Guide October 24, 2017 Copyright Copyright 2017 Lionbridge Technologies, Inc. All rights reserved. Published in the USA. March,

More information

TEPZZ 57 7 ZA_T EP A1 (19) (11) EP A1 (12) EUROPEAN PATENT APPLICATION. (43) Date of publication: Bulletin 2013/13

TEPZZ 57 7 ZA_T EP A1 (19) (11) EP A1 (12) EUROPEAN PATENT APPLICATION. (43) Date of publication: Bulletin 2013/13 (19) TEPZZ 57 7 ZA_T (11) EP 2 573 720 A1 (12) EUROPEAN PATENT APPLICATION (43) Date of publication: 27.03.2013 Bulletin 2013/13 (51) Int Cl.: G06Q 10/00 (2012.01) (21) Application number: 11182591.5 (22)

More information

TEPZZ 8864Z9A_T EP A1 (19) (11) EP A1 (12) EUROPEAN PATENT APPLICATION. (51) Int Cl.: B60W 30/14 ( ) B60W 50/00 (2006.

TEPZZ 8864Z9A_T EP A1 (19) (11) EP A1 (12) EUROPEAN PATENT APPLICATION. (51) Int Cl.: B60W 30/14 ( ) B60W 50/00 (2006. (19) TEPZZ 8864Z9A_T (11) EP 2 886 9 A1 (12) EUROPEAN PATENT APPLICATION (43) Date of publication: 24.06. Bulletin /26 (1) Int Cl.: B60W /14 (06.01) B60W 0/00 (06.01) (21) Application number: 106043.7

More information

TEPZZ Z5_748A_T EP A1 (19) (11) EP A1 (12) EUROPEAN PATENT APPLICATION

TEPZZ Z5_748A_T EP A1 (19) (11) EP A1 (12) EUROPEAN PATENT APPLICATION (19) TEPZZ Z_748A_T (11) EP 3 01 748 A1 (12) EUROPEAN PATENT APPLICATION (43) Date of publication: 03.08.16 Bulletin 16/31 (21) Application number: 118.1 (1) Int Cl.: H04L 12/14 (06.01) H04W 48/18 (09.01)

More information

EP A1 (19) (11) EP A1 (12) EUROPEAN PATENT APPLICATION. (51) Int Cl.: G06F 17/30 ( )

EP A1 (19) (11) EP A1 (12) EUROPEAN PATENT APPLICATION. (51) Int Cl.: G06F 17/30 ( ) (19) (12) EUROPEAN PATENT APPLICATION (11) EP 2 447 858 A1 (43) Date of publication: 02.05.2012 Bulletin 2012/18 (51) Int Cl.: G06F 17/30 (2006.01) (21) Application number: 11004965.7 (22) Date of filing:

More information

ica) Inc., 2355 Dulles Corner Boulevard, 7th Floor, before the expiration of the time limit for amending the

ica) Inc., 2355 Dulles Corner Boulevard, 7th Floor, before the expiration of the time limit for amending the (12) INTERNATIONAL APPLICATION PUBLISHED UNDER THE PATENT COOPERATION TREATY (PCT) (19) World Intellectual Property Organization International Bureau (10) International Publication Number (43) International

More information

TEPZZ 8Z9Z A_T EP A1 (19) (11) EP A1 (12) EUROPEAN PATENT APPLICATION. (51) Int Cl.: H04L 12/26 ( )

TEPZZ 8Z9Z A_T EP A1 (19) (11) EP A1 (12) EUROPEAN PATENT APPLICATION. (51) Int Cl.: H04L 12/26 ( ) (19) TEPZZ 8Z9Z A_T (11) EP 2 809 033 A1 (12) EUROPEAN PATENT APPLICATION (43) Date of publication: 03.12.14 Bulletin 14/49 (1) Int Cl.: H04L 12/26 (06.01) (21) Application number: 1417000.4 (22) Date

More information

DZ, EC, EE, EG, ES, FI, GB, GD, GE, GH, GM, GT, HN, HR, HU, ID, IL, IN, IS, KE, KG, KM, KN, KP, KR, TZ, UA, UG, US, UZ, VC, VN, ZA, ZM, ZW.

DZ, EC, EE, EG, ES, FI, GB, GD, GE, GH, GM, GT, HN, HR, HU, ID, IL, IN, IS, KE, KG, KM, KN, KP, KR, TZ, UA, UG, US, UZ, VC, VN, ZA, ZM, ZW. (12) INTERNATIONAL APPLICATION PUBLISHED UNDER THE PATENT COOPERATION TREATY (PCT) (19) World Intellectual Property Organization International Bureau (43) International Publication Date (10) International

More information

MAWA Forum State of Play. Cooperation Planning & Support Henk Corporaal MAWA Forum Chair

MAWA Forum State of Play. Cooperation Planning & Support Henk Corporaal MAWA Forum Chair MAWA Forum State of Play Cooperation Planning & Support Henk Corporaal MAWA Forum Chair Content Background MAWA Initiative Achievements and Status to date Future Outlook 2 Background MAWA Initiative The

More information

EP A1 (19) (11) EP A1 (12) EUROPEAN PATENT APPLICATION. (51) Int Cl.: H04L 12/56 ( )

EP A1 (19) (11) EP A1 (12) EUROPEAN PATENT APPLICATION. (51) Int Cl.: H04L 12/56 ( ) (19) (12) EUROPEAN PATENT APPLICATION (11) EP 1 760 963 A1 (43) Date of publication: 07.03.07 Bulletin 07/ (1) Int Cl.: H04L 12/6 (06.01) (21) Application number: 06018260.7 (22) Date of filing: 31.08.06

More information

(12) Patent Application Publication (10) Pub. No.: US 2004/ A1

(12) Patent Application Publication (10) Pub. No.: US 2004/ A1 (19) United States US 2004O260967A1 (12) Patent Application Publication (10) Pub. No.: US 2004/0260967 A1 Guha et al. (43) Pub. Date: Dec. 23, 2004 (54) METHOD AND APPARATUS FOR EFFICIENT FAULTTOLERANT

More information

TEPZZ _Z_56ZA_T EP A1 (19) (11) EP A1 (12) EUROPEAN PATENT APPLICATION. (51) Int Cl.: G06F 17/30 ( )

TEPZZ _Z_56ZA_T EP A1 (19) (11) EP A1 (12) EUROPEAN PATENT APPLICATION. (51) Int Cl.: G06F 17/30 ( ) (19) TEPZZ _Z_6ZA_T (11) EP 3 1 60 A1 (12) EUROPEAN PATENT APPLICATION (43) Date of publication: 07.12.16 Bulletin 16/49 (1) Int Cl.: G06F 17/ (06.01) (21) Application number: 16176.9 (22) Date of filing:

More information

Automated Storage Tiering on Infortrend s ESVA Storage Systems

Automated Storage Tiering on Infortrend s ESVA Storage Systems Automated Storage Tiering on Infortrend s ESVA Storage Systems White paper Abstract This white paper introduces automated storage tiering on Infortrend s ESVA storage arrays. Storage tiering can generate

More information

TEPZZ 99894ZA_T EP A1 (19) (11) EP A1 (12) EUROPEAN PATENT APPLICATION

TEPZZ 99894ZA_T EP A1 (19) (11) EP A1 (12) EUROPEAN PATENT APPLICATION (19) TEPZZ 99894ZA_T (11) EP 2 998 9 A1 (12) EUROPEAN PATENT APPLICATION (43) Date of publication: 23.03.16 Bulletin 16/12 (21) Application number: 18973.3 (1) Int Cl.: G07C 9/00 (06.01) B62H /00 (06.01)

More information

(12) Patent Application Publication (10) Pub. No.: US 2006/ A1

(12) Patent Application Publication (10) Pub. No.: US 2006/ A1 (19) United States US 20060041739A1 (12) Patent Application Publication (10) Pub. No.: US 2006/0041739 A1 Iwakura et al. (43) Pub. Date: Feb. 23, 2006 (54) MEMORY DUMP GENERATION WITH (52) U.S. Cl....

More information

IBM Tivoli Storage Productivity Center Version Storage Tier Reports. Authors: Mike Lamb Patrick Leahy Balwant Rai Jackson Shea

IBM Tivoli Storage Productivity Center Version Storage Tier Reports. Authors: Mike Lamb Patrick Leahy Balwant Rai Jackson Shea IBM Tivoli Storage Productivity Center Version 4.2.2 Authors: Mike Lamb Patrick Leahy Balwant Rai Jackson Shea Contents Introduction...3 Knowledge and skills prerequisites...3 Important concepts...3 Storage

More information

GM, KE, LR, LS, MW, MZ, NA, SD, SL, SZ, TZ, UG, ministration Building, Bantian, Longgang, Shenzhen,

GM, KE, LR, LS, MW, MZ, NA, SD, SL, SZ, TZ, UG, ministration Building, Bantian, Longgang, Shenzhen, (12) INTERNATIONAL APPLICATION PUBLISHED UNDER THE PATENT COOPERATION TREATY (PCT) (19) World Intellectual Property Organization International Bureau (10) International Publication Number (43) International

More information

TEPZZ _4748 A_T EP A1 (19) (11) EP A1 (12) EUROPEAN PATENT APPLICATION

TEPZZ _4748 A_T EP A1 (19) (11) EP A1 (12) EUROPEAN PATENT APPLICATION (19) TEPZZ _4748 A_T (11) EP 3 147 483 A1 (12) EUROPEAN PATENT APPLICATION (43) Date of publication: 29.03.17 Bulletin 17/13 (21) Application number: 161896.0 (1) Int Cl.: F02C 9/28 (06.01) F02C 9/46 (06.01)

More information

SVC VOLUME MIGRATION

SVC VOLUME MIGRATION The information, tools and documentation ( Materials ) are being provided to IBM customers to assist them with customer installations. Such Materials are provided by IBM on an as-is basis. IBM makes no

More information

TEPZZ 78779ZB_T EP B1 (19) (11) EP B1 (12) EUROPEAN PATENT SPECIFICATION

TEPZZ 78779ZB_T EP B1 (19) (11) EP B1 (12) EUROPEAN PATENT SPECIFICATION (19) TEPZZ 78779ZB_T (11) EP 2 787 790 B1 (12) EUROPEAN PATENT SPECIFICATION (4) Date of publication and mention of the grant of the patent: 26.07.17 Bulletin 17/ (21) Application number: 12878644.9 (22)

More information

(43) International Publication Date \ / 0 1 / 1 ' 9 September 2011 ( ) 2 1 VI / A 2

(43) International Publication Date \ / 0 1 / 1 ' 9 September 2011 ( ) 2 1 VI / A 2 (12) INTERNATIONAL APPLICATION PUBLISHED UNDER THE PATENT COOPERATION TREATY (PCT) (19) World Intellectual Property Organization International Bureau (10) International Publication Number (43) International

More information

IBM System Storage SAN Volume Controller IBM Easy Tier enhancements in release

IBM System Storage SAN Volume Controller IBM Easy Tier enhancements in release IBM System Storage SAN Volume Controller IBM Easy Tier enhancements in 7.5.0 release Kushal S. Patel, Shrikant V. Karve, Sarvesh S. Patel IBM Systems, ISV Enablement July 2015 Copyright IBM Corporation,

More information

(12) INTERNATIONAL APPLICATION PUBLISHED UNDER THE PATENT COOPERATION TREATY (PCT)

(12) INTERNATIONAL APPLICATION PUBLISHED UNDER THE PATENT COOPERATION TREATY (PCT) (12) INTERNATIONAL APPLICATION PUBLISHED UNDER THE PATENT COOPERATION TREATY (PCT) (19) World Intellectual Property Organization International Bureau (10) International Publication Number (43) International

More information

2016 Survey of Internet Carrier Interconnection Agreements

2016 Survey of Internet Carrier Interconnection Agreements 2016 Survey of Internet Carrier Interconnection Agreements Bill Woodcock Marco Frigino Packet Clearing House November 21, 2016 PCH Peering Survey 2011 Five years ago, PCH conducted the first-ever broad

More information

*EP A2* EP A2 (19) (11) EP A2 (12) EUROPEAN PATENT APPLICATION. (43) Date of publication: Bulletin 2000/33

*EP A2* EP A2 (19) (11) EP A2 (12) EUROPEAN PATENT APPLICATION. (43) Date of publication: Bulletin 2000/33 (19) Europäisches Patentamt European Patent Office Office européen des brevets *EP002842A2* (11) EP 1 028 42 A2 (12) EUROPEAN PATENT APPLICATION (43) Date of publication: 16.08.00 Bulletin 00/33 (1) Int

More information

WO 2016/ Al. 21 April 2016 ( ) P O P C T. Figure 2

WO 2016/ Al. 21 April 2016 ( ) P O P C T. Figure 2 (12) INTERNATIONAL APPLICATION PUBLISHED UNDER THE PATENT COOPERATION TREATY (PCT) (19) World Intellectual Property Organization International Bureau (10) International Publication Number (43) International

More information

HANDBOOK ON INDUSTRIAL PROPERTY INFORMATION AND DOCUMENTATION. Ref.: Standards ST.10/B page: STANDARD ST.10/B

HANDBOOK ON INDUSTRIAL PROPERTY INFORMATION AND DOCUMENTATION. Ref.: Standards ST.10/B page: STANDARD ST.10/B Ref.: Standards ST.10/B page: 3.10.2.1 STANDARD ST.10/B LAYOUT OF BIBLIOGRAPHIC DATA COMPONENTS Revision adopted by the SCIT Standards and Documentation Working Group at its tenth session on November 21,

More information

EP A1 (19) (11) EP A1 (12) EUROPEAN PATENT APPLICATION. (43) Date of publication: Bulletin 2012/45

EP A1 (19) (11) EP A1 (12) EUROPEAN PATENT APPLICATION. (43) Date of publication: Bulletin 2012/45 (19) (12) EUROPEAN PATENT APPLICATION (11) EP 2 521 319 A1 (43) Date of publication: 07.11.2012 Bulletin 2012/45 (51) Int Cl.: H04L 12/40 (2006.01) H04L 1/00 (2006.01) (21) Application number: 11164445.6

More information

(12) INTERNATIONAL APPLICATION PUBLISHED UNDER THE PATENT COOPERATION TREATY (PCT)

(12) INTERNATIONAL APPLICATION PUBLISHED UNDER THE PATENT COOPERATION TREATY (PCT) (12) INTERNATIONAL APPLICATION PUBLISHED UNDER THE PATENT COOPERATION TREATY (PCT) (l J w;~:s~:!~:::.:opcrty ~ llllllllllll~~~~~~~~;~~~~~~~~~~~~~~~~.~~~~~!~~~~~llllllllll (43) International Publication

More information

IBM i Virtual I/O Performance in an IBM System Storage SAN Volume Controller with IBM System Storage DS8000 Environment

IBM i Virtual I/O Performance in an IBM System Storage SAN Volume Controller with IBM System Storage DS8000 Environment IBM i Virtual I/O Performance in an IBM System Storage SAN Volume Controller with IBM System Storage DS8000 Environment This document can be found in the IBM Techdocs library, www.ibm.com/support/techdocs

More information

Global Forum 2007 Venice

Global Forum 2007 Venice Global Forum 2007 Venice Broadband Infrastructure for Innovative Applications In Established & Emerging Markets November 5, 2007 Jacquelynn Ruff VP, International Public Policy Verizon Verizon Corporate

More information

SURVEY ON APPLICATION NUMBERING SYSTEMS

SURVEY ON APPLICATION NUMBERING SYSTEMS Ref.: Examples and IPO practices page: 7..5.0 SURVEY ON APPLICATION NUMBERING SYSTEMS Editorial note by the International Bureau The following survey presents the information on various aspects of application

More information

vsan 6.6 Performance Improvements First Published On: Last Updated On:

vsan 6.6 Performance Improvements First Published On: Last Updated On: vsan 6.6 Performance Improvements First Published On: 07-24-2017 Last Updated On: 07-28-2017 1 Table of Contents 1. Overview 1.1.Executive Summary 1.2.Introduction 2. vsan Testing Configuration and Conditions

More information

October 1, 2017 MPEG-2 Systems Attachment 1 Page 1 of 7. GE Technology Development, Inc. MY A MY MY A.

October 1, 2017 MPEG-2 Systems Attachment 1 Page 1 of 7. GE Technology Development, Inc. MY A MY MY A. October 1, 2017 MPEG-2 Systems Attachment 1 Page 1 of 7 GE Technology Development, Inc. MY 118172-A MY 128994 1 MY 141626-A Thomson Licensing MY 118734-A PH 1-1995-50216 US 7,334,248 October 1, 2017 MPEG-2

More information

Data center requirements

Data center requirements Prerequisites, page 1 Data center workflow, page 2 Determine data center requirements, page 2 Gather data for initial data center planning, page 2 Determine the data center deployment model, page 3 Determine

More information

SSD ENDURANCE. Application Note. Document #AN0032 Viking SSD Endurance Rev. A

SSD ENDURANCE. Application Note. Document #AN0032 Viking SSD Endurance Rev. A SSD ENDURANCE Application Note Document #AN0032 Viking Rev. A Table of Contents 1 INTRODUCTION 3 2 FACTORS AFFECTING ENDURANCE 3 3 SSD APPLICATION CLASS DEFINITIONS 5 4 ENTERPRISE SSD ENDURANCE WORKLOADS

More information

eifu Trauma and Extremities

eifu Trauma and Extremities Electronic Instructions for Use eifu Trauma and Extremities 1 Instructions for use of T&E products are available on the Stryker eifu website 2 Benefits Environmental aspect less paper, possible smaller

More information

*EP A1* EP A1 (19) (11) EP A1. (12) EUROPEAN PATENT APPLICATION published in accordance with Art.

*EP A1* EP A1 (19) (11) EP A1. (12) EUROPEAN PATENT APPLICATION published in accordance with Art. (19) Europäisches Patentamt European Patent Office Office européen des brevets *EP00182883A1* (11) EP 1 82 883 A1 (12) EUROPEAN PATENT APPLICATION published in accordance with Art. 18(3) EPC (43) Date

More information

I International Bureau (10) International Publication Number (43) International Publication Date

I International Bureau (10) International Publication Number (43) International Publication Date (12) INTERNATIONAL APPLICATION PUBLISHED UNDER THE PATENT COOPERATION TREATY (PCT) (19) World Intellectual Property Organization I International Bureau (10) International Publication Number (43) International

More information

Copyright 2012 EMC Corporation. All rights reserved.

Copyright 2012 EMC Corporation. All rights reserved. 1 FLASH 1 ST THE STORAGE STRATEGY FOR THE NEXT DECADE Iztok Sitar Sr. Technology Consultant EMC Slovenia 2 Information Tipping Point Ahead The Future Will Be Nothing Like The Past 140,000 120,000 100,000

More information

EP A1 (19) (11) EP A1 (12) EUROPEAN PATENT APPLICATION

EP A1 (19) (11) EP A1 (12) EUROPEAN PATENT APPLICATION (19) (12) EUROPEAN PATENT APPLICATION (11) EP 2 096 724 A1 (43) Date of publication: 02.09.2009 Bulletin 2009/36 (21) Application number: 09153153.3 (51) Int Cl.: H01R 35/04 (2006.01) H01R 24/00 (2006.01)

More information

MONITORING STORAGE PERFORMANCE OF IBM SVC SYSTEMS WITH SENTRY SOFTWARE

MONITORING STORAGE PERFORMANCE OF IBM SVC SYSTEMS WITH SENTRY SOFTWARE MONITORING STORAGE PERFORMANCE OF IBM SVC SYSTEMS WITH SENTRY SOFTWARE WHITE PAPER JULY 2018 INTRODUCTION The large number of components in the I/O path of an enterprise storage virtualization device such

More information

Lenovo SAN Manager. Rapid Tier and Read Cache. David Vestal, WW Product Marketing. June Lenovo.com/systems

Lenovo SAN Manager. Rapid Tier and Read Cache. David Vestal, WW Product Marketing. June Lenovo.com/systems Lenovo SAN Manager Rapid Tier and Read Cache June 2017 David Vestal, WW Product Marketing Lenovo.com/systems Table of Contents Introduction... 3 Automated Sub-LUN Tiering... 4 LUN-level tiering is inflexible

More information

(12) INTERNATIONAL APPLICATION PUBLISHED UNDER THE PATENT COOPERATION TREATY (PCT)

(12) INTERNATIONAL APPLICATION PUBLISHED UNDER THE PATENT COOPERATION TREATY (PCT) (12) INTERNATIONAL APPLICATION PUBLISHED UNDER THE PATENT COOPERATION TREATY (PCT) (19) World Intellectual Property Organization International Bureau (10) International Publication Number (43) International

More information

SMF Transient Voltage Suppressor Diode Series

SMF Transient Voltage Suppressor Diode Series SMF Transient Voltage Suppressor Diode Series General Information The SMF series is designed specifically to protect sensitive electronic equipment from voltage transients induced by lightning and other

More information

Wireless devices supports in a simple environment

Wireless devices supports in a simple environment USOO8868690B2 (12) United States Patent (10) Patent No.: US 8,868,690 B2 Tsao (45) Date of Patent: *Oct. 21, 2014 (54) SYSTEMAND METHOD FOR SUPPORT (52) U.S. Cl. (71) (72) (73) (*) (21) (22) (65) (63)

More information

PowerVault MD3 SSD Cache Overview

PowerVault MD3 SSD Cache Overview PowerVault MD3 SSD Cache Overview A Dell Technical White Paper Dell Storage Engineering October 2015 A Dell Technical White Paper TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS

More information

(12) Patent Application Publication (10) Pub. No.: US 2010/ A1

(12) Patent Application Publication (10) Pub. No.: US 2010/ A1 (19) United States US 2010.019 1896A1 (12) Patent Application Publication (10) Pub. No.: US 2010/0191896 A1 Yang et al. (43) Pub. Date: Jul. 29, 2010 (54) SOLID STATE DRIVE CONTROLLER WITH FAST NVRAM BUFFER

More information

COMMISSION IMPLEMENTING REGULATION (EU)

COMMISSION IMPLEMENTING REGULATION (EU) 18.8.2012 Official Journal of the European Union L 222/5 COMMISSION IMPLEMENTING REGULATION (EU) No 751/2012 of 16 August 2012 correcting Regulation (EC) No 1235/2008 laying down detailed rules for implementation

More information

Implementing IBM Easy Tier with IBM Real-time Compression IBM Redbooks Solution Guide

Implementing IBM Easy Tier with IBM Real-time Compression IBM Redbooks Solution Guide Implementing IBM Easy Tier with IBM Real-time Compression IBM Redbooks Solution Guide Overview IBM Easy Tier is a performance function that automatically and non-disruptively migrates frequently accessed

More information

(12) INTERNATIONAL APPLICATION PUBLISHED UNDER THE PATENT COOPERATION TREATY (PCT)

(12) INTERNATIONAL APPLICATION PUBLISHED UNDER THE PATENT COOPERATION TREATY (PCT) (12) INTERNATIONAL APPLICATION PUBLISHED UNDER THE PATENT COOPERATION TREATY (PCT) (19) World Intellectual Property Organization International Bureau (10) International Publication Number (43) International

More information

CCH Trust Accounts. Version Release Notes

CCH Trust Accounts. Version Release Notes CCH Trust Accounts Version 2017.4 Release Notes Legal Notice Disclaimer Wolters Kluwer (UK) Limited has made every effort to ensure the accuracy and completeness of these Release Notes. However, Wolters

More information

SPAREPARTSCATALOG: CONNECTORS SPARE CONNECTORS KTM ART.-NR.: 3CM EN

SPAREPARTSCATALOG: CONNECTORS SPARE CONNECTORS KTM ART.-NR.: 3CM EN SPAREPARTSCATALOG: CONNECTORS ART.-NR.: 3CM3208201EN CONTENT SPARE CONNECTORS AA-AN SPARE CONNECTORS AO-BC SPARE CONNECTORS BD-BQ SPARE CONNECTORS BR-CD 3 4 5 6 SPARE CONNECTORS CE-CR SPARE CONNECTORS

More information

(12) Patent Application Publication (10) Pub. No.: US 2008/ A1

(12) Patent Application Publication (10) Pub. No.: US 2008/ A1 (19) United States US 20080114930A1 (12) Patent Application Publication (10) Pub. No.: US 2008/0114930 A1 Sanvido et al. (43) Pub. Date: (54) DISK DRIVE WITH CACHE HAVING VOLATLE AND NONVOLATILE MEMORY

More information

WO 2008/ Al PCT. (19) World Intellectual Property Organization International Bureau

WO 2008/ Al PCT. (19) World Intellectual Property Organization International Bureau (12) INTERNATIONAL APPLICATION PUBLISHED UNDER THE PATENT COOPERATION TREATY (PCT) (19) World Intellectual Property Organization International Bureau (43) International Publication Date (10) International

More information

EPO INPADOC 44 years. Dr. Günther Vacek, EPO Patent Information Fair 2016, Tokyo. November 2016

EPO INPADOC 44 years. Dr. Günther Vacek, EPO Patent Information Fair 2016, Tokyo. November 2016 EPO INPADOC 44 years Dr. Günther Vacek, EPO Patent Information Fair 2016, Tokyo November 2016 Content The INPADOC period Integration into the EPO establishment of principal directorate patent information

More information

(12) Patent Application Publication (10) Pub. No.: US 2006/ A1. Choi et al. (43) Pub. Date: Apr. 27, 2006

(12) Patent Application Publication (10) Pub. No.: US 2006/ A1. Choi et al. (43) Pub. Date: Apr. 27, 2006 US 20060090088A1 (19) United States (12) Patent Application Publication (10) Pub. No.: US 2006/0090088 A1 Choi et al. (43) Pub. Date: Apr. 27, 2006 (54) METHOD AND APPARATUS FOR Publication Classification

More information

TZ, UG, ZM, ZW), Eurasian (AM, AZ, BY, KG, KZ, RU, Street, London EC1A 7AJ (GB).

TZ, UG, ZM, ZW), Eurasian (AM, AZ, BY, KG, KZ, RU, Street, London EC1A 7AJ (GB). (12) INTERNATIONAL APPLICATION PUBLISHED UNDER THE PATENT COOPERATION TREATY (PCT) (19) World Intellectual Property Organization International Bureau (10) International Publication Number (43) International

More information

Media Kit e.g. Amsterdam Search

Media Kit e.g. Amsterdam Search e.g. Amsterdam Search At www.trivago.nl we are focused on empowering millions of travelers every month to find their ideal hotel at the lowest rate, by offering total transparency of the online hotel market.

More information

(12) United States Patent (10) Patent No.: US 8,536,920 B2 Shen

(12) United States Patent (10) Patent No.: US 8,536,920 B2 Shen l 1 L L IL L. I 1 L _ I L L L L US008536920B2 (12) United States Patent (10) Patent No.: US 8,536,920 B2 Shen (45) Date of Patent: Sep. 17, 2013 (54) CLOCK CIRCUIT WITH DELAY FUNCTIONS AND RELATED METHOD

More information

SPARE CONNECTORS KTM 2014

SPARE CONNECTORS KTM 2014 SPAREPARTSCATALOG: // ENGINE ART.-NR.: 3208201EN CONTENT CONNECTORS FOR WIRING HARNESS AA-AN CONNECTORS FOR WIRING HARNESS AO-BC CONNECTORS FOR WIRING HARNESS BD-BQ CONNECTORS FOR WIRING HARNESS BR-CD

More information

... (12) Patent Application Publication (10) Pub. No.: US 2003/ A1. (19) United States. icopying unit d:

... (12) Patent Application Publication (10) Pub. No.: US 2003/ A1. (19) United States. icopying unit d: (19) United States US 2003.01.01188A1 (12) Patent Application Publication (10) Pub. No.: US 2003/0101188A1 Teng et al. (43) Pub. Date: May 29, 2003 (54) APPARATUS AND METHOD FOR A NETWORK COPYING SYSTEM

More information

2016 Survey of Internet Carrier Interconnection Agreements

2016 Survey of Internet Carrier Interconnection Agreements 2016 Survey of Internet Carrier Interconnection Agreements Bill Woodcock Marco Frigino Packet Clearing House February 6, 2017 PCH Peering Survey 2011 Five years ago, PCH conducted the first-ever broad

More information

Figure 1: Patent Architect Toolbar

Figure 1: Patent Architect Toolbar TM Claims The Claims buttons are used to add or modify the claims of a patent application. To fully take advantage of Patent Architect's features, the claims should be written before any other section

More information