Low Latency Data Grids in Finance
Jags Ramnarayan, Chief Architect, GemStone Systems
jags.ramnarayan@gemstone.com
Copyright 2006, GemStone Systems Inc. All Rights Reserved.
Background on GemStone Systems
- Known for its object database technology since 1982
- Now specializes in memory-oriented distributed data management
- Over 200 installed customers in the Global 2000
- Grid focus driven by the need for very high performance with predictable throughput, latency, and availability
- Key markets: capital markets, large e-commerce portals, real-time fraud, federal intelligence
Use of Grid Computing in Finance
- Two primary areas in tier 1 investment banks: risk analytics and pricing
State of Affairs: Risk Analytics
- Deluge of data (market data, trade data, etc.)
- The overnight batch job doesn't cut it: firms want intra-day risk metrics, and in some cases real-time risk
- Explosion in simulation scenarios, driven by more accurate risk exposure and compliance
- Increasing number of smaller calculations
State of Affairs: Pricing (Derivatives)
- Too many products, with increasing complexity
- Too many underliers, with many relationships among them
- Hunger for latency reduction: calculating the new price with the lowest possible latency, and pushing prices out to distributed applications
Where Is the Problem?
- The grid scheduler dispatches to a compute farm that pulls from data warehouses, relational databases, and file systems
- Database/file access contention: too many concurrent connections; a large database server bottlenecks on the network; large query results cause CPU bottlenecks; even a parallel file system is throttled by disk speeds
- Too much data transfer: between tasks and jobs, and between the grid and file systems or databases
- Data consistency issues
- A CPU-bound job turns into an I/O-bound job
Data Fabric for Risk Analytics
- When data is stored, it is transparently replicated and/or partitioned; redundant storage, in memory and/or on disk, ensures continuous availability
- Keep reference data replicated on many nodes; partition trade data
- Pool memory (and disk) across the cluster; parallelize data access and computation to achieve very high aggregate throughput
- Machine nodes can be added dynamically to expand storage capacity or to handle increased client load
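The replicate-reference-data / partition-trade-data split above can be sketched in a few lines. This is a minimal illustration with invented names (Node, DataFabric), not the GemFire API: reference data is copied to every node, while trade data is hash-routed to a primary owner plus one redundant copy for availability.

```python
# Illustrative sketch (hypothetical names, not the GemFire API):
# reference data is replicated to all nodes; trade data is hash-partitioned
# across nodes with one redundant copy for continuous availability.
import hashlib

class Node:
    def __init__(self, name):
        self.name = name
        self.reference = {}   # full replica of reference data
        self.trades = {}      # only this node's partition of trade data

class DataFabric:
    def __init__(self, node_names, redundancy=1):
        self.nodes = [Node(n) for n in node_names]
        self.redundancy = redundancy

    def _owners(self, key):
        # Deterministic hash routing: primary owner plus redundant copies.
        h = int(hashlib.md5(str(key).encode()).hexdigest(), 16)
        start = h % len(self.nodes)
        return [self.nodes[(start + i) % len(self.nodes)]
                for i in range(1 + self.redundancy)]

    def put_reference(self, key, value):
        for node in self.nodes:          # replicate everywhere
            node.reference[key] = value

    def put_trade(self, key, value):
        for node in self._owners(key):   # partition, with redundancy
            node.trades[key] = value

    def get_trade(self, key):
        return self._owners(key)[0].trades[key]
```

Because routing is a pure function of the key, any client can locate a trade's owner without a central directory, which is what lets reads and writes spread across the cluster.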
Data Fabric for Risk Analytics (continued)
- TaskFlow: as results are generated, push events to compute nodes to initiate subsequent computation, avoiding bulk data transfer across tasks or jobs
- Thousands of compute nodes can maintain a local cache of the most frequently used data, optionally using local disk for overflow; move reference data into the local cache
- Synchronous read-through and write-through, or asynchronous write-behind, to other data sources and sinks
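The read-through and write-behind behaviors named above can be sketched as follows. This is an assumed, minimal design (the class and method names are invented, not GemFire's): a miss triggers a synchronous load from the data source, while writes update the cache immediately and are flushed to the source by a background thread.

```python
# Illustrative sketch (hypothetical names): a local cache with synchronous
# read-through on a miss and asynchronous write-behind that applies
# updates to the underlying data source from a background thread.
import queue
import threading

class ReadThroughWriteBehindCache:
    def __init__(self, loader, writer):
        self.loader = loader            # called synchronously on a miss
        self.writer = writer            # called asynchronously on writes
        self.cache = {}
        self.pending = queue.Queue()
        threading.Thread(target=self._drain, daemon=True).start()

    def get(self, key):
        if key not in self.cache:       # read-through: fetch from source
            self.cache[key] = self.loader(key)
        return self.cache[key]

    def put(self, key, value):
        self.cache[key] = value         # caller sees the update at once
        self.pending.put((key, value))  # source is updated later

    def _drain(self):
        while True:
            key, value = self.pending.get()
            self.writer(key, value)     # write-behind to the data sink
            self.pending.task_done()

    def flush(self):
        self.pending.join()             # block until all writes applied
```

Write-behind decouples compute latency from data-source latency, which is exactly the slide's point: the grid task never blocks on the database for a write.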
Move Business Logic to the Data
- Principle: move the task to the computational resource holding most of the relevant data before considering other nodes, where data transfer becomes necessary
- Parallel function execution service ("map-reduce" style)
- Data dependency hints: routing key, collection of keys, where clause(s)
- Serial or parallel execution
[Diagram: a submitted function, e.g. Submit(f1) -> AggregateHighValueTrades(<input data>, where trades.month = 'Sept'), passes through a FIFO queue of functions (f1, f2, ..., fn) and is routed to the data fabric node holding the September trades]
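The routing-key idea above can be sketched concretely. All names here are hypothetical (this is not GemFire's function execution API): the service hashes the routing key the same way the data was partitioned, so the function runs on the node that already owns the matching trades, and only the small aggregate result crosses the network.

```python
# Illustrative sketch (hypothetical names): a function execution service
# that uses a routing key as a data-dependency hint, so the function is
# shipped to the node owning the data rather than the data to the function.

def hash_key(key):
    return sum(ord(c) for c in str(key))   # toy deterministic hash

class Node:
    def __init__(self, name):
        self.name = name
        self.trades = []              # this node's partition of trades

    def execute(self, fn):
        return fn(self.trades)        # run the function next to the data

class FunctionService:
    def __init__(self, nodes):
        self.nodes = nodes

    def route(self, routing_key):
        # Mirror the partitioning scheme to find the owning node.
        return self.nodes[hash_key(routing_key) % len(self.nodes)]

    def submit(self, fn, routing_key):
        return self.route(routing_key).execute(fn)

# Example function in the spirit of the slide's AggregateHighValueTrades:
# sum September trades above a (hypothetical) high-value threshold.
def aggregate_high_value_trades(trades):
    return sum(t["value"] for t in trades
               if t["month"] == "Sept" and t["value"] > 1_000_000)
```

The "where clause" hint in the slide plays the same role as the routing key here: it tells the scheduler which partition the function's working set lives in, so only result bytes, not input bytes, move.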
Key Lessons
- Apps should capitalize on memory across the grid (it is abundant)
- Keep I/O cycles to a minimum by caching operational data sets in main memory; scavenge grid memory and avoid data-source access
- Achieve linear scaling for your grid apps by horizontally partitioning your data and behavior
- Read Pat Helland's "Life Beyond Distributed Transactions" (http://www-db.cs.wisc.edu/cidr/cidr2007/papers/cidr07p15.pdf)
- More info on the GemFire data fabric: http://www.gemstone.com/gemfire