Computer Physics Communications. Multi-GPU acceleration of direct pore-scale modeling of fluid flow in natural porous media
Computer Physics Communications 183 (2012). Contents lists available at SciVerse ScienceDirect.

Multi-GPU acceleration of direct pore-scale modeling of fluid flow in natural porous media

Saeed Ovaysi, Mohammad Piri
Department of Chemical and Petroleum Engineering, University of Wyoming, Laramie, WY, USA

Article history: Received 23 June 2011; received in revised form 8 March 2012; accepted 16 April 2012; available online 25 April 2012.

Keywords: GPU computing; parallel programming; Moving Particle Semi-implicit; particle-based methods; porous media; MMPS

Abstract: Modified Moving Particle Semi-implicit (MMPS) is a particle-based method used to simulate pore-scale fluid flow through disordered porous media. We present a multi-GPU implementation of MMPS for hybrid CPU-GPU clusters using NVIDIA's Compute Unified Device Architecture (CUDA). Message Passing Interface (MPI) functions are used to communicate between different nodes of the cluster and hence their respective GPUs. The accuracy and stability of the GPU implementation of MMPS are verified through careful comparison with the results obtained on conventional CPU-only clusters. We then examine the speedup and scalability of the GPU implementation for pore-scale flow simulations in samples of various sizes taken from the same natural porous system. We achieve a 134× speedup with 60 graphics cards compared to 6 CPU cores while maintaining linear scalability. An incompressible fluid flow simulation to reach steady state through a 1 mm × 1 mm × 8 mm microtomography image of Bentheimer sandstone is also performed. © 2012 Elsevier B.V. All rights reserved.

1. Introduction

Fluid flow in porous media is of great importance in many areas of science and technology, including petroleum production, hydrology, and environmental remediation. A better understanding of fluid flow in porous media, however, requires examining the physics of fluid flow at the pore (micron) level.
Currently, it is very difficult to achieve this goal through experimental means alone. Therefore, it is crucial to develop models that are capable of reproducing the true physics of fluid flow through porous media at the pore level. Recently, the Modified Moving Particle Semi-implicit (MMPS) method has been developed to directly model fluid flow in disordered porous media at the pore level [1]. MMPS, when applied to high-resolution images obtained using X-ray microtomography, can shed light on transport phenomena in natural porous media [2]. However, the high resolutions required to capture the pore-level complexities of natural porous media make the simulations computationally expensive. We have previously presented a parallel implementation of the method which scales linearly on distributed-memory clusters [1]. Nonetheless, several hours are still required to complete a physically useful simulation on more than 200 processing cores. Fortunately, with the advances made in General-Purpose computation on Graphics Processing Units (GPGPU), it is possible to reduce the computational cost significantly at a considerably lower price. In this paper, we use the Compute Unified Device Architecture (CUDA) developed by NVIDIA [3] to perform the simulations on Graphics Processing Units (GPUs). CUDA, which is an extension of C, is integrated with the Message Passing Interface (MPI) to allow simulations across a multi-GPU platform with distributed memory. The underlying architecture of the code is written in C++. In the following sections, we first briefly introduce the computational algorithm of MMPS. Next, our single-GPU algorithm is discussed. We then integrate the single-GPU code with the domain decomposition technique facilitated by MPI to finalize a multi-GPU code that runs on distributed-memory computer clusters. This is then followed by our scalability results obtained using the above-mentioned code. Finally, we present a case study in which fluid flow in a large sample is simulated using the multi-GPU code.
2. MMPS

MMPS is a Lagrangian particle-based method used to solve the incompressible Navier-Stokes equations in disordered porous media. The voxel image of the porous medium renders itself to a particle-based representation where the rock (void) space is mapped into solid (fluid) particles.

Corresponding author. E-mail addresses: sovaysi@uwyo.edu (S. Ovaysi), mpiri@uwyo.edu (M. Piri).
To solve the incompressible Navier-Stokes equations, MMPS uses a dual-level approach known as the pressure projection method to calculate the velocity of fluid particles. First, an explicit velocity is calculated using

  v_E = v + ( −(1/ρ) ∇P_s + (µ/ρ) ∇²v + g ) Δt,   (1)

where v is velocity, ρ is density, µ is viscosity and g is gravity. Subscript E stands for explicit, and D/Dt denotes the substantial derivative with respect to time, t. The static pressure, P_s, is calculated using

  ∇²P_s = 0.   (2)

The explicit velocity is then used to calculate the dynamic pressure, P_d, using

  ∇²P_d = (ρ/Δt) ( ∇·v_E + (1/Δt) Σ_{k=1..n} (∇·v^k) Δt^k ),   (3)

which is then used in the following equation to calculate the implicit velocity:

  v_I = −(Δt/ρ) ∇P_d.   (4)

It must be noted that the second term on the right-hand side of Eq. (3) takes into account any deviation from the incompressibility criterion in the previous time steps; see [1] for more details.

The above equations are linearized using the particle-based summations proposed by Koshizuka [4], which define the gradient of the scalar field A for particle i as

  ∇A_i = (d/N_i) Σ_j (A_j − A_i) (r_j − r_i) W_ij / r_ij²,   (5)

where d is the number of spatial dimensions, r denotes the coordinates, r_ij = |r_j − r_i| is the distance between particles i and j, N_i = Σ_j W_ij is the number density, and W is a kernel that gives higher weights to the particles close to i than to the particles at a distance. To reduce the computational and memory costs, this summation is performed only over a predefined radius of neighborhood (the kernel size) around i, which is denoted by h in Eq. (6). The number of neighboring particles residing in this neighborhood is denoted by N_i. In this work, we use the following polynomial kernel:

  W_ij = 2 − (2 r_ij/h)²,    r_ij < h/2,
  W_ij = (2 − 2 r_ij/h)²,    h/2 ≤ r_ij < h,   (6)
  W_ij = 0,                  h ≤ r_ij.

Similar principles are used to compute the divergence of the vector field v and the Laplacian of the scalar field A for particle i:

  ∇·v_i = (d/N_i) Σ_j (v_j − v_i)·(r_j − r_i) W_ij / r_ij²,   (7)

  ∇²A_i = (2d/(λ N_i)) Σ_j (A_j − A_i) W_ij,   (8)

where

  λ = ∫_V W(r) r² dV / ∫_V W(r) dV.

In summary, the MMPS algorithm includes:

1. Initialization
2. Neighbor search
3. Compute P_s using Eq. (2)
4. Compute v_E using Eq. (1)
5. Compute P_d using Eq. (3)
6. Compute v_I using Eq. (4)
7. v = v_E + v_I
8. r = r + v Δt
9. t = t + Δt
10. If t < t_final, go to step 2
11. Finish

where the neighbor search in step 2 uses a linked-list search algorithm with O(N) complexity. Furthermore, the Bi-Conjugate Gradient Stabilized (BiCGStab) solver is used to solve the systems of linear equations encountered in steps 3 and 5.
Fig. 1. Computational times required to perform the main numerical operations in one time step of MMPS. The computational times are averaged over 1000 time steps for a system composed of 315,000 particles on one Intel Xeon X5670 core.

Fig. 2. Memory hierarchies in the CUDA grid with M blocks per grid and N threads per block.

These systems of linear equations are obtained from the extended forms of Eqs. (2) and (3) using Eqs. (5)-(8). Details of the derivations are given in [1]. Here we present only the final Eqs. (9) and (10) for P_s and P_d, respectively:

  Σ_{j ∈ N̂_i − M_i} (P_s,j − P_s,i) W_ij = 0,   (9)

where M_i represents the subset of particles in the neighborhood of i that do not influence its P_s, i.e., solids and disconnected particles, and

  Σ_j (P_d,j − P_d,i) W_ij = (ρλ/(2Δt)) Σ_j (v_E,j − v_E,i)·(r_j − r_i) W_ij / r_ij²
                           + (ρλ Δt^n/(2Δt²)) Σ_j (v_j^n − v_i^n)·(r_j − r_i) W_ij / r_ij²
                           + (ρλ N_i/(2d Δt²)) Σ_{k=1..n−1} (∇·v_i^k) Δt^k,   (10)

where superscripts n and k denote the previous time step and the time steps earlier than n, respectively.

Fig. 1 presents the contribution of each major computation in one time step of MMPS. The computational times are averaged over 1000 time steps for a system comprised of 315,000 particles. Knowing that the particles move only a fraction of their size at every time step, it is not necessary to perform the neighbor search at every time step (for instance, once every 10 time steps here). This reveals the solution of the system of linear equations at step 5 as the bottleneck of the simulations, with a number of degrees of freedom equal to the total number of particles. Depending on the resolution at which the porous medium is imaged, the total number of particles for a (1 mm)³ sample is of the order of tens of millions. Also, considering the movement of particles from one time step to another and the resulting change in the velocity of the particles, step 5 presents a non-symmetric system of linear equations that requires several iterations of the BiCGStab solver to converge.
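To make the particle summations concrete, here is a minimal serial C++ sketch of the kernel in Eq. (6) and the gradient summation of Eq. (5). The names (`W`, `gradA`, `Particle`) are illustrative, and this is a plain CPU sketch for exposition, not the authors' GPU code.

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Quadratic-spline kernel of Eq. (6); h is the kernel size.
double W(double r, double h) {
    if (r < h / 2.0) return 2.0 - (2.0 * r / h) * (2.0 * r / h);
    if (r < h)       return (2.0 - 2.0 * r / h) * (2.0 - 2.0 * r / h);
    return 0.0;
}

struct Particle {
    std::array<double, 3> r;  // coordinates
    double A;                 // scalar field value
};

// Gradient of the scalar field A at particle i, Eq. (5), for d = 3 dimensions.
std::array<double, 3> gradA(const std::vector<Particle>& p, std::size_t i, double h) {
    const double d = 3.0;
    double Ni = 0.0;                       // number density N_i = sum_j W_ij
    std::array<double, 3> g{0.0, 0.0, 0.0};
    for (std::size_t j = 0; j < p.size(); ++j) {
        if (j == i) continue;
        std::array<double, 3> rij;
        double r2 = 0.0;
        for (int c = 0; c < 3; ++c) {
            rij[c] = p[j].r[c] - p[i].r[c];
            r2 += rij[c] * rij[c];
        }
        double w = W(std::sqrt(r2), h);
        if (w == 0.0) continue;            // outside the kernel support
        Ni += w;
        for (int c = 0; c < 3; ++c)
            g[c] += (p[j].A - p[i].A) * rij[c] / r2 * w;
    }
    if (Ni > 0.0)
        for (int c = 0; c < 3; ++c) g[c] *= d / Ni;
    return g;
}
```

By construction the gradient of a constant field vanishes term by term, and on a symmetric six-neighbor stencil the gradient of A = x recovers (1, 0, 0); the divergence and Laplacian of Eqs. (7) and (8) are assembled from the same W_ij loops.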
In the next section, we discuss a single-GPU approach to overcome the computational cost associated with these computations.

3. Single-GPU acceleration

A GPU consists of hundreds of stream processors which are grouped into streaming multiprocessors; see Table 1 for the specifications of two NVIDIA graphics cards. Fig. 2 illustrates the memory hierarchies that are explicitly accessible to the stream processors through the CUDA API functions. To access the GPU (device) processors, the CPU (host) must launch a kernel function. This moves the execution to the device, where (number of blocks per grid) × (number of threads per block) threads execute the same instruction in the kernel function. Launching a kernel with a greater number of threads than the actual number of processors can potentially overlap computation with high-latency accesses to the GPU's global memory. Utilizing the shared and constant memories, as well as coalescing, are other efficient ways to speed up GPU memory access; see [5] for the details. In this section, we briefly explain the techniques used to speed up the main MMPS operations outlined in Section 2.
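The one-thread-per-particle launch configuration described above reduces to a ceiling division on the host. A small sketch with an illustrative function name (not taken from the paper's code):

```cpp
#include <cassert>
#include <cstddef>

// One CUDA thread per particle: the number of blocks is the ceiling of
// np / threadsPerBlock, so every particle is covered by exactly one thread.
// Oversubscribing the physical cores this way lets the hardware scheduler
// hide global-memory latency behind computation from other threads.
std::size_t blocksForParticles(std::size_t np, std::size_t threadsPerBlock) {
    return (np + threadsPerBlock - 1) / threadsPerBlock;
}
```

For the 315,000-particle base system and 256 threads per block this gives 1231 blocks, i.e., 315,136 threads, slightly more than the particle count, which is why the kernels guard their body with `if (id < np)`.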
Table 1. Specifications of two NVIDIA graphics cards.

Graphics card                               GeForce GTX 580                Tesla C2050
Number of multiprocessors                   16                             14
Number of processors                        512                            448
Clock rate (GHz)                            1.54                           1.15
Global memory (Mbytes)                      1572 and 3071 (from ZOTAC)     2687
Constant memory (bytes)                     65536                          65536
Shared memory per multiprocessor (bytes)    49152                          49152
Registers per multiprocessor                32768                          32768
Memory bandwidth (Gbytes/s)                 192.4                          144
Max number of blocks per grid               65535                          65535
Max number of threads per block             1024                           1024

3.1. Neighbor search

Before using the linked-list algorithm, the entire domain must be divided into subdomains (cubes in three-dimensional systems) as large as the kernel size, h. As shown in Fig. 3, to find the neighbors of any particle i, only the subdomains in the immediate neighborhood of i are searched. Below is a pseudo-code for the CUDA kernel performing the neighbor search using the linked-list algorithm; np is the number of particles. Storing the coordinates of the particles in the shared memory at the beginning reduces the cost of future memory accesses to these data.

__global__ void neighbor_search(...)
{
    id = blockIdx.x * blockDim.x + threadIdx.x;
    if (id < np) {
        store the coordinates of particle[id] in the shared memory
        for (over the x coordinate of the closest subdomains)
            for (over the y coordinate of the closest subdomains)
                for (over the z coordinate of the closest subdomains)
                    for (j = all the particles in the subdomain) {
                        r = distance between particle[id] and particle[j]
                        if (r < h)
                            add the index of particle[j] to the array of neighbors of particle[id]
                    }
    }
}

3.2. System of linear equations

The most time-consuming operation in the BiCGStab solver is the matrix-vector multiplication. To perform this efficiently, we allocate each row of the sparse matrix to a block. We then use the shared memory to calculate the summation of multiplications for each row. This algorithm proves to be four times faster than the case where each row is allocated to one thread. Below, we present a pseudo-code to multiply a sparse matrix, characterized by its elements, i.e., mult, and element ids, i.e., mult_id, by a vector B.
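A serial C++ analogue of this cell-based (linked-list) search, with illustrative names and a `std::map` standing in for the linked lists, shows why scanning only the 27 surrounding subdomains of edge h suffices: any pair of particles closer than h must occupy the same or adjacent cells.

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <map>
#include <vector>

using Pos  = std::array<double, 3>;
using Cell = std::array<int, 3>;

// Integer cell index of a position for cells of edge h.
Cell cellOf(const Pos& p, double h) {
    return { (int)std::floor(p[0] / h),
             (int)std::floor(p[1] / h),
             (int)std::floor(p[2] / h) };
}

// For each particle, scan only the 3x3x3 block of cells around its own cell.
std::vector<std::vector<int>> neighbors(const std::vector<Pos>& pts, double h) {
    std::map<Cell, std::vector<int>> grid;           // cell -> particle indices
    for (int i = 0; i < (int)pts.size(); ++i)
        grid[cellOf(pts[i], h)].push_back(i);

    std::vector<std::vector<int>> nbr(pts.size());
    for (int i = 0; i < (int)pts.size(); ++i) {
        Cell c = cellOf(pts[i], h);
        for (int dx = -1; dx <= 1; ++dx)
        for (int dy = -1; dy <= 1; ++dy)
        for (int dz = -1; dz <= 1; ++dz) {
            auto it = grid.find({c[0] + dx, c[1] + dy, c[2] + dz});
            if (it == grid.end()) continue;
            for (int j : it->second) {
                if (j == i) continue;
                double r2 = 0.0;
                for (int k = 0; k < 3; ++k) {
                    double d = pts[i][k] - pts[j][k];
                    r2 += d * d;
                }
                if (r2 < h * h) nbr[i].push_back(j);  // within kernel radius
            }
        }
    }
    return nbr;
}
```

Building the grid is O(N log N) here only because of the `std::map`; with a flat cell array, as in the paper's linked-list variant, the whole search is O(N) for bounded particle density.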
The results are stored in vector C. It should be noted that, in calculating the global summation, special care must be taken to minimize thread divergence and hence improve the achieved speedup. To do that, we have employed the global summation technique presented in [5].

__global__ void multiply(...)
{
    id = blockIdx.x;
    tx = threadIdx.x;
    __shared__ float s_c[dimension of the block];
    if (id < np) {
        s_c[tx] = mult[tx] * B[mult_id[tx]];
        __syncthreads();
        for (s = blockDim.x >> 1; s > 0; s >>= 1) {
            if (tx < s)
                s_c[tx] += s_c[tx + s];
            __syncthreads();
        }
        if (!tx)
            C[id] = s_c[0];
    }
}
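The halving-tree summation used in the multiply kernel can be mirrored serially. This sketch (illustrative names, assuming the per-row buffer is zero-padded to a power of two, which the shared-memory version effectively requires) applies the same reduction to one row's partial products:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Tree reduction over a power-of-two buffer: the serial equivalent of the
// s = blockDim.x >> 1; s >>= 1 loop with s_c[tx] += s_c[tx + s].
double treeReduce(std::vector<double> buf) {
    for (std::size_t s = buf.size() >> 1; s > 0; s >>= 1)
        for (std::size_t tx = 0; tx < s; ++tx)
            buf[tx] += buf[tx + s];
    return buf[0];
}

// One sparse row times vector B: nonzero elements mult[] with column ids
// mult_id[], zero-padded up to blockSize (a power of two), then tree-reduced.
double rowTimesVector(const std::vector<double>& mult,
                      const std::vector<int>& mult_id,
                      const std::vector<double>& B,
                      std::size_t blockSize) {
    std::vector<double> buf(blockSize, 0.0);
    for (std::size_t k = 0; k < mult.size(); ++k)
        buf[k] = mult[k] * B[mult_id[k]];
    return treeReduce(buf);
}
```

On the GPU each level of this tree keeps the active threads contiguous (tx < s), which is what minimizes thread divergence compared with, say, summing every other element.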
Fig. 3. In the linked-list algorithm only the immediate neighboring subdomains, shown in grey, are searched.

Fig. 4. The domain is decomposed among four processes P1, P2, P3 and P4, along the subdomains of the linked-list algorithm (small squares). Each process communicates only its grey squares with its neighboring processes.

4. Multi-GPU acceleration

Flow simulation in large systems requires an amount of memory that is beyond what a single GPU can provide. For this reason, and also to achieve higher speedups, it is necessary to develop a multi-GPU implementation. Here, we first discuss domain decomposition. Each subdomain is then allocated to one MPI process, which in turn hands over the simulation to one GPU. Message passing and synchronization of the GPUs are done by MPI functions. An optimized domain decomposition must lead to a balanced distribution of the computational load and also minimum communication between the processes [6]. To achieve load balance, we decompose the domain into subdomains with equal volumes. Since we model the flow of incompressible fluids, subdomains with equal volumes imply handing over an equal number of particles to each MPI process. Also, to optimize the neighbor-search step for each process, the domain decomposition must be done along the subdomains of the linked-list algorithm discussed in Section 3. Fig. 4 illustrates a case where the entire domain is decomposed between 4 processes. At the initialization phase, the grey linked-list subdomains (small squares in the figure) are marked, and the neighbor search is initiated at each time step by passing the coordinates of the particles in the grey subdomains to the other processes. It should be noted that, in practice, only a fraction of the particles in the grey subdomains are the actual neighbors of the particles in the neighboring processes. Since each iteration of the BiCGStab solver
requires updating the information in the other processes twice, it is wise to first mark the actual neighboring particles of each process and then, for the rest of the communications, pass only the actual neighboring particles.

Fig. 5. The actual neighboring particles of process Pn are shown as filled particles. These constitute only 68% of the particles in the grey area.

Fig. 6. GPU-computed velocity distribution in a 0.5 mm × 0.5 mm × 1 mm Berea sandstone X-ray image with µm resolution and a total of 2,519,424 particles. The sample is under a 1 Pa pressure difference along the z direction (from left to right), and the figure illustrates the results after 0.01 s of real time.

In Fig. 5, the actual neighboring particles constitute only 68% of the particles in the grey zone. Considering that several iterations are required to solve the systems of linear equations arising from Eqs. (2) and (3), this 32% reduction in the communication cost goes a long way in further optimizing the code. More optimization is achieved by overlapping the computations with communications using non-blocking MPI calls. For instance, while performing the neighbor search, a non-blocking MPI call can pass the coordinates of the particles residing in the grey subdomains to the neighboring process at the beginning with almost no cost. The receiving process can start by calling a non-blocking receive operation and immediately begin searching the inner subdomains that do not need information from the neighboring processes, then wait for the non-blocking receive operation to complete, which allows it to proceed with searching the peripheral subdomains. Also, further overlapping is achieved by performing the communications that are required to mark the actual neighbors in another thread of the same MPI process while the master thread performs the computations.
5. Scalability and results

To evaluate the scalability of our GPU implementation of MMPS, and the accuracy and stability of the results, we studied a 0.5 mm × 0.5 mm × 1 mm sample from the Berea sandstone image in [7]. Fig. 6 visualizes the GPU-computed velocity distribution in this sample, with the same accuracy as the results obtained using only CPUs. The accuracy of the CPU-only results has previously been verified in [1,2] against analytical, numerical, and experimental data available in the literature. The results reported in this section are obtained using a computer cluster with 30 nodes. Each node is equipped with 2 AMD Opteron 6128 CPUs, 2 NVIDIA GTX 580 GPUs, and 32 GB of memory. In addition, the head node of this cluster is equipped with 2 Intel Xeon X5670 CPUs and 8 GPUs (4 NVIDIA Tesla C2050s and 4 NVIDIA GeForce GTX 580s). Depending on whether a parallel CPU-based code has been developed or not, the scalability results can be presented in two ways (see the next paragraphs). Since we have already developed a parallel CPU-based code [1], the scalability results in this work are presented in both ways. Furthermore, all the simulations, i.e., both the CPU-based and the CPU-GPU-based ones, are performed using single-precision floating-point operations.
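Since all runs use single precision, it is worth seeing how round-off accumulates in long reductions such as the dot products inside BiCGStab. The sketch below (illustrative, not from the paper) contrasts a naive float accumulation with a double reference:

```cpp
#include <cassert>
#include <cmath>

// Summing n copies of a value that is not exactly representable in binary
// (such as 0.1) shows how single-precision round-off accumulates in long
// reductions: once the running sum is large, each added term is rounded
// to the current ulp of the sum.
float floatSum(int n, float v) {
    float s = 0.0f;
    for (int i = 0; i < n; ++i) s += v;
    return s;
}

double doubleSum(int n, double v) {
    double s = 0.0;
    for (int i = 0; i < n; ++i) s += v;
    return s;
}
```

Ten million naive additions of 0.1 drift visibly in float while staying essentially exact in double, which is one reason the careful verification of the single-precision results against the CPU reference mentioned above matters; compensated (Kahan) or tree-style summation, as used in the GPU reduction, keeps the error much smaller.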
Fig. 7. Speedup of the main numerical operations of an average time step of MMPS on a system comprised of 315,000 particles, using a single Tesla C2050 GPU compared with a single Intel Xeon X5670 core. The numbers are averaged over 1000 time steps.

Fig. 8. Scalability results for different systems. System size measures the relative number of particles compared to a base system comprised of 315,000 particles. CPU times are obtained using all 6 cores of an Intel Xeon X5670 CPU, and the GPU results are based on NVIDIA GTX 580 GPUs.

In the first approach, we are interested in how much performance could be gained if we were to perform the MMPS computations on a GPU rather than using a sequential code that utilizes only one core of a CPU. Fig. 7 summarizes the speedup achieved for the major numerical operations in an average time step of MMPS on a single Tesla C2050 GPU for a system composed of 315,000 particles. Compared to Fig. 1, the three main contributors to the computational time, namely the neighbor search, the computation of P_s, and the computation of P_d, have gained, on average, more than a 26× speedup. However, this speedup proved to be dependent on the system size. The same simulation on a system with twice the size of the base system, i.e., with 630,000 particles, achieved more than a 28× speedup. Interestingly, the less expensive GTX 580 outperformed the Tesla C2050 and achieved a 34× speedup on the base system, which translates to a 36% better performance. Having developed a CPU-based parallel code, in the second approach we are interested in examining how fast the code runs on multiple GPUs when compared to the performance gained using all the cores of a CPU. To do that, we first compute the computational time on all 6 cores of the Intel Xeon X5670 CPU on the head node. Then, we obtain the computational time for the same simulation on the computing nodes. Fig.
8 presents the scalability results for different system sizes relative to the base system, which is composed of 315,000 particles. As shown in this figure, the multi-GPU code performs better when larger systems are used. This observation, which is consistent with the results presented in [8], underscores the cost of expensive memory accesses on GPUs. Since each particle is assigned to one CUDA thread, increasing the system size, and hence the number of particles, increases the number of threads that operate at a given time. This allows a better utilization of the processing power, as idle threads can start processing while other threads wait to access the GPU's global memory. The actual computational times are also reported in Table 2. These numbers reveal a 3.3× better performance for one NVIDIA GeForce GTX 580 over all 6 cores of an Intel X5670 CPU. It should be stressed that, depending on the orientation of the processes in the x, y, and z directions, domain decomposition can slightly increase or decrease the number of iterations required to obtain the same level of precision.

6. Case study

In this section we study pore-scale fluid flow in Bentheimer sandstone. The entire microtomography image, shown in Fig. 9, is a 4 mm × 4 mm × 9 mm cylinder with µm resolution. We cut a 1 mm × 1 mm × 8 mm sample from the central part of this image, which amounts to 43,740,000 particles, and apply a 25 Pa pressure difference across its length, i.e., from left to right as shown in Fig. 10.
Fig. 9. Isosurface of a 4 mm × 4 mm × 9 mm Bentheimer sandstone sample with µm resolution.

Fig. 10. Distribution of static pressure in a 1 mm × 1 mm × 8 mm sample from a Bentheimer sandstone microtomography image with µm resolution. Only the fluid particles are visualized.

Fig. 11. Distribution of velocity in a 1 mm × 1 mm × 8 mm sample from a Bentheimer sandstone microtomography image with µm resolution. Only the fluid particles faster than m/s are shown.

Table 2. Average computational times, in seconds, for one time step of flow simulations, for system sizes running on the CPU and on 2 to 60 GPUs. CPU results are obtained using all 6 cores of an Intel Xeon X5670 CPU, and the GPU results are based on NVIDIA GeForce GTX 580 GPUs; the largest systems could not be run on the CPU configuration (marked NA).

Fig. 11 shows the velocity distribution in the most active fluid channels, i.e., those faster than m/s, at steady state, i.e., t = s. For this simulation, which completed in less than 1 , we used a s time step and let the simulation run for 2000 time steps. To cross-check the accuracy of the results, exactly the same simulation was run on all 480 CPU cores of the cluster. The CPU-based simulation, which was as accurate as its hybrid counterpart, took longer to complete.

7. Conclusions

We presented an MPI-CUDA implementation of MMPS to simulate fluid flow through naturally occurring porous media on hybrid CPU-GPU clusters. This implementation has proved to be very efficient, with linear scalability, and has enabled us to perform physically meaningful simulations in a practical amount of time. Although the largest system size is still limited by the total global memory of all the GPU cards, this implementation has enabled us to look at significantly larger samples of natural porous media with higher resolutions. The scalability results presented in this paper lead us to conclude that hybrid CPU-GPU clusters can deliver the same computational power at
a much lower price than conventional CPU clusters. Considering the current market prices of both the NVIDIA GeForce GTX 580 graphics card and the Intel X5670 CPU, and the results presented in Section 5, the same simulation would be 9.6× less expensive on an NVIDIA GeForce GTX 580 graphics card. This holds true if one were to develop a parallel code that would utilize all 6 available CPU cores of the Intel X5670 CPU. We believe that this opens up new doors to applications demanding high computational power at lower cost.

References

[1] S. Ovaysi, M. Piri, J. Comput. Phys. 229 (2010).
[2] S. Ovaysi, M. Piri, J. Contam. Hydrol. 124 (2011).
[3] NVIDIA, NVIDIA CUDA programming guide, version 3.0.
[4] S. Koshizuka, H. Tamako, Y. Oka, Int. J. Comput. Fluid Dyn. 4 (1) (1995).
[5] D.B. Kirk, W.W. Hwu, Programming Massively Parallel Processors: A Hands-on Approach, Morgan Kaufmann Publishers.
[6] D.E. Culler, J.P. Singh, A. Gupta, Parallel Computer Architecture: A Hardware/Software Approach, Morgan Kaufmann Publishers.
[7] H. Dong, Micro-CT imaging and pore network extraction, Ph.D. dissertation, Dept. of Earth Science and Engineering, Imperial College London.
[8] D.A. Jacobsen, J.C. Thibault, I. Senocak, An MPI-CUDA implementation for massively parallel incompressible flow computations on multi-GPU clusters, in: 48th AIAA Aerospace Sciences Meeting, 2010.
More informationChapter K. Geometric Optics. Blinn College - Physics Terry Honan
Capter K Geometric Optics Blinn College - Pysics 2426 - Terry Honan K. - Properties of Ligt Te Speed of Ligt Te speed of ligt in a vacuum is approximately c > 3.0µ0 8 mês. Because of its most fundamental
More informationAVL Trees Outline and Required Reading: AVL Trees ( 11.2) CSE 2011, Winter 2017 Instructor: N. Vlajic
1 AVL Trees Outline and Required Reading: AVL Trees ( 11.2) CSE 2011, Winter 2017 Instructor: N. Vlajic AVL Trees 2 Binary Searc Trees better tan linear dictionaries; owever, te worst case performance
More informationSolutions Manual for Fundamentals of Fluid Mechanics 7th edition by Munson Rothmayer Okiishi and Huebsch
Solutions Manual for Fundamentals of Fluid Mecanics 7t edition by Munson Rotmayer Okiisi and Huebsc Link full download : ttps://digitalcontentmarket.org/download/solutions-manual-forfundamentals-of-fluid-mecanics-7t-edition-by-munson-rotmayer-okiisi-and-uebsc/
More information19.2 Surface Area of Prisms and Cylinders
Name Class Date 19 Surface Area of Prisms and Cylinders Essential Question: How can you find te surface area of a prism or cylinder? Resource Locker Explore Developing a Surface Area Formula Surface area
More information13.5 DIRECTIONAL DERIVATIVES and the GRADIENT VECTOR
13.5 Directional Derivatives and te Gradient Vector Contemporary Calculus 1 13.5 DIRECTIONAL DERIVATIVES and te GRADIENT VECTOR Directional Derivatives In Section 13.3 te partial derivatives f x and f
More informationThe Euler and trapezoidal stencils to solve d d x y x = f x, y x
restart; Te Euler and trapezoidal stencils to solve d d x y x = y x Te purpose of tis workseet is to derive te tree simplest numerical stencils to solve te first order d equation y x d x = y x, and study
More informationCS 234. Module 6. October 16, CS 234 Module 6 ADT Dictionary 1 / 33
CS 234 Module 6 October 16, 2018 CS 234 Module 6 ADT Dictionary 1 / 33 Idea for an ADT Te ADT Dictionary stores pairs (key, element), were keys are distinct and elements can be any data. Notes: Tis is
More informationAn Anchor Chain Scheme for IP Mobility Management
An Ancor Cain Sceme for IP Mobility Management Yigal Bejerano and Israel Cidon Department of Electrical Engineering Tecnion - Israel Institute of Tecnology Haifa 32000, Israel E-mail: bej@tx.tecnion.ac.il.
More informationAsynchronous Power Flow on Graphic Processing Units
Asyncronous Power Flow on Grapic Processing Units Manuel Marin, David Defour, Federico Milano To cite tis version: Manuel Marin, David Defour, Federico Milano. Asyncronous Power Flow on Grapic Processing
More information2.8 The derivative as a function
CHAPTER 2. LIMITS 56 2.8 Te derivative as a function Definition. Te derivative of f(x) istefunction f (x) defined as follows f f(x + ) f(x) (x). 0 Note: tis differs from te definition in section 2.7 in
More informationMean Shifting Gradient Vector Flow: An Improved External Force Field for Active Surfaces in Widefield Microscopy.
Mean Sifting Gradient Vector Flow: An Improved External Force Field for Active Surfaces in Widefield Microscopy. Margret Keuper Cair of Pattern Recognition and Image Processing Computer Science Department
More information3.6 Directional Derivatives and the Gradient Vector
288 CHAPTER 3. FUNCTIONS OF SEVERAL VARIABLES 3.6 Directional Derivatives and te Gradient Vector 3.6.1 Functions of two Variables Directional Derivatives Let us first quickly review, one more time, te
More informationMore on Functions and Their Graphs
More on Functions and Teir Graps Difference Quotient ( + ) ( ) f a f a is known as te difference quotient and is used exclusively wit functions. Te objective to keep in mind is to factor te appearing in
More informationCoarticulation: An Approach for Generating Concurrent Plans in Markov Decision Processes
Coarticulation: An Approac for Generating Concurrent Plans in Markov Decision Processes Kasayar Roanimanes kas@cs.umass.edu Sridar Maadevan maadeva@cs.umass.edu Department of Computer Science, University
More informationFourth-order NMO velocity for P-waves in layered orthorhombic media vs. offset-azimuth
Fourt-order NMO velocity for P-waves in layered orrombic media vs. set-azimut Zvi Koren* and Igor Ravve Paradigm Geopysical Summary We derive te fourt-order NMO velocity of compressional waves for a multi-layer
More informationTwo-Phase flows on massively parallel multi-gpu clusters
Two-Phase flows on massively parallel multi-gpu clusters Peter Zaspel Michael Griebel Institute for Numerical Simulation Rheinische Friedrich-Wilhelms-Universität Bonn Workshop Programming of Heterogeneous
More informationUnsupervised Learning for Hierarchical Clustering Using Statistical Information
Unsupervised Learning for Hierarcical Clustering Using Statistical Information Masaru Okamoto, Nan Bu, and Tosio Tsuji Department of Artificial Complex System Engineering Hirosima University Kagamiyama
More informationSymmetric Tree Replication Protocol for Efficient Distributed Storage System*
ymmetric Tree Replication Protocol for Efficient Distributed torage ystem* ung Cune Coi 1, Hee Yong Youn 1, and Joong up Coi 2 1 cool of Information and Communications Engineering ungkyunkwan University
More informationH-Adaptive Multiscale Schemes for the Compressible Navier-Stokes Equations Polyhedral Discretization, Data Compression and Mesh Generation
H-Adaptive Multiscale Scemes for te Compressible Navier-Stokes Equations Polyedral Discretization, Data Compression and Mes Generation F. Bramkamp 1, B. Gottsclic-Müller 2, M. Hesse 1, P. Lamby 2, S. Müller
More informationClassification of Osteoporosis using Fractal Texture Features
Classification of Osteoporosis using Fractal Texture Features V.Srikant, C.Dines Kumar and A.Tobin Department of Electronics and Communication Engineering Panimalar Engineering College Cennai, Tamil Nadu,
More informationSection 2.3: Calculating Limits using the Limit Laws
Section 2.3: Calculating Limits using te Limit Laws In previous sections, we used graps and numerics to approimate te value of a it if it eists. Te problem wit tis owever is tat it does not always give
More informationIntra- and Inter-Session Network Coding in Wireless Networks
Intra- and Inter-Session Network Coding in Wireless Networks Hulya Seferoglu, Member, IEEE, Atina Markopoulou, Member, IEEE, K K Ramakrisnan, Fellow, IEEE arxiv:857v [csni] 3 Feb Abstract In tis paper,
More informationMulti-Objective Particle Swarm Optimizers: A Survey of the State-of-the-Art
Multi-Objective Particle Swarm Optimizers: A Survey of te State-of-te-Art Margarita Reyes-Sierra and Carlos A. Coello Coello CINVESTAV-IPN (Evolutionary Computation Group) Electrical Engineering Department,
More information1.4 RATIONAL EXPRESSIONS
6 CHAPTER Fundamentals.4 RATIONAL EXPRESSIONS Te Domain of an Algebraic Epression Simplifying Rational Epressions Multiplying and Dividing Rational Epressions Adding and Subtracting Rational Epressions
More informationData Structures and Programming Spring 2014, Midterm Exam.
Data Structures and Programming Spring 2014, Midterm Exam. 1. (10 pts) Order te following functions 2.2 n, log(n 10 ), 2 2012, 25n log(n), 1.1 n, 2n 5.5, 4 log(n), 2 10, n 1.02, 5n 5, 76n, 8n 5 + 5n 2
More informationProceedings of the 8th WSEAS International Conference on Neural Networks, Vancouver, British Columbia, Canada, June 19-21,
Proceedings of te 8t WSEAS International Conference on Neural Networks, Vancouver, Britis Columbia, Canada, June 9-2, 2007 3 Neural Network Structures wit Constant Weigts to Implement Dis-Jointly Removed
More informationExtended Synchronization Signals for Eliminating PCI Confusion in Heterogeneous LTE
1 Extended Syncronization Signals for Eliminating PCI Confusion in Heterogeneous LTE Amed H. Zaran Department of Electronics and Electrical Communications Cairo University Egypt. azaran@eecu.cu.edu.eg
More information4.2 The Derivative. f(x + h) f(x) lim
4.2 Te Derivative Introduction In te previous section, it was sown tat if a function f as a nonvertical tangent line at a point (x, f(x)), ten its slope is given by te it f(x + ) f(x). (*) Tis is potentially
More informationComputing geodesic paths on manifolds
Proc. Natl. Acad. Sci. USA Vol. 95, pp. 8431 8435, July 1998 Applied Matematics Computing geodesic pats on manifolds R. Kimmel* and J. A. Setian Department of Matematics and Lawrence Berkeley National
More informationAN IMPROVED VOLUME-OF-FLUID (IVOF) METHOD FOR WAVE IMPACT TYPE PROBLEMS. K.M.Theresa Kleefsman, Arthur E.P. Veldman
Proceedings of OMAE-FPSO 2004 OMAE Speciality Symposium on FPSO Integrity 2004, Houston, USA OMAE-FPSO 04-0066 AN IMPROVED VOLUME-OF-FLUID (IVOF) METHOD FOR WAVE IMPACT TYPE PROBLEMS K.M.Teresa Kleefsman,
More information12.2 Investigate Surface Area
Investigating g Geometry ACTIVITY Use before Lesson 12.2 12.2 Investigate Surface Area MATERIALS grap paper scissors tape Q U E S T I O N How can you find te surface area of a polyedron? A net is a pattern
More informationAn Effective Sensor Deployment Strategy by Linear Density Control in Wireless Sensor Networks Chiming Huang and Rei-Heng Cheng
An ffective Sensor Deployment Strategy by Linear Density Control in Wireless Sensor Networks Ciming Huang and ei-heng Ceng 5 De c e mbe r0 International Journal of Advanced Information Tecnologies (IJAIT),
More informationTuning MAX MIN Ant System with off-line and on-line methods
Université Libre de Bruxelles Institut de Recerces Interdisciplinaires et de Développements en Intelligence Artificielle Tuning MAX MIN Ant System wit off-line and on-line metods Paola Pellegrini, Tomas
More informationInterference and Diffraction of Light
Interference and Diffraction of Ligt References: [1] A.P. Frenc: Vibrations and Waves, Norton Publ. 1971, Capter 8, p. 280-297 [2] PASCO Interference and Diffraction EX-9918 guide (written by Ann Hanks)
More informationNumerical Simulation of Two-Phase Free Surface Flows
Arc. Comput. Met. Engng. Vol. 12, 2, 165-224 (2005) Arcives of Computational Metods in Engineering State of te art reviews Numerical Simulation of Two-Pase Free Surface Flows Alexandre Caboussat Department
More informationMAPI Computer Vision
MAPI Computer Vision Multiple View Geometry In tis module we intend to present several tecniques in te domain of te 3D vision Manuel Joao University of Mino Dep Industrial Electronics - Applications -
More information, 1 1, A complex fraction is a quotient of rational expressions (including their sums) that result
RT. Complex Fractions Wen working wit algebraic expressions, sometimes we come across needing to simplify expressions like tese: xx 9 xx +, xx + xx + xx, yy xx + xx + +, aa Simplifying Complex Fractions
More informationA Novel QC-LDPC Code with Flexible Construction and Low Error Floor
A Novel QC-LDPC Code wit Flexile Construction and Low Error Floor Hanxin WANG,2, Saoping CHEN,2,CuitaoZHU,2 and Kaiyou SU Department of Electronics and Information Engineering, Sout-Central University
More informationMulti-Stack Boundary Labeling Problems
Multi-Stack Boundary Labeling Problems Micael A. Bekos 1, Micael Kaufmann 2, Katerina Potika 1 Antonios Symvonis 1 1 National Tecnical University of Atens, Scool of Applied Matematical & Pysical Sciences,
More informationRedundancy Awareness in SQL Queries
Redundancy Awareness in QL Queries Bin ao and Antonio Badia omputer Engineering and omputer cience Department University of Louisville bin.cao,abadia @louisville.edu Abstract In tis paper, we study QL
More informationCRASHWORTHINESS ASSESSMENT IN AIRCRAFT DITCHING INCIDENTS
27 TH INTERNATIONAL CONGRESS OF THE AERONAUTICAL SCIENCES CRASHWORTHINESS ASSESSMENT IN AIRCRAFT DITCHING INCIDENTS C. Candra*, T. Y. Wong* and J. Bayandor** * Te Sir Lawrence Wackett Aerospace Centre
More informationUUV DEPTH MEASUREMENT USING CAMERA IMAGES
ABCM Symposium Series in Mecatronics - Vol. 3 - pp.292-299 Copyrigt c 2008 by ABCM UUV DEPTH MEASUREMENT USING CAMERA IMAGES Rogerio Yugo Takimoto Graduate Scool of Engineering Yokoama National University
More informationPiecewise Polynomial Interpolation, cont d
Jim Lambers MAT 460/560 Fall Semester 2009-0 Lecture 2 Notes Tese notes correspond to Section 4 in te text Piecewise Polynomial Interpolation, cont d Constructing Cubic Splines, cont d Having determined
More informationYou Try: A. Dilate the following figure using a scale factor of 2 with center of dilation at the origin.
1 G.SRT.1-Some Tings To Know Dilations affect te size of te pre-image. Te pre-image will enlarge or reduce by te ratio given by te scale factor. A dilation wit a scale factor of 1> x >1enlarges it. A dilation
More informationReal-Time Wireless Routing for Industrial Internet of Things
Real-Time Wireless Routing for Industrial Internet of Tings Cengjie Wu, Dolvara Gunatilaka, Mo Sa, Cenyang Lu Cyber-Pysical Systems Laboratory, Wasington University in St. Louis Department of Computer
More informationUNSUPERVISED HIERARCHICAL IMAGE SEGMENTATION BASED ON THE TS-MRF MODEL AND FAST MEAN-SHIFT CLUSTERING
UNSUPERVISED HIERARCHICAL IMAGE SEGMENTATION BASED ON THE TS-MRF MODEL AND FAST MEAN-SHIFT CLUSTERING Raffaele Gaetano, Giuseppe Scarpa, Giovanni Poggi, and Josiane Zerubia Dip. Ing. Elettronica e Telecomunicazioni,
More informationImage Registration via Particle Movement
Image Registration via Particle Movement Zao Yi and Justin Wan Abstract Toug fluid model offers a good approac to nonrigid registration wit large deformations, it suffers from te blurring artifacts introduced
More informationExcel based finite difference modeling of ground water flow
Journal of Himalaan Eart Sciences 39(006) 49-53 Ecel based finite difference modeling of ground water flow M. Gulraiz Akter 1, Zulfiqar Amad 1 and Kalid Amin Kan 1 Department of Eart Sciences, Quaid-i-Azam
More informationGrid Adaptation for Functional Outputs: Application to Two-Dimensional Inviscid Flows
Journal of Computational Pysics 176, 40 69 (2002) doi:10.1006/jcp.2001.6967, available online at ttp://www.idealibrary.com on Grid Adaptation for Functional Outputs: Application to Two-Dimensional Inviscid
More informationMinimizing Memory Access By Improving Register Usage Through High-level Transformations
Minimizing Memory Access By Improving Register Usage Troug Hig-level Transformations San Li Scool of Computer Engineering anyang Tecnological University anyang Avenue, SIGAPORE 639798 Email: p144102711@ntu.edu.sg
More informationAll truths are easy to understand once they are discovered; the point is to discover them. Galileo
Section 7. olume All truts are easy to understand once tey are discovered; te point is to discover tem. Galileo Te main topic of tis section is volume. You will specifically look at ow to find te volume
More informationSoft sensor modelling by time difference, recursive partial least squares and adaptive model updating
Soft sensor modelling by time difference, recursive partial least squares adaptive model updating Y Fu 1, 2, W Yang 2, O Xu 1, L Zou 3, J Wang 4 1 Zijiang College, Zejiang University of ecnology, Hangzou
More informationLinear Interpolating Splines
Jim Lambers MAT 772 Fall Semester 2010-11 Lecture 17 Notes Tese notes correspond to Sections 112, 11, and 114 in te text Linear Interpolating Splines We ave seen tat ig-degree polynomial interpolation
More informationNVIDIA GTX200: TeraFLOPS Visual Computing. August 26, 2008 John Tynefield
NVIDIA GTX200: TeraFLOPS Visual Computing August 26, 2008 John Tynefield 2 Outline Execution Model Architecture Demo 3 Execution Model 4 Software Architecture Applications DX10 OpenGL OpenCL CUDA C Host
More informationA Statistical Approach for Target Counting in Sensor-Based Surveillance Systems
Proceedings IEEE INFOCOM A Statistical Approac for Target Counting in Sensor-Based Surveillance Systems Dengyuan Wu, Decang Cen,aiXing, Xiuzen Ceng Department of Computer Science, Te George Wasington University,
More informationNetwork Coding to Enhance Standard Routing Protocols in Wireless Mesh Networks
Downloaded from vbn.aau.dk on: April 7, 09 Aalborg Universitet etwork Coding to Enance Standard Routing Protocols in Wireless Mes etworks Palevani, Peyman; Roetter, Daniel Enrique Lucani; Fitzek, Frank;
More informationPLK-B SERIES Technical Manual (USA Version) CLICK HERE FOR CONTENTS
PLK-B SERIES Technical Manual (USA Version) CLICK ERE FOR CONTENTS CONTROL BOX PANEL MOST COMMONLY USED FUNCTIONS INITIAL READING OF SYSTEM SOFTWARE/PAGES 1-2 RE-INSTALLATION OF TE SYSTEM SOFTWARE/PAGES
More informationOn the Use of Radio Resource Tests in Wireless ad hoc Networks
Tecnical Report RT/29/2009 On te Use of Radio Resource Tests in Wireless ad oc Networks Diogo Mónica diogo.monica@gsd.inesc-id.pt João Leitão jleitao@gsd.inesc-id.pt Luis Rodrigues ler@ist.utl.pt Carlos
More informationFeature-Based Steganalysis for JPEG Images and its Implications for Future Design of Steganographic Schemes
Feature-Based Steganalysis for JPEG Images and its Implications for Future Design of Steganograpic Scemes Jessica Fridric Dept. of Electrical Engineering, SUNY Bingamton, Bingamton, NY 3902-6000, USA fridric@bingamton.edu
More informationHaptic Rendering of Topological Constraints to Users Manipulating Serial Virtual Linkages
Haptic Rendering of Topological Constraints to Users Manipulating Serial Virtual Linkages Daniela Constantinescu and Septimiu E Salcudean Electrical & Computer Engineering Department University of Britis
More informationOptimization solutions for the segmented sum algorithmic function
Optimization solutions for the segmented sum algorithmic function ALEXANDRU PÎRJAN Department of Informatics, Statistics and Mathematics Romanian-American University 1B, Expozitiei Blvd., district 1, code
More informationApplication of a Key Value Paradigm to Logic Factoring
INVITED PAPER Application of a Key Value Paradigm to Logic Factoring Te autor first revisits a classic algoritm for algebraic factoring to establis a stronger connection to te functional intent rater tan
More informationNon-Interferometric Testing
NonInterferometric Testing.nb Optics 513 - James C. Wyant 1 Non-Interferometric Testing Introduction In tese notes four non-interferometric tests are described: (1) te Sack-Hartmann test, (2) te Foucault
More informationAn Intuitive Framework for Real-Time Freeform Modeling
An Intuitive Framework for Real-Time Freeform Modeling Mario Botsc Leif Kobbelt Computer Grapics Group RWTH Aacen University Abstract We present a freeform modeling framework for unstructured triangle
More informationTruncated Newton-based multigrid algorithm for centroidal Voronoi diagram calculation
NUMERICAL MATHEMATICS: Teory, Metods and Applications Numer. Mat. Teor. Met. Appl., Vol. xx, No. x, pp. 1-18 (200x) Truncated Newton-based multigrid algoritm for centroidal Voronoi diagram calculation
More information