

A Report
On
Caching and Replacement of Multimedia Streaming Objects in Fixed Networks Using Soft Computing

Submitted by
Aditya Vikram Daga and Dushyant Arora
2006A7PS A7PS083

To
Dr. T.S.B. Sudarshan
Assistant Professor, Group Leader, CS & IS Group

In partial fulfilment of the requirements of the course
Computer Projects (BITS C331)

BIRLA INSTITUTE OF TECHNOLOGY & SCIENCE, PILANI, RAJASTHAN
November

Acknowledgements

We are extremely grateful to Prof. T.S.B. Sudarshan for giving us the opportunity and all the necessary resources to carry out this project. His guidance clarified many aspects of caching, neural networks and multimedia, and the chance to implement the algorithms in C helped us improve our programming skills. He was always approachable whenever we needed help related to the project, and we express our sincere regards for his support and encouragement. We also thank everyone who contributed to this work, directly or indirectly.

Abstract

With the soaring popularity of the Internet, web-based streaming applications are becoming increasingly common. Because of the enormous size of multimedia files, bandwidth has become a significant resource constraint, and there is a strong need to cache video files locally. This work explores the use of neural networks in multimedia web caching: a neural network is used to identify the best video object for replacement. The multimedia objects in the cache are handled at the frame level. The network approximates a cache replacement function that takes frequency, recency and size as parameters, and its output (a tag value) is assigned to each video object and to each of its frames. The video object with the lowest tag value is selected first, and then the frames of that video with lower tag values are identified and evicted.

Table of Contents

Acknowledgements
Abstract
Introduction
1. Multimedia Web Objects
   1.1 Issues in Multimedia Caching
2. Video Staging Algorithms
   2.1 Optimal Caching (OC Algorithm)
3. Neural Networks
   3.1 Overview
   3.2 Firing Rule / Squashing Function
   3.3 Multilayer Perceptrons
   3.4 Supervised Learning
   3.5 Objective Function
   3.6 Back-propagation
4. Related Work
5. Proposed Work
   5.1 Neural Network Proxy Cache Replacement
   5.2 Replacement Issues
6. Pseudo Code
   6.1 Pseudo Code for Neural Network Training
   6.2 Pseudo Code for Cache Replacement
Results
Conclusion
References

Introduction

During recent years, the rapid increase in commercial usage of the Internet has resulted in explosive growth in demand for web-based applications. This trend is expected to continue, and justifies the need for caching popular files at a proxy server close to the interested clients.

The Internet can be seen as a huge distributed system with numerous servers spread across the world. The surge in usage of the World Wide Web has made caching imperative, since otherwise the latency of servicing every request becomes clearly perceivable and unacceptable. The volume of data circulating across the web has grown disproportionately compared with the capacity of the links that carry it; studies suggest that the amount of data in circulation has been doubling roughly every six months. This places an enormous strain on origin servers in terms of processing, and on cache servers in terms of both processing and data management.

Caching is an effective way to speed up user requests. Originally, caching was introduced to provide storage intermediate between main memory and the processor. With the growing popularity of the World Wide Web, network congestion has increased significantly, leading to higher latency in serving user requests for web pages. As a result, the Web has become one of the primary bottlenecks to network performance. A user connected to a web server over a slow network link faces a noticeable delay between the request for an object and its receipt at the client end. Transferring the object over the network also increases the level of traffic. Coupled with this are the problems of competition for the fixed available bandwidth and the increased load on servers. As a solution to these problems, caching was extended to Web servers to reduce latency at the client end, network congestion, and load at the server end.

The basic idea of Web caching is to store copies of popular objects closer to the users who request them so that they can be retrieved faster. A web cache sits between the Web servers (or origin servers) and one or many clients, watches requests for HTML pages, images and files (collectively known as objects) go by, and saves a copy for itself. If there is another request for the same object, the cache serves the request from its copy instead of asking the origin server again. The four most significant advantages of this are:

1) Reduced latency: responses for cached requests are available immediately, and closer to the client being served.

2) Improved reliability: temporary unavailability of the network can be hidden from users, making the network appear more reliable. Network outages are typically less severe for clients of a caching system, since local caches can be used regardless of network availability.

3) Reduced traffic: because each object is fetched from the server only once, the amount of bandwidth used by a client is reduced.

4) Reduced server load: there are fewer requests for the server to handle.

The present report deals with the analysis and implementation of a neural network algorithm for proxy cache replacement, wherein a cache replacement function is used to decide which video frames to evict from the cache.

1. Multimedia Web Objects

The demand for streaming objects across the Internet has increased manifold in the recent past. This trend is expected to continue, and justifies the need for caching popular streams at a proxy server close to the interested clients. However, existing techniques for caching text and image objects are not appropriate for caching media streams, mainly because of their large size, variable bit rate, and real-time constraints. Multimedia objects have critical timing requirements: any network congestion or other delay heavily degrades the quality of service. Because these objects have started proliferating across the Internet only recently, user access patterns to them are not as well understood as for ordinary web objects, which makes even their replacement a challenging task.

1.1 Issues in Multimedia Caching

a) Huge size: A conventional Web object is typically on the order of kilobytes, so a binary caching decision works well for proxy caching. A media object, however, has a high data rate and long playback duration, which combined yield a huge data volume. For illustration, a one-hour standard MPEG-1 video occupies about 675 Mbytes; caching it entirely at a Web proxy is clearly impractical, as several such large streams would exhaust the capacity of the cache. One solution is to cache only portions of an object, in which case a client's playback needs joint delivery involving both the proxy and the origin server. Which portions of which objects to cache thus has to be chosen carefully.

b) Intensive bandwidth use: The streaming nature of delivery requires a significant amount of disk and network I/O bandwidth, sustained over a long period. Hence, minimizing bandwidth consumption becomes a primary consideration for proxy cache management, in many cases even taking precedence over reducing access latency.

c) High interactivity: The long playback duration of a streaming object also enables various client-server interactions. For example, recent studies found that nearly 90 percent of media playbacks are terminated prematurely by clients, and clients often expect VCR-like operations such as fast forward and rewind.

Multimedia objects, more specifically video objects, cannot be cached in their entirety because of these inherent properties. A technique called video staging was proposed for caching video objects, in which the video proxy caches only a part of the video content. A video staging algorithm decides what part of the video should be cached at the proxy. Any such algorithm views a video object as a set of video frames, each of which can be of a different size. There are two video staging algorithms: the cut-off caching algorithm (CC) and the optimal caching algorithm (OC).

2. Video Staging Algorithms

Because of the huge size of typical streaming media objects, it is impractical to cache them in their entirety; just storing the entire contents of several long streams would exhaust the capacity of a conventional proxy cache. Thus, most media objects are cached only partially. A technique called video staging was proposed, which caches only parts of the video content at the proxy. It aims to use the proxy cache space and the network bandwidth (two of the most important network resources) intelligently so as to ensure QoS-guaranteed delivery of the media object to the client. The CC and OC algorithms are similar, except that the OC algorithm adds a prefetching feature that improves its performance.

2.1 Optimal Caching (OC Algorithm)

The basic disadvantage of the CC algorithm is that the available network bandwidth is not completely utilized. This happens for frames smaller than the cut-off size, for which only a portion of the available bandwidth is used and the rest is wasted. The OC algorithm uses even this spare bandwidth to prefetch part of the next frame. The algorithm follows the same procedure as the CC algorithm for frames larger than the cut-off size, but for frames smaller than the cut-off, a portion of the next frame is prefetched. The amount to prefetch is chosen so as to fill the network bandwidth of c(i) bits completely. Since part of the next frame is prefetched, fewer bits need to be cached for any given frame (if its size is greater than the cut-off). Thus, this algorithm not only utilizes the network bandwidth more efficiently but also reduces the required cache size for each file, which means more files can be cached at the proxy for the same storage space (refer to the figure). Using the above notation, the OC algorithm can be written down as follows.

Once the file is cached, the proxy serves any other request for the file in the same manner as in the CC algorithm, but the concatenation process becomes slightly more complicated because of prefetching: for any frame, the prefetched portion along with the cached portion must be concatenated with the part obtained from the server.
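To make the staging decision concrete, the following is a minimal C sketch of the prefetching idea described above, with a per-frame size array frame_bits[] and a per-frame channel budget chan_bits[] standing in for c(i). The variable names and the carry-over bookkeeping are illustrative assumptions, not the exact formulation of the algorithm in [14].

    #include <stdio.h>

    /* Decide how many bits of each frame to cache at the proxy, given the
     * frame sizes and the per-frame channel budget c(i). Spare channel
     * capacity is used to prefetch part of the next frame. */
    void oc_stage(const long *frame_bits, const long *chan_bits,
                  long *cached_bits, int n_frames)
    {
        long prefetched = 0;                /* bits of frame i already prefetched */

        for (int i = 0; i < n_frames; i++) {
            long need = frame_bits[i] - prefetched;   /* bits still to deliver */
            if (need < 0)
                need = 0;

            if (need > chan_bits[i]) {
                /* Frame exceeds the channel budget: cache the excess at the proxy. */
                cached_bits[i] = need - chan_bits[i];
                prefetched = 0;             /* channel fully used, nothing spare */
            } else {
                /* Frame fits: cache nothing and use the spare bandwidth to
                 * prefetch part of the next frame. */
                cached_bits[i] = 0;
                prefetched = chan_bits[i] - need;
            }
        }
    }

    int main(void)
    {
        long frames[] = { 8000, 2000, 9000, 1500, 7000 };
        long chan[]   = { 5000, 5000, 5000, 5000, 5000 };
        long cached[5];

        oc_stage(frames, chan, cached, 5);
        for (int i = 0; i < 5; i++)
            printf("frame %d: cache %ld bits at proxy\n", i, cached[i]);
        return 0;
    }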


3. Neural Networks

3.1 Overview

Neural networks are comprised of many inter-connected simple processing units (neurons) which, when combined, are able to approximate functions based on data sets [2]. Neural networks typically mimic biological neurons by using a set of weights, one for each connection, analogous to the exciting and inhibiting properties of actual neurons [2-5]. By adjusting these weights so that the network provides correct outputs for most (ideally all) of the inputs, the network is said to gain knowledge about the problem. This is particularly useful for problems where a definite or optimal algorithm is unknown. Neural networks are also valuable for developing heuristics for problems where the data set is too large for a comprehensive search [3].

A neuron with an associated weight for each connection is known as a McCulloch and Pitts (MCP) neuron [2]. A neural network composed of MCP neurons is considerably more powerful than one with un-weighted neurons, since the weights allow the network to develop its own representation of knowledge [3]. The output of each neuron is determined by applying a function to the sum of every input multiplied by the weight of the associated connection [2-6]. The MCP model does not prescribe a specific rule for translating neuron input to output, so one must be chosen according to the properties of the problem the network is designed to solve.

3.2 Firing Rule / Squashing Function

The firing rule determines whether or not the neuron will fire based on its inputs. Neuron firing rules are generally linear (proportional output), threshold (binary output), or sigmoid (non-linear proportional output) [2]. Although firing is a simple on or off for threshold networks, the term is more typically used to mean a rule for determining the amount of output. Neural networks that use non-linear proportional output sometimes refer to the firing rule as the squashing function, because such functions are typically chosen to constrain extreme positive and negative values. The terms firing rule, squashing function and activation function are used interchangeably.

Common squashing functions include the sigmoid, tanh and step functions [6]. The sigmoid and tanh functions are approximately linear close to zero but quickly saturate for larger positive or negative values. The sigmoid function, f(u) = 1 / (1 + e^(-u)), is 0.5 for u = 0 and is bounded by 0 and 1. The tanh function, f(u) = tanh(u), is 0 for u = 0 and is bounded by -1 and 1. Both functions have derivatives that are easy to calculate given the output: for the sigmoid function the derivative is f'(u) = f(u)(1 - f(u)), and for tanh it is f'(u) = 1 - f(u)^2.

3.3 Multilayer Perceptrons

All neural networks have a layer of inputs and a layer of outputs. Neural networks which also feature one or more additional hidden layers are called multilayer perceptrons (MLP) [4,6]. In this research we employ a feed-forward network. Feed-forward networks are fully connected, meaning all of the neurons in a given layer, proceeding from input to output, are connected to each neuron in the next layer. The output y_i of node i is f(a_i), where f is the activation function and a_i is the activity of node i. The activity is

    a_i = sum_j w_ij * y_j,

where w_ij is the weight of the connection to node i from node j.

The name is derived from the fact that signals always proceed forward (from input to output) in the network [2,4,6]. Feedback networks also exist, but are more complicated and less intuitive than feed-forward networks: in a feedback network, neurons may be connected to other neurons in the same layer, to a previous layer, or even to themselves. Such networks, unlike feed-forward networks, fluctuate upon receiving new input until reaching an equilibrium point [2]. A feed-forward MLP with two hidden layers is able to classify regions of any shape [6,7] and approximate any continuous bounded function [6,8].

3.4 Supervised Learning

The MLP model described above requires the weights to be tuned to the specific problem the network is designed to address. Supervised learning is one method of determining the proper weights. The MLP is trained on inputs for which the corresponding target output is known, and the weights are adjusted according to the correctness of the produced output. This process is repeated until the network provides the correct output for each of the training inputs or until a stopping condition is satisfied [2-6]. Since the MLP approximates a function that matches the training data, this approach aims to find weights that allow the network to generalize to input/output combinations not included in the training set. In practical applications, the MLP needs two methods: one to measure how closely a training output matches the target output, and a second to adjust the weights so that the error is reduced [2,6].

3.5 Objective Function

The objective function, also called the error function, is used to quantify the level of correctness of the training output. According to [6], the sum of squared errors is the most commonly used objective function. However, all objective functions make some assumptions about the distribution of errors and perform accordingly, and some applications require objective functions that lack the easy differentiability and/or independence found in the most common error functions [6]. The sum of squared errors (SSE) is given by

    E_SSE = sum_p sum_o (t_o - y_o)^2,

where o is indexed over the output nodes, p is indexed over the training patterns, t_o is the target output and y_o is the actual output.

The mean sum of squared errors normalizes the SSE by the number of training patterns and is given by

    E_MSE = E_SSE / P,

where P is the number of training patterns. SSE and MSE both assume a Gaussian error distribution. For classification problems the cross-entropy error function,

    E_CE = - sum_p sum_o [ t_o * ln(y_o) + (1 - t_o) * ln(1 - y_o) ],

where t is in {0, 1}, is used instead. The cross-entropy error function assumes a binomial distribution, and outputs and targets are interpreted as a probability or confidence level that the pattern belongs to a certain class.

3.6 Back-propagation

Back-propagation is the most commonly used method of adjusting the weights in an MLP. It calculates the partial derivative of the error with respect to each weight by calculating the rate of change of the error as the activity of each unit changes. The delta values δ_i are first calculated for the output layer, directly from the target and actual outputs. For SSE this is

    δ_o = (t_o - y_o) * f'(a_o).

It is not necessary, fortunately, to calculate the derivative of the cross-entropy error function: if the cross-entropy error function is used with sigmoid units, then δ_i for the output nodes is simply

    δ_o = t_o - y_o.

The same calculations are then performed for each hidden layer, based on information from the layer in front of it. Once the error derivative with respect to the weights is known, the weights can be adjusted to reduce the error. Back-propagation is so named because the error derivatives are calculated in the opposite direction to signal propagation. The delta value δ_i of a hidden node i is calculated as

    δ_i = f'(a_i) * sum_k w_ki * δ_k,

where k ranges over the nodes in the layer after node i. This process is repeated until the error is less than a threshold determined by the application of the network [2-6].

Reed and Marks [6] point out that back-propagation actually refers to both the derivative calculation and a weight change algorithm. The basic weight update algorithm changes each weight by the negative of the derivative of the error with respect to that weight, multiplied by a small constant η known as the learning rate (LR):

    Δw_ij = -η * ∂E/∂w_ij.

Momentum is a common modification to the weight update algorithm, in which a fraction of the previous weight change is added to the current weight change [6]. The back-propagation weight update with momentum is

    Δw_ij(t) = -η * ∂E/∂w_ij + α * Δw_ij(t-1),

where 0 ≤ α < 1 is the momentum rate (MR). Momentum is useful for coasting out of poor local minima in the error surface and for traversing flat areas quickly [6].
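As an illustration of these update rules, here is a minimal C sketch of the output-layer delta and the momentum weight update for a single layer of sigmoid units under the SSE objective. The layer sizes and array names are illustrative assumptions; only the learning rate 0.2 and momentum 0.8 are taken from the results reported later.

    #include <stdio.h>

    #define N_IN  3   /* illustrative sizes: 3 inputs, 1 output */
    #define N_OUT 1

    /* Output-layer deltas for sigmoid units under the SSE objective:
     * delta = (t - y) * f'(a), with f'(a) written via the output as y*(1 - y). */
    static void output_deltas(const double y[N_OUT], const double t[N_OUT],
                              double delta[N_OUT])
    {
        for (int o = 0; o < N_OUT; o++)
            delta[o] = (t[o] - y[o]) * y[o] * (1.0 - y[o]);
    }

    /* Weight update with learning rate eta and momentum alpha:
     * dw(t) = eta * delta_o * x_j + alpha * dw(t-1). */
    static void update_weights(double w[N_OUT][N_IN], double dw_prev[N_OUT][N_IN],
                               const double x[N_IN], const double delta[N_OUT],
                               double eta, double alpha)
    {
        for (int o = 0; o < N_OUT; o++)
            for (int j = 0; j < N_IN; j++) {
                double dw = eta * delta[o] * x[j] + alpha * dw_prev[o][j];
                w[o][j] += dw;
                dw_prev[o][j] = dw;    /* remembered for the next momentum term */
            }
    }

    int main(void)
    {
        double w[N_OUT][N_IN] = {{ 0.1, -0.2, 0.05 }};
        double dw_prev[N_OUT][N_IN] = {{ 0 }};
        double x[N_IN] = { 0.3, 0.7, 0.5 };          /* normalized inputs */
        double y[N_OUT] = { 0.6 }, t[N_OUT] = { 0.9 }, delta[N_OUT];

        output_deltas(y, t, delta);
        update_weights(w, dw_prev, x, delta, 0.2, 0.8);  /* LR and MR from the report */
        printf("updated w[0][0] = %f\n", w[0][0]);
        return 0;
    }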

Two common variations of this algorithm are batch-mode and on-line learning. Batch-mode learning runs all patterns in the training set: the error derivative with respect to each weight is summed over all patterns to obtain the total error derivative, and all weights are then adjusted accordingly. On-line training, on the other hand, draws a pattern at random from the training set and updates the weights after each single pattern, using the error derivative with respect to the weights for the current training pattern [6].

4. Related Work

This section discusses some current web proxy cache replacement algorithms, followed by work that uses neural networks for caching.

Existing Algorithms

There is a wide variety of web proxy cache replacement algorithms in the literature. According to [1], the Pyramidal Selection Scheme (PSS) [9], Greedy Dual Size Frequency (GDSF) [10], Least-Unified Value (LUV) [11] and algorithms developed from them are considered good enough for current web caching needs. PSS is a recency-based algorithm: it uses multiple LRU caches and chooses the replacement victim from the candidates of the individual LRU lists. GDSF and LUV are both function-based strategies. GDSF extends GD-Size [12] to account for frequency in addition to size, fetch cost and an aging factor. LUV uses a sliding window of request times to gather parameters for an undefined function F(x), which is used to calculate the probability p(i) that an object i will be referenced in the future; this probability is used along with fetch cost and size information to select an object for replacement.
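For concreteness, the sketch below computes a GDSF-style priority key, assuming the commonly cited form K(p) = L + F(p) * C(p) / S(p), with an aging value L raised on every eviction. The structure and function names are illustrative and are not taken from the report or from [10].

    #include <stdio.h>

    /* Illustrative record for one cached object under a GDSF-style policy. */
    struct cache_obj {
        double freq;   /* access frequency F(p) */
        double cost;   /* cost to fetch C(p)    */
        double size;   /* object size S(p)      */
        double key;    /* current priority K(p) */
    };

    static double aging_L = 0.0;    /* raised on every eviction */

    static void gdsf_touch(struct cache_obj *p)
    {
        p->freq += 1.0;
        p->key = aging_L + p->freq * p->cost / p->size;
    }

    static void gdsf_evict(struct cache_obj *victim)
    {
        aging_L = victim->key;      /* future keys start from the evicted key */
        /* ... remove victim from the cache structures ... */
    }

    int main(void)
    {
        struct cache_obj a = { 0, 1.0, 2048.0, 0 }, b = { 0, 1.0, 512.0, 0 };
        gdsf_touch(&a);
        gdsf_touch(&b);
        /* The smaller, equally popular object b gets the higher key and is kept longer. */
        printf("key(a)=%f key(b)=%f\n", a.key, b.key);
        gdsf_evict(a.key < b.key ? &a : &b);
        return 0;
    }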

5. Proposed Work

5.1 Neural Network Proxy Cache Replacement

In this section we present our Neural Network Proxy Cache Replacement (NNPCR) technique. The main idea behind NNPCR is to construct and train a multilayer feed-forward artificial neural network to handle web proxy cache replacement decisions. The weights of the network are adjusted using back-propagation, and the sigmoid function is used as the activation function. The neural network has a single output (the tag value), which is assigned to each video object and also to each of its frames. The video object with the lowest tag value is selected first, and then the frames of that video with lower tag values are identified and evicted. NNPCR uses a single hidden layer, since networks of this class are sufficient to approximate numerical functions. Several sizes and hidden layer configurations were trained in an effort to find the smallest network possible, because networks that are too large often learn the training set well but are unable to generalize [6]; the hidden layers were usually kept at approximately the same size to ease training [6,16]. This approach is a function-based replacement strategy: neural networks are often used to approximate functions from a sample of the data set [6].

5.2 Replacement Issues

Current replacement algorithms usually make a binary decision on the caching of an atomic object: the object is cached or flushed in its entirety, based on time or frequency. The optimal caching algorithm, however, caches objects partially (i.e., only some portion of a frame is cached), so a replacement algorithm that uses just time or frequency does not suffice. The algorithm should also take size into account, because size is the most important factor for multimedia objects. The popularity of the videos is also essential: once a video becomes popular, requests for it grow exponentially, and less popular videos are almost ignored.

Since the caching of videos using the OC algorithm happens frame by frame, the replacement should also happen frame by frame. Once the victim video is selected, frames can be deleted starting from either the first or the last frame. The best choice is to start deleting from the last frame, because any remaining frames could still be used to serve a future request, at least partially.

This also serves another purpose: if a request for the video being deleted arrives, it can be locked and another file chosen for deletion.

In the neural replacement policy, whenever the cache is full and a cache miss occurs, the NN algorithm determines the video frames to evict by computing a mathematical merit, called the cache metric, for each video, based on three parameters: size, frequency and access recency. In order of priority these are:

1) Access frequency (highest priority)
2) Size
3) Access recency (lowest priority)

Access frequency: the lower the frequency, the higher the probability of that object being replaced.

Size: larger objects are given lower priority so that a greater number of objects can fit inside the cache.

Access recency: every video object has an access recency field T(r). Every time a request is made, this field is recalculated as the current time minus the proxy start time, so a higher T(r) value indicates a more recently accessed object.

The neural network is used to approximate the following cache metric function:

    H = 1 / (1 + exp(-F)),   where F = f_i^f * T_i^r * s_i^s

f_i = frequency of the i-th video object/frame
T_i = access recency time of the i-th video object/frame
s_i = size of the i-th video object/frame

The indices f, r and s are integer constants that signify the relative priority given to the three parameters. For this cache metric, the values of these indices determined after training are f = 5, r = 2 and s = -4. If the number of input neurons is j, then the number of neurons in the hidden layer should lie between j and 2j; the exact number can be determined from the error and computation time during training. For approximating bounded continuous functions, as in our case, one hidden layer is sufficient.
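As a concrete illustration of the target function the network is trained to approximate, the following C sketch evaluates H for one object using the exponents reported above (f = 5, r = 2, s = -4). The struct and its field names are assumptions made only for this example.

    #include <math.h>
    #include <stdio.h>

    /* Illustrative record for a cached video object (or a single frame). */
    struct video_obj {
        double freq;      /* access frequency f_i                        */
        double recency;   /* access recency T_i (current - start time)   */
        double size;      /* size s_i                                    */
    };

    /* Target cache metric H = 1 / (1 + exp(-F)) with F = f^5 * T^2 * s^-4,
     * using the exponents determined during training in the report. */
    double cache_metric(const struct video_obj *v)
    {
        double F = pow(v->freq, 5) * pow(v->recency, 2) * pow(v->size, -4);
        return 1.0 / (1.0 + exp(-F));
    }

    int main(void)
    {
        struct video_obj v = { 12.0, 350.0, 64.0 };
        printf("tag value H = %f\n", cache_metric(&v));
        return 0;
    }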

6. Pseudo Code

6.1 Pseudo Code for Neural Network Training

// Definitions: Compute_NN_Output, forward_pass, backward_pass, learn, initialize_net

// Reads the required network structure from a file and assigns random values to the weights
Initialize_net();

// A training set of 100 video objects, stored in a text file with frequency, size
// and access recency values, is passed to the network.
// Each input is normalized before being fed to the NN.
Learn()
{
    for (each input pattern) {
        while ((target_output - NN_Output) > max_error_tolerance) {
            forward_pass(y);    // computes the neural network output
            backward_pass(y);   // adjusts the weights using the back-propagation algorithm
        }
    }
}
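The pseudo code notes that each input is normalized before being fed to the network, but does not say how. The C sketch below shows one plausible choice (min-max scaling of each of the three inputs to [0, 1]) and should be read as an assumption, not as the report's method.

    #include <stdio.h>

    #define N_INPUTS 3   /* frequency, size, access recency */

    /* Min-max scale each input column to [0, 1]. The choice of min-max scaling
     * is an assumption; the report only states that inputs are normalized. */
    void normalize(double x[][N_INPUTS], int n_patterns)
    {
        for (int j = 0; j < N_INPUTS; j++) {
            double lo = x[0][j], hi = x[0][j];
            for (int p = 1; p < n_patterns; p++) {
                if (x[p][j] < lo) lo = x[p][j];
                if (x[p][j] > hi) hi = x[p][j];
            }
            double range = (hi > lo) ? (hi - lo) : 1.0;   /* guard against zero range */
            for (int p = 0; p < n_patterns; p++)
                x[p][j] = (x[p][j] - lo) / range;
        }
    }

    int main(void)
    {
        /* three sample (frequency, size, recency) patterns */
        double x[3][N_INPUTS] = { { 2, 5000, 10 }, { 9, 1200, 300 }, { 5, 800, 120 } };
        normalize(x, 3);
        for (int p = 0; p < 3; p++)
            printf("%f %f %f\n", x[p][0], x[p][1], x[p][2]);
        return 0;
    }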

6.2 Pseudo Code for Cache Replacement

If (Cache Hit) {
    // Update the frequency
    // Update the recency time stamp
    // Calculate the new NN_output (tag value)
}

If (Cache Miss) {
    // Cache the frames of the requested video
    Cache(OC);

    If (CacheSize + NextFrameSize > MaxCacheSize) {
        // Select the video with the least tag value for eviction
REPLACE:
        Select(V) where tagvalue == tagvalue_minimum;

        // Replace the victim video, starting from its last frame,
        // with the requested video
        Replace(Victim Video with Requested Video);

        // If a request arrives for the victim video while it is
        // being deleted, lock it
        If (Request(Victim Video) == TRUE) {
            Lock(Victim Video);
            // Select the next victim video in the list and replace it
            Goto REPLACE;
        }

        // Delete the entire victim video if required
        If (Victim Video Frame No. == 1) {
            Delete(Victim Video);

            // If the requested video is not completely cached yet
            If (Cache(Requested Video) != Finished) {
                // Select the next victim video in the list and replace it
                Goto REPLACE;
            }
        }
    }
}

Figure 1: (3, 4, 1) neural network with back-propagation
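To make the eviction step of the pseudo code concrete, the following C sketch selects the cached video with the minimum tag value and trims its frames from the last frame backwards until enough space is freed, moving on to the next victim if that video runs out of frames. The data structures and the byte-level accounting are illustrative assumptions rather than the report's implementation.

    #include <stdio.h>

    /* Illustrative in-cache record for a video: per-frame sizes plus the number
     * of frames still cached and the NN-assigned tag value. */
    struct cached_video {
        const long *frame_size;   /* sizes of the cached frames            */
        int    n_frames;          /* frames still held in the cache        */
        double tag;               /* NN output (tag value) for the video   */
        int    locked;            /* set if a request arrives mid-eviction */
    };

    /* Free at least `needed` bytes by trimming frames, last frame first, from the
     * unlocked video with the smallest tag value. Returns the bytes freed. */
    long evict_for_space(struct cached_video *vids, int n_vids, long needed)
    {
        long freed = 0;

        while (freed < needed) {
            struct cached_video *victim = NULL;
            for (int i = 0; i < n_vids; i++)
                if (!vids[i].locked && vids[i].n_frames > 0 &&
                    (victim == NULL || vids[i].tag < victim->tag))
                    victim = &vids[i];
            if (victim == NULL)
                break;                               /* nothing left to evict */
            while (victim->n_frames > 0 && freed < needed)
                freed += victim->frame_size[--victim->n_frames];
        }
        return freed;
    }

    int main(void)
    {
        long fa[] = { 400, 300, 500 }, fb[] = { 200, 200 };
        struct cached_video vids[] = { { fa, 3, 0.8, 0 }, { fb, 2, 0.2, 0 } };

        long freed = evict_for_space(vids, 2, 600);
        printf("freed %ld bytes; video B now holds %d frames\n", freed, vids[1].n_frames);
        return 0;
    }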

Results

To evaluate the performance of the replacement algorithm, it was simulated on a UNIX platform; the programming was done in C. The proxy server receives requests for video objects from the client. Each object is a video file taken from the benchmark videos, and its size is sent to the client frame by frame. If the video requested by the user is not present in the cache, the request is forwarded to the origin server, which sends the video frame sizes back to the proxy server; in the case of a cache miss the replacement algorithm is run. The hit ratio is updated on every client request, whether it results in a cache hit or a miss. The learning rate and momentum coefficient were found to be optimal at 0.2 and 0.8 respectively.

Figure 2: Neural network structure obtained after training (output written to a file)
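The report does not spell out the bookkeeping behind the hit ratio; the small C sketch below shows the usual definition (hits divided by total requests) that the simulation presumably tracks, and is included only for reference.

    #include <stdio.h>

    /* Running hit-ratio counters: the usual definition, hits / total requests. */
    struct hit_stats {
        unsigned long hits;
        unsigned long requests;
    };

    void record_request(struct hit_stats *s, int was_hit)
    {
        s->requests++;
        if (was_hit)
            s->hits++;
    }

    double hit_ratio(const struct hit_stats *s)
    {
        return s->requests ? (double)s->hits / (double)s->requests : 0.0;
    }

    int main(void)
    {
        struct hit_stats s = { 0, 0 };
        record_request(&s, 1);   /* a hit  */
        record_request(&s, 0);   /* a miss */
        printf("hit ratio = %.2f\n", hit_ratio(&s));
        return 0;
    }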

Figure 3: Output of the neural network for the training set

Conclusion

Artificial neural networks are models consisting of many nodes, often arranged in layers, with weighted connections between the nodes that seek to imitate the processing of biological neural networks. MLP neural networks are appropriate for web proxy caching because they are able to learn by example and to generalize knowledge gained through training. The weights can be set to appropriate values through supervised learning, in which the network is trained against known input/output pairs and the weights are adjusted until the network converges to producing correct output for all patterns in the training set. If the network is not over-trained, it should then be able to generalize reasonably well to patterns outside the training set.

References

[1] S. Podlipnig and L. Böszörmenyi, A survey of web cache replacement strategies, ACM Computing Surveys, vol. 35, no. 4.
[2] C. Stergiou and D. Siganos, Neural Networks, online document, cited Sept. 9, 2005.
[3] L. Fu, Knowledge discovery based on neural networks, Communications of the ACM, vol. 42, no. 11.
[4] P. P. van der Smagt, A comparative study of neural network algorithms applied to optical character recognition, in Proceedings of the Third International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems, ACM Press, 1990, vol. 2.
[5] B. Dasgupta, H. T. Siegelmann, and E. Sontag, On a learnability question associated to neural networks with continuous activations, Annual Workshop on Computational Learning Theory.
[6] R. D. Reed and R. J. Marks II, Neural Smithing: Supervised Learning in Feedforward Artificial Neural Networks, The MIT Press.
[7] R. P. Lippmann, An introduction to computing with neural nets, ASSP Magazine, pp. 4-22, April.
[8] Lapedes and R. Farber, How neural nets work, in Neural Information Processing Systems, American Institute of Physics, 1988.
[9] C. C. Aggarwal, J. L. Wolf and P. S. Yu, Caching on the World Wide Web, IEEE Trans. Knowl. Data Eng., vol. 11, Jan.
[10] M. Arlitt, L. Cherkasova, J. Dilley, R. Friedrich and T. Jin, Evaluating content management techniques for Web proxy caches, ACM SIGMETRICS Performance Evaluation Review, vol. 27, no. 4, pp. 3-11.
[11] H. Bahn, K. Koh, S. L. Min and S. H. Noh, Efficient replacement of nonuniform objects in Web caches, IEEE Computer, vol. 35, June 2002.
[12] P. Cao and S. Irani, Cost-Aware WWW Proxy Caching Algorithms, in Proceedings of the USENIX Symposium on Internet Technologies and Systems, 1997.
[13] H. Khalid, A new cache replacement scheme based on backpropagation neural networks, ACM SIGARCH Computer Architecture News, vol. 25, no. 1, pp. 27-33.
[14] Zhi-Li Zhang, Yuewei Wang, David H. C. Du, and Dongli Su, Video Staging: A Proxy-Server-Based Approach to End-to-End Video Delivery over Wide-Area Networks, IEEE/ACM Transactions on Multimedia, August.
[15] T. S. B. Sudarshan, J. Sashidhar, G. Raghurama, Multimedia Proxy Caching Algorithms for Streaming Objects, International Conference on Recent Trends and New Directions of Research in Cybernetics & Systems Theory, IASST, Jan.
[16] J. de Villiers and E. Barnard, Backpropagation neural nets with one and two hidden layers, IEEE Transactions on Neural Networks, vol. 4, no. 1.
[17] S. Rajasekaran and G. A. Vijayalakshmi Pai, Neural Networks, Fuzzy Logic and Genetic Algorithms: Synthesis & Applications, PHI India, 2003.
[18] T. S. B. Sudarshan, J. Sashidhar, G. Raghurama, Multimedia Proxy Caching Algorithms for Streaming Objects, International Conference on Recent Trends and New Directions of Research in Cybernetics & Systems Theory, IASST, Jan.


CS 6501: Deep Learning for Computer Graphics. Training Neural Networks II. Connelly Barnes

CS 6501: Deep Learning for Computer Graphics. Training Neural Networks II. Connelly Barnes CS 6501: Deep Learning for Computer Graphics Training Neural Networks II Connelly Barnes Overview Preprocessing Initialization Vanishing/exploding gradients problem Batch normalization Dropout Additional

More information

An Empirical Study of Software Metrics in Artificial Neural Networks

An Empirical Study of Software Metrics in Artificial Neural Networks An Empirical Study of Software Metrics in Artificial Neural Networks WING KAI, LEUNG School of Computing Faculty of Computing, Information and English University of Central England Birmingham B42 2SU UNITED

More information

Semi-Supervised Clustering with Partial Background Information

Semi-Supervised Clustering with Partial Background Information Semi-Supervised Clustering with Partial Background Information Jing Gao Pang-Ning Tan Haibin Cheng Abstract Incorporating background knowledge into unsupervised clustering algorithms has been the subject

More information

COMP 551 Applied Machine Learning Lecture 14: Neural Networks

COMP 551 Applied Machine Learning Lecture 14: Neural Networks COMP 551 Applied Machine Learning Lecture 14: Neural Networks Instructor: (jpineau@cs.mcgill.ca) Class web page: www.cs.mcgill.ca/~jpineau/comp551 Unless otherwise noted, all material posted for this course

More information

Time Series prediction with Feed-Forward Neural Networks -A Beginners Guide and Tutorial for Neuroph. Laura E. Carter-Greaves

Time Series prediction with Feed-Forward Neural Networks -A Beginners Guide and Tutorial for Neuroph. Laura E. Carter-Greaves http://neuroph.sourceforge.net 1 Introduction Time Series prediction with Feed-Forward Neural Networks -A Beginners Guide and Tutorial for Neuroph Laura E. Carter-Greaves Neural networks have been applied

More information

Chapter III. congestion situation in Highspeed Networks

Chapter III. congestion situation in Highspeed Networks Chapter III Proposed model for improving the congestion situation in Highspeed Networks TCP has been the most used transport protocol for the Internet for over two decades. The scale of the Internet and

More information

ECE 610: Homework 4 Problems are taken from Kurose and Ross.

ECE 610: Homework 4 Problems are taken from Kurose and Ross. ECE 610: Homework 4 Problems are taken from Kurose and Ross. Problem 1: Host A and B are communicating over a TCP connection, and Host B has already received from A all bytes up through byte 248. Suppose

More information

Kernel-based online machine learning and support vector reduction

Kernel-based online machine learning and support vector reduction Kernel-based online machine learning and support vector reduction Sumeet Agarwal 1, V. Vijaya Saradhi 2 andharishkarnick 2 1- IBM India Research Lab, New Delhi, India. 2- Department of Computer Science

More information

Rough Set Approach to Unsupervised Neural Network based Pattern Classifier

Rough Set Approach to Unsupervised Neural Network based Pattern Classifier Rough Set Approach to Unsupervised Neural based Pattern Classifier Ashwin Kothari, Member IAENG, Avinash Keskar, Shreesha Srinath, and Rakesh Chalsani Abstract Early Convergence, input feature space with

More information

WHAT TYPE OF NEURAL NETWORK IS IDEAL FOR PREDICTIONS OF SOLAR FLARES?

WHAT TYPE OF NEURAL NETWORK IS IDEAL FOR PREDICTIONS OF SOLAR FLARES? WHAT TYPE OF NEURAL NETWORK IS IDEAL FOR PREDICTIONS OF SOLAR FLARES? Initially considered for this model was a feed forward neural network. Essentially, this means connections between units do not form

More information

Neural Networks (Overview) Prof. Richard Zanibbi

Neural Networks (Overview) Prof. Richard Zanibbi Neural Networks (Overview) Prof. Richard Zanibbi Inspired by Biology Introduction But as used in pattern recognition research, have little relation with real neural systems (studied in neurology and neuroscience)

More information

Cursive Handwriting Recognition System Using Feature Extraction and Artificial Neural Network

Cursive Handwriting Recognition System Using Feature Extraction and Artificial Neural Network Cursive Handwriting Recognition System Using Feature Extraction and Artificial Neural Network Utkarsh Dwivedi 1, Pranjal Rajput 2, Manish Kumar Sharma 3 1UG Scholar, Dept. of CSE, GCET, Greater Noida,

More information

Dynamic Broadcast Scheduling in DDBMS

Dynamic Broadcast Scheduling in DDBMS Dynamic Broadcast Scheduling in DDBMS Babu Santhalingam #1, C.Gunasekar #2, K.Jayakumar #3 #1 Asst. Professor, Computer Science and Applications Department, SCSVMV University, Kanchipuram, India, #2 Research

More information

Heuristic Algorithms for Multiconstrained Quality-of-Service Routing

Heuristic Algorithms for Multiconstrained Quality-of-Service Routing 244 IEEE/ACM TRANSACTIONS ON NETWORKING, VOL 10, NO 2, APRIL 2002 Heuristic Algorithms for Multiconstrained Quality-of-Service Routing Xin Yuan, Member, IEEE Abstract Multiconstrained quality-of-service

More information

Memory Design. Cache Memory. Processor operates much faster than the main memory can.

Memory Design. Cache Memory. Processor operates much faster than the main memory can. Memory Design Cache Memory Processor operates much faster than the main memory can. To ameliorate the sitution, a high speed memory called a cache memory placed between the processor and main memory. Barry

More information

COMMON INTERNET FILE SYSTEM PROXY

COMMON INTERNET FILE SYSTEM PROXY COMMON INTERNET FILE SYSTEM PROXY CS739 PROJECT REPORT ANURAG GUPTA, DONGQIAO LI {anurag, dongqiao}@cs.wisc.edu Computer Sciences Department University of Wisconsin, Madison Madison 53706, WI May 15, 1999

More information

Pattern Recognition. Kjell Elenius. Speech, Music and Hearing KTH. March 29, 2007 Speech recognition

Pattern Recognition. Kjell Elenius. Speech, Music and Hearing KTH. March 29, 2007 Speech recognition Pattern Recognition Kjell Elenius Speech, Music and Hearing KTH March 29, 2007 Speech recognition 2007 1 Ch 4. Pattern Recognition 1(3) Bayes Decision Theory Minimum-Error-Rate Decision Rules Discriminant

More information

Machine Learning Classifiers and Boosting

Machine Learning Classifiers and Boosting Machine Learning Classifiers and Boosting Reading Ch 18.6-18.12, 20.1-20.3.2 Outline Different types of learning problems Different types of learning algorithms Supervised learning Decision trees Naïve

More information

Automatic Classification of Attacks on IP Telephony

Automatic Classification of Attacks on IP Telephony Automatic Classification of Attacks on IP Telephony Jakub SAFARIK 1, Pavol PARTILA 1, Filip REZAC 1, Lukas MACURA 2, Miroslav VOZNAK 1 1 Department of Telecommunications, Faculty of Electrical Engineering

More information

A Data Classification Algorithm of Internet of Things Based on Neural Network

A Data Classification Algorithm of Internet of Things Based on Neural Network A Data Classification Algorithm of Internet of Things Based on Neural Network https://doi.org/10.3991/ijoe.v13i09.7587 Zhenjun Li Hunan Radio and TV University, Hunan, China 278060389@qq.com Abstract To

More information

Reducing Web Latency through Web Caching and Web Pre-Fetching

Reducing Web Latency through Web Caching and Web Pre-Fetching RESEARCH ARTICLE OPEN ACCESS Reducing Web Latency through Web Caching and Web Pre-Fetching Rupinder Kaur 1, Vidhu Kiran 2 Research Scholar 1, Assistant Professor 2 Department of Computer Science and Engineering

More information