Generalized Low Rank Approximations of Matrices


Machine Learning, Springer Science + Business Media, Inc. Manufactured in The Netherlands.

Generalized Low Rank Approximations of Matrices

JIEPING YE* (jieping@cs.umn.edu)
Department of Computer Science & Engineering, University of Minnesota-Twin Cities, Minneapolis, MN 55455, USA
* Present address: Department of Computer Science & Engineering, Arizona State University, Tempe, AZ, USA.

Editor: Peter Flach
Published online: 12 August 2005

Abstract. The problem of computing low rank approximations of matrices is considered. The novel aspect of our approach is that the low rank approximations are on a collection of matrices. We formulate this as an optimization problem, which aims to minimize the reconstruction (approximation) error. To the best of our knowledge, the optimization problem proposed in this paper does not admit a closed form solution. We thus derive an iterative algorithm, namely GLRAM, which stands for the Generalized Low Rank Approximations of Matrices. GLRAM reduces the reconstruction error sequentially, and the resulting approximation is thus improved during successive iterations. Experimental results show that the GLRAM algorithm converges rapidly. We have conducted extensive experiments on image data to evaluate the effectiveness of the proposed algorithm and compare the computed low rank approximations with those obtained from traditional Singular Value Decomposition (SVD) based methods. The comparison is based on the reconstruction error, misclassification error rate, and computation time. Results show that GLRAM is competitive with SVD for classification, while it has a much lower computation cost. However, GLRAM results in a larger reconstruction error than SVD. To further reduce the reconstruction error, we study the combination of GLRAM and SVD, namely GLRAM+SVD, where SVD is preceded by GLRAM. Results show that when using the same number of reduced dimensions, GLRAM+SVD achieves a significant reduction of the reconstruction error as compared to GLRAM, while keeping the computation cost low.

Keywords: singular value decomposition, matrix approximation, reconstruction error, classification

1. Introduction

The problem of dimensionality reduction has recently received broad attention in areas such as machine learning, computer vision, and information retrieval (Berry, Dumais, & O'Brien, 1995; Castelli, Thomasian, & Li, 2003; Deerwester et al., 1990; Dhillon & Modha, 2001; Kleinberg & Tomkins, 1999; Srebro & Jaakkola, 2003). The goal of dimensionality reduction is to obtain more compact representations of the data with limited loss of information. Traditional algorithms for dimensionality reduction are based on the so-called vector space model. Under this model, each datum is modeled as a vector and the collection of data is modeled as a single data matrix, where each column of the data matrix corresponds to a data point and each row corresponds to a feature dimension.

The representation of data by vectors in Euclidean space allows one to compute the similarity between data points, based on the Euclidean distance or some other similarity metric. The similarity metrics on data points naturally lead to similarity-based indexing by representing queries as vectors and searching for their nearest neighbors (Aggarwal, 2001; Castelli, Thomasian, & Li, 2003).

A well-known technique for dimensionality reduction is the low rank approximation by the Singular Value Decomposition (SVD), also called Latent Semantic Indexing (LSI) in information retrieval (Berry, Dumais, & O'Brien, 1995). An appealing property of this low rank approximation is that it achieves the smallest reconstruction error among all approximations with the same rank. Details can be found in Section 2. Some theoretical justification of the empirical success of LSI can be found in Papadimitriou et al. (1998), where it is shown that LSI works in the context of a simple probabilistic corpus-generating model. However, applications of this technique to high-dimensional data, such as images and videos, quickly run up against practical computational limits, mainly due to the expensive computation in both time and space for large matrices (Golub & Van Loan, 1996). Several incremental algorithms have been proposed in the past (Brand, 2002; Gu & Eisenstat, 1993; Kanth et al., 1998) to deal with the high space complexity of SVD, where the data points are inserted incrementally to update the SVD. To the best of our knowledge, such algorithms come with no guarantees on the quality of the approximation produced. Random sampling can be applied to speed up the computation. More details can be found in Achlioptas and McSherry (2001), Drineas et al. (1999) and Frieze, Kannan, & Vempala (1998).

1.1. Contributions

In this paper, we present a novel approach to alleviate the expensive computation. The novelty lies in a new data representation model. Under this model, each datum is represented as a matrix, instead of a vector, and the collection of data is represented as a collection of matrices, instead of a single data matrix. We formulate the problem of low rank approximations as a new optimization problem, which approximates a collection of matrices with matrices of lower rank. To the best of our knowledge, there is no closed form solution for the new optimization problem. We thus derive an iterative algorithm, namely GLRAM. Detailed mathematical justification for this iterative procedure is given in Section 3.

Both GLRAM and SVD aim to minimize the reconstruction error. The essential difference is that GLRAM applies a bilinear transformation on the data. Such a bilinear transformation is particularly appropriate for data in matrix form, and often leads to lower computation cost in comparison to SVD. We apply GLRAM to image compression and retrieval, where each image is represented in its native matrix form. To evaluate the proposed algorithm, we have conducted extensive experiments on five well-known image datasets: PIX, ORL, AR, PIE, and USPS, where USPS consists of images of handwritten digits and the other four are face image datasets. GLRAM is compared with SVD, as well as 2DPCA, a recently proposed algorithm for dimension reduction. (Details on 2DPCA can be found in Section 4.) Results show that when using the same number of reduced dimensions, GLRAM is competitive with SVD for classification, while it has a much lower computation cost. However, GLRAM results in a larger reconstruction error than SVD.

The underlying reason may be that SVD is able to utilize the locality property (e.g., smoothness in an image) intrinsic in the data, which leads to good classification performance. In terms of compression ratio (Note 1), GLRAM outperforms SVD, especially when the number of data points is relatively small compared to the number of dimensions. For large and high-dimensional datasets, the lack of available space becomes a critical issue. In this case, compression ratio is an important factor in evaluating different dimensionality reduction algorithms.

To further reduce the reconstruction error of GLRAM, we study the combination of GLRAM and SVD, namely GLRAM+SVD, which applies SVD after the intermediate dimensionality reduction stage using GLRAM. The essence of this composite algorithm is a further dimensionality reduction stage by SVD following GLRAM. Since SVD is applied to a low-dimensional space transformed by GLRAM, the second stage by SVD can be implemented efficiently. We apply this algorithm to image datasets and compare it with GLRAM and SVD. Results show that when using the same number of reduced dimensions, GLRAM+SVD achieves a significant reduction of the reconstruction error as compared to GLRAM, while keeping the computation cost small. The reconstruction error of GLRAM+SVD is close to that of SVD, especially when the intermediate dimension in the GLRAM stage is large, while it has a smaller computation cost than SVD. In summary, GLRAM can be applied as a pre-processing step for SVD. The pre-processing by GLRAM reduces significantly the cost of the SVD computation, while keeping the reconstruction error small (see Section 5.6).

1.2. Organization of the paper

The rest of this paper is organized as follows. We give a brief overview of low rank approximations of matrices in Section 2. The problem of generalized low rank approximations of matrices is studied in Section 3. Some related work is presented in Section 4. A performance study is provided in Section 5. Conclusions and directions for future work can be found in Section 6.

A preliminary version of this paper appeared in the Proceedings of the Twenty-First International Conference on Machine Learning, Alberta, Canada, 2004. This submission is substantially extended and contains: (1) additional datasets in Section 5.1, such as RAND and USPS; (2) new experiments in Section 5; and (3) inclusion of GLRAM+SVD in Section 5.6.

The major notations used throughout the rest of this paper are listed in Table 1.

2. Low rank approximations of matrices

Traditional methods in information retrieval and machine learning deal with data in vectorized representation. A collection of data is then stored in a single matrix A ∈ R^{N×n}, where each column of A corresponds to a vector in the N-dimensional space. A major benefit of this vector space model is that the algebraic structure of the vector space can be exploited (Berry, Dumais, & O'Brien, 1995). For high-dimensional data, one would like to simplify the data, so that traditional machine learning and statistical techniques can be applied. However, crucial information intrinsic

Table 1. Notations.

Notation   Description
A_i        The i-th data point in matrix form
r          Number of rows in A_i
c          Number of columns in A_i
L          Transformation on the left side
R          Transformation on the right side
M_i        Reduced representation of A_i
l_1        Number of rows in M_i
l_2        Number of columns in M_i
d          Common value for l_1 and l_2
k          Number of reduced dimensions by SVD
A          Data matrix of size N by n
n          Number of training data points
N          Dimension of training data (N = rc)

in the data should not be removed under this simplification. A widely used method for this purpose is to approximate the single data matrix, A, with a matrix of lower rank. Mathematically, the optimal rank-k approximation of a matrix A under the Frobenius norm can be formulated as follows: find a matrix B ∈ R^{N×n} with rank(B) = k, such that

B = arg min_{rank(B)=k} ||A − B||_F,

where the Frobenius norm ||M||_F of a matrix M = (M_ij) is given by ||M||_F = sqrt(Σ_{i,j} M_ij^2). The matrix B can be readily obtained by computing the Singular Value Decomposition (SVD) of A, as stated in the following theorem (Golub & Van Loan, 1996).

Theorem 2.1. Let the Singular Value Decomposition of A ∈ R^{N×n} be A = U D V^T, where U and V are orthogonal, D = diag(σ_1, ..., σ_r, 0, ..., 0), σ_1 ≥ ... ≥ σ_r > 0 and r = rank(A). Then for 1 ≤ k ≤ r,

Σ_{i=k+1}^{r} σ_i^2 = min{ ||A − B||_F^2 : rank(B) = k }.

The minimum is achieved with B = best_k(A), where best_k(A) = U_k diag(σ_1, ..., σ_k) V_k^T, and U_k and V_k are the matrices formed by the first k columns of U and V respectively.

For any approximation M of A, we call ||A − M||_F the reconstruction error of the approximation. By Theorem 2.1, B = U_k diag(σ_1, ..., σ_k) V_k^T has the smallest reconstruction error among all the rank-k approximations of A.
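To make Theorem 2.1 concrete, the following minimal NumPy sketch (ours, not from the paper) computes best_k(A), checks that its reconstruction error equals the square root of the sum of the discarded squared singular values, and evaluates the compression ratio nN/((n + N)k) discussed below:

    import numpy as np

    def svd_rank_k(A, k):
        """Best rank-k approximation of A under the Frobenius norm (Theorem 2.1)."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        U_k, s_k, Vt_k = U[:, :k], s[:k], Vt[:k, :]
        B = U_k @ np.diag(s_k) @ Vt_k          # best_k(A)
        A_reduced = np.diag(s_k) @ Vt_k        # k x n matrix of reduced representations a_i^L
        return B, U_k, A_reduced

    # Small synthetic check: the reconstruction error equals the square root of
    # the sum of the discarded squared singular values, as stated in Theorem 2.1.
    rng = np.random.default_rng(0)
    N, n, k = 100, 40, 10
    A = rng.standard_normal((N, n))
    B, U_k, A_red = svd_rank_k(A, k)
    err = np.linalg.norm(A - B, "fro")
    s = np.linalg.svd(A, compute_uv=False)
    assert np.isclose(err, np.sqrt(np.sum(s[k:] ** 2)))

    # Compression ratio of the rank-k representation: nN / ((n + N) k).
    ratio = n * N / ((n + N) * k)
    print(err, ratio)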

Under this approximation, each column a_i ∈ R^N of A can be approximated as a_i ≈ U_k a_i^L, for some a_i^L ∈ R^k. Since U_k has orthonormal columns, ||U_k a_i^L − U_k a_j^L|| = ||a_i^L − a_j^L||, i.e., the Euclidean distance between two vectors is preserved under the orthogonal projection. It follows that ||a_i − a_j|| ≈ ||U_k a_i^L − U_k a_j^L|| = ||a_i^L − a_j^L||. Hence the proximity of a_i and a_j in the original high-dimensional space can be approximated by computing the proximity of their reduced representations a_i^L and a_j^L. The speed-up on a single distance computation using the reduced representations is N/k. This forms the basis for Latent Semantic Indexing (Berry, Dumais, & O'Brien, 1995; Deerwester et al., 1990), used widely in information retrieval.

Another potential application of the above rank-k approximation is data compression. Since each a_i is approximated by U_k a_i^L, where U_k is common for every a_i, we need only keep U_k and {a_i^L}_{i=1}^n for all the approximations. Since U_k ∈ R^{N×k} and a_i^L ∈ R^k, for i = 1, ..., n, it requires nk + Nk = (n + N)k scalars to store the reduced representations. The storage saved, or compression ratio, using the rank-k approximation is thus nN/((n + N)k), since the original data matrix A is of size N by n.

3. Generalized low rank approximations of matrices

In this section, we study the problem of generalized low rank approximations of matrices, which aims to approximate a collection of matrices with matrices of lower rank. A key difference between this generalized problem and the low rank approximation problem discussed in the last section is the data representation model applied. Recall that the vector space model is applied for the traditional low rank approximations. The vector space model leads to a simple and closed form solution for low rank approximations by computing the SVD of the data matrix. However, the SVD computation restricts its applicability to matrices of small size. Instead, we apply a different data representation model, under which each datum is represented as a matrix and the collection of data is represented as a collection of matrices.

3.1. Problem formulation

Let A_i ∈ R^{r×c}, for i = 1, ..., n, be the n data points in the training set, where r and c denote the number of rows and columns respectively for each A_i. We aim to compute two matrices L ∈ R^{r×l_1} and R ∈ R^{c×l_2} with orthonormal columns, and n matrices M_i ∈ R^{l_1×l_2}, for i = 1, ..., n, such that L M_i R^T approximates A_i, for all i. Here, l_1 and l_2 are two pre-specified parameters that are best set to the same value, based on the experimental results in Section 5. Mathematically, we can formulate this as the following minimization problem: compute optimal L, R and {M_i}_{i=1}^n, which solve

min Σ_{i=1}^{n} ||A_i − L M_i R^T||_F^2,  subject to L ∈ R^{r×l_1}, L^T L = I_{l_1}; R ∈ R^{c×l_2}, R^T R = I_{l_2}; M_i ∈ R^{l_1×l_2}, i = 1, ..., n.    (1)

The matrices L and R in the above approximations act as the two-sided linear transformations on the data in matrix form. Recall that in the case of traditional low rank approximations, a one-sided transformation is applied, which is U_k in our previous discussions. Note that the M_i's are not required to be diagonal.

The generalized low rank approximations above naturally lead to two basic applications.

Data compression: The matrices L, R, and {M_i}_{i=1}^n can be used to recover the original n matrices {A_i}_{i=1}^n, assuming L M_i R^T approximates A_i, for each i. It requires r·l_1 + c·l_2 + n·l_1·l_2 scalars to store L, R, and {M_i}_{i=1}^n. Hence, the storage saved, or the compression ratio using the approximations, is nrc/(r l_1 + c l_2 + n l_1 l_2).

Distance computation: A common similarity metric on matrices is the Frobenius norm. The distance between A_i and A_j is ||A_i − A_j||_F. Using the approximations, we have ||A_i − A_j||_F ≈ ||L M_i R^T − L M_j R^T||_F = ||M_i − M_j||_F, since both L and R have orthonormal columns. The cost of computing ||A_i − A_j||_F (resp. ||M_i − M_j||_F) is O(rc) (resp. O(l_1 l_2)). Hence, the speed-up on a single distance computation using the approximations is rc/(l_1 l_2).

Note that as l_1 and l_2 decrease, the speed-up on the distance computation and the compression ratio increase. However, small values of l_1 and l_2 may lead to loss of information intrinsic in the original data. We discuss this trade-off in Section 5. The formulation in Eq. (1) is general, in the sense that l_1 and l_2 can be different, i.e., M_i can have an arbitrary shape. We will study the effect of the shape of M_i on the performance of the approximations in Section 5.2.

3.2. The main algorithm

In this section, we show how to solve the minimization problem in Eq. (1). The following theorem shows that the M_i's are determined by the transformation matrices L and R, which significantly simplifies the minimization problem in Eq. (1).

Theorem 3.1. Let L, R and {M_i}_{i=1}^n be the optimal solution to the minimization problem in Eq. (1). Then M_i = L^T A_i R, for every i.

Proof: By the property of the trace of matrices,

Σ_{i=1}^{n} ||A_i − L M_i R^T||_F^2 = Σ_{i=1}^{n} trace((A_i − L M_i R^T)(A_i − L M_i R^T)^T)
= Σ_{i=1}^{n} trace(A_i A_i^T) + Σ_{i=1}^{n} trace(M_i M_i^T) − 2 Σ_{i=1}^{n} trace(L M_i R^T A_i^T),    (2)

where the second term Σ_{i=1}^{n} trace(M_i M_i^T) results from the fact that both L and R have orthonormal columns, and trace(AB) = trace(BA), for any two matrices.

Since the first term on the right hand side of Eq. (2) is a constant, the minimization in Eq. (1) is equivalent to minimizing

Σ_{i=1}^{n} trace(M_i M_i^T) − 2 Σ_{i=1}^{n} trace(L M_i R^T A_i^T).    (3)

It is easy to check that the minimum of (3) is achieved only if M_i = L^T A_i R, for every i. This completes the proof of the theorem.

Theorem 3.1 implies that M_i is uniquely determined by L and R with M_i = L^T A_i R, for all i. Hence the key step for the minimization in Eq. (1) is the computation of the common transformations L and R. A key property of the optimal transformations L and R is stated in the following theorem:

Theorem 3.2. Let L, R and {M_i}_{i=1}^n be the optimal solution to the minimization problem in Eq. (1). Then L and R solve the following optimization problem:

max Σ_{i=1}^{n} ||L^T A_i R||_F^2,  subject to L ∈ R^{r×l_1}, L^T L = I_{l_1}; R ∈ R^{c×l_2}, R^T R = I_{l_2}.    (4)

Proof: From Theorem 3.1, M_i = L^T A_i R, for every i. Substituting this into Σ_{i=1}^{n} ||A_i − L M_i R^T||_F^2, we obtain

Σ_{i=1}^{n} ||A_i − L M_i R^T||_F^2 = Σ_{i=1}^{n} ||A_i||_F^2 − Σ_{i=1}^{n} ||L^T A_i R||_F^2.    (5)

Hence the minimization in Eq. (1) is equivalent to the maximization of Σ_{i=1}^{n} ||L^T A_i R||_F^2, which completes the proof of the theorem.
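The identity in Eq. (5), and with it Theorem 3.1, is easy to check numerically. The small script below is an illustrative check (not part of the original paper): it draws random orthonormal L and R via QR factorization, sets M_i = L^T A_i R, and verifies that ||A_i − L M_i R^T||_F^2 = ||A_i||_F^2 − ||L^T A_i R||_F^2.

    import numpy as np

    rng = np.random.default_rng(1)
    r, c, l1, l2, n = 30, 25, 5, 4, 8

    # Random orthonormal L (r x l1) and R (c x l2) via QR factorization.
    L, _ = np.linalg.qr(rng.standard_normal((r, l1)))
    R, _ = np.linalg.qr(rng.standard_normal((c, l2)))

    for _ in range(n):
        A = rng.standard_normal((r, c))
        M = L.T @ A @ R                              # optimal M_i given L and R (Theorem 3.1)
        lhs = np.linalg.norm(A - L @ M @ R.T, "fro") ** 2
        rhs = np.linalg.norm(A, "fro") ** 2 - np.linalg.norm(M, "fro") ** 2
        assert np.isclose(lhs, rhs)                  # Eq. (5)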

To the best of our knowledge, there is no closed form solution for the maximization problem in Eq. (4). A key observation, which leads to an iterative algorithm for the computation of L and R, is stated in the following theorem:

Theorem 3.3. Let L, R and {M_i}_{i=1}^n be the optimal solution to the minimization problem in Eq. (1). Then:

(1) For a given R, L consists of the l_1 eigenvectors of the matrix M_L = Σ_{i=1}^{n} A_i R R^T A_i^T corresponding to the largest l_1 eigenvalues.

(2) For a given L, R consists of the l_2 eigenvectors of the matrix M_R = Σ_{i=1}^{n} A_i^T L L^T A_i corresponding to the largest l_2 eigenvalues.

Proof: By Theorem 3.2, L and R maximize Σ_{i=1}^{n} ||L^T A_i R||_F^2, which can be rewritten as

Σ_{i=1}^{n} trace(L^T A_i R R^T A_i^T L) = trace(L^T (Σ_{i=1}^{n} A_i R R^T A_i^T) L) = trace(L^T M_L L),    (6)

where M_L = Σ_{i=1}^{n} A_i R R^T A_i^T. Hence, for a given R, the maximum of Σ_{i=1}^{n} ||L^T A_i R||_F^2 = trace(L^T M_L L) is achieved only if L ∈ R^{r×l_1} consists of the l_1 eigenvectors of the matrix M_L corresponding to the largest l_1 eigenvalues. The maximization of trace(L^T M_L L) can be considered as a special case of the more general optimization problem in Edelman, Arias and Smith (1998).

Similarly, by the property of the trace of matrices, Σ_{i=1}^{n} ||L^T A_i R||_F^2 can also be rewritten as

Σ_{i=1}^{n} trace(R^T A_i^T L L^T A_i R) = trace(R^T (Σ_{i=1}^{n} A_i^T L L^T A_i) R) = trace(R^T M_R R),    (7)

where M_R = Σ_{i=1}^{n} A_i^T L L^T A_i.

Hence, for a given L, the maximum of Σ_{i=1}^{n} ||L^T A_i R||_F^2 = trace(R^T M_R R) is achieved only if R ∈ R^{c×l_2} consists of the l_2 eigenvectors of the matrix M_R corresponding to the largest l_2 eigenvalues. This completes the proof of the theorem.

Theorem 3.3 results in an iterative procedure for computing L and R as follows: for a given L, we can compute R by computing the eigenvectors of the matrix M_R; with the computed R, we can then update L by computing the eigenvectors of the matrix M_L. The procedure can be repeated until convergence. This iterative procedure constitutes the GLRAM algorithm.

Theoretically, the solution computed by GLRAM is only locally optimal. The solution depends on the choice of the initial L_0 for L. We did extensive experiments (see Section 5.3) using different choices of the initial L_0 and found out that, for image datasets, GLRAM always converges to the same solution, regardless of the choice of the initial L_0.

Theorem 3.3 implies that the successive updates of R and L in GLRAM do not decrease the value of Σ_{i=1}^{n} ||L^T A_i R||_F^2, since the computed R and L are locally optimal.

Hence by Theorem 3.2, the value of Σ_{i=1}^{n} ||A_i − L M_i R^T||_F^2, or

RMSRE = sqrt( (1/n) Σ_{i=1}^{n} ||A_i − L M_i R^T||_F^2 ),    (8)

does not increase. Here RMSRE stands for the Root Mean Square Reconstruction Error. The convergence of GLRAM follows, since the RMSRE is bounded from below by 0, as stated in the following theorem:

Theorem 3.4. The GLRAM algorithm monotonically non-increases the RMSRE value as defined in Eq. (8); hence it converges in the limit.

Thus we use the relative reduction of the RMSRE value to check convergence. Specifically, let RMSRE(i) be the RMSRE value at the i-th iteration of the algorithm; then the convergence of the algorithm is determined by checking whether the following inequality holds:

(RMSRE(i − 1) − RMSRE(i)) / RMSRE(i − 1) < η,

for some small threshold η > 0. In our experiments, we choose η = 10^{-6}. Results in Section 5 show that the algorithm converges within two to three iterations.

Note that the transformation matrices L and R in GLRAM may not converge, even when the RMSRE value converges. To see why this is the case, consider two pairs of solutions (L, R) and (LP, RQ), for some orthogonal matrices P ∈ R^{l_1×l_1} and Q ∈ R^{l_2×l_2}. Since

RMSRE = sqrt( (1/n) Σ_{i=1}^{n} ||A_i − L M_i R^T||_F^2 ) = sqrt( (1/n) Σ_{i=1}^{n} ||A_i − L L^T A_i R R^T||_F^2 ),

it is easy to verify that both (L, R) and (LP, RQ) result in the same RMSRE value. Thus, the solution computed by GLRAM is invariant under arbitrary orthogonal transformations. Two transformations L and L̂ can be compared by computing the largest principal angle (Bjork & Golub, 1973; Golub & Van Loan, 1996) between the column spaces of L and L̂. If the angle is zero, L is essentially equivalent to L̂ up to an orthogonal transformation.

3.3. Time and space complexities

The most expensive steps in GLRAM are the formation of the matrices M_R and M_L, and the formation of the M_j. It takes O(l_1 c (r + c) n) time to compute M_R and O(l_2 r (r + c) n) time to compute M_L. The computation time of M_j = L^T (A_j R), using the given order, is O(r c l_2 + r l_2 l_1) = O(r l_2 (c + l_1)). Assume the number of iterations of the main loop is I. The total time complexity of GLRAM is O(I (r + c)^2 max(l_1, l_2) n).

It is easy to verify that the space complexity of GLRAM is O(rc) = O(N). The key to the low space complexity is that the formation of the matrices M_R and M_L can proceed by reading the matrices {A_i}_{i=1}^n incrementally. Note that GLRAM involves eigenvalue problems of size r^2 or c^2, as compared to size rcn (= Nn) in SVD. This is the key reason why GLRAM has much lower costs in time and space than SVD.
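The iteration implied by Theorem 3.3 can be sketched as follows. This is a minimal NumPy rendering written for illustration (it is not the authors' MATLAB implementation, and the function and parameter names are ours); it alternates the two eigenvector updates and stops when the relative reduction of the RMSRE falls below the threshold η.

    import numpy as np

    def glram(As, l1, l2, eta=1e-6, max_iter=100):
        """Sketch of the GLRAM iteration implied by Theorem 3.3.

        As      : list of n data matrices A_i, each of shape (r, c)
        l1, l2  : numbers of rows/columns of the reduced representations M_i
        eta     : threshold on the relative reduction of the RMSRE
        Returns L (r x l1), R (c x l2) and the list of M_i = L^T A_i R.
        """
        n = len(As)
        r, c = As[0].shape
        L = np.eye(r, l1)                      # initial L0 = (I, 0)^T, as in Section 5.3
        prev_rmsre = np.inf
        for _ in range(max_iter):
            # For the current L, R is formed by the top-l2 eigenvectors of M_R.
            MR = sum(A.T @ L @ L.T @ A for A in As)
            _, vecs = np.linalg.eigh(MR)       # eigenvalues in ascending order
            R = vecs[:, -l2:]
            # For the current R, L is formed by the top-l1 eigenvectors of M_L.
            ML = sum(A @ R @ R.T @ A.T for A in As)
            _, vecs = np.linalg.eigh(ML)
            L = vecs[:, -l1:]
            Ms = [L.T @ A @ R for A in As]
            rmsre = np.sqrt(sum(np.linalg.norm(A - L @ M @ R.T, "fro") ** 2
                                for A, M in zip(As, Ms)) / n)
            if prev_rmsre - rmsre < eta * prev_rmsre:   # relative reduction below eta
                break
            prev_rmsre = rmsre
        return L, R, Ms

The returned M_i = L^T A_i R then serve as the reduced representations used for the distance computations and classification experiments below.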

4. Related work

The wavelet transform is a commonly used scheme for image compression (Averbuch, Lazar, & Israeli, 1996). Similar to the algorithm in this paper, wavelets can be applied to images in matrix representation. A subtle but important difference between wavelet compression and GLRAM compression is that the former mainly aims to compress and reconstruct a single image with small cost of basis representations, which is extremely important for image transmission in computer networks, whereas GLRAM compression aims to compress a set of images by making use of the correlation information between images.

A collection of images can also be considered as a 3rd-order tensor, or three-dimensional array. Decomposition of higher-order tensors has been studied in Kolda (2001), Shashua and Levin (2001), Vasilescu and Terzopoulos (2002), and Zhang and Golub (2001). Our approach differs in that we keep explicit the 2D nature of images.

The work that is most closely related to the current one is the two-dimensional Principal Component Analysis (2DPCA) algorithm recently proposed in Yang et al. (2004). Like GLRAM, 2DPCA works with data in matrix form. The key difference is that 2DPCA applies a linear transformation on the right side of the data, while GLRAM applies a two-sided linear transformation. 2DPCA can be formulated as a trace optimization problem, from which a closed form solution is obtained. However, a disadvantage of 2DPCA, as also mentioned in Yang et al. (2004), is that the number of reduced dimensions of 2DPCA can be quite large. More details are given below.

2DPCA computes a linear transformation X ∈ R^{c×l} with l < c, such that each image A_i ∈ R^{r×c} is transformed (projected) to Y_i = A_i X ∈ R^{r×l}. The variance of the n projections {Y_i}_{i=1}^n can be computed as

(1/(n − 1)) Σ_{i=1}^{n} ||Y_i − Ȳ||_F^2 = trace( X^T ( (1/(n − 1)) Σ_{i=1}^{n} (A_i − Ā)^T (A_i − Ā) ) X ),

where Ȳ = (1/n) Σ_{i=1}^{n} Y_i = Ā X is the mean and Ā = (1/n) Σ_{i=1}^{n} A_i. The optimal transformation X in 2DPCA is computed such that the variance of the n data points in the transformed space is maximized. Specifically, the optimal transformation X can be computed by solving the following maximization problem:

X = arg max_{X^T X = I_l} trace( X^T ( (1/(n − 1)) Σ_{i=1}^{n} (A_i − Ā)^T (A_i − Ā) ) X ).    (9)

The optimal X can be obtained by computing the l eigenvectors of the matrix (1/(n − 1)) Σ_{i=1}^{n} (A_i − Ā)^T (A_i − Ā) corresponding to the largest l eigenvalues. It requires cl + nrl scalars to store X ∈ R^{c×l} and {Y_i}_{i=1}^n ⊂ R^{r×l}. Hence, the compression ratio of 2DPCA is nrc/(cl + nrl) ≈ c/l.
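For comparison, here is a corresponding sketch of the 2DPCA transformation in Eq. (9) (again a NumPy illustration of the description above, not code from Yang et al., 2004):

    import numpy as np

    def two_dpca(As, l):
        """2DPCA: right-side transformation X (c x l) maximizing the projected variance, Eq. (9)."""
        n = len(As)
        Abar = sum(As) / n
        G = sum((A - Abar).T @ (A - Abar) for A in As) / (n - 1)   # c x c image covariance
        _, vecs = np.linalg.eigh(G)
        X = vecs[:, -l:]                       # eigenvectors of the largest l eigenvalues
        Ys = [A @ X for A in As]               # projected images Y_i = A_i X, each r x l
        return X, Ys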

Table 2 lists the time and space complexities of SVD, 2DPCA, and GLRAM. It is clear that GLRAM and 2DPCA have much smaller costs in time and space than SVD.

Table 2. Comparison of SVD, 2DPCA, and GLRAM: n is the number of data points in the training dataset and N = r × c is the dimension of the data.

Method   Time                               Space
SVD      O(nN min(n, N))                    O(nN)
2DPCA    O(n c^2 r)                         O(N)
GLRAM    O(I (r + c)^2 max(l_1, l_2) n)     O(N)

5. Experimental evaluations

In this section, we experimentally evaluate the GLRAM algorithm. All of our experiments are performed on a Linux machine with 1GB memory. A MATLAB version of the GLRAM algorithm is available from the author's homepage.

We present in Section 5.1 the one synthetic dataset and five real-world image datasets used for our evaluation. The effect of the ratio of l_1 to l_2 on the reconstruction error is discussed in Section 5.2. Results show that, for the datasets considered in the paper, choosing l_1/l_2 ≈ 1 achieves good performance. We thus set both l_1 and l_2 equal to a common value d in the following experiments. The sensitivity of GLRAM to the choice of the initial L_0 for L is studied in Section 5.3. In Sections 5.4 and 5.5, a detailed comparative study between the proposed GLRAM algorithm and SVD is provided, where the comparison is made on the reconstruction error (measured by RMSRE), classification, and quality of compressed images. The results with 2DPCA (Yang et al., 2004) are also included. The effectiveness of SVD critically depends on the reduced dimension k. For all the experiments, k is chosen so that both GLRAM and SVD have the same number of reduced dimensions. Finally, we study the GLRAM+SVD algorithm in Section 5.6.

For all the experiments, we use the K-Nearest-Neighbors (K-NN) method with K = 1, based on the Euclidean distance, for classification (Duda, Hart, & Stork, 2000; Fukunaga, 1990). We use 10-fold cross-validation for estimating the misclassification error rate. In 10-fold cross-validation, we divide the data into ten subsets of approximately equal size. Then we do the training and testing ten times, each time leaving one of the subsets out of training and using only the omitted subset for testing. The misclassification error rate reported is the average over the ten runs.
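A minimal sketch of this evaluation protocol, assuming the reduced representations (e.g., the M_i from GLRAM) have already been computed; the function name and the fold handling are ours, not the authors' code:

    import numpy as np

    def knn1_error(reduced, labels, n_folds=10, seed=0):
        """Misclassification rate of 1-NN (Frobenius/Euclidean distance) under 10-fold CV."""
        n = len(reduced)
        X = np.stack([M.ravel() for M in reduced])     # flatten M_i; Frobenius = Euclidean on vectors
        y = np.asarray(labels)
        idx = np.random.default_rng(seed).permutation(n)
        folds = np.array_split(idx, n_folds)
        errors = []
        for f in folds:
            test = np.zeros(n, dtype=bool)
            test[f] = True                             # one fold for testing, the rest for training
            D = np.linalg.norm(X[test][:, None, :] - X[~test][None, :, :], axis=2)
            pred = y[~test][np.argmin(D, axis=1)]      # label of the nearest training point
            errors.append(np.mean(pred != y[test]))
        return float(np.mean(errors))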

5.1. Datasets

We use the following six datasets (one synthetic dataset and five real-world image datasets) in our experiments:

RAND is a synthetic dataset, consisting of 500 data points in matrix form. All the entries are randomly generated between 0 and 255 (the same range as the four face image datasets).

PIX contains 300 face images of 30 persons. We subsample the original PIX images down to a lower resolution.

ORL is a well-known dataset for face recognition (Samaria & Harter, 1994). It contains the face images of 40 persons, for a total of 400 images. The image size is 92 × 112. The face images are perfectly centred. The major challenge posed by this dataset is the variation of the face pose. We use the whole image as an instance (i.e., the dimension of an instance is 92 × 112 = 10304).

AR is a large face image dataset (Martinez & Benavente, 1998). The instance of each face may contain large areas of occlusion, due to the presence of sunglasses and scarves. The existence of occlusion dramatically increases the within-class variances of the AR face image data. We use a subset of AR. This subset contains 1638 face images of 126 persons. We first crop each image from row 100 to 500 and column 200 to 550, and then subsample the cropped images down to a lower resolution.

PIE is a subset of the CMU PIE face image dataset (Sim et al., 2004). PIE contains 6615 face instances of 63 persons. More specifically, each person has 21 × 5 = 105 instances taken under 21 different lighting conditions and 5 different poses. We pre-process each image using a similar technique as above. The final dimension of each instance is 768.

USPS is an image dataset consisting of 9298 handwritten digits of 0 through 9. We use a subset of USPS. This subset contains 300 images for each digit, for a total of 3000 images. The image size is 16 × 16 = 256.

The statistics of all datasets are summarized in Table 3.

Table 3. Statistics of our test datasets.

Dataset   Size (n)   Dimension (r × c)     Number of classes
RAND      500        —                     —
PIX       300        —                     30
ORL       400        92 × 112 = 10304      40
AR        1638       —                     126
PIE       6615       768                   63
USPS      3000       16 × 16 = 256         10

5.2. Effect of the ratio of l_1 to l_2 on reconstruction error

In this experiment, we study the effect of the ratio of l_1 to l_2 on the reconstruction error, where l_1 and l_2 are the row and column dimensions of the reduced representation M_i in GLRAM. To this end, we run GLRAM with different combinations of l_1 and l_2 with a constant product l_1 l_2 = 400. The results on PIX, ORL, and AR are shown in Table 4. It is clear from the table that the RMSRE value is small when l_1/l_2 ≈ 1, and the minimum is achieved when l_1/l_2 = 1 in all cases. To examine whether this is related to the fact that for images the number of rows (r) and the number of columns (c) are comparable, we subsample the images in PIX down to a size of 50 × 100. The result on this dataset is included in Table 4. Interestingly, we observe the same trend in this dataset. That is, the RMSRE value is small when l_1/l_2 ≈ 1.

We have conducted similar experiments on other datasets and observed the same trend. This may be related to the effect of balancing between the left and right transformations involved in GLRAM. Finally, we examine the effect of the ratio using the synthetic dataset. The result on RAND is included in the last column of Table 4. We observe the same trend as on the other datasets. That is, the RMSRE value is small when l_1/l_2 ≈ 1.

The above experiments on both the synthetic and real-world datasets suggest that choosing l_1/l_2 ≈ 1 may be a good strategy in practice. In all the following experiments, we set both l_1 and l_2 equal to a common value d.

Table 4. Effect of the ratio of l_1 to l_2 on reconstruction error, for combinations of l_1 and l_2 with l_1 l_2 = 400, on PIX, ORL, AR, PIX (50 × 100), and RAND. The row with l_1 = l_2 (shown in bold in the original) has the minimum RMSRE.

5.3. Sensitivity of GLRAM to the choice of the initial L_0

In this experiment, we examine the sensitivity of GLRAM to the choice of the initial L_0 for L (the initialization step of the GLRAM algorithm). To this end, we run GLRAM with 10 different initial L_0's. The first one is L_0 = (I, 0)^T, while the next nine are randomly generated.

First, we study the sensitivity of GLRAM using the image datasets. The result on ORL is shown in figure 1 (left), where the horizontal axis is the number of iterations and the vertical axis is the RMSRE value (on a log scale); d is set to 10. We can observe from the figure that GLRAM converges rapidly for all ten initial choices of L. It converges within two to three iterations with the specified threshold (η = 10^{-6}). For all ten different initial L_0's, GLRAM converges to the same RMSRE value. To check whether GLRAM converges to the same solution, we compare the resulting left transformations L from the ten different runs. Two transformations can be compared by computing the largest principal angle between the column spaces of these two transformations, as discussed in Section 3. The angles between the left transformation resulting from the first run and the ones from the other nine runs are computed (results omitted).

For all cases, the angles are negligibly small. This implies that GLRAM essentially converges to the same solution (subject to an orthogonal transformation) for the ten different runs. We observe the same trend in the other four image datasets (PIX, AR, PIE, and USPS), as well as for different values of d; the results are omitted.

Next, we examine the sensitivity of GLRAM using RAND, the synthetic dataset. The result is shown in figure 1 (right). It is clear from the figure that GLRAM converges much more slowly on RAND than on the image datasets. We run GLRAM with the threshold η = 10^{-6}, and it does not converge until 78 iterations. Furthermore, GLRAM does not converge to the same solution (measured by the angle between two subspaces). Further experiments also show that the final RMSRE value may be different for different initial L_0's, even though the difference always seems small. This is likely due to the fact that there are some similarities among the images in the same image datasets, while the data in RAND is randomly generated.

The experiments above imply that for datasets with some hidden structure, such as faces and handwritten digits, GLRAM may converge to the global solution, regardless of the choice of the initial L_0. However, this is not true in general, as shown on the RAND dataset.

Figure 1. Sensitivity of GLRAM to the choice of the initial L_0 on ORL (left) and RAND (right). The ten curves correspond to the ten runs with different initial L_0's. The horizontal axis is the number of iterations and the vertical axis is the RMSRE value (on a log scale).

5.4. Comparison of reconstruction error and classification

In this experiment, we evaluate the effectiveness of the proposed GLRAM algorithm in terms of the reconstruction error, measured by RMSRE, and classification, measured by the misclassification error rate, and compare it with 2DPCA and SVD. For SVD, the reduced dimension (k) is chosen so that both GLRAM and SVD have the same number of reduced dimensions, that is, k = d^2, where d is the common value for both l_1 and l_2. Figures 2–6 show the results on the five image datasets: PIX, ORL, AR, PIE, and USPS respectively. The horizontal axis denotes the value of d, and the vertical axis denotes the RMSRE value (left graph) and the misclassification rate (right graph). Figure 7 shows the compression ratios of all algorithms on AR (left graph) and PIE (right graph), two representatives of all image datasets in Table 3. The main observations include:

- As d increases, the reconstruction error of GLRAM decreases monotonically in all cases, while the misclassification rate decreases monotonically in most cases. The same trend can be observed for the other algorithms. Thus choosing a large d in general improves the performance of GLRAM in reconstruction and classification. However, the computation cost of GLRAM also increases as d increases, as shown in Table 2 (note that d = l_1 = l_2). There is a trade-off between performance and computation cost when choosing the best d in GLRAM.
- SVD has the smallest RMSRE value in all cases, while 2DPCA has the largest RMSRE value in most cases. The large reconstruction error of 2DPCA is due to its poor compression performance when using only a one-sided transformation, as compared to the two-sided transformation in GLRAM.
- For datasets with a relatively large number of dimensions compared to the number of data points, such as the AR dataset, the compression ratio of SVD is much smaller than the others, as shown in figure 7 (left graph). As the number of data points gets as large as in the PIE dataset, the compression ratio of SVD becomes close to that of GLRAM, as shown in figure 7 (right graph).
- GLRAM is competitive with SVD for classification in most cases, even though GLRAM has larger RMSRE values. This may be related to the fact that GLRAM is able to utilize the locality information (e.g. smoothness in an image) intrinsic in the image, which leads to good classification performance. We applied GLRAM to datasets without any locality property, such as text documents and gene expression data, by reshaping each vector as a matrix; GLRAM performs quite poorly in both reconstruction error and classification as compared to SVD.
- The reconstruction error and misclassification rate on AR are much higher than those on the other image datasets. This may be related to the large within-class variances on AR, due to the presence of sunglasses and scarves, as mentioned in Section 5.1.

Figure 2. Comparison of reconstruction error (left) and misclassification rate (right) on PIX.

Figure 3. Comparison of reconstruction error (left) and misclassification rate (right) on ORL.

Figure 4. Comparison of reconstruction error (left) and misclassification rate (right) on AR.

Figure 5. Comparison of reconstruction error (left) and misclassification rate (right) on PIE.

Figure 6. Comparison of reconstruction error (left) and misclassification rate (right) on USPS.

Figure 7. Comparison of compression ratio (on a log scale) on AR (left) and PIE (right). The horizontal axis denotes the value of d, and the vertical axis denotes the compression ratio (on a log scale).

5.5. Compression effectiveness

In this experiment, we examine the quality of the images compressed by the proposed GLRAM algorithm and compare it with SVD and 2DPCA. Image compression is commonly applied as a pre-processing step for storage and transmission of large image data.

There exists a trade-off between the quality of compressed images and the compression ratio, as a high compression ratio usually leads to poor quality of compressed images.

Figure 8 shows images of 10 different persons from the ORL dataset. The 10 images in the first row are the original images from the dataset. The 10 images in the second row are the ones compressed by the GLRAM algorithm with d = 10; the compression ratio is about 98. The images compressed by SVD and 2DPCA with approximately the same number of reduced dimensions as GLRAM are shown in the third and fourth rows of figure 8, respectively. It is clear that the images compressed by our proposed algorithm have slightly better visual quality than those compressed by 2DPCA, while the ones compressed by SVD have the best visual quality. However, the compression ratio of SVD (3.85) is much smaller than that of GLRAM (98.0).

Figure 9 shows images of 10 different digits from the USPS dataset; d = 5 is used in GLRAM, and the compression ratio is about 10. GLRAM and SVD perform slightly better than 2DPCA.
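The compression ratios quoted in this section follow directly from the formulas in Sections 2 and 3; for example, plugging in the ORL and USPS sizes from Table 3 (a quick arithmetic check, not part of the paper):

    def glram_ratio(n, r, c, d):
        # nrc / (r*l1 + c*l2 + n*l1*l2) with l1 = l2 = d
        return n * r * c / (r * d + c * d + n * d * d)

    def svd_ratio(n, N, k):
        # nN / ((n + N) * k)
        return n * N / ((n + N) * k)

    # ORL: n = 400, 92 x 112 images, d = 10 (so k = d^2 = 100 for SVD)
    print(round(glram_ratio(400, 112, 92, 10), 2), round(svd_ratio(400, 10304, 100), 2))   # ~98.04 and ~3.85
    # USPS: n = 3000, 16 x 16 images, d = 5 (k = 25)
    print(round(glram_ratio(3000, 16, 16, 5), 2), round(svd_ratio(3000, 256, 25), 2))      # ~10.22 and ~9.43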

Furthermore, the compression ratio of SVD (9.4) is close to that of GLRAM (10.2). The different behavior between ORL and USPS is related to the fact that USPS has a relatively large number of data points compared to its dimension, i.e., n ≫ rc.

Figure 8. First row: raw images from the ORL dataset. Second row: images compressed by GLRAM. Third row: images compressed by SVD. Fourth row: images compressed by 2DPCA. Note that the compression ratio of SVD (3.85) is much smaller than that of GLRAM (98.0).

Figure 9. First row: raw images from the USPS dataset. Second row: images compressed by GLRAM. Third row: images compressed by SVD. Fourth row: images compressed by 2DPCA.

5.6. GLRAM + SVD

In this experiment, we study the combination of GLRAM and SVD, namely GLRAM+SVD, where the dimension is further reduced by SVD.

More specifically, in the first stage, each data point A_i ∈ R^{r×c} is reduced to M_i ∈ R^{d×d} by GLRAM, with d < min(r, c). In the second stage, each M_i is first transformed to a vector v_i ∈ R^{d^2} by matrix-to-vector alignment, where a matrix is transformed to a vector by concatenating all its rows together consecutively. Then v_i is further reduced to v_i^L ∈ R^k by SVD, with k < d^2. The flowchart of the GLRAM+SVD algorithm is shown graphically in figure 10.

The complexity of the first (GLRAM) stage is O(I (r + c)^2 d n), where the number of iterations I is usually small. The second stage applies SVD to an n by d^2 matrix, hence takes O(n d^2 min(n, d^2)). Therefore, the total time complexity of GLRAM+SVD is O(n d ((r + c)^2 + min(n d, d^3))). Assuming r ≈ c ≈ sqrt(N), the time complexity simplifies to O(n d (N + min(n d, d^3))). Note that both the GLRAM and SVD stages in GLRAM+SVD have much smaller computation costs than SVD applied directly, especially when d is small. (Note that the cost of SVD on an n × N matrix is O(nN min(n, N)).)
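A compact sketch of this two-stage pipeline, reusing the glram sketch from Section 3 for the first stage; as before, this is an illustration of the description above (including the row-wise matrix-to-vector alignment), not the authors' implementation:

    import numpy as np

    def glram_plus_svd(As, d, k):
        """GLRAM+SVD: GLRAM to d x d representations, then SVD down to k < d^2 dimensions."""
        L, R, Ms = glram(As, d, d)                     # first stage (see the GLRAM sketch in Section 3)
        V = np.stack([M.ravel() for M in Ms])          # matrix-to-vector alignment: v_i in R^{d^2}
        _, _, Wt = np.linalg.svd(V, full_matrices=False)
        W_k = Wt[:k].T                                 # d^2 x k basis from the second (SVD) stage
        V_reduced = V @ W_k                            # final representations v_i^L in R^k
        return L, R, W_k, V_reduced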

Figure 10. Flowchart of the GLRAM+SVD algorithm (solid lines). It has two stages: in the first stage, each data point A_i ∈ R^{r×c} is transformed to M_i ∈ R^{d×d}; in the second stage, M_i is first transformed to a vector v_i ∈ R^{d^2} by matrix-to-vector alignment, which is further reduced to a vector v_i^L ∈ R^k by SVD. Note that d < min(r, c) and k < d^2. For traditional SVD (dashed lines), A_i ∈ R^{r×c} is first transformed to a vector a_i ∈ R^{rc}, which is then reduced to v_i^L ∈ R^k by SVD directly.

Figure 11. Comparison of reconstruction error (left) and computation time (right) on PIX.

Figure 12. Comparison of reconstruction error (left) and computation time (right) on ORL.

Figure 13. Comparison of reconstruction error (left) and computation time (right) on AR.

We apply GLRAM+SVD to the image datasets and compare it with GLRAM and SVD in terms of reconstruction error and computation time, when using the same number of reduced dimensions. For simplicity, we use k = 100 in SVD for PIX, ORL and AR, and k = 25 for PIE and USPS. Hence, the reduced dimension of SVD and GLRAM+SVD is 100 (or 25). The value of d determines the intermediate dimension of the GLRAM stage in GLRAM+SVD. We examine the effect of d on the performance of GLRAM+SVD, and the results are summarized in figures 11–15, where the horizontal axis denotes the value of d (between 15 and 40 for PIX, ORL and AR, between 6 and 16 for PIE, and between 6 and 12 for USPS) and the vertical axis denotes the reconstruction error, measured by the RMSRE value (left graph), and the computation time, measured in seconds (right graph). It is worthwhile to note that the reduced dimension of SVD is fixed in the comparison, while the reduced dimension of the intermediate (GLRAM) stage in GLRAM+SVD varies. The main observations include:

- As d increases, the RMSRE value of GLRAM+SVD decreases. By combining GLRAM and SVD, GLRAM+SVD achieves a dramatic reduction of the RMSRE value as compared to GLRAM.
- The computation time of GLRAM+SVD increases as d increases.
- There is a trade-off between the reconstruction error and the computation time when choosing the best d.

The above experiment shows that it may be beneficial to combine GLRAM with SVD, since GLRAM+SVD has a much lower reconstruction error than GLRAM, while keeping the computation cost low. The performance of GLRAM+SVD critically depends on the value of d. To choose the optimal d, one needs to consider the trade-off between the computation cost and the reconstruction error, as a larger value of d usually leads to a higher computation cost and a smaller reconstruction error.

Figure 14. Comparison of reconstruction error (left) and computation time (right) on PIE.

Figure 15. Comparison of reconstruction error (left) and computation time (right) on USPS.

6. Conclusions and future work

A novel algorithm, named GLRAM, for low rank approximations of a collection of matrices is presented. The algorithm works in an iterative and interleaved fashion and the approximation is improved during successive iterations. Experimental results show that

the algorithm converges rapidly. Detailed analysis shows that GLRAM has an asymptotically minimum space requirement and a lower time complexity than SVD, which is desirable for large and high-dimensional datasets. Specifically, GLRAM involves eigenvalue problems of size r^2 or c^2, as compared to size rcn (= Nn) in SVD. A natural application of GLRAM is in image compression and retrieval, where each image is represented in its native matrix form.

We evaluate the proposed algorithm in terms of the reconstruction error and classification, and compare it with 2DPCA and SVD. Results show that when using the same number of reduced dimensions, the proposed algorithm is competitive with 2DPCA and SVD for classification, while GLRAM results in a larger reconstruction error than SVD. In terms of compression ratio, GLRAM outperforms SVD, especially when the number of dimensions is relatively large compared to the number of data points. However, our experiments show that GLRAM can fail in both reconstruction error and classification when the data do not have the locality property (e.g. smoothness in an image), such as text documents and gene expression data. Further study is needed to show how a native data vector can be rearranged into a matrix so that related variables are spatially close.

To further reduce the reconstruction error of GLRAM, we study the GLRAM+SVD algorithm, where SVD is preceded by GLRAM. In this composite algorithm, GLRAM can be considered as a pre-processing step for SVD. Extensive experiments show that, when using the same number of reduced dimensions, GLRAM+SVD achieves a significant reduction of the reconstruction error as compared to GLRAM, while keeping the computation cost small. The reconstruction error of GLRAM+SVD is close to that of SVD, especially when the intermediate reduced dimension d in the GLRAM stage is large, while it has a smaller computation cost than SVD.

One of our future research directions is to understand why GLRAM has a larger reconstruction error than SVD, when using the same number of reduced dimensions. There are several other crucial questions that still remain to be answered:

- In Section 5.2, we study the effect of the ratio of l_1 to l_2 on the reconstruction error. Experimental results show that choosing l_1/l_2 ≈ 1 works well in practice, even when the original row (r) and column (c) dimensions are quite different. It may be related to the effect of balancing between the left and right transformations involved in GLRAM. However, a rigorous theoretical justification behind this is still not available.
- In Section 5.3, we study the convergence property of GLRAM and the sensitivity of GLRAM to the choice of the initial L_0. Extensive experiments show that for image datasets, GLRAM may converge to the global solution, regardless of the choice of the initial L_0. However, this is not true in general, as shown on the RAND dataset. The remaining question is whether there exist certain conditions on the A_i's under which GLRAM has the global convergence property.

Acknowledgment

We thank the Associate Editor and the reviewers for helpful comments that greatly improved the paper. We also thank Prof. Ravi Janardan and Dr. Chris Ding for helpful discussions.

This research is sponsored, in part, by the Army High Performance Computing Research Center under the auspices of the Department of the Army, Army Research Laboratory cooperative agreement number DAAD, the content of which does not necessarily reflect the position or the policy of the government, and no official endorsement should be inferred. Support from fellowships from Guidant Corporation and from the Department of Computer Science & Engineering at the University of Minnesota, Twin Cities, is gratefully acknowledged.

Notes

1. Here the compression ratio means the percentage of space saved by the low rank approximations to store the data. Details can be found in Sections 2 and 3.
2.–6. Download locations of the PIX, ORL, AR, PIE, and USPS datasets, respectively (surviving fragments: aleix/aleix_face_DB.html for AR; tibs/elemstatlearn/data.html for USPS).

References

Achlioptas, D., & McSherry, F. (2001). Fast computation of low rank matrix approximations. In ACM STOC Conference Proceedings.
Aggarwal, C. C. (2001). On the effects of dimensionality reduction on high dimensional similarity search. In ACM PODS Conference Proceedings.
Averbuch, A., Lazar, D., & Israeli, M. (1996). Image compression using wavelet transform and multiresolution decomposition. IEEE Transactions on Image Processing, 5:1.
Berry, M., Dumais, S., & O'Brien, G. (1995). Using linear algebra for intelligent information retrieval. SIAM Review, 37.
Bjork, A., & Golub, G. (1973). Numerical methods for computing angles between linear subspaces. Mathematics of Computation, 27:123.
Brand, M. (2002). Incremental singular value decomposition of uncertain data with missing values. In ECCV Conference Proceedings.
Castelli, V., Thomasian, A., & Li, C.-S. (2003). CSVD: Clustering and singular value decomposition for approximate similarity searches in high dimensional space. IEEE Transactions on Knowledge and Data Engineering, 15:3.
Deerwester, S., Dumais, S., Furnas, G., Landauer, T., & Harshman, R. (1990). Indexing by latent semantic analysis. Journal of the Society for Information Science, 41.
Dhillon, I., & Modha, D. (2001). Concept decompositions for large sparse text data using clustering. Machine Learning, 42.
Drineas, P., Frieze, A., Kannan, R., Vempala, S., & Vinay, V. (1999). Clustering in large graphs and matrices. In ACM SODA Conference Proceedings.
Duda, R., Hart, P., & Stork, D. (2000). Pattern classification. Wiley.
Edelman, A., Arias, T. A., & Smith, S. T. (1998). The geometry of algorithms with orthogonality constraints. SIAM Journal on Matrix Analysis and Applications, 20:2.
Frieze, A., Kannan, R., & Vempala, S. (1998). Fast Monte-Carlo algorithms for finding low-rank approximations. In FOCS Conference Proceedings.
Fukunaga, K. (1990). Introduction to statistical pattern recognition. San Diego, California, USA: Academic Press.
Golub, G. H., & Van Loan, C. F. (1996). Matrix computations, 3rd edition. Baltimore, MD, USA: The Johns Hopkins University Press.


LRLW-LSI: An Improved Latent Semantic Indexing (LSI) Text Classifier

LRLW-LSI: An Improved Latent Semantic Indexing (LSI) Text Classifier LRLW-LSI: An Improved Latent Semantic Indexing (LSI) Text Classifier Wang Ding, Songnian Yu, Shanqing Yu, Wei Wei, and Qianfeng Wang School of Computer Engineering and Science, Shanghai University, 200072

More information

Learning Polynomial Functions. by Feature Construction

Learning Polynomial Functions. by Feature Construction I Proceeings of the Eighth International Workshop on Machine Learning Chicago, Illinois, June 27-29 1991 Learning Polynomial Functions by Feature Construction Richar S. Sutton GTE Laboratories Incorporate

More information

Learning convex bodies is hard

Learning convex bodies is hard Learning convex boies is har Navin Goyal Microsoft Research Inia navingo@microsoftcom Luis Raemacher Georgia Tech lraemac@ccgatecheu Abstract We show that learning a convex boy in R, given ranom samples

More information

New Version of Davies-Bouldin Index for Clustering Validation Based on Cylindrical Distance

New Version of Davies-Bouldin Index for Clustering Validation Based on Cylindrical Distance New Version of Davies-Boulin Inex for lustering Valiation Base on ylinrical Distance Juan arlos Roas Thomas Faculta e Informática Universia omplutense e Mari Mari, España correoroas@gmail.com Abstract

More information

Kinematic Analysis of a Family of 3R Manipulators

Kinematic Analysis of a Family of 3R Manipulators Kinematic Analysis of a Family of R Manipulators Maher Baili, Philippe Wenger an Damien Chablat Institut e Recherche en Communications et Cybernétique e Nantes, UMR C.N.R.S. 6597 1, rue e la Noë, BP 92101,

More information

On the Role of Multiply Sectioned Bayesian Networks to Cooperative Multiagent Systems

On the Role of Multiply Sectioned Bayesian Networks to Cooperative Multiagent Systems On the Role of Multiply Sectione Bayesian Networks to Cooperative Multiagent Systems Y. Xiang University of Guelph, Canaa, yxiang@cis.uoguelph.ca V. Lesser University of Massachusetts at Amherst, USA,

More information

arxiv: v2 [cs.lg] 22 Jan 2019

arxiv: v2 [cs.lg] 22 Jan 2019 Spatial Variational Auto-Encoing via Matrix-Variate Normal Distributions Zhengyang Wang Hao Yuan Shuiwang Ji arxiv:1705.06821v2 [cs.lg] 22 Jan 2019 Abstract The key iea of variational auto-encoers (VAEs)

More information

Shift-map Image Registration

Shift-map Image Registration Shift-map Image Registration Linus Svärm Petter Stranmark Centre for Mathematical Sciences, Lun University {linus,petter}@maths.lth.se Abstract Shift-map image processing is a new framework base on energy

More information

MANJUSHA K.*, ANAND KUMAR M., SOMAN K. P.

MANJUSHA K.*, ANAND KUMAR M., SOMAN K. P. Journal of Engineering Science an echnology Vol. 13, No. 1 (2018) 141-157 School of Engineering, aylor s University IMPLEMENAION OF REJECION SRAEGIES INSIDE MALAYALAM CHARACER RECOGNIION SYSEM BASED ON

More information

THE BAYESIAN RECEIVER OPERATING CHARACTERISTIC CURVE AN EFFECTIVE APPROACH TO EVALUATE THE IDS PERFORMANCE

THE BAYESIAN RECEIVER OPERATING CHARACTERISTIC CURVE AN EFFECTIVE APPROACH TO EVALUATE THE IDS PERFORMANCE БСУ Международна конференция - 2 THE BAYESIAN RECEIVER OPERATING CHARACTERISTIC CURVE AN EFFECTIVE APPROACH TO EVALUATE THE IDS PERFORMANCE Evgeniya Nikolova, Veselina Jecheva Burgas Free University Abstract:

More information

Solution Representation for Job Shop Scheduling Problems in Ant Colony Optimisation

Solution Representation for Job Shop Scheduling Problems in Ant Colony Optimisation Solution Representation for Job Shop Scheuling Problems in Ant Colony Optimisation James Montgomery, Carole Faya 2, an Sana Petrovic 2 Faculty of Information & Communication Technologies, Swinburne University

More information

Feature Extraction and Rule Classification Algorithm of Digital Mammography based on Rough Set Theory

Feature Extraction and Rule Classification Algorithm of Digital Mammography based on Rough Set Theory Feature Extraction an Rule Classification Algorithm of Digital Mammography base on Rough Set Theory Aboul Ella Hassanien Jafar M. H. Ali. Kuwait University, Faculty of Aministrative Science, Quantitative

More information

Optimal Oblivious Path Selection on the Mesh

Optimal Oblivious Path Selection on the Mesh Optimal Oblivious Path Selection on the Mesh Costas Busch Malik Magon-Ismail Jing Xi Department of Computer Science Rensselaer Polytechnic Institute Troy, NY 280, USA {buschc,magon,xij2}@cs.rpi.eu Abstract

More information

Adjacency Matrix Based Full-Text Indexing Models

Adjacency Matrix Based Full-Text Indexing Models 1000-9825/2002/13(10)1933-10 2002 Journal of Software Vol.13, No.10 Ajacency Matrix Base Full-Text Inexing Moels ZHOU Shui-geng 1, HU Yun-fa 2, GUAN Ji-hong 3 1 (Department of Computer Science an Engineering,

More information

New Geometric Interpretation and Analytic Solution for Quadrilateral Reconstruction

New Geometric Interpretation and Analytic Solution for Quadrilateral Reconstruction New Geometric Interpretation an Analytic Solution for uarilateral Reconstruction Joo-Haeng Lee Convergence Technology Research Lab ETRI Daejeon, 305 777, KOREA Abstract A new geometric framework, calle

More information

Animated Surface Pasting

Animated Surface Pasting Animate Surface Pasting Clara Tsang an Stephen Mann Computing Science Department University of Waterloo 200 University Ave W. Waterloo, Ontario Canaa N2L 3G1 e-mail: clftsang@cgl.uwaterloo.ca, smann@cgl.uwaterloo.ca

More information

Improving Performance of Sparse Matrix-Vector Multiplication

Improving Performance of Sparse Matrix-Vector Multiplication Improving Performance of Sparse Matrix-Vector Multiplication Ali Pınar Michael T. Heath Department of Computer Science an Center of Simulation of Avance Rockets University of Illinois at Urbana-Champaign

More information

Indexing the Edges A simple and yet efficient approach to high-dimensional indexing

Indexing the Edges A simple and yet efficient approach to high-dimensional indexing Inexing the Eges A simple an yet efficient approach to high-imensional inexing Beng Chin Ooi Kian-Lee Tan Cui Yu Stephane Bressan Department of Computer Science National University of Singapore 3 Science

More information

Non-homogeneous Generalization in Privacy Preserving Data Publishing

Non-homogeneous Generalization in Privacy Preserving Data Publishing Non-homogeneous Generalization in Privacy Preserving Data Publishing W. K. Wong, Nios Mamoulis an Davi W. Cheung Department of Computer Science, The University of Hong Kong Pofulam Roa, Hong Kong {wwong2,nios,cheung}@cs.hu.h

More information

Tight Wavelet Frame Decomposition and Its Application in Image Processing

Tight Wavelet Frame Decomposition and Its Application in Image Processing ITB J. Sci. Vol. 40 A, No., 008, 151-165 151 Tight Wavelet Frame Decomposition an Its Application in Image Processing Mahmu Yunus 1, & Henra Gunawan 1 1 Analysis an Geometry Group, FMIPA ITB, Banung Department

More information

A Multi-class SVM Classifier Utilizing Binary Decision Tree

A Multi-class SVM Classifier Utilizing Binary Decision Tree Informatica 33 (009) 33-41 33 A Multi-class Classifier Utilizing Binary Decision Tree Gjorgji Mazarov, Dejan Gjorgjevikj an Ivan Chorbev Department of Computer Science an Engineering Faculty of Electrical

More information

Rough Set Approach for Classification of Breast Cancer Mammogram Images

Rough Set Approach for Classification of Breast Cancer Mammogram Images Rough Set Approach for Classification of Breast Cancer Mammogram Images Aboul Ella Hassanien Jafar M. H. Ali. Kuwait University, Faculty of Aministrative Science, Quantitative Methos an Information Systems

More information

Synthesis Distortion Estimation in 3D Video Using Frequency and Spatial Analysis

Synthesis Distortion Estimation in 3D Video Using Frequency and Spatial Analysis MITSUBISHI EECTRIC RESEARCH ABORATORIES http://www.merl.com Synthesis Distortion Estimation in 3D Vieo Using Frequency an Spatial Analysis Fang,.; Cheung, N-M; Tian, D.; Vetro, A.; Sun, H.; Yu,. TR2013-087

More information

Improving Spatial Reuse of IEEE Based Ad Hoc Networks

Improving Spatial Reuse of IEEE Based Ad Hoc Networks mproving Spatial Reuse of EEE 82.11 Base A Hoc Networks Fengji Ye, Su Yi an Biplab Sikar ECSE Department, Rensselaer Polytechnic nstitute Troy, NY 1218 Abstract n this paper, we evaluate an suggest methos

More information

Data Mining: Clustering

Data Mining: Clustering Bi-Clustering COMP 790-90 Seminar Spring 011 Data Mining: Clustering k t 1 K-means clustering minimizes Where ist ( x, c i t i c t ) ist ( x m j 1 ( x ij i, c c t ) tj ) Clustering by Pattern Similarity

More information

Secure Network Coding for Distributed Secret Sharing with Low Communication Cost

Secure Network Coding for Distributed Secret Sharing with Low Communication Cost Secure Network Coing for Distribute Secret Sharing with Low Communication Cost Nihar B. Shah, K. V. Rashmi an Kannan Ramchanran, Fellow, IEEE Abstract Shamir s (n,k) threshol secret sharing is an important

More information

Backpressure-based Packet-by-Packet Adaptive Routing in Communication Networks

Backpressure-based Packet-by-Packet Adaptive Routing in Communication Networks 1 Backpressure-base Packet-by-Packet Aaptive Routing in Communication Networks Eleftheria Athanasopoulou, Loc Bui, Tianxiong Ji, R. Srikant, an Alexaner Stolyar Abstract Backpressure-base aaptive routing

More information

Discriminative Filters for Depth from Defocus

Discriminative Filters for Depth from Defocus Discriminative Filters for Depth from Defocus Fahim Mannan an Michael S. Langer School of Computer Science, McGill University Montreal, Quebec HA 0E9, Canaa. {fmannan, langer}@cim.mcgill.ca Abstract Depth

More information

A shortest path algorithm in multimodal networks: a case study with time varying costs

A shortest path algorithm in multimodal networks: a case study with time varying costs A shortest path algorithm in multimoal networks: a case stuy with time varying costs Daniela Ambrosino*, Anna Sciomachen* * Department of Economics an Quantitative Methos (DIEM), University of Genoa Via

More information

Analysis of half-space range search using the k-d search skip list. Here we analyse the expected time for half-space

Analysis of half-space range search using the k-d search skip list. Here we analyse the expected time for half-space Analysis of half-space range search using the k- search skip list Mario A. Lopez Brafor G. Nickerson y 1 Abstract We analyse the average cost of half-space range reporting for the k- search skip list.

More information

Nearest Neighbor Search using Additive Binary Tree

Nearest Neighbor Search using Additive Binary Tree Nearest Neighbor Search using Aitive Binary Tree Sung-Hyuk Cha an Sargur N. Srihari Center of Excellence for Document Analysis an Recognition State University of New York at Buffalo, U. S. A. E-mail: fscha,sriharig@cear.buffalo.eu

More information

Distributed Line Graphs: A Universal Technique for Designing DHTs Based on Arbitrary Regular Graphs

Distributed Line Graphs: A Universal Technique for Designing DHTs Based on Arbitrary Regular Graphs IEEE TRANSACTIONS ON KNOWLEDE AND DATA ENINEERIN, MANUSCRIPT ID Distribute Line raphs: A Universal Technique for Designing DHTs Base on Arbitrary Regular raphs Yiming Zhang an Ling Liu, Senior Member,

More information

Here are a couple of warnings to my students who may be here to get a copy of what happened on a day that you missed.

Here are a couple of warnings to my students who may be here to get a copy of what happened on a day that you missed. Preface Here are my online notes for my Calculus I course that I teach here at Lamar University. Despite the fact that these are my class notes, they shoul be accessible to anyone wanting to learn Calculus

More information

Non-Uniform Sensor Deployment in Mobile Wireless Sensor Networks

Non-Uniform Sensor Deployment in Mobile Wireless Sensor Networks 01 01 01 01 01 00 01 01 Non-Uniform Sensor Deployment in Mobile Wireless Sensor Networks Mihaela Carei, Yinying Yang, an Jie Wu Department of Computer Science an Engineering Floria Atlantic University

More information

1/5/2014. Bedrich Benes Purdue University Dec 12 th 2013 INRIA Imagine. Modeling is an open problem in CG

1/5/2014. Bedrich Benes Purdue University Dec 12 th 2013 INRIA Imagine. Modeling is an open problem in CG Berich Benes Purue University Dec 12 th 213 INRIA Imagine Inverse Proceural Moeling (IPM) Motivation IPM Classification Case stuies IPM of volumetric builings IPM of stochastic trees Urban reparameterization

More information

An Adaptive Routing Algorithm for Communication Networks using Back Pressure Technique

An Adaptive Routing Algorithm for Communication Networks using Back Pressure Technique International OPEN ACCESS Journal Of Moern Engineering Research (IJMER) An Aaptive Routing Algorithm for Communication Networks using Back Pressure Technique Khasimpeera Mohamme 1, K. Kalpana 2 1 M. Tech

More information

Table-based division by small integer constants

Table-based division by small integer constants Table-base ivision by small integer constants Florent e Dinechin, Laurent-Stéphane Diier LIP, Université e Lyon (ENS-Lyon/CNRS/INRIA/UCBL) 46, allée Italie, 69364 Lyon Ceex 07 Florent.e.Dinechin@ens-lyon.fr

More information

A Convex Clustering-based Regularizer for Image Segmentation

A Convex Clustering-based Regularizer for Image Segmentation Vision, Moeling, an Visualization (2015) D. Bommes, T. Ritschel an T. Schultz (Es.) A Convex Clustering-base Regularizer for Image Segmentation Benjamin Hell (TU Braunschweig), Marcus Magnor (TU Braunschweig)

More information

1/5/2014. Bedrich Benes Purdue University Dec 6 th 2013 Prague. Modeling is an open problem in CG

1/5/2014. Bedrich Benes Purdue University Dec 6 th 2013 Prague. Modeling is an open problem in CG Berich Benes Purue University Dec 6 th 213 Prague Inverse Proceural Moeling (IPM) Motivation IPM Classification Case stuies IPM of volumetric builings IPM of stochastic trees Urban reparameterization IPM

More information

AnyTraffic Labeled Routing

AnyTraffic Labeled Routing AnyTraffic Labele Routing Dimitri Papaimitriou 1, Pero Peroso 2, Davie Careglio 2 1 Alcatel-Lucent Bell, Antwerp, Belgium Email: imitri.papaimitriou@alcatel-lucent.com 2 Universitat Politècnica e Catalunya,

More information

Fast Window Based Stereo Matching for 3D Scene Reconstruction

Fast Window Based Stereo Matching for 3D Scene Reconstruction The International Arab Journal of Information Technology, Vol. 0, No. 3, May 203 209 Fast Winow Base Stereo Matching for 3D Scene Reconstruction Mohamma Mozammel Chowhury an Mohamma AL-Amin Bhuiyan Department

More information

Bends, Jogs, And Wiggles for Railroad Tracks and Vehicle Guide Ways

Bends, Jogs, And Wiggles for Railroad Tracks and Vehicle Guide Ways Ben, Jogs, An Wiggles for Railroa Tracks an Vehicle Guie Ways Louis T. Klauer Jr., PhD, PE. Work Soft 833 Galer Dr. Newtown Square, PA 19073 lklauer@wsof.com Preprint, June 4, 00 Copyright 00 by Louis

More information

Unknown Radial Distortion Centers in Multiple View Geometry Problems

Unknown Radial Distortion Centers in Multiple View Geometry Problems Unknown Raial Distortion Centers in Multiple View Geometry Problems José Henrique Brito 1,2, Rolan Angst 3, Kevin Köser 3, Christopher Zach 4, Pero Branco 2, Manuel João Ferreira 2, Marc Pollefeys 3 1

More information

arxiv: v1 [math.co] 15 Dec 2017

arxiv: v1 [math.co] 15 Dec 2017 Rectilinear Crossings in Complete Balance -Partite -Uniform Hypergraphs Rahul Gangopahyay Saswata Shannigrahi arxiv:171.05539v1 [math.co] 15 Dec 017 December 18, 017 Abstract In this paper, we stuy the

More information

Computer Organization

Computer Organization Computer Organization Douglas Comer Computer Science Department Purue University 250 N. University Street West Lafayette, IN 47907-2066 http://www.cs.purue.eu/people/comer Copyright 2006. All rights reserve.

More information

Minoru SASAKI and Kenji KITA. Department of Information Science & Intelligent Systems. Faculty of Engineering, Tokushima University

Minoru SASAKI and Kenji KITA. Department of Information Science & Intelligent Systems. Faculty of Engineering, Tokushima University Information Retrieval System Using Concept Projection Based on PDDP algorithm Minoru SASAKI and Kenji KITA Department of Information Science & Intelligent Systems Faculty of Engineering, Tokushima University

More information

Dense Disparity Estimation in Ego-motion Reduced Search Space

Dense Disparity Estimation in Ego-motion Reduced Search Space Dense Disparity Estimation in Ego-motion Reuce Search Space Luka Fućek, Ivan Marković, Igor Cvišić, Ivan Petrović University of Zagreb, Faculty of Electrical Engineering an Computing, Croatia (e-mail:

More information

Coordinating Distributed Algorithms for Feature Extraction Offloading in Multi-Camera Visual Sensor Networks

Coordinating Distributed Algorithms for Feature Extraction Offloading in Multi-Camera Visual Sensor Networks Coorinating Distribute Algorithms for Feature Extraction Offloaing in Multi-Camera Visual Sensor Networks Emil Eriksson, György Dán, Viktoria Foor School of Electrical Engineering, KTH Royal Institute

More information

Detecting Overlapping Communities from Local Spectral Subspaces

Detecting Overlapping Communities from Local Spectral Subspaces Detecting Overlapping Communities from Local Spectral Subspaces Kun He, Yiwei Sun Huazhong University of Science an Technology Wuhan 430074, China Email: {brooklet60, yiweisun}@hust.eu.cn Davi Binel, John

More information

Design of Policy-Aware Differentially Private Algorithms

Design of Policy-Aware Differentially Private Algorithms Design of Policy-Aware Differentially Private Algorithms Samuel Haney Due University Durham, NC, USA shaney@cs.ue.eu Ashwin Machanavajjhala Due University Durham, NC, USA ashwin@cs.ue.eu Bolin Ding Microsoft

More information

Modifying ROC Curves to Incorporate Predicted Probabilities

Modifying ROC Curves to Incorporate Predicted Probabilities Moifying ROC Curves to Incorporate Preicte Probabilities Cèsar Ferri DSIC, Universitat Politècnica e València Peter Flach Department of Computer Science, University of Bristol José Hernánez-Orallo DSIC,

More information

Message Transport With The User Datagram Protocol

Message Transport With The User Datagram Protocol Message Transport With The User Datagram Protocol User Datagram Protocol (UDP) Use During startup For VoIP an some vieo applications Accounts for less than 10% of Internet traffic Blocke by some ISPs Computer

More information

Intensive Hypercube Communication: Prearranged Communication in Link-Bound Machines 1 2

Intensive Hypercube Communication: Prearranged Communication in Link-Bound Machines 1 2 This paper appears in J. of Parallel an Distribute Computing 10 (1990), pp. 167 181. Intensive Hypercube Communication: Prearrange Communication in Link-Boun Machines 1 2 Quentin F. Stout an Bruce Wagar

More information

MORA: a Movement-Based Routing Algorithm for Vehicle Ad Hoc Networks

MORA: a Movement-Based Routing Algorithm for Vehicle Ad Hoc Networks : a Movement-Base Routing Algorithm for Vehicle A Hoc Networks Fabrizio Granelli, Senior Member, Giulia Boato, Member, an Dzmitry Kliazovich, Stuent Member Abstract Recent interest in car-to-car communications

More information

Preamble. Singly linked lists. Collaboration policy and academic integrity. Getting help

Preamble. Singly linked lists. Collaboration policy and academic integrity. Getting help CS2110 Spring 2016 Assignment A. Linke Lists Due on the CMS by: See the CMS 1 Preamble Linke Lists This assignment begins our iscussions of structures. In this assignment, you will implement a structure

More information

0607 CAMBRIDGE INTERNATIONAL MATHEMATICS

0607 CAMBRIDGE INTERNATIONAL MATHEMATICS CAMBRIDGE INTERNATIONAL EXAMINATIONS International General Certificate of Seconary Eucation MARK SCHEME for the May/June 03 series 0607 CAMBRIDGE INTERNATIONAL MATHEMATICS 0607/4 Paper 4 (Extene), maximum

More information

A Neural Network Model Based on Graph Matching and Annealing :Application to Hand-Written Digits Recognition

A Neural Network Model Based on Graph Matching and Annealing :Application to Hand-Written Digits Recognition ITERATIOAL JOURAL OF MATHEMATICS AD COMPUTERS I SIMULATIO A eural etwork Moel Base on Graph Matching an Annealing :Application to Han-Written Digits Recognition Kyunghee Lee Abstract We present a neural

More information

UNIT 9 INTERFEROMETRY

UNIT 9 INTERFEROMETRY UNIT 9 INTERFEROMETRY Structure 9.1 Introuction Objectives 9. Interference of Light 9.3 Light Sources for 9.4 Applie to Flatness Testing 9.5 in Testing of Surface Contour an Measurement of Height 9.6 Interferometers

More information

Short-term prediction of photovoltaic power based on GWPA - BP neural network model

Short-term prediction of photovoltaic power based on GWPA - BP neural network model Short-term preiction of photovoltaic power base on GWPA - BP neural networ moel Jian Di an Shanshan Meng School of orth China Electric Power University, Baoing. China Abstract In recent years, ue to China's

More information

Backpressure-based Packet-by-Packet Adaptive Routing in Communication Networks

Backpressure-based Packet-by-Packet Adaptive Routing in Communication Networks 1 Backpressure-base Packet-by-Packet Aaptive Routing in Communication Networks Eleftheria Athanasopoulou, Loc Bui, Tianxiong Ji, R. Srikant, an Alexaner Stoylar arxiv:15.4984v1 [cs.ni] 27 May 21 Abstract

More information

0607 CAMBRIDGE INTERNATIONAL MATHEMATICS

0607 CAMBRIDGE INTERNATIONAL MATHEMATICS PAPA CAMBRIDGE CAMBRIDGE INTERNATIONAL EXAMINATIONS International General Certificate of Seconary Eucation MARK SCHEME for the May/June 0 series CAMBRIDGE INTERNATIONAL MATHEMATICS /4 4 (Extene), maximum

More information

Figure 1: 2D arm. Figure 2: 2D arm with labelled angles

Figure 1: 2D arm. Figure 2: 2D arm with labelled angles 2D Kinematics Consier a robotic arm. We can sen it commans like, move that joint so it bens at an angle θ. Once we ve set each joint, that s all well an goo. More interesting, though, is the question of

More information

The Reconstruction of Graphs. Dhananjay P. Mehendale Sir Parashurambhau College, Tilak Road, Pune , India. Abstract

The Reconstruction of Graphs. Dhananjay P. Mehendale Sir Parashurambhau College, Tilak Road, Pune , India. Abstract The Reconstruction of Graphs Dhananay P. Mehenale Sir Parashurambhau College, Tila Roa, Pune-4030, Inia. Abstract In this paper we iscuss reconstruction problems for graphs. We evelop some new ieas lie

More information

EFFICIENT ON-LINE TESTING METHOD FOR A FLOATING-POINT ADDER

EFFICIENT ON-LINE TESTING METHOD FOR A FLOATING-POINT ADDER FFICINT ON-LIN TSTING MTHOD FOR A FLOATING-POINT ADDR A. Droz, M. Lobachev Department of Computer Systems, Oessa State Polytechnic University, Oessa, Ukraine Droz@ukr.net, Lobachev@ukr.net Abstract In

More information

Study of Network Optimization Method Based on ACL

Study of Network Optimization Method Based on ACL Available online at www.scienceirect.com Proceia Engineering 5 (20) 3959 3963 Avance in Control Engineering an Information Science Stuy of Network Optimization Metho Base on ACL Liu Zhian * Department

More information

A New Search Algorithm for Solving Symmetric Traveling Salesman Problem Based on Gravity

A New Search Algorithm for Solving Symmetric Traveling Salesman Problem Based on Gravity Worl Applie Sciences Journal 16 (10): 1387-1392, 2012 ISSN 1818-4952 IDOSI Publications, 2012 A New Search Algorithm for Solving Symmetric Traveling Salesman Problem Base on Gravity Aliasghar Rahmani Hosseinabai,

More information

A Revised Simplex Search Procedure for Stochastic Simulation Response Surface Optimization

A Revised Simplex Search Procedure for Stochastic Simulation Response Surface Optimization 272 INFORMS Journal on Computing 0899-1499 100 1204-0272 $05.00 Vol. 12, No. 4, Fall 2000 2000 INFORMS A Revise Simplex Search Proceure for Stochastic Simulation Response Surface Optimization DAVID G.

More information

Software Reliability Modeling and Cost Estimation Incorporating Testing-Effort and Efficiency

Software Reliability Modeling and Cost Estimation Incorporating Testing-Effort and Efficiency Software Reliability Moeling an Cost Estimation Incorporating esting-effort an Efficiency Chin-Yu Huang, Jung-Hua Lo, Sy-Yen Kuo, an Michael R. Lyu -+ Department of Electrical Engineering Computer Science

More information

Threshold Based Data Aggregation Algorithm To Detect Rainfall Induced Landslides

Threshold Based Data Aggregation Algorithm To Detect Rainfall Induced Landslides Threshol Base Data Aggregation Algorithm To Detect Rainfall Inuce Lanslies Maneesha V. Ramesh P. V. Ushakumari Department of Computer Science Department of Mathematics Amrita School of Engineering Amrita

More information