New Strategies for Filtering the Number Field Sieve Matrix
Shailesh Patil, Gagan Garg, and C. E. Veni Madhavan
Department of CSA, Indian Institute of Science, Bangalore, India

Abstract: Large sparse matrices over F_2 have to be handled after the sieving step of the number field sieve for factoring integers, typically the RSA challenge numbers. These matrices have to be preprocessed to reduce their dimension before the subsequent stage of the linear algebra computations. We propose a new method for pre-processing (filtering) the number field sieve matrix on a single processor. In this method, the entire matrix is divided into bands of columns. The bands are loaded into the memory one by one and processed. We also propose another method for parallel execution of the filtering step. In this method, the matrix is divided into blocks of rows and each block is processed by a different processor. These methods would serve as the most viable means for handling the number field sieve matrices arising in the future RSA challenges.

I. INTRODUCTION

The number field sieve is the fastest known algorithm for factoring large integers [2]. The five main steps of the number field sieve are:

1) Polynomial selection [8], [12]
2) Sieving [14]
3) Filtering [3], [5], [13], [15]
4) Finding linear dependence [5], [6], [11], [13], [15]
5) Computing a square root [7], [10]

This paper presents two new methods for the filtering step.

A. Motivation

The output of the sieving step is a large sparse matrix over F_2. For example, for the RSA-512 factorization [4], the matrix generated by the sieving step had 131 million rows and 79 million columns. Typically, the number of nonzero entries per row is 40. Since the index of a prime is stored, each entry occupies 4 bytes. Hence, the total size of the matrix is about 131 × 10^6 × 40 × 4 bytes, which is around 20 GB.
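As a quick sanity check (a sketch, not from the paper), the 20 GB figure follows directly from rows × entries per row × bytes per entry:

```python
def sieve_matrix_bytes(rows, entries_per_row=40, bytes_per_entry=4):
    """Rough storage estimate for the sparse sieve matrix:
    each nonzero entry stores a 4-byte prime index."""
    return rows * entries_per_row * bytes_per_entry

# RSA-512: 131 million rows -> about 21 * 10^9 bytes, i.e. around 20 GB
print(sieve_matrix_bytes(131_000_000) / 10**9)
```

The same arithmetic drives the 265 GB and 900 GB projections for the larger challenge numbers.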
In the factorization of RSA-640 [1], the output matrix had many more relations, and the size of that sieve matrix is around 265 GB. Hence, we predict that the size of the sieve matrix for the RSA-704 factorization will be around 900 GB, and for the RSA-768 factorization around 3000 GB. This implies that the matrices corresponding to the future RSA challenge numbers [16] cannot be loaded into the memory of a single processor. Hence, there is a need for a method that can process matrices that are too large to be loaded into the memory of a single processor.

B. Results of this paper

In this paper, we present a novel method for filtering a large matrix on a single processor. In this method, the matrix is divided into bands of columns, and one band is loaded into the memory at a time and processed. We present another novel method for filtering a matrix in parallel. In this method, the matrix is divided into blocks of rows and each block is processed by a different processor.

C. Outline of the paper

In the next section, we briefly describe the number field sieve algorithm. We also summarize the currently known methods for filtering. In section III, we describe our method for filtering a matrix on
a single processor. In section IV, we describe our method for filtering a matrix in parallel. Finally, we conclude with a short discussion in section V.

II. BACKGROUND

A. The number field sieve

The main steps of the number field sieve algorithm for factoring an integer n are:

1) Determine m ∈ Z and an irreducible polynomial f(x) over Z such that f(m) ≡ 0 (mod n). Let α be a complex root of f and let K = Q(α) be a number field with f as its minimal polynomial. Substituting (m mod n) for each occurrence of α gives a natural homomorphism φ : Z[α] → Z_n.

2) Generate S: a finite set of coprime integer pairs (a, b) such that ∏_{(a,b)∈S} (a − bm) = X², X ∈ Z, and ∏_{(a,b)∈S} (a − bα) = γ², γ ∈ Z[α].

3) Let φ(γ) = Y mod n. Then Y² ≡ φ(γ)² = φ(γ²) = φ(∏_{(a,b)∈S} (a − bα)) = ∏_{(a,b)∈S} φ(a − bα) = ∏_{(a,b)∈S} (a − bm) = X² (mod n).

4) If X ≢ ±Y (mod n), we get a possible nontrivial factor of n from gcd(X ± Y, n).

The first step is the polynomial selection step; step 2 is the sieving step; the steps of filtering and finding linear dependence have been combined together in step 3; the last step involves the computation of a square root. We focus our attention on the filtering step.

Let us denote the sieve output matrix by A. Each row of the matrix corresponds to an (a, b) pair such that a − bm and a − bα are smooth. An integer x is said to be y-smooth if all its prime divisors are ≤ y. In this case, y = FB, the factor base bound. The bound FB is usually different for the rational side and the algebraic side; for simplicity, we may assume that it is the same. Let us assume that the integer a − bm factorizes as a − bm = ∏_{q∈F} q^{β_q}, where F = {q_1, q_2, ..., q_k} is the factor base, q_k ≤ FB, and k is the number of primes in the factor base. Let us assume that the norm of the algebraic integer a − bα factorizes as N(a − bα) = ∏_{q∈F} q^{β_q}. This (a, b) pair generates one row of the sieve matrix. The matrix is over F_2. The columns in the matrix correspond to the primes in the factor base F; the i-th column corresponds to the i-th prime in the factor base, i.e., q_i.
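As a toy illustration of how one row is formed (made-up numbers, not the authors' code): trial-divide a − bm over the factor base and keep the exponents mod 2.

```python
def exponent_vector_mod2(value, factor_base):
    """F_2 exponent vector of |value| over the factor base,
    or None if the value is not smooth over it."""
    n = abs(value)
    row = []
    for q in factor_base:
        e = 0
        while n % q == 0:
            n //= q
            e += 1
        row.append(e % 2)
    return row if n == 1 else None

# (a, b) = (1, 1) with m = 31: a - b*m = -30 = -(2 * 3 * 5)
print(exponent_vector_mod2(1 - 1 * 31, [2, 3, 5, 7]))  # [1, 1, 1, 0]
```

A value with a prime factor outside the factor base is rejected, matching the smoothness requirement above.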
The columns in the matrix are sorted in increasing order of prime size, i.e., the entries to the left correspond to the smaller primes and the entries to the right correspond to the larger primes. The entry in the row corresponding to (a, b) is 1 in the column corresponding to the prime q if β_q ≡ 1 (mod 2); otherwise the entry is 0. This briefly describes the structure of the sieve matrix A. We describe the properties of the sieve matrix in detail in section III. In the following subsections, we summarize the currently known methods for filtering.

B. Structured Gaussian Elimination

The basic idea of Structured Gaussian Elimination [13] is to utilize the structural properties of the sieve matrix. The algorithm declares some columns inactive. It then works only on the remaining active columns, preserving the sparsity of these active columns. A column with one non-zero entry is called a singleton column and a column with two non-zero entries is called a doubleton column. The process of removing a singleton column and the corresponding row is called singleton removal. Similarly, a row with one non-zero entry in the active part of the matrix is called a singleton row. In the first step, the algorithm performs singleton removal, deletes the zero-weight columns, and throws out excess rows. This is followed by Gaussian Elimination with singleton rows as pivots. When no more singleton rows and columns can be found, some additional columns are declared inactive and the algorithm continues with Gaussian Elimination. All the steps of the Gaussian Elimination are stored. At the end, all these steps are replayed on the inactive part of the matrix. There are two disadvantages to this method. First, the reduction in the size of the matrix is not significant. Second, the fill-in in the inactive part of the matrix is high, resulting in a denser matrix.
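The singleton-removal loop described above can be modelled in a few lines (rows held as sets of column indices; a simplified sketch, not the implementation in [13]):

```python
def remove_singletons(rows):
    """Delete singleton columns together with their incident rows,
    repeating until no singleton column remains (deleting a row can
    create new singleton columns)."""
    rows = [set(r) for r in rows]
    while True:
        weight = {}
        for r in rows:
            for c in r:
                weight[c] = weight.get(c, 0) + 1
        singles = {c for c, w in weight.items() if w == 1}
        if not singles:
            return rows
        rows = [r for r in rows if not (r & singles)]

print(remove_singletons([[0, 1], [1, 2], [3]]))  # everything cascades away: []
```

The outer loop captures the cascade effect: removing a row lowers the weight of every other column it touched.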
C. Reduction via created catastrophes

Pomerance and Smith [15] developed a heuristic specifically for matrices over F_2. This method is the same as Structured Gaussian Elimination except for one step. The situation in Structured Gaussian Elimination [13] where no more singleton rows and singleton columns can be found is called a catastrophe. In [15], Gaussian Elimination is performed on the active part until a catastrophe is encountered. After that, rows of weight 2 are used to eliminate the lighter of the two intersecting columns; the other column is replaced by its sum with the deleted column. Though this method gives better compression than the previous one, it has the additional disadvantage that the fill-in in the active part of the matrix is also high.

D. Cavallar's approach

Cavallar [3] proposed an alternative filtering strategy which reduces the fill-in. The main steps of her method are:

1) All the duplicate relations, zero-weight columns and singleton columns are removed.
2) A graph is built with relations as the vertices. Two vertices are connected if the corresponding relations can be merged in a two-way merge, i.e., the relations appear in a doubleton column. The connected components of this graph are called cliques.
3) All the cliques are determined and each clique is given a rating. The rating favors small primes.
4) All the cliques are kept on a priority heap according to this rating.
5) Starting from the top, cliques are deleted from this heap. This is done until the number of excess relations drops below a certain threshold. Steps 2 to 5 are called the clique algorithm.
6) k-way merge is performed for different values of k. A column of weight k is chosen as a merge candidate. The k relations in this column form the vertices of a graph and the C(k, 2) possible merges form the edges of the graph. The weight of the edge between two vertices corresponds to the weight of the relation formed by adding the relations corresponding to the vertices.
A minimum spanning tree on this graph gives the optimal merge. Thus, with a k-way merge, one column and one row are removed.

All the algorithms mentioned above were designed for filtering the matrix on a single processor. But the size of the sieve matrix grows rapidly with the size of the number to be factored. For future RSA challenges [16], the storage requirements would cross 900 GB. For a matrix of this size or greater, it is not possible to filter the matrix by any of these methods.

III. FILTERING THE MATRIX ON A SINGLE MACHINE

In this section, we present a new method for filtering on a single processor. It is well known that the matrices of the sieve output are not random. These matrices exhibit the following structural properties:

1) The matrix is sparser at one end than at the other. The columns corresponding to the large primes are much sparser than those corresponding to the small primes.
2) Along each column, the non-zero entries are distributed uniformly.
3) There are more singletons in the columns corresponding to the large primes than in the columns corresponding to the small primes.
4) For small values of k, more k-way merges take place in the sparser part of the matrix.

Fig. 1. Band division of the matrix

Our algorithm proceeds as follows:

1) Remove duplicate relations using hashing.
2) Start from the sparse end and load the maximum possible number of columns into the memory
of the processor. This forms our first band. This is the active part of the matrix.
3) Remove all zero-weight columns and singleton columns from the active part of the matrix. Repeat this step as long as there are zero-weight columns or singleton columns in the active part of the matrix.
4) Load more columns into the memory. This forms our next band, as illustrated in Fig. 1. The active part of the matrix is the union of all the bands loaded so far.
5) Repeat steps 3 and 4 until no more columns can be loaded into the memory or no more singleton columns can be found.
6) Execute the clique algorithm to throw away excess relations from the active part of the matrix.
7) Repeat steps 3 to 6 until no more columns can be loaded into the memory.
8) If the entire matrix is loaded into the memory, proceed with k-way merge [3].

The columns in the rightmost band have the minimal weight. We encounter the maximum number of singletons in this band. Hence, we load columns into the memory starting from the right side. When we remove a singleton column, the corresponding row is also deleted. Thus, some memory is freed and the dimension of the matrix is reduced. Moreover, deletion of a row reduces the weight of the columns intersecting with this row. This may give rise to more zero-weight columns and singleton columns in the active part of the matrix. Hence, we need to execute step 3 repeatedly.

In order to execute the clique algorithm, we weigh the cliques such that a clique containing more low-frequency columns gets a higher weight. In k-way merge, a column of weight k is chosen as a merge candidate. The k relations in this column form the vertices of a graph. The edge connecting two vertices is assigned a weight, which is the weight of the row formed by adding the rows corresponding to these two vertices. Since the matrix is sparse, there is a low probability that cancellations take place while adding.
Hence, we may assume that the weight of an edge is the sum of the weights of the two rows. Hence, for smaller values of k, we can perform k-way merge even when the entire matrix cannot be loaded into the memory.

In most cases, the entire matrix gets loaded into the memory of the processor by step 8. This is because singleton column removal and deletion of excess rows lead to a substantial reduction in the size of the matrix. However, it is possible that the matrix is so large that even after reaching step 8, the entire matrix cannot be loaded into the memory of the processor. In this case, it is recommended that filtering be done in parallel. This is explained in the next section.

IV. FILTERING IN PARALLEL

A. The scheme

We assume a master-slave model with t + 1 nodes. Each node has its own RAM and a hard disk. Each node may have a single processor or multiple processors. Let the nodes be named s_1, s_2, ..., s_t. Let these be the slaves and let M be the master. Denote the sieve output matrix by A. We assume that the matrix is represented as an adjacency list of non-zero entries. We assume that A has m rows and n columns. The master M sends approximately m/t rows to each of the nodes s_i. Let these blocks of rows be denoted by A_1, A_2, ..., A_t. We assume that the number of nodes t is such that the row block A_i can be completely loaded into the memory of the node s_i. A schematic diagram for filtering in parallel is shown in Fig. 2.

The main steps of filtering are:

1) Removing duplicate relations.
2) Computing the transpose.
3) Removing zero-weight columns and singleton columns.
4) Removing excess rows.
5) Reducing the dimensions of the matrix.

We need to repeat steps 3, 4 and 5 until the desired level of compression is achieved. Each of these five steps is discussed in detail in the following subsections.

B. Removing duplicate relations

The first step in filtering is to remove duplicate relations.
Experiments suggest that more than 30% of the rows are duplicates [3]. Hence, it is recommended that the duplicate relations be removed first, as this reduces the matrix size substantially.
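A minimal single-process model of duplicate removal by hashing (hypothetical code, not the authors'): fingerprint each relation's set of nonzero columns and keep only the first occurrence.

```python
def dedup_rows(rows):
    """Drop duplicate relations: two rows are duplicates if they
    have the same set of nonzero column indices."""
    seen = set()
    unique = []
    for r in rows:
        key = frozenset(r)  # order-independent, hashable fingerprint
        if key not in seen:
            seen.add(key)
            unique.append(r)
    return unique

print(dedup_rows([[2, 5], [3, 7], [5, 2]]))  # [[2, 5], [3, 7]]
```

The parallel protocol that follows distributes exactly this comparison: each node hashes its own block, and the master merges the hash lists across blocks.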
Fig. 2. Schematic diagram of filtering in parallel

Duplicate removal is done as follows:

1) Each node s_i computes the hash of all the rows in the row block A_i.
2) Each node s_i determines the duplicate rows that it has and deletes them.
3) The nodes s_i send the hash list of the remaining rows to the master M.
4) The master M merges the hash lists and determines the duplicate rows.
5) The list of the duplicate rows is broadcast to the nodes s_i.
6) The nodes s_i delete the duplicate rows.

The next step is computing the transpose of the matrix. The transpose is required for the removal of zero-weight columns and singleton columns.

C. Computing the transpose

The transpose is computed as follows:

1) Each node s_i loads A_i into its memory.
2) Each node s_i computes the transpose A_i^T.

We now remove zero-weight columns and singleton columns.

D. Removing zero-weight and singleton columns

This is done as follows:

1) Each node s_i computes the weight of all the columns in the row block A_i.
2) Each node s_i sends the column indices of weight-0 columns and weight-1 columns to the master M.
3) The master M merges this information to determine the zero-weight columns and singleton columns.
4) The master M broadcasts the list of zero-weight columns and singleton columns to all the nodes s_i.
5) The nodes s_i delete the zero-weight columns and the singleton columns. For each singleton column deleted, the corresponding row is also deleted.
6) Steps 1 to 5 are repeated until no more zero-weight columns and singleton columns are found.

In order to reduce the network traffic, each node s_i divides the columns of A_i into bands and executes steps 1 and 2 only for one band. After the zero-weight columns and singleton columns from this band have been removed, all the nodes s_i move to the next band and process both these bands together. This reduces the network traffic since we now communicate the information of only a part of the matrix.
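The master's merge in steps 3 and 4 above amounts to summing per-node column weights; a sequential sketch of that step (names are hypothetical):

```python
from collections import Counter

def merge_column_weights(per_node_weights):
    """Master step: add up the column weights reported by each node
    and return the global singleton columns (total weight exactly 1).
    A column can look like a singleton within one block yet have more
    entries in another block, so only the merged total is reliable."""
    total = Counter()
    for w in per_node_weights:
        total.update(w)
    return sorted(c for c, w in total.items() if w == 1)

# two nodes report the column weights of their row blocks
print(merge_column_weights([Counter({0: 1, 1: 2}), Counter({1: 1, 3: 1})]))  # [0, 3]
```

Column 1 has weight 1 in the second block but weight 3 overall, which is why the decision must be made at the master.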
Since the number of singleton columns is much larger in the sparse part of the matrix, it is advisable to start from the rightmost band. In subsection IV-A, we assumed that A_i can be completely loaded into the memory of the node s_i. If this is not possible, each node s_i follows the algorithm described in section III.

After removing the zero-weight columns and the singleton columns, the number of rows exceeds the number of columns. Hence, we need to remove excess rows.

E. Removing excess rows

We may either remove the heaviest rows or use the clique algorithm. Removing the heaviest rows proceeds as follows:

1) Each node s_i sends the weight of all the rows in A_i to the master M.
2) The master M merges this information and determines the heaviest rows.
3) The master M broadcasts this list to all the nodes s_i.
4) Each node s_i deletes those of the listed rows that are in A_i.
5) Zero-weight columns and singleton columns are deleted as described in subsection IV-D.

The clique algorithm proceeds as follows:

1) The nodes s_i send the column indices of the columns of weight 0, 1 and 2 to the master M.
2) The master M merges this information and determines the columns of weight 2, i.e., the doubleton columns.
3) The master M broadcasts this list of doubleton columns to the nodes s_i.
4) Each node s_i determines the rows that intersect with these doubleton columns.
5) Each node s_i sends the corresponding rows to the master M.
6) The master M builds the graph and determines the cliques.
7) The master M broadcasts the list of rows and columns to be deleted to the nodes s_i.
8) The nodes s_i delete the appropriate rows and columns.
9) Zero-weight columns and singleton columns are deleted as described in subsection IV-D.

F. Reducing the dimension of the matrix

For small values of k, we perform k-way merge in parallel. The main steps are:

1) The nodes s_i send the column indices of the columns of weight 0, 1, 2, ..., k to the master node M.
2) The master M merges this information to determine the columns of weight k.
3) The master M broadcasts this list to the nodes s_i.
4) Each node s_i determines the rows that intersect with these columns.
5) Each node s_i sends the corresponding rows to the master M.
6) The master M builds the graph and computes its minimum spanning tree.
7) The master M broadcasts the index of the row and the column to be deleted to the nodes s_i. It also broadcasts the indices of the k − 1 rows that need to be replaced, and the new k − 1 rows that correspond to the edges of the minimum spanning tree.
8) Each node s_i performs the appropriate row replacements and deletes one column and one row.
9) Zero-weight columns and singleton columns are deleted as described in subsection IV-D.
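Under the no-cancellation approximation from section III (edge cost ≈ sum of the two row weights), the minimum spanning tree computed in step 6 degenerates to a star centred on the lightest row; a hypothetical sketch of that special case:

```python
def mst_merge_edges(row_weights):
    """Edges of the minimum spanning tree on the k rows sharing a
    weight-k column, when the cost of edge (i, j) is w[i] + w[j].
    The tree cost is then sum(degree(v) * w[v]), which is minimised
    by a star on the lightest row."""
    if len(row_weights) < 2:
        return []
    centre = min(range(len(row_weights)), key=lambda i: row_weights[i])
    return [(centre, j) for j in range(len(row_weights)) if j != centre]

# rows of weight 5, 2 and 7: merge rows 0 and 2 into the light row 1
print(mst_merge_edges([5, 2, 7]))  # [(1, 0), (1, 2)]
```

With exact edge weights (accounting for cancellations) a general MST algorithm such as Prim's or Kruskal's would be needed instead.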
For higher values of k, the procedure becomes highly communication-intensive. Hence, it becomes very slow, and it is recommended that k-way merge be done on a single node.

V. CONCLUSION AND FUTURE WORK

From the above discussion, it is clear that the steps of 1) duplicate removal, 2) zero-weight column and singleton column removal, and 3) throwing out excess rows of the matrix can be done efficiently in parallel. These steps involve little communication. k-way merge for small values of k can also be done in parallel. However, for larger values of k, it is recommended that k-way merge be done on a single node.

We are currently sieving for RSA-640. After the sieving is over, we will proceed with filtering. The sieving step requires some more months of computation. Our present estimates warrant the use of the proposed heuristics. We will provide experimental results in a more detailed paper subsequently.

ACKNOWLEDGMENTS

The first author would like to thank V. Suresh and H. V. Kumar Swamy for their useful comments on the first draft of the paper.

REFERENCES

[1] F. Bahr, M. Boehm, J. Franke, T. Kleinjung, Factorization of RSA-640, at
[2] J. P. Buhler, H. W. Lenstra Jr., and C. Pomerance, Factoring integers with the number field sieve, in [9].
[3] S. Cavallar, Strategies in filtering in the number field sieve, in Algorithmic Number Theory - ANTS-IV, LNCS 1838, Springer-Verlag, Berlin, 2000.
[4] S. Cavallar, B. Dodson, A. K. Lenstra, W. Lioen, P. L. Montgomery, B. Murphy, H. te Riele, K. Aardal, J. Gilchrist, G. Guillerm, P. Leyland, J. Marchand, F. Morain, A. Muffett, C. and C. Putnam, and P. Zimmermann, Factorization of a 512-bit RSA modulus, in Advances in Cryptology - Eurocrypt 2000, LNCS 1807, Springer-Verlag, 2000.
[5] S. Cavallar, On the number field sieve integer factorization algorithm, PhD thesis.
[6] D. Coppersmith, Solving homogeneous linear equations over GF(2) via block Wiedemann algorithm, Mathematics of Computation, Vol. 62, 1994.
[7] J. Couveignes, Computing a square root for the number field sieve, in [9].
[8] T. Kleinjung, On polynomial selection for the general number field sieve, Mathematics of Computation, Vol. 75, 2006.
[9] A. K. Lenstra and H. W. Lenstra Jr. (eds.), The development of the number field sieve, Lecture Notes in Mathematics, Vol. 1554, Springer-Verlag, Berlin and Heidelberg, 1993.
[10] P. L. Montgomery, Square roots of products of algebraic numbers, in Mathematics of Computation: a Half-Century of Computational Mathematics, W. Gautschi, Ed., Proceedings of Symposia in Applied Mathematics, American Mathematical Society, 1994.
[11] P. L. Montgomery, A block Lanczos algorithm for finding dependencies over GF(2), in Advances in Cryptology - Eurocrypt 1995, LNCS 921, Springer-Verlag, 1995.
[12] B. A. Murphy, Polynomial selection for the number field sieve integer factorization algorithm, PhD thesis, 1999.
[13] A. M. Odlyzko, Discrete logarithms in finite fields and their cryptographic significance, in Advances in Cryptology - Eurocrypt 1984, LNCS 209, Springer-Verlag, 1985.
[14] J. M. Pollard, The lattice sieve, in [9].
[15] C. Pomerance and J. W. Smith, Reduction of large, sparse matrices over a finite field via created catastrophes, Experimental Mathematics, Vol. 1, No. 2, 1992.
[16] The RSA Challenge Numbers.
More informationColoring Fuzzy Circular Interval Graphs
Coloring Fuzzy Circular Interval Graphs Friedrich Eisenbrand 1 Martin Niemeier 2 SB IMA DISOPT EPFL Lausanne, Switzerland Abstract Computing the weighted coloring number of graphs is a classical topic
More informationLecture 15: The subspace topology, Closed sets
Lecture 15: The subspace topology, Closed sets 1 The Subspace Topology Definition 1.1. Let (X, T) be a topological space with topology T. subset of X, the collection If Y is a T Y = {Y U U T} is a topology
More informationProvable Partial Key Escrow
Provable Partial Key Escrow Kooshiar Azimian Electronic Research Center, Sharif University of Technology, and Computer Engineering Department, Sharif University of Technology Tehran, Iran Email: Azimian@ce.sharif.edu
More informationThe p-sized partitioning algorithm for fast computation of factorials of numbers
J Supercomput (2006) 38:73 82 DOI 10.1007/s11227-006-7285-5 The p-sized partitioning algorithm for fast computation of factorials of numbers Ahmet Ugur Henry Thompson C Science + Business Media, LLC 2006
More informationChapter 9 Graph Algorithms
Chapter 9 Graph Algorithms 2 Introduction graph theory useful in practice represent many real-life problems can be if not careful with data structures 3 Definitions an undirected graph G = (V, E) is a
More information2 Computation with Floating-Point Numbers
2 Computation with Floating-Point Numbers 2.1 Floating-Point Representation The notion of real numbers in mathematics is convenient for hand computations and formula manipulations. However, real numbers
More informationFUNCTIONS AND MODELS
1 FUNCTIONS AND MODELS FUNCTIONS AND MODELS 1.5 Exponential Functions In this section, we will learn about: Exponential functions and their applications. EXPONENTIAL FUNCTIONS The function f(x) = 2 x is
More information--> Buy True-PDF --> Auto-delivered in 0~10 minutes. GM/T Translated English of Chinese Standard: GM/T0044.
Translated English of Chinese Standard: GM/T0044.1-2016 www.chinesestandard.net Buy True-PDF Auto-delivery. Sales@ChineseStandard.net CRYPTOGRAPHY INDUSTRY STANDARD OF THE PEOPLE S REPUBLIC OF CHINA GM
More informationGenerating the Reduced Set by Systematic Sampling
Generating the Reduced Set by Systematic Sampling Chien-Chung Chang and Yuh-Jye Lee Email: {D9115009, yuh-jye}@mail.ntust.edu.tw Department of Computer Science and Information Engineering National Taiwan
More informationDM545 Linear and Integer Programming. Lecture 2. The Simplex Method. Marco Chiarandini
DM545 Linear and Integer Programming Lecture 2 The Marco Chiarandini Department of Mathematics & Computer Science University of Southern Denmark Outline 1. 2. 3. 4. Standard Form Basic Feasible Solutions
More informationImproved Truncated Differential Attacks on SAFER
Improved Truncated Differential Attacks on SAFER Hongjun Wu * Feng Bao ** Robert H. Deng ** Qin-Zhong Ye * * Department of Electrical Engineering National University of Singapore Singapore 960 ** Information
More informationPARALLEL METHODS FOR SOLVING PARTIAL DIFFERENTIAL EQUATIONS. Ioana Chiorean
5 Kragujevac J. Math. 25 (2003) 5 18. PARALLEL METHODS FOR SOLVING PARTIAL DIFFERENTIAL EQUATIONS Ioana Chiorean Babeş-Bolyai University, Department of Mathematics, Cluj-Napoca, Romania (Received May 28,
More informationComputing the Pseudoprimes up to 10 13
Computing the Pseudoprimes up to 10 13 Michal Mikuš KAIVT FEI STU, Ilkovičova 3, Bratislava, Slovakia Abstract. The paper extends the current known tables of Fermat s pseudoprimes to base 3 with the bound
More informationCS321 Introduction To Numerical Methods
CS3 Introduction To Numerical Methods Fuhua (Frank) Cheng Department of Computer Science University of Kentucky Lexington KY 456-46 - - Table of Contents Errors and Number Representations 3 Error Types
More information1 Inference for Boolean theories
Scribe notes on the class discussion on consistency methods for boolean theories, row convex constraints and linear inequalities (Section 8.3 to 8.6) Speaker: Eric Moss Scribe: Anagh Lal Corrector: Chen
More informationUNIT 2 ARRAYS 2.0 INTRODUCTION. Structure. Page Nos.
UNIT 2 ARRAYS Arrays Structure Page Nos. 2.0 Introduction 23 2.1 Objectives 24 2.2 Arrays and Pointers 24 2.3 Sparse Matrices 25 2.4 Polynomials 28 2.5 Representation of Arrays 30 2.5.1 Row Major Representation
More informationIterative Algorithms I: Elementary Iterative Methods and the Conjugate Gradient Algorithms
Iterative Algorithms I: Elementary Iterative Methods and the Conjugate Gradient Algorithms By:- Nitin Kamra Indian Institute of Technology, Delhi Advisor:- Prof. Ulrich Reude 1. Introduction to Linear
More informationLecture 27: Fast Laplacian Solvers
Lecture 27: Fast Laplacian Solvers Scribed by Eric Lee, Eston Schweickart, Chengrun Yang November 21, 2017 1 How Fast Laplacian Solvers Work We want to solve Lx = b with L being a Laplacian matrix. Recall
More informationOn the Computational Complexity of Nash Equilibria for (0, 1) Bimatrix Games
On the Computational Complexity of Nash Equilibria for (0, 1) Bimatrix Games Bruno Codenotti Daniel Štefankovič Abstract The computational complexity of finding a Nash equilibrium in a nonzero sum bimatrix
More informationMath 302 Introduction to Proofs via Number Theory. Robert Jewett (with small modifications by B. Ćurgus)
Math 30 Introduction to Proofs via Number Theory Robert Jewett (with small modifications by B. Ćurgus) March 30, 009 Contents 1 The Integers 3 1.1 Axioms of Z...................................... 3 1.
More informationInstitutionen för matematik, KTH.
Institutionen för matematik, KTH. Chapter 10 projective toric varieties and polytopes: definitions 10.1 Introduction Tori varieties are algebraic varieties related to the study of sparse polynomials.
More informationChordal deletion is fixed-parameter tractable
Chordal deletion is fixed-parameter tractable Dániel Marx Institut für Informatik, Humboldt-Universität zu Berlin, Unter den Linden 6, 10099 Berlin, Germany. dmarx@informatik.hu-berlin.de Abstract. It
More informationTopological Invariance under Line Graph Transformations
Symmetry 2012, 4, 329-335; doi:103390/sym4020329 Article OPEN ACCESS symmetry ISSN 2073-8994 wwwmdpicom/journal/symmetry Topological Invariance under Line Graph Transformations Allen D Parks Electromagnetic
More informationA New Statistical Restoration Method for Spatial Domain Images
A New Statistical Restoration Method for Spatial Domain Images Arijit Sur 1,,PiyushGoel 2, and Jayanta Mukherjee 2 1 Department of Computer Science and Engineering, Indian Institute of Technology, Guwahati-781039,
More informationInverted Indexes. Indexing and Searching, Modern Information Retrieval, Addison Wesley, 2010 p. 5
Inverted Indexes Indexing and Searching, Modern Information Retrieval, Addison Wesley, 2010 p. 5 Basic Concepts Inverted index: a word-oriented mechanism for indexing a text collection to speed up the
More informationSimplicial Complexes of Networks and Their Statistical Properties
Simplicial Complexes of Networks and Their Statistical Properties Slobodan Maletić, Milan Rajković*, and Danijela Vasiljević Institute of Nuclear Sciences Vinča, elgrade, Serbia *milanr@vin.bg.ac.yu bstract.
More informationON SOME METHODS OF CONSTRUCTION OF BLOCK DESIGNS
ON SOME METHODS OF CONSTRUCTION OF BLOCK DESIGNS NURNABI MEHERUL ALAM M.Sc. (Agricultural Statistics), Roll No. I.A.S.R.I, Library Avenue, New Delhi- Chairperson: Dr. P.K. Batra Abstract: Block designs
More informationAbout the Author. Dependency Chart. Chapter 1: Logic and Sets 1. Chapter 2: Relations and Functions, Boolean Algebra, and Circuit Design
Preface About the Author Dependency Chart xiii xix xxi Chapter 1: Logic and Sets 1 1.1: Logical Operators: Statements and Truth Values, Negations, Conjunctions, and Disjunctions, Truth Tables, Conditional
More informationGift Wrapping for Pretropisms
Gift Wrapping for Pretropisms Jan Verschelde University of Illinois at Chicago Department of Mathematics, Statistics, and Computer Science http://www.math.uic.edu/ jan jan@math.uic.edu Graduate Computational
More information3. Replace any row by the sum of that row and a constant multiple of any other row.
Math Section. Section.: Solving Systems of Linear Equations Using Matrices As you may recall from College Algebra or Section., you can solve a system of linear equations in two variables easily by applying
More informationAn Improved Remote User Authentication Scheme with Smart Cards using Bilinear Pairings
An Improved Remote User Authentication Scheme with Smart Cards using Bilinear Pairings Debasis Giri and P. D. Srivastava Department of Mathematics Indian Institute of Technology, Kharagpur 721 302, India
More informationA new edge selection heuristic for computing the Tutte polynomial of an undirected graph.
FPSAC 2012, Nagoya, Japan DMTCS proc. (subm.), by the authors, 1 12 A new edge selection heuristic for computing the Tutte polynomial of an undirected graph. Michael Monagan 1 1 Department of Mathematics,
More informationVLSI ARCHITECTURE FOR NANO WIRE BASED ADVANCED ENCRYPTION STANDARD (AES) WITH THE EFFICIENT MULTIPLICATIVE INVERSE UNIT
VLSI ARCHITECTURE FOR NANO WIRE BASED ADVANCED ENCRYPTION STANDARD (AES) WITH THE EFFICIENT MULTIPLICATIVE INVERSE UNIT K.Sandyarani 1 and P. Nirmal Kumar 2 1 Research Scholar, Department of ECE, Sathyabama
More information(Refer Slide Time: 01.26)
Data Structures and Algorithms Dr. Naveen Garg Department of Computer Science and Engineering Indian Institute of Technology, Delhi Lecture # 22 Why Sorting? Today we are going to be looking at sorting.
More informationVLSI Design and Implementation of High Speed and High Throughput DADDA Multiplier
VLSI Design and Implementation of High Speed and High Throughput DADDA Multiplier U.V.N.S.Suhitha Student Department of ECE, BVC College of Engineering, AP, India. Abstract: The ever growing need for improved
More informationUNIVERSITY OF CALGARY. Improved Arithmetic in the Ideal Class Group of Imaginary Quadratic Number Fields. With an Application to Integer Factoring
UNIVERSITY OF CALGARY Improved Arithmetic in the Ideal Class Group of Imaginary Quadratic Number Fields With an Application to Integer Factoring by Maxwell Sayles A THESIS SUBMITTED TO THE FACULTY OF GRADUATE
More informationPublished by: PIONEER RESEARCH & DEVELOPMENT GROUP (www.prdg.org) 158
Enhancing The Security Of Koblitz s Method Using Transposition Techniques For Elliptic Curve Cryptography Santoshi Pote Electronics and Communication Engineering, Asso.Professor, SNDT Women s University,
More informationx = 12 x = 12 1x = 16
2.2 - The Inverse of a Matrix We've seen how to add matrices, multiply them by scalars, subtract them, and multiply one matrix by another. The question naturally arises: Can we divide one matrix by another?
More informationImproving and Extending the Lim/Lee Exponentiation Algorithm
Improving and Extending the Lim/Lee Exponentiation Algorithm Biljana Cubaleska 1, Andreas Rieke 2, and Thomas Hermann 3 1 FernUniversität Hagen, Department of communication systems Feithstr. 142, 58084
More informationMaths for Signals and Systems Linear Algebra in Engineering. Some problems by Gilbert Strang
Maths for Signals and Systems Linear Algebra in Engineering Some problems by Gilbert Strang Problems. Consider u, v, w to be non-zero vectors in R 7. These vectors span a vector space. What are the possible
More informationIsogeny graphs, algorithms and applications
Isogeny graphs, algorithms and applications University of Auckland, New Zealand Reporting on joint work with Christina Delfs (Oldenburg). Thanks: David Kohel, Drew Sutherland, Marco Streng. Plan Elliptic
More informationReduced Memory Meet-in-the-Middle Attack against the NTRU Private Key
Reduced Memory Meet-in-the-Middle Attack against the NTRU Private Key Christine van Vredendaal Eindhoven, University of Technology c.v.vredendaal@tue.nl Twelfth Algorithmic Number Theory Symposium University
More informationFeature Based Watermarking Algorithm by Adopting Arnold Transform
Feature Based Watermarking Algorithm by Adopting Arnold Transform S.S. Sujatha 1 and M. Mohamed Sathik 2 1 Assistant Professor in Computer Science, S.T. Hindu College, Nagercoil, Tamilnadu, India 2 Associate
More informationChapter 4. square sum graphs. 4.1 Introduction
Chapter 4 square sum graphs In this Chapter we introduce a new type of labeling of graphs which is closely related to the Diophantine Equation x 2 + y 2 = n and report results of our preliminary investigations
More informationreasonable to store in a software implementation, it is likely to be a signicant burden in a low-cost hardware implementation. We describe in this pap
Storage-Ecient Finite Field Basis Conversion Burton S. Kaliski Jr. 1 and Yiqun Lisa Yin 2 RSA Laboratories 1 20 Crosby Drive, Bedford, MA 01730. burt@rsa.com 2 2955 Campus Drive, San Mateo, CA 94402. yiqun@rsa.com
More informationInteger Programming Theory
Integer Programming Theory Laura Galli October 24, 2016 In the following we assume all functions are linear, hence we often drop the term linear. In discrete optimization, we seek to find a solution x
More informationChapter 17. Disk Storage, Basic File Structures, and Hashing. Records. Blocking
Chapter 17 Disk Storage, Basic File Structures, and Hashing Records Fixed and variable length records Records contain fields which have values of a particular type (e.g., amount, date, time, age) Fields
More informationTORIC VARIETIES JOAQUÍN MORAGA
TORIC VARIETIES Abstract. This is a very short introduction to some concepts around toric varieties, some of the subsections are intended for more experienced algebraic geometers. To see a lot of exercises
More informationBounds on the signed domination number of a graph.
Bounds on the signed domination number of a graph. Ruth Haas and Thomas B. Wexler September 7, 00 Abstract Let G = (V, E) be a simple graph on vertex set V and define a function f : V {, }. The function
More informationSoftware Implementation of Tate Pairing over GF(2 m )
Software Implementation of Tate Pairing over GF(2 m ) G. Bertoni 1, L. Breveglieri 2, P. Fragneto 1, G. Pelosi 2 and L. Sportiello 1 ST Microelectronics 1, Politecnico di Milano 2 Via Olivetti, Agrate
More informationThe Beta Cryptosystem
Bulletin of Electrical Engineering and Informatics Vol. 4, No. 2, June 2015, pp. 155~159 ISSN: 2089-3191 155 The Beta Cryptosystem Chandrashekhar Meshram Department of Mathematics, RTM Nagpur University,
More informationAn Algorithm of Parking Planning for Smart Parking System
An Algorithm of Parking Planning for Smart Parking System Xuejian Zhao Wuhan University Hubei, China Email: xuejian zhao@sina.com Kui Zhao Zhejiang University Zhejiang, China Email: zhaokui@zju.edu.cn
More informationPAijpam.eu SECURE SCHEMES FOR SECRET SHARING AND KEY DISTRIBUTION USING PELL S EQUATION P. Muralikrishna 1, S. Srinivasan 2, N. Chandramowliswaran 3
International Journal of Pure and Applied Mathematics Volume 85 No. 5 2013, 933-937 ISSN: 1311-8080 (printed version); ISSN: 1314-3395 (on-line version) url: http://www.ijpam.eu doi: http://dx.doi.org/10.12732/ijpam.v85i5.11
More informationCHAPTER 9 INPAINTING USING SPARSE REPRESENTATION AND INVERSE DCT
CHAPTER 9 INPAINTING USING SPARSE REPRESENTATION AND INVERSE DCT 9.1 Introduction In the previous chapters the inpainting was considered as an iterative algorithm. PDE based method uses iterations to converge
More informationParallel Hybrid Monte Carlo Algorithms for Matrix Computations
Parallel Hybrid Monte Carlo Algorithms for Matrix Computations V. Alexandrov 1, E. Atanassov 2, I. Dimov 2, S.Branford 1, A. Thandavan 1 and C. Weihrauch 1 1 Department of Computer Science, University
More informationGenerating edge covers of path graphs
Generating edge covers of path graphs J. Raymundo Marcial-Romero, J. A. Hernández, Vianney Muñoz-Jiménez and Héctor A. Montes-Venegas Facultad de Ingeniería, Universidad Autónoma del Estado de México,
More informationInternational Journal of Foundations of Computer Science c World Scientic Publishing Company DFT TECHNIQUES FOR SIZE ESTIMATION OF DATABASE JOIN OPERA
International Journal of Foundations of Computer Science c World Scientic Publishing Company DFT TECHNIQUES FOR SIZE ESTIMATION OF DATABASE JOIN OPERATIONS KAM_IL SARAC, OMER E GEC_IO GLU, AMR EL ABBADI
More informationBoundary Curves of Incompressible Surfaces
Boundary Curves of Incompressible Surfaces Allen Hatcher This is a Tex version, made in 2004, of a paper that appeared in Pac. J. Math. 99 (1982), 373-377, with some revisions in the exposition. Let M
More informationEFFICIENT CLUSTERING WITH FUZZY ANTS
EFFICIENT CLUSTERING WITH FUZZY ANTS S. SCHOCKAERT, M. DE COCK, C. CORNELIS AND E. E. KERRE Fuzziness and Uncertainty Modelling Research Unit, Department of Applied Mathematics and Computer Science, Ghent
More information