Improving Low Density Parity Check Codes Over the Erasure Channel: The Nelder-Mead Downhill Simplex Method


Scott Stransky
Programming in conjunction with: Boris Cukalovic

18.413 Final Project, Spring 2004

Abstract

Error correcting codes prevent loss of integrity in data transmission. Low Density Parity Check codes are a family of codes that are specified by sparse matrices. Using the Nelder-Mead Downhill Simplex method to design an irregular Low Density Parity Check code, we hope to improve upon the accuracy of decoding.

Introduction

Loss of data integrity during transmission has always been a problem, since there is no way of guaranteeing accuracy in the bits received. Error correcting codes are a family of methods that correct this problem. In an error-correcting paradigm, extra bits are sent along with the message. These extra bits are carefully calculated using various methods. On the receiving end of the transmission, there must be a decoding algorithm to analyze all the bits that were received and try to deduce the bits that were sent. In this project, we will look at a specific type of code, the Low Density Parity Check (LDPC) code. We will try to use the Nelder-Mead optimization method to determine an optimal LDPC code, and thereby improve upon the number of errors that can be corrected using normal LDPCs.

Channels

Channels are a representation of a path that bits (0s and 1s) can travel over. Bits will usually travel in packets or codewords (messages that have been encoded by the encoder of an error correcting algorithm), denoted by the symbol C. While in the channel, there is a chance that a bit will lose some or all of its integrity. A major type of channel is the erasure channel, which this project will deal with. This channel will be discussed in the next section.

The Erasure Channel

The erasure channel with erasure probability p erases transmitted bits with probability p and accurately transmits bits with probability 1 - p. The chart below details this process:

[Figure: erasure channel transition diagram]

Therefore, upon receiving a 0 or a 1 from the channel, there is a probability of 1 that that is the bit that was sent. If a ? is received, there is a 0.5 probability of a 1 having been sent, and a 0.5 probability of a 0 having been sent. These properties will be important in the next section. The following chart explains this:

Bit Received    Bit Sent    Probability
1               1           1
1               0           0
0               0           1
0               1           0
?               1           0.5
?               0           0.5
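To make the channel model concrete, here is a minimal Java sketch (our own illustration, not part of the original program; the class and method names are invented) that passes a codeword through an erasure channel with erasure probability p, using -1 to stand in for the ? symbol:

import java.util.Random;

public class ErasureChannelDemo {
    static final int ERASED = -1;  // stands in for the "?" symbol

    // Each bit is erased independently with probability p, otherwise passed through unchanged.
    static int[] erase(int[] codeword, double p, Random rng) {
        int[] received = new int[codeword.length];
        for (int i = 0; i < codeword.length; i++) {
            received[i] = (rng.nextDouble() < p) ? ERASED : codeword[i];
        }
        return received;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        int[] codeword = {1, 0, 1, 1, 0, 0, 1, 0};
        int[] received = erase(codeword, 0.42, rng);
        for (int bit : received) {
            System.out.print(bit == ERASED ? "? " : bit + " ");
        }
        System.out.println();
    }
}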

Error Correcting Codes

Error correcting codes provide a way of reliably transmitting data over a channel. The general idea of an error correcting code is to add extra bits to the message (the encoding process), based on the message bits themselves, to create a codeword C. Once the codeword has passed through a channel, a decoding algorithm can be applied to correct incorrect bits in the received message. We can define the rate as the number of message bits per bit sent, so if there are twice as many bits sent as there are in the message, the rate is 1/2. An example of an error correcting code is a Low Density Parity Check code.

Regular Low Density Parity Check Codes (LDPC Codes)

Regular Low Density Parity Check codes are capable of decoding received messages with a considerable number of erasures. In fact, a normal LDPC code can decode a message properly even if about 42% of the bits are lost over an erasure channel. These codes are made from graphs of message and parity nodes. The message nodes contain the bits of the message from the channel, while each parity node ensures that the parity of all the message nodes connected to it by the graph is 0. The procedure for this type of code can be broken down into three separate stages: generating a matrix and its dual, encoding codewords, and decoding.

Generating a matrix and dual: the matrix itself will represent a bipartite graph of decoding nodes, both parity check and message (also known as equality) nodes, that the decoding algorithm will use. Each 1 in the matrix represents a mapping from an equality node (the columns) to a parity check node (the rows). First, we need to decide what qualifies a code as normal; for the purposes of this paper, a 3-6 graph will be normal. This means that there are 3 edges out of each message node and 6 edges out of each parity check node. Since every edge connects exactly one message node to one parity check node, there must be twice as many message nodes as parity check nodes.
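A rough Java sketch of this 3-6 edge structure follows (our own illustration, not the project's code, which also relied on a provided C routine for the dual). It builds a random 3-6 bipartite graph by pairing up edge slots; the list of (row, column) pairs it returns is exactly the kind of sparse representation discussed below.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class SparseMatrixDemo {
    // Builds a random 3-6 bipartite graph with 'checks' parity rows and 2*checks message columns.
    // Returns a list of {row, column} pairs marking the positions of the 1s.
    // (Parallel edges are possible in this simple sketch and are not removed.)
    static List<int[]> randomRegularGraph(int checks, Random rng) {
        int messages = 2 * checks;              // twice as many message nodes as parity nodes
        List<Integer> rowSlots = new ArrayList<>();
        List<Integer> colSlots = new ArrayList<>();
        for (int r = 0; r < checks; r++)
            for (int k = 0; k < 6; k++) rowSlots.add(r);   // six 1s per row
        for (int c = 0; c < messages; c++)
            for (int k = 0; k < 3; k++) colSlots.add(c);   // three 1s per column
        Collections.shuffle(colSlots, rng);                 // random pairing of edge endpoints
        List<int[]> ones = new ArrayList<>();
        for (int e = 0; e < rowSlots.size(); e++)
            ones.add(new int[]{rowSlots.get(e), colSlots.get(e)});
        return ones;
    }

    public static void main(String[] args) {
        List<int[]> ones = randomRegularGraph(4, new Random(1));
        for (int[] pos : ones) System.out.println("row " + pos[0] + ", col " + pos[1]);
    }
}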

This structure can be represented in a sparse matrix. A sparse matrix is a matrix with few 1s; this allows a computer to store only the positions of the 1s. In a 5000x10000 matrix, this can save a great deal of memory. To generate a 3-6 sparse matrix, there must be three 1s per column and six 1s per row. These should be placed as randomly as possible. An example of this is (where every position represented by the dots would be a zero):

    1 0 1 1 ... 0 1 0 1
    1 1 0 1 ... 1 1 0 1
    0 1 0 1 0 ... 1 1 0
    1 0 0 1 0 1 ... 1 1
    ...
    1 1 0 1 1 ... 0 0 1

The next step is to generate the dual of this matrix. This can be accomplished with a C routine that we were provided with. The dual is what will be used to encode.

Encoding: using the dual. To encode a 5000-bit message with a 5000x10000 dual matrix, multiply the message, as a vector, by the dual matrix. The result will be a 10000-bit long codeword vector.

Decoding: using the bipartite node graph. Each message node is connected to three unique parity check nodes, while each parity check node is connected to six unique message nodes. This is shown in the following diagram (though there are only 8 message nodes):

[Figure: bipartite graph of message (equality) nodes and parity check nodes]

The decoding process over the erasure channel is simplified, as opposed to over any other channel, because if a bit is received, it is guaranteed that that bit was sent. To begin decoding, the message from the channel is placed in the equality nodes:

[Figure: received bits, including erasures, placed in the equality nodes]

These nodes then transmit their messages through all of their outgoing edges:

[Figure: equality nodes sending their values to the parity check nodes]

Next, the parity check nodes try to determine the value of any ? messages in the following way: if, at a parity check node, there is one question mark and all other bits are known, then the ? bit is equal to the sum (mod 2) of all the other bits. If there is more than one ?, then nothing can be determined. Following this stage, the newly known bits are sent back to the equality nodes:

[Figure: parity check nodes returning newly determined bits to the equality nodes]

This process then repeats until all bits are known, or no new bits are learned on any given iteration (and therefore errors cannot be corrected):

[Figure: the message-passing process repeated until termination]
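The decoding rule just described can be written down compactly. The following Java sketch is ours, for illustration only; it assumes the graph is given as an adjacency list from each parity check node to its message (equality) nodes, and repeats the peeling step until no new bit is learned.

import java.util.Arrays;

public class PeelingDecoderDemo {
    static final int ERASED = -1;

    // bits: received word with ERASED marking '?'; checkToMsg[c] lists the message nodes of check c.
    // Repeats until no new bit is learned; returns true if every erasure was resolved.
    static boolean decode(int[] bits, int[][] checkToMsg) {
        boolean progress = true;
        while (progress) {
            progress = false;
            for (int[] neighbors : checkToMsg) {
                int unknown = -1, count = 0, parity = 0;
                for (int m : neighbors) {
                    if (bits[m] == ERASED) { unknown = m; count++; }
                    else parity ^= bits[m];
                }
                // Exactly one '?' at this check: it must equal the mod-2 sum of the known bits.
                if (count == 1) { bits[unknown] = parity; progress = true; }
            }
        }
        for (int b : bits) if (b == ERASED) return false;
        return true;
    }

    public static void main(String[] args) {
        int[] bits = {1, ERASED, 0, ERASED};            // toy received word
        int[][] checks = {{0, 1}, {1, 2, 3}};           // toy parity checks
        System.out.println(decode(bits, checks) + " " + Arrays.toString(bits));
    }
}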

At termination, the corrected message will appear in the equality nodes. This entire process can be represented in the following way:

[Figure: the full decoding graph, showing equality nodes, parity check nodes, and the messages passed between them]

Irregular LDPCs (and their specification)

In general, LDPCs can be improved by changing the arrangement of the edges between the equality nodes and the parity check nodes. This improvement can allow upwards of 49% of the bits to get erased by the channel, yet decoding will still be feasible. To begin thinking about changing the number of edges from each node, some notation must be introduced. We can define lambda_i to be the fraction of edges that are connected to equality nodes with degree i. We will also define rho_i to be the fraction of edges that are connected to parity check nodes with degree i.

The following diagram shows an example of an irregular LDPC graph:

[Figure: a small irregular bipartite graph with five edges between equality nodes and parity check nodes]

In this graph, lambda_1 is equal to 3/5 because 3 out of the 5 edges are exiting equality nodes with degree one; lambda_2 is equal to 2/5. All other lambda_i's are equal to 0 because there are no equality nodes of degree higher than 2. Similarly, rho_2 is equal to 2/5 and rho_3 is 3/5, while all the other rho_i's are 0. This also implies that for a typical 3-6 LDPC, the following conditions hold: lambda_i = 0 for i != 3, lambda_3 = 1, rho_i = 0 for i != 6, and rho_6 = 1.

It is important to determine how many of the lambdas and rhos are free to change. We must introduce some constraints, for two reasons. First, they ensure that the lambdas and rhos will generate feasible graphs capable of decoding. Second, they lower the number of free variables, making optimization easier. We will start off with L + R free variables, where L is the highest degree of the lambdas and R is the highest degree of the rhos. The first two simple constraints are

    sum_i lambda_i = 1   and   sum_i rho_i = 1,

where the sums run up to n, an arbitrarily chosen number representing the highest allowed node degree. These ensure that the probabilities add to 1. In addition, lambda_i >= 0 and rho_i >= 0 for all i. This must be true so that there are no negative values of the lambdas or rhos; if there were, you could not construct a graph from them. Obviously, the number of edges out of the equality nodes must equal the number of edges into the parity check nodes, which is given by

    sum_i rho_i / i = beta * sum_i lambda_i / i,

where beta is equal to 1 minus the rate of the code. We also must ensure that lambda_1 = 0 and rho_1 = 0, because if this is not true, then the code may not work properly, or at all. This lowers the number of free variables to L + R - 2.

We can remove two more free variables by defining one of the lambdas and one of the rhos in terms of all the other ones, since their sums must be one (for example, lambda_2 = 1 - sum of the other lambda_i, and rho_2 = 1 - sum of the other rho_i); this lowers the number of free variables to L + R - 4. To lower the number of free variables to L + R - 5, these relations can be combined with the edge constraint sum_i rho_i / i = beta * sum_i lambda_i / i to express one further coefficient in terms of the remaining ones. For simplicity, we can set L = R and determine that, in terms of D, the total number of dimensions or free variables in the problem, L and R are equal to (D + 5)/2. A final constraint that must be satisfied is the following inequality [1]:

    f(lambda_i, rho_k, delta; x_j) := delta * lambda(1 - rho(1 - x_j)) < x_j,   x_j = delta*j/N,   j = 1, ..., N,

where N is the number of equidistant, discrete points on the interval (0, delta]. According to the Efficient Erasure Correcting Codes paper, if at most a delta-fraction of the codeword C is erased, independently and at random, there is a high probability that the decoding algorithm will terminate successfully.

Using various methods, radically different lambdas and rhos can be arrived at. Methods suggested by previously written papers include differential evolution, an algorithm that produces very good results. It has been suggested that the Nelder-Mead method might produce reasonable results, so this is the method that we are using.

[1] Design of Efficient Erasure Codes, equation 5.
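Collecting the above, a single routine can test whether a candidate (lambda, rho, delta) satisfies every constraint, including the inequality of equation 5 evaluated on N equidistant points of (0, delta]. The Java sketch below is our own illustration; the array layout, tolerance, and names are assumptions, not the project's code.

public class FeasibilityDemo {
    // lambda[i], rho[i] hold the fraction of edges meeting degree-i nodes (index 0 unused).
    static boolean feasible(double[] lambda, double[] rho, double beta, double delta, int N) {
        double tol = 1e-6;
        double sumL = 0, sumR = 0, edgeL = 0, edgeR = 0;
        for (int i = 1; i < lambda.length; i++) {
            if (lambda[i] < 0) return false;               // no negative coefficients
            sumL += lambda[i];
            edgeL += lambda[i] / i;
        }
        for (int i = 1; i < rho.length; i++) {
            if (rho[i] < 0) return false;
            sumR += rho[i];
            edgeR += rho[i] / i;
        }
        if (Math.abs(sumL - 1) > tol || Math.abs(sumR - 1) > tol) return false; // sums must be 1
        if (lambda[1] > tol || rho[1] > tol) return false;                      // lambda_1 = rho_1 = 0
        if (Math.abs(edgeR - beta * edgeL) > tol) return false;                 // edge balance
        // Inequality: delta * lambda(1 - rho(1 - x_j)) < x_j on N equidistant points of (0, delta].
        for (int j = 1; j <= N; j++) {
            double x = delta * j / N;
            if (delta * poly(lambda, 1 - poly(rho, 1 - x)) >= x) return false;
        }
        return true;
    }

    // Evaluates sum_i c[i] * x^(i-1), the generating-function form used later in the paper.
    static double poly(double[] c, double x) {
        double value = 0, power = 1;
        for (int i = 1; i < c.length; i++) {
            value += c[i] * power;
            power *= x;
        }
        return value;
    }
}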

The Nelder-Mead Method [2]

The Nelder-Mead method will allow us to try to find an ideal lambda and rho to decode and fix as many errors as possible. The Nelder-Mead method is typically applied to problems that are unconstrained, but in this case there are all the constraints that were discussed in the previous section. To use Nelder-Mead, we will try to maximize the value of delta (or, since Nelder-Mead is a minimizer, we will minimize the value of -delta) while keeping f(lambda_i, rho_k, delta; x_j) - x_j < 0 and trying to avoid violating any of the constraints. To deal with the constraints, we use a penalty function for all solutions that are in violation of the constraints. This penalty was a constant on the order of a billion, thereby almost never allowing infeasible points.

Another consideration is the starting values, because for Nelder-Mead to operate there must be N+1 starting points, where N is the number of dimensions; in our case, this will be the value of L + R - 5. The points create a simplex (an n-dimensional object with segments connecting all points and faces between all segments). The 2-dimensional simplex is a triangle; the 3-dimensional one is a tetrahedron. The way that we created our starting simplex is as follows:

- Create an initial feasible (valid) point using the following procedure [3]:
  o Set lambda_3 through lambda_{L-1} to a common value.
  o Compute two intermediate sums, sl and sr, from these values.
  o Define rho_R in terms of sl, sr, and R.

  o Set rho_3 through rho_{R-1} equal to (1 - rho_R) / (R - 2).
  o Compute lambda_2, lambda_L, and rho_2 using the constraints.
  o This point corrects about a 30-35% bit erasure rate.
- Pick a random direction in the space, and travel from the initial feasible point in that direction until an infeasible point is reached. Add the last feasible point that was traversed to the simplex.
- Repeat the previous step until the simplex is completed (there are N+1 points in it).

At each iteration, one of the following four things happens (generalized to 3 dimensions so it can be visualized). The starting stage might look like this:

[Figure: starting simplex, with the worst ("high") and best ("low") points labeled]

In this case, "high" represents the worst point, based on the minimization, and "low" is the best. The first possibility is that the high point is reflected through all the other points in the simplex, resulting in:

[Figure: reflection of the high point]

A second alternative is for the high point to be reflected and expanded through the simplex:

[Figure: reflection and expansion of the high point]

The third option is a contraction away from the high point:

[Figure: contraction away from the high point]

The final choice is a contraction around the lowest point by all other points:

[Figure: contraction of the whole simplex around the low point]

The option that is chosen is whichever gives the lowest value of the function that is being minimized. The method terminates when none of the choices improve the value of the function by more than a tolerance value that we set.

At the end of an individual Nelder-Mead run, the results that are produced are not very good. A reason for this could be that the method itself took an anomalous step at some point in the run. Our way of solving this was as follows: upon completion of a Nelder-Mead run, we took the best point in the simplex to be our new starting point. We then perturbed this point into a new starting simplex, and ran the Nelder-Mead method again. This process was repeated until the optimization improved the function by less than a second tolerance value. To continue to improve the percentage of erasures that our code could cope with, we repeated this entire process for as long as we had time. Our program displays (via a System.out) the current value of delta, which represents the percentage of erasures that can be corrected with the current polynomials.

[2] C code for this method, and a description, were supplied in Numerical Recipes in C.
[3] Suggested by Prof. Dan Spielman.
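The outer restart loop described above can be summarized as follows. This Java-style sketch is our own; nelderMead, perturbSimplex, and objective are hypothetical placeholders for the project's actual routines.

public class RestartLoopDemo {
    // Outer loop around Nelder-Mead: restart from the best point of each run until the
    // improvement falls below a second tolerance (outerTol). 'objective' is assumed to return
    // -delta, plus a large penalty (on the order of a billion) for infeasible points.
    static double[] optimize(double[] start, double outerTol) {
        double best = objective(start);
        double[] point = start;
        while (true) {
            double[][] simplex = perturbSimplex(point);   // build N+1 points around the current best
            double[] candidate = nelderMead(simplex);     // one full Nelder-Mead run
            double value = objective(candidate);
            System.out.println("current delta = " + (-value));  // for feasible points, delta = -objective
            if (best - value < outerTol) return point;    // improvement too small: stop
            best = value;
            point = candidate;
        }
    }

    // The three routines below are placeholders for the project's actual implementations.
    static double objective(double[] p) { throw new UnsupportedOperationException(); }
    static double[][] perturbSimplex(double[] p) { throw new UnsupportedOperationException(); }
    static double[] nelderMead(double[][] simplex) { throw new UnsupportedOperationException(); }
}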

Results and Data

To standardize our results with the literature, our lambda and rho values will be shown as polynomials given by the generating functions

    lambda(x) = sum_{i=2}^{L} lambda_i x^(i-1)   and   rho(x) = sum_{i=2}^{R} rho_i x^(i-1).

The first stage in producing high delta values was an overnight run. During this run, the dimensions of the simplexes were randomly chosen from the odd integers between 33 and 55. (They were chosen to be odd so that the formula L = R = (D + 5)/2 held with integer values of L and R.) Results of this first stage ranged from delta = 0.45127 to delta = 0.49001. The highest value of delta was for a 49-dimensional run. This first stage allowed weak codes to be weeded out manually.

The second stage of my procedure was taking the polynomial for the best delta value and using it as a new starting point. I eliminated all the other polynomials that I had generated. After each run of the Nelder-Mead algorithm, the new best delta-valued polynomial was taken to restart the method. In this way, the algorithm can be run indefinitely, beginning with an excellent polynomial, though with the improvement decaying somewhat per run.

The following is the best polynomial that we generated after stage two; it is in 49 dimensions, therefore L = R = 27. This means that our code would correct errors on a message where over 49.12% of the bits were erased:

delta-value: 0.49123208

Polynomial coefficients:

lambda(x) = 0.0990299219062345x + 0.05322429856029543x^2 + 0.055277203574016234x^3 + 0.02844910724153615x^4 + 0.02623612386335452x^5 + 0.00974806698319024x^6 + 0.043625918242555586x^7 + 0.03731895663295148x^8 + 0.03031150462120532x^9 + 0.037309402010523x^10 + 0.029597164315280516x^11 + 0.021166600459527427x^12 + 0.03415225624197976x^13 + 0.041320608127954954x^14 + 0.009263507615627463x^15 + 0.018522044947541907x^16 + 0.008887897283874187x^17 + 1.8583708480587282E-6x^18 + 0.0024302037195027106x^19 + 4.875780862482407E-6x^20 + 1.4847922748886244E-5x^21 + 1.278790267836231E-4x^22 + 1.8314221090588438E-6x^23 + 4.255863080467144E-8x^24 + 1.7067227839464666E-5x^25 + 0.4139608113430263x^26

rho(x) = 7.632676934932192E-9x + 8.540515922834572E-9x^2 + 1.347032203221241E-7x^3 + 0.018706481130486223x^4 + 0.14176365914019248x^5 + 0.048483176456814533x^6 + 0.018207299247641195x^7 + 0.0019723149738321006x^8 + 0.03351827937913176x^9 + 3.5897497511212437E-4x^10 + 2.192759770277235E-4x^11 + 3.094560606056156E-4x^12 + 3.6525537046007547E-4x^13 + 6.241422929251609E-4x^14 + 0.0024904768128954646x^15 + 0.0010594188309099165x^16 + 2.447810869179167E-4x^17 + 0.0019071855087696428x^18 + 9.728748941992965E-4x^19 + 0.0035607085025847664x^20 + 8.074336119955476E-4x^21 + 0.002270527070515199x^22 + 0.0014808462429242576x^23 + 3.4174652757254845E-4x^24 + 0.005969329475817297x^25 + 0.714366205554256x^26

The next section will be devoted to the analysis of this above polynomial. After the first stage, the above polynomials had the following values:

delta-value: 0.49001044

Polynomial coefficients:

lambda(x) = 0.09913944229931859x + 0.05276368935878339x^2 + 0.055972280575682924x^3 + 0.029674260338107834x^4 + 0.017260488750503292x^5 + 0.025448559738954105x^6 + 0.04515638495454221x^7 + 0.04030333756774207x^8 + 0.023131373103644444x^9 + 0.033836826128835755x^10 + 0.024174237516130185x^11 + 0.010565020729052915x^12 + 0.015051741615809872x^13 + 0.022709753544240166x^14 + 0.008548520034345693x^15 + 0.015265273263572396x^16 + 0.02854094895193963x^17 + 0.02263104851684256x^18 + 0.013927898276636303x^19 + 0.07122136853087355x^20 + 0.02153165264315276x^21 + 0.040808253617859384x^22 + 0.010714065924932058x^23 + 0.002728620676797656x^24 + 0.015153512175180905x^25 + 0.2537414411665194x^26

rho(x) = 3.6513918555414193E-9x + 5.062364731594058E-7x^2 + 0.00574684527124305x^3 + 0.028838659620213876x^4 + 0.11906353643723967x^5 + 0.0480945379927659x^6 + 0.022608297965228147x^7 + 0.0010237641058132966x^8 + 0.0336234867070067x^9 + 1.5748300415421974E-6x^10 + 0.003049627548126765x^11 + 0.001942286492248885x^12 + 7.533265390057562E-5x^13 + 2.917591878192492E-7x^14 + 8.144447734694117E-4x^15 + 0.001000888374498451x^16 + 6.469655073957167E-4x^17 + 7.214491255120121E-4x^18 + 0.0014739803468346929x^19 + 0.0026517581811077x^20 + 0.0026718342756574406x^21 + 0.0026424921242295726x^22 + 0.0011782712186155905x^23 + 1.1747323851536007E-4x^24 + 0.007483472621880171x^25 + 0.7145282189414026x^26

For comparison, the following are the polynomials for the second best delta value before the weeding process (this is in 43 dimensions):

delta-value: 0.48937541

Polynomial coefficients:

lambda(x) = 0.11274587736727337x + 0.06498455715355583x^2 + 0.03969428250127628x^3 + 0.048961038412381516x^4 + 0.028189656203807884x^5 + 0.03628565340232763x^6 + 0.039921651057134x^7 + 0.048561615545643276x^8 + 0.01035366914158521x^9 + 0.03930103848736154x^10 + 0.01204419343378802x^11 + 0.0026297657792068107x^12 + 0.007466072446663316x^13 + 1.1199455054028839E-6x^14 + 8.821403829892515E-5x^15 + 1.7479984313786059E-4x^16 + 0.03459305011604132x^17 + 0.021583402329676413x^18 + 0.004596632861824698x^19 + 0.044462415543915126x^20 + 0.04567687942127912x^21 + 0.12447510904948544x^22 + 0.23320930591883104x^23

rho(x) = 2.3126878301304998E-7x + 1.1951953253748004E-8x^2 + 2.2733910865119922E-4x^3 + 0.030190568328413908x^4 + 0.1772726933792846x^5 + 0.016180678882956462x^6 + 0.0060195214101926246x^7 + 0.003991365160585282x^8 + 0.01969672231078712x^9 + 0.004628767552064218x^10 + 0.001422882353708355x^11 + 0.008258629068767844x^12 + 0.01995513667205749x^13 + 9.895079953920287E-4x^14 + 0.0017074274635243986x^15 + 2.6541462280539835E-4x^16 + 1.345166853321739E-6x^17 + 0.0014414245548813587x^18 + 0.0019873824492244265x^19 + 0.0010053649143326932x^20 + 1.205146724011483E-4x^21 + 7.158366228945121E-4x^22 + 0.7039212340894853x^23

Given time, this polynomial could have had stage two applied to it, in which case the delta value could have been considerably higher.

Analysis

There are two main ways for us to analyze our lambdas and rhos.

1. Theoretically
   a. To determine whether a codeword is capable of being decoded after having passed through the erasure channel with probability of erasure p_0, we need to make sure that the plot of

          y = sum_i rho_i (1 - (1 - x)^(i-1))

      lies above the plot of

          x = p_0 * lambda(y) = p_0 * sum_i lambda_i y^(i-1)

      at all points [4]. These two formulas have been derived for us in the Lecture Notes for lecture 17. This is an extremely accurate way of determining whether all the errors can be corrected.

2. Experimentally
   a. This can be done simply, by using our lambdas and rhos to create a random encoding matrix. Using this matrix, we can encode some codewords to send over the erasure channel with various probabilities of erasure. Upon running the decoding algorithm, it would not be hard to tell if the decoder was capable of decoding at a given probability of erasure.

For the sake of this paper, I shall test the polynomials using the first method. In addition, I will make use of the program's outputted value of delta; this should be the first probability where the lines cross. The polynomial that we generated can correct messages with up to a 49.12% erasure rate, as outputted by the program. Here is the graph of lambda(x) and rho(x) at p = .42, the value where a normal LDPC begins to fail:

[Figure: the two curves at p = .42]

[4] Lecture Notes: Lecture #17.

Here is the graph at p = .48 (zoomed in on the closest point):

[Figure: the two curves at p = .48, zoomed in on the closest point]

Here is the graph at p = .49 (again zoomed in, this time even further), a successful value, yet somewhat close to the failure point:

[Figure: the two curves at p = .49, zoomed in further]

Finally, here is the graph at p = .495, a value at which correction will fail; the lines cross:

[Figure: the two curves at p = .495, showing the crossing]
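The graphical test above can also be run numerically: applying the condition of equation 5 with delta replaced by p, decoding at erasure probability p should succeed when p * lambda(1 - rho(1 - x)) < x on a grid of points in (0, p]. The Java sketch below is our own illustration (coefficient arrays and names are assumptions); its small main() checks a regular 3-6 code, which passes at p = 0.42 and fails at p = 0.45, consistent with the roughly 42% figure quoted earlier.

public class ThresholdCheckDemo {
    // lambda[i], rho[i] are edge-degree fractions (index 0 unused), as in the tables above.
    // Returns true if p * lambda(1 - rho(1 - x)) < x holds on a grid of points in (0, p],
    // i.e. the condition of equation 5 with delta replaced by p.
    static boolean decodes(double[] lambda, double[] rho, double p, int points) {
        for (int j = 1; j <= points; j++) {
            double x = p * j / points;
            double y = p * poly(lambda, 1 - poly(rho, 1 - x));
            if (y >= x) return false;   // the curves touch or cross: decoding is expected to fail
        }
        return true;
    }

    // Generating-function evaluation: sum_i c[i] * x^(i-1).
    static double poly(double[] c, double x) {
        double value = 0, power = 1;
        for (int i = 1; i < c.length; i++) { value += c[i] * power; power *= x; }
        return value;
    }

    public static void main(String[] args) {
        // A regular 3-6 code: lambda_3 = 1, rho_6 = 1. Its threshold is roughly 0.43.
        double[] lambda = new double[7], rho = new double[7];
        lambda[3] = 1; rho[6] = 1;
        System.out.println("p = 0.42: " + decodes(lambda, rho, 0.42, 1000));
        System.out.println("p = 0.45: " + decodes(lambda, rho, 0.45, 1000));
    }
}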

Conclusion

Given an ample amount of time to run the algorithm, I believe that the Nelder-Mead method will produce extremely good polynomials for Low Density Parity Check coding. At the time that I had to stop the algorithm to compile the results, delta was still increasing by a few thousandths of a percent per iteration. The polynomials that our program did generate are capable of correcting more than a 49.12% bit erasure rate. This value is better than the linear programming approach that was used in the Design of Efficient Erasure Codes with Differential Evolution paper; none of those codes had a delta greater than 48.86%. Our codes do not appear to be as good as those that used differential evolution. This may not be the case, however, as the differential evolution codes were run on the order of 2,000,000 iterations, while ours only ran a few hundred thousand iterations. This is especially true because the Nelder-Mead method is known to be one of the slowest optimization methods [5].

Another way for us to improve our codes would be to allow infeasible points during the iterations, and then filter them out before outputting the polynomials. Finally, a different algorithm for determining the starting simplex could give drastically different results. Our starting simplex, though in the middle of the dimensional space, could only correct erasure rates in the mid 30% range.

[5] Numerical Recipes in C.

References

Luby, Mitzenmacher, Shokrollahi, Spielman. "Efficient Erasure Correcting Codes." IEEE Transactions on Information Theory, Vol. 47, No. 2, February 2001.

Press, et al. Numerical Recipes in C. Cambridge University Press, 1992.

Shokrollahi, Storn. "Design of Efficient Erasure Codes with Differential Evolution."

Spielman. 18.413 Error Correcting Codes Lecture Notes.

Appendix: Java Code

Our Java program's source code is attached.