IMAGE MATCHING USING RELATIONAL GRAPH REPRESENTATION

LAI CHUI YEN


IMAGE MATCHING USING RELATIONAL GRAPH REPRESENTATION

LAI CHUI YEN

A thesis submitted in fulfilment of the requirements for the award of the degree of Master of Science (Computer Science)

Faculty of Computer Science and Information System
Universiti Teknologi Malaysia

JUNE 2005

UNIVERSITI TEKNOLOGI MALAYSIA
PSZ 19:16 (Pind. 1/97)

BORANG PENGESAHAN STATUS TESIS υ

JUDUL: IMAGE MATCHING USING RELATIONAL GRAPH REPRESENTATION
SESI PENGAJIAN: 2004/2005

Saya LAI CHUI YEN (HURUF BESAR) mengaku membenarkan tesis (PSM/Sarjana/Doktor Falsafah)* ini disimpan di Perpustakaan Universiti Teknologi Malaysia dengan syarat-syarat kegunaan seperti berikut:

1. Tesis adalah hakmilik Universiti Teknologi Malaysia.
2. Perpustakaan Universiti Teknologi Malaysia dibenarkan membuat salinan untuk tujuan pengajian sahaja.
3. Perpustakaan dibenarkan membuat salinan tesis ini sebagai bahan pertukaran antara institusi pengajian tinggi.
4. **Sila tandakan ( √ )

SULIT (Mengandungi maklumat yang berdarjah keselamatan atau kepentingan Malaysia seperti yang termaktub di dalam AKTA RAHSIA RASMI 1972)
TERHAD (Mengandungi maklumat TERHAD yang telah ditentukan oleh organisasi/badan di mana penyelidikan dijalankan)
TIDAK TERHAD

Disahkan oleh

(TANDATANGAN PENULIS)
Alamat Tetap: 5, JALAN 33, DESA JAYA, KEPONG, KUALA LUMPUR.
Tarikh: 12 JUN 2005

(TANDATANGAN PENYELIA)
PROF. MADYA DAUT BIN DAMAN (Nama Penyelia)
Tarikh: 12 JUN 2005

CATATAN: * Potong yang tidak berkenaan. ** Jika tesis ini SULIT atau TERHAD, sila lampirkan surat daripada pihak berkuasa/organisasi berkenaan dengan menyatakan sekali sebab dan tempoh tesis ini perlu dikelaskan sebagai SULIT atau TERHAD. υ Tesis dimaksudkan sebagai tesis bagi Ijazah Doktor Falsafah dan Sarjana secara penyelidikan, atau disertasi bagi pengajian secara kerja kursus dan penyelidikan, atau Laporan Projek Sarjana Muda (PSM).

I hereby declare that I have read this thesis and in my opinion this thesis is sufficient in terms of scope and quality for the award of the degree of Master of Science (Computer Science).

Signature:
Name of Supervisor: Prof. Madya Daut bin Daman
Date: 12 JUNE 2005

BAHAGIAN A - Pengesahan Kerjasama*

Adalah disahkan bahawa projek penyelidikan tesis ini telah dilaksanakan melalui kerjasama antara _______ dengan _______

Disahkan oleh:
Tandatangan:
Tarikh:
Nama:
Jawatan: (Cop rasmi)

* Jika penyediaan tesis/projek melibatkan kerjasama.

BAHAGIAN B - Untuk Kegunaan Pejabat Sekolah Pengajian Siswazah

Tesis ini telah diperiksa dan diakui oleh:

Nama dan Alamat Pemeriksa Luar: Prof. Madya Dr. Md. Yazid Bin Mohd Saman, Fakulti Sains dan Teknologi, Kolej Universiti Sains dan Teknologi Malaysia, Mengabang Telipot, Kuala Terengganu, Terengganu

Nama dan Alamat Pemeriksa Dalam I: Prof. Madya Dr. Dzulkifli Bin Mohamad, Fakulti Sains Komputer dan Sistem Maklumat, Universiti Teknologi Malaysia, UTM Skudai, Johor

Pemeriksa Dalam II:
Nama Penyelia lain (jika ada):

Disahkan oleh Penolong Pendaftar di SPS:
Tandatangan:
Tarikh:
Nama: GANESAN A/L ANDIMUTHU

I declare that this thesis entitled Image Matching Using Relational Graph Representation is the result of my own research except as cited in the references. The thesis has not been accepted for any degree and is not concurrently submitted in candidature of any other degree.

Signature:
Name: Lai Chui Yen
Date: 12 JUNE 2005

To my mother and father, with gratitude

ACKNOWLEDGEMENTS

This thesis was completed with the contribution of many people, to whom I want to express my sincere gratitude. I am especially indebted to my supervisor, Prof. Madya Daut Daman, who gave me the opportunity to work in his research group. He provided a working environment with great facilities and opportunities to meet interesting people. I thank him for his useful suggestions and the freedom he gave me during my research.

I am grateful to Universiti Teknologi Malaysia for sponsoring my studies. I would also like to take this opportunity to thank all the lecturers and staff of the Faculty of Geoinformation Science and Engineering and the Faculty of Computer Science and Information System who have taught or assisted me.

I wish to thank all my lab mates who turned these few years into a pleasant time. Many thanks go to Nor Abidah Rahmat and Chu Kai Chuen for their motivation; we have gone through a lot of hard times together. I thank Leong Chung Ern for the idea of using MATLAB.

I am deeply grateful to my family for their unconditional love through the years. I express my deepest gratitude to the many friends who were not directly involved in the research but gave me encouragement and much timely help: Chia Yun Lee, Tan Chooi Ee, Lim Yu Jian and Lau Bok Lih. Ong Boon Sheng deserves a special mention for his continual patience, boundless encouragement and support during this study.

Finally yet importantly, I want to extend my grateful appreciation to all the people who have contributed in some way to the completion of this thesis.

ABSTRACT

Image matching is a process of establishing correspondence between the primitives of two or more images that capture a scene from different viewing positions. Various image matching techniques using image features are known in the literature, but feature-based matching algorithms cannot easily overcome the problem of matching ambiguities. This study presents an image matching technique that uses the structural descriptions of an image. Structural descriptions consist of the lines and the inter-line relationships in the line-extracted image. Three conditions of inter-line relationship, namely ordering, intersection and co-linearity, were defined and derived in this study. The method involves representing the structural descriptions of an image in a relational graph and matching between relational graphs to perform image matching. The methodology consists of six steps: (1) input of the image, (2) line segment extraction from the image, (3) interpretation and derivation of structural descriptions from the line-extracted image, (4) construction of a relational graph to represent the structural descriptions, (5) derivation of an association graph from the relational graphs to perform relational graph matching, and (6) search for the largest maximal clique in the association graph to determine the best matching. Hence, image matching is transformed into a relational graph matching problem in this study. Experiments were carried out to evaluate the applicability of incorporating structural information into the image matching algorithm. The data consist of 14 pairs of stereo images. From the results obtained, it was found that the use of the structural information of an image is plausible only for matching images of simple scenes. The matching accuracy for images of complicated scenes remains low even after the incorporation of inter-line descriptions into the image matching algorithm.

ABSTRAK

Pemadanan imej adalah satu proses untuk menubuhkan persamaan antara primitif daripada imej-imej yang menangkap satu pemandangan dari kedudukan pandang yang berlainan. Pelbagai teknik pemadanan imej yang menggunakan ciri imej telah diketahui dalam literatur. Pemadanan imej berasaskan ciri tidak dapat mengatasi masalah keraguan pemadanan dengan mudah. Kajian ini menyampaikan satu teknik pemadanan imej yang menggunakan maklumat struktur imej. Maklumat struktur adalah terdiri daripada garisan dan hubungan antara garisan dalam imej penyarian garisan. Tiga keadaan bagi hubungan antara garisan yang dinamakan aturan, persilangan dan co-linearity telah didefinisikan dan diperolehi dalam kajian ini. Kaedah ini melibatkan perwakilan maklumat struktur daripada imej dalam graf hubungan dan pemadanan graf hubungan bagi memadankan imej. Metodologi adalah terdiri daripada enam langkah: (1) kemasukan data imej, (2) penyarian segmen garisan daripada imej, (3) interpretasi dan perolehan maklumat struktur daripada imej penyarian garisan, (4) pembinaan graf hubungan untuk mewakili maklumat struktur, (5) perolehan graf gabungan daripada graf-graf hubungan untuk memadankan graf hubungan, dan (6) pencarian maximal clique yang terbesar dalam graf gabungan untuk menentukan pemadanan terbaik. Dengan itu, pemadanan imej telah diubah sebagai masalah pemadanan graf hubungan dalam kajian ini. Eksperimen telah dijalankan untuk menilai kesesuaian untuk mengintegrasi maklumat struktur ke dalam algoritma pemadanan imej. Data adalah terdiri daripada 14 pasang imej stereo. Daripada hasil yang diperolehi, didapati bahawa penggunaan maklumat struktur adalah munasabah hanya untuk imej yang mempunyai pemandangan yang tidak kompleks. Ketepatan pemadanan imej bagi imej yang mempunyai pemandangan kompleks tetap rendah walaupun selepas menggabungkan hubungan antara garisan ke dalam algoritma pemadanan imej.

TABLE OF CONTENTS

DECLARATION
DEDICATION
ACKNOWLEDGEMENTS
ABSTRACT
ABSTRAK
TABLE OF CONTENTS
LIST OF TABLES
LIST OF FIGURES
LIST OF SYMBOLS

1 INTRODUCTION
  1.1 Introduction
  1.2 Problem Background
  1.3 Motivations
  1.4 Problem Statement
  1.5 Objectives
  1.6 Scope
  1.7 Research Contributions
  1.8 Organization of the Thesis

2 LITERATURE REVIEW
  2.1 Introduction
  2.2 Definition of Digital Image Matching
  2.3 An Overview of Image Matching Approaches
    Area-Based Image Matching
    Feature-Based Image Matching
    Structural-Based Image Matching
  2.4 Previous Work
    Matching Primitives
      Image Windows as Matching Primitives
      Image Features as Matching Primitives
      Structural Descriptions as Matching Primitives
    Matching Constraints and Strategies
    Measure of Matching
  2.5 Discussion on the Image Matching Approaches
  2.6 Image Matching: Problem Areas

3 THEORETICAL FRAMEWORK AND METHODOLOGY
  3.1 Introduction
  3.2 Overall Methodology
  3.3 Line Segment Extraction
  3.4 Derivation of Structural Descriptions from the Line-Extracted Image
    Line Segment Labelling
    Derivation of Inter-Line Relationship
      Ordering Relationship
      Intersection Relationship
      Co-linearity Relationship
  3.5 Relational Graph Representation
  3.6 The Construction of Relational Graph
  3.7 The Definition of Relational Graph Matching
  3.8 The Association Graph and Clique Finding Technique
  3.9 The Construction of Association Graph for Relational Graph Matching
    Building the Nodes of Association Graph
    Building the Arcs of Association Graph
  3.10 The Matching Strategy
  3.11 The Complexity

4 IMPLEMENTATION
  4.1 Introduction
  4.2 The System
  4.3 Feature Extraction Module
  4.4 Structural Descriptions Derivation Module
    Line Segment Labelling
    Derivation of Inter-Line Relationship
      Ordering Relationship
      Intersection and Co-linearity Relationship
  4.5 Relational Graph Module
  4.6 Association Graph Module
  4.7 Clique-Finding Module

5 RESULT AND DISCUSSION
  5.1 Introduction
  5.2 Image Data
  5.3 Initial Experiment on the firstn Parameter
  5.4 Experiment 1: Stereo Images on a House
  5.5 Experiment 2: Stereo Images on a House
  5.6 Experiment 3: Stereo Images on a Block
  5.7 Experiment 4: Stereo Images on a Note
  5.8 Experiment 5: Stereo Images on Some Rectangles
  5.9 Experiment 6: Stereo Images on a Book and a Block
  5.10 Experiment 7: Stereo Images on a Gear
  5.11 Experiment 8: Stereo Images on a Gear
  5.12 Experiment 9: Stereo Images on a Rubik Cube and a Block
  5.13 Experiment 10: Stereo Images on Arch of Blocks
  5.14 Experiment 11: Stereo Images on a Telephone and a Cup
  5.15 Experiment 12: Stereo Images on a Tennis Ball, an Ice Chest and Two Cylinders
  5.16 Experiment 13: Stereo Images of a Room
  5.17 Experiment 14: Stereo Images of a Room
  5.18 Discussions
  5.19 Constraint and Drawback

6 CONCLUSIONS
  6.1 Summary
  6.2 Conclusion
  6.3 Suggestions for Further Research

REFERENCES

LIST OF TABLES

5.1 The image data used in the experiments
5.2 The ranking of left-to-right matching pairs based on B_lr
5.3 The resulting relational graph
5.4 The matching results

LIST OF FIGURES

1.1 The plotting of two corresponding imaged points, m and m′, in two images, cast by the same physical point M in 3-D space, from different viewing positions, C and C′
1.2 The two corresponding imaged points, m and m′, in two images, cast by the same physical point M in the real scene (3-D space), from different viewing positions, C and C′
2.1 Three cameras targeting the scene of a building from different viewing positions (plan view)
2.2 Q is the homologous or corresponding point to P
2.3 Different approaches of image matching
2.4 Area-based matching
3.1 The methodology of the study
3.2 Generation of the label matrix from the line-extracted image
3.3 Detection of the ordering relationship for the line labelled 5; the displacement is carried out on both sides of the line labelled 5 until an edge pixel belonging to another neighbouring line is encountered
3.4 A line segment is defined by two end points
3.5 Some possible intersections between line segments derived for this study
3.6 The co-linearity condition between line segments derived for this study
3.7 A graph representation of a relational structure
3.8 Line-extracted image with lines labelled l1 to l
3.9 The corresponding relational graph representing the structural information of the line segment image (of Figure 3.8), where t1 denotes the relation type ordering, t2 the relation type intersection, and t3 the relation type co-linearity
3.10 Three different classes of graph matching
3.11 Some examples of cliques
3.12 The definitions of compatibility in terms of the co-linearity relation
3.13 Two line-extracted images to be matched
3.14 Left relational graph; the represented inter-node relations are: to the left of (labelled t1), to the right of (labelled t2), to the top of (labelled t3), to the bottom of (labelled t4), intersects with (labelled t5), and collinear with (labelled t6)
3.15 Right relational graph; the represented inter-node relations are: to the left of (labelled t1), to the right of (labelled t2), to the top of (labelled t3), to the bottom of (labelled t4), intersects with (labelled t5), and collinear with (labelled t6)
3.16 The resulting association graph
3.17 The corresponding lines between the left and right image
4.1 Modules in the system
4.2 Statements in the buildlabelmatrix function
4.3 Statements in the detectordering function
4.4 Statements in the computeoverlap subfunction
4.5 Statements in the detectconnection function
4.6 Statements in the initializesearcharea subfunction
4.7 Statements in the searchconnectlabel subfunction
4.8 Statements in the buildrelationalgraph function
4.9 Statements in the buildadjacencymatrix function
4.10 Statements in the plotrelationalgraph function
4.11 Statements in the buildassociationnode function
4.12 Statements in the buildassociationarc function
4.13 Statements in the propagatearc function
5.1 The correct correspondence between the left and right image, indicated by 23 sets of left-to-right matching pairs labelled with the corresponding numbers
5.2 Some results of the first experiment
5.3 Association graph resulting from the first experiment
5.4 The matched lines without propagation from the first experiment
5.5 The association graph without propagation from the first experiment
5.6 Some results of the second experiment
5.7 Association graph resulting from the second experiment
5.8 Some results of the third experiment
5.9 Association graph resulting from the third experiment
5.10 Some results of the fourth experiment
5.11 Association graph resulting from the fourth experiment
5.12 Some results of the fifth experiment
5.13 Association graph resulting from the fifth experiment
5.14 Some results of the sixth experiment
5.15 Association graph resulting from the sixth experiment
5.16 Some results of the seventh experiment
5.17 Association graph resulting from the seventh experiment
5.18 Some results of the eighth experiment
5.19 Association graph resulting from the eighth experiment
5.20 Some results of the ninth experiment
5.21 Association graph resulting from the ninth experiment
5.22 Some results of the tenth experiment
5.23 Association graph resulting from the tenth experiment
5.24 Some results of the eleventh experiment
5.25 Association graph resulting from the eleventh experiment
5.26 Some results of the twelfth experiment
5.27 Association graph resulting from the twelfth experiment
5.28 Some results of the thirteenth experiment
5.29 Association graph resulting from the thirteenth experiment
5.30 Some results of the fourteenth experiment
5.31 Association graph resulting from the fourteenth experiment
5.32 The ordering relationship is not well-defined for some conditions

LIST OF SYMBOLS

A - Searching area
B_lr - Similarity measure
d - Euclidean distance
e - Edges (arcs)
E - Set of edges (arcs)
G - Relational graph
h - The length of a line (in number of pixels)
l - Line
l - Line in the left image
l:r - Left-to-right matching pair
max - Maximum operation
min - Minimum operation
m_c - The slope of the line under consideration
m_s - The searching slope
nedge - The number of edges in an image
nelement - The number of elements in an adjacency matrix
nline - The number of lines in an image
nnode - The number of nodes in a relational graph
nnz - The number of non-zero elements in an adjacency matrix
nt - The number of relations held by a line
p - Properties
P - Set of properties
r - Line in the right image
(r_e, c_e) - The ending pixel of a line
(r_s, c_s) - The starting pixel of a line
S - Relational structure
t - Relations
T - Set of relations
v - Elements (nodes)
V - Set of elements (set of nodes)
θ - The orientation of a line
ρ - The density of the adjacency matrix of a relational graph

CHAPTER 1

INTRODUCTION

1.1 Introduction

Generally, image matching is the process of automatically establishing correspondence between the primitives of two or more images that capture, at least partly, the same object or scene from different viewing positions. Image matching can also be described as a process of associating the content or primitives of two or more images that capture an object or scene from different positions (Julien, 1999). Image matching can be illustrated as the process of identifying the corresponding points of two images (see Figure 1.1 and Figure 1.2) or more images, which are cast by the same physical point in three-dimensional (3-D) space from different viewing positions (Medioni and Nevatia, 1985).

Figure 1.1: The plotting of two corresponding imaged points, m and m′, in two images, cast by the same physical point M in 3-D space, from different viewing positions, C and C′

Figure 1.2: The two corresponding imaged points, m and m′, in two images, cast by the same physical point M in the real scene (3-D space), from different viewing positions, C and C′

Image matching is an integral part of many computer vision tasks such as image registration, feature tracking, and the recovery of 3-D structure from stereo images, multiple images or image sequences. For instance, the first step in recovering the 3-D information of a static scene from a pair of stereo images is the matching of a set of
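The geometry of Figure 1.1 can be made concrete with a small numerical sketch. The snippet below, in which the camera positions, focal length and the point M are all made-up values (none of them come from the thesis), projects one 3-D point into two pinhole cameras separated by a horizontal baseline; the same physical point lands at two different image positions, and their horizontal difference (the disparity) encodes the depth of M.

```python
# Toy illustration of Figure 1.1: one 3-D point M projects to two
# different image points m and m' under two pinhole cameras separated
# by a horizontal baseline.  All numeric values are hypothetical.

def project(point, cam_x, focal=1.0):
    """Project a 3-D point (X, Y, Z) into the image plane of a pinhole
    camera located at (cam_x, 0, 0) looking down the Z axis."""
    x, y, z = point
    return (focal * (x - cam_x) / z, focal * y / z)

M = (2.0, 1.0, 4.0)           # physical point in the scene
m = project(M, cam_x=0.0)     # image point in the left view (camera C)
m2 = project(M, cam_x=1.0)    # image point in the right view (camera C')

# Same physical point, different image coordinates; the disparity
# equals focal * baseline / depth = 1 * 1 / 4 here.
disparity = m[0] - m2[0]
print(m, m2, disparity)
```

Recovering depth from such disparities is exactly the triangulation step that a stereo matcher enables once correspondences are established.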

identifiable corresponding details between the images, where a number of corresponding image primitives are used to match the images to each other and to establish a local triangulation that recovers the depth of the scene. In addition, establishing correspondence between the images of an image sequence is also a key step in recovering 3-D structure from image sequences, where the correspondence is used to calculate the motion parameters of the camera with respect to the objects in the scene and to reconstruct the structure of the objects in the dynamic scene.

Over the years, a broad range of image matching techniques has been proposed for various types of data and many domains of application, resulting in a large body of research. Some interesting areas are the recovery of 3-D structure from stereo images or image sequences for autonomous vehicle navigation, industrial automation and augmented reality.

Approaches to image matching can be broadly classified into two categories: area-based matching and feature-based matching. Area-based matching uses intensity profiles or grey value templates as the matching primitive. More recently, however, image features have been extensively applied to image matching to establish image correspondence. In the feature-based approach, features are first extracted from the images, and the matching process is then based on the attributes associated with the extracted features. Feature-based matching alone might not handle the problem of matching ambiguities easily; some additional constraints must be imposed to control the search for matching candidates and reduce the possibility of error caused by ambiguous matching.

1.2 Problem Background

Ideally, it is assumed that sets of corresponding details and coherent collections of pixels between images can always be determined, thereby providing a reliable matching between images.

Ambiguous matching might happen if some image primitives that are visible in one image are partially or totally occluded in the other. In addition, ambiguities might arise if sets of corresponding details between images are unavailable, scarce, or incompatible. Sometimes, a local primitive in one image matches equally well with more than one primitive in the other image (known as one-to-more mapping). All these situations lead to ambiguities in matching, where a one-to-one mapping between image primitives is difficult to establish.

Image matching can be complicated by several factors related to the geometric and radiometric properties of the images. For instance, when working with stereo images captured over a scene from different viewing positions, geometric distortion in the images and variation in image attributes and scene illumination can contribute to ambiguities in the matching result (Salari and Sethi, 1990). Sometimes, periodic structures in the scene can confuse the image matching process, because a feature in one image may be confused with features from nearby parts of the structure in the other image, especially if the image features generated by these structures are close together compared with the disparity of the features (Barnard and Fischler, 1982). In fact, these ambiguous conditions are likely to occur when feature extraction does not provide reliable results, when image primitives are missing or partly occluded due to noise or shadow in the image, or when images are geometrically distorted due to different perspective viewpoints (Medioni and Nevatia, 1985). All these geometric or radiometric changes in images can in turn lead to wrong correspondences and cause the matching result to drift away from the original correspondence set.

Therefore, in order to control the search for matching candidates and minimize the occurrence of false matches, some matching constraints should be imposed in conjunction with the matching algorithm (discussed further in Chapter 2). Nevertheless, the matching of images of complicated scenes remains difficult even after the application of these matching constraints and strategies in the matching algorithm.

There are a number of reasons for this: (1) feature detection is not perfectly reliable, so false features may be detected in the images; (2) a feature in one image may be only partially visible or fully occluded in the other image due to shadow, noise or failure in feature extraction, making it difficult to find a one-to-one mapping between images; (3) an object may look different and vary in its attributes across images due to different viewing positions and perspective distortion; and (4) ambiguities may occur, caused by repetitive patterns in the scene.

Many feature-based matching approaches have to go through processes such as edge detection, edge linking, binarization and thinning during feature extraction, so a feature-based image matching algorithm relies heavily on the quality of the image and the performance of feature extraction. Thus, one way to tackle the problem is to perform image matching on the feature-extracted image without relying solely on the identifiable primitives in it. This means that the proposed method should not be constrained much by the quality of the image, the performance of the feature extraction algorithm, or the quality of the extracted features. It should also be capable of working with two image descriptions that are not likely to have a strict one-to-one correspondence at the feature extraction level. Considering these needs, a structural-based matching technique is proposed in this study. The study involves the interpretation and derivation of structural descriptions from an image, the construction of a relational graph to represent the structural descriptions, and the matching between relational graphs. Hence, image matching is carried out as relational graph matching in this study.

1.3 Motivations

Image matching is an important task in scene analysis and computer vision, in which two or more images of the same scene are to be matched.

Given two or more images, the matching of images and other closely related tasks, such as image registration, pattern detection and localization, and common pattern discovery, can be defined. Image registration finds the transformation under which one image spatially fits best to another. Pattern detection and localization detects whether a small image is a sub-image of another image and locates the position of the sub-area. Common pattern discovery finds the maximum common sub-image of two or more images.

Beyond that, a broad set of applications also motivates research in image matching. Related areas include image registration, change detection, map updating, feature tracking, stereo matching, and the recovery of structure from image sequences for autonomous navigation. These research areas have different purposes and levels of difficulty and, as a result, are often associated with different approaches and solutions. They differ in their choice of primitives and the criteria used to resolve ambiguities, and each method has its own affinity function. The configuration of a method depends on the correspondence problem and the complexity of the scene. Commonly, there are constraints and schemes that can help reduce the number of false matches. From the review of previous works (see Chapter 2), many open problems still exist in image matching.

1.4 Problem Statement

This study is devoted to interpreting and deriving structural descriptions from an image. It also looks into the incorporation of structural descriptions into image matching, in order to compensate for failures in feature extraction, occlusion, noise, varied image acquisition conditions, dissimilarity between images, and similar problems occurring in feature-extracted images. To tackle these problems, the solution should not be constrained much by the quality of the image, the performance of feature extraction, or the quality of the extracted features.

1.5 Objectives

The objectives of the study are:

(1) To derive the structural descriptions of an image.
(2) To represent the structural descriptions of an image using a relational graph.
(3) To perform image matching based on the constructed relational graphs.

1.6 Scope

This study focuses firstly on the derivation of the structural descriptions of an image. The structural descriptions of an image are defined as the image features and their interrelationships in the image. Line features are extracted from a greyscale image; no pre-processing of the greyscale image is done. Next, the derivation of relationships between the extracted line features is studied. The detection of inter-line relationships is based on the line extraction result, and line segment labelling is studied prior to the derivation of the inter-line relationships. The inter-line relationships derived for this study are confined to ordering, intersection and co-linearity.

In addition, emphasis is given to the utilization of the structural descriptions of the line-extracted image in image matching. Image matching is based on the result of the first phase of the study; it involves the representation of the derived structural descriptions of an image in a relational graph, and relational graph matching to implement image matching. Experiments are run to evaluate the applicability of incorporating structural information into image matching. The data consist of 14 pairs of stereo images, confined to non-metric images taken without any pre-acquisition setting.
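Two of the three inter-line relations named above, intersection and co-linearity, can be sketched as geometric predicates on line segments given by their end points. The functions and the tolerance below are illustrative assumptions only; the thesis itself derives the relations from the label matrix of the line-extracted image (see Chapter 4), and the ordering relation is omitted from this sketch.

```python
# Hedged sketch of the intersection and co-linearity inter-line
# relations between 2-D line segments.  Thresholds and the exact
# predicate forms are assumptions made for this illustration.

def cross(o, a, b):
    """2-D cross product of vectors OA and OB."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def intersects(p1, p2, q1, q2):
    """True if segment p1-p2 properly crosses segment q1-q2."""
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def collinear(p1, p2, q1, q2, tol=1e-9):
    """True if all four end points lie on one straight line."""
    return abs(cross(p1, p2, q1)) < tol and abs(cross(p1, p2, q2)) < tol

print(intersects((0, 0), (2, 2), (0, 2), (2, 0)))  # crossing diagonals
print(collinear((0, 0), (1, 1), (2, 2), (3, 3)))   # same supporting line
```

In a pixel-based implementation the tolerance would be set in pixels rather than as an exact algebraic zero, since extracted line segments are noisy.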

1.7 Research Contributions

The work addressed in this study has contributed the following:

(1) An algorithm that transforms a feature-extracted image into its structural descriptions, represents the derived structural descriptions in a relational graph, and performs graph matching between these relational graphs, was proposed in this study.

(2) Three conditions of inter-line relationship in the line-extracted image, namely ordering, intersection and co-linearity, were defined and derived. The applicability and limitations of these inter-line relationships were analyzed.

(3) The idea of applying relational graph matching to image matching was introduced. The incorporation of the structural information of an image and the characteristics of relational graph representation in assisting image matching were studied and examined.

(4) Relational graph matching by forming an association graph structure and computing the largest maximal clique in the association graph was performed and evaluated.

1.8 Organization of the Thesis

The thesis is organized as follows. Chapter 1 is a brief introduction of the study; the background, motivation, problem statement, objectives and scope of the study are discussed in this chapter. Chapter 2 describes some background knowledge and reviews previous works dealing with image matching.
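The association-graph-and-clique idea in contribution (4) can be sketched briefly. Each node of the association graph pairs one line of the left image with one line of the right image, an arc joins two nodes whose pairings are mutually compatible, and the best overall matching is the largest maximal clique. The tiny graph and node names below are hypothetical, and basic Bron-Kerbosch enumeration is used as one standard way to find maximal cliques; it is not claimed to be the thesis's exact procedure.

```python
# Minimal sketch of relational graph matching via an association graph:
# nodes are candidate (left-line, right-line) pairings, arcs join
# mutually compatible pairings, and the largest maximal clique gives
# the best matching.  The example graph is hypothetical.

def bron_kerbosch(R, P, X, adj, cliques):
    """Enumerate all maximal cliques (basic Bron-Kerbosch recursion)."""
    if not P and not X:
        cliques.append(R)
        return
    for v in list(P):
        bron_kerbosch(R | {v}, P & adj[v], X & adj[v], adj, cliques)
        P = P - {v}
        X = X | {v}

# Association graph: "l1r1" means left line l1 paired with right line r1.
edges = {("l1r1", "l2r2"), ("l1r1", "l3r3"), ("l2r2", "l3r3"),
         ("l1r2", "l2r1")}
nodes = {n for e in edges for n in e}
adj = {n: set() for n in nodes}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

cliques = []
bron_kerbosch(set(), set(nodes), set(), adj, cliques)
best = max(cliques, key=len)  # largest maximal clique = best matching
print(sorted(best))
```

Here the three mutually compatible pairings form a triangle and win over the smaller, conflicting pairing clique, which mirrors how a consistent set of line correspondences outvotes an ambiguous one.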

Chapter 3 presents the methodology and theoretical framework of this study. It explains the steps of transforming the line-extracted image into its structural descriptions and representing the derived structural descriptions as a relational graph for the subsequent graph matching process. Chapter 4 reports on the implementation of the proposed approach; the methodology is implemented modularly by five computer modules, namely the feature extraction module, structural description derivation module, relational graph module, association graph module and clique-finding module. Chapter 5 presents and discusses the results of the experiments conducted with the proposed structural-based image matching technique. Chapter 6 summarizes and concludes the study and outlines topics for future work.

CHAPTER 2

LITERATURE REVIEW

2.1 Introduction

In this chapter, some background knowledge of image matching is given, and some previous works dealing with image matching are reviewed.

2.2 Definition of Digital Image Matching

Image matching refers to the task of associating, point to point, two or more images captured over the same object or scene from different viewing positions. Image matching is performed remarkably well by the human binocular vision system: humans perform this natural image matching when merging the two images seen by the eyes into a unique perceived one. Digital image matching substitutes this natural visual matching with an artificial vision system, in which the images are digital and the point-to-point association is performed by a computer program (Julien, 1999). Digital image matching is intended to associate the content or primitives of two or more images that capture a scene from different positions. More specifically, digital image matching can be defined as a process of automatically establishing correspondence between the primitives of two or more digital images that depict at

least partly the same object or scene from different viewing positions. An image primitive can be a grey value window, a low-level feature extracted from the images, or a high-level image description such as structural information (Heipke, 1996). In the remainder of this thesis, digital images are assumed to be available, and the term digital will therefore be omitted. Image matching is sometimes also referred to as image correspondence or image correlation (Dowman, 1996).

The scenario of image matching can be described further as follows. Suppose a scene of a real world or an artificial world is being captured. The objects in the scene are assumed to be static; that is, they do not move and do not change their shapes over time. A camera is moved in the scene, or a number of cameras target the scene, as illustrated in Figure 2.1. At certain short time intervals, or simultaneously, the camera(s) take(s) images of the scene. Hence, a number of images of one scene are acquired, each taken from a different camera position.

Figure 2.1: Three cameras targeting the scene of a building from different viewing positions (plan view)

The term point-to-point association mentioned in the foregoing paragraph indicates that for any point P in one image, the image matching algorithm can determine which point Q in the other image(s) represents the same detail, i.e. is homologous to P (see Figure 2.2). In this example, Q is known as the homologous or corresponding point to P, and hence P and Q are counted as one pair of homologous points. Hence, image matching is a process of establishing the

correspondence between each pair of visible homologous image points on a given pair of images (Walder, 2000).

Figure 2.2: Q is the homologous or corresponding point to P

Julien (1999) defined image matching as: given two or more digital views of an object, find automatically all the pairs of homologous details. He noted several necessary complements to this definition:

(1) The definition speaks of homologous details rather than points, because a detail is not necessarily punctual. For example, in the images of Figure 2.2, a corner of the house can be regarded as punctual, but an edge of a wall is linear and thus not punctual, while a detail such as the bush near the house is not punctual at all, and its description cannot be reduced to a single point.

(2) The definition speaks of all the pairs, but in practice a number of pairs large enough to describe the correspondence between images completely is sufficient. Furthermore, some details may have no visible homologous counterparts because of occluding objects.

(3) The definition does not assume stereoscopic views, because digital image matching can also apply to non-stereoscopic views, for instance in image registration. Neither is there any a priori assumption about the nature and geometry of the views; they can be digital photographs (close range or aerial), satellite images, or drawings.

(4) The definition should not be restricted to only two images; three or more images are common these days. For stereo, the image matching problem is called the stereo matching or stereo correspondence problem, whereas for non-stereo images it is regarded as general (multiple) image matching. For motion analysis, the problem is known as matching for image sequences.

(5) Since researchers are now developing digital matching on multiple images and image sequences (image motion), the definition should be refined as follows: given two or more digital views of an object, find automatically all the (or a number of) sets of homologous details.

2.3 An Overview of Image Matching Approaches

Many criteria can be adopted to classify the various possible approaches to digital matching. The main and most common criterion for classifying image matching algorithms is the primitives used in the matching (Barnard and Fischler, 1982; Medioni and Nevatia, 1985; Greenfeld and Schenk, 1989; Hannah, 1989; Dowman, 1996; Heipke, 1996). The distinction between matching primitives is probably the most prominent difference between the various image matching algorithms, partly because the selection of matching primitives influences the subsequent steps (i.e. the measure of match). Other classification schemes can be found elsewhere (Brown, 1992; Brown et al., 2003). Basically, image matching approaches fall into three broad categories according to the matching primitives: grey value windows, image features, and structural descriptions; the resulting algorithms are usually known as area-based matching, feature-based matching and structural-based matching, respectively (Dowman, 1996; Heipke, 1996; You and Bhattacharya, 2000).

Each category can be subdivided into local and global approaches. Within each category, a large number of different techniques are available in the literature, and no single method can be considered useful for all types of images. In image matching research there can be any combination of these approaches (see Figure 2.3), as well as hybrid methods that fuse different approaches. Background knowledge on the area-based, feature-based and structural-based matching approaches is given in the following subsections.

Figure 2.3: Different approaches of image matching

2.3.1 Area-Based Image Matching

As mentioned above, grey level windows serve as the matching primitives for area-based matching. Area-based matching techniques are the oldest and simplest of the image matching algorithms; area-based image matching is also known as intensity-based image matching. Ideally, one would like to find a corresponding pixel for each pixel in each image to be matched, but the semantic information conveyed by a single pixel is too low to resolve ambiguous matches, so an area or neighbourhood around each

pixel has to be considered. The match for this area is then found by searching the other image for a best match defined by a similarity measure (Medioni and Nevatia, 1985; Dowman, 1996). More specifically, area-based matching approaches match images by statistically comparing grey level properties, establishing a correspondence between image sub-areas according to the degree of similarity between windows. Essentially, the degree of similarity between image windows defines the justification and measure of matching; some strategy or mathematical function is used to measure this degree of similarity, as discussed in Section 2.4.3.

A basic procedure for the area-based approach is to take two patches from the images, located around the same point of detail, and use some strategy or mathematical function to determine the degree of correlation between the two. This is done by selecting a template window in the first image and then matching it against a number of nearly corresponding patches within a larger search window in the second image. The basic procedure is illustrated in Figure 2.4: Figure 2.4 (a) shows a typical small template window of the first image, Figure 2.4 (b) depicts the corresponding larger search window in the second image, Figure 2.4 (c) shows grey levels in the template window, and Figure 2.4 (d) shows grey levels in the search window with the corresponding window marked.
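The sliding-template procedure just described can be sketched in a few lines of Python. This is a minimal illustrative sketch, not code from the thesis: the grey values, window sizes and scanning order are invented for illustration, and the correlation coefficient is the standard normalized form.

```python
# Sketch of area-based matching: slide a small template across a larger
# search window and score each position with the correlation coefficient.
from math import sqrt

def correlation_coefficient(a, b):
    """Normalized correlation between two equal-length grey-value lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def best_match(template, search):
    """Return (row, col, score) of the best template position in search."""
    th, tw = len(template), len(template[0])
    sh, sw = len(search), len(search[0])
    flat = [template[i][j] for i in range(th) for j in range(tw)]
    best = (0, 0, -1.0)
    for r in range(sh - th + 1):          # (sh-th+1)*(sw-tw+1) positions,
        for c in range(sw - tw + 1):      # e.g. 25 for a 3 x 3 in a 7 x 7
            patch = [search[r + i][c + j] for i in range(th) for j in range(tw)]
            score = correlation_coefficient(flat, patch)
            if score > best[2]:
                best = (r, c, score)
    return best
```

As in the worked example of Figure 2.4, when the search window contains an exact copy of the template, the maximum of the correlation coefficient function (+1) marks the position of the best match.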

Figure 2.4: Area-based matching: (a) a typical small template (reference) window of the first image, (b) the corresponding larger search (target) window in the second image, (c) grey levels in the template window, and (d) grey levels in the search window with the corresponding window marked

In the example, a 3 x 3 template window from the first image is shifted pixel by pixel across a larger 7 x 7 search window in the second image, covering 25 positions (or windows). At each position, the similarity between the template window and the corresponding part of the search window is computed as a function of correlation or a function of differences (Section 2.3) to identify the best match. Take, for instance, the correlation coefficient (always between -1 and +1) as the similarity measure. The maximum of the resulting correlation coefficient function defines the position of the best match between the template and the search window. In other

words, the pair of window patches that yields the highest correlation value is regarded as the best match and is taken as the correct match. In this example, an exact match, represented by the optimal correlation coefficient value of +1, would be obtained because a 3 x 3 patch in the right image corresponds exactly to the 3 x 3 template window in the left image. Normally, however, there will be no exact match, and hence there is a possibility of error in reality. In practice, more complex matching strategies are used. A review of the matching constraints and matching schemes applied in area-based matching approaches is given in Section 2.4.2.

2.3.2 Feature-Based Image Matching

The feature-based approach matches image features rather than the intensity arrays matched in the area-based approach; it matches images by establishing a correspondence between image-derived features (Medioni and Nevatia, 1985). Basically, this approach comprises two main steps:

(1) Feature extraction, where features such as points, edges and lines are extracted from each image individually using a feature detector, prior to finding correspondences between them (Heipke, 1996).

(2) Feature matching, where the extracted features are matched according to a set of constraints and conditions, a similarity measure and consistency criteria, which aim to ensure that a correct match is obtained (Dowman, 1996).

The result of the first step is a list containing the image features and descriptions of their properties. Features are more abstract descriptions of the image content. It should be noted that features are discrete functions of position: after feature extraction, a feature either exists at a given position in the image or it does not. Only these lists are

processed further in the second matching step, not the entire image. In the matching process, benefit functions based on the attributes of the extracted features are used to measure the degree of similarity between features. Features should be distinct with respect to their neighbourhood, invariant with respect to geometric and radiometric influences, stable with respect to noise, and seldom with respect to other features (Förstner, 1986).

There is both local and global support for features as matching primitives. Local features used in feature-based approaches include points or corners, edges, segments or lines, curve segments and regions; a review of these matching primitives is given in Section 2.4.1. Each extracted feature is characterized by a set of properties. The position in terms of image coordinates is always present; further examples of properties are the orientation and strength (gradient across the edge) for edge elements, the length and orientation for lines, and the area size and average intensity for regions. Properties for points include the type of junction a corner corresponds to, for instance a Y-junction or an L-junction. Global features, by contrast, are usually composed of different local features and form more complex descriptions of the image content called structural descriptions. Image matching with global features is also referred to as structural-based image matching (Shapiro and Haralick, 1987), discussed further in the following section.

2.3.3 Structural-Based Image Matching

The structural-based approach matches images by establishing a correspondence between structural descriptions derived from the images (Boyer and Kak, 1988; Horaud and Skordas, 1989; Jiang and Ngo, 2004). Structural descriptions of an image are defined by a set of image features (e.g. points, lines, and regions),

feature attributes and their interrelationships (Shapiro and Haralick, 1981; Boyer and Kak, 1988). Within this representation an image is considered globally rather than as a list of individual features (Horaud and Skordas, 1989). More specifically, structural-based image matching finds a correspondence (i.e. a best matching) between the image features and inter-feature relations of one image's structural description and those of another image's structural description. Within the structural description of an image, features are characterized not only by their properties but also by the relations between them. Owing to these characteristics, structural-based image matching is also known as relational matching (Shapiro and Haralick, 1987) or high-level feature-based matching (You and Bhattacharya, 2000).

Varied combinations of features and inter-feature relations can define the structural description of an image, and different kinds of image features can be characterized by different types of properties and inter-feature relationships. The relationships between features may be geometrical, photometrical or topological. Geometrical relationships include the minimum distance between two edges and the angle between two segments or two adjacent polygon sides. Photometrical relationships include the difference in mean grey value (contrast) between two adjacent regions. Topological relationships include spatial notions such as one feature being on the left side of another.

2.4 Previous Work

This section presents a brief review of previous work on image matching approaches closely related to this study. It is more appropriate, and easier, to review the image matching algorithms by decomposing them into smaller sub-topics: matching primitives, matching constraints and strategies, and measure of matching.
The review is carried out in the following subsections.
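Before turning to previous work, the structural-description primitive of Section 2.3.3 can be made concrete with a small sketch. The segment data, attribute set (length, orientation) and relation types (parallelism, connection) follow the description above, but the thresholds and coordinates are invented for illustration; this is not code from any of the cited systems.

```python
# Sketch: cast straight line segments into a relational graph whose nodes
# carry attributes (length, orientation) and whose arcs record relations
# (parallelism, connection), in the spirit of structural descriptions.
from math import atan2, degrees, hypot

def attributes(seg):
    (x1, y1), (x2, y2) = seg
    return {"length": hypot(x2 - x1, y2 - y1),
            "orientation": degrees(atan2(y2 - y1, x2 - x1)) % 180.0}

def build_relational_graph(segments, par_tol=5.0, conn_tol=2.0):
    """Nodes: segment index -> attribute dict. Arcs: (i, j, relation)."""
    nodes = {i: attributes(s) for i, s in enumerate(segments)}
    arcs = []
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            d = abs(nodes[i]["orientation"] - nodes[j]["orientation"])
            d = min(d, 180.0 - d)            # angle between undirected lines
            if d <= par_tol:
                arcs.append((i, j, "parallel"))
            if any(hypot(p[0] - q[0], p[1] - q[1]) <= conn_tol
                   for p in segments[i] for q in segments[j]):
                arcs.append((i, j, "connected"))
    return nodes, arcs
```

For example, an L-shaped pair of segments yields a "connected" arc at their shared endpoint, while two roughly horizontal segments yield a "parallel" arc; matching two images then amounts to matching such graphs rather than individual features.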

2.4.1 Matching Primitives

As discussed in Section 2.3, matching primitives fall into three categories: grey value windows, extracted image features, or structural descriptions as the basis for matching; the resulting algorithms are known as the area-based, feature-based and structural-based matching approaches, respectively. Generally, technological developments in image matching have advanced and diversified from traditional area-based matching to feature-based matching, and then to structural-based matching. The review of matching primitives is important because the choice of primitive influences the selection of matching strategies and matching metrics in the subsequent steps.

2.4.1.1 Image Windows as Matching Primitives

Area-based matching is the oldest and most traditional method. There is both local and global support for intensity windows as matching primitives, although the terms local and global are not sharply defined: local refers to a window within about 50 x 50 pixels, while global means a larger area that can comprise the whole image (Heipke, 1996). Generally, area-based matching is carried out on local windows. Cross-correlation image matching and least squares image matching are the best-known area-based methods; both need a very good initial position for the two areas to be matched. Image matching using cross-correlation is generally agreed to be the most classical area-based algorithm, as can be observed from earlier work (Moravec, 1980; Hannah, 1989). The cross-correlation coefficient is a simple but widely used measure of the similarity between image windows. In more recent work, Kim and Fessler (2004) developed a robust correlation coefficient as a similarity measure to reduce the influence of outlier objects (objects that appear in one image but not the other).

The least squares image matching approach was first proposed by Förstner (1982). Grün (1985) later extended this original approach into adaptive least squares template matching, a technique that allows one of the patches being matched to be distorted and shaped by an affine transformation, with the best fit found by a least squares solution. Essentially, least squares image matching can accommodate local perspective distortions but requires a very good initial position for the two patches. Area-based matching can also be based on global windows: Rosenholm (1987) carried out area-based matching globally using connected windows, in which case poor or repetitive texture can be successfully dealt with to a certain extent.

2.4.1.2 Image Features as Matching Primitives

In recent years the focus on matching primitives has shifted from grey-level correlation to features, a transition due to the extensive development of computer vision. Image features widely used in feature-based approaches are points (Barnard and Thompson, 1980; Rosenholm, 1987; Ton and Jain, 1989; Salari and Sethi, 1990; Walker et al., 1997; Pollefeys, 1999; Kenney et al., 2003; Georgescu and Meer, 2004), edges (Greenfeld and Schenk, 1989), and line segments (Medioni and Nevatia, 1984; Medioni and Nevatia, 1985; Liu and Huang, 1992; Zhang and Faugeras, 1992; Loaiza et al., 2001; Kamgar-Parsi and Kamgar-Parsi, 2004). Edge elements and corners are easy to detect but may suffer from occlusion. Line and curve segments require extra computation time but are more robust against occlusion, as they are longer and therefore less likely to be completely occluded.

Matching using curve segments has not been widely attempted, probably owing to the ambiguity involved, as every point on a curve is likely to be

matched with every other point on another curve. Deriche and Faugeras (1990) is one of the very few reported attempts; they proposed matching the turning points of curve segments. In more recent work by Han and Park (2000), curve matching is done by initially matching corner points on the contour; from this initial corner-point matching they obtain the epipolar geometry, which is then used to control the selection of corresponding contours, together with a contour end-point constraint and contour distance measures. Matching using polygonal regions has been carried out by Du (1994); however, regions are largely restricted to images of indoor scenes and to the detection of defects on industrial parts, and polygonal regions can be costly to extract.

Some feature-based image matching systems are not restricted to specific feature types; instead, a collection of feature types is incorporated. For instance, Du (1994) chose a hierarchy of features from points and lines to regions to solve the correspondence problem for stereo vision on a mobile platform, while Schenk et al. (1991) used a combination of global and local features. Cochran and Medioni (1992) presented a matching approach that integrates both area-based and feature-based primitives, allowing the matching algorithm to take advantage of the unique attributes of each technique.

2.4.1.3 Structural Descriptions as Matching Primitives

Herman and Kanade (1986) proposed a matching algorithm that takes into account geometric knowledge available in urban scenes, for example the fact that building roofs tend to be parallel to the ground plane while walls tend to be perpendicular to it. However, this approach considered image features individually; the geometrical relationships between features were used only weakly and in a domain-dependent manner.

It has been suggested that structural descriptions of images can be used in stereo matching. Boyer and Kak (1988) were among the first to consider structural descriptions as matching primitives for solving the stereo correspondence problem. They defined the structural description of an image by following and extending the formalism of Shapiro and Haralick (1981): a structural description is a set of primitive features and a set of named relations over the primitives, where each primitive is characterized by a set of attribute-value pairs. They then proposed a cost function derived from information theory, used in conjunction with a heuristic tree search, to find the match, and obtained good results when matching skeletal primitives extracted from elongated objects. The most important contribution of Shapiro and Haralick (1981) is the demonstration that structural descriptions extracted from an image by low-level image processing can serve as a suitable source of image matching primitives. Although they defined structural matching mathematically, they never made explicit what they meant by structure from a computer vision viewpoint, and it is not clear which feature and relation properties should be used for structural descriptions.

This effort was continued by an approach matching straight lines and the relationships between them (Horaud and Skordas, 1988; Horaud and Skordas, 1989), which emphasized the relevance of feature grouping to the structural description of an image. Feature grouping is carried out to extract local feature configurations (i.e. structural information). Straight lines are extracted from each image using the classical line detection paradigm: edge detection, edge thinning, edge linking and piecewise polygonal approximation.
These lines are then grouped with adjoining regions and connecting lines, on the premise that some scene properties are invariant under perspective projection, to extract the feature configuration as structural information for the image.

They defined structural descriptions following the mathematical formalism of Boyer and Kak (1988): an image is cast into a structural description in terms of straight lines, line attributes and relationships between nearby lines. The structural description thus obtained is represented as a relational graph, in which the set of image lines is represented by a set of nodes, each node representing a line with its properties: position, orientation, length and contrast.

Torkar and Pavešić (1996) also generated structural descriptions from stereo image pairs and matched them to recover 3D information. They likewise used straight line segments as primitive features and investigated the relations among them, including parallelism and connection. Every line segment was represented by a node in the graph, weighted by the position, size and contrast of the segment and the number of arcs of each relationship type, while graph arcs represented the relationships between lines (i.e. parallelism and connection). The result is an unconnected labelled graph representing the relationships between lines. Construction of the labelled relational graph continued by deleting some isolated nodes in order to form subgraphs likely to represent single objects in the image, and an association graph was then constructed from the relational subgraphs. The problem of finding corresponding structures between images was thus translated into a maximum matching problem, solved using the stable marriage searching algorithm.

In more recent work, Zhang and Košecká (2003) merged line segments to extract rectangular structures and then matched these structures for automated recovery of camera motion and 3D modelling of the scene. Here, rectangular structure is treated as high-level information to facilitate the matching of man-made objects (e.g. buildings).
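The reduction of correspondence to a maximum (weighted) matching problem, as in the association-graph formulations above, can be sketched compactly. This is an illustrative toy, not the algorithm of any cited work: the similarity weights are invented, and for the tiny graphs used here an exhaustive search over one-to-one assignments suffices, whereas real systems use dedicated solvers (e.g. Hungarian or stable-marriage methods).

```python
# Sketch: correspondence as maximum-weight bipartite matching, solved here
# by brute force over all one-to-one assignments (fine for tiny graphs).
from itertools import permutations

def max_weight_matching(weights):
    """weights[i][j] = similarity between left node i and right node j.
    Returns (best_total, assignment) with assignment[i] = matched right node.
    Assumes at least as many right nodes as left nodes."""
    n_left, n_right = len(weights), len(weights[0])
    best = (float("-inf"), None)
    for perm in permutations(range(n_right), n_left):
        total = sum(weights[i][j] for i, j in enumerate(perm))
        if total > best[0]:
            best = (total, list(perm))
    return best
```

With node similarities derived from feature attributes and relations, the assignment maximizing the total weight plays the role of the best structural correspondence.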
Jiang and Ngo (2004) proposed an image matching algorithm involving a structural representation of the image: images are split into small blocks, and each block is represented as a node in a bipartite graph. A maximum weighted bipartite

graph matching algorithm is then employed iteratively to find the best transformation set for image matching.

In addition to matching pairs of images as in the foregoing works, structural descriptions are particularly useful for matching multiple images and image sequences for motion analysis. For instance, Chou and Teller (1997) make use of structural relationships between image features (lines and points at L-junctions formed by the intersection of lines) in multi-image stereo analysis. Pla and Marchant (1997) use structural descriptions of an image to establish matching between image sequences for calculating motion parameters and thus recovering structure to guide an autonomous vehicle; their work matches structural representations between two successive images in a sequence by incorporating distances between features. Structural descriptions have also been used in image registration (Ventura et al., 1990), where corresponding structures recognized in two satellite images provide the set of control points for the geometric mapping function. Furthermore, Ude et al. (1994) generate a symbolic (structural) description of an image to recognize objects (treated as an image matching problem); the symbolic description consists of straight line segments detected in the image together with relations (parallelism, collinearity and end-point proximity) that are likely to be associated with 3D shape, detected using perceptual grouping.

2.4.2 Matching Constraints and Strategies

In order to minimize ambiguous and false matches, some matching constraints must be imposed. A physical point in 3D space projects onto the two images from two different viewing positions, and in the absence of additional

knowledge it is practically impossible to establish a relationship between these positions. For area-based matching, because comparing a given window with every possible corresponding window is computationally expensive, various heuristics have been developed to limit the search area. For the feature-based approach, because feature extraction throws away much of the information in the image, many heuristics and much knowledge have been incorporated to overcome the resulting matching ambiguities. The various heuristics, matching constraints and strategies applied in both area-based and feature-based methods can be broadly classified into the following categories:

(1) Similarity constraint. One of the most exploited ideas for solving the correspondence problem is that the image appearance of an object point in different image frames should be similar. For the area-based approach, matching pixels must have similar intensity values, or matching windows must be highly correlated. For the feature-based approach, matching features must have similar attribute values. Thus, several image matching algorithms use similarities between feature properties to match them, for instance the sign of change and orientation of zero-crossings (Nasrabadi, 1992), or the orientation, contrast and length of straight line segments (Liu and Huang, 1992).

(2) Uniqueness constraint. Essentially, a given pixel in one image can match no more than one pixel in another image, i.e. a one-to-one mapping. The uniqueness constraint is applicable to the feature-based approach as well, though it may not be applicable to line-segment-based algorithms, since a given line in one image can match a line in the other image that has

been broken into two or more pieces. Horaud and Skordas (1989), Pla and Marchant (1997), and Musse et al. (2001) apply this constraint in their matching strategies.

(3) The use of a coarse-to-fine control structure. This matching scheme makes use of a multiple-scale representation and is also known as the hierarchical method: the matching achieved at a coarser level is taken as an approximation for the next finer level and is used to guide the matching process gradually up to the finest level. For this strategy, images are represented in a hierarchy of resolutions (i.e. image pyramids), from coarse to fine, with the resolution reduced from one level to the next by a specified scale factor. Hannah (1989) uses the hierarchical method and incorporates it with other search strategies, including iterative refinement and a best-first strategy in the searching process. Other researchers who employ multiple-scale representations in their matching algorithms include Xu et al. (1987), Pla and Marchant (1997), and You and Bhattacharya (2000).

(4) Ordering constraint. If a given feature f in one image is matched to f' in another image, and g in the first image is matched to g' in the corresponding image, and if f is to the left of g, then f' should also be to the left of g'. That is, the ordering of features is preserved across images. Opaque surfaces impose an ordering constraint along corresponding epipolar lines. Earlier research by Ohta and Kanade (1985) led to this ordering constraint.

(5) The epipolar constraint. Given a feature point in the left image, the corresponding feature point must lie on the corresponding epipolar line. This constraint is well known for reducing the dimensionality of the search space from two dimensions to one.
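A sketch of how the epipolar constraint prunes candidate matches is given below. The fundamental matrix used here is a toy value for a rectified stereo pair (where corresponding points share the same image row), chosen purely for illustration; in practice the matrix comes from sensor calibration or orientation, as the surrounding text notes.

```python
# Sketch: pruning candidate matches with the epipolar constraint. A candidate
# point q in the right image is kept only if it lies close to the epipolar
# line l = F p of the left-image point p (homogeneous coordinates).
from math import hypot

# Toy fundamental matrix for a rectified pair: the epipolar line of
# p = (x, y, 1) is the horizontal line y' = y.
F_RECTIFIED = [[0.0, 0.0, 0.0],
               [0.0, 0.0, -1.0],
               [0.0, 1.0, 0.0]]

def epipolar_line(F, p):
    """l = F p; returns line coefficients (a, b, c) with a*x + b*y + c = 0."""
    x, y, w = p
    return tuple(F[i][0] * x + F[i][1] * y + F[i][2] * w for i in range(3))

def epipolar_distance(F, p, q):
    """Perpendicular distance of q from the epipolar line of p."""
    a, b, c = epipolar_line(F, p)
    return abs(a * q[0] + b * q[1] + c * q[2]) / hypot(a, b)

def consistent(F, p, q, tol=1.0):
    """Keep q as a candidate match for p only if it is near p's epipolar line."""
    return epipolar_distance(F, p, q) <= tol
```

For a left-image point at row 5, a right-image candidate at row 5.4 passes the test while one at row 9 is rejected, so the search collapses from the whole image to a narrow band around one line.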

However, this constraint is not applicable if the acquisition geometry of the sensor is unknown; the geometric property of the epipolar constraint is available only with a properly calibrated sensor. Many studies employ the epipolar constraint (Hannah, 1989; Horaud and Skordas, 1989; Han and Park, 2000).

(6) The use of higher-level image structures. High-level image features are a powerful way of improving the image matching result, as they are more structured and thus the number of ambiguous matches can be diminished.

2.4.3 Measure of Matching

The definition of the best-match criterion obviously plays an important part in every matching algorithm, whether area-based, feature-based or structural-based. In reality there will be no exact match, since part of the image content is usually corrupted by noise and distorted by geometric distortion, occlusion, illumination conditions and other factors. The exact copy of the pattern of interest therefore cannot be expected in the processed image, and there is always the possibility of error (mismatch). Thus a search for the location of the best match is appropriate. A good match, or more appropriately the best match, indicates a condition in which the correct correspondence between homologous points has been established. The best match is based on criteria of optimality related to object properties and object relations; the measure is counted either as a measure of match or as a measure of mismatch (designed to increase with decreasing similarity between two images). In practice, more complex methods applying different strategies are used to reduce the chance of mismatches.

Various quantitative measures of best matching exist, differing according to the primitives used: area-based, feature-based or structural-based. All of these methods share one common routine: the maximum or minimum of a certain matching function is taken as the metric of match.

For area-based matching, the criterion of optimality for the best match can be defined as the degree of similarity between corresponding windows, with some strategy or mathematical function used to determine the degree of similarity between their grey values. The degree of similarity between image windows can be measured either as a difference function that is minimized, such as the root mean square (RMS) difference, or, more commonly, as a correlation function that is maximized, such as cross-correlation, normalized cross-correlation or least squares correlation. The position of the best match between the template and the search window is given by the pair of windows yielding the best function value, which is taken as the correct match. The function can be the simple cross-correlation function, the covariance between the windows or image regions, or the sum of absolute differences between corresponding pixels, with more complex approaches such as graph matching also possible. These measures have their background in statistics and are theoretically well understood. The match measure for area-based matching does not use explicit knowledge of the image content as required by feature-based matching; the only information used is the actual image data together with statistical or mathematical models of the image and noise sources. Cox (1995) reviewed and surveyed different quantitative measures of similarity for area-based methods, classifying the measures of match into three categories: correlation measures, intensity difference measures (inter-pixel distance measures), and sequential similarity detection algorithms.
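The two families of area-based measures named above, a difference function to be minimized and a correlation function to be maximized, can be contrasted in a short sketch. The window values and the gain/offset change are invented for illustration; the point is the standard one that normalized correlation is insensitive to linear radiometric changes while a squared-difference measure is not.

```python
# Sketch: a difference measure (minimized) versus a correlation measure
# (maximized) for comparing two grey-value windows, flattened to lists.
from math import sqrt

def ssd(a, b):
    """Sum of squared grey-value differences: 0 for identical windows."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def ncc(a, b):
    """Normalized cross-correlation: +1 for identical windows, and
    unchanged by gain/offset (e.g. uniform illumination) differences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

w = [10, 20, 30, 40, 50, 60, 70, 80, 90]
brighter = [2 * v + 5 for v in w]   # same pattern under a radiometric change
assert ssd(w, w) == 0 and abs(ncc(w, w) - 1.0) < 1e-9
assert ssd(w, brighter) > 0                  # SSD penalizes the brightness change
assert abs(ncc(w, brighter) - 1.0) < 1e-9    # NCC does not
```

This is one reason correlation-type measures are the more common choice for area-based matching under varying illumination, as discussed in Section 2.5.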
The similarity measure for feature-based matching is more complicated; its definition must be based on the attributes of the extracted features. In most feature-based matching approaches, the differences in geometric and radiometric attribute values are combined using heuristics and thresholds in order to compute the

similarity measure, called a cost function or benefit function. Whereas a cost function is to be minimized, a benefit function must be maximized in order to achieve a good match (Heipke, 1996). A good survey, as well as further details on cost functions acting as similarity measures, can be found in Ballard and Brown (1982).

2.5 Discussion on the Image Matching Approaches

Area-based matching works well, and has high accuracy potential, when the image gradient surfaces are continuous and the image regions are well textured, containing sufficient common features (visual texture) to allow corresponding matches to be obtained. Mismatches and ambiguities may be encountered when the images show a scene of repetitive texture, lack adequate texture, or contain many depth discontinuities. In addition, blunders can occur in image areas with occlusions and noise. Area-based matching is weak at handling the sensitivity of grey values to radiometric changes due to illumination. However, in these situations, e.g. in the presence of image noise, area correlation degrades gracefully: it usually continues to find the matching answer, but with a reduced confidence measure and increased ambiguity. In addition, this method only works well when the images are acquired geometrically alike; area-based methods encounter difficulties when the two images are taken from extremely different viewpoints. The method also requires a very good initial position of the two areas before the actual matching step, to avoid a large search space.

Area correlation can be summarized as suffering from the following limitations:

- It requires the presence of detectable texture within each correlation window; it therefore tends to fail in featureless, poorly textured or repetitively textured environments.
- It tends to be confused by the presence of a surface discontinuity in a correlation window.
- It tends to get confused in rapidly changing depth fields (e.g. vegetation).
- It is sensitive to absolute intensity, contrast, and illumination.

Due to these limitations, area-based matching requires the intervention of human operators to initialize and guide the process, and to correct and edit the results. The dependency on human guidance and on favourable image conditions cannot be resolved by this area matching solution, which limits its automation in digital image matching. For these reasons, in recent years the focus of the solution to the automated image matching problem has shifted from grey level correlation to feature-based matching (Greenfeld and Schenk, 1989). Another reason for the shift is the extensive work on image matching done in computer vision: aside from a handful of area-based solutions, feature-based solutions constitute the overwhelming majority of published work on image matching in that field. For feature-based matching, feature extraction schemes are often computationally expensive and require a number of free parameters and thresholds which must be chosen a priori. As mentioned before, area-based matching needs a very good initial position of the two areas, whereas the initial values for feature-based matching need not be as accurate, because the extracted features already provide approximate values for the matching step.

Feature-based matching can be summarized as having the following advantages:

- It is faster than area-based methods, because there are many fewer features to consider.
- The match is more accurate, as edges may be located with sub-pixel precision.
- It is less sensitive to photometric variations, since features represent geometric properties of a scene.

For structural-based methods, You and Bhattacharya (2000) have commented that the topological and geometrical relations between features contain important information to constrain the large space of possible mappings between the features. Structural-based matching methods are more insensitive to geometrical differences and grey level variations between images than area-based and feature-based methods, because the properties and interrelationships represented by the high level features do not vary much with the foregoing changes. However, in most cases the extraction and representation of the relationships themselves is a difficult problem. The area-based and feature-based matching methods have been widely used for various three-dimensional vision applications. However, these methods are not suitable for non-metric images, i.e. photographs captured with common amateur cameras or charge-coupled device (CCD) scanners, because non-metric images are usually convergent images without known interior and exterior orientation parameters. In order to solve the image matching problem for non-metric images, without knowing any a priori information about the input image, an image matching method based on structural descriptions, known as structural-based image matching, is needed; it is addressed in this study.

2.6 Image Matching: Problem Areas

A number of problems still exist in the area of image matching. Firstly, ambiguous matching is very likely to occur if the image matching technique uses only local information. Some geometric and radiometric changes of image attributes caused by different perspective viewing positions might lead to incorrect matching. Secondly, feature detection is not perfectly reliable, so false features may be detected in the images. Sometimes, for a given primitive in one image, its corresponding primitive in the other image might not exist, due to occlusion, shadow or noise; at other times, there may be more than one potential matching candidate due to repetitive texture or patterns. Besides, the dissimilarity that exists among images also appears to be one of the difficulties in image matching: a feature in one image may be partially seen or fully occluded in the other image due to noise, shadow, radiometric variation or a bad feature extraction result. Establishing the matching between two images may also be confounded by the noise that exists in the images, which can affect the matching algorithm in its search for the correct matching candidates; the set of corresponding primitives is likely to be contaminated with a number of wrong matching candidates or outliers. Finally, the large search space, high computational costs and numerical instabilities which may arise during the image matching process contribute further problems.

CHAPTER 3

THEORETICAL FRAMEWORK AND METHODOLOGY

3.1 Introduction

This study presents a structural-based image matching approach, which involves the interpretation of the structural descriptions of an image, the representation of the derived structural descriptions in a relational graph, and the matching between relational graphs in an association graph, to accomplish image matching. This chapter gives an overview of the methodology of the structural-based image matching approach. The ideas are presented within a theoretical framework.

3.2 Overall Methodology

The methodology of the study consists of six major steps: (1) input image, (2) line segment extraction from the image, (3) derivation of structural descriptions from the line-extracted image, (4) construction of a relational graph to represent the structural descriptions, (5) derivation of an association graph from the relational graphs to perform relational graph matching, and (6) searching for the largest maximal clique in the association graph. The methodology of this study is graphically summarized in Figure 3.1.

Step 1: Input Image
Step 2: Line Segment Extraction (Edge Detection; Line Segment Fitting)
Step 3: Derivation of the Structural Descriptions (Line Segment Labelling; Derivation of Inter-Line Relationship)
Step 4: The Construction of Relational Graph
Step 5: The Construction of Association Graph for Relational Graph Matching
Step 6: The Searching of the Largest Maximal Clique in the Association Graph

Figure 3.1: The methodology of the study

3.3 Line Segment Extraction

Line segment extraction is a key step in deriving the structural descriptions of each image to be matched. Lines are used as the matching primitives for the image

matching technique in this study. The advantages of line segments as matching primitives are: (1) line segments are reasonably and relatively easy to detect, (2) lines are present in nearly all kinds of scenes, (3) lines embody the continuity information of edge pixels across multiple scanlines and hence are more reliable than edge pixels in the matching process, and (4) they are closely related to object boundaries, and provide a better and more accurate representation of the object boundary than other commonly used features (e.g. points and corners). To extract linear segments from the image data, the procedure comprises three main steps: (1) edge detection, (2) edge tracing as a complementary processing task, and (3) line segment fitting. First, edge detection is applied to the input. The edge operator used in this study is the Canny operator supported by MATLAB's edge function. Then, edge tracing is carried out to track and link edge pixels together into chains of sequential edge pixels. Edge tracing forms lists of connected edge pixels found in the edge-detected image, one list for each connected set of edge pixels. During the tracing process, edges are thinned, and edges shorter than a minimum length of interest are discarded. Then, line segments are formed by fitting straight lines that adhere to the edge pixels within a specified tolerance, i.e. a maximum deviation (in pixels) from the original edge. From the array of sequential edge pixels, the size and position of the maximum deviation from the line that joins the endpoints is computed. If the maximum deviation exceeds the allowable tolerance, the edge is shortened to the point of maximum deviation and the test is repeated. In this manner, each edge is broken down into line segments that adhere to the original edge within the specified tolerance. Each resulting straight line segment is defined by its two ends, which are the starting pixel and the ending pixel.
The result consists of lines that might correspond to object boundaries or other meaningful boundaries between scene entities. The result of line extraction gives the basis for deriving structural descriptions from an image.
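The split-at-maximum-deviation test described above is, in effect, the classic Douglas-Peucker recursion. A minimal Python sketch, with pixel chains as coordinate lists and all names ours, might look like this:

```python
from math import hypot

def point_line_distance(p, a, b):
    """Perpendicular distance of pixel p from the line joining a and b."""
    (x, y), (x1, y1), (x2, y2) = p, a, b
    dx, dy = x2 - x1, y2 - y1
    length = hypot(dx, dy)
    if length == 0:
        return hypot(x - x1, y - y1)
    return abs(dy * (x - x1) - dx * (y - y1)) / length

def fit_segments(chain, tol):
    """Split a chain of sequential edge pixels into straight segments whose
    maximum deviation from the chord joining their endpoints is <= tol.
    Each segment is returned as its (start, end) pixel pair."""
    if len(chain) < 3:
        return [(chain[0], chain[-1])]
    a, b = chain[0], chain[-1]
    dists = [point_line_distance(p, a, b) for p in chain[1:-1]]
    worst = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[worst - 1] <= tol:
        return [(a, b)]          # the whole chain is straight enough
    # deviation too large: split at the point of maximum deviation and recurse
    return fit_segments(chain[:worst + 1], tol) + fit_segments(chain[worst:], tol)
```

A straight chain yields one segment defined by its two end pixels; an L-shaped chain is split at the corner into two segments.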

3.4 Derivation of Structural Descriptions from the Line-Extracted Image

A number of researchers have investigated structural descriptions for shape analysis and the construction of object descriptions for the libraries of image understanding systems. In accordance with their objective, the structural description of an object consists of the descriptions of its parts and their interrelationships (Shapiro and Haralick, 1981). An investigation of the derivation of structural descriptions from an image was proposed by Medioni and Nevatia (1985), and later by Boyer and Kak (1988), who extended and modified the approach developed by Shapiro and Haralick (1981). The derivation of structural descriptions essentially involves defining structural primitives and a set of named relations over those primitives. This study will follow, and subsequently extend, the formalism of Horaud and Skordas (1989). In this study, structural descriptions need to be derived from the image resulting from line segment extraction, as discussed in Section 3.3, whereby the line-extracted image is cast into structural descriptions in terms of line features, line attributes and relationships between nearby lines. The structural description derivation process involves two major processes, line segment labelling and the derivation of structural relations between line segments, as discussed in Section 3.4.1 and Section 3.4.2, respectively. Line segment labelling and inter-line relations assemble the structural information of an image. The structural descriptions of the image are then represented by a relational graph, as discussed in the subsequent Section 3.5 and Section 3.6.

3.4.1 Line Segment Labelling

To interpret the inherent structural descriptions of the line-extracted image, first, a label matrix needs to be generated from the line-extracted image, whereby all the

line features existing in the line-extracted image are searched for, and each of them is labelled with a unique number. The label matrix is a matrix of the same size as its line-extracted image, whose elements are identification numbers that label all the lines existing in the image. The elements of the label matrix have integer values greater than or equal to zero: the pixels labelled zero are the background, the pixels labelled one make up one line, the pixels labelled two constitute a second line, and so on. The idea of label matrix generation is illustrated in Figure 3.2, with an example of a line-extracted image (see Figure 3.2(a)) and the equivalently sized label matrix derived from it (see Figure 3.2(b)).

Figure 3.2: Generation of label matrix from line-extracted image: (a) line-extracted image; (b) label matrix

The label matrix is generated as the basis for constructing the relational graph because a relational graph is easier to construct from labelled features in an image matrix than from an ordinary feature-extracted image. Every line segment in the label matrix, which is a candidate for a graph node, is labelled with an identification number. Hence, to generate the relational graph, each line identified in the label matrix contributes a node that is also labelled with the same identification number. Besides, a feature's attributes can easily be stored by associating the attributes with its identification

number. The feature-extracted image contains this information as well, only it is much more difficult to recall from there when constructing a relational graph.

3.4.2 Derivation of Inter-Line Relationship

The structural information of the line-extracted image refers to the relationships that exist between lines in the image. The types of inter-line relationship derived in this study are ordering, intersection and co-linearity, as discussed in the following subsections, respectively. The inter-line relationships derived between line segments are to be represented by the arcs of a relational graph in a subsequent step, where the arc linking two nodes is labelled with the relation type to represent the relationship existing between the connected nodes, as will be discussed in Section 3.6.

Ordering Relationship

To detect the ordering relation for a line segment, a set of neighbouring lines on both sides of the line segment is searched for. To do this, a pixel-by-pixel displacement is performed from every pixel of the line under consideration, in the direction perpendicular to that line. The displacement is carried out on both sides of the line until a pixel belonging to another neighbouring line is encountered, as shown in Figure 3.3.
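Both the label-matrix generation and the perpendicular-displacement search for ordering neighbours can be sketched in pure Python. This is a simplified illustration: connected pixel components stand in for fitted segments, the perpendicular is given as an integer step, and all names are ours.

```python
def label_matrix(image):
    """Label each connected set of line pixels (8-connectivity) with a
    unique positive identification number; background pixels stay 0."""
    rows, cols = len(image), len(image[0])
    labels = [[0] * cols for _ in range(rows)]
    next_id = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] and not labels[r][c]:
                next_id += 1
                stack = [(r, c)]
                while stack:                    # flood-fill one line
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols \
                            and image[y][x] and not labels[y][x]:
                        labels[y][x] = next_id
                        stack += [(y + dy, x + dx)
                                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    return labels

def ordering_neighbours(labels, line_id, pixels, normal):
    """From every pixel of line `line_id`, step outward along the
    perpendicular `normal` (and its opposite) until a pixel of another
    line is met; returns the neighbour ids found on each side."""
    rows, cols = len(labels), len(labels[0])
    sides = {+1: set(), -1: set()}
    for (y, x) in pixels:
        for sign in (+1, -1):
            ny, nx = y, x
            while True:
                ny += sign * normal[0]
                nx += sign * normal[1]
                if not (0 <= ny < rows and 0 <= nx < cols):
                    break                       # reached the border: no neighbour
                if labels[ny][nx] and labels[ny][nx] != line_id:
                    sides[sign].add(labels[ny][nx])
                    break
    return sides
```

For a vertical line, the two signs correspond to the "to the left of" and "to the right of" relations; for a horizontal line, to "to the top of" and "to the bottom of".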

Figure 3.3: Detection of the ordering relationship for the line labelled 5; the displacement is carried out on both sides of the line labelled 5 until an edge pixel belonging to another neighbouring line is encountered

Hence, for each line segment, candidates on both of its sides, left-right or top-bottom, may be found. The derived ordering relationship for a line may consist of "to the left of" and "to the right of" relations, or the equivalent "to the top of" and "to the bottom of" relations in another orientation. The "to the left of" and "to the right of" relationships are associated with line segments that are more vertical, while the "to the top of" and "to the bottom of" relationships are associated with line segments that are more horizontal.

Intersection Relationship

An intersection between line segments occurs as a set of at least two line segments connecting at or passing through a common point. However, the intersection condition considered in this study is that of lines connecting at the endpoints of a line. In this study, the intersection relation is derived by searching for intersection occurrences of one line with other lines within a neighbourhood area centred at each of its end pixels.

As discussed in Section 3.3, each extracted line segment is defined by its two ends, also known as its end pixels or end points (see Figure 3.4). In this study, the search for intersection occurrences of one line with any other lines is performed within a neighbourhood area centred at each of its ends. Thus, each line may have two sets of intersecting (connecting) lines, one associated with each of its ends. Some possible intersection conditions between line segments considered in this study are given in Figure 3.5. The detection of any intersection between line segments mainly involves the following steps. Firstly, a neighbourhood searching area is established according to the position of the end pixel of the line in question, where the square-shaped searching area is centred at the end pixel. Secondly, the existence of any other lines that have some portion falling within the defined neighbourhood area is checked.

Figure 3.4: A line segment is defined by two end points
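These two steps can be sketched as follows. The sketch simplifies the test by checking only the endpoints of the other lines, rather than any portion of them, against the square neighbourhood; the names are illustrative.

```python
def in_neighbourhood(p, centre, radius):
    """True if pixel p lies in the square neighbourhood centred at `centre`."""
    return abs(p[0] - centre[0]) <= radius and abs(p[1] - centre[1]) <= radius

def intersecting_lines(segments, i, radius=2):
    """Return, for each endpoint of segment i, the identification numbers of
    the other segments with an endpoint inside its square neighbourhood.
    `segments` maps id -> (first_endpoint, second_endpoint)."""
    result = {0: set(), 1: set()}
    for end, centre in enumerate(segments[i]):
        for j, seg in segments.items():
            if j == i:
                continue
            if any(in_neighbourhood(p, centre, radius) for p in seg):
                result[end].add(j)
    return result
```

Each line thus obtains two sets of connecting lines, one per end, as described above.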

Figure 3.5: Some possible intersections between line segments derived for this study: (a) l2 is found to intersect l1 at one of its end points and l4 intersects l1 at the other end point; the same condition occurs as l1 and l3 intersect l2 at its end points, l2 and l4 intersect l3 at its end points, and l1 and l3 intersect l4, one at each of its two ends; (b) l2 connects with l1 at one of its end points; (c) l2 is found to intersect l1 within a neighbourhood area centred at one of its ends; (d) l1 and l2 connect with l3 at one of its end points

Co-linearity Relationship

A line is said to be co-linear with another line if the two lines are aligned in the same direction, as if they belong to the same line. The co-linearity condition often occurs when a line breaks into two disconnected segments during the feature extraction process. The co-linearity condition considered in this study is given in Figure 3.6.

The search for a co-linearity occurrence starts by defining a neighbourhood area centred at the end pixel of the line in question, l1. Then, it is checked whether any line l2 has either one of its end pixels falling within the defined neighbourhood area and whether the two lines have the same orientation. If both of these conditions are met, then l1 is co-linear with l2. Co-linearity is a condition that is invariant under projection and hence does not vary with a change of viewing position. Therefore, co-linearity is an interesting clue for feature-based matching.

Figure 3.6: The co-linearity condition between line segments derived for this study

3.5 Relational Graph Representation

The foregoing Section 3.3 and Section 3.4 discussed the process of deriving structural descriptions from the line-extracted image. Up to this stage, the line-extracted image has been cast into structural descriptions in terms of line features with their inherent properties (attributes) and the relationships existing between any two neighbouring lines. These structural descriptions are then represented by a relational graph with labelled arcs. In this section, the relational graph representation is discussed within a

theoretical framework, and the construction of the relational graph as practised in this study is given in the subsequent Section 3.6. In computer vision terminology, a relational structure S is a set of elements (or units) V, V = {v1, v2, ..., vi}, a set of properties (or unary predicates) P defined over the elements, P = {p1, p2, ..., pj}, and a set of binary relations (or binary predicates) T defined over pairs of the elements, T = {t1, t2, ..., tk}:

S = (V, P, T) (3.1)

where V = {v1, v2, ..., vi} is a set of elements, P = {p1, p2, ..., pj} is a set of properties, and T = {t1, t2, ..., tk} is a set of binary relations between element pairs (Ballard and Brown, 1982; Bomze et al., 1999). Note that the notion of a relational structure is essentially equivalent to that of a pseudograph employed in graph theory (Bomze et al., 1999). In the traditional sense, a relational structure becomes a relational graph when the relation set T contains a single relation and the property set P is empty (Bomze et al., 1999). In other words, relational structures are relational graphs when they are represented graphically (Ballard and Brown, 1982), where the elements of relational structures are represented as graph nodes and the relations between pairs of elements are represented by the arcs between the corresponding nodes. The terms relational structure and relational graph are used interchangeably in the theoretical part; however, in the context of this study, the term relational graph is used. In a mathematician's terminology, a graph refers to a network of points and lines connecting some subset of the points. The points of a graph are most commonly known as graph vertices, but may also be called nodes. Similarly, the lines connecting the vertices of a graph are most commonly known as graph edges, but may also be called arcs.
A relational graph G is symbolically represented as G = (V, E), where the graph G consists of a set V of vertices and a collection E of unordered pairs of vertices called edges.
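Such a structure can be held directly in code. The following is a minimal Python sketch of S = (V, P, T) with labelled arcs; the class and field names are ours, not part of the cited formalism.

```python
from dataclasses import dataclass, field

@dataclass
class RelationalGraph:
    """S = (V, P, T): nodes, unary properties over nodes, and labelled
    binary relations (arcs) over node pairs."""
    nodes: set = field(default_factory=set)
    properties: dict = field(default_factory=dict)   # node -> {property, ...}
    arcs: dict = field(default_factory=dict)         # (u, v) -> relation label

    def add_node(self, v, props=()):
        self.nodes.add(v)
        self.properties[v] = set(props)

    def add_arc(self, u, v, relation):
        # unordered pair: store both directions so lookup is symmetric
        self.arcs[(u, v)] = relation
        self.arcs[(v, u)] = relation
```

Because the edges are unordered pairs, the arc label is stored under both orderings of the pair so that either endpoint can be queried first.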

However, the term "edge" of a relational graph could be confused with the term "edge" of edge detection in this thesis. Therefore, throughout the thesis, the terms node and arc are used to refer to the point and line structures of a relational graph. An example of the graph representation of a relational structure is given in Figure 3.7. In Figure 3.7, nodes v1 and v2 have property p2, node v3 has property p3, and node v4 has property p1. The arc labelled t1 between node v1 and node v2 indicates that relation t1 holds between them. Relation type t1 also exists between node v2 and node v3, whilst relation type t2 exists between node v3 and node v4, and between node v4 and node v1. In the context of this study, the set of properties attached to a graph node can be photometric or geometric attributes of the represented features (elements), such as contrast, orientation and area. There is a variety of relations that can be derived between line segments, to be represented by the arcs of a relational graph, for instance, "co-linear with", "same junction as", "inside", "enclosed" and "parallel with". The actual steps practised in the study to construct the nodes and arcs of the relational graph are discussed in the following section.

Figure 3.7: A graph representation of a relational structure (Source: Modified after Ballard and Brown, 1982)

3.6 The Construction of Relational Graph

In the previous section, the relational graph representation was discussed within a rather theoretical context, whilst in this section the actual steps practised in the study to construct the nodes and arcs of the relational graph are clarified. The line segment labelling process discussed in Section 3.4.1 prepares the structural information for constructing the relational graph's nodes, whilst the derivation of relationships between line segments discussed in Section 3.4.2 prepares the structural information for building arcs between nodes. Hence, the derived structural information of an image consists of line segment features with their inherent properties (attributes) and the relationships between nearby lines, as the basis for constructing the relational graph. These structural descriptions are then represented as the network of nodes and arcs of a relational graph, where, in the resulting relational graph, each node represents a line of the image together with its attached properties (referred to by its identification number). A labelled arc is inserted between any two nodes to represent the ordering (to the left of, to the right of, to the top of, to the bottom of), intersection or co-linearity relationship between lines, if any exists. Within this relational graph representation, a line-extracted image is considered globally rather than as a list of individual line features. If u and v are two vertices of a graph and there is an edge connecting vertex u and vertex v, then the edge can be represented by the unordered pair (u, v). In this study, the relational graph is defined by a collection of unordered pairs in which the two elements u and v are distinct. Figure 3.8 shows a line-extracted image which consists of 16 lines, labelled l1 to l16. The relational graph derived from the line-extracted image of Figure 3.8 is given in Figure 3.9, where vertex v1 represents line l1, vertex v2 represents line l2, and so forth.
Each edge linking two nodes represents the relation existing between the corresponding nodes, and thus each edge is labelled by a relation t, where t1 denotes the relation type ordering, t2 denotes the relation type intersection, and t3 denotes the relation type co-linearity.

Figure 3.8: Line-extracted image with lines labelled l1 to l16

Figure 3.9: The corresponding relational graph representing the structural information of the line-extracted image of Figure 3.8, where t1 denotes the relation type ordering, t2 denotes the relation type intersection, and t3 denotes the relation type co-linearity

3.7 The Definition of Relational Graph Matching

Many fundamental problems in computer vision and pattern recognition can be formulated as the problem of matching relational structures (relational graphs) (Bomze et al., 1999). There are several definitions of matching between relational graphs; three different classes of graph matching are discussed in this thesis, namely graph isomorphism, subgraph isomorphism and double subgraph isomorphism. Graph isomorphism is a very pure version of graph matching. Given two graphs G1 = (V1, E1) and G2 = (V2, E2), G1 and G2 are said to be isomorphic if there exists a one-to-one and onto mapping f, called an isomorphism, between V1 and V2 such that for v1 ∈ V1 and v2 ∈ V2, f(v1) = v2, and for each edge of E1 connecting any pair of nodes v1 and v1' ∈ V1 in G1, there is an edge of E2 connecting f(v1) and f(v1') in G2 (Ballard and Brown, 1982; Balakrishnan, 1997). When such a mapping function can be found, G1 and G2 are said to be isomorphic. If one of the graphs involved in the matching process is larger than the other, for instance, if G2 contains more vertices than G1, then a subgraph isomorphism from G1 to G2 is searched for (Messmer, 1995); that is, a subgraph H of G2 is sought such that G1 and H are isomorphic. In other words, subgraph isomorphism refers to finding an isomorphism between a graph G1 = (V1, E1) and a subgraph of another graph G2 = (V2, E2). This is computationally harder than the isomorphism problem because it is unknown in advance which subsets of V2 and E2 are involved in the isomorphism (Ballard and Brown, 1982). Double subgraph isomorphism, in turn, is to find all isomorphisms between subgraphs of a graph G1 = (V1, E1) and subgraphs of another graph G2 = (V2, E2) (Ballard and Brown, 1982). Figure 3.10 shows examples of these three kinds of graph matches.
Figure 3.10 (a), (b), (c) and (d) show graphs G1, G2, G3 and G4, respectively. Graph G1 has an isomorphism with graph G2, various subgraph isomorphisms with graph G3, and several double subgraph isomorphisms with graph G4.
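For small graphs, graph isomorphism can be tested by exhaustively trying every one-to-one, onto mapping between the vertex sets. A brute-force Python sketch (exponential in the number of vertices, so illustrative only; the representation is ours):

```python
from itertools import permutations

def isomorphic(g1, g2):
    """Brute-force graph isomorphism test.  Each graph is given as
    (vertices, edges), with edges a set of 2-element frozensets."""
    v1, e1 = g1
    v2, e2 = g2
    if len(v1) != len(v2) or len(e1) != len(e2):
        return False
    v1 = list(v1)
    for perm in permutations(v2):
        f = dict(zip(v1, perm))      # candidate one-to-one, onto mapping
        if all(frozenset((f[a], f[b])) in e2 for a, b in map(tuple, e1)):
            return True
    return False
```

Because the edge counts are equal and f is a bijection, mapping every edge of E1 onto an edge of E2 suffices to establish the isomorphism.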

Figure 3.10: Three different classes of graph matching: graph G1 has an isomorphism with graph G2, various subgraph isomorphisms with graph G3, and several double subgraph isomorphisms with graph G4 (Source: Modified after Ballard and Brown, 1982)

Finding double subgraph isomorphisms between subgraphs of a graph G1 and subgraphs of another graph G2 can be accomplished by forming an association graph from both graph G1 and graph G2 and searching for cliques in the association graph. Hence, double subgraph isomorphism is reduced to subgraph isomorphism via another well-known graph problem, namely the clique problem. The approach of applying a clique algorithm to an association graph is discussed in a theoretical outline in the subsequent Section 3.8. Then, the actual steps practised in the study to construct the association graph are given in Section 3.9. The similarity measure used to construct the nodes of the association graph is discussed in Section 3.9.1, whilst the compatibility conditions for connecting arcs between nodes are defined in Section 3.9.2.

3.8 The Association Graph and Clique Finding Technique

The two images involved in image matching are never identical, so the graphs derived from the images are also not isomorphic. In addition, the problem of false or missing features results in false, missing or extra nodes in the relational graph. Therefore, when image matching is transformed into a graph matching problem, graph matching as double subgraph isomorphism is searched for to accomplish image matching, whereby all isomorphisms between subgraphs of one relational graph and subgraphs of another relational graph are sought. However, in order to accomplish image matching, it is not sufficient to determine solely the double subgraph isomorphism between the two graphs; it must also be determined that the labelling of the arcs and nodes is equivalent. Therefore, the maximal clique algorithm applied to an association graph is the solution to this problem. Given two relational structures to be matched, S1 = (V1, P, T) and S2 = (V2, P, T), graph matching can, as discussed in Section 3.6, be accomplished by forming an association graph (also known as a correspondence graph) from both relational graph S1 and relational graph S2 and then searching for cliques in the resulting association graph. The association graph of two relational structures S1 and S2 is the undirected graph G = (V, E) defined as (Bomze et al., 1999):

V = {(v1, v2) ∈ V1 × V2 : p(v1) ⇔ p(v2)} and
E = {((v1, v2), (v1', v2')) ∈ V × V : (v1, v2) and (v1', v2') are compatible} (3.2)

Each node of the association graph comprises an assignment of an element v1 to an element v2, one from set V1 and one from set V2, which have the same properties. For each element v1 in V1 and element v2 in V2, if v1 and v2 have the same properties (p(v1) iff p(v2) for each p in P), an association node v12 = (v1 : v2) is constructed by assigning (mapping) v1 to v2 (Ballard and Brown, 1982).
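Equation 3.2 translates almost directly into code. The following is a minimal Python sketch in which the property test and the compatibility test are supplied as predicates; all names are ours, and the sketch additionally requires the two assignments of an arc to involve distinct elements, so that the resulting cliques are one-to-one.

```python
from itertools import product

def association_graph(s1, s2, same_properties, compatible):
    """Build the association graph of two element sets.  Nodes are
    assignments (v1, v2) with matching properties; an arc joins two
    assignments when they are mutually compatible under the relations."""
    nodes = [(v1, v2) for v1, v2 in product(s1, s2)
             if same_properties(v1, v2)]
    arcs = {(a, b) for a in nodes for b in nodes
            if a != b and a[0] != b[0] and a[1] != b[1]   # distinct elements
            and compatible(a, b)}
    return nodes, arcs
```

With a compatibility predicate that compares the relation labels holding between the paired elements, two assignments are joined exactly when they preserve the same binary relation in both structures.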

The arcs of the association graph, in turn, denote the linking of nodes that are mutually compatible according to the relations T: node (v1 : v2) and node (v1' : v2') are connected by an arc if they represent compatible assignments according to the relations T, that is, if the pairs satisfy the same binary predicates (t(v1, v1') iff t(v2, v2') for each t in T) (Ballard and Brown, 1982). Hence, a match between the two relational structures S1 and S2 is just a set of node pairings (assignments) that are all mutually compatible in their relations. The best match can be taken to be the largest set of assignments that are all mutually compatible under the relations. This condition of best match, when searched for in the association graph, refers to a set of totally connected nodes, in other words, simply a clique. A clique of a given graph is a totally connected subgraph or a complete subgraph, where each node in the clique is connected to every other node in the clique (Ballard and Brown, 1982; Balakrishnan, 1997). Figure 3.11 shows some examples, in which the clique is highlighted by a network of grey coloured nodes and thicker arcs. Figure 3.11(a) shows a clique of size 4 and Figure 3.11(b) shows a clique of size 5, as indicated by five grey coloured nodes.

Figure 3.11: Some examples of cliques: (a) a clique of size 4, indicated by a set of four grey coloured nodes; (b) a clique of size 5, indicated by a set of five grey coloured nodes
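The cliques of such a graph that cannot be extended by any further node can be enumerated with the classic Bron-Kerbosch recursion. This algorithm is not part of the thesis's exposition, but it is a standard way to realize the clique search; a minimal sketch without pivoting:

```python
def bron_kerbosch(adj, r=frozenset(), p=None, x=frozenset()):
    """Enumerate all cliques that cannot be extended (maximal cliques) of
    the graph given as an adjacency mapping node -> set of neighbours."""
    if p is None:
        p = frozenset(adj)          # initially, every node is a candidate
    if not p and not x:
        yield r                     # no extension possible: r is maximal
        return
    for v in list(p):
        yield from bron_kerbosch(adj, r | {v}, p & adj[v], x & adj[v])
        p = p - {v}
        x = x | {v}
```

The largest of the enumerated cliques then gives the largest mutually compatible set of node pairings.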

However, the condition of best match more correctly refers to the search for maximal cliques in the association graph. A maximal clique is a clique to which no new node may be added without destroying the clique property; therefore, a maximal clique represents a set of consistent assignments. In this formulation of matching, larger maximal cliques are taken to indicate better matches, since they account for more mutually connected nodes, which indicate a larger set of assignments (correspondences) that are all mutually compatible under the relations. The largest maximal clique provides the node pairings of the best match; hence, the best matches can be determined by the largest maximal cliques in the association graph. To summarize, to perform graph matching between two relational graphs, an association graph needs to be derived from both relational graphs. The nodes of the association graph are composed of pairs of relational graph nodes, v1 and v2, one each from S1 and S2, whose properties are the same. The arcs of the association graph indicate that the endpoints of the arc represent compatible nodes (under the relations). Maximal cliques in the association graph indicate sets of compatible nodes, and the largest maximal clique provides the node pairings of the best match. The association graph functions as an auxiliary structure to search for the best available matching between the elements of set V1 and set V2 while preserving the compatibilities of the relations between these elements.

3.9 The Construction of Association Graph for Relational Graph Matching

In the previous section, the association graph technique was discussed within a rather theoretical context. Here, in this section, the actual steps taken to construct the association graph in this study are discussed. The similarity measure used to construct the nodes of the association graph is discussed in Section 3.9.1, whilst the compatibility conditions for connecting arcs between nodes are defined in Section 3.9.2.

3.9.1 Building the Nodes of Association Graph

To perform graph matching between two relational graphs, an association graph needs to be constructed from these two relational graphs. Given a pair of relational graphs to match, with the left relational graph containing M nodes (line segments) and the right relational graph containing N nodes, the total number of association nodes is M x N. As every association node consists of a left-to-right assignment, there are M x N left-to-right matching candidates in the resulting association graph. The complexity of the association graph building process is proportional to the number of association nodes. Therefore, in this study, the position constraint and feature properties are taken into account to eliminate as many incorrect left-to-right matching candidates as possible during the association node building process, in order to reduce the complexity.

The potential right line candidates to be assigned to each left line are selected based on the position constraint: the potential candidates to be assigned to a left line must be situated within a region of interest, which is approximated according to the centre position of the left line l_i. Thus, a list of potential corresponding right lines is established for each line in the left image, based on the position constraint.

In a theoretical context, each association graph node is formed by associating corresponding relational graph nodes with exactly the same properties (as discussed in Section 3.8). However, this condition is not practical, because exactly identical properties are unlikely to occur among features. Practically, the properties should take on ranges of values rather than the binary same or not same. Thus, in this study, a measure of property similarity is introduced to determine which two relational graph nodes are similar enough to be paired as an association node.
An association node is formed if the length and orientation properties of two corresponding relational graph nodes, and the number of relations that each holds with its neighbouring lines, are similar.
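The position-constraint filtering described above can be sketched as follows. This is an illustrative Python sketch only: the circular region, the radius value, and the line data structure are all assumptions, since the thesis states only that the region of interest is approximated from the left line's centre position.

```python
def candidate_right_lines(left_line, right_lines, radius):
    """Position constraint: keep only the right lines whose centre lies
    within a region of interest around the left line's centre. The
    circular region and the radius are illustrative assumptions."""
    cx, cy = left_line["centre"]
    return [r for r in right_lines
            if (r["centre"][0] - cx) ** 2 + (r["centre"][1] - cy) ** 2 <= radius ** 2]

left = {"centre": (10.0, 10.0)}
rights = [{"id": "ra", "centre": (12.0, 11.0)},
          {"id": "rh", "centre": (90.0, 80.0)}]
print([r["id"] for r in candidate_right_lines(left, rights, radius=15.0)])  # ['ra']
```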

The similarity measure B_lr captures the similarity between the two lines within an association graph node by averaging three conditions: (1) the length difference of the line segments, (2) the orientation difference between the line segments, and (3) the difference in the number of relations that each line segment holds with its neighbouring lines:

B_lr = (1/3) [ min(h_l, h_r)/max(h_l, h_r) + min(θ_l, θ_r)/max(θ_l, θ_r) + min(n_l, n_r)/max(n_l, n_r) ]   (3.3)

where h_l is the length (in number of pixels) of the left line segment, h_r is the length of the right line segment, and θ_l and θ_r are the orientations of the left and right line, respectively. n_l denotes the number of relations that the left line segment holds with its neighbouring lines, and n_r the number of relations that the right line segment holds with its neighbouring lines; min and max denote the minimum and maximum operations, respectively. The similarity measure is quantified as a number in the range from zero to one: its value tends to one as the similarity increases, equals one for a perfect match, and tends to zero as the dissimilarity increases. The number of relations between each line segment and its neighbouring lines is an important measure because it reflects the local density of the structural descriptions.

3.9.2 Building the Arcs of Association Graph

After the node building process, any pair of nodes is connected with an arc if and only if they have equivalent relations. Every arc in the association graph indicates that the endpoints of the arc represent compatible nodes in terms of relations. In this subsection, the conditions of compatibility and incompatibility are defined to determine whether two association nodes are compatible or incompatible in terms of relations, and hence whether or not to link an arc between these two nodes.

The definitions for compatibility state the compatibility conditions in terms of the three types of relations dealt with in this study: ordering, intersection, and co-linearity. Thus, there are three definitions: definition 1, definition 2 and definition 3 describe node compatibility according to the relation types ordering, intersection, and co-linearity, respectively. Definition 4 asserts the status of incompatibility according to the relation types, whilst definition 5 is a propagation rule that extends compatibility to association nodes whenever no direct relation is detected between two lines. All these definitions are designed to preserve the overall compatibility of the matching features from the left image to the right image.

Let l_i and l_j be lines from the left image and r_a and r_b be lines from the right image. The matching is carried out as a mapping of left element l_i to right element r_a (l_i : r_a) and of left element l_j to right element r_b (l_j : r_b). The matching must satisfy the compatibility rules: (1) the relation between l_i and l_j must be compatible with the relation between r_a and r_b, and (2) the matching is one-to-one (uniqueness constraint), i.e. each feature in the left image is eventually matched to a single feature in the right image. An association graph is therefore derived from the left and right relational graphs so as to satisfy the matching rules mentioned above.

Given two association graph nodes, node v_ia and node v_jb, where node v_ia consists of the assignment of l_i to r_a (l_i : r_a) and node v_jb consists of the assignment of l_j to r_b (l_j : r_b), an arc is inserted between node v_ia and node v_jb if they are compatible in terms of relations, according to any one of the following definitions.
Definition 1: v_ia is compatible with v_jb if any one of the following ordering conditions is true:

(i ≠ j) AND (a ≠ b) AND (l_i to the left of l_j) AND (r_a to the left of r_b)
(i ≠ j) AND (a ≠ b) AND (l_i to the right of l_j) AND (r_a to the right of r_b)
(i ≠ j) AND (a ≠ b) AND (l_i to the top of l_j) AND (r_a to the top of r_b)
(i ≠ j) AND (a ≠ b) AND (l_i to the bottom of l_j) AND (r_a to the bottom of r_b)

Definition 1 reflects the ordering constraint; it embeds an ordering violation check. For instance, suppose l_i is found to lie on the left side of l_j. If r_a is also found to the left of r_b, then node v_ia (l_i : r_a) is said to be compatible with node v_jb (l_j : r_b) in terms of the 'to the left of' relationship, and therefore node v_ia and node v_jb are connected with an arc. On the other hand, if the 'to the left of' relationship does not hold, so that node v_ia violates the 'to the left of' relationship with v_jb, then these two nodes cannot be connected.

Definition 2: v_ia is compatible with v_jb if the following intersection condition is true:

(i ≠ j) AND (a ≠ b) AND (l_i intersects with l_j) AND (r_a intersects with r_b)

Definition 2 reflects the descriptive property that two or more lines intersecting in space intersect in both images. Similarly, this definition embeds a violation check of the intersection relationship. For instance, suppose l_i is found to intersect with l_j; since both (l_i : r_a) and (l_j : r_b) are matching pairs, r_a should also intersect with r_b. If r_a is found to intersect with r_b, this indicates that the relation between l_i and l_j is compatible with the relation between r_a and r_b: node v_ia (l_i : r_a) is compatible with v_jb (l_j : r_b) in terms of the 'intersects with' relationship, and therefore node v_ia and node v_jb are linked with an arc. On the other hand, if node v_ia contradicts the 'intersects with' relationship with v_jb, these two nodes cannot be connected.

Definition 3: v_ia is compatible with v_jb if any one of the following co-linearity conditions is true:

(i ≠ j) AND (a ≠ b) AND (l_i co-linear with l_j) AND (r_a co-linear with r_b)
(i = j) AND (a ≠ b) AND (r_a co-linear with r_b) (see Figure 3.12 (a) and (b))
(i ≠ j) AND (a = b) AND (l_i co-linear with l_j) (see Figure 3.12 (c) and (d))

Figure 3.12: The definitions of compatibility in terms of the co-linearity relation: (a) and (b) line l_i in the left image may correspond to both lines r_a and r_b in the right image, where the line has been broken (the pieces are co-linear with each other); (c) and (d) lines l_i and l_j, co-linear in the left image, may correspond to the same line r_a in the right image

Definition 3 reflects the descriptive property that two lines co-linear in space are co-linear in both images. Similarly, this definition embeds a co-linearity violation check. For instance, suppose l_i is found to be co-linear with l_j; since both (l_i : r_a) and (l_j : r_b) are matching pairs, r_a should also be co-linear with r_b. If r_a is found to be co-linear with r_b, then node v_ia is said to be compatible with v_jb in terms of the 'co-linear with' relationship, and these two nodes are connected with an arc; otherwise they are not. Definition 3 also embeds the fact that a line in one image may match a line in the other image that has been broken into two or more pieces (see Figure 3.12). This is the only exception allowed with respect to the uniqueness constraint (one-to-one matching).
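Definitions 1 to 3, together with the incompatibility cases of definition 4, can be expressed as a single arc-test predicate. The following Python sketch is illustrative only: the dictionary encoding of directed relations and the relation labels are assumptions, not the thesis's data structures.

```python
def compatible(node1, node2, left_rel, right_rel):
    """Arc test for two association nodes (l_i : r_a) and (l_j : r_b).
    Relations are stored as dictionaries mapping a directed line pair
    to a label such as 'left_of', 'intersect' or 'colinear' -- an
    illustrative encoding."""
    (li, ra), (lj, rb) = node1, node2
    t_left = left_rel.get((li, lj))
    t_right = right_rel.get((ra, rb))
    if li != lj and ra != rb:
        # definitions 1-3 (first cases): same relation must hold on both sides
        return t_left is not None and t_left == t_right
    if li == lj and ra != rb:   # definition 3: one left line, broken right line
        return t_right == "colinear"
    if li != lj and ra == rb:   # definition 3: broken left line, one right line
        return t_left == "colinear"
    return False                # identical nodes are never paired

# Worked check: l_2 to the left of l_5 must pair with r_a to the left of r_e
left_rel = {("l2", "l5"): "left_of", ("l5", "l2"): "right_of"}
right_rel = {("ra", "re"): "left_of", ("re", "ra"): "right_of"}
print(compatible(("l2", "ra"), ("l5", "re"), left_rel, right_rel))  # True
print(compatible(("l2", "re"), ("l5", "ra"), left_rel, right_rel))  # False
```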

Definition 4: v_ia is incompatible with v_jb if any one of the following conditions is true:

(i ≠ j) AND (a ≠ b) AND (relation between l_i and l_j ≠ relation between r_a and r_b)
(i = j) AND (a ≠ b) AND (r_a not co-linear with r_b)
(i ≠ j) AND (a = b) AND (l_i not co-linear with l_j)

Definition 4 asserts the status of incompatibility, which restricts the arc connection between two incompatible nodes.

Definition 5: v_ia is compatible with v_jb if there is a node v_xy that makes the following proposition true:

(v_ia is compatible with v_xy) AND (v_xy is compatible with v_jb)

Definition 5 propagates compatibility to association nodes whenever no relation is detected between two lines. A relation may fail to be detected between two lines under several circumstances: (1) the relation detection methods used in this study only consider two neighbouring lines; (2) the relation between the two lines is in fact other than the inter-line conditions covered by the study (ordering, intersection and co-linearity); and (3) occlusion or noise influences the detection of the inter-line relationship, so the relation is missed.

3.10 The Matching Strategy

In this section, an example, together with the aid of illustrations, is given to explain the graph matching strategy. The example illustrates the flow of the process, starting from how a line-extracted image is cast into a relational graph representation and how the image matching problem is transformed into a graph matching problem, then

how to match these relational graphs in an association graph, and finally how the best available solution is searched for within the resulting association graph.

A pair of stereo images is considered for matching, comprising a left image and a right image. Thus, there are two relational graphs, one constructed from each of the left and right images. The construction of relational graphs from the line-extracted images is shown in the example of Figure 3.13, Figure 3.14 and Figure 3.15. First, the image pair undergoes a line extraction process to extract line segments from the image data, and each line is then labelled with an identification label. The resulting images are referred to as the left line-extracted image (see Figure 3.13 (a)) and the right line-extracted image (see Figure 3.13 (b)), respectively. Figure 3.13 (a) shows a set of six line segments, labelled l_1 to l_6, within the left line-extracted image. Figure 3.13 (b) shows a set of eight right lines, labelled r_a to r_h, in the right line-extracted image.

Comparing Figure 3.13 (a) with Figure 3.13 (b), the left and right line-extracted images to be matched are not identical. Some lines in one image appear as redundant lines in the other image. In addition, some lines are missing or broken into pieces in one image but not in the other. These false, missing or redundant line features contribute false, missing or extra nodes in the resulting relational graphs. Figure 3.14 shows the relational graph derived from the left line-extracted image (of Figure 3.13 (a)), generally referred to as the left relational graph. Figure 3.15 shows the relational graph derived from the right line-extracted image (of Figure 3.13 (b)), referred to as the right relational graph.
In the relational graph, the set of extracted line segment features is represented by a network of nodes and arcs, where each node represents a line labelled with its identification number, and each arc represents the relation between two nodes, if any exists. In this study, the represented relations between lines involve ordering (to the left of, to the right of, to the top of, and to the bottom of), co-linearity and intersection, as shown in Figure 3.14 and Figure 3.15.

Figure 3.13: Two line-extracted images to be matched: (a) left line-extracted image, and (b) right line-extracted image

Figure 3.14: Left relational graph; the represented inter-node relations are: to the left of (labelled t_1), to the right of (labelled t_2), to the top of (labelled t_3), to the bottom of (labelled t_4), intersects with (labelled t_5), and co-linear with (labelled t_6)

Figure 3.15: Right relational graph; the represented inter-node relations are: to the left of (labelled t_1), to the right of (labelled t_2), to the top of (labelled t_3), to the bottom of (labelled t_4), intersects with (labelled t_5), and co-linear with (labelled t_6)

Since the left and right line structures to be matched are not identical (compare Figure 3.13 (a) with Figure 3.13 (b)), the left and right relational graphs derived from them are also not isomorphic (compare Figure 3.14 with Figure 3.15). Hence, the image matching problem is modelled as a double subgraph isomorphism problem, that is, to find an isomorphism between a subgraph of the left relational graph and a subgraph of the right relational graph. To solve this double subgraph isomorphism problem, an association graph is derived from the left and right relational graphs, and the largest maximal cliques are then searched for within the association graph.

First, a list of potential right matching candidates is established for each line in the left image, based on the position constraint explained in Section 3.9.1: the potential candidates to be assigned to a left line must be situated within a region of interest. For instance, the potential matching candidates for left line l_2 are r_a and r_e in the right image (refer to Figure 3.13 (a) and Figure 3.13 (b)). Thus, the matching of left element l_2 to right element r_a (l_2 : r_a) and the matching of l_2 to r_e (l_2 : r_e) result in association nodes v_2a and v_2e, respectively (see Figure 3.16).

Then, the property similarity between the left line l_i and the right line r_j of each node v_ij (l_i : r_j) is computed. The similarity measure takes into account the length and orientation similarity between the left line l_i and the right line r_j, as well as the similarity in the number of relations. Nodes whose similarity measure falls below a user-defined threshold are eliminated in the node building process; however, in any case, the best three nodes for a left candidate are always kept in the graph.

All the resulting left-to-right matching candidates (l : r) are represented by the nodes of the association graph. Each node v_ij consists of the assignment of a left line l_i to its potential matching candidate r_j in the right image (l_i : r_j). The resulting nodes are v_1a (l_1 : r_a), v_1e (l_1 : r_e), v_2a (l_2 : r_a), v_2e (l_2 : r_e), v_3a (l_3 : r_a), v_3e (l_3 : r_e), v_4b (l_4 : r_b), v_4c (l_4 : r_c), v_5e (l_5 : r_e), v_5f (l_5 : r_f), v_5g (l_5 : r_g), v_6e (l_6 : r_e), v_6f (l_6 : r_f) and v_6g (l_6 : r_g) (see Figure 3.16).

Figure 3.16: The resulting association graph
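The similarity computation of equation (3.3) and the threshold pruning described above can be sketched in Python. This is an illustration only: the thesis's implementation is in MATLAB, and the threshold value 0.6 below is arbitrary.

```python
def similarity(h_l, h_r, theta_l, theta_r, n_l, n_r):
    """B_lr of equation (3.3): the average of the length ratio,
    orientation ratio and relation-count ratio. Equals 1.0 for a
    perfect match; all quantities are assumed to be positive."""
    ratio = lambda a, b: min(a, b) / max(a, b)
    return (ratio(h_l, h_r) + ratio(theta_l, theta_r) + ratio(n_l, n_r)) / 3.0

print(similarity(40, 40, 30.0, 30.0, 2, 2))            # 1.0 (identical lines)
print(round(similarity(40, 50, 30.0, 45.0, 2, 3), 3))  # 0.711

# Pruning sketch: drop candidates below a user-defined threshold
# (0.6 is an arbitrary illustrative value).
scores = {"ra": 0.92, "re": 0.31}
print([r for r, s in scores.items() if s >= 0.6])      # ['ra']
```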

After the node building process, the compatibility and incompatibility rules are applied to connect arcs between the resulting nodes (see Section 3.9.2). Any pair of nodes is connected with an arc if and only if they are mutually compatible in terms of relations, according to the compatibility and incompatibility definitions given in Section 3.9.2. The arcs of the association graph connect all the mutually compatible nodes together; in other words, the arcs indicate that the endpoints of each arc represent compatible associations in terms of relations.

For instance, consider the pair of nodes v_2a and v_5e in the association graph (see Figure 3.16). Node v_2a represents the matching of left element l_2 to right element r_a (l_2 : r_a) and v_5e represents the matching of l_5 to r_e (l_5 : r_e). As shown in the images and relational graphs, l_2 is to the left of l_5, while r_a is also to the left of r_e (refer to Figure 3.13 (a), Figure 3.13 (b), Figure 3.14 and Figure 3.15); this means t(l_2, l_5) is equivalent to t(r_a, r_e). This indicates that v_2a is compatible with v_5e in terms of the ordering relation, according to definition 1 given in Section 3.9.2. Thus, an arc is linked between v_2a and v_5e to represent the state of compatibility between node v_2a and node v_5e (see Figure 3.16).

Consider v_3a and v_4b, where l_3 is matched to r_a (l_3 : r_a) and l_4 is matched to r_b (l_4 : r_b). As shown in the images and relational graphs, l_3 intersects with l_4, while r_a also intersects with r_b (refer to Figure 3.13 (a), Figure 3.13 (b), Figure 3.14 and Figure 3.15). This means v_3a is compatible with v_4b, and therefore an arc is linked between them (see Figure 3.16). Take another instance: consider v_2a and v_3a, where l_2 is matched to r_a (l_2 : r_a) and l_3 is also matched to the same r_a in the right image (l_3 : r_a).
Since l_2 is co-linear with l_3 (refer to Figure 3.13 (a), Figure 3.13 (b), Figure 3.14 and Figure 3.15), v_2a is compatible with v_3a in terms of the co-linearity relationship, according to definition 3 given in Section 3.9.2. Thus, an arc is linked between v_2a and v_3a (see Figure 3.16). Consider v_2e and v_5a, where l_2 is matched to r_e (l_2 : r_e) and l_5 is matched to r_a (l_5 : r_a). As shown in the images and the relational graphs, l_2 is to the left of l_5, but r_e is to the right of r_a (refer to Figure 3.13 (a), Figure 3.13 (b), Figure 3.14 and Figure 3.15). This

means that node v_2e is incompatible with v_5a (refer to definition 4 in Section 3.9.2), and hence there is no arc connection between them (see Figure 3.16). Consider v_2a and v_4b: there is no relation detected between l_2 and l_4, while r_a intersects with r_b; thus the propagation rule is applied (refer to definition 5 in Section 3.9.2) to connect an arc between v_2a and v_4b.

By using the relational and association graph representations, image matching becomes equivalent to searching for the largest set of mutually compatible nodes, the largest maximal clique, in the association graph, as discussed in Section 3.8. The best available matching between two images can be determined by the largest maximal cliques in the association graph, because a larger maximal clique, with a larger set of mutually connected nodes, provides a larger number of left-to-right matching lines.

At the end of the graph construction process, there are a number of maximal cliques in the resulting association graph, each consisting of a set of association nodes that indicate left-to-right matching pairs. By examining Figure 3.16, several maximal cliques can be found in the association graph. The largest maximal clique is of size 6 and consists of v_2a (l_2 : r_a), v_3a (l_3 : r_a), v_4b (l_4 : r_b), v_5e (l_5 : r_e), v_6f (l_6 : r_f) and v_6g (l_6 : r_g). The largest maximal clique is highlighted by grey coloured nodes and thicker arcs in the association graph (see Figure 3.16). Figure 3.17 (a) and (b) show the left-to-right matching lines indicated by the nodes contained in the largest maximal clique.
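The clique search itself can be performed by any maximum-clique routine (the thesis uses Mathematica's MaximumClique). As a hedged illustration, a brute-force Python version, adequate only for small association graphs since maximum clique is NP-hard in general:

```python
from itertools import combinations

def largest_clique(nodes, arcs):
    """Exhaustively search for a largest clique: try subsets from the
    largest size down and return the first fully connected one.
    Exponential in general, but workable for the small association
    graphs kept by the pruning steps."""
    nodes = sorted(nodes)
    for size in range(len(nodes), 0, -1):
        for subset in combinations(nodes, size):
            if all(frozenset(p) in arcs for p in combinations(subset, 2)):
                return set(subset)
    return set()

# Miniature association graph: v2a, v3a and v4b are mutually compatible
arcs = {frozenset(p) for p in [("v2a", "v3a"), ("v2a", "v4b"), ("v3a", "v4b")]}
print(sorted(largest_clique({"v2a", "v3a", "v4b", "v2e"}, arcs)))
# ['v2a', 'v3a', 'v4b']
```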

Figure 3.17: The corresponding lines between the left and right image

3.11 The Complexity

The matching strategy is designed to keep the number of nodes and arcs in the graph as small as possible. The complexity of the association graph building process is proportional to the number of association nodes. To reduce this complexity, the position constraint and feature property similarity (indicated by the similarity measure) are configured in the proposed matching algorithm to eliminate as many incorrect left-to-right matching candidates as possible during the association node building process, as discussed in Section 3.9.1. The size of the association graph (the number of nodes and arcs) should be kept as low as possible to increase the efficiency of the graph building process. Therefore, the number of arcs is kept low by detecting all incompatible nodes, as discussed in Section 3.9.2. The detection of incompatible nodes restricts the connection of new arcs when the propagation definition is applied (refer to Section 3.9.2).

Apart from improving the efficiency of the graph building process, the detection of incompatibility between association nodes is a key step in reducing the possibility of incorrect matching. This is because indirect compatibility through propagation does not take the feature properties into account, and hence false matching is likely to happen when the fifth definition is applied.

CHAPTER 4

IMPLEMENTATION

4.1 Introduction

The foregoing chapter clarified the steps of the structural-based image matching approach, which involves the representation of the structural descriptions of an image as a relational graph and the matching between relational graphs, to perform image matching. This approach separates the problem into a number of more manageable sub-problems, which can then be solved by separate modules. Hence, the methodology is designed to be implemented modularly by five computer modules, namely the feature extraction module, structural description derivation module, relational graph module, association graph module and clique-finding module. This chapter reports on the implementation based on the design of the methodology. The implementation and development details of each module are given in the following sections.

4.2 The System

The system consists of the feature extraction module, structural description derivation module, relational graph module, association graph module and clique-finding module, as illustrated in Figure 4.1.

Feature Extraction Module
- Input image (MATLAB Image Processing Toolbox built-in function: imread)
- Edge detection (MATLAB Image Processing Toolbox built-in functions: edge, bwmorph)
- Trace connected edge pixels (written function: traceedgepixel)
- Straight line segment fitting with a specified tolerance (written functions: fitline, plotline)

Structural Description Derivation Module
- Line segment labelling (written function: buildlabelmatrix; MATLAB Image Processing Toolbox built-in function: regionprops)
- Detection of relations between line segments: ordering, intersection, collinearity (written functions: detectordering, detectdirectconnection, screenordering, detectconnection)

Figure 4.1: Modules in the system

Relational Graph Module
- Construction of relational graphs (written functions: buildrelationalgraph, buildadjacencymatrix, plotrelationalgraph)

Association Graph Module
- Construction of association graph vertices (written function: buildassociationnode)
- Construction of association graph edges (written functions: buildassociationarc, propagatearc)

Clique-Finding Module
- Searching the largest maximal clique in the association graph (Mathematica's Combinatorica command: MaximumClique)

Figure 4.1: Modules in the system (continued)

Most parts of the feature extraction module, structural description derivation module, relational graph module and association graph module were coded using MATLAB programming features. MATLAB provides a full programming language that enables users to write a series of MATLAB statements into a file and then execute it with a simple function call. Every written program was given a file name

of filename.m, with the file extension .m, and was known as a MATLAB M-file. The term used for filename becomes a new command (function) that MATLAB associates with the written program, and it can be called within the MATLAB environment to execute the MATLAB code defining the function. MATLAB functions accept arguments and produce output. The system is assembled from a number of well-designed functions, each of which implements a particular task. Functions were written for the different objectives, to implement the required steps of the methodology. Each function body contains a series of statements that perform the specified computation and assign values to the output arguments when executed.

In this study, the traceedgepixel, fitline, detectordering and detectconnection functions were written to detect lines and inter-line relations. Besides these, the buildrelationalgraph, buildadjacencymatrix and plotrelationalgraph functions were written to construct the relational graph representing the structural descriptions. Then, the buildassociationnode, buildassociationarc and propagatearc functions were developed to construct the association graph from the relational graphs. Detailed descriptions of these written functions are given in the subsequent sections.

Two technical computing tools, MATLAB 6 and Mathematica 5, are used to ease some of the technical computation. Some built-in functions of the MATLAB Image Processing Toolbox are used to assist the feature extraction tasks, and the Mathematica built-in package Combinatorica is used to compute the largest maximal clique in the resulting association graph. Combinatorica is one of the Mathematica standard add-on packages, written in the Mathematica language, that provides functions in combinatorics and graph theory (computational discrete mathematics).

4.3 Feature Extraction Module

The feature extraction module, in accordance with its objective, is assembled from the written traceedgepixel, fitline and plotline functions. To extract linear segments from an input image, the procedure includes: (1) edge detection, (2) edge tracing, and (3) line segment fitting.

First, image data is input using MATLAB's imread function. The syntax of this function is I = imread(filename, fmt), where imread reads an image from a graphics file named filename into I, with supported format fmt. Then edge detection is applied to the input. The edge operator used in this research is the Canny operator, supported by MATLAB's edge function. The syntax of this function is BW = edge(I, 'canny'), where edge takes an intensity image I as its input and returns a binary edge image BW of the same size as I, with 1 where the function finds edges in the intensity image and 0 elsewhere. Here, the edge function specifies the Canny method.

From the resulting edge image, the traceedgepixel function tracks edge pixels into lists of sequential edge pixels. This function links edge points together into chains: it forms lists of connected edge pixels found in the edge image, one list for each set of connected edge pixels. When searching along an edge, the function simply tracks one of the branches where the edge diverges at a junction; the other branch is eventually processed as another edge. During the tracing process, edges are thinned using MATLAB's bwmorph function, and edges shorter than a minimum length of interest (in pixels) are discarded. The bwmorph(BW, 'thin') function takes BW as input and applies the thinning morphological operation to BW, as specified by the operation name 'thin'. The flow of the function starts with a raster scan through the edge image to search for edge pixels.
In other words, whenever an edge pixel is encountered, this edge pixel is adopted as a seed point

where the rest of the edge pixels following this edge pixel are traced. From this starting pixel, the searching process tracks along the edge pixels in one direction and stores the row and column coordinates (r, c) of the traced edge pixels as an edge list of the form [(r_1, c_1), (r_2, c_2), …, (r_n, c_n)]_i. Every list of connected edge pixels is labelled with a number i. When no more connected edge pixels are found, the function returns to the start pixel and tracks in the opposite direction. Finally, a check on the overall number of edge pixels found is made, and the edge is ignored if its length is shorter than the specified minimum length of interest. All the traced edge lists are stored as a cell array of arrays of connected edge pixels called edgelist, of the form {[(r_1, c_1), (r_2, c_2), …, (r_n, c_n)]_1, [(r_1, c_1), (r_2, c_2), …, (r_n, c_n)]_2, …, [(r_1, c_1), (r_2, c_2), …, (r_n, c_n)]_nedge}, where nedge denotes the number of edges successfully tracked in the edge image. The resulting edge lists are then used by the fitline function to form straight line segments that adhere to the edges.

The fitline function fits straight lines through the edge pixels in edgelist using a specified tolerance, typically two pixels. It takes an array of edge pixels stored in edgelist and finds the size and position of the maximum deviation (in pixels) from the line that joins the endpoints. If the maximum deviation exceeds the allowable tolerance (typically set to two pixels), the edge is shortened to the point of maximum deviation and the test is repeated. In this manner, each edge is broken down into linear segments that adhere to the original edge within the specified tolerance. Each resulting straight line segment is defined by its two end pixels, the starting pixel (r_s, c_s) and the ending pixel (r_e, c_e), and is stored in a four-column array linelist of the form (r_s, c_s, r_e, c_e).
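The split-at-maximum-deviation scheme described for fitline can be re-expressed as a short Python sketch (the thesis's implementation is a MATLAB M-file; the perpendicular-distance formula is standard, but the code itself is illustrative):

```python
import math

def fit_lines(points, tol=2.0):
    """Recursively split a traced edge into straight segments: find the
    point of maximum perpendicular deviation from the chord joining the
    endpoints; if it exceeds `tol` pixels, split there and repeat."""
    (r1, c1), (r2, c2) = points[0], points[-1]
    chord = math.hypot(r2 - r1, c2 - c1) or 1.0
    # Perpendicular distance of each (r, c) point from the chord
    dists = [abs((r2 - r1) * (c1 - c) - (r1 - r) * (c2 - c1)) / chord
             for r, c in points]
    k = max(range(len(points)), key=dists.__getitem__)
    if dists[k] <= tol:
        return [(points[0], points[-1])]   # one segment: end pixels only
    return fit_lines(points[:k + 1], tol) + fit_lines(points[k:], tol)

# An L-shaped edge breaks into two segments at the corner pixel
edge = [(0, 0), (0, 5), (0, 10), (5, 10), (10, 10)]
print(fit_lines(edge, tol=2.0))  # [((0, 0), (0, 10)), ((0, 10), (10, 10))]
```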
In addition, each derived line segment is also represented in edge list form, for use by the line segment labelling process in the next module. The resulting line segments are stored as a cell array of edge lists, newedgelist, of the form {[(r_1, c_1), (r_2, c_2), …, (r_n, c_n)]_l1, [(r_1, c_1), (r_2, c_2), …, (r_n, c_n)]_l2, …, [(r_1, c_1), (r_2, c_2), …, (r_n, c_n)]_nline}, where nline denotes the number of line segments formed. Hence, each cell gives the list of edge pixels that constitute the corresponding line segment. For instance, the first line is composed of the edge pixels grouped in the cell [(r_1, c_1),

(r_2, c_2), …, (r_n, c_n)]_l1, and the second line is composed of the edge pixels grouped in the cell [(r_1, c_1), (r_2, c_2), …, (r_n, c_n)]_l2, and so forth.

4.4 Structural Description Derivation Module

Line Segment Labelling

The essential pre-step for interpreting the inherent structural descriptions of the feature image is to transform the feature image into a label matrix. The label matrix is a more suitable image data structure for deriving structural descriptions between line segments. To build it, the edge pixels that constitute each line segment in the feature image are re-labelled with an identification number. Thus, the newedgelist cell array (newedgelist = {[(r_1, c_1), (r_2, c_2), …, (r_n, c_n)]_l1, [(r_1, c_1), (r_2, c_2), …, (r_n, c_n)]_l2, …, [(r_1, c_1), (r_2, c_2), …, (r_n, c_n)]_nline}) resulting from the previous module is used to generate the label matrix.

The label matrix is an image of the same size as its feature image (size = m rows x n columns) whose pixels are valued with the identification numbers of the lines existing in the feature image. The elements of the label matrix are integer values greater than or equal to zero: the elements labelled zero are the background, the set of pixels labelled one corresponds to one line, the set of pixels labelled two makes up a second line, and so on. The maximum pixel value in the label matrix is equal to the number of lines, nline.

The MATLAB Image Processing Toolbox's built-in function regionprops is called within the function buildlabelmatrix, as shown in Figure 4.2. This regionprops function is used to compute a set of properties for each labelled line in the label matrix. The return value, labelproperty, is a structure array whose fields denote the different measurements computed for each region, as specified by the requested properties. For the needs of the study, the measurements computed for each line are its area, centroid and orientation.

To generate the label matrix from newedgelist, the buildlabelmatrix function is written, as summarized in Figure 4.2.

    function [labelmatrix,LabelProperty] = buildlabelmatrix(newedgelist,m,n)
    % Calculate the number of lines
    nLine = length(newedgelist);
    % Initialize and generate the label matrix
    labelmatrix = zeros(m,n);
    for iLine = 1:nLine
        pixels = newedgelist{iLine};   % [r c] coordinates of this line
        labelmatrix(sub2ind([m n],pixels(:,1),pixels(:,2))) = iLine;
    end
    % Compute line properties
    LabelProperty = regionprops(labelmatrix,'Area','Centroid','Orientation');

Figure 4.2: Statements in buildlabelmatrix function

Derivation of Inter-Line Relationship

The structural description derivation module detects the relationships between line segments from the label matrix. Deriving these relations is an essential step in generating the structural description of an image, which is then represented by the relational graph. Three types of inter-line relation are derived in this study: ordering, intersection and co-linearity, as discussed previously. Accordingly, the detectordering, detectconnection and detectcollinearity functions were coded to derive each of these relation types. These functions are described in Section 4.4.1, Section 4.4.2 and Section 4.4.3, respectively. The derived relations are then represented by the relational graph, which is constructed by the relational graph module discussed in Section 4.5.

Ordering Relationship

To detect the ordering relationship for a line segment, a set of neighbouring lines on both sides of the segment must be searched for. The search is carried out as a pixel-by-pixel displacement from each pixel of the line under consideration, in the direction perpendicular to that line. The search continues on both sides of the line until a pixel belonging to another neighbouring line is encountered (as shown in Figure 3.4). For each line segment, candidate labels may be found on both sides, left-right or top-bottom. Hence, the ordering relationship for a line may consist of "to the left of" and "to the right of" relations, or the equivalent "to the top of" and "to the bottom of" relations for the other orientation. However, one of these sets of candidate labels may be empty if the line segment is close to the image border. The algorithm to determine the left-right or top-bottom lines for a line segment is summarized as follows:

(1) Consider a line l_q with label number q and length h_q (in number of pixels). Line l_q is composed of the edge pixels [(r1, c1), (r2, c2), ..., (rn, cn)]lq, as given by newedgelist.

(2) Calculate the slope of l_q, m_q, and the slope of the searching direction, m_s, which is perpendicular (90 degrees) to l_q:

    m_q = (r_e - r_s) / (c_e - c_s)    (4.1)

    m_s = -1 / m_q    (4.2)

(3) Carry out the search as a pixel-by-pixel displacement from each pixel (currentrow, currentcol) of l_q, in the search direction m_s and on both sides of l_q. The displacement visits coordinates (r_d, c_d), approximated using the following formulae:

    r_d = m_s (c_d - currentcol) + currentrow    (4.3)

    c_d = (r_d - currentrow) / m_s + currentcol    (4.4)

(4) When a pixel belonging to another line l_c is encountered (i.e. when labelmatrix(r_d, c_d) = c, with c not equal to 0 and c not equal to q), terminate the displacement. This process is repeated for every pixel of l_q.

(5) Count the number of pixels of l_c encountered in the search, h_o, which denotes the overlapping length (in pixels) covered by l_q and its neighbouring candidate l_c.

(6) Calculate the percentage of overlap (overlap %) between l_q and l_c as follows:

    overlap % = h_o / min(h_q, h_c)    (4.5)

(7) Any candidate l_c whose overlapping percentage falls below a threshold is eliminated.

The algorithm is coded as the detectordering function with the computeoverlap subfunction; their structures are shown in Figure 4.3 and Figure 4.4, respectively.
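Steps (3) to (6) can be sketched in Python for the simplest case of a horizontal line, where the perpendicular search direction is vertical (a hypothetical illustration under that assumption, not the thesis's MATLAB implementation):

```python
def ordering_overlap(label, pixels_q, q, max_steps):
    """From each pixel of line q, step downward (perpendicular to a
    horizontal line) until a pixel of another label is hit; count the
    hits per neighbouring label (the overlapping length h_o)."""
    hits = {}
    rows = len(label)
    for r, c in pixels_q:
        for d in range(1, max_steps + 1):
            rr = r + d
            if rr >= rows:
                break
            lab = label[rr][c]
            if lab != 0 and lab != q:
                hits[lab] = hits.get(lab, 0) + 1
                break
    return hits

# Line 1: horizontal, 4 pixels; line 2: horizontal, 3 pixels, two rows below.
label = [[0] * 6 for _ in range(5)]
line1 = [(0, 1), (0, 2), (0, 3), (0, 4)]
line2 = [(2, 2), (2, 3), (2, 4)]
for r, c in line1:
    label[r][c] = 1
for r, c in line2:
    label[r][c] = 2

hits = ordering_overlap(label, line1, 1, max_steps=4)
h_o = hits[2]                               # overlapping length in pixels
pct = h_o / min(len(line1), len(line2))     # eq. (4.5)
```

Here the three pixels of line 1 that lie directly above line 2 produce h_o = 3, and eq. (4.5) gives an overlap of 3 / min(4, 3) = 1.0, so line 2 survives as an ordering candidate.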

    function detectordering(labelmatrix,linelist,newedgelist)
    maxStep = max(size(labelmatrix));
    for iLine = 1:nLine
        % Calculate the search direction using the starting and ending pixel
        r1 = linelist(iLine,1); c1 = linelist(iLine,2);
        r2 = linelist(iLine,3); c2 = linelist(iLine,4);
        lineSlope = (r2-r1)/(c2-c1);
        searchSlope = -1/lineSlope;
        % Search for the ordering candidate
        count = 0;
        nPixel = size(newedgelist{iLine},1);
        for iPixel = 1:nPixel
            currentRow = newedgelist{iLine}(iPixel,1);
            currentCol = newedgelist{iLine}(iPixel,2);
            for step = [1:maxStep, -1:-1:-maxStep]   % both sides of the line
                c = currentCol + step;
                r = round(searchSlope*(c - currentCol) + currentRow);
                if labelmatrix(r,c) ~= 0 && labelmatrix(r,c) ~= iLine
                    count = count + 1;
                    candidate = labelmatrix(r,c);
                    break
                end
            end
        end
        % Compute the overlap percentage
        computeoverlap(count);
    end

Figure 4.3: Statements in detectordering function

    function overlapPercent = computeoverlap(overlapLength,lineLength,candidateLength)
    overlapPercent = overlapLength / min(lineLength,candidateLength);

Figure 4.4: Statements in computeoverlap subfunction

The output is written into a text file named relation1.dat. The file stores a matrix of size n x 3, where n denotes the number of relations of type ordering; the first column refers to the line l_q, the second column refers to the neighbouring candidate line l_c, and the third column represents the ordering relationship existing between l_q and l_c. Every (q, c) pair of elements in the matrix denotes that an ordering relation exists between line l_q and line l_c. When this structural information is represented by the

relational graph in the subsequent module, every pair of nodes v_q and v_c is connected with a labelled arc to denote the ordering relation.

Derivation of Intersection and Co-linearity Relationship

In this study, relations of type intersection and co-linearity are derived by searching for intersection and co-linearity occurrences of one line with other lines within a search area. A search area is defined centred at each end pixel of the line in question, and the search for intersection or co-linearity of the line with any other line is confined to this area. The algorithm is summarized as follows:

(1) Consider a line l_q with label number q and orientation theta_q. Line l_q is defined by its two end pixels, the starting pixel (r_sq, c_sq) and the ending pixel (r_eq, c_eq), as given by linelist.

(2) For an end pixel (r_q, c_q) of l_q, define a search area A (of size m rows x n columns) according to the position of the end pixel. The square search area (rowarray, colarray) is centred at the end pixel and bounded from (r - a) to (r + a) and from (c - a) to (c + a).

(3) Detect the existence of pixels belonging to any other line l_c (with label number c, c not equal to q) within the search area. Line l_c, with orientation theta_c, is defined by its two end pixels, the starting pixel (r_sc, c_sc) and the ending pixel (r_ec, c_ec), as given by linelist.

(4) Calculate the orientation difference delta-theta between l_q and l_c:

    delta-theta = theta_q - theta_c    (4.6)

(5) If delta-theta is greater than 10 degrees, then l_c intersects l_q; otherwise, if delta-theta is less than 10 degrees, calculate the Euclidean distances d_1 and d_2 between the end pixel of l_q under consideration and both end pixels of l_c:

    d_1 = sqrt((r_q - r_sc)^2 + (c_q - c_sc)^2)    (4.7)

    d_2 = sqrt((r_q - r_ec)^2 + (c_q - c_ec)^2)    (4.8)

(6) If either d_1 or d_2 satisfies the distance constraint (a) for the co-linearity condition, then l_c is co-linear with l_q:

    d_1 < a    (4.9)

    d_2 < a    (4.10)

(7) This process is repeated for the other end pixel of l_q.

The algorithm to determine the connection of a line segment with other lines, either by intersection or by co-linearity, is coded as the detectconnection function with the initializesearcharea and searchconnectlabel subfunctions. Their structures are shown in Figure 4.5, Figure 4.6 and Figure 4.7, respectively.
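The decision in steps (4) to (6) can be condensed into a small Python sketch (a hypothetical illustration with invented names such as `classify`, not the thesis's MATLAB code): an orientation difference above 10 degrees signals intersection, while near-parallel lines whose endpoints fall within the distance constraint a are declared co-linear.

```python
import math

def classify(theta_q, theta_c, end_q, ends_c, a):
    """Classify the relation of line c to line q at one end pixel of q."""
    dtheta = abs(theta_q - theta_c)              # eq. (4.6)
    if dtheta > 10:
        return 'intersection'
    d = [math.dist(end_q, e) for e in ends_c]    # eqs. (4.7)-(4.8)
    if min(d) < a:                               # eqs. (4.9)-(4.10)
        return 'co-linear'
    return None

# Nearly parallel segments whose endpoints almost touch -> co-linear.
rel = classify(5.0, 8.0, (10, 10), [(10, 12), (10, 40)], a=5)
```

With an orientation difference of 3 degrees and a nearest endpoint distance of 2 pixels (below a = 5), the example pair is classified as co-linear.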

    function detectconnection(labelmatrix,Property,linelist,a)
    for iLine = 1:nLine
        % Define the search area using the starting and ending pixel
        r1 = linelist(iLine,1); c1 = linelist(iLine,2);
        r2 = linelist(iLine,3); c2 = linelist(iLine,4);
        A1 = initializesearcharea(labelmatrix,r1,c1,a);
        A2 = initializesearcharea(labelmatrix,r2,c2,a);
        % Search for the intersecting or co-linear candidate
        searchconnectlabel(A1,Property,linelist,iLine,r1,c1,a);
        searchconnectlabel(A2,Property,linelist,iLine,r2,c2,a);
    end

Figure 4.5: Statements in detectconnection function

    function A = initializesearcharea(labelmatrix,r,c,a)
    rowArray = r-a : r+a;
    colArray = c-a : c+a;
    A = labelmatrix(rowArray,colArray);

Figure 4.6: Statements in initializesearcharea subfunction

    function searchconnectlabel(A,Property,linelist,iLine,r,c,a)
    % Detect the existence of a candidate within the search area
    if any(A(:) ~= 0 & A(:) ~= iLine)
        candidate = nonzeros(A);
        candidate = candidate(find(candidate ~= iLine,1));  % first candidate label
        diff = abs(Property(iLine).Orientation - Property(candidate).Orientation);
        % Intersection is detected
        if diff > 10
            relation2 = [iLine,candidate];
        else
            r1 = linelist(candidate,1); c1 = linelist(candidate,2);
            r2 = linelist(candidate,3); c2 = linelist(candidate,4);
            d1 = sqrt((c-c1)^2 + (r-r1)^2);
            d2 = sqrt((c-c2)^2 + (r-r2)^2);
            isProximate1 = d1 <= sqrt(a^2+1);
            isProximate2 = d2 <= sqrt(a^2+1);
            if xor(isProximate1,isProximate2)
                relation3 = [iLine,candidate];
            end
        end
    end

Figure 4.7: Statements in searchconnectlabel subfunction

The intersection output is written into a text file named relation2.dat. The file stores a matrix of size n x 3, where n denotes the number of relations of type intersection; the first column refers to l_q, the second column refers to the candidate intersecting line l_c, and the third column represents the intersection relationship existing between l_q and l_c. Every (q, c) pair of elements in the matrix denotes that an intersection relation exists between line l_q and line l_c.

The co-linearity output, in turn, is written into a text file named relation3.dat. The file stores a matrix of size n x 3, where n denotes the number of relations of type co-linearity; the first column refers to l_q, the second column refers to the candidate co-linear line l_c, and the third column represents the co-linearity relationship existing between l_q and l_c. Every (q, c) pair of elements in the matrix denotes that a co-linearity relation exists between line l_q and line l_c. When this structural information is represented by the relational graph, every pair of nodes v_q and v_c is connected with a labelled arc to represent the corresponding relation.

4.5 Relational Graph Module

The structural descriptions produced by the previous module are then transformed into the relational graph representation. The relational graph module therefore involves both the construction of the relational graph and its graphical representation; the functions buildrelationalgraph, buildadjacencymatrix and plotrelationalgraph are written for this purpose.

The buildrelationalgraph function involves matrix concatenation, i.e. the process of joining small matrices to make bigger ones. First, this function reads the output files produced by the structural description derivation module, i.e. relation1.dat, relation2.dat and relation3.dat. These files store matrices of size n x 3, where the first column refers to l_q, the second column represents the candidate line l_c, and the third column represents the relationship t(q, c) existing between l_q and l_c. These three files are then combined into one matrix by concatenating their rows. The output is an nNode-by-3 matrix relationalgraph, where nNode denotes the total number of relational graph nodes; the first column element is l_q, the second column element is l_c, and the third column represents the relationship t(q, c) existing between l_q and l_c. Here t = 1 for the relation "to the left of", t = 2 for "to the right of", t = 3 for "to the top of", t = 4 for "to the bottom of", t = 5 for intersection, and t = 6 for co-linearity.

Every row of (q, c, t) elements in relationalgraph denotes that a particular relation t has been detected between l_q and l_c. When this structural information is represented graphically by a relational graph, l_q is represented as node v_q and l_c is

represented as node v_c, and every pair of nodes v_q and v_c is connected with an arc representing the corresponding relation t between them, if any exists.

In this study, the graphical representation of the relational graph is based on its adjacency matrix, where the adjacency matrix of a relational graph is an nNode-by-nNode matrix whose (i, j)th and (j, i)th entries are non-zero if node v_i is connected to node v_j, and 0 otherwise. Thus, buildadjacencymatrix is written to generate the adjacency matrix representation of a relational graph structure. The plotrelationalgraph function then plots the relational graph from its adjacency matrix representation. The locations of the nodes in the graph are defined in an nNode-by-2 matrix coord, where nNode is the number of nodes and each coordinate pair defines one node. The structures of buildrelationalgraph, buildadjacencymatrix and plotrelationalgraph are shown in Figure 4.8, Figure 4.9 and Figure 4.10, respectively.

    function relationalgraph = buildrelationalgraph()
    % Read from file relation1.dat
    matrix1 = load('relation1.dat');
    % Read from file relation2.dat
    matrix2 = load('relation2.dat');
    % Read from file relation3.dat
    matrix3 = load('relation3.dat');
    % Matrix concatenation
    relationalgraph = [matrix1; matrix2; matrix3];

Figure 4.8: Statements in buildrelationalgraph function

    function adjmatrix = buildadjacencymatrix(relationalgraph)
    % Initialize the adjacency matrix
    nNode = max(max(relationalgraph(:,1:2)));
    adjmatrix = zeros(nNode,nNode);
    % Generate the adjacency matrix
    nRow = size(relationalgraph,1);
    for iRow = 1:nRow
        u = relationalgraph(iRow,1);
        v = relationalgraph(iRow,2);
        t = relationalgraph(iRow,3);
        adjmatrix(u,v) = t;
    end

Figure 4.9: Statements in buildadjacencymatrix function

    function plotrelationalgraph(linelist,adjmatrix)
    % The corresponding xy coordinate for each node
    coord = linelist(:,1:2);
    % Plot the graph from its adjacency matrix and xy coordinates
    gplot(adjmatrix,coord)
    % Add the node number to the plot
    nNode = size(adjmatrix,1);
    for iNode = 1:nNode
        text(coord(iNode,1),coord(iNode,2),int2str(iNode));
    end

Figure 4.10: Statements in plotrelationalgraph function

4.6 Association Graph Module

As discussed in Section 3.8, the association graph serves as an auxiliary structure for matching two relational structures. Given two relational graphs to be matched, G_1 = (V_1, P, T) and G_2 = (V_2, P, T), graph matching can be accomplished by forming an association graph from G_1 and G_2 and then searching for cliques in the resulting association graph.

An association graph is constructed as follows. For each v_1 in V_1 and v_2 in V_2, construct an association graph node v_12 = (v_1 : v_2) if v_1 and v_2 have similar properties. Thus, the nodes of the association graph denote assignments, or pairs of nodes, one from each of V_1 and V_2, which have similar properties. Then connect node (v_1 : v_2) and node (v_1' : v_2') of the association graph if they represent compatible assignments according to the relations T. The algorithm to construct the association graph nodes is coded as the buildassociationnode function (refer to Figure 4.11) and summarized as follows:

(1) Consider two relational graphs G_1 and G_2 to be matched, with nNode_1 and nNode_2 denoting the number of nodes in G_1 and G_2, respectively. The number of association nodes to be considered is thus nNode_1 x nNode_2.

(2) The matching is carried out as an assignment of a node v_1 in G_1 to a potential matching candidate v_2 in G_2. Establish the list of potential matching candidates for every node in G_1. Each matching pair (v_1 : v_2) composes an association node v_12 in the association graph.

(3) Calculate the property similarity B_12 between v_1 and v_2 within the association node (v_1 : v_2) as follows:

    B_12 = (1/3) [ min(h_1, h_2)/max(h_1, h_2) + min(theta_1, theta_2)/max(theta_1, theta_2) + min(nt_1, nt_2)/max(nt_1, nt_2) ]    (4.11)

(4) Any association node v_12 = (v_1 : v_2) whose property similarity B_12 falls below a threshold is eliminated.
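Eq. (4.11) averages three min/max ratios, one each for line length h, orientation theta, and relation count nt. It can be checked with a short Python sketch (an illustration only, with the invented helper `node_similarity`; it assumes all quantities are positive, since min/max ratios are not meaningful for signed orientations):

```python
def node_similarity(h1, h2, t1, t2, n1, n2):
    """Eq. (4.11): mean of min/max ratios of length, orientation,
    and relation count for a candidate node pair."""
    ratio = lambda x, y: min(x, y) / max(x, y)
    return (ratio(h1, h2) + ratio(t1, t2) + ratio(n1, n2)) / 3

# Lengths 40 vs 50, identical orientation, 4 vs 5 incident relations.
b = node_similarity(40, 50, 30, 30, 4, 5)
```

Here b = (0.8 + 1.0 + 0.8) / 3, roughly 0.87, so this assignment would survive a threshold of, say, 0.5; identical properties give the maximum value of 1.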

    function nAsocNode = buildassociationnode(Property1,Property2)
    count = 0;
    for i = 1:nNode1
        for j = 1:nNode2
            hi = Property1(i).Area;          % line length, in pixels
            hj = Property2(j).Area;
            oi = Property1(i).Orientation;
            oj = Property2(j).Orientation;
            nti = nnz(relationalGraphAdjacencyMatrix1(i,:));
            ntj = nnz(relationalGraphAdjacencyMatrix2(j,:));
            B = (min(hi,hj)/max(hi,hj) + min(oi,oj)/max(oi,oj) + ...
                 min(nti,ntj)/max(nti,ntj))/3;
            if B >= threshold
                count = count + 1;
                asocNode = [i,j,B];
            end
        end
    end
    nAsocNode = count;

Figure 4.11: Statements in buildassociationnode function

The algorithm to connect arcs between association nodes is coded in the buildassociationarc function (refer to Figure 4.12) and the propagatearc function (refer to Figure 4.13), and is summarized as follows:

(1) Consider association nodes v_ia and v_jb, which consist of the matching of v_i to v_a (v_i : v_a) and the matching of v_j to v_b (v_j : v_b), respectively.

(2) If the relation between v_i and v_j in G_1 equals the relation between v_a and v_b in G_2, then association node v_ia is compatible with association node v_jb; hence connect an arc between node v_ia and node v_jb.

(3) Apply the propagation rule: if v_ia is compatible with v_xy, and v_xy is compatible with v_jb, then association node v_ia is considered compatible with association node v_jb; hence connect an arc between node v_ia and node v_jb.

    function buildassociationarc(nAsocNode)
    for u = 1:nAsocNode
        for v = 1:nAsocNode
            i = asocNode(u,1); j = asocNode(v,1);
            a = asocNode(u,2); b = asocNode(v,2);
            if i ~= j && a ~= b
                t1 = relationalGraphAdjacencyMatrix1(i,j);
                t2 = relationalGraphAdjacencyMatrix2(a,b);
                isCompatible = t1 ~= 0 && t2 ~= 0 && t1 == t2;
                if isCompatible
                    asocArc = [u,v];
                else
                    incompatible = [u,v];
                end
            elseif i ~= j && a == b
                t1 = relationalGraphAdjacencyMatrix1(i,j);
                if t1 == 6
                    asocArc = [u,v];
                else
                    incompatible = [u,v];
                end
            elseif i == j && a ~= b
                t2 = relationalGraphAdjacencyMatrix2(a,b);
                if t2 == 6
                    asocArc = [u,v];
                else
                    incompatible = [u,v];
                end
            end
        end
    end

Figure 4.12: Statements in buildassociationarc function

    function propagatearc(nAsocNode,asocArc,incompatible)
    for u = 1:nAsocNode
        for v = 1:nAsocNode
            isCompatible = ismember([u,v],asocArc,'rows');
            isIncompatible = ismember([u,v],incompatible,'rows');
            if isCompatible == 0 && isIncompatible == 0
                x = asocArc(u,2);
                canPropagate = ismember([x,v],asocArc,'rows');
                if canPropagate
                    asocArc = [asocArc; u,v];
                end
            end
        end
    end

Figure 4.13: Statements in propagatearc function

4.7 Clique-Finding Module

A match between two relational graphs is simply a set of assignments that are all mutually compatible in their relations. The best match can then be taken to be the largest set of assignments (node correspondences) that are all mutually compatible under the relations. This notion of a solution can be modelled as a graph property: in the association graph it is a set of nodes forming a clique, and more specifically a maximal clique. A maximal clique is a clique to which no new node can be added without destroying the clique property. In this formulation of matching, larger cliques indicate better matches, since they account for more nodes. Thus, the best matches are determined by the largest maximal cliques in the association graph.

Therefore, at the end of the association graph building process there will be a number of maximal cliques, each constituting a different combination of mutually connected nodes with compatible relations among each other. The largest maximal clique, with the largest set of mutually connected nodes in the association

graph, will provide the largest number of feature matching pairs with compatible relations. To compute the largest clique in the resulting association graph, the Mathematica built-in package Combinatorica was used. Combinatorica is one of the standard Mathematica add-on packages, written in the Mathematica language, providing functions in combinatorics and graph theory (computational discrete mathematics). The command MaximumClique[g] finds the largest clique in a given graph g.
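The clique search that MaximumClique performs can be sketched with a basic Bron-Kerbosch enumeration of maximal cliques (a self-contained Python illustration, not the Combinatorica implementation): taking the largest enumerated clique corresponds to the largest maximal clique used for matching.

```python
def bron_kerbosch(adj, r=frozenset(), p=None, x=frozenset()):
    """Enumerate maximal cliques of an undirected graph given as an
    adjacency dict {node: set(neighbours)} (basic Bron-Kerbosch)."""
    if p is None:
        p = frozenset(adj)
    if not p and not x:
        yield r          # r can no longer be extended: maximal clique
        return
    for v in list(p):
        yield from bron_kerbosch(adj, r | {v}, p & adj[v], x & adj[v])
        p = p - {v}
        x = x | {v}

# Triangle 1-2-3 plus a pendant node 4: maximal cliques {1,2,3} and {3,4}.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
best = max(bron_kerbosch(adj), key=len)
```

On an association graph, each node of `best` is one left-to-right assignment, so the largest maximal clique directly yields the set of mutually compatible matching pairs.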

CHAPTER 5

RESULT AND DISCUSSION

5.1 Introduction

The structural-based image matching technique proposed in this thesis is applied to match a number of stereo images. This chapter discusses the experiments carried out to test the proposed algorithm. Descriptions of the data used in the experiments are given in Section 5.2. Each experimental result is reported and discussed in Section 5.4 to Section 5.17. A final discussion of the results is given in the closing section of the chapter.

The evaluation concentrates mainly on the applicability and limitations of the inter-line relation derivation algorithm. It also focuses on the relevance and limitations of incorporating inter-line relationships into image matching to cope with occlusion, noise, missing features, poor feature extraction results and similar problems occurring in the feature-extracted images. The applicability of incorporating structural information into image matching is assessed by investigating the resulting relational graphs and the left-to-right matching pairs found by the largest maximal clique in the association graph.

5.2 Image Data

Experiments are carried out on 14 pairs of stereo images, each pair consisting of a left image and a right image. The data include 5 pairs of synthetic images,

6 pairs of images of objects on a table, and 2 pairs of images of indoor room scenes. Most of the image data are downloaded from an image database provided by the Vision and Autonomous Systems Centre (VASC) of Carnegie Mellon University; every such set consists of a pair of left and right images in Portable Network Graphics (PNG) format. The data for the third experiment are downloaded from an image database provided by the Institut National de Recherche en Informatique et en Automatique (INRIA), in Graphics Interchange Format (GIF). The data used in the sixth and seventh experiments are in Portable Graymap (PGM) format, provided by the Visual Geometry Group (VGG) of the Department of Engineering Science, University of Oxford. Baseline information and calibration data are not available for these data. Brief descriptions of the data used in the experiments are summarized in Table 5.1.

Table 5.1: The image data used in the experiments

Experiment  Size       Type  Descriptions                                      Source
            250 x 250  PNG   Synthetic stereo images of a house                VASC
            250 x 250  PNG   Synthetic stereo images of a house                VASC
            288 x 384  GIF   Synthetic stereo images of a block                INRIA
            x 384      GIF   Synthetic stereo images of a note                 INRIA
            x 206      PNG   Synthetic stereo images of some rectangles        VASC
            x 250      PNG   Stereo images of a book                           VASC
            x 300      PGM   Stereo images of a piece of gear                  VGG
            x 496      PGM   Stereo images of a piece of gear                  VGG
            x 212      PNG   Stereo images of a Rubik cube and a wooden block  VASC
            x 512      PNG   Stereo images of an arch of blocks                VASC
            x 256      PNG   Stereo images of a telephone and a cup            VASC

            x 512      PNG   Stereo images of a tennis ball, an ice chest and two cylinders  VASC
            x 250      PNG   Stereo images of an indoor room                   VASC
            x 250      PNG   Stereo images of an indoor room                   VASC

5.3 Initial Experiment on the firstN Parameter

The complexity of the association graph building process is proportional to the number of association nodes. As every association node consists of a left-to-right matching pair, the position constraint and feature property similarity (indicated by the similarity measure) are taken into account to eliminate as many incorrect left-to-right matching candidates as possible during the association node building process, in order to reduce the complexity (discussed in Section 3.9.1). The number of association nodes should be kept as low as possible to increase the efficiency of the graph building process. Therefore, the suitable number of association nodes (firstN) for a given left line needs to be determined.

An initial test has been carried out on a pair of left and right images (Figure 5.1 (a) and (b)) to determine the most appropriate setting for the firstN parameter. Figure 5.1 shows the correct correspondence between the left and right images, indicated by 23 sets of left-to-right matching pairs. Each matching pair is labelled with the corresponding number, where l_1 in the left image corresponds to r_1 in the right image, l_2 in the left image corresponds to r_2 in the right image, and so forth.

(a) The left image (b) The right image

Figure 5.1: The correct correspondence between the left and right image, indicated by 23 sets of left-to-right matching pairs labelled with corresponding numbers

In the node building process, the similarity measure B_lr is computed for each association node, and nodes with a B_lr value below the threshold value of 0.5 are eliminated. To determine the most appropriate number of association nodes (firstN) for each left line, its potential right matching candidates in the association nodes are sorted by similarity measure value. Table 5.2 tabulates the ranking of the set of left-to-right matching pairs based on the similarity value. The matching candidate (represented by an association node) ranked first by similarity measure is the correct one in most cases, accounting for 83% of the cases. The node ranked third is the correct one in 13% of the cases. The experiment shows that the first three nodes account for about 96% of the best available matchings. According to this analysis, the parameter firstN is therefore set to 3, so that three association nodes are kept for a given left line. In any case, the best three association nodes are always kept in the graph.
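The candidate pruning described above can be sketched in Python (a hypothetical illustration with the invented helper `keep_first_n`, not the thesis's code): candidates below the 0.5 similarity threshold are discarded, and only the firstN = 3 best-scoring right-line candidates are kept for each left line.

```python
def keep_first_n(candidates, threshold=0.5, first_n=3):
    """Drop candidates with similarity B_lr below threshold, then keep
    only the first_n best-scoring (right line id, B_lr) candidates."""
    kept = [c for c in candidates if c[1] >= threshold]
    kept.sort(key=lambda c: c[1], reverse=True)
    return kept[:first_n]

# (right line id, similarity B_lr) candidates for one left line.
cands = [(7, 0.92), (3, 0.40), (9, 0.81), (2, 0.77), (5, 0.60)]
top = keep_first_n(cands)
```

In this example candidate 3 is eliminated by the threshold and candidate 5 by the firstN cut, leaving the three strongest assignments to enter the association graph.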

Table 5.2: The ranking for left-to-right matching pairs based on B_lr

Association node      Ranking    Association node       Ranking
v_1 (l_1 : r_1)       1st        v_13 (l_13 : r_13)     1st
v_2 (l_2 : r_2)       1st        v_14 (l_14 : r_14)     1st
v_3 (l_3 : r_3)       1st        v_15 (l_15 : r_15)     1st
v_4 (l_4 : r_4)       1st        v_16 (l_16 : r_16)     1st
v_5 (l_5 : r_5)       1st        v_17 (l_17 : r_17)     1st
v_6 (l_6 : r_6)       4th        v_18 (l_18 : r_18)     1st
v_7 (l_7 : r_7)       3rd        v_19 (l_19 : r_19)     1st
v_8 (l_8 : r_8)       1st        v_20 (l_20 : r_20)     3rd
v_9 (l_9 : r_9)       1st        v_21 (l_21 : r_21)     1st
v_10 (l_10 : r_10)    1st        v_22 (l_22 : r_22)     3rd
v_11 (l_11 : r_11)    1st        v_23 (l_23 : r_23)     1st
v_12 (l_12 : r_12)    1st

5.4 Experiment 1: Stereo Images of a House

The data are a pair of synthetic grey-scale images depicting a scene of a house, with image dimensions 250 x 250 (Figure 5.2 (a) and (b)). In the edge detection process, 6 edges are detected in the left image and 6 edges in the right image (see Figure 5.2 (c) and (d)). In the edge tracing process, no edges are eliminated, and therefore the edge tracing images (Figure 5.2 (e) and (f)) appear the same as the edge detection images (Figure 5.2 (c) and (d)). After the line segment extraction step, 23 line segments are derived from the left image and 23 from the right image (see Figure 5.2 (g) and (h)). The structural information interpreted from the left and right line segment images is represented by the left and right relational graphs, respectively (see Figure 5.2 (i) and (j)).

The association graph is constructed from both the left and right relational graphs. The resulting association graph has 68 nodes and 257 arcs, as shown in Figure 5.3. Then the maximal clique search is performed. The largest maximal clique has size 19, comprising 19 mutually connected nodes. With 23 lines in the left image and 23 lines in the right image, the matching algorithm found 19 correct left-to-right matching pairs, with no falsely matched (mismatched) lines. There are 4 unmatched lines; 83% of the left lines are matched correctly. Figure 5.2 (k) and (l) show the left-to-right matching lines found by the largest maximal clique. The unmatched lines are shown in Figure 5.2 (m) and (n).

Ambiguity in image matching might be expected here, as it can be observed from the left and right images that there are two similar structures formed by the house windows. For instance, line 20 of the first window in the left image might be matched falsely to line 15 of the second window in the right image due to the similarity between the two window structures, line 21 in the left image might be matched falsely to line 16 in the right image, and so forth. However, the matching result shows that no mismatch occurred at all. This observation shows that structural information is capable of reducing ambiguity in image matching.

(a) Left image (b) Right image
(c) Edge detection on left image (d) Edge detection on right image

Figure 5.2: Some results of the first experiment: (a) & (b) the left and right image, respectively, (c) & (d) edge detection for the left and right image, (e) & (f) edge tracing images, (g) & (h) the plotting of extracted line segments, (i) & (j) the relational graphs, (k) & (l) the corresponding lines between the left and right image, and (m) & (n) the unmatched line segments

(e) Edge tracing on left image (f) Edge tracing on right image
(g) Line segment plotting for left image (h) Line segment plotting for right image

Figure 5.2: /continued

(i) The left relational graph (j) The right relational graph
(k) The matched lines for left image (l) The matched lines for right image
(m) The unmatched lines for left image (n) The unmatched lines for right image

Figure 5.2: /continued

Figure 5.3: Association graph resulting from the first experiment

(a) The matched line segments for left image without propagation of relation
(b) The matched line segments for right image without propagation of relation

Figure 5.4: The matched lines without propagation from the first experiment: (a) & (b) the corresponding lines between the left and right image

Figure 5.5: The association graph without propagation of the first experiment

5.5 Experiment 2: Stereo Images of a House

In this experiment, the test data are also a pair of synthetic grey-scale images depicting a scene of a house, with image dimensions 250 x 250 (Figure 5.6 (a) and (b)). In the edge detection process, 7 edges are detected in the left image and 6 edges in the right image (see Figure 5.6 (c) and (d)). In the edge tracing process, no edges are eliminated, and therefore the edge tracing images (Figure 5.6 (e) and (f)) appear visually the same as the edge detection images (Figure 5.6 (c) and (d)). After the line segment extraction step, 24 line segments are derived from the left image and 25 from the right image (see Figure 5.6 (g) and (h)). The structural information interpreted from the left and right line segment images is represented by the left and right relational graphs, respectively (see Figure 5.6 (i) and (j)).

The resulting association graph has 66 nodes and 151 arcs, as shown in Figure 5.7. The largest maximal clique has size 20, comprising 20 mutually connected nodes. The matching algorithm found 20 left-to-right matching pairs, with no mismatched case. There are 4 unmatched lines. The proposed matching algorithm matched 83% of the left lines. Figure 5.6 (k) and (l) show the left-to-right matching lines found by the largest maximal clique. The unmatched lines are shown in Figure 5.6 (m) and (n). The matching percentage is relatively high (refer to Table 5.4). This is because the ordering, intersection and co-linearity relationships used in this study can describe the house in the image comprehensively, which indicates the capability of the three inter-line relationship conditions in describing a simple scene.

(a) Left image (b) Right image

Figure 5.6: Some results of the second experiment: (a) & (b) the left and right image, respectively, (c) & (d) edge detection for the left and right image, (e) & (f) edge tracing images, (g) & (h) the plotting of extracted line segments, (i) & (j) the relational graphs, (k) & (l) the corresponding lines between the left and right image, and (m) & (n) the unmatched line segments

Figure 5.6: /continued (panels (c)-(h))

Figure 5.6: /continued (panels (i)-(n))

Figure 5.7: The association graph resulting from the second experiment

5.6 Experiment 3: Stereo Images on a Block

In this experiment, the test data is a pair of synthetic images of a block, with image dimensions 288 x 384 (Figure 5.8 (a) and (b)). There are 6 edges detected in the left image and 6 in the right image (see Figure 5.8 (c) and (d)). In the edge tracing process, no edges are eliminated, so the edge tracing images (Figure 5.8 (e) and (f)) appear visually identical to the edge detection images (Figure 5.8 (c) and (d)). There are 19 line segments derived from the left image and 18 from the right image (see Figure 5.8 (g) and (h)). The structural information interpreted from the left and right line segment images is represented by the left and right relational graphs respectively (see Figure 5.8 (i) and (j)). The resulting association graph has 55 nodes and 75 arcs, as shown in Figure 5.9. The largest maximal clique has size 16, comprising 16

mutually connected nodes. The matching algorithm found 16 left-to-right matching pairs with no false matches, leaving 3 unmatched lines; the proposed algorithm matched 84% of the left lines. Figure 5.8 (k) and (l) show the left-to-right matching lines found by the largest maximal clique. The matching percentage is relatively high (refer to Table 5.4) because the block in the images can be extracted as line features flawlessly and is hence well suited to the proposed algorithm. In addition, the configuration of the block's lines is relevant to the inter-line relationships examined in this study. This again indicates the capability of the three conditions of inter-line relationship in describing a simple scene.

Figure 5.8: Some results of the third experiment: (a) & (b) the left and right image (Copyright INRIA), respectively, (c) & (d) edge detection for the left and right image, (e) & (f) edge tracing images, (g) & (h) the plotting of extracted line segments, (i) & (j) the relational graphs, and (k) & (l) the corresponding lines between the left and right image
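The matching percentages quoted throughout this chapter are simply the matched left lines divided by the total left lines. A quick check against the figures reported for the first few experiments here:

```python
def matching_percentage(matched_pairs, left_lines):
    # percentage of left-image line segments that received a match
    return 100.0 * matched_pairs / left_lines

# (matched pairs, left line segments) as reported for Experiments 2-4
print(round(matching_percentage(20, 24)))  # 83 (house)
print(round(matching_percentage(16, 19)))  # 84 (block)
print(round(matching_percentage(4, 13)))   # 31 (note)
```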

Figure 5.8: /continued (panels (g)-(l))

Figure 5.9: The association graph resulting from the third experiment

5.7 Experiment 4: Stereo Images on a Note

In this experiment, the test data is a pair of synthetic images of a US dollar note, with image dimensions 288 x 384 (Figure 5.10 (a) and (b)). There are 2 edges detected in the left image and 1 edge in the right image (see Figure 5.10 (c) and (d)). In the edge tracing process, edges shorter than 120 pixels are discarded (see Figure 5.10 (e) and (f)). There are 13 line segments derived from the left image and 6 from the right image (see Figure 5.10 (g) and (h)). The structural information interpreted from the left and right line segment images is represented by the left and right relational graphs respectively (see Figure 5.10 (i) and (j)). The resulting association graph has 29 nodes and 71 arcs, as shown in Figure 5.11. The largest maximal clique has size 4, comprising 4 mutually connected nodes. The matching algorithm found 4 left-to-right matching

pairs, with one false match. There are 2 unmatched lines. The proposed algorithm matched 31% of the left lines. Figure 5.10 (k) and (l) show the left-to-right matching lines found by the largest maximal clique. The configuration of the extracted lines of the note is relevant to the inter-line relationships examined in this study. However, one ambiguous case occurred, as can be observed from the left and right images, owing to the structural similarity of the falsely matched lines.

Figure 5.10: Some results of the fourth experiment: (a) & (b) the left and right image (Copyright INRIA), respectively, (c) & (d) edge detection for the left and right image, (e) & (f) edge tracing images, (g) & (h) the plotting of extracted line segments, (i) & (j) the relational graphs, and (k) & (l) the corresponding lines between the left and right image

Figure 5.10: /continued (panels (e)-(j))

Figure 5.10: /continued (panels (k) and (l))

Figure 5.11: The association graph resulting from the fourth experiment

5.8 Experiment 5: Stereo Images on Some Rectangles

The image data is a pair of synthetic grey-scale images depicting a scene of some rectangles, with image dimensions 256 x 206 (Figure 5.12 (a) and (b)). There are 32 edges detected in the left image and 33 in the right image, as shown in Figure 5.12 (c) and (d). In the edge tracing process, edges shorter than 15 pixels are discarded (see Figure 5.12 (e) and (f)). There are 76

line segments derived from the left image and 80 from the right image (see Figure 5.12 (g) and (h)). The structural information interpreted from the left and right line segment images is represented by the left and right relational graphs respectively (see Figure 5.12 (i) and (j)). The resulting association graph has 226 nodes and 369 arcs, as shown in Figure 5.13. The largest maximal clique has size 47, comprising 47 mutually connected nodes. The matching algorithm found 47 left-to-right matching pairs, with 41 correct and 6 false matches. There are 29 unmatched lines. The algorithm matched about 62% of the left lines. Figure 5.12 (k) and (l) show the left-to-right matching lines found by the largest maximal clique. The configuration of the extracted lines of the rectangles is relevant to the inter-line relationships examined in this study. However, many ambiguous cases occurred, owing to the repetitive pattern of the rectangles in the image.

Figure 5.12: Some results of the fifth experiment: (a) & (b) the left and right image, respectively, (c) & (d) edge detection for the left and right image, (e) & (f) edge tracing images, (g) & (h) the plotting of extracted line segments, (i) & (j) the relational graphs, and (k) & (l) the corresponding lines between the left and right image

Figure 5.12: /continued (panels (c)-(f))

Figure 5.12: /continued (panels (g)-(l))

Figure 5.13: The association graph resulting from the fifth experiment

5.9 Experiment 6: Stereo Images on a Book

The image data in this experiment is a pair of stereo images of a book, with image dimensions 250 x 250 (Figure 5.14 (a) and (b)). There are 8 edges detected in the left image and 8 in the right image, as shown in Figure 5.14 (c) and (d). In the edge tracing process, edges shorter than 100 pixels are discarded (see Figure 5.14 (e) and (f)). There are 27 line segments derived from the left image and 32 from the right image (see Figure 5.14 (g) and (h)). The structural information interpreted from the left and right line segment images is represented by the left and right relational graphs respectively (see Figure 5.14 (i) and (j)). The resulting association graph has 80 nodes and 209 arcs, as shown in Figure 5.15. The largest maximal clique has size 8. The matching algorithm found 8 left-to-right matching pairs, with 4 correct and 4 mismatched

cases. There are 19 unmatched lines. The algorithm matched about 30% of the left lines. Figure 5.14 (k) and (l) show the left-to-right matching lines found by the largest maximal clique.

Figure 5.14: Some results of the sixth experiment: (a) & (b) the left and right image, respectively, (c) & (d) edge detection for the left and right image, (e) & (f) edge tracing images, (g) & (h) the plotting of extracted line segments, (i) & (j) the relational graphs, and (k) & (l) the corresponding lines between the left and right image

Figure 5.14: /continued (panels (e)-(h))

Figure 5.14: /continued (panels (i)-(l))

Figure 5.15: The association graph resulting from the sixth experiment

5.10 Experiment 7: Stereo Images on a Gear

The image data in this experiment is a pair of stereo images of a gear, with image dimensions 300 x 300 (Figure 5.16 (a) and (b)). There are 5 edges detected in the left image and 20 in the right image, as shown in Figure 5.16 (c) and (d). In the edge tracing process, edges shorter than 40 pixels are discarded (see Figure 5.16 (e) and (f)). There are 34 line segments derived from the left image and 90 from the right image (see Figure 5.16 (g) and (h)). The structural information interpreted from the left and right line segment images is represented by the left and right relational graphs respectively (see Figure 5.16 (i) and (j)). The resulting association graph has 101 nodes and 289 arcs, as shown in Figure 5.17. The largest maximal clique has size 9. The matching algorithm found 9 left-to-right matching pairs, with 8 correct matching pairs and one

mismatched case. There are 25 unmatched lines. The algorithm matched about 26% of the left lines. Figure 5.16 (k) and (l) show the corresponding lines between the left and right image found by the largest maximal clique.

Figure 5.16: Some results of the seventh experiment: (a) & (b) the left and right image, respectively, (c) & (d) edge detection for the left and right image, (e) & (f) edge tracing images, (g) & (h) the plotting of extracted line segments, (i) & (j) the relational graphs, and (k) & (l) the corresponding line segments between the left and right image

Figure 5.16: /continued (panels (e)-(h))

Figure 5.16: /continued (panels (i)-(l))

Figure 5.17: The association graph resulting from the seventh experiment

5.11 Experiment 8: Stereo Images on a Gear

The image data in this experiment is also a pair of stereo images of a gear, this time with image dimensions 347 x 496 (Figure 5.18 (a) and (b)). There are 20 edges detected in the left image and 26 in the right image, as shown in Figure 5.18 (c) and (d). In the edge tracing process, edges shorter than 80 pixels are discarded (see Figure 5.18 (e) and (f)). There are 105 line segments derived from the left image and 124 from the right image (see Figure 5.18 (g) and (h)). The structural information interpreted from the left and right line segment images is represented by the left and right relational graphs respectively (see Figure 5.18 (i) and (j)). The resulting association graph has 313 nodes and 151 arcs, as shown in Figure 5.19. The largest maximal clique has size 6. The matching algorithm found 6 left-to-right matching pairs, but all of them are mismatches.

There are 99 unmatched lines. The algorithm matched about 6% of the left lines. Figure 5.18 (k) and (l) show the left-to-right matching lines found by the largest maximal clique. The algorithm fails to perform image matching successfully in this experiment. The scene is not complicated, and the object can be extracted as line features successfully; moreover, the extracted lines are relevant to the inter-line relationships used in this study. Nevertheless, the left-to-right matching pairs are completely incorrect: the matching algorithm fails to find any correct pair (see Figure 5.18 (k) and (l)), and the matching percentage is relatively low. This is because the left and right images are captured from two extremely different positions.

Figure 5.18: Some results of the eighth experiment: (a) & (b) the left and right image, respectively, (c) & (d) edge detection for the left and right image, (e) & (f) edge tracing images, (g) & (h) the plotting of extracted line segments, (i) & (j) the relational graphs, and (k) & (l) the corresponding lines between the left and right image

Figure 5.18: /continued (panels (e)-(j))

Figure 5.18: /continued (panels (k) and (l))

Figure 5.19: The association graph resulting from the eighth experiment

5.12 Experiment 9: Stereo Images on a Rubik's Cube and a Block

The test data is a pair of grey-scale real images capturing a partial view of a Rubik's cube that occludes a wooden block (Figure 5.20 (a) and (b)). The size of each image is 134 x 212 pixels.

There are 27 edges detected in the left image and 36 in the right image, as shown in Figure 5.20 (c) and (d). In the edge tracing process, edges shorter than 15 pixels are discarded (see Figure 5.20 (e) and (f)). There are 97 line segments derived from the left image and 88 from the right image (see Figure 5.20 (g) and (h)). The structural information interpreted from the left and right line segment images is represented by the left and right relational graphs respectively (see Figure 5.20 (i) and (j)). The resulting association graph has 286 nodes and 431 arcs, as shown in Figure 5.21. The largest maximal clique has size 79. The matching algorithm found 79 left-to-right matching pairs, with 78 correct and one mismatched. There are 18 unmatched lines. The algorithm matched about 81% of the left lines. Figure 5.20 (k) and (l) show the corresponding lines between the left and right image, as found by the largest maximal clique.

Figure 5.20: Some results of the ninth experiment: (a) & (b) the left and right image, respectively, (c) & (d) edge detection for the left and right image, (e) & (f) edge tracing images, (g) & (h) the plotting of extracted line segments, (i) & (j) the relational graphs, and (k) & (l) the corresponding lines between the left and right image

Figure 5.20: /continued (panels (c)-(h))

Figure 5.20: /continued (panels (i)-(l))

Figure 5.21: The association graph resulting from the ninth experiment

5.13 Experiment 10: Stereo Images on an Arch of Blocks

The data is a stereo pair of an arch of blocks (Figure 5.22 (a) and (b)). The size of each image is 512 x 512 pixels. There are 31 edges detected in the left image and 34 in the right image, as shown in Figure 5.22 (c) and (d). In the edge tracing process, edges shorter than 20 pixels are discarded (see Figure 5.22 (e) and (f)). There are 73 line segments derived from the left image and 74 from the right image (see Figure 5.22 (g) and (h)). The structural information interpreted from the left and right line segment images is represented by the left and right relational graphs respectively (see Figure 5.22 (i) and (j)). The resulting association graph has 217 nodes and 398 arcs, as shown in Figure 5.23. The largest maximal clique has size 23. The matching algorithm found 23 left-to-right matching pairs, with 22 correct and one mismatched. There are 50 unmatched lines. The algorithm matched about 32% of the left lines. Figure 5.22 (k) and (l) show the corresponding lines between the left and right image, as found by the largest maximal clique.

Figure 5.22: Some results of the tenth experiment: (a) & (b) the left and right image, respectively, (c) & (d) edge detection for the left and right image, (e) & (f) edge tracing images, (g) & (h) the plotting of extracted line segments, (i) & (j) the relational graphs, and (k) & (l) the corresponding lines between the left and right image

Figure 5.22: /continued (panels (e)-(j))

Figure 5.22: /continued (panels (k) and (l))

Figure 5.23: The association graph resulting from the tenth experiment

5.14 Experiment 11: Stereo Images on a Telephone and a Cup

The data is a pair of stereo images depicting a scene of a telephone and a cup on a table, with image dimensions of 256 x 256 (Figure 5.24 (a) and (b)). There are 75 edges detected in the left image and 80 in the right image, as shown in Figure 5.24 (c) and (d). In the edge tracing process, edges shorter than 15 pixels are discarded (see Figure 5.24 (e) and (f)). There are 222 line segments derived from the left image and 227 from the right image (see Figure 5.24 (g) and (h)). The structural information interpreted from the left and right line segment images is represented by the left and right relational graphs respectively (see Figure 5.24 (i) and (j)). The resulting association graph has 664 nodes and 83 arcs, as shown in Figure 5.25. The largest maximal clique has size 4. The matching algorithm found only 4 left-to-right matching pairs. There are 218 unmatched lines. The algorithm matched only about 2% of the left lines. Figure 5.24 (k) and (l) show the corresponding lines between the left and right image, as found by the largest maximal clique. Although the feature-extracted images are less structured, the matching algorithm is still capable of finding correct left-to-right matching pairs in this experiment, as can be observed from Figure 5.24 (k) and (l); no false matching pair is found in the matching result. However, the matching percentage is relatively low (refer to Table 5.4). The utilization of inter-line relationships in image matching was limited by (1) the inability of the inter-line relationships to describe the flowered pattern on the cup comprehensively, and (2) the characteristics of the input data.

Figure 5.24: Some results of the eleventh experiment: (a) & (b) the left and right image, respectively, (c) & (d) edge detection for the left and right image, (e) & (f) edge tracing images, (g) & (h) the plotting of extracted line segments, (i) & (j) the relational graphs, and (k) & (l) the corresponding lines between the left and right image

Figure 5.24: /continued (panels (e)-(h))

Figure 5.24: /continued (panels (i)-(l))

Figure 5.25: The association graph resulting from the eleventh experiment

5.15 Experiment 12: Stereo Images on a Tennis Ball, an Ice Chest and Two Cylinders

The data is a pair of stereo images depicting a scene of a tennis ball, an ice chest and two cylinders on a table, with image dimensions of 512 x 512 (Figure 5.26 (a) and (b)). There are 35 edges detected in the left image and 34 in the right image, as shown in Figure 5.26 (c) and (d). In the edge tracing process, edges shorter than 20 pixels are discarded (see Figure 5.26 (e) and (f)). There are 91 line segments derived from the left image and 91 from the right image (see Figure 5.26 (g) and (h)). The structural information interpreted from the left and right line segment images is represented by the left and right relational graphs respectively (see Figure 5.26 (i) and (j)). The resulting association graph has 273 nodes and 392 arcs, as shown in Figure 5.27. The largest maximal clique has size 8. The matching algorithm found only 8 left-to-right matching pairs, with 7 correct

matching pairs and one mismatched. There are 83 unmatched lines. The algorithm matched only about 9% of the left lines. Figure 5.26 (k) and (l) show the corresponding lines between the left and right image, as found by the largest maximal clique.

Figure 5.26: Some results of the twelfth experiment: (a) & (b) the left and right image, respectively, (c) & (d) edge detection for the left and right image, (e) & (f) edge tracing images, (g) & (h) the plotting of extracted line segments, (i) & (j) the relational graphs, and (k) & (l) the corresponding lines between the left and right image

Figure 5.26: /continued (panels (e)-(j))

Figure 5.26: /continued (panels (k) and (l))

Figure 5.27: The association graph resulting from the twelfth experiment

5.16 Experiment 13: Stereo Images of a Room

The data is a pair of grey-scale images capturing an indoor scene of a room (Figure 5.28 (a) and (b)). The size of each image is 250 x 250 pixels. There are 54 edges detected in the left image and 60 in the right image, as shown in Figure 5.28 (c) and (d). In the edge tracing process, edges shorter than 20 pixels are discarded (see Figure 5.28 (e) and (f)). There are 160 line segments derived from the left image and 177 from the right image (see Figure 5.28 (g) and (h)). The structural information interpreted from the left and right line segment images is represented by the left and right relational graphs respectively (see Figure 5.28 (i) and (j)). The resulting association graph has 476 nodes and 207 arcs, as shown in Figure 5.29. The largest maximal clique has size 6. The matching algorithm found 6 left-to-right matching pairs with no mismatches. There are 154 unmatched lines. The algorithm matched about 4% of the left lines. Figure 5.28 (k) and (l) show the correspondences between the left and right image, as found by the largest maximal clique. Although the feature-extracted images are less structured and the relational graphs are of low density, the matching algorithm is still capable of finding correct left-to-right matching pairs in this experiment, as can be observed from Figure 5.28 (k) and (l); no false matching pair is found in the matching result. Nevertheless, the matching percentage is relatively low (refer to Table 5.4). This is because many objects are piled up in the room, and such a scene cannot be depicted comprehensively with merely the ordering, intersection and co-linearity relationships. This indicates the imperfection of the three conditions of inter-line relationship in describing a real scene: the proposed algorithm is only applicable when the objects in the image can be extracted as well-structured line features.

Figure 5.28: Some results of the thirteenth experiment: (a) & (b) the left and right image, respectively, (c) & (d) edge detection for the left and right image, (e) & (f) edge tracing images, (g) & (h) the plotting of extracted line segments, (i) & (j) the relational graphs, and (k) & (l) the corresponding line segments between the left and right image

Figure 5.28: /continued (panels (e)-(h))

Figure 5.28: /continued (panels (i)-(l))

Figure 5.29: The association graph resulting from the thirteenth experiment

5.17 Experiment 14: Stereo Images on a Scene of a Room

The test data is a pair of grey-scale images depicting an indoor scene of a room, with image dimensions 250 x 250 (Figure 5.30 (a) and (b)). There are 54 edges detected in the left image and 61 in the right image, as shown in Figure 5.30 (c) and (d). In the edge tracing process, edges shorter than 20 pixels are discarded (see Figure 5.30 (e) and (f)). There are 173 line segments derived from the left image and 174 from the right image (see Figure 5.30 (g) and (h)). The structural information interpreted from the left and right line segment images is represented by the left and right relational graphs respectively (see Figure 5.30 (i) and (j)). The resulting association graph has 518 nodes and 316 arcs, as shown in Figure 5.31. The largest maximal clique has size 6. The matching algorithm found 6 left-to-right matching pairs, with 6 correct matching pairs and no

mismatched cases. There are 167 unmatched lines. The algorithm matched about 3% of the left lines. Figure 5.30 (k) and (l) show the correspondences between the left and right image, as found by the largest maximal clique. In this experiment, the matching algorithm is capable of finding the left-to-right matching pairs correctly (see Figure 5.30 (k) and (l)). However, the matching percentage is relatively low (refer to Table 5.4), owing to the limitation of the inter-line relationships in describing a real scene. The proposed algorithm is only applicable when the objects in the image can be extracted as well-structured line features; the utilization of inter-line relationships in image matching is constrained by the characteristics of the data.

Figure 5.30: Some results of the fourteenth experiment: (a) & (b) the left and right image, respectively, (c) & (d) edge detection for the left and right image, (e) & (f) edge tracing images, (g) & (h) the plotting of extracted line segments, (i) & (j) the relational graphs, and (k) & (l) the corresponding line segments between the left and right image

Figure 5.30: /continued (panels (c)-(f))

Figure 5.30: /continued (panels (g)-(j))


More information

THE APPLICATION OF DIFFERETIAL BOX-COUNTING METHOD FOR IRIS RECOGNITION AHMAD AZFAR BIN MAHMAMI

THE APPLICATION OF DIFFERETIAL BOX-COUNTING METHOD FOR IRIS RECOGNITION AHMAD AZFAR BIN MAHMAMI i THE APPLICATION OF DIFFERETIAL BOX-COUNTING METHOD FOR IRIS RECOGNITION AHMAD AZFAR BIN MAHMAMI This Report Is Submitted In Partial Fulfillment of the Requirements for the Award Of Bachelor of Electronic

More information

UNIVERSITI TEKNIKAL MALAYSIA MELAKA

UNIVERSITI TEKNIKAL MALAYSIA MELAKA UNIVERSITI TEKNIKAL MALAYSIA MELAKA COMPARISON STUDY OF PRESS PART QUALITY INSPECTION SYSTEM: CHECKING FIXTURE AND FARO ARM LASER SCANNER This report submitted in accordance with requirement of the Universiti

More information

Study of Distributed Coordination Function (DCF) and Enhanced DCF (EDCF) in IEEE MAC Protocols for Multimedia Applications.

Study of Distributed Coordination Function (DCF) and Enhanced DCF (EDCF) in IEEE MAC Protocols for Multimedia Applications. Study of Distributed Coordination Function (DCF) and Enhanced DCF (EDCF) in IEEE 802.11 MAC Protocols for Multimedia Applications Chan Chen Hoong Bachelor of Engineering with Honors (Electronics & Computer

More information

BORANG PENGESAHAN STATUS TESIS ν

BORANG PENGESAHAN STATUS TESIS ν UNIVERSITI TEKNOLOGI MALAYSIA BORANG PENGESAHAN STATUS TESIS ν JUDUL: THE DEVELOPMENT OF METRICA/NPR 3.3 SESI PENGAJIAN: 2004 / 2005 PSZ 19:16(Pind.1/97) Saya MOHD FARID ISMAIL (HURUF BESAR) mengaku membenarkan

More information

UNIVERSITI TEKNOLOGI MALAYSIA

UNIVERSITI TEKNOLOGI MALAYSIA PSZ 19:16 (Pind. 1/97) UNIVERSITI TEKNOLOGI MALAYSIA BORANG PENGESAHAN STATUS TESIS JUDUL: DEVELOPMENT OF DATABASE MANAGEMENT SYSTEM (DBMS) BASED ON ELEMENTAL COST ANALYSIS (ECA) METHODOLOGY Saya SESI

More information

COORDINATION PROTECTION SYSTEM IN INDUSTRIAL PLANTS AHMAD TARMIZI BIN MD NOR

COORDINATION PROTECTION SYSTEM IN INDUSTRIAL PLANTS AHMAD TARMIZI BIN MD NOR COORDINATION PROTECTION SYSTEM IN INDUSTRIAL PLANTS AHMAD TARMIZI BIN MD NOR This report is submitted in partial fulfillment of this requirement for the award of Bachelor of Electronic Engineering (Industrial

More information

Performance of Real Time Traffic In The Ethernet And WLAN Using TCP And UDP Protocols. Punitha Subbramaniam

Performance of Real Time Traffic In The Ethernet And WLAN Using TCP And UDP Protocols. Punitha Subbramaniam Performance of Real Time Traffic In The Ethernet And WLAN Using TCP And UDP Protocols. Punitha Subbramaniam Bachelor of Engineering with Honors (Electronics & Telecommunications Engineering) 2009/2010

More information

COMPARATIVE STUDY BETWEEN FEATURE EXTRACTION METHODS FOR FACE RECOGNITION

COMPARATIVE STUDY BETWEEN FEATURE EXTRACTION METHODS FOR FACE RECOGNITION COMPARATIVE STUDY BETWEEN FEATURE EXTRACTION METHODS FOR FACE RECOGNITION SITI FAIRUZ BINTI ABDULLAH UNIVERSITI TEKNIKAL MALAYSIA MELAKA JUDUL: BORANG PENGESAHAN STATUS TESIS* COMPARATIVE STUDY BETWEEN

More information

BORANG PENGESAHAN STATUS TESIS*

BORANG PENGESAHAN STATUS TESIS* BORANG PENGESAHAN STATUS TESIS* JUDUL: NETWORK ANALYSIS AND DESIGN AT WISMA NEGERI SESI PENGAJIAN: II I 2008 Saya MOHO EZWAN BIN MD SAID mengaku membenarkan tesis (PSM/Sarjana/Doktor Falsafah) ini disimpan

More information

REMOVING AL-QURAN ILLUMINATION AMIRUL RAMZANI BIN RADZID UNIVERSITI TEKNIKAL MALAYSIA MELAKA

REMOVING AL-QURAN ILLUMINATION AMIRUL RAMZANI BIN RADZID UNIVERSITI TEKNIKAL MALAYSIA MELAKA REMOVING AL-QURAN ILLUMINATION AMIRUL RAMZANI BIN RADZID UNIVERSITI TEKNIKAL MALAYSIA MELAKA BORANG PENGESAHAN STATUS TESIS JUDUL: REMOVING AL-QURAN ILLUMINATION _ SESI PENGAJIAN: 2014/2015 Saya AMIRUL

More information

PROTOTYPE OF POWER LINE INTERFACE SOCKET USING EMBEDDED CONTROLLER FOR DATA ACQUISITION AND CONTROL. LAI CHING HUAT

PROTOTYPE OF POWER LINE INTERFACE SOCKET USING EMBEDDED CONTROLLER FOR DATA ACQUISITION AND CONTROL. LAI CHING HUAT i PROTOTYPE OF POWER LINE INTERFACE SOCKET USING EMBEDDED CONTROLLER FOR DATA ACQUISITION AND CONTROL. LAI CHING HUAT This Report Is Submitted In Partial Fulfillment of Requirements for the Bachelor Degree

More information

UPGRADE FMS200: SHAFT SUPPLY MODULE THOUGH HUMAN MACHINE INTERFACE LEE HO CHUNG

UPGRADE FMS200: SHAFT SUPPLY MODULE THOUGH HUMAN MACHINE INTERFACE LEE HO CHUNG i UPGRADE FMS200: SHAFT SUPPLY MODULE THOUGH HUMAN MACHINE INTERFACE LEE HO CHUNG This report is submitted in partial fulfilment of the requirements for the award of Bachelor of Electronic Engineering

More information

SMART PARKING SYSTEM USING LABVIEW MUHAMMAD NAZIR BIN MAT ISA

SMART PARKING SYSTEM USING LABVIEW MUHAMMAD NAZIR BIN MAT ISA SMART PARKING SYSTEM USING LABVIEW MUHAMMAD NAZIR BIN MAT ISA This report is submitted in partial fulfillment of the requirements for the award of Bachelor of Electronic Engineering (Industrial Electronics)

More information

PERFORMANCE ANALYSIS OF VIDEO TRANSMISSION OVER IEEE ARCHITECTURE NOOR HURUL-AIN BINTI MOHAMAD

PERFORMANCE ANALYSIS OF VIDEO TRANSMISSION OVER IEEE ARCHITECTURE NOOR HURUL-AIN BINTI MOHAMAD PERFORMANCE ANALYSIS OF VIDEO TRANSMISSION OVER IEEE 802.16 ARCHITECTURE NOOR HURUL-AIN BINTI MOHAMAD This report is submitted in partial fulfillment of the requirements for the award of Bachelor of Electronic

More information

PERPUSTAKAAN UTHM *

PERPUSTAKAAN UTHM * * v.l:wsf JvA *-Ji\ PERPUSTAKAAN UTHM 30000001957505* UNIVERSITI TEKNOLOGI MALAYSIA PSZ 19:16 (Pind.1/97) BORANG PENGESAHAN STATUS TESIS^ JUDUL: VESSELS CLASSIFICATION SESI PENGAJ IAN: 2005/2006 Saya NOR

More information

SESSION BASED ACTIVITY MONITORING APPLICATION FOR ANDROID TAN LEIK HO

SESSION BASED ACTIVITY MONITORING APPLICATION FOR ANDROID TAN LEIK HO SESSION BASED ACTIVITY MONITORING APPLICATION FOR ANDROID TAN LEIK HO This report is submitted in partial fulfillment of requirements for the Bachelor Degree of Electronic Engineering (Industrial Electronics)

More information

FORCE ANALYSIS ON ROBOTIC DEBURRING PROCESS

FORCE ANALYSIS ON ROBOTIC DEBURRING PROCESS UNIVERSITI TEKNIKAL MALAYSIA MELAKA (UTeM) FORCE ANALYSIS ON ROBOTIC DEBURRING PROCESS Thesis submitted in accordance with the partial requirements of the Universiti Teknikal Malaysia Melaka for the Bachelor

More information

ZIGBEE-BASED SMART HOME SYSTEM NURUL ILMI BINTI OMAR

ZIGBEE-BASED SMART HOME SYSTEM NURUL ILMI BINTI OMAR ZIGBEE-BASED SMART HOME SYSTEM NURUL ILMI BINTI OMAR This report is submitted in partial fulfillment of the requirement for the Bachelor Degree in Electronic Engineering (Wireless Communication) with Honors

More information

UNIVERSITI TEKNIKAL MALAYSIA MELAKA

UNIVERSITI TEKNIKAL MALAYSIA MELAKA UNIVERSITI TEKNIKAL MALAYSIA MELAKA WIRELESS CENTRALIZED ACCESS SMART HOME This report submitted in accordance with requirement of the Universiti Teknikal Malaysia Melaka (UTeM) for the Bachelor s Degree

More information

IMPLEMENTATION OF DIAMOND SEARCH (DS) ALGORITHM FOR MOTION ESTIMATION USING MATLAB SITI HAJAR BINTI AHMAD

IMPLEMENTATION OF DIAMOND SEARCH (DS) ALGORITHM FOR MOTION ESTIMATION USING MATLAB SITI HAJAR BINTI AHMAD IMPLEMENTATION OF DIAMOND SEARCH (DS) ALGORITHM FOR MOTION ESTIMATION USING MATLAB SITI HAJAR BINTI AHMAD This report is submitted in partial fulfillment of the requirements for the award of Bachelor of

More information

SMART BODY MONITORING SYSTEM MOHAMAD KASYFUL AZIM BIN AHMAD

SMART BODY MONITORING SYSTEM MOHAMAD KASYFUL AZIM BIN AHMAD SMART BODY MONITORING SYSTEM MOHAMAD KASYFUL AZIM BIN AHMAD This report is submitted in partial fulfillment of the requirements for the award of Bachelor of Electronic Engineering (Computer Engineering)

More information

EDUCATION PATH SYSTEM MOHD ZULHAFIZ BIN HUSSIN

EDUCATION PATH SYSTEM MOHD ZULHAFIZ BIN HUSSIN EDUCATION PATH SYSTEM MOHD ZULHAFIZ BIN HUSSIN This report is submitted in partial fulfillment of the requirements for the Bachelor of Computer Science (Database Management) FACULTY OF INFORMATION AND

More information

BORANG PENGESAHAN STATUS TESIS JUDUL: TAILOR SYSTEM (TailorSys) (HURUF BESAR)

BORANG PENGESAHAN STATUS TESIS JUDUL: TAILOR SYSTEM (TailorSys) (HURUF BESAR) BORANG PENGESAHAN STATUS TESIS JUDUL: TAILOR SYSTEM (TailorSys) SESI PENGAJIAN: 2-200812009 Saya SIT1 SALBIAH BTE MOHD SALLEH (HURUF BESAR) mengaku membenarkan tesis (PSM/Sarjana/Doktor Falsafah) ini disirnpan

More information

BORANG PANGESAHAII STATUS TESIS

BORANG PANGESAHAII STATUS TESIS PSZ 19:16 (Pind.ll97) UNVERST TEKNOLOG MALAYSA BORANG PANGESAHA STATUS TESS JUDUL : SSTEM PARPUSTAKAAN MN ATAS TALAN SES PENGAJAN: 2004noas Saya MUDZALFAH BNT AKBAR (HURUF BESAR) mengaku membenarkantesis

More information

UNIVERSITI TEKNIKAL MALAYSIA MELAKA OPTIMIZATION OF MEASUREMENT PARAMETERS IN NON- CONTACT MEASURING SYSTEM

UNIVERSITI TEKNIKAL MALAYSIA MELAKA OPTIMIZATION OF MEASUREMENT PARAMETERS IN NON- CONTACT MEASURING SYSTEM UNIVERSITI TEKNIKAL MALAYSIA MELAKA OPTIMIZATION OF MEASUREMENT PARAMETERS IN NON- CONTACT MEASURING SYSTEM This report submitted in accordance with the requirement of the Universiti Teknikal Malaysia

More information

HOME APPLIANCES MONITORING AND CONTROL USING SMARTPHONE APPLICATION AHMAD DANIAL BIN AHMAD NAZRI

HOME APPLIANCES MONITORING AND CONTROL USING SMARTPHONE APPLICATION AHMAD DANIAL BIN AHMAD NAZRI i HOME APPLIANCES MONITORING AND CONTROL USING SMARTPHONE APPLICATION AHMAD DANIAL BIN AHMAD NAZRI This Report Is Submitted In Partial Fulfillment Of Requirements For The Bachelor Degree of Electronic

More information

PERFORMANCE EVALUATION OF LEACH PROTOCOL FOR WIRELESS SENSOR NETWORKS USING NS2 MUHAMAD FAIZ BIN RAMDZAN

PERFORMANCE EVALUATION OF LEACH PROTOCOL FOR WIRELESS SENSOR NETWORKS USING NS2 MUHAMAD FAIZ BIN RAMDZAN i PERFORMANCE EVALUATION OF LEACH PROTOCOL FOR WIRELESS SENSOR NETWORKS USING NS2 MUHAMAD FAIZ BIN RAMDZAN This report is submitted in partial fulfilment of requirements for the Bachelor Degree of Electronic

More information

UNIVERSITI TEKNIKAL MALAYSIA MELAKA

UNIVERSITI TEKNIKAL MALAYSIA MELAKA UNIVERSITI TEKNIKAL MALAYSIA MELAKA INTELLIGENT KEYCHAIN This report submitted in accordance with requirement of the Universiti Teknikal Malaysia Melaka (UTeM) for the Bachelor s Degree in Electronics

More information

KOLEJ UNIVERSITI TEKNOLOGI TUN HUSSEIN ONN

KOLEJ UNIVERSITI TEKNOLOGI TUN HUSSEIN ONN KOLEJ UNIVERSITI TEKNOLOGI TUN HUSSEIN ONN BORANG PENGESAHAN STATUS TESIS JUDUL: EMBEDDED WEB SERVER SESI PENGAJIAN: 200412005 Saya MUHAMMAD SHUKRI BIN AHMAD ( 801208-14-5007) (HURUF BESAR) mcngaku mcmbcnarkan

More information

BORANG PENGESAHAN STATUS TESIS*

BORANG PENGESAHAN STATUS TESIS* BORANG PENGESAHAN STATUS TESIS* JUDUL: In_fi_ _n_eo_n D_i=~-it_a_l_L_ib_r_a_ry~ ---C_a_ta_l_o=~~in~~~------- SESI PENGAJIAN: 2011/ 2 0 1 2 Saya LEE KlAN SENG mengaku membenarkan tesis Projek Sarjana Muda

More information

IMPROVED IMAGE COMPRESSION SCHEME USING HYBRID OF DISCRETE FOURIER, WAVELETS AND COSINE TRANSFORMATION MOH DALI MOUSTAFA ALSAYYH

IMPROVED IMAGE COMPRESSION SCHEME USING HYBRID OF DISCRETE FOURIER, WAVELETS AND COSINE TRANSFORMATION MOH DALI MOUSTAFA ALSAYYH 4 IMPROVED IMAGE COMPRESSION SCHEME USING HYBRID OF DISCRETE FOURIER, WAVELETS AND COSINE TRANSFORMATION MOH DALI MOUSTAFA ALSAYYH A thesis submitted in fulfilment of the requirements for the award of

More information

UNIVERSITI TEKNIKAL MALAYSIA MELAKA

UNIVERSITI TEKNIKAL MALAYSIA MELAKA UNIVERSITI TEKNIKAL MALAYSIA MELAKA HOME APPLIANCES WEB SWITCH CONTROL WIRELESSLY USING SMARTHPHONE This report is submitted in accordance with the requirement of Universiti Teknikal Malaysia Melaka (UTeM)

More information

DEVELOPMENT OF HOME ENERGY MANAGEMENT SYSTEM (HEMS) CHEA MENG HUAT UNIVERSITI TEKNIKAL MALAYSIA MELAKA

DEVELOPMENT OF HOME ENERGY MANAGEMENT SYSTEM (HEMS) CHEA MENG HUAT UNIVERSITI TEKNIKAL MALAYSIA MELAKA 1 DEVELOPMENT OF HOME ENERGY MANAGEMENT SYSTEM (HEMS) CHEA MENG HUAT UNIVERSITI TEKNIKAL MALAYSIA MELAKA i DEVELOPMENT OF HOME ENERGY MANAGEMENT SYSTEM (HEMS) CHEA MENG HUAT This Report Is Submitted In

More information

PERPUSTAKAAN KUi TTHO 3 OOOO

PERPUSTAKAAN KUi TTHO 3 OOOO power m m i m a m. pam m, optic a t Ii-'iVi ' l v lvi iy InVKv Rt it i FOR m o network I 0-r/ V m m PERPUSTAKAAN KUi TTHO 3 OOOO 00054916 6 UNIVERSITI TEKNOLOGI MALAYSIA BORANG PENGESAHAN STATUS TESIS

More information

BLOCK-BASED NEURAL NETWORK MAPPING ON GRAPHICS PROCESSOR UNIT ONG CHIN TONG UNIVERSITI TEKNOLOGI MALAYSIA

BLOCK-BASED NEURAL NETWORK MAPPING ON GRAPHICS PROCESSOR UNIT ONG CHIN TONG UNIVERSITI TEKNOLOGI MALAYSIA BLOCK-BASED NEURAL NETWORK MAPPING ON GRAPHICS PROCESSOR UNIT ONG CHIN TONG UNIVERSITI TEKNOLOGI MALAYSIA BLOCK-BASED NEURAL NETWORK MAPPING ON GRAPHICS PROCESSOR UNIT ONG CHIN TONG A project report submitted

More information

NUR ZURAIN BT ZUBAIDI B

NUR ZURAIN BT ZUBAIDI B ANALYSIS COMPARISON BETWEEN SOLIDWORKS PLASTICS AND SIMULATION MOLDFLOW ADVISER OF OPTIMUM GATE SIZE FOR THE DESIGN OF A SINGLE CAVITY PLASTIC NAME CARD HOLDER MOLD NUR ZURAIN BT ZUBAIDI B051210038 UNIVERSITI

More information

LOW COST MP3 PLAYER USING SD CARD KHAIRIL AMRI BIN MUHAMAD UNIVERSITI TEKNIKAL MALAYSIA MELAKA

LOW COST MP3 PLAYER USING SD CARD KHAIRIL AMRI BIN MUHAMAD UNIVERSITI TEKNIKAL MALAYSIA MELAKA LOW COST MP3 PLAYER USING SD CARD KHAIRIL AMRI BIN MUHAMAD UNIVERSITI TEKNIKAL MALAYSIA MELAKA LOW COST MP3 PLAYER USING SD CARD KHAIRIL AMRI BIN MUHAMAD This report is submitted in partial fulfillment

More information

FINITE ELEMENT ANALYSIS OF SEEPAGE FLOW UNDER A SHEET PILE LOH LING PING

FINITE ELEMENT ANALYSIS OF SEEPAGE FLOW UNDER A SHEET PILE LOH LING PING FINITE ELEMENT ANALYSIS OF SEEPAGE FLOW UNDER A SHEET PILE LOH LING PING Bachelor of Engineering with Honours (Civil Engineering) 2006 Universiti Malaysia Sarawak Kota Samarahan BORANG PENYERAHAN TESIS

More information

NUR FARAH DIYANA BINTI SABARUDIN

NUR FARAH DIYANA BINTI SABARUDIN MOBILE E-TIME TABLE SYSTEM ON ANDROID PLATFORM USING NEAR FIELD COMMUNICATION (NFC) NUR FARAH DIYANA BINTI SABARUDIN This Report Is Submitted In Partial Fulfillment of Requirements for the Bachelor Degree

More information

UNIVERSITI TEKNIKAL MALAYSIA MELAKA

UNIVERSITI TEKNIKAL MALAYSIA MELAKA UNIVERSITI TEKNIKAL MALAYSIA MELAKA DESIGN AND DEVELOPMENT OF VEHICLE SECURITY DEVICE BY USING BIOMETRIC IDENTIFICATION (FINGERPRINT) This report submitted in accordance with requirement of the Universiti

More information

ISOGEOMETRIC ANALYSIS OF PLANE STRESS STRUCTURE CHUM ZHI XIAN

ISOGEOMETRIC ANALYSIS OF PLANE STRESS STRUCTURE CHUM ZHI XIAN ISOGEOMETRIC ANALYSIS OF PLANE STRESS STRUCTURE CHUM ZHI XIAN A project report submitted in partial fulfilment of the requirements for the award of the degree of Master of Engineering (Civil-Structure)

More information

DESIGN OF ENERGY SAVING AIR CONDITIONING CONTROL SYSTEM MOHD KHUZAIRIE BIN MOHD TAUFIK

DESIGN OF ENERGY SAVING AIR CONDITIONING CONTROL SYSTEM MOHD KHUZAIRIE BIN MOHD TAUFIK DESIGN OF ENERGY SAVING AIR CONDITIONING CONTROL SYSTEM MOHD KHUZAIRIE BIN MOHD TAUFIK This report is submitted in partial fulfillment of the requirement for award of Bachelor of Electronic Engineering

More information

OPTIMIZE PERCEPTUALITY OF DIGITAL IMAGE FROM ENCRYPTION BASED ON QUADTREE HUSSEIN A. HUSSEIN

OPTIMIZE PERCEPTUALITY OF DIGITAL IMAGE FROM ENCRYPTION BASED ON QUADTREE HUSSEIN A. HUSSEIN OPTIMIZE PERCEPTUALITY OF DIGITAL IMAGE FROM ENCRYPTION BASED ON QUADTREE HUSSEIN A. HUSSEIN A thesis submitted in partial fulfillment of the requirements for the award of the degree of Master of Science

More information

SEMANTICS ORIENTED APPROACH FOR IMAGE RETRIEVAL IN LOW COMPLEX SCENES WANG HUI HUI

SEMANTICS ORIENTED APPROACH FOR IMAGE RETRIEVAL IN LOW COMPLEX SCENES WANG HUI HUI SEMANTICS ORIENTED APPROACH FOR IMAGE RETRIEVAL IN LOW COMPLEX SCENES WANG HUI HUI A thesis submitted in fulfilment of the requirements for the award of the degree of Doctor of Philosophy (Computer Science)

More information

KARAOKE MACHINE TOOL MOHD AIEZATT DANIAL B RAMIZAN

KARAOKE MACHINE TOOL MOHD AIEZATT DANIAL B RAMIZAN KARAOKE MACHINE TOOL MOHD AIEZATT DANIAL B RAMIZAN This report is submitted in partial fulfillment of the requirements for the award for of Bachelor Degree of Electronic Engineering (Industrial Electronics)

More information

SIT1 NURI-IAZA BINTI MOHD RAMLI

SIT1 NURI-IAZA BINTI MOHD RAMLI ANALYSIS AND CALCULATION'OF FIBER TO FIBER CONNECTION LOSS SIT1 NURI-IAZA BINTI MOHD RAMLI This report is submitted in partial fulfillment of the requirements for the award of Bachelor of Electronic Engineering

More information

THE COMPARISON OF IMAGE MANIFOLD METHOD AND VOLUME ESTIMATION METHOD IN CONSTRUCTING 3D BRAIN TUMOR IMAGE

THE COMPARISON OF IMAGE MANIFOLD METHOD AND VOLUME ESTIMATION METHOD IN CONSTRUCTING 3D BRAIN TUMOR IMAGE THE COMPARISON OF IMAGE MANIFOLD METHOD AND VOLUME ESTIMATION METHOD IN CONSTRUCTING 3D BRAIN TUMOR IMAGE SHAMSHIYATULBAQIYAH BINTI ABDUL WAHAB UNIVERSITI TEKNOLOGI MALAYSIA THE COMPARISON OF IMAGE MANIFOLD

More information

HARDWARE AND SOFTWARE CO-SIMULATION PLATFORM FOR CONVOLUTION OR CORRELATION BASED IMAGE PROCESSING ALGORITHMS SAYED OMID AYAT

HARDWARE AND SOFTWARE CO-SIMULATION PLATFORM FOR CONVOLUTION OR CORRELATION BASED IMAGE PROCESSING ALGORITHMS SAYED OMID AYAT HARDWARE AND SOFTWARE CO-SIMULATION PLATFORM FOR CONVOLUTION OR CORRELATION BASED IMAGE PROCESSING ALGORITHMS SAYED OMID AYAT UNIVERSITI TEKNOLOGI MALAYSIA HARDWARE AND SOFTWARE CO-SIMULATION PLATFORM

More information

UNIVERSITI TEKNIKAL MALAYSIA MELAKA

UNIVERSITI TEKNIKAL MALAYSIA MELAKA UNIVERSITI TEKNIKAL MALAYSIA MELAKA SMART AQUARIUM USING GLOBAL SYSTEM FOR MOBILE COMMUNICATION (GSM) This report submitted in accordance with requirement of the Universiti Teknikal Malaysia Melaka (UTeM)

More information

2D CUT-OUT ANIMATION "MAT TUNANGKU"

2D CUT-OUT ANIMATION MAT TUNANGKU 2D CUT-OUT ANIMATION "MAT TUNANGKU" NUR SAHIDAH BINTI BASHlER UNIVERSITI TEKNIKAL MALAYSIA MELAKA BORANG PENGESAHAN STATUS TESIS* JUDUL: 2D CUT-OUT ANIMATION "MAT TUNANGKU" SESI PENGAJIAN: 2-200812009

More information

PROJECT TITLE JARIPAH BINTI ADZHAR

PROJECT TITLE JARIPAH BINTI ADZHAR i PROJECT TITLE WIRELESS REMOTE CONTROL UTILIZING XBEE FOR MOBILE ROBOT APPLICATION JARIPAH BINTI ADZHAR This Report Is Submitted In Partial Fulfillment of Requirements For The Bachelor Degree in Electronic

More information

LOGICAL OPERATORS AND ITS APPLICATION IN DETERMINING VULNERABLE WEBSITES CAUSED BY SQL INJECTION AMONG UTM FACULTY WEBSITES NURUL FARIHA BINTI MOKHTER

LOGICAL OPERATORS AND ITS APPLICATION IN DETERMINING VULNERABLE WEBSITES CAUSED BY SQL INJECTION AMONG UTM FACULTY WEBSITES NURUL FARIHA BINTI MOKHTER LOGICAL OPERATORS AND ITS APPLICATION IN DETERMINING VULNERABLE WEBSITES CAUSED BY SQL INJECTION AMONG UTM FACULTY WEBSITES NURUL FARIHA BINTI MOKHTER UNIVERSITI TEKNOLOGI MALAYSIA i LOGICAL OPERATORS

More information

Signature :.~... Name of supervisor :.. ~NA.lf... l.?.~mk.. :... 4./qD F. Universiti Teknikal Malaysia Melaka

Signature :.~... Name of supervisor :.. ~NA.lf... l.?.~mk.. :... 4./qD F. Universiti Teknikal Malaysia Melaka "I hereby declare that I have read this thesis and in my opinion this thesis is sufficient in term of scope and quality for the reward of the Bachelor' s degree of Mechanical Engineering (Structure and

More information

7 I I, BORANG PENGESAHAN STATUS TESIS* SESI PENGAnAN: 2012 I Saya MOHD FARID BIN MOHD YUSOF (B )

7 I I, BORANG PENGESAHAN STATUS TESIS* SESI PENGAnAN: 2012 I Saya MOHD FARID BIN MOHD YUSOF (B ) BORANG PENGESAHAN STATUS TESIS* JUDUL : CLOUD STORAGE SUBSYSTEM FOR PIN-IT SOCIAL NETWORK SESI PENGAnAN: 2012 I 2013 Saya MOHD FARID BIN MOHD YUSOF (B031010350) mengaku membenarkan tesis Projek Sarjana

More information

HARDWARE-ACCELERATED LOCALIZATION FOR AUTOMATED LICENSE PLATE RECOGNITION SYSTEM CHIN TECK LOONG UNIVERSITI TEKNOLOGI MALAYSIA

HARDWARE-ACCELERATED LOCALIZATION FOR AUTOMATED LICENSE PLATE RECOGNITION SYSTEM CHIN TECK LOONG UNIVERSITI TEKNOLOGI MALAYSIA HARDWARE-ACCELERATED LOCALIZATION FOR AUTOMATED LICENSE PLATE RECOGNITION SYSTEM CHIN TECK LOONG UNIVERSITI TEKNOLOGI MALAYSIA HARDWARE-ACCELERATED LOCALIZATION FOR AUTOMATED LICENSE PLATE RECOGNITION

More information

AUTO SILENT MODE FOR ANDROID SMARTPHONES MUHAMMAD AZLAN SHAHARIMAN BIN AHMAD

AUTO SILENT MODE FOR ANDROID SMARTPHONES MUHAMMAD AZLAN SHAHARIMAN BIN AHMAD AUTO SILENT MODE FOR ANDROID SMARTPHONES MUHAMMAD AZLAN SHAHARIMAN BIN AHMAD This report is submitted in partial fulfillment of requirement for the Degree of Bachelor of Electronic Engineering (Computer

More information

UNIVERSITI TEKNIKAL MALAYSIA MELAKA

UNIVERSITI TEKNIKAL MALAYSIA MELAKA UNIVERSITI TEKNIKAL MALAYSIA MELAKA AUTOMATED STREETLIGHT MALFUNCTION ALERT SYSTEM (ASMAS) BY USING GSM This report is submitted in accordance with the requirement of the Universiti Teknikal Malaysia Melaka

More information

PLC APPLICATION FOR FLOOD DETECTION AND PROTECTION VIA COMMUNICATION SYSTEM MOHD AKMAL BIN ZAINAL ABIDIN

PLC APPLICATION FOR FLOOD DETECTION AND PROTECTION VIA COMMUNICATION SYSTEM MOHD AKMAL BIN ZAINAL ABIDIN PLC APPLICATION FOR FLOOD DETECTION AND PROTECTION VIA COMMUNICATION SYSTEM MOHD AKMAL BIN ZAINAL ABIDIN This report is submitted in partial fulfillment of the requirements for the award Bachelor of Electronic

More information

RECOGNITION OF PARTIALLY OCCLUDED OBJECTS IN 2D IMAGES ALMUASHI MOHAMMED ALI UNIVERSITI TEKNOLOGI MALAYSIA

RECOGNITION OF PARTIALLY OCCLUDED OBJECTS IN 2D IMAGES ALMUASHI MOHAMMED ALI UNIVERSITI TEKNOLOGI MALAYSIA RECOGNITION OF PARTIALLY OCCLUDED OBJECTS IN 2D IMAGES ALMUASHI MOHAMMED ALI UNIVERSITI TEKNOLOGI MALAYSIA i RECOGNITION OF PARTIALLY OCCLUDED OBJECT IN 2D IMAGES ALMUASHI MOHAMMED ALI A dissertation submitted

More information

ELECTROMAGNETIC MODELLING OF ARTIFICIAL PACEMAKER. Emelia Anak Gunggu

ELECTROMAGNETIC MODELLING OF ARTIFICIAL PACEMAKER. Emelia Anak Gunggu ELECTROMAGNETIC MODELLING OF ARTIFICIAL PACEMAKER Emelia Anak Gunggu Bachelor of Engineering with Honours (Electronics and Telecommunication Engineering) 2009 UNIVERSITI MALAYSIA SARAWAK R13a BORANG PENGESAHAN

More information

90(111H7. AND 1800i1H7. MOBILE PHONE SI1Il'LATION WITH HVNIAN HEAD ANI) HAND 11ODEl.

90(111H7. AND 1800i1H7. MOBILE PHONE SI1Il'LATION WITH HVNIAN HEAD ANI) HAND 11ODEl. 90(111H7. AND 1800i1H7. MOBILE PHONE SI1Il'LATION WITH HVNIAN HEAD ANI) HAND 11ODEl. Nasyitah bt Ahmad Kamal TK 6%4.4 Bachelor of Engineering with Honours C45 (Electronics & Telecommunication Engineering)

More information

SECURE-SPIN WITH HASHING TO SUPPORT MOBILITY AND SECURITY IN WIRELESS SENSOR NETWORK MOHAMMAD HOSSEIN AMRI UNIVERSITI TEKNOLOGI MALAYSIA

SECURE-SPIN WITH HASHING TO SUPPORT MOBILITY AND SECURITY IN WIRELESS SENSOR NETWORK MOHAMMAD HOSSEIN AMRI UNIVERSITI TEKNOLOGI MALAYSIA SECURE-SPIN WITH HASHING TO SUPPORT MOBILITY AND SECURITY IN WIRELESS SENSOR NETWORK MOHAMMAD HOSSEIN AMRI UNIVERSITI TEKNOLOGI MALAYSIA SECURE-SPIN WITH HASHING TO SUPPORT MOBILITY AND SECURITY IN WIRELESS

More information

DEVELOPMENT OF TIMETABLING PROGRAM FONG WOON KEAT

DEVELOPMENT OF TIMETABLING PROGRAM FONG WOON KEAT ii DEVELOPMENT OF TIMETABLING PROGRAM FONG WOON KEAT This report is submitted in partial fulfillment of the requirements for the award of Bachelor of Electronic Engineering and Computer Engineering With

More information

SUPERVISED MACHINE LEARNING APPROACH FOR DETECTION OF MALICIOUS EXECUTABLES YAHYE ABUKAR AHMED

SUPERVISED MACHINE LEARNING APPROACH FOR DETECTION OF MALICIOUS EXECUTABLES YAHYE ABUKAR AHMED i SUPERVISED MACHINE LEARNING APPROACH FOR DETECTION OF MALICIOUS EXECUTABLES YAHYE ABUKAR AHMED A project submitted in partial fulfillment of the requirements for the award of the degree of Master of

More information

HARDWARE/SOFTWARE SYSTEM-ON-CHIP CO-VERIFICATION PLATFORM BASED ON LOGIC-BASED ENVIRONMENT FOR APPLICATION PROGRAMMING INTERFACING TEO HONG YAP

HARDWARE/SOFTWARE SYSTEM-ON-CHIP CO-VERIFICATION PLATFORM BASED ON LOGIC-BASED ENVIRONMENT FOR APPLICATION PROGRAMMING INTERFACING TEO HONG YAP HARDWARE/SOFTWARE SYSTEM-ON-CHIP CO-VERIFICATION PLATFORM BASED ON LOGIC-BASED ENVIRONMENT FOR APPLICATION PROGRAMMING INTERFACING TEO HONG YAP A project report submitted in partial fulfilment of the requirements

More information

AN IMPROVED PACKET FORWARDING APPROACH FOR SOURCE LOCATION PRIVACY IN WIRELESS SENSORS NETWORK MOHAMMAD ALI NASSIRI ABRISHAMCHI

AN IMPROVED PACKET FORWARDING APPROACH FOR SOURCE LOCATION PRIVACY IN WIRELESS SENSORS NETWORK MOHAMMAD ALI NASSIRI ABRISHAMCHI AN IMPROVED PACKET FORWARDING APPROACH FOR SOURCE LOCATION PRIVACY IN WIRELESS SENSORS NETWORK MOHAMMAD ALI NASSIRI ABRISHAMCHI A thesis submitted in partial fulfillment of the requirements for the award

More information

HOME APPLIANCES AND SECURITY CONTROLLED VIA GSM SYSTEM NUR SYAFIQAH BINTI YUSOP

HOME APPLIANCES AND SECURITY CONTROLLED VIA GSM SYSTEM NUR SYAFIQAH BINTI YUSOP HOME APPLIANCES AND SECURITY CONTROLLED VIA GSM SYSTEM NUR SYAFIQAH BINTI YUSOP This Report Is Submitted In Partial Fulfilment of Requirements for the Bachelor Degree of Electronic Engineering (Wireless

More information

BORANG PENCALONAN HADIAH UNIVERSITI NOMINATION FORM FOR UNIVERSITY AWARD

BORANG PENCALONAN HADIAH UNIVERSITI NOMINATION FORM FOR UNIVERSITY AWARD BORANG PENCALONAN HADIAH UNIVERSITI NOMINATION FORM FOR UNIVERSITY AWARD PERIHAL HADIAH DESCRIPTION OF AWARD Nama Hadiah (Name of Award) Spesifikasi Hadiah (Specification of Award) Syarat Kurniaan (Condition

More information

SLANTING EDGE METHOD FOR MODULATION TRANSFER FUNCTION COMPUTATION OF X-RAY SYSTEM FARHANK SABER BRAIM UNIVERSITI TEKNOLOGI MALAYSIA

SLANTING EDGE METHOD FOR MODULATION TRANSFER FUNCTION COMPUTATION OF X-RAY SYSTEM FARHANK SABER BRAIM UNIVERSITI TEKNOLOGI MALAYSIA SLANTING EDGE METHOD FOR MODULATION TRANSFER FUNCTION COMPUTATION OF X-RAY SYSTEM FARHANK SABER BRAIM UNIVERSITI TEKNOLOGI MALAYSIA SLANTING EDGE METHOD FOR MODULATION TRANSFER FUNCTION COMPUTATION OF

More information

DETECTION OF WORMHOLE ATTACK IN MOBILE AD-HOC NETWORKS MOJTABA GHANAATPISHEH SANAEI

DETECTION OF WORMHOLE ATTACK IN MOBILE AD-HOC NETWORKS MOJTABA GHANAATPISHEH SANAEI ii DETECTION OF WORMHOLE ATTACK IN MOBILE AD-HOC NETWORKS MOJTABA GHANAATPISHEH SANAEI A project report submitted in partial fulfillment of the requirements for the award of the degree of Master of Computer

More information

INTEGRATION OF CUBIC MOTION AND VEHICLE DYNAMIC FOR YAW TRAJECTORY MOHD FIRDAUS BIN MAT GHANI

INTEGRATION OF CUBIC MOTION AND VEHICLE DYNAMIC FOR YAW TRAJECTORY MOHD FIRDAUS BIN MAT GHANI INTEGRATION OF CUBIC MOTION AND VEHICLE DYNAMIC FOR YAW TRAJECTORY MOHD FIRDAUS BIN MAT GHANI A thesis submitted in fulfilment of the requirements for the award of the degree of Master ofengineering (Mechanical)

More information

LOCALIZING NON-IDEAL IRISES VIA CHAN-VESE MODEL AND VARIATIONAL LEVEL SET OF ACTIVE CONTOURS WITHTOUT RE- INITIALIZATION QADIR KAMAL MOHAMMED ALI

LOCALIZING NON-IDEAL IRISES VIA CHAN-VESE MODEL AND VARIATIONAL LEVEL SET OF ACTIVE CONTOURS WITHTOUT RE- INITIALIZATION QADIR KAMAL MOHAMMED ALI LOCALIZING NON-IDEAL IRISES VIA CHAN-VESE MODEL AND VARIATIONAL LEVEL SET OF ACTIVE CONTOURS WITHTOUT RE- INITIALIZATION QADIR KAMAL MOHAMMED ALI A dissertation submitted in partial fulfillment of the

More information

AUTOMATIC RAILWAY GATE CONTROLLERUSING ZIGBEE NURLIYANA HAZIRAH BINTI MOHD SAFEE (B )

AUTOMATIC RAILWAY GATE CONTROLLERUSING ZIGBEE NURLIYANA HAZIRAH BINTI MOHD SAFEE (B ) AUTOMATIC RAILWAY GATE CONTROLLERUSING ZIGBEE NURLIYANA HAZIRAH BINTI MOHD SAFEE (B021110154) This report is submitted in partial fulfilment of requirements for the Bachelor Degree of Electronic Engineering

More information

STUDY OF FLOATING BODIES IN WAVE BY USING SMOOTHED PARTICLE HYDRODYNAMICS (SPH) HA CHEUN YUEN UNIVERSITI TEKNOLOGI MALAYSIA
