Fast Nano-object Tracking in TEM Video Sequences Using Connected Components Analysis

Hussein S. Abdul-Rahman, Jan Wedekind, Martin Howarth
Centre for Automation and Robotics Research (CARR)
Mobile Machines and Vision Laboratory (MMVL)
Sheffield Hallam University, Sheffield, UK

Abstract - Connected components analysis is a well-known pre-processing step in many image processing applications. Not only is it used to divide an image into its constituent parts and to assign a different label to each segment, but it is also a key step in tracking moving objects in video sequences. In this paper the authors propose the use of connected components analysis as a tracking algorithm for nano-objects in dynamic in-situ Transmission Electron Microscopy (TEM) video sequences. The results show that the proposed algorithm is fast and capable of tracking nano-objects, even when they deform and change shape.

Keywords: Connected components analysis, Segmentation, Labelling, Tracking, Transmission electron microscopy (TEM), Nano-robotics.

1. INTRODUCTION

Studying the properties of materials at the nano-scale is very demanding. It requires not only a high degree of precision in both observing and manipulating materials at that scale, but also updated and enhanced software processing algorithms capable of analysing the resulting information for a better understanding of material properties. Image processing algorithms bring great benefit to the analysis of microscopy images of the materials under test: many are used to increase image quality, extract key features, measure lengths or areas, and support many other analytical applications [1]. In this paper the authors propose a nano-object tracking algorithm based on connected components analysis. Connected components analysis plays a key role in many image processing applications [1-4].
Not only is it used to divide the image into its constituent parts and to give unique labels to each segment, but it is also the key step in the tracking of moving objects in video sequences [3]. Connected components analysis is discussed and explained in section 2 of this paper. Section 3 describes how connected components are applied to track a nano-object in TEM video images. The results are shown and discussed in section 4, followed by the conclusions in section 5.

2. CONNECTED COMPONENTS ANALYSIS

Labelling using connected components analysis is a key step in pattern recognition, target tracking, computer vision, fingerprint identification, character recognition, medical image analysis and many other image-based applications [1-7]. Connected components analysis can be defined as the process in which a binary image is transformed into an N-state image where all connected pixels belonging to one object are assigned a unique label. Connected component labelling has been widely investigated and many algorithms have been proposed. These algorithms vary in their complexity and speed, and they can be roughly classified into two main categories. The first group of algorithms attempts to resolve the connectivity between pixels using multiple scans: they keep scanning the image, forwards and backwards, to resolve label equivalences until no further change occurs [8]. The other group attempts to assign the labels using only two scans: in the first scan they assign initial labels, and in the second scan they resolve the equivalences between the labels [9-15]. In this paper, a fast two-scan connected components algorithm is implemented to divide the image into its individual components; this algorithm is explained in the following section.
2.1 TWO-SCAN CONNECTED COMPONENTS ALGORITHM

In this section, the authors explain the connected components algorithm used to segment the binary images derived from the TEM video sequences. As in other two-scan algorithms, this algorithm is divided into two main parts: an initial labelling scan and a second scan that resolves equivalent labels.

2.1.1 INITIAL LABELLING

The initial labelling procedure can be described as follows. Scan the image pixel by pixel, starting from the top-left pixel and ending with the final pixel at the bottom right. For each pixel, check whether its value is 0, i.e. it does not belong to any object; if so, skip that pixel and go to the next one. Otherwise, record and examine the labels of the four neighbours that were processed before it. Figure 1 below shows the active pixel and the processed neighbours that have to be considered.

Figure 1: The active pixel and its four previously processed neighbours.

If none of the four neighbours has been labelled, assign a new label to the active pixel and modify the true labels table as shown in Figure 2 below. The true labels table is a vector that contains the true values of the initial labels, so that it can be used to resolve the equivalences between labels in the second scan, as will be explained.

Case 1: No neighbour has a label
  Labels[pixel] = next_free_label
  True_Labels[next_free_label] = next_free_label
  next_free_label++

Figure 2: Labelling case 1, no labels found for the neighbours.

The second case is that the neighbours share only one label; this could mean that only one neighbour is labelled, or that two or more neighbours have the same label. In this case the active pixel inherits that label, as shown in Figure 3 below.

Case 2: All neighbours have only one label
  Labels[pixel] = True_Labels[neighbour_label]

Figure 3: Labelling case 2, one label found for the neighbours.

When the neighbours have two or more different labels, the active pixel takes the smallest label, as shown in Figure 4. After assigning the label to the active pixel, the true labels table has to be updated so that all the labels involved are connected to each other. So, for the example shown in Figure 4, where the neighbours carry labels 3, 5 and 7, the true labels table should be modified so that the true labels of labels 3, 5 and 7 are equal. Figure 5 shows the pseudo-code used to update the true labels table.

Case 3: Neighbours have more than one label
  Labels[pixel] = True_Labels[smallest_label]
  Update_true_labels()

Figure 4: Labelling case 3, more than one label found for the neighbours.

Updating the true labels table:

Start Update_true_labels()
  Tmp = True_Labels[smallest_label]
  while (True_Labels[Tmp] != Tmp)
    Tmp = True_Labels[Tmp]
  for (each labelled neighbour) do
  {
    Tmp1 = neighbour_label
    while (True_Labels[Tmp1] != Tmp1)
      Tmp1 = True_Labels[Tmp1]
    if (Tmp > Tmp1)
    {
      True_Labels[Tmp] = Tmp1
      Tmp = Tmp1
    }
    else
      True_Labels[Tmp1] = Tmp
  }
End

Figure 5: Pseudo-code to update the true labels table.
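The three labelling cases and the true-labels update can be pulled together into a single first-pass routine. The following is an illustrative Python sketch, not the authors' HornetsEye implementation; the function name `first_scan` and the exact neighbour offsets are assumptions made for illustration, following Figure 1's four previously processed neighbours.

```python
def first_scan(image):
    """First labelling pass over a binary image (list of lists of 0/1).

    Returns (labels, true_labels); true_labels records the equivalence
    chains that the second pass will resolve.
    """
    rows, cols = len(image), len(image[0])
    labels = [[0] * cols for _ in range(rows)]
    true_labels = [0]           # index 0 is unused (background)
    next_free_label = 1

    def find_root(lbl):
        # Follow the true-labels chain until a seed label is reached.
        while true_labels[lbl] != lbl:
            lbl = true_labels[lbl]
        return lbl

    for r in range(rows):
        for c in range(cols):
            if image[r][c] == 0:
                continue                        # background pixel: skip
            # Labels of the four previously processed neighbours
            # (left, upper-left, upper, upper-right), as in Figure 1.
            neighbours = set()
            for dr, dc in ((0, -1), (-1, -1), (-1, 0), (-1, 1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols and labels[rr][cc] > 0:
                    neighbours.add(labels[rr][cc])
            if not neighbours:                  # case 1: new label
                labels[r][c] = next_free_label
                true_labels.append(next_free_label)
                next_free_label += 1
            else:                               # cases 2 and 3
                smallest = min(neighbours)
                labels[r][c] = smallest
                # Case 3: link the roots of all neighbour labels so that
                # their true labels become equal (Figure 5).
                root = find_root(smallest)
                for n in neighbours:
                    n_root = find_root(n)
                    if root < n_root:
                        true_labels[n_root] = root
                    else:
                        true_labels[root] = n_root
                        root = n_root
    return labels, true_labels
```

When only one neighbour label exists, the union step degenerates to a no-op, so cases 2 and 3 share the same code path.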
2.1.2 RESOLVING EQUIVALENCES BETWEEN LABELS

After the first scan, a single object that is actually connected may have been given two or more different labels; resolving the equivalences between these labels is therefore essential to establish the connection between them. The algorithm scans the true labels table and recursively replaces each true label until no more changes occur, as shown in the pseudo-code in Figure 6 below.

Resolving equivalences between labels:

Start resolving_equivalences()
  for (label = last_label down to 1)
  {
    while (True_Labels[label] != True_Labels[True_Labels[label]])
      True_Labels[label] = True_Labels[True_Labels[label]]
  }
End

Figure 6: Pseudo-code to resolve equivalences between labels.

As an example, assume that the true labels table shown in Figure 7(a) resulted from the initial labelling procedure described in the previous section. Resolving the equivalences starts with the last label in the table and checks whether its true label is a seed label; a seed label is defined as a label whose true label is equal to its own value. In other words, the check determines whether the true label is independent and does not refer to any other label. In Figure 7(a), the true label of label 5 is 3, which refers to another label, so label 3 is not a seed label and the true label of label 5 becomes 2, as shown in Figure 7(b). After the replacement, the program checks again whether the true label of label 5, which is now 2, is independent. It is still referring to another label, namely 1, so a further replacement is required, as shown in Figure 7(c). Now the true label of label 5 is 1, which is a seed label because it refers to itself; consequently, label 5 has been resolved and no further processing is needed. The same procedure is then repeated for each label in the table until all labels have been resolved.
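The resolving pass in Figure 6 is short enough to sketch directly. This is an illustrative Python sketch, not the authors' implementation; the table is stored as a Python list with index 0 unused, matching the convention that labels start at 1.

```python
def resolve_equivalences(true_labels):
    """Second pass: compress the true-labels table in place.

    Walks from the last label down to 1, repeatedly replacing each entry
    by its true label's true label until it points at a seed label
    (a label whose true label equals its own value).
    """
    for label in range(len(true_labels) - 1, 0, -1):
        while true_labels[label] != true_labels[true_labels[label]]:
            true_labels[label] = true_labels[true_labels[label]]
    return true_labels

# The worked example of Figure 7(a): labels 1..5 with
# true labels 1, 1, 2, 4, 3 (index 0 unused).
table = [0, 1, 1, 2, 4, 3]
resolve_equivalences(table)   # table becomes [0, 1, 1, 1, 4, 1]
```

After the pass, label 5 resolves to the seed label 1, exactly as traced step by step in the Figure 7 example.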
Label:      1  2  3  4  5
True label: 1  1  2  4  3
(a)

Label:      1  2  3  4  5
True label: 1  1  2  4  2
(b)

Label:      1  2  3  4  5
True label: 1  1  2  4  1
(c)

Figure 7: Example of resolving equivalences between labels.

3. NANO-OBJECT TRACKING USING CONNECTED COMPONENTS

The specific example used here to illustrate the connected components analysis is a sharp TEM probe moving into contact with a TEM specimen. The TEM nano-probe is used as an indenter to deform a nano-machined Si bridge in real time inside the TEM microscope. Tracking this probe using connected components analysis is carried out in two main steps: object selection and object tracking. Object selection is a key stage in the identification of the objects of interest. In this stage, an initial frame is analysed and divided into its components; only the components of interest are selected and the rest are discarded. For each component of interest, its properties are recorded for use in the tracking stage. In our case, the area and centre of gravity of each component are calculated and stored to enable the tracking algorithm to identify that object from these properties. Object tracking then uses the previously recorded area and centre of gravity to identify the object in the current video frame. Figure 8 below shows a flow chart of the tracking procedure for each new frame. As shown in the figure, each new frame is first thresholded to convert it into a binary image. This binary image is smoothed using a median filter so that thresholding noise and unwanted small components are filtered out. Then the connected components labelling algorithm is applied and all the components of that frame are extracted.
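The thresholding and median-filtering steps that precede labelling can be sketched as follows. This is an illustrative Python sketch, not the authors' HornetsEye implementation; the threshold value and the 3x3 window size are assumptions. For a binary image, the median of a neighbourhood reduces to a majority vote, which is what the sketch implements.

```python
def threshold(frame, t):
    """Convert a grey-level frame (list of lists) into a binary image."""
    return [[1 if v > t else 0 for v in row] for row in frame]

def median3x3_binary(binary):
    """3x3 median filter for a binary image.

    For 0/1 data the median equals the majority of the neighbourhood,
    so isolated thresholding noise pixels are removed.
    """
    rows, cols = len(binary), len(binary[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            ones = total = 0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        total += 1
                        ones += binary[rr][cc]
            out[r][c] = 1 if 2 * ones > total else 0
    return out
```

The smoothed binary image is then passed to the two-scan labelling algorithm of section 2.1 to extract the frame's components.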
Figure 8: Flow chart of the tracking algorithm using connected components (input frame -> thresholding -> binary image -> labelling by connected components analysis -> labelled image -> tracking by area and centre of gravity -> tracked target).

The algorithm then proceeds by calculating the area and centre of gravity of each component and comparing the results with those stored in the selection stage. The targets are identified using the following two observations. First, in our application the target is the nano-object moving in the TEM video images; this indenter moves smoothly across video frames, controlled by a voltage ramp on a piezo crystal. This means that the next position of the indenter should be close to its previous position, so any component located a significant distance from that position cannot be the target. Note that the position of a component is taken to be its centre of gravity. Secondly, the area of a given labelled component in two successive frames is approximately equal, or only slightly different. After the targets are identified, their areas and centre-of-gravity positions are recorded and updated for use in the next frame. The results show that this technique is capable of tracking the nano-object in TEM video images even when it deforms and changes shape, as demonstrated in the next section.

4. RESULTS AND DISCUSSION

The proposed algorithm has been applied to tracking a nano-probe in a TEM video sequence. Figure 9 shows a nano-probe (top left) moving to deform a Si nano-bridge. It shows that the probe has been successfully tracked through a series of frames throughout the video sequence. In Figure 9, the green-coloured component represents the current position of the indenter, whilst the pink-coloured component represents its initial position in the first frame.
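The two matching criteria from section 3 (position continuity and area similarity) can be sketched as a single selection rule. This is an illustrative Python sketch, not the authors' code; the parameters `max_dist` and `area_tol` are hypothetical, since the paper does not state numerical tolerances.

```python
def match_target(components, prev_area, prev_cg, max_dist, area_tol=0.3):
    """Identify the tracked target among the current frame's components.

    components: list of (label, area, (cx, cy)) tuples, one per labelled
    component; prev_area and prev_cg are the target's area and centre of
    gravity recorded from the previous frame.

    Returns the label of the component nearest the previous position
    whose area is within the tolerance, or None if no component
    satisfies both criteria.
    """
    best_label, best_dist = None, max_dist
    for label, area, (cx, cy) in components:
        dist = ((cx - prev_cg[0]) ** 2 + (cy - prev_cg[1]) ** 2) ** 0.5
        # Criterion 1: close to the previous centre of gravity.
        # Criterion 2: area approximately equal between frames.
        if dist <= best_dist and abs(area - prev_area) <= area_tol * prev_area:
            best_label, best_dist = label, dist
    return best_label
```

Once the target is matched, its area and centre of gravity overwrite the stored values, ready for the next frame.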
As shown in Figure 9, the nano-probe has been successfully tracked despite deforming and changing shape, as in Figure 9(d), (e) and (f). Deformation of the nano-probe poses a real challenge to many tracking algorithms, and most template-based algorithms will fail at this stage. Table 1 below shows the distances the nano-probe moves with respect to its initial position along the indenter axis (Z), marked by the red arrow in the figure.

Table 1: Distance moved by the nano-probe from its starting position.

Frame:             a   b   c   d   e   f
Distance (pixels): 10  47  72  48  49  -33
Figure 9 (panels a-f): Results of nano-indenter tracking using connected components analysis.

Multiple objects can also be tracked using the proposed algorithm. Figure 10 shows the results of tracking both the indenter and the sample. The green-coloured component represents the indenter tracking, the yellow-coloured component represents the current position of the sample, and the blue-coloured component represents the initial position of the sample.

Table 2: Distance moved by the sample throughout the frames.

Frame:             a   b   c   d   e   f
Distance (pixels): 10  47  72  48  49  0

As shown in Figure 10, the proposed technique has successfully managed to track both objects even when they are in contact with each other and could be misinterpreted as a single object, as in Figure 10(c), (d) and (e). In our example the nano-probe and the sample have different contrast levels, as shown in Figure 10; object contact is therefore resolved by using different threshold levels to distinguish between the sample and the indenter. Table 2 shows the distances the sample moves with respect to its initial position along the indenter axis (Z).

5. CONCLUSIONS

In this paper, the tracking of moving nano-objects in TEM videos using connected components analysis has been proposed. The proposed algorithm has been tested on TEM video sequences of a moving nano-probe as it indents and deforms a specimen. The results show that the algorithm manages to track and identify its target throughout the sequences, and that it remains capable of tracking nano-probes and nano-objects even when they are deformed during the indentation process. The main advantage of the proposed technique is that it is fast and computationally efficient, which makes it applicable to dynamic applications.
Figure 10 (panels a-f): Results of multiple-object tracking using connected components analysis.

6. IMPLEMENTATION

This algorithm was implemented using the HornetsEye computer vision library. HornetsEye is a real-time computer vision extension for Ruby running under Linux and Microsoft Windows. It is probably the first free software project providing a solid platform for implementing real-time computer vision software in a scripting language. The platform has potential for use in robotic applications, industrial automation and unmanned aerial vehicles, as well as in image and video processing, microscopy, materials science and medical research [16, 17].

REFERENCES

[1] R. C. Gonzalez, R. E. Woods, Digital Image Processing, Third Edition, Prentice Hall, (2007).
[2] D. A. Forsyth, J. Ponce, Computer Vision: A Modern Approach, Prentice Hall, Pearson Education, Inc., (2003).
[3] R. G. Abbott, L. R. Williams, Multiple target tracking with lazy background subtraction and connected component analysis, Machine Vision and Applications, Vol. 20, No. 2, pp. 93-101, (2009).
[4] W. Yalin, J. Ilhwan, W. Stephen, Y. Shing-Tung, C. Tony, Segmentation and tracking of 3D neuron microscopy images using a PDE based method and connected component labeling algorithm, 2006 IEEE/NLM Life Science Systems and Applications Workshop, Bethesda, USA, p. 2, (2006).
[5] K. Suzuki, S. G. Armato, F. Li, S. Sone, K. Doi, Massive training artificial neural network (MTANN) for reduction of false positives in computerized detection of lung nodules in low-dose CT, Med. Phys., Vol. 30, No. 7, pp. 1602-1617, (2003).
[6] K. Suzuki, H. Yoshida, J. Nappi, S. G. Armato, A. H. Dachman, Mixture of expert 3D massive-training ANNs for reduction of multiple types of false positives in CAD for detection of polyps in CT colonography, Med. Phys., Vol. 35, No. 2, pp. 694-703, (2008).
[7] K. Suzuki, F. Li, S. Sone, K. Doi, Computer-aided diagnostic scheme for distinction between benign and malignant nodules in thoracic low-dose CT by use of massive training artificial neural network, IEEE Trans. Medical Imaging, Vol. 24, No. 9, pp. 1138-1150, (2005).
[8] A. Hashizume, R. Suzuki, H. Yokouchi, et al., An algorithm of automated RBC classification and its evaluation, Bio Med. Eng., Vol. 28, No. 1, pp. 25-32, (1990).
[9] K. Wu, E. Otoo, K. Suzuki, Optimizing two-pass connected-component labeling algorithms, Pattern Analysis and Applications, Vol. 12, No. 2, pp. 117-135, (2009).
[10] L. He, Y. Chao, K. Suzuki, A linear-time two-scan labeling algorithm, IEEE International Conference on Image Processing (ICIP 2007), pp. V-241-V-244, (2007).
[11] L. He, Y. Chao, K. Suzuki, K. Wu, Fast connected-component labeling, Pattern Recognition, Vol. 42, pp. 1977-1987, (2009).
[12] L. He, Y. Chao, K. Suzuki, A run-based two-scan labeling algorithm, IEEE Trans. Image Processing, Vol. 17, No. 5, pp. 749-756, (2008).
[13] K. Suzuki, I. Horiba, N. Sugie, Linear-time connected-component labeling based on sequential local operations, Computer Vision and Image Understanding, Vol. 89, pp. 1-23, (2003).
[14] Q. Hu, G. Qian, W. L. Nowinski, Fast connected-component labeling in three-dimensional binary images based on iterative recursion, Computer Vision and Image Understanding, Vol. 99, pp. 414-434, (2005).
[15] D. S. Hirschberg, A. K. Chandra, D. V. Sarwate, Computing connected components on parallel computers, Commun. ACM, Vol. 22, No. 8, pp. 461-464, (1979).
[16] J. Wedekind, B. P. Amavasai, K. Dutton, M. Boissenin, A machine vision extension for the Ruby programming language, International Conference on Information and Automation (ICIA), pp. 991-996, Zhangjiajie, China, (2008).
[17] http://www.wedesoft.demon.co.uk/hornetseyeapi/