QUANTITATIVE ANALYSIS OF BICYCLUS ANYNANA'S EYESPOT WING PATTERN IMAGES


Universidade de Lisboa
Faculdade de Ciências
Departamento de Biologia Animal

QUANTITATIVE ANALYSIS OF BICYCLUS ANYNANA'S EYESPOT WING PATTERN IMAGES

Pedro dos Santos Lopes

Mestrado em Bioinformática e Biologia Computacional, 2011
Bioinformática


Universidade de Lisboa
Faculdade de Ciências
Departamento de Biologia Animal

QUANTITATIVE ANALYSIS OF BICYCLUS ANYNANA'S EYESPOT WING PATTERN IMAGES

Pedro dos Santos Lopes

Mestrado em Bioinformática e Biologia Computacional, 2011
Bioinformática

Master dissertation supervised by:
Dra. Filipa Alves, Instituto Gulbenkian de Ciência
Dr. Gabriel G Martins, Faculdade de Ciências da Universidade de Lisboa

Index

Abstract ... i
Resumo ... iii
Introduction ... 1
Objectives ... 3
Implementation ... 4
  Input dialog ... 5
  Separation of each eyespot (wing)
  Analysis of the eyespot
    Analysis of normal images
    Analysis of fluorescent microscopy images
  Auxiliary methods
Results
Discussion
  Initial approach
  Current approach
  Future developments
References
Annexes
  Variables ... 49


Abstract

The main objective of this work is to provide researchers with a tool to quantitatively analyse images of the eyespot patterns present on the wings of the butterfly species Bicyclus anynana. More specifically, this tool is a plugin for ImageJ, a free, open-source image processing program. Until now, researchers have been using software such as ImageJ to quantify some dimensions of these patterns through manual measurements. This plugin offers an effective, quick and automatic way to obtain these measurements, as well as others that were previously difficult to get, such as the area of each coloured region. It also offers the possibility of obtaining representative images of the eyespot(s). Besides images with a single eyespot, the program also analyses wing images with several eyespots, as well as fluorescence microscopy images with specific proteins labelled. For the first two types, the program finds the eyespots automatically and analyses them individually; for the latter, the program can provide intensity plots of transversal cuts through the middle of the eyespot. To obtain the required data, the plugin finds the centre of each eyespot and, from there, searches for the frontiers of each of its coloured areas. It then calculates their area, diameter and roundness, which are used to calculate the rest of the needed data and to create the representative images. In the case of fluorescence microscopy images, the program unites their coloured dots through a dilation process and then acquires the intensity plots. In the end, we have a program that gives more and better data to help future research on evolution and development using this species, and that could subsequently be transformed into a more generic plugin, capable of analysing any pattern containing closed frontiers, thanks to its robust ability to trace such frontiers regardless of any background noise.

Keywords: Bicyclus anynana, ImageJ, eyespots, quantitative analysis, fluorescence microscopy.

Acknowledgements

Dra. Filipa Alves, Instituto Gulbenkian de Ciência
Dr. Gabriel Martins, Faculdade de Ciências da Universidade de Lisboa

Resumo

This work aims to provide researchers with a tool for the quantitative analysis of eyespot patterns in wing images of the butterfly Bicyclus anynana, namely a plugin for ImageJ, a free, open-source image analysis program. Until now, software such as ImageJ has been used to obtain some dimensions of these patterns through the manual distance-measuring tools available in such programs. This plugin offers a quick and effective way to obtain those distance measurements, along with others that could not previously be obtained by manual methods, such as the area of each eyespot region and its roundness. The plugin also offers the automatic creation of representative images of the eyespot under analysis, either using the real frontiers of each area or using elliptical representations of them. Besides images of an individual eyespot, the program also analyses images of wings with several eyespots, as well as fluorescence microscopy images showing the expression of the Distal less or Engrailed proteins. For the first image types, the program automatically finds the eyespots and analyses them individually; for the latter, it can return transversal cuts through the middle of the eyespot as intensity plots, which assist the researcher in the analysis. The species Bicyclus anynana has anterior and posterior eyespots on the ventral and dorsal surfaces of the forewings and several eyespots on the dorsal surface of the hindwings. These patterns are circular, come in different sizes and share the same composition: a small white area at the centre, roughly one hundredth of the eyespot's size, a predominant black area and a yellow band around it, enclosing the eyespot.

These morphological aspects show great genetic variation, and all of them have been shown to respond jointly to artificial selection and even to mutations. This apparent unity is due to their sharing the same type of development, from organizing centres called foci; a characteristic expression of developmental genes is also observed in pre-adults. Nevertheless, considerable potential for independent variation of eyespot size has been observed. To study these variations in this kind of pattern in greater depth, more quantitative data are needed, so as to establish a more precise and meaningful correspondence between gene expression and the phenotypic variation observed in the adult individual. To obtain the intended data for single-eyespot or whole-wing images, the plugin finds the centre of each eyespot and, from there, searches for the three frontiers that make it up. As each frontier is found, the program records it and uses it to obtain information such as areas, diameters and roundness, which then serve for the intended calculations of ratios between areas or for the creation of the representative images.

To obtain these frontiers, the program binarizes the image and then applies the outline and skeletonize functions, thereby obtaining lines one pixel wide. The program then follows each of these lines in a given direction, travelling only along the shortest path, without being misled by possible errors present in the image. To know which path to follow, the program paints each side of each line with a particular colour; for a line to be the correct one, it must have at least one pixel of each of these colours in its neighbourhood. Special cases can occur, such as there being more than one candidate pixel satisfying the right conditions, or the outer side not having been fully painted, preventing the program from moving forward. For these and other cases, there are sub-rules that are then applied, enabling the program to move on and stay on the right path. It is these corrections and sub-rules that make the plugin relatively robust to ambiguities in the image itself. In the case of whole-wing images, to obtain correct individual data for each eyespot, the program has to separate them, which it does by drawing straight lines between them, so that when it traces the last, outermost frontier, it does not run into the neighbouring eyespot and count it towards the data of the eyespot being analysed. On finishing the analysis of each eyespot, the program repaints what it coloured with the original colours, white for the areas and black for the frontiers, so that no errors occur in the analysis of the subsequent eyespots.

In the case of fluorescence microscopy images, which consist of coloured dots on a black background corresponding to the expression of a given protein, the program joins those dots through a dilation process, to give the eyespot volume, and then performs the transversal cuts requested by the user. One possible cut type is an average cut, corresponding to the average of several cuts through the eyespot at different angles. For this, the program creates an image that is the average of several images, each rotated by a given amount relative to the previous one, about the centre of the eyespot's central region. The program can also provide other data, such as the area, diameter and roundness of each region. The end result is a program that quantitatively analyses the eyespot patterns present on the wings of butterflies of the species Bicyclus anynana and gives us a more reliable connection between the observed pattern and the gene expression involved in its formation, thus yielding more and better data for future research in evolution and development using this species. This plugin can later be transformed into a more generic one, capable of analysing any pattern made up of closed circular frontiers, given its strong ability to obtain this kind of frontier despite any possible background noise in the image.

Keywords: Bicyclus anynana, ImageJ, eyespots, quantitative analysis, fluorescence microscopy.


Introduction

The colour patterns on butterfly wings are a great example of phenotypic variation. These patterns can vary both across and within species and are ecologically significant, often having a known adaptive value. The butterfly Bicyclus anynana became an important model for studying adaptive morphological evolution in evolutionary developmental biology due to its variable wing colour patterns, namely the eyespots, and because it can be easily maintained in the laboratory. Evolutionary and developmental biology is confronted with the task of understanding the genetic bases of phenotypic variation. For this reason, the genetic pathways involved in eyespot formation and the physiological basis of its plasticity, as well as the biochemical pathways of pigment formation, have been the object of study (3). Knowledge from Drosophila melanogaster wing development studies has contributed to the understanding of butterfly wing pattern formation, with a number of its developmental pathways implicated in butterflies, such as the Distal less (Dll), engrailed (en), spalt (sal) and Notch (N) genes, which are expressed in the eyespot area in developing wings. Although useful, this approach only gives us candidate genes from Drosophila and not all the ones involved in butterfly wing formation (1). Since Diptera and Lepidoptera are quite different, namely, in the case under study, in their wings, it is likely that not all the genes involved will be the same as in Drosophila, and a deeper search is necessary to fully understand butterfly wing colour evolution and development (3). Bicyclus anynana has an anterior and a posterior eyespot on both the dorsal and ventral forewing surfaces and several eyespots on the dorsal hindwing surfaces.
Each eyespot is approximately circular in shape and may have a different size, but all have the same colour composition: a small white centre area roughly 1/100th of the eyespot, a large middle black area and a narrow yellow ring surrounding it. These morphological aspects show great phenotypic variation, but all of the eyespots have been shown to react to artificial selection in unison and can also be affected as a group in some wing pattern mutants. This association between the eyespots is due to the fact that they all share the same developmental basis, being formed from central organizers called foci. The characteristic eyespot shape can already be recognized in the wing disk of the late pupa through the expression pattern of some developmental genes (e.g. en, sal, Dll). Consequently, we can view the whole pattern of eyespots as a single module that is evolutionarily and developmentally independent from the other pattern elements present on the wings. Some aspects of eyespot morphology have genetic correspondences between eyespots, which are stronger for eyespots on the same wing surface. It has been shown that these correspondences determine the eyespot size and the whole pattern, including across different wing surfaces and, to some degree, other features of eyespot morphology. However, great potential has been found for independent variation of eyespot size, even though there is a strong genetic connection between eyespots on the same wing surface. This goes against

the prevalent role of developmental constraints, derived from the coupling between individual eyespots, in shaping the evolution of eyespot size. This seemingly overridden aspect suggests that the genetic correlations aren't a major factor constraining wing pattern formation. Even though there are obvious genetic connections, it is known that eyespots from dorsal and ventral surfaces are rather independent: for example, the ventral eyespots show plasticity in their size depending on temperature and hormonal regulation, whereas dorsal eyespots do not (2). Some genetic tools are currently being developed for B. anynana, like the construction of a Bacterial Artificial Chromosome library (3), and an extensive Expressed Sequence Tags project is underway that will help identify new genes involved in wing pattern formation and variation, serve as a basis for the development of a linkage map and DNA microarrays, and help identify DNA sequence polymorphisms in wing genes using its redundancy. These polymorphisms could then be used to build a high-density gene-based linkage map for this species, which will be an essential help in mapping wing pattern variation to gene regions, while at the same time testing the contribution of a number of candidate loci to this variation (1). The development of germline transformation techniques for this butterfly is also a recent development in its study, this being the only butterfly species to date where this is possible. All these techniques and tools in development are going to be crucial in testing the function of candidate genes and their contribution to pattern development and evolution, as well as in providing the first steps towards more sophisticated gene manipulation techniques already available in other model systems (3).
Furthermore, a modelling approach to the pathways involved in eyespot pattern formation is currently underway, in order to help understand the regulatory relations among the candidate genes and to predict other, unknown genes possibly involved in this developmental process. To help this approach, new ways of measuring the eyespots and their phenotypic variation are needed, in order to quantify the natural variation in the patterns as well as possible experimentally induced alterations, either by artificial selection (selective breeding) or by mutations. Until now, these measurements were coarse and prone to human error, as they were made by hand. In order to obtain more precise values, and others that until now weren't possible to obtain, a new automatic measurement tool must be created, and that's the main goal of this work. By providing a precise quantification of the eyespot images, this tool enables the objective comparison between different variants of the same pattern, both in wild-type and mutant butterflies, thus contributing to establish a more meaningful correspondence between gene expression and phenotypic variation.

Objectives

The main goal of this work is to provide the researchers working with the butterfly species Bicyclus anynana with a tool to quantitatively analyse images of the eyespots present on its wings. Because the study of their formation cannot be done without studying gene expression, this tool also analyses fluorescence microscopy images of the eyespots, in which the expression pattern of some genes is analysed. This project uses ImageJ as the platform for the tool created, in this case a plugin. Because it is a public-domain, Java-based image processing program whose source code is freely available, it proved to be a good choice for providing the researchers with an inexpensive tool for their work (15). This plugin analyses several different types of images: images with a single cut-out eyespot on them, images of the whole wing, both fore and hind wings, and fluorescence microscopy images with a single eyespot, showing the expression of homologues of Drosophila melanogaster gene products such as Engrailed. Previously, the methods used to compare eyespots were rudimentary, such as measuring their diameters with a program ruler, from edge to edge. These methods are laborious, less reproducible and less thorough, so a more precise tool is required: one that automatically identifies spots and calculates data such as areas, roundness, diameters and intensities, without the human errors and biases that come with manual measurements. This plugin will allow future work on this subject to rely on accurate quantitative data rather than manual measurements and comparisons of different eyespots, and will provide a faster way to analyse the images collected experimentally.

Implementation

The plugin is composed of four parts, as shown in the diagram below. First there is a section that contains the code for creating the initial input dialog, in which the user can choose the desired output parameters. Then comes the main body of the program, which contains a section to separate each eyespot in case the user analyses the whole wing, and a section that performs the analysis of the eyespot. The latter is the main section, used to analyse all image types, and is divided into two distinct subsections: the first analyses normal images and the other performs the analysis of fluorescence microscopy images. At the end of the code there are some auxiliary methods used by the main body. The images used to test the plugin and to obtain the data for comparisons come from the articles referenced at the end, after the discussion section.

Figure 1: main structure of the program.
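The four-part flow described above can be sketched as a simple dispatcher. This is a minimal Python sketch, not the actual ImageJ/Java plugin code; all function names are illustrative stand-ins for the sections named in the diagram:

```python
# Minimal sketch of the plugin's overall control flow (the real plugin is
# written in Java for ImageJ; every function name here is hypothetical).

def separate_eyespots(image, n_eyespots):
    # Stand-in for the "separation of each eyespot (wing)" section:
    # would return one centre coordinate per eyespot.
    return [(0, 0)] * n_eyespots

def analyse_eyespot(image, centre, options):
    # Stand-in for the "analysis of normal images" subsection.
    return "normal analysis"

def analyse_fluorescence(image, options):
    # Stand-in for the "analysis of fluorescence microscopy images" subsection.
    return "fluorescence analysis"

def analyse_image(image, image_type, options):
    """Dialog result -> optional eyespot separation -> per-eyespot analysis."""
    if image_type in ("Forewing", "Hindwing"):
        # Whole-wing images are first split into individual eyespots.
        centres = separate_eyespots(image, options["n_eyespots"])
        return [analyse_eyespot(image, c, options) for c in centres]
    if image_type == "Single Eyespot":
        # A normal image with a single cut-out eyespot.
        return [analyse_eyespot(image, None, options)]
    # "Distal less" / "Engrailed" images go to the fluorescence subsection.
    return [analyse_fluorescence(image, options)]
```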

Input dialog

The input dialog section contains the code to create the initial menu in which the user chooses the output parameters. This initial section actually consists of two separate dialogs: the first asks the user to select the type of image to be analysed, which defines the composition of the next one, and the second asks the user to choose the output parameters as well as other information, depending on the type of image selected. The first dialog is a very straightforward drop-down chooser, as can be seen in the image below. The user is allowed to choose between Single Eyespot, Forewing, Hindwing, Distal less and Engrailed.

Figure 2: first input dialog for choosing the type of image to analyse.

The program then sets the default number of eyespots in the image: one for Single Eyespot, Distal less and Engrailed images, two for Forewing images and seven for Hindwing images. The second dialog is different for each type of image to analyse. When the user chooses the Single Eyespot option, the program will recognize the image as a normal image (not a fluorescence microscopy one) with a single eyespot cut from a wing, and will analyse it using only the analysis of normal images subsection. Its second dialog (fig. 3) has two parts: one in which the user selects which areas to study (the options are the eyespot in its totality and each individual area) and another in which the user can introduce the scale of the image (pixels/mm), either manually or by selecting the image scale set in ImageJ or present in its metadata, and select which output data to get. In the latter part, the user can choose to obtain the area, the roundness and the diameters (minimum and maximum) for the selected areas. It is also possible to get several area ratios by selecting which areas to use, as well as image representations of the eyespot using the real frontiers of each area (Coloured original areas) or elliptical representations of each frontier (Coloured elliptical areas). The user can also choose to analyse all the opened images (though they all have to be of the same image type) and to show binary images for each individual area (white, black and yellow).

Figure 3: second input dialog for Single Eyespot image.

If the user chooses either the Forewing or Hindwing option, the program will recognize the image as a normal image of a whole wing (or at least one with all the eyespots visible) and will analyse it using the separation of each eyespot (wing) section followed by the same analysis used in the previous case. In this case, the second dialog (fig. 4) is the same as the previous one (fig. 3) with an additional part, in which the user has to introduce the number of eyespots on the wing as well as which eyespots to analyse, using the word all to analyse all eyespots or introducing the numbers of the eyespots (ordered from top to bottom) separated by a dash (-). The user can also choose to view the outlined binarized image with the dividing lines (which mimic the division provided by the veins).
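The eyespot-selection field thus accepts either the word all or dash-separated eyespot numbers. A parser for such a field might look like the following (a Python sketch with a hypothetical function name, not the plugin's actual Java code):

```python
def parse_eyespot_selection(text, n_present):
    """Parse the 'which eyespots' field: 'all', or numbers separated by a
    dash, with eyespots numbered from top to bottom starting at 1."""
    text = text.strip().lower()
    if text == "all":
        return list(range(1, n_present + 1))
    chosen = [int(tok) for tok in text.split("-") if tok]
    # Mirror the dialog's validation: every number must refer to an
    # eyespot actually present in the image.
    if any(i < 1 or i > n_present for i in chosen):
        raise ValueError("eyespot number outside 1..%d" % n_present)
    return chosen
```

For example, on a seven-eyespot hindwing, "2-4-6" selects the second, fourth and sixth eyespots counted from the top.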

Figure 4: second input dialog for Forewing and Hindwing images.

The last two options tell the program that the image is a fluorescence microscopy one with a single eyespot, to be analysed using only the analysis of fluorescence microscopy images subsection. The second dialog (fig. 5) is somewhat simpler than the previous ones, since it only has the part where the user introduces the scale (pixels/mm), either manually or by selecting the image scale set in ImageJ or present in its metadata, and selects which output data to get, such as the area, roundness and diameters (minimum and maximum) for all areas. It is possible to obtain an image representation of the eyespot using elliptical representations of each frontier, as well as cross-sections (average, along the minimum diameter and along the maximum diameter), perpendicular to the image. There is also the choice to analyse all the opened images.

Figure 5: second dialog for the Distal less and Engrailed image.

In order to proceed correctly to the analysis, the program checks the user input for errors and gives the corresponding alert. Some conditions must be observed: the characters entered as numbers in the dialog must be numbers; the number of eyespots present must be greater than zero; the number of eyespots to be analysed must be smaller than or equal to the number of eyespots present in the image; to calculate area ratios, more than one area must be selected in the Area ratios section; and the user must select at least one area to analyse and at least one output datum. Also, if any of the areas selected for the area ratios isn't included in the areas to study, the program asks the user whether it should proceed. After these dialogs, and if the user has chosen a Single Eyespot, Forewing or Hindwing image type, the program continues by applying the macrowork method. The macrowork method, found at the end of the program, returns an image with the skeletonized outlines of the binarized image. This process starts by extracting the Brightness portion of the original image, by splitting it into an HSB stack and retaining only the Brightness for analysis. As opposed to simply converting the image into an 8-bit black/white image, the Brightness portion provides a more reliable image in terms of pixel intensity quantification, which correlates with colour intensity, since each colour has a different brightness (as noted in the use of different coefficients in the formula Y = 0.299R + 0.587G + 0.114B used to calculate the value of each pixel). The image obtained is therefore composed, for each pixel, of the largest of its red, green and blue values, giving the best contrast between the areas and not losing information in the process, as shown in the example below.

Figure 6: left, conversion to 8-bit; right, Brightness image (from the HSB stack).

After obtaining this image, the program applies a Gaussian blur filter with sigma (radius) equal to one pixel, to eliminate small irregularities that could cause errors, and converts the result to a binary image. The program then outlines and skeletonizes it, thereby obtaining the one-pixel-wide frontiers separating the different zones (fig. 7). Because each image has its own error-prone irregularities, the program shows the user the resulting image and asks whether the frontiers of interest are well defined. At that point, the user can increase or decrease the blur sigma and the program will redo the binary image using the new value. When the frontiers are acceptable, the program continues the analysis.

Figure 7: image with the skeletonized outlines of the binarized original image.
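The macrowork steps just described can be illustrated on a toy image. The sketch below (plain stdlib Python, not the plugin's ImageJ/Java code) extracts the HSB Brightness channel as max(R, G, B), compares it with the weighted 8-bit conversion, thresholds it, and marks outline pixels as foreground pixels with at least one background 4-neighbour; the Gaussian blur and skeletonize steps are omitted for brevity:

```python
def brightness(rgb):
    # HSB "Brightness" channel: the maximum of the R, G, B values.
    return max(rgb)

def weighted_gray(rgb):
    # Standard weighted 8-bit conversion: Y = 0.299R + 0.587G + 0.114B.
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def binarize(img, thresh=128):
    # Threshold the Brightness channel into a 0/1 image.
    return [[1 if brightness(p) >= thresh else 0 for p in row] for row in img]

def outline(binary):
    # A foreground pixel belongs to the outline if any 4-neighbour
    # (or the image border) is background.
    h, w = len(binary), len(binary[0])
    def is_bg(y, x):
        return y < 0 or y >= h or x < 0 or x >= w or binary[y][x] == 0
    return [[1 if binary[y][x] and (is_bg(y - 1, x) or is_bg(y + 1, x) or
                                    is_bg(y, x - 1) or is_bg(y, x + 1)) else 0
             for x in range(w)] for y in range(h)]
```

For a saturated blue pixel (0, 0, 255), the Brightness channel gives 255 while the weighted conversion gives only about 29, which illustrates why the Brightness image preserves contrast between differently coloured areas.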

Separation of each eyespot (wing)

This section is only used for Forewing and Hindwing images and consists in separating the eyespots from each other in case neighbouring eyespots merge through the yellow area. This part of the program obtains the coordinates of each eyespot centre and draws dividing lines between the eyespots on the image with the skeletonized outlines of the binarized original image (fig. 8, right image).

Figure 8: left, original image (bigeye mutant); centre, eyespot representation image; right, skeletonized outlines of the binarized original image with dividing lines.

Firstly, the program has to check whether the image contains a white background, in order to eliminate it. This is required because the program searches for the white areas as the position reference for each eyespot. As the wing is round-shaped, the quickest and most efficient way to do this is by inspecting each corner for white pixels. If there is a white background, the image is binarized with a threshold of 128 (the middle point in the 256-value gradient range) and a great part of its smallest holes is filled with the close operator (a morphological operator that is simply a dilation followed by an erosion) and the dilate operator. In spite of these processes, and even if the fill holes function is used, the wing can still have holes at the edges of the image, mainly because the fill holes function searches the sides for the background colour so that it can identify the area outside the object whose holes are to be filled. A workaround for this possible problem is to make the program draw lines along the edges the wing touches, by finding out where the white corners finish and the wing starts and drawing a line between those points. After this, the fill holes option is applied and all that is left is the white background and the black

section of the wing. The image is then inverted and framed with a white line around it, to account for small white pixels at the border of the wing that could be just at the edge of the image, followed by some dilation, also to account for the same possible problem. After all these steps, this image is added to the original one, overlapping the white background (fig. 9, right image).

Figure 9: left, original image; right, image without the white background.

The next step consists in finding each eyespot present. For this, the program searches for white centre area pixels in which each colour channel has the value given by the pixellimit variable (equal to 255 to start with). Once it has found a pixel with that value, the pixel is inserted in a blank image. After all the pixels have been analysed, the program investigates the blank image with the particle analyser function to see whether there are at least as many particles as there are eyespots, a number provided by the user earlier in the second initial dialog. If not, the program decreases the pixellimit value by one and searches the pixels again, creating a new blank image. This process continues until the condition is met; the program then shows the resulting blank image with the possible locations of each eyespot and asks the user whether it contains all of the eyespots present in the original image. If some of the particles aren't in a correct position, they are false eyespots, probably errors from the photography, and the program will have to analyse the image again to find the missing eyespot(s).
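The decreasing-threshold search described above can be sketched as follows (a standalone Python reimplementation with hypothetical names; the real plugin uses ImageJ's particle analyser). A pixel is marked when all three of its colour channels reach pixellimit, connected groups of marked pixels are counted, and the limit is lowered until at least the expected number of particles appears:

```python
from collections import deque

def count_particles(mask):
    """Count 4-connected components of 1-pixels (the particle analyser's job)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                count += 1
                seen[y][x] = True
                queue = deque([(y, x)])
                while queue:  # flood fill over the component
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return count

def find_pixellimit(img, n_eyespots, start_limit=255):
    """Lower pixellimit until the near-white mask holds >= n_eyespots particles."""
    for limit in range(start_limit, -1, -1):
        # A pixel qualifies when every colour channel reaches the limit.
        mask = [[1 if min(p) >= limit else 0 for p in row] for row in img]
        if count_particles(mask) >= n_eyespots:
            return limit, mask
    return 0, mask
```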
After it has found all the eyespots, if there were false ones, the program asks the user to select with the mouse pointer the particles that correspond to the real eyespots (a user-friendly input, because the selection doesn't have to be very accurate or sequential) and makes a correspondence between each selected point and the nearest particle, followed by sorting of the eyespots from top to bottom. If the user selects a different number of points than the number of eyespots, the program alerts the user to the situation, allowing him to correct it.
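The click-to-particle matching can be sketched like this (illustrative Python, not the plugin's Java code): each click, however rough, is paired with the nearest particle centre, and the chosen centres are then sorted from top to bottom:

```python
def match_clicks_to_particles(clicks, particle_centres):
    """Pair each user click (x, y) with its nearest particle centre,
    then sort the selected centres from top to bottom (ascending y)."""
    chosen = []
    for cx, cy in clicks:
        # Squared Euclidean distance is enough for nearest-neighbour search.
        nearest = min(particle_centres,
                      key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)
        chosen.append(nearest)
    return sorted(chosen, key=lambda p: p[1])
```

In the real plugin, a mismatch between the number of clicks and the number of eyespots additionally triggers the alert mentioned above.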

The final step of the separation of the eyespots consists in drawing the dividing lines between them. First the program makes an image with the outlines of the binarized original image, the same way the macrowork method does but without the skeletonize function, so that the outlines are continuous, without diagonal breaks. To draw the line between two eyespots, the program needs to know its slope and a point it passes through, which will be a point between the black-area/yellow-area frontiers. To find the latter, an imaginary line is drawn connecting the centres of the two eyespots being analysed. Then, starting from each eyespot centre, the program runs along this line until it finds the frontier between the black and yellow areas. The running process actually starts at 1/8th of the distance between the eyespots, and if only one of the frontiers is found, due to its absence, poor image quality or possibly the merging of the black areas, the program sets the passing point as the middle of the imaginary line. The other passing points, for Hindwing images, are set closer to one frontier or the other, depending on the ratio between the eyespots, with some exceptions. This deviation from the centre of the zone between the two black-area/yellow-area frontiers is necessary because, by observing the images, we can perceive that the veins lie closer to the frontier belonging to the bigger eyespot of the pair. By applying this ratio to the calculation of the passing point, the separation of the eyespots becomes more accurate, mimicking the position of their natural dividers, the veins, and thus yielding a more precise area value for each individual yellow area. For Hindwing images from mutated individuals with extra eyespots, the ratio used is 0.5, mainly because the positions of the extra eyespots may disturb the normal arrangement and could cause the divisions to occur in odd places.
In the case of a hindwing with seven eyespots (or even eight; although a mutant trait, this extra eyespot's position doesn't disrupt the normal layout of the rest), the ratios are treated differently for each case. For the first four eyespots, the ratio used to determine the position of the passing point is the calculated one; for the rest, the ratio is 0.5, since this value gives a better division than the calculated one. The ratio of 0.5 is also used in all other cases. After finding the passing points for the lines, the program calculates their slopes, which is very straightforward since they are perpendicular to the imaginary lines between the eyespot centres. Although logic dictates that, for each pair of eyespots, these imaginary lines are supposed to lie between the eyespots in question, we observe that most of the time, for hindwings, this doesn't reflect the positions of the veins at all. We found that by pairing another eyespot with the upper one of the pair, we get a more accurate line. Even though the veins are slightly curved, the lines almost exactly overlap their terminal portions, where the eyespots are (see fig. 8). So when the program analyses Hindwing images and there are seven or even eight eyespots present, the pairings are: the first with the fifth, the second with the fifth, the third with the sixth, the fourth with the seventh, the sixth with the seventh, the seventh with the sixth again and, for the eighth

24 eyespot, the seventh with the eighth. For the other cases, the slope is calculated using the eyespots that bracket them. In the end, with the values for the slope and the passing position, the program calculates the equations for the dividing lines and draws them over the skeletonized outlines of the binarized original image, which can be viewed if the user chooses to. 13
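The geometry of a dividing line, a point at some ratio along the segment between two centres plus a perpendicular direction, can be sketched as follows. This is a minimal Python illustration, not the plugin's Java code, and `dividing_line` is a hypothetical name:

```python
import math

def dividing_line(c1, c2, ratio=0.5):
    """Return a passing point and a direction for the line separating two eyespots.

    c1, c2 -- (x, y) centres of the pair; ratio -- fraction of the way from c1
    to c2 where the line passes (0.5 = midpoint, as used for most cases).
    """
    # Passing point: interpolate between the two centres.
    px = c1[0] + ratio * (c2[0] - c1[0])
    py = c1[1] + ratio * (c2[1] - c1[1])
    # Direction of the dividing line: perpendicular to the centre-to-centre vector.
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    length = math.hypot(dx, dy)
    perp = (-dy / length, dx / length)  # unit vector rotated 90 degrees
    return (px, py), perp
```

A ratio other than 0.5 shifts the line toward one of the frontiers, mimicking the off-centre position of the veins described above.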

Analysis of the eyespot

Analysis of normal images

This subsection is used when the Single Eyespot, Forewing or Hindwing option is selected in the first initial dialog. It's the one that gives the user the outlines of each area and their data (area, roundness, diameter, area ratios and image representations) for normal eyespot or wing images. To obtain the frontiers of each eyespot, the program starts at its centre and moves along the pixel line to the right until it has found three frontiers (this is done for each eyespot in the case of a wing image). First the program sets the eyespot centre coordinate. If it's a wing image, the coordinate is set to the current eyespot to be analysed, found in the separation of each eyespot (wing) section; if it's a Single Eyespot image, the coordinate is set to the centre of the image. In this latter case, the image is also framed with a black line encased in an exterior-coloured line (more about colours later; see fig. 11 for the colour codes), to account for single eyespots that could be merged with another, out-of-frame eyespot, or that could be missing parts of their yellow area. Mainly because the Single Eyespot image may not be centred properly on the image frame, or to account for possible errors with the outlines, the program now applies the findwhite method to this centre point, in order to find the closest white (or close to white) pixel. The findwhite method, found at the end of the program, returns a set of coordinates (x, y) from the original image, corresponding to the closest white (or close to white) pixel to the given set of coordinates. The value used to find this pixel is the pixellimit variable discussed above. When the image type is Single Eyespot, the default value is set to 245 and not 255 to allow for image quality errors. This method works by comparing the pixels in a spiral motion (fig. 10) around the initial coordinate until it finds one with all three RGB values at or above the pixellimit value or, in the case of a Single Eyespot image, until it reaches the limit of the frame. For wing images, this method doesn't need to rely on this latter condition because it almost certainly finds a pixel right at the initial coordinate. If either of the returned coordinates is zero, it means that no pixel was found, so the program sharpens the image and tries again until it has found a correct set of coordinates to start the search for the three frontiers properly.
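The spiral search can be sketched in a few lines. This is an illustrative Python version, not the plugin's Java code; the pixel store and the termination radius are assumptions for the sketch:

```python
def find_white(pixels, x0, y0, limit=245, max_radius=50):
    """Spiral outwards from (x0, y0) until a near-white pixel is found.

    pixels maps (x, y) -> (r, g, b); returns the coordinates found,
    or (0, 0) on failure, mirroring the zero-coordinate convention above.
    """
    x, y = x0, y0
    dx, dy = 1, 0        # start moving right
    side, steps = 1, 0   # current leg length and steps taken on it
    legs_done = 0
    while side <= max_radius:
        rgb = pixels.get((x, y))
        if rgb and all(c >= limit for c in rgb):
            return x, y
        x, y = x + dx, y + dy
        steps += 1
        if steps == side:          # finished a leg: turn 90 degrees
            steps = 0
            dx, dy = -dy, dx
            legs_done += 1
            if legs_done % 2 == 0:  # every two legs the side length grows
                side += 1
    return 0, 0
```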

Figure 10: code snippet from the findwhite method that changes the coordinates in order to run a spiral path around the initial pixel. The side variable gives the length of the path to run before turning, the i variable keeps the pace of each run and the a variable alternates between 1 and −1, in order to change direction at each half turn.

After the program has found the initial pixel, it paints the white area with its colour (more about colours later; see fig. 11 for the colour codes) and starts the search for the frontiers of each area. To do this, the program runs along the x axis, starting from the previously obtained set of coordinates, until it finds a black pixel. When this occurs, the program analyses the colour of the next pixel. If it's white, then it is most likely a frontier. If it's another colour besides white or black, it's most likely some black pixel or line that isn't part of a frontier (the area outside the frontier is supposed to be a non-painted white area), so the program continues the search forward for another black pixel. After it has found a possible frontier, with a white area on the other side, the program runs along the frontier and registers each of its pixels onto a separate blank image corresponding to that area of the eyespot. This cannot be done by simply finding the next black pixel going up or down, because several black pixels or lines could cross the frontier, mislead the program and cause it to go the wrong way. The best way to avoid this is to give the program a sense of where it is along the line. The line that makes the frontier must be surrounded by a different area on each side. So, to distinguish both sides, the program paints the next area with a different colour and follows some simple rules. The colours used are shown below.

Figure 11: colour scheme used to run through the frontiers. The first four colours are used for each of the areas: white area, black area, yellow area and exterior area. The other three colours are, respectively: debris colour, line colour and initial pixel colour.

As can be seen, there's one colour for each area around each frontier and there are also three extra colours used for the frontier itself: the first to paint black pixels that aren't the next pixel of the frontier, the second to paint the correct frontier pixels themselves and the third to paint only the first found pixel of the frontier. The frontier is completed when the program finds a pixel with the initial pixel colour; it paints this pixel with the line colour and floodfills it with the debris colour, so that in the end the program can paint the whole line back to black, to account for the merged eyespots possibly present in wing images, which share the same external frontier in the dividing lines area. To get the program to follow a specific direction, the next thing it does is to apply the debris colour to the black pixels present below the first frontier pixel. This way the program can only go up, and these pixels will be coloured back to black afterwards so that they too can be analysed. After this, the program analyses the neighbours of the current pixel to find which black pixel is the next one. This is done by analysing the neighbours' neighbours, giving weights to each black-pixel neighbour according to its area-coloured neighbours and setting different viability values for each type of situation (example in fig. 12). The program detects whether each black-pixel neighbour has at least one neighbour with the inner area colour and at least one with the outer area colour, and counts how many inner-coloured and line-coloured neighbours it has. The value given to each neighbour is one of the following: 1 if it's a (viable) black pixel with at least one inner-area-coloured neighbour and one outer-area-coloured neighbour; −2 if it's a (non-viable) black pixel that doesn't comply with the previous rule; −1 if it's the pixel with the initial pixel colour; 2 if it's a white pixel; 0 if it's a pixel with some other colour (area, debris or line colour).
Figure 12: example of the analysis of the current pixel's neighbourhood. In this case we can see that there are three black pixels. After the neighbourhood analysis, it can be observed that only one pixel qualifies to be the next frontier pixel, marked with 1. The other two don't meet all the requirements, as they don't have at least one inner-area-coloured neighbour. Note that there's no way for the program to run backwards, as the previous frontier pixel is line coloured.
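The weighting rules can be sketched as below. This is an illustrative Python version, not the plugin's code; the colour constants are hypothetical stand-ins and the minus signs follow the reconstruction in the text:

```python
# Hypothetical colour codes standing in for the plugin's palette.
BLACK, WHITE, INNER, OUTER, LINE, DEBRIS, INITIAL = range(7)

def score_neighbour(colour, neighbours):
    """Weight one of the 8 neighbours of the current frontier pixel.

    `colour` is the neighbour's own colour; `neighbours` lists the colours
    around it. Values: 1 viable black, -2 non-viable black, -1 initial
    pixel, 2 white, 0 anything else.
    """
    if colour == BLACK:
        # Viable only if flanked by both the inner and the outer area colour.
        has_inner = INNER in neighbours
        has_outer = OUTER in neighbours
        return 1 if (has_inner and has_outer) else -2
    if colour == INITIAL:
        return -1
    if colour == WHITE:
        return 2
    return 0
```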

The number of inner-coloured neighbours and of line-coloured neighbours, as well as the viability value, are set to zero for the central pixel, in order to eliminate it from the neighbours. Following the frontier line isn't as easy as finding the next single black pixel. Several types of error can occur due to lines derived from background noise or even from excess resolution. These lines can cross the frontier or run side by side along it, and holes can appear at its side or even right in the pixel line the program uses to find the frontiers. Figure 13 illustrates most, if not all, of the errors that could possibly occur. In the first image of figure 13, there are three situations that could destabilize the analysis. The topmost one shows that some lines can go from the current frontier to the next, blocking the 4-connected floodfill method from painting all of the exterior area. When the program reaches this point, it doesn't find any viable black neighbour pixel, since the topmost one doesn't have any outer-coloured neighbour pixel and the lower right one doesn't have any inner-coloured neighbour pixel either. So, in this scenario, the program searches for white neighbour pixels and floodfills them with the outer colour, reverts any debris-coloured neighbour pixel to black and analyses the pixel again for viable black neighbour pixels. The next situation is the one where the program encounters two viable black pixels next to each other. In this case, it simply chooses the one in the corner and dismisses the other as debris. The last one, and also the next three images, concerns the first encounter of a possible frontier. In the first image we have the case where there's a line coming from the frontier inwards. Obviously the pixel the program found isn't the one on the frontier.
So it moves forward in search of viable black neighbour pixels that would signal the presence of the frontier. In this particular case, when it reaches the X point, it chooses the viable black pixel in the corner, as discussed above. The next image shows that there can be a closed loop in the path of the pixel line used to search for the frontiers. When this occurs, the program bypasses it simply by verifying whether it's running along the line in a clockwise motion. The two following images represent the cases where there are two or even three viable black neighbour pixels, respectively. In both cases, the program chooses the top leftmost viable black pixel. In the lower three images of figure 13, there are another three possible problems. The first image shows that there could be a closed loop adjacent to the frontier. In this case, the program simply treats it as a case of absence of viable black neighbour pixels, as in the situation previously mentioned, and floodfills it with the outer colour. The second image illustrates the situation where there's a lack of viable black neighbour pixels and there are white pixels to floodfill. In this particular case, there could have been a previous situation with two viable black neighbour pixels, in which the program colorized the left-out viable pixel with the debris colour. When it returns these pixels back to black to analyse again, the program will, in the next analysis of the same pixel, have two viable black neighbour pixels that are not touching each other. In this situation, the program chooses the one with the least number of line-coloured neighbours, which is the one on the right. Finally, the third image shows the situation where the frontier has a line next to it, side by side, and there are no viable black neighbour pixels nor even white neighbour pixels to floodfill. So the program chooses the black pixel with the least number of inner-coloured neighbours, allowing it to proceed along the line until it finds a white pixel to floodfill or a viable black pixel to continue on.

Figure 13: possible situations in which the program could fail to perform correctly. Each picture represents a portion of the frontier line in a given situation. The already travelled path is painted with the line colour (the darkest gray) and the non-viable black pixels are painted with the debris colour (a slightly lighter tone than the line colour but darker than the area colours). The initial pixels are marked with an X and the possible situation is marked by a black arrow. The direction of the analysis is given by the red arrows.

Bearing in mind these possible situations that could complicate the analysis of the lines, the program has to select the next frontier pixel to be analysed. There are several possibilities, and they are set in Boolean statements (see flow chart in fig. 15). The first possibility is the presence of the initial-coloured pixel when the current pixel is not one of the first three to be drawn in the blank temporary image of the corresponding area, in which case the program selects it as the next and final pixel of the frontier. Then, if the program doesn't find any viable black pixel and it's not the first pixel to be drawn, it floodfills all the white neighbour pixels with the outer colour, converts every debris-coloured neighbour pixel back to black, and returns to analyse the current pixel again.
White pixels may be absent, mainly because there could be a line along that part of the frontier or some extra pixel making a block-like point. In that case, the program sets the next pixel as the one with the least number of inner-coloured neighbour pixels. The next possible case is the one where there's only one viable black pixel, which is selected as the next one of the frontier. Subsequently we have the cases of several viable black pixels. In the case where there are two viable black pixels and it's not the first pixel to be drawn, the next frontier pixel is the one in the corner or, if the pixels aren't touching each other, the one with the least number of line-coloured neighbour pixels (see code snippet in fig. 14), unless none of them is located in a corner, in which case it's the one with the biggest number of inner-coloured neighbour pixels. In the case where there are two or even three viable black pixels and it is the first pixel to be drawn, the next pixel is the leftmost one. Finally, there's the possibility of no viable black pixel being present while it's the first pixel to be drawn, which could mean that the frontier is ahead of this point. So the program sets the x coordinate as x+1, paints the lower debris-coloured pixels back to black, if there are any, paints white the previously floodfilled outer area pixel (which coloured the area) and searches again for the next black pixel along the pixel line.

Figure 14: code snippet from the next-pixel Boolean statements that, in the case of two viable black neighbour pixels, selects the pixel in the corner or the one with the least number of line-coloured neighbour pixels. In the main if/else statement, the first branch searches for one of the viable black pixels in the up-right or down-left corners and the second in the up-left or down-right corners. Then, in each of the branches, it searches for the position of the other one and sets the position of the next pixel.

Figure 15: code diagram for finding the next frontier pixel. Blue arrows and rectangles are output values/actions; black arrows and rectangles are Boolean statements.

Now that the program has the position of the next pixel, and if it's not the case where it floodfilled all the white pixels or set x to x+1, it paints all the black pixels with the debris colour, sets the coordinates of the next pixel (see code snippet in fig. 16), puts the frontier pixel in the temporary area image, increments the count of drawn pixels (only for the first four) and paints the next pixel with the line colour.

Figure 16: code snippet to convert the array position of the next pixel to (x, y) coordinates. Each neighbour pixel has its position in the 3×3 neighbourhood array.

To account for possible errors such as stranded lines or even closed line loops that could appear while it searches for a frontier, the program has to find out whether it's running along the line in a counter-clockwise motion, as it should, or not. To do this, when it reaches a point in the line where the y coordinate is the same as the initial one, it compares the x value with the initial one. If it's bigger, then the program is running along the line in a clockwise motion, meaning that it hit one of those errors; the analysis of this frontier must stop and the program must search again for the real frontier along the pixel line. At this point, the program also paints this final pixel black, turns the initial pixel to the line colour, to be floodfilled later with black, and fills the possible hole with white by searching for the closest outer-coloured pixel and floodfilling it. The temporary blank area image is also discarded. If the next pixel is the initial-coloured one, the program stops the analysis of this frontier and starts the search for the next one (if it's not the third), paints the initial pixel with the line colour, floodfills it with the debris colour, records its position to be floodfilled with black later on and puts the temporary blank area image, now with the complete frontier line drawn, into the actual area image which is used to calculate all the data.
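The conversion in fig. 16 can be illustrated with a one-liner, assuming (this is an assumption of the sketch, not confirmed by the text) that the nine neighbours are stored row-major with the central pixel at index 4:

```python
def index_to_offset(i):
    """Convert a 0-8 position in the 3x3 neighbourhood array to a (dx, dy)
    offset from the central pixel (index 4), assuming row-major order:
        0 1 2
        3 4 5
        6 7 8
    """
    return i % 3 - 1, i // 3 - 1
```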
After all the frontiers have been found, the program clears all the areas back to white by floodfilling the white area with the black area colour, the black area with the yellow area colour, then with the exterior area colour and finally with white. Note that all these floodfills are 8-connected, in order to cross the lines into the neighbouring areas. The program also floodfills the areas in each area image with black and, for the wing images, adds each eyespot area to its corresponding area image.
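The role of connectivity in this clearing step can be shown with a small flood-fill sketch. This is plain Python for illustration, not the plugin's code; the dictionary image representation is an assumption of the sketch:

```python
from collections import deque

def flood_fill(img, x, y, new, connect8=True):
    """Flood-fill `img` (a dict of (x, y) -> colour) starting at (x, y).

    An 8-connected fill can step diagonally across a one-pixel line,
    which is why 8-connectivity lets the clearing cross the frontiers
    into the neighbouring areas.
    """
    old = img.get((x, y))
    if old is None or old == new:
        return
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if connect8:
        steps += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    queue = deque([(x, y)])
    while queue:
        cx, cy = queue.popleft()
        if img.get((cx, cy)) != old:
            continue  # out of bounds, already filled, or a different colour
        img[(cx, cy)] = new
        for dx, dy in steps:
            queue.append((cx + dx, cy + dy))
```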

For the data results, the program uses ImageJ's Analyze function to obtain the area, diameters, the angle of the fitted ellipse, the centroid and the roundness for each area. The values for the yellow area and for the black area are calculated by subtracting the previous (inner) area from each. After all this is applied to every eyespot present, the program shows each area image at the bottom of the screen, if the user chose to in the second initial dialog, and then it starts to write the results table with the data chosen by the user. Each area has its corresponding name to identify it amongst the others and, when analysing a wing image, each eyespot also has a label to identify it. These labels are set by default for wings with the normal number of eyespots: for Forewing images with two eyespots, the labels are A and P (anterior and posterior), and for Hindwing images, the names are H1 to H7. For mutants with extra or fewer eyespots, they're just labelled with numbers. The program then proceeds to fill the table with the values, showing only the ones for the selected eyespots. Another type of output is the depiction of the eyespot(s). The program can return an image drawn with the obtained frontiers ("Colored original areas") and an image drawn with the elliptical depiction of the frontiers ("Colored elliptical areas"). To make the first, the program paints the yellow area with yellow and then makes another image that is the result of the calculation between this and the black area image using the bitwise operator AND (&). Then it floodfills the background with a brown colour. To make the elliptical frontiers image, the program draws the ellipse for the external frontier of the yellow area on a temporary blank image, rotating it into position, and adds it to a final blank image which will contain all the eyespots. While the program is doing this, it is also doing the same for the black area.
In the end, it calculates another image using the bitwise operator AND (&) between these two, filling the background with the brown colour. Then the program does the same for the white areas, with the only difference being that the background of the white area image is black and the calculation uses the bitwise operator OR (|) this time. It then calculates the final image using the bitwise operator OR (|) between the previous black, yellow and brown image and the white-on-black image.
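The image calculations above reduce to pixel-wise bitwise operations; a minimal Python sketch (with images as flat lists of pixel values, an assumption of the sketch, and hypothetical function names):

```python
def combine_and(a, b):
    """Pixel-wise bitwise AND of two equal-sized images, the kind of
    operation used to merge the yellow- and black-area images."""
    return [pa & pb for pa, pb in zip(a, b)]

def combine_or(a, b):
    """Pixel-wise bitwise OR, the kind of operation used to overlay the
    white-on-black area image onto the rest."""
    return [pa | pb for pa, pb in zip(a, b)]
```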

Analysis of fluorescence microscopy images

This subsection is used when the Distal-less or Engrailed option is selected in the first initial dialog. It's the one that gives the user the cross sections of the eyespot and its data (area, roundness, diameter and image representation) for fluorescence microscopy eyespot images. Since the image is composed of several dots instead of a continuous object, the program joins the points to obtain a certain continuity between them, without gaping holes or open areas. This is accomplished by applying ImageJ's Maximum filter with the radius set to five, followed by a Gaussian blur with the same radius (fig. 17). The Maximum filter performs a dilation of the image, replacing each pixel with the largest pixel value in that pixel's neighbourhood. The radius value used defines a good neighbourhood size given the distance between the dots: a smaller value would leave gaps between the coloured points and a larger one would be too much, ruining the intensity of the object under study.

Figure 17: left, original Engrailed image; right, original image after applying the Maximum and Gaussian blur filters. Note how the object is continuous while maintaining a certain degree of variable intensity throughout.

First, the program makes three copies of the image to calculate the data for each area (fig. 18). To the first one, it applies the Maximum-and-Gaussian process described above, followed by binarization, to obtain the whole eyespot area. To the second one, it simply binarizes the image, applies the close operator and then the erode operator until only the central area remains. Then it applies the dilate operator the same number of times it eroded, to restore the size of the centre. To the third one, it applies the Maximum-and-Gaussian process, transforms it into a binary image and then applies the outline function. After this, it eliminates the outer frontier of the object to obtain the intermediate frontier found in Engrailed images.
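The dilation step can be sketched as below. This is a pure-Python illustration, not ImageJ's implementation, and it uses a square neighbourhood for simplicity (ImageJ's Maximum filter uses a roughly circular one):

```python
def maximum_filter(img, w, h, radius):
    """Grayscale dilation: each pixel becomes the largest value within
    `radius` pixels (square neighbourhood), joining nearby fluorescent
    dots into a continuous object. `img` is a row-major list of w*h values.
    """
    out = [0] * (w * h)
    for y in range(h):
        for x in range(w):
            best = 0
            # Scan the clipped neighbourhood around (x, y).
            for ny in range(max(0, y - radius), min(h, y + radius + 1)):
                for nx in range(max(0, x - radius), min(w, x + radius + 1)):
                    best = max(best, img[ny * w + nx])
            out[y * w + x] = best
    return out
```

A single bright dot spreads to its neighbours, which is exactly how a radius of five bridges the gaps between dots spaced a few pixels apart.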

Figure 18: top left, original Engrailed image; top right, the whole eyespot area; bottom left, the central area; bottom right, the intermediate frontier.

When the program has these three images, it analyses them with ImageJ's Analyze function, eliminating the smallest areas from the table in the case of the outer and the intermediate frontiers. The inner area has the centre area subtracted from its area value. After this, the program puts the wanted data from each area into the results table, labelled Centre, Band and Whole, respectively, for the centre, the band around the centre (if it's an Engrailed image) and the whole eyespot area. For the cross sections, the program acquires the brightness channel of the original image and applies the Maximum-and-Gaussian process to it. Because ImageJ cannot get intensity cross sections unless they're horizontal or vertical in the image, the image has to be rotated. To avoid losing information from the image, such as the corners, the image is copied and centred, by the centroid of the central area, onto another blank image with the diagonal of the original image as its width and height. To make the average cross section, the program calculates an average image between eight copies of the same image, each rotated by 45 degrees from the previous one (fig. 20). Since ImageJ can only make image calculations between two images at a time, the global average for n images had to be built incrementally with the equation below, after averaging the first two, in which Av(n−1) is the previous average and I(n) the n-th image:

Av(n) = ((n − 1) · Av(n−1) + I(n)) / n

To make the widest cross section and the shortest cross section, the program rotates the images so that the widest or the shortest diameter, respectively, is in the horizontal position.

Figure 20: left, treated image; right, average image obtained by rotating the left one.

After all these rotations and averages, the program sets the horizontal cross section as long as 5/4 of the maximum diameter of the outer frontier and as wide as the maximum diameter of the inner frontier. The result is a graphical representation of the pixel intensities (ranging from 0 to 255) across this section. It is also possible to obtain an image representation of the eyespot using elliptical representations of each frontier. The method used is similar to the one previously described for normal eyespot images, with some minor differences. In the end, the program returns the results table if the user selected any data as a required output.
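The incremental averaging can be sketched directly from the formula. This is a Python illustration with a hypothetical function name, not the plugin's code:

```python
def running_average(images):
    """Incrementally average equal-sized images (lists of floats), two at
    a time, as ImageJ combines only a pair of images per operation:
        Av(n) = ((n - 1) * Av(n-1) + I(n)) / n
    """
    avg = list(images[0])
    for n, img in enumerate(images[1:], start=2):
        avg = [((n - 1) * a + p) / n for a, p in zip(avg, img)]
    return avg
```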

Auxiliary methods

The auxiliary methods contain the previously mentioned findwhite and macrowork methods, as well as the distance method, which calculates the distance between two points to assist the program in its calculations. They also contain the dialog listener method that changes the scale input on the second initial dialog.

Results

This section shows some output examples using single eyespot, wing and fluorescence microscopy images.

Single eyespot image (wildtype)

Figure 21: a) original single eyespot image. Image representations of the eyespot: b) original frontiers; c) elliptical representation of the frontiers.

Figure 22: results table. Top panel: data corresponding to area, roundness, diameters and the minor and major widths of the black and yellow areas. Bottom panel: data corresponding to the area ratios (ratio of a smaller area to a larger area).

Hindwing images

Wildtype hindwing

Figure 23: wildtype hindwing image analysis. Left, original image (wildtype); centre, original frontiers; right, elliptical representations of the frontiers.

Table I: results table with the data for each area of each eyespot.

Bigeye mutant hindwing

Figure 24: left, original image (bigeye mutant); centre, original frontiers; right, elliptical representations of the frontiers.

Table II: results table with the data for each area of each eyespot.

Frodo mutant hindwing

Figure 26: left, original image (frodo mutant); centre, original frontiers; right, elliptical representations of the frontiers.

Table III: results table with the data for each area of each eyespot.

Characterization of mutant phenotypes

These results can also be used to compare each type of wing, as well as each eyespot on the same wing, in terms of areas and their ratios. The following graphics are some examples of possible data comparisons.

Figure 27: pie charts representing the distribution of each area for each eyespot (h1 to h7) of the three types of wings (wildtype, bigeye and frodo).

Figure 28: white area / eyespot area ratio of each eyespot for the three types of wing.

Figure 29: black area / eyespot area ratio of each eyespot for the three types of wing.

Figure 30: yellow area / eyespot area ratio of each eyespot for the three types of wing.

In these graphics, we have the ratios between each area and the whole eyespot, for each eyespot of the three types of wing: wildtype, bigeye and frodo. As can be observed in the first plot, the white areas are the most difficult to use for comparison, due to their small size, which makes their analysis more error-prone: there's a larger variation between the eyespots than for the other areas. In figures 29 and 30, the values agree with each other. It can be noted that the wildtype individual has values that are more or less constant between eyespots, meaning that all the eyespots are equal except for their global size, which points to a common mechanism governing their formation. For the mutant individuals, it can be observed that there's a wave-like variation between their eyespots, but in a different way for each area. In the case of the black area, the ratio decreases from the beginning to the second or third eyespot, increases towards the fifth and then decreases again; in the case of the yellow area, the variation is the opposite. This is a very interesting observation, indicating that the mutation doesn't affect all the eyespots on the wing equally; instead, some suffer its effect more than others. The fact that the variations of the two areas are opposite suggests that the mechanisms involved in the formation of each area are linked in such a way that when one varies in a certain direction, e.g. increasing in size, the other varies in the opposite direction. It can also be seen that the bigeye mutant is the closest to the wildtype in terms of the proportions between the areas of the eyespots, contrary to the frodo mutant, which has smaller black areas and bigger yellow ones. All these data, and more, can clearly provide a deeper insight into the mechanisms behind the formation of the eyespots, whether individually or in unison on a wing, as well as open new paths for their study.

Engrailed image

Wildtype

Figure 31: left, original Engrailed image; right, image representations of the areas.

Figure 32: a) average cross section; b) widest cross section; c) shortest cross section.

Mutant

Figure 33: left, original Engrailed image (mutant); right, image representations of the areas.

Figure 34: a) average cross section; b) widest cross section; c) shortest cross section.

Table IV: results table with the Engrailed image (wildtype) data, with columns Label, Area, Roundness, MinorAxis and MajorAxis, and rows for the Engrailed Centre area, Engrailed Band area and Engrailed Whole area.

Table V: results table with the Engrailed image (mutant) data, with columns Label, Area, Roundness, MinorAxis and MajorAxis, and rows for the Distal-less Centre area and Distal-less Whole area.

As can be observed, the engrailed mutant doesn't have a doughnut-shaped outer area. Instead, the whole eyespot expresses the protein, and to analyse this image the best option to use is the Distal-less image type, even though it's an Engrailed image, i.e. it's similar to a Distal-less expression image. This option is used because the Distal-less method analyses the eyespot assuming that the image has only the central area and the whole eyespot area, whereas the Engrailed method assumes that the image has a doughnut-shaped outer area besides the central one. An alternative that could be implemented in the future is to have only one method for the fluorescence microscopy images and ask the user whether the eyespot has a ring-shaped outer area or whether the protein is expressed in the whole eyespot. This way, the mutants could be analysed by the correct method instead of using the method of the other type of expression. Looking at the cross sections of the mutant, it can be noted that there's a small decrease of intensity near the central area, indicating that although it's a mutant, the effect does not cover the whole eyespot, a characteristic that can be measured using the values obtained from these plots. It's also possible to get the values and positions of the highest intensity peaks, which could be used to measure the distance between them and compare it with other individuals. These points can be used as well to make a quantitative analysis of the intensity variations.
Using the intensity values, it can be calculated that the highest-density central point is approximately 12.6 times more intense than the average lowest point inside the eyespot and 1.2 times more intense than the average highest point of the ring area in the case of the wildtype, and 1.46 and 1.25 times, respectively, for the mutant. In the latter case, these numbers show that it is possible to quantify the slight decrease of intensity near the centre of the eyespot, the average highest intensity of the surrounding area being 1.17 times higher than the depression. Another type of analysis that could be made with the data obtained from these images is a comparison with the corresponding formed eyespot. The problem is that the wing is damaged by the fluorescence protocol and there is no way to obtain images of the final formed wing with the eyespot intact. Instead, one possibility would be to perform the fluorescence microscopy analysis on one wing and allow the individual to survive in order to observe the other wing with the corresponding eyespot. Due to the symmetry of individuals, i.e. each wing being a mirror image of the other, this would give a good substitute for the wing that was actually labelled. This type of analysis would be most useful for studying the formation of the eyespots, giving an insight into the cells that expressed a certain protein and the cells that give origin to a scale in a particular area. For a better analysis of these types of images, good pre-treatment is required, such as removal of the background noise, so that the program doesn't confuse this noise with actual protein expression. Even though removing this noise helps the analysis, it also causes a problem: the cross-section plots become very small, i.e. the intensity decreases along with the noise removal. Ultimately, the best images to use are those with well-defined protein expression; otherwise, low intensity combined with background noise can leave the image too ill-defined for the program to obtain proper intensity plots.
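To illustrate the trade-off just described, here is a small sketch on synthetic data (not the plugin's actual code) showing how subtracting an estimated background level lowers the peaks of the resulting cross-section plot:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 301)
signal = 200 * np.exp(-(x / 0.5) ** 2)        # hypothetical expression peak
background = 40 + rng.normal(0, 5, x.size)    # noisy background level
profile = signal + background

# Naive noise removal: estimate the background from the flanks of the
# profile, subtract it, and clip negatives. The peak of the cleaned plot
# is lower than the raw one, which is why the intensity "decreases with
# the noise removal" in the cross-section plots.
estimate = np.median(profile[np.abs(x) > 2])
cleaned = np.clip(profile - estimate, 0, None)

print(profile.max() > cleaned.max())  # True
```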


Discussion
The approach of acquiring the eyespot frontiers proved to be the best way to separate each of their areas. Although it can have some flaws and be prone to certain errors, the plugin is fairly robust to a certain amount of noise in the image and provides the user with detailed, useful data. In order to provide good, meaningful data for research, mainly to make a connection between what is being changed and what is formed, the values of area, diameters and roundness can be obtained. Roundness is preferred to circularity because it is a more significant value for this study: circularity depends on the perimeter of the particle (4π*area/perimeter^2), whereas roundness depends only on the major axis (4*area/(π*major_axis^2)). By not depending on the perimeter, roundness gives a more correct value for how close the particle is to a perfect circle, while circularity instead measures how close the frontier of the particle is to a smooth regular line. In the following subchapters, I discuss the initial, unsuccessful method and the current, successful method used to address the biological problem at hand (the quantitative analysis of the eyespot in certain types of image) in terms of effectiveness and reliability, as well as the problems and difficulties of their construction. After the latter, I discuss the prospects and possible advances for the future of the program.
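The two shape measures can be written down directly. These are ImageJ's standard definitions; the worked values below use an ideal circle and ellipse rather than real eyespot data.

```python
import math

def circularity(area, perimeter):
    # 4*pi*area / perimeter^2: 1.0 for a perfect circle, lower for rough edges
    return 4 * math.pi * area / perimeter ** 2

def roundness(area, major_axis):
    # 4*area / (pi * major_axis^2): 1.0 for a circle, lower for elongation
    return 4 * area / (math.pi * major_axis ** 2)

# Perfect circle of radius 5: both measures are 1.
r = 5.0
print(round(circularity(math.pi * r**2, 2 * math.pi * r), 6))  # 1.0
print(round(roundness(math.pi * r**2, 2 * r), 6))              # 1.0

# Ellipse with semi-axes 5 and 3 (Ramanujan's perimeter approximation):
# roundness reflects the elongation directly (b/a = 0.6), while circularity
# stays relatively high because the outline is still perfectly smooth.
a, b = 5.0, 3.0
h = ((a - b) / (a + b)) ** 2
perim = math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))
print(round(roundness(math.pi * a * b, 2 * a), 3))   # 0.6
print(round(circularity(math.pi * a * b, perim), 3))
```

Note that an irregular, scale-level jagged frontier inflates the perimeter and hence deflates circularity while leaving roundness unaffected, which is the reason given above for preferring roundness.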

Initial approach
Prior to the use of the current method, which obtains the frontiers between each different area of the eyespot, another approach was tried. It consisted in obtaining four or even five colours that would identify each coloured area: three for the eyespot and one or possibly two for the rest of the wing. Basically, the process consisted in obtaining the value of each pixel of the image and plotting them in a three-dimensional RGB space. The program would then search for four or five clouds of points: one in the white section, another in the black, another in the yellow and one or two in the brown section. For each cloud, the program would find its centre and define a neighbourhood around it; the points inside that neighbourhood would correspond to the area of that colour, thus allowing the program to know which pixel belongs to which area. After obtaining the values for each pixel, they were plotted as shown in figure 35. As can be observed, there are no rounded clouds of points as expected; instead, the points cluster around certain lines, and, to make things more complicated, the yellow portion is very similar to the brown one, practically merged with it and very difficult to separate either manually or automatically. Therefore, this idea was abandoned in favour of the one the plugin is actually based on, as the separation of the coloured areas by their colour alone did not work as expected. Moreover, it would not work well on images with too much light, which makes the yellow area resemble the brown of the rest of the wing.
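For reference, the abandoned idea can be sketched as a small clustering of pixel colours in RGB space. The pixel data here is synthetic (four well-separated colour groups), which is precisely the situation that real wing images failed to provide:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic "pixels" for the four hoped-for colour groups (black, white,
# yellow, brown); a real image would be flattened to an N x 3 array instead.
groups = [(20, 20, 20), (230, 230, 230), (200, 180, 60), (120, 90, 50)]
pixels = np.vstack([rng.normal(c, 8, (200, 3)) for c in groups])

def kmeans(points, k, iters=20):
    # Minimal k-means, seeded here with one point from each synthetic group
    # for simplicity; a real image offers no such convenient seeding.
    centres = points[:: len(points) // k][:k].copy()
    for _ in range(iters):
        labels = np.argmin(((points[:, None] - centres) ** 2).sum(-1), axis=1)
        centres = np.array([points[labels == j].mean(0) if np.any(labels == j)
                            else centres[j] for j in range(k)])
    return centres, labels

centres, labels = kmeans(pixels, 4)
print(np.sort(centres[:, 0]).round(0))  # recovered cluster centres (red channel)
```

In real images the clouds smear into lines (figure 35), so the cluster centres stop being meaningful, which is why this route was dropped.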

Figure 35: top left, drawn model of two eyespots; top right, real image of an eyespot; bottom, RGB space with every pixel represented by its colour. As can be observed in the left graphic, four clouds of points were expected: black, brown, yellow and white. In the graphic on the right, it can be observed that the clouds take the form of lines and that, for some images, the yellow and brown areas are not distinguishable, even though we can, more or less, distinguish them visually in this eyespot image.

Current approach
Although the method used seems very straightforward and simple, it is actually quite troublesome considering all the possible errors that can occur while processing the frontiers. All the foreseeable errors were taken into account, which makes the program run properly, but if the image quality isn't right, i.e. the resolution is so high that a person can see every scale individually, or the lighting is poor, the program will respectively mistake the outlines of the scales for frontiers or fail to see the frontiers at all. The current approach of getting each area of the eyespot works well for both single-eyespot and whole-wing images, but errors can still occur in some types of images. In the case of a single-eyespot image, if the user crops an eyespot that is merged with others, the program will not separate its yellow area from the others. Instead, the user is advised to cut the entire portion containing the eyespot in question and its merged neighbours and select the Forewing image type. This way, the program will isolate each eyespot and the user only has to select the eyespot to be analysed. Furthermore, even if the user isolates the eyespot poorly by cutting away part of the yellow region while cropping the image, the program has a safety feature for this: it frames the image, as explained before, making the program sturdy against this kind of error. Whole-wing images don't have these problems; there, the biggest problem is the quality of the image. One way to help ensure the image quality is adequate would have been to make the program analyse the image and, for an excessively high-quality image, decrease its size, doing the opposite for a poor-quality one. This was attempted, but it did not solve the problem for excessively high-quality images, only for low-quality ones. In this latter case, the user can look at the outlined image and choose to enlarge it.
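The enlargement step can be as simple as a nearest-neighbour upscale. This sketch assumes an integer scale factor chosen by the user; the `enlarge` helper is illustrative, not the plugin's actual API:

```python
import numpy as np

def enlarge(img, factor):
    # Nearest-neighbour upscaling: repeat every pixel factor x factor times.
    return np.kron(img, np.ones((factor, factor), dtype=img.dtype))

small = np.array([[0, 1],
                  [2, 3]], dtype=np.uint8)
big = enlarge(small, 3)
print(big.shape)  # (6, 6)
print(big[0])     # first row: three 0s followed by three 1s
```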
The program could also automate this process, but there is no reference for the visual quality of the image, i.e. whether the lines are closed and separated from each other, on which to base a decision to increase its size. If we take into account the distance between the first and the last eyespot, the program could possibly have a point of reference for this, but it would make the program too heavy and slow, and thus inefficient in terms of time. The selection of each eyespot present in a wing image, in order to separate them, could have been made in two different ways: automatically or manually. The former was chosen to make the program more precise and fast in the case of very good images, with the eyespot centres well defined, even though the process cannot be fully automated for lower-quality images. As explained before, if the white areas found are not the correct ones, or there are more white spots than actual eyespots, the user has to select the correct eyespots, and the eyespot selection thus becomes manual. A completely manual selection could have been implemented, with the program searching for the eyespot in the vicinity of each selected point, but it would be time consuming across the many good-quality images to be analysed, so the pseudo-automatic method was chosen, which helps the user in some if not most cases.

Another approach was attempted, consisting of an algorithm that would search the image for circular objects of a certain radius and count them. If not enough were found, it would increase the radius and analyse the image again, repeating this until all the eyespots were found. Although this seems a more precise method, it turned out to be quite inefficient in terms of time: each iteration took a long time to run, and the whole process took several minutes even for simple small test images. In the future, a more efficient algorithm could be developed to find even eyespots without the central white area. After the centre of each eyespot is found, the program walks along a virtual line between each pair of eyespots, starting at each one, searching for the frontier between the black and yellow areas. These distances allow the program to find the point between the eyespots through which the dividing line is going to pass, at an angle to the virtual line. As can be observed, this point is not equidistant from the two frontiers. Instead, it lies near the frontier dividing the yellow and black areas of the biggest eyespot, and a ratio is calculated to find it. Because the eyespots share the same developmental basis, it would be expected that this condition would apply to all of them, but it turns out that the lines between eyespots five, six and seven (and eight, in a mutant with an extra lower eyespot) resemble the veins separating them more closely when this point is equidistant from the two black/yellow frontiers. It can also be observed that if the dividing lines are made perpendicular to their corresponding virtual line between the eyespots, they do not mimic the position of the veins in that terminal area of the wing and end up dividing the eyespots incorrectly, cutting off parts that should belong to a neighbour or giving an eyespot more yellow area than it should have.
So, since the wing radiates from a point that is not central to it and the distance of each eyespot to this point increases the lower the eyespot sits on the wing, a new approach was taken. Instead of making each dividing line perpendicular to its corresponding virtual line, the program makes it perpendicular to a virtual line between the upper eyespot and another one further down. After some testing and experimenting, the following pairing was used: first with fifth, second with fifth, third with sixth, fourth with seventh, and sixth with seventh for the last lines. As can be observed in figure 36, the dividing lines obtained with this method resemble the real veins in the area between each eyespot more closely than those of the other method, making for a more accurate division.
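The placement of the dividing point can be sketched as follows. The exact ratio the plugin uses is not reproduced here, so the proportional split below is an assumption made for illustration, and the `dividing_point` helper is hypothetical:

```python
import math

def dividing_point(c1, c2, d1, d2):
    # c1, c2: eyespot centres; d1, d2: distances from each centre to its
    # black/yellow frontier along the virtual line joining the centres.
    # ASSUMED rule: split the yellow gap between the two frontiers so the
    # point lands closer to the bigger eyespot's frontier, as described.
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    dist = math.hypot(dx, dy)
    ux, uy = dx / dist, dy / dist     # unit vector from c1 towards c2
    gap = dist - d1 - d2              # width of the shared yellow region
    t = d1 + gap * d2 / (d1 + d2)     # ratio-based split of the gap
    return (c1[0] + t * ux, c1[1] + t * uy)

# Two centres 100 px apart; the first eyespot is bigger (frontier at 30 px
# versus 15 px), so the point falls nearer the first eyespot's frontier.
p = dividing_point((0, 0), (100, 0), 30, 15)
print(round(p[0], 1), round(p[1], 1))  # 48.3 0.0
```

The dividing line itself then passes through this point, at the angle given by the chosen pairing of virtual lines.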

Figure 36: centre, normal hindwing image; left, outlined image with the dividing lines using the alternative method; right, outlined image with the dividing lines without the alternative method. Although these lines seem to divide the eyespots at the correct boundary, the division between the first and the second eyespot seems better with the normal method, perpendicular to the virtual line between both eyespots. Even though the other line mimics the vein better, it seems that the first eyespot's yellow area bled through into the second eyespot's zone, beyond the supposedly limiting vein. The current thought is that the veins separate the eyespots and limit their development, but it seems that the eyespots can trespass the veins if they are bigger than their supposedly restricted area. This is only observed between the first and the second eyespot because, in the other cases where the yellow areas merge, the eyespots face each other with the vein right between them, whereas here the eyespots are not directly in front of each other and do not occupy the same position relative to the vein. Further experimental studies have to be made to test this hypothesis and, if it is confirmed, the program will be modified to accommodate this situation. When analysing the individual eyespots present in a wing image, even if the user only wants the results for some particular eyespots, the program analyses all of them, to simplify the way it returns the values in a table and to enable the creation of the


All textures produced with Texture Maker. Not Applicable. Beginner. Tutorial for Texture Maker 2.8 or above. Note:- Texture Maker is a texture creation tool by Tobias Reichert. For further product information please visit the official site at http://www.texturemaker.com

More information

Digital Image Processing. Prof. P. K. Biswas. Department of Electronic & Electrical Communication Engineering

Digital Image Processing. Prof. P. K. Biswas. Department of Electronic & Electrical Communication Engineering Digital Image Processing Prof. P. K. Biswas Department of Electronic & Electrical Communication Engineering Indian Institute of Technology, Kharagpur Lecture - 21 Image Enhancement Frequency Domain Processing

More information

I can solve simultaneous equations algebraically, where one is quadratic and one is linear.

I can solve simultaneous equations algebraically, where one is quadratic and one is linear. A* I can manipulate algebraic fractions. I can use the equation of a circle. simultaneous equations algebraically, where one is quadratic and one is linear. I can transform graphs, including trig graphs.

More information

A Simple Automated Void Defect Detection for Poor Contrast X-ray Images of BGA

A Simple Automated Void Defect Detection for Poor Contrast X-ray Images of BGA Proceedings of the 3rd International Conference on Industrial Application Engineering 2015 A Simple Automated Void Defect Detection for Poor Contrast X-ray Images of BGA Somchai Nuanprasert a,*, Sueki

More information

Crop Counting and Metrics Tutorial

Crop Counting and Metrics Tutorial Crop Counting and Metrics Tutorial The ENVI Crop Science platform contains remote sensing analytic tools for precision agriculture and agronomy. In this tutorial you will go through a typical workflow

More information

JASCO CANVAS PROGRAM OPERATION MANUAL

JASCO CANVAS PROGRAM OPERATION MANUAL JASCO CANVAS PROGRAM OPERATION MANUAL P/N: 0302-1840A April 1999 Contents 1. What is JASCO Canvas?...1 1.1 Features...1 1.2 About this Manual...1 2. Installation...1 3. Operating Procedure - Tutorial...2

More information

UV Mapping to avoid texture flaws and enable proper shading

UV Mapping to avoid texture flaws and enable proper shading UV Mapping to avoid texture flaws and enable proper shading Foreword: Throughout this tutorial I am going to be using Maya s built in UV Mapping utility, which I am going to base my projections on individual

More information

Analysis of Image and Video Using Color, Texture and Shape Features for Object Identification

Analysis of Image and Video Using Color, Texture and Shape Features for Object Identification IOSR Journal of Computer Engineering (IOSR-JCE) e-issn: 2278-0661,p-ISSN: 2278-8727, Volume 16, Issue 6, Ver. VI (Nov Dec. 2014), PP 29-33 Analysis of Image and Video Using Color, Texture and Shape Features

More information

Appendix I. TACTICS Toolbox v3.x. Interactive MATLAB Platform For Bioimaging informatics. User Guide TRACKING MODULE

Appendix I. TACTICS Toolbox v3.x. Interactive MATLAB Platform For Bioimaging informatics. User Guide TRACKING MODULE TACTICS Toolbox v3.x Interactive MATLAB Platform For Bioimaging informatics User Guide TRACKING MODULE -17- Cell Tracking Module 1 (user interface) Once the images were successfully segmented, the next

More information

I can solve simultaneous equations algebraically and graphically. I can solve inequalities algebraically and graphically.

I can solve simultaneous equations algebraically and graphically. I can solve inequalities algebraically and graphically. B I can factorise and expand complex expressions. I can factorise Quadratics I can recognise the Difference of Two Squares (D.O.T.S) simultaneous equations algebraically and graphically. inequalities algebraically

More information

9 Using Appearance Attributes, Styles, and Effects

9 Using Appearance Attributes, Styles, and Effects 9 Using Appearance Attributes, Styles, and Effects You can alter the look of an object without changing its structure using appearance attributes fills, strokes, effects, transparency, blending modes,

More information

A program for representing and simulating population genetic phenomena

A program for representing and simulating population genetic phenomena Genetics and Molecular Biology, 23, 1, A 53-60 program (2000) for representing and simulating population genetic phenomena 53 METHODOLOGY A program for representing and simulating population genetic phenomena

More information

FEATURE SPACE UNIDIMENSIONAL PROJECTIONS FOR SCATTERPLOTS

FEATURE SPACE UNIDIMENSIONAL PROJECTIONS FOR SCATTERPLOTS 58 FEATURE SPACE UNIDIMENSIONAL PROJECTIONS FOR SCATTERPLOTS PROJEÇÕES DE ESPAÇOS DE CARACTERÍSTICAS UNIDIMENSIONAIS PARA GRÁFICOS DE DISPERSÃO Danilo Medeiros Eler 1 ; Alex C. de Almeida 2 ; Jaqueline

More information

Importing and processing a DGGE gel image

Importing and processing a DGGE gel image BioNumerics Tutorial: Importing and processing a DGGE gel image 1 Aim Comprehensive tools for the processing of electrophoresis fingerprints, both from slab gels and capillary sequencers are incorporated

More information

Planar Graphs and Surfaces. Graphs 2 1/58

Planar Graphs and Surfaces. Graphs 2 1/58 Planar Graphs and Surfaces Graphs 2 1/58 Last time we discussed the Four Color Theorem, which says that any map can be colored with at most 4 colors and not have two regions that share a border having

More information

SETTING UP A. chapter

SETTING UP A. chapter 1-4283-1960-3_03_Rev2.qxd 5/18/07 8:24 PM Page 1 chapter 3 SETTING UP A DOCUMENT 1. Create a new document. 2. Create master pages. 3. Apply master pages to document pages. 4. Place text and thread text.

More information

1 Background and Introduction 2. 2 Assessment 2

1 Background and Introduction 2. 2 Assessment 2 Luleå University of Technology Matthew Thurley Last revision: October 27, 2011 Industrial Image Analysis E0005E Product Development Phase 4 Binary Morphological Image Processing Contents 1 Background and

More information

Rastreamento de objetos do vpc

Rastreamento de objetos do vpc Rastreamento de objetos do vpc Índice Introdução Rastreamento de objetos do vpc Diagrama de Rede Comandos show da linha de base Introdução Este documento descreve o Rastreamento de objetos do vpc, porque

More information

Texas School for the Blind and Visually Impaired. Using The Drawing Tools in Microsoft Word 2007 for Tactile Graphic Production

Texas School for the Blind and Visually Impaired. Using The Drawing Tools in Microsoft Word 2007 for Tactile Graphic Production Texas School for the Blind and Visually Impaired Outreach Programs 1100 West 45 th Street Austin, Texas, 78756 Using The Drawing Tools in Microsoft Word 2007 for Tactile Graphic Production Developed by:

More information

Chapter 1. Getting to Know Illustrator

Chapter 1. Getting to Know Illustrator Chapter 1 Getting to Know Illustrator Exploring the Illustrator Workspace The arrangement of windows and panels that you see on your monitor is called the workspace. The Illustrator workspace features

More information

Slide Set 5. for ENEL 353 Fall Steve Norman, PhD, PEng. Electrical & Computer Engineering Schulich School of Engineering University of Calgary

Slide Set 5. for ENEL 353 Fall Steve Norman, PhD, PEng. Electrical & Computer Engineering Schulich School of Engineering University of Calgary Slide Set 5 for ENEL 353 Fall 207 Steve Norman, PhD, PEng Electrical & Computer Engineering Schulich School of Engineering University of Calgary Fall Term, 207 SN s ENEL 353 Fall 207 Slide Set 5 slide

More information

TotalLab TL100 Quick Start

TotalLab TL100 Quick Start TotalLab TL100 Quick Start Contents of thetl100 Quick Start Introduction to TL100 and Installation Instructions The Control Centre Getting Started The TL100 Interface 1D Gel Analysis Array Analysis Colony

More information

Robust line segmentation for handwritten documents

Robust line segmentation for handwritten documents Robust line segmentation for handwritten documents Kamal Kuzhinjedathu, Harish Srinivasan and Sargur Srihari Center of Excellence for Document Analysis and Recognition (CEDAR) University at Buffalo, State

More information

IDENTIFYING OPTICAL TRAP

IDENTIFYING OPTICAL TRAP IDENTIFYING OPTICAL TRAP Yulwon Cho, Yuxin Zheng 12/16/2011 1. BACKGROUND AND MOTIVATION Optical trapping (also called optical tweezer) is widely used in studying a variety of biological systems in recent

More information

4 TRANSFORMING OBJECTS

4 TRANSFORMING OBJECTS 4 TRANSFORMING OBJECTS Lesson overview In this lesson, you ll learn how to do the following: Add, edit, rename, and reorder artboards in an existing document. Navigate artboards. Select individual objects,

More information

A4.8 Fitting relative potencies and the Schild equation

A4.8 Fitting relative potencies and the Schild equation A4.8 Fitting relative potencies and the Schild equation A4.8.1. Constraining fits to sets of curves It is often necessary to deal with more than one curve at a time. Typical examples are (1) sets of parallel

More information

Drawing shapes and lines

Drawing shapes and lines Fine F Fi i Handmade H d d Ch Chocolates l Hours Mon Sat 10am 6pm In this demonstration of Adobe Illustrator CS6, you will be introduced to new and exciting application features, like gradients on a stroke

More information

On the Web sun.com/aboutsun/comm_invest STAROFFICE 8 DRAW

On the Web sun.com/aboutsun/comm_invest STAROFFICE 8 DRAW STAROFFICE 8 DRAW Graphics They say a picture is worth a thousand words. Pictures are often used along with our words for good reason. They help communicate our thoughts. They give extra information that

More information

Using Microsoft Excel

Using Microsoft Excel Using Microsoft Excel Introduction This handout briefly outlines most of the basic uses and functions of Excel that we will be using in this course. Although Excel may be used for performing statistical

More information

Experiments with Edge Detection using One-dimensional Surface Fitting

Experiments with Edge Detection using One-dimensional Surface Fitting Experiments with Edge Detection using One-dimensional Surface Fitting Gabor Terei, Jorge Luis Nunes e Silva Brito The Ohio State University, Department of Geodetic Science and Surveying 1958 Neil Avenue,

More information

4.5 VISIBLE SURFACE DETECTION METHODES

4.5 VISIBLE SURFACE DETECTION METHODES 4.5 VISIBLE SURFACE DETECTION METHODES A major consideration in the generation of realistic graphics displays is identifying those parts of a scene that are visible from a chosen viewing position. There

More information

AnalySIS Tutorial part 2

AnalySIS Tutorial part 2 AnalySIS Tutorial part 2 Sveinung Lillehaug Neural Systems and Graphics Computing Laboratory Department of Anatomy University of Oslo N-0317 Oslo Norway www.nesys.uio.no Using AnalySIS to automatically

More information

The Allen Human Brain Atlas offers three types of searches to allow a user to: (1) obtain gene expression data for specific genes (or probes) of

The Allen Human Brain Atlas offers three types of searches to allow a user to: (1) obtain gene expression data for specific genes (or probes) of Microarray Data MICROARRAY DATA Gene Search Boolean Syntax Differential Search Mouse Differential Search Search Results Gene Classification Correlative Search Download Search Results Data Visualization

More information

Motic Images Plus 3.0 ML Software. Windows OS User Manual

Motic Images Plus 3.0 ML Software. Windows OS User Manual Motic Images Plus 3.0 ML Software Windows OS User Manual Motic Images Plus 3.0 ML Software Windows OS User Manual CONTENTS (Linked) Introduction 05 Menus and tools 05 File 06 New 06 Open 07 Save 07 Save

More information

What is the Box Model?

What is the Box Model? CSS Box Model What is the Box Model? The box model is a tool we use to understand how our content will be displayed on a web page. Each HTML element appearing on our page takes up a "box" or "container"

More information

CIRCULAR MOIRÉ PATTERNS IN 3D COMPUTER VISION APPLICATIONS

CIRCULAR MOIRÉ PATTERNS IN 3D COMPUTER VISION APPLICATIONS CIRCULAR MOIRÉ PATTERNS IN 3D COMPUTER VISION APPLICATIONS Setiawan Hadi Mathematics Department, Universitas Padjadjaran e-mail : shadi@unpad.ac.id Abstract Geometric patterns generated by superimposing

More information