Semi-automated annotation of histology images


Linköping University
Department of Computer Science
Master thesis, 30 ECTS | Computer Science
2016 | LIU-IDA/LITH-EX-A--16/030--SE

Semi-automated annotation of histology images
Development and evaluation of a user friendly toolbox

Semi-automatisk uppmärkning av histologibilder
Utveckling och utvärdering av en användarvänlig verktygslåda

Alexander Sanner
Fredrik Petré

Supervisor: Tommy Färnqvist, Dept. of Computer and Information Science, Linköping University
Examiner: Ola Leifler, Dept. of Computer and Information Science, Linköping University

Linköpings universitet, Linköping

Copyright

The publishers will keep this document online on the Internet, or its possible replacement, for a period of 25 years starting from the date of publication barring exceptional circumstances. The online availability of the document implies permanent permission for anyone to read, to download, or to print out single copies for his/her own use and to use it unchanged for non-commercial research and educational purposes. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional upon the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility. According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement. For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its home page.

© Alexander Sanner, Fredrik Petré

Abstract

Image segmentation has many areas of application, one of them being medical science. When segmenting an image there are many automatic approaches, which generally do not let the user change the outcome. This is a problem if the segmentation is badly done. On the other hand there is the manual approach, which is slow and cumbersome since it relies heavily on the user's effort. This thesis presents a semi-automated approach that combines user interaction with computer-assisted segmentation, realized in a hi-fi prototype. The prototype made use of SLIC superpixels, which the user could combine with interactions to create segments. The prototype was iteratively developed and tested to ensure high usability and user satisfaction. The final prototype was also tested quantitatively to examine whether the process of segmenting images had been made more efficient compared to a manual approach. It was found that the users achieved better results with the prototype than with the manual approach when the same amount of time was spent segmenting. Although the users could not segment images faster with the prototype than with the manual process, it is believed that the process could be made more efficient with superpixels that follow the natural borders of the image more closely.


Abbreviations

LBP    Local Binary Pattern
HSV    Hue Saturation Value
RGB    Red Green Blue
SLIC   Simple Linear Iterative Clustering
UX     User Experience
GUI    Graphical User Interface
lo-fi  Low Fidelity
hi-fi  High Fidelity


Acknowledgments

We would like to thank the company Sectra AB for providing a fun and challenging project. We would also like to thank our two supervisors Jesper Molin and Martin Svenson for their technical and methodological guidance. We would like to thank the pathologists at Linköping University Hospital for setting time aside and providing great feedback that improved our prototype. Last but not least we would like to thank our supervisor at the university, Tommy Färnqvist, and our examiner, Ola Leifler, for providing feedback that improved the thesis.

Contents

Abstract
Abbreviations
Acknowledgments
Contents
List of Figures
List of Tables

1 Introduction
  1.1 Motivation
  1.2 Aim
  1.3 Research questions
  1.4 Delimitations

2 Theory
  2.1 Image segmentation
  2.2 Usability development
  2.3 User experience & Usability Evaluation
  2.4 Related Work

3 Method
  3.1 Prestudy
  3.2 Requirements analysis
  3.3 Lo-Fi Prototype Development
  3.4 Lo-Fi Prototype Evaluation
  3.5 Lo-Fi Prototype Refinement
  3.6 Refined Lo-Fi Prototype Evaluation
  3.7 Hi-Fi Prototype Development
  3.8 Quantitative testing of Hi-Fi prototype
  3.9 Qualitative testing of Hi-Fi prototype
  3.10 Hi-Fi Prototype Evaluation

4 Results
  4.1 Initial interview
  4.2 Requirements analysis
  4.3 Iteration I
  4.4 Iteration II
  4.5 Iteration III

5 Discussion

  5.1 Results
  5.2 Method
  5.3 Ethics

6 Conclusion
  6.1 Answering the research questions
  6.2 Future work

Bibliography

A Appendix A
  A.1 Initial interview
  A.2 Lo-fi prototype demonstration
  A.3 Questions and answers to Matlab-prototype test
  A.4 Questions and answers to prototype
  A.5 Questions and answers comparing the two prototypes
  A.6 General questions and answers about the whole product

List of Figures

1.1 Manually segmented colon
2.1 Histology image of colon
2.2 Manually segmented colon
2.3 LBP explained
2.4 Segmentation using k-means
2.5 Superpixels zoomed in
2.6 Results of watershed
3.1 Timeline of project
3.2 Superpixels
3.3 Block schedule of the Matlab prototype
3.4 Colon image annotated during tests
3.5 Reference image used during tests
Ideas on post-its
Slide of lo-fi prototype
Smart-line interaction
Result of interaction
Complete segmentation using prototype
Example annotations using the prototype in red and blue
Cursors used in the hi-fi prototype
Drawn line in the hi-fi prototype
Result of a drawn line in the hi-fi prototype
Graphical user interface of the hi-fi prototype
Toolbox in the full view
Graph showing the measured durations by annotation technique
Example annotations during quantitative test

List of Tables

4.1 User profile, contextual task analysis, platform and usability goals
Comparison of concepts
Time on task for annotating an image
Time on task


1 Introduction

Early diagnosis of diseases, e.g. cancer, can have a substantial impact on the patient's chances of survival [43]. Unfortunately, this requires experts in the field whose time is very limited. The digitization of medical image data has enabled the creation of powerful tools, using image analysis, that could assist the specialists in their decision making [14]. Machine learning has been and will continue to be of great interest since it can be used in a system that automatically evaluates new cases based on previous ones [21]. A machine learning algorithm needs a big data set to learn from, which implies that someone has to create the learning data. To create data for a machine learning algorithm that can, for example, differentiate sick tissue from healthy tissue, both types need to be marked up, i.e. annotated. These annotated images are called ground truth data. Annotating an image involves dividing the image into segments and then labeling them.

Image segmentation is the process of dividing an image into smaller sections and can be done either manually or by using an algorithm. There are multiple image segmentation techniques, some more complex than others, that compete in the area. The general idea is to group pixels in an image to form regions, called segments. Some use cases of image segmentation in medical science are to measure the sizes of areas or volumes [11] or to estimate e.g. a tumor's size or growth [30]. Image segmentation can be applied to all types of images, so the use cases are almost infinite. Since image science is of great interest in many fields, collaborations exist between different actors such as healthcare, academia and industry.

1.1 Motivation

CMIV (Center for Medical Image Science and Visualization) [4] is a collaboration between academia, healthcare and industry that focuses on visualisation and image science, of which image segmentation is a part. Linköping University Hospital is one of the first hospitals in Sweden to implement digital pathology [35], where tissue samples are scanned digitally so that they can be analysed on a computer screen rather than with a microscope. The fact that the images are digital also opens up other possibilities, such as the ability to create powerful tools that can help the pathologist review cases and make diagnoses. One such tool under development by CMIV is a machine learning program that will, with the use of ground truth data, help doctors find the interesting regions in an image. One of the biggest bottlenecks when developing a machine learning program is the amount of ground truth data needed to teach the system.

When creating ground truth data at CMIV, images are currently annotated by hand, which takes quite some time. If time is a limiting factor, the annotations can either not be done very accurately or only few annotations can be made.

Today, a toolbox exists at CMIV to help create annotations. The toolbox consists of a free-hand annotation tool and a square annotation tool. To annotate a region, the user first selects a tool in a context menu and then, depending on the tool, either clicks on points to mark a polygon or drags to mark a square. After the region is marked, the user can click the drawn region and attach a descriptive text to it. When the user is done with an image, all the regions of interest are marked up with a descriptive label connected to each of them. This is illustrated in Figure 1.1. This software is in the rest of the thesis called "the current software". An MS-Excel document is used to keep track of what the labels actually mean in SNOMED-CT [18], a medical ontology containing a hierarchy of medical terms. The user needs to connect each label in the image to the correct SNOMED-CT code in this document.

Figure 1.1: Histology image of colon which has been segmented manually by a pathologist. The labels give the Latin names of the encased tissue, which are not of interest here.

CMIV wishes to speed up this process, to enable more ground truth images to be produced, as well as to make the work easier for the user annotating the images. This would increase the research opportunities since more images could be processed and used as ground truth. The issue of the slow segmentation process can be addressed by letting the user be assisted by a computer while segmenting. An automatic algorithm can segment an image by itself, and its outcome can be tuned by setting different parameters. An automatic algorithm can perform very well on a specific image if the parameters are tuned to that image. The goal of this project was however to let the user interact with the algorithm to make it work in a more general sense. An interaction can be to click on certain points on the screen or to set some of the segmentation parameters. The user can hopefully complement the algorithm enough to make it perform well on a variety of image types. Image segmentation is a widely used practice and is not only important for medical science. This work can therefore be of great interest to many others interested in segmenting images, not just those working in the medical field.

1.2 Aim

The aim of this project was to investigate the process of annotating medical images for the purpose of creating ground truth data, find out how it is done at CMIV today, and determine what improvements could be made to speed up the process. One goal that CMIV had was that creating ground truth data should be faster than before. The aim of the project was also to develop a prototype which could prove its usefulness by showing that tasks take less time to complete with the prototype than with the original toolbox. This was to be evaluated at the end of the project, along with a qualitative user experience evaluation.

1.3 Research questions

The main goal of this thesis work was to investigate whether the use of image segmentation algorithms that aid the user when annotating an image has a positive impact on the annotation process. It is important to note that the segmentation algorithm will have to be able to segment various histology images well, in order not to favour the automatic algorithms by letting them be fine-tuned for a specific type of image.

Question 1: How can image segmentation be used to improve the process of annotating various histology images?

Even though the automatic algorithms may perform very well and do a lot of work behind the scenes, the interaction with the algorithms must be easy to understand and use in order for the tool to perform well. Different image segmentation algorithms need different kinds of input, which in turn allow different kinds of interactions. The algorithms will be chosen with regard to their interaction possibilities, and the input methods will be tested with users.

Question 2: What are good interaction methods for interacting with an image segmentation algorithm?

When moving towards a more automatic annotation process, the program might not always do what the user expects. Especially when the program returns a bad result, this could have a negative impact on the user experience. The user might feel a loss of control when the program returns a different result than expected. Also, if the user cannot make the annotation precisely as wanted, a feeling of losing control might arise.

Question 3: How does the use of automatic image segmentation change the user's perspective of control when annotating images?

1.4 Delimitations

The scope of this project was to improve the segmentation and labeling process in image annotation. Problems that were found during usability tests that were not directly related to said scope were documented but not iterated on.


2 Theory

This chapter describes different image segmentation and clustering techniques as well as usability development and evaluation methods. Techniques such as k-means, watershed and superpixeling are described. To give a good understanding of how these techniques can be used in practice, a histology image is segmented to visually explain how they work. Figure 2.1 shows what a typical case might look like and Figure 2.2 shows how such an image is segmented with the tools that exist today.

Figure 2.1: Histology image of colon. This image has been scanned in at high magnification from a 4 micrometer thick paraffin slice. The slice has been taken from a block containing an interesting piece of tissue sampled by the doctor.

Figure 2.2: Histology image of colon which has been segmented manually. The different regions of interest are marked 1, 2, 3 and 4.

2.1 Image segmentation

When working with image segmentation, one can generalize into three main categories: manual segmentation, automatic segmentation and semi-automatic segmentation, which is a combination of the first two. During a manual segmentation, the user draws the region completely by hand. The user has complete control over what is included in and excluded from each segment. The complete opposite of a manual segmentation process is fully automatic segmentation, which does not let the user interact with the segmentation at all. This means that the user effort is minimal, but it can compromise the segmentation success. The user can however change parameters before running the algorithm to change the outcome. Many automatic algorithms exist [9, 12, 29]. These automatic algorithms are often tailored to segment a specific type of image with great success, but once a new problem is presented to the algorithm it might not perform as well. There also exist many semi-automated approaches to segmentation [40, 8, 7], which are the focus of this report. Like manual segmentation, a semi-automated approach lets the user interact with the segmentation process. There are however some underlying algorithms that assist the user in creating the segments, which means that the result may not always be what the user expected. Some image segmentation algorithms and feature selection techniques are described in the following sections.

Features

Features can simply be explained as attributes of elements. These attributes can be used by different algorithms to calculate similarities. When working with features in computers, a feature vector is often built so that distances between vectors can be measured. The distance between two vectors indicates how similar they are.
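As a small, hypothetical illustration (not taken from the thesis), the sketch below builds per-pixel feature vectors from RGB values and compares two pixels with the Euclidean distance; the pixel values are invented.

import numpy as np

# Hypothetical feature vectors for two pixels: their RGB values scaled to [0, 1].
pixel_a = np.array([0.91, 0.55, 0.62])   # pinkish tissue
pixel_b = np.array([0.95, 0.93, 0.94])   # near-white background

# Euclidean distance between the feature vectors: a small distance means similar pixels.
distance = np.linalg.norm(pixel_a - pixel_b)
print(f"feature distance: {distance:.3f}")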

Color based features

An image is often represented by its red, green and blue color channels, which is how a pixel is drawn on the screen. These values can be used directly to cluster an image by using them as the feature vectors of the pixels. It is also possible to convert the pixels into other color spaces, such as the HSV (hue, saturation, value) space, which can also be used for clustering or for creating histograms [38]. A. Homeyer et al. [17] present an algorithm for quantifying necrosis in whole-slide images. In their algorithm, the image is first divided into tiles, for which pixel value features are extracted. This information is then used in a classifying algorithm that labels the tiles as either background, viable or necrotic. The presented algorithm requires initial training where a user labels individual tiles. In the report it is concluded that HSV features appear to be more discriminative than RGB features. They also conclude that more features provide better classifications while being more time consuming.

Pattern based features

The color values of the image are usually not enough to achieve a perfect segmentation [38]. Aside from looking at single pixel values, a common method for image classification is to analyze the texture of the image. One of the most famous algorithms for texture analysis is Local Binary Patterns (LBP) [28], which works on a gray scale version of the image. The general idea of LBP is to calculate the difference between a pixel and its 8 neighboring pixels, as explained in Figure 2.3. For every neighboring pixel whose value is greater than or equal to the center pixel, a value is added to the total LBP. For every neighboring pixel with a smaller value than the center pixel, nothing is added to the total LBP. The added value depends on the pixel's position relative to the center pixel: the value is two to the power of the number of clockwise steps taken from the top left corner, i.e. 1, 2, 4, 8, 16, 32, 64 and 128. At the end of the algorithm, each pixel in the image has an LBP value ranging from 0 to 255.

Figure 2.3: Calculating the LBP value for the center pixel with value 4. a. shows the pixel values. b. shows the results after subtracting the center pixel from each neighbor. Pixels greater than or equal to zero in b. write a one in c., else zero. d. shows the resulting values for the neighbors. The total LBP in this case would have been 102.

The fact that each combination of neighboring pixels has a unique value means that the algorithm is vulnerable to rotation of a pattern, which in that case might give completely different LBP values, making two similar patterns with different rotation appear dissimilar. As presented by Guo et al. [15], it is possible to counter this issue by using the rotation invariant version of the LBP algorithm. As mentioned, each pixel normally has a value ranging from 0 to 255. As explained in the paper, these 256 values can be narrowed down to 36 unique patterns. Rotating the eight bit value from the initial LBP calculation until it reaches its corresponding value among the 36 patterns results in an algorithm that disregards rotations. As an example, representing the LBP value as a byte, two bytes that are bitwise rotations of one another obviously have different numerical values. However, in the sense of rotation invariant patterns they are equivalent, since one byte becomes the other after a number of bitwise rotations.
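As a hedged sketch (not the thesis implementation), the following Python snippet computes standard and rotation-invariant LBP codes with scikit-image; the file name is a placeholder and the choice of library is an assumption.

import numpy as np
from skimage import io, color
from skimage.feature import local_binary_pattern

# Load an RGB image and convert to 8-bit gray scale, since LBP works on intensities.
image = io.imread("colon_tile.png")                      # hypothetical file name
gray = (color.rgb2gray(image) * 255).astype(np.uint8)

# Standard LBP with 8 neighbors at radius 1 gives codes in the range 0..255.
lbp_default = local_binary_pattern(gray, P=8, R=1, method="default")

# "ror" maps each code to the smallest value reachable by bitwise rotation,
# collapsing the 256 codes to the 36 rotation-invariant patterns discussed above.
lbp_rotinv = local_binary_pattern(gray, P=8, R=1, method="ror")

print(np.unique(lbp_default).size, "codes vs", np.unique(lbp_rotinv).size, "patterns")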

k-means clustering

Clustering is a category of automatic algorithms that are used to divide elements into different groups, or clusters, based on their features. In the case of image analysis, the elements are the pixels and most commonly their RGB (red, green, blue) values. The pixels can then be divided into k clusters, where the pixels that have the most similar RGB values are put in the same group. Each group then represents a color, or a set of relatively similar colors, which can be used for grouping parts of an image. There are various techniques for determining how similar different pixels or points are. As discussed by Roy et al. [34], by converting the image into HSV (hue, saturation, value) space and then using the hue and saturation values, they are able to successfully divide the image into several coherent groups (or clusters).

K-means [2] is a clustering algorithm that iteratively computes a mean value, also called centroid, for each cluster by calculating the mean of the elements in that cluster. Using the new mean values, each element is assigned to the cluster with the closest centroid, measured by the distance between their feature vectors. Since the algorithm is iterative, the centroids change with each new iteration. It converges to a result when few or no changes to the assignments are made. K-means clustering can be used to divide the image into k segments, letting the user choose the k value. This can be of interest when the regions of interest are highly uniform, which results in segments not being scattered across the image. However, histology images very often contain similar regions in different types of tissue, which makes a clustering method perform badly on its own. A user would also need to know how many regions of interest there are in the image, cluster the image into that many segments and hopefully get the right segments back. Figure 2.1 shows a histology image of a piece of colon tissue which has four visible regions of interest and background. Figure 2.4 shows how k-means clusters the image into five segments, which are red, green, blue, white and black. The real regions of interest are shown in Figure 2.2, which shows that the clustering method does not work on its own in this case.

Figure 2.4: Histology image of colon which has been segmented into five segments using k-means clustering in Matlab. The features that were used here are the pixels' HSV values.

Superpixeling

Another image segmentation technique is superpixeling, where the image is divided into smaller segments, called superpixels [25]. A superpixel contains many coherent pixels, often with similar attributes. Superpixels can also be combined with other algorithms to increase the computing speed, since there are fewer elements to do the computations on [16]. A simple way to generate superpixels would be to divide the image into a grid, letting all the pixels in each section form their own superpixel. However, there exist algorithms that deform the superpixels so that they adapt to the natural shapes in their vicinity. Achanta et al. [1] present an algorithm called Simple Linear Iterative Clustering (SLIC), which works in the way described above. There are several parameters that affect the outcome of the algorithm, regarding how much each superpixel can deform and how many superpixels will be created. SLIC superpixels have relatively similar sizes compared to many other over-segmenting algorithms.
Over-segmenting means in this case that the regions of interest are themselves split up into smaller segments [6]. SLIC creates a grid-like structure while still adapting the segments to the natural regions of the image, as seen in Figure 2.5. In the same figure it can however be noted that some of the superpixels have failed to adapt perfectly and contain two tissue types. Achanta et al. have also done a comparative study between SLIC and several other algorithms with the same purpose. It was found that SLIC outperformed the other algorithms in the study on both precision and computational time. Since superpixels in general create an over-segmentation, it could be of interest to merge them together to form larger segments. This can be done either manually through interactions or by using an automatic algorithm.

Felzenszwalb [10] presents another method for generating superpixels, which differs from SLIC in that the superpixels are more diverse in size and shape.

Figure 2.5: A visualisation of what superpixels look like in a histology image. They are not all perfectly made in this case. Marked by the red ellipse is a superpixel containing both pink tissue and white tissue, which might not be wanted.
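To make the combination of superpixeling and clustering concrete, the minimal sketch below generates SLIC superpixels, computes one mean-HSV feature vector per superpixel and groups the superpixels with k-means. This is a Python sketch assuming scikit-image and scikit-learn, not the Matlab prototype described later; the parameter values and file name are illustrative.

import numpy as np
from skimage import io, color
from skimage.segmentation import slic, mark_boundaries
from sklearn.cluster import KMeans

image = io.imread("colon.png")               # hypothetical RGB image

# SLIC over-segmentation: n_segments and compactness control how many superpixels
# are produced and how much they may deviate from a square grid.
# start_label=0 requires a recent scikit-image version.
superpixels = slic(image, n_segments=800, compactness=10, start_label=0)

# One feature vector per superpixel: here simply the mean HSV color.
hsv = color.rgb2hsv(image)
n_sp = superpixels.max() + 1
features = np.array([hsv[superpixels == i].mean(axis=0) for i in range(n_sp)])

# Group similar superpixels with k-means; merging all superpixels that fall into the
# same cluster yields larger, hopefully tissue-coherent segments.
clusters = KMeans(n_clusters=5, n_init=10).fit_predict(features)
segment_map = clusters[superpixels]          # per-pixel cluster labels

# Visualize the superpixel borders on top of the original image.
io.imsave("superpixel_borders.png", (mark_boundaries(image, superpixels) * 255).astype(np.uint8))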

Watershed segmentation

Watershed [37] is an image segmentation algorithm that is performed on a gray scale image. An easy way to explain the algorithm is to view it as a real world example. The image can be seen as terrain with hills and valleys, where the altitude of each pixel is decided by its gray value. From each local minimum in the valleys, water rises until it reaches a certain level. Each time the water level is high enough for two different minima to merge with each other, a wall is created to stop this from happening, and by the end of the algorithm each minimum is surrounded by walls; this is what creates the segments in the image. As seen in Figure 2.6, the usual result of running the watershed algorithm on such a patterned image is over-segmentation. This means that the segments created by the algorithm are very small in the regions of interest, which are marked in Figure 2.2. At the same time, the created segments are larger in the white, homogeneous background areas. To counter this issue, it is possible to post-process the results, for example by merging regions based on similarities as illustrated by Wang et al. [42].

Ng et al. [26] present a segmentation method consisting of a modified watershed algorithm combined with k-means pre-processing. The reason they do pre-processing with k-means is to reduce the previously mentioned problem of over-segmentation that often occurs when using the watershed algorithm. The proposed method is completely automatic and has no human interaction. The results presented in the paper show that their method is effective at reducing over-segmentation compared to the conventional method, meaning that the number of segments produced is closer to the natural number of segments in the image.

Figure 2.6: Histology image of colon which has been segmented using the watershed algorithm. Each region surrounded by a border in the image represents a shed. It can be noted that the image is very over-segmented, especially in the non-uniform tissue areas.
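The short sketch below illustrates the behaviour described above with recent scikit-image versions; it is a hedged example with a placeholder file name, not the implementation evaluated in the thesis.

import numpy as np
from skimage import io, color, filters
from skimage.segmentation import watershed

image = io.imread("colon.png")               # hypothetical file name
gray = color.rgb2gray(image)

# Use the gradient magnitude as the "terrain": region borders become ridges.
gradient = filters.sobel(gray)

# With no markers given, every local minimum seeds its own basin, which is what
# typically produces the heavy over-segmentation described above.
labels = watershed(gradient)
print("number of sheds:", labels.max())

# Seeding the flooding from fewer, deliberately chosen markers is one way to reduce
# the over-segmentation (the k-means pre-processing by Ng et al. serves a similar purpose).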

A completely different type of algorithm, Chan-Vese [13], was also tested. The algorithm is based on a level set function but, unlike many of its contestants, it does not rely on edge detection and instead adapts its curve to a change of pattern in the image. This meant that it performed relatively well on histology images of different types. A downside, however, was that it required a lot of computation time to create a segment, and the algorithm often produced an under-segmentation of the image, meaning a user would have to manually split regions or rerun the algorithm with different parameters.

2.2 Usability development

The Usability Engineering Lifecycle is a framework presented by Jacko [19] which covers the steps from gathering requirements to having a fully functional prototype of the product. Before any design is done it is important to fully understand the current situation, which involves knowledge of the users and their goals; this is done in the predesign phase.

Predesign phase

The first step in the predesign phase is conducting an evaluation of the characteristics of the users, e.g. their educational level, their experience with computers and their current workflows. One way of creating a user profile is to conduct interviews. The second step is to conduct a contextual task analysis. The product to be developed will be used to reach a goal, and the focus is to analyze the steps used today to reach said goal. The focus should however not be purely on the tasks, but also on the reasons why they are done, in order to be able to think outside the box when looking for better solutions. To get a better and more realistic view of the workflow it is recommended to visit the workplace of the users and watch them complete tasks while taking notes. Thirdly, an analysis is done of the capabilities and constraints of the platform where the product is to be developed and deployed, and how they in turn will affect the design of the product [19]. These three knowledge areas combined are then used to derive usability goals that are central in the development process and reflect the stakeholders' goals with the product. The five main usability goals mentioned in the literature are: learnability, efficiency of use, ability to relearn the system after absence from it, frequency and seriousness of errors, and finally subjective user satisfaction. The derived usability goals should be more specific than the five main usability goals previously stated, and they do not all have to be measured but can instead be used as guidelines during development [19].

Concept development

When developing a product, it can be hard to come up with new ideas for how to design the system. One technique for broadening the vision of the prototype is brainstorming. Brainstorming means that a person or a group of people come up with ideas, some good, some bad, that can be of use when developing a concept for the design. Brainstorming can be done with co-workers, mentors or other people that might be of interest. While brainstorming, some trigger questions can help to bring forth ideas. A brainstorming session can e.g. be five minutes long and focus on a very specific thing [19]. Once the predesign phase is considered complete, a more iterative design phase begins where the product is designed using the previously gathered information. Before starting any coding it is necessary to be more certain that the prototype correlates with the vision of the users. This is done by creating simple lo-fi prototypes like screen designs or mock-ups and letting the users give their input on them [19].

Prototyping

Rettig [31] proposes one way of developing paper prototypes, and how to test them. Developing paper prototypes can be done easily with papers and pens, and some clever thinking in design. The prototype can then be created by drawing different sections of e.g. a monitor on a paper and some moving parts (also paper) which can be used to simulate a computer.
The technique can be used to elicit requirements or validate concepts, which can be useful if the developers have the wrong idea about the product to begin with.

M. Rettig discusses some problems with hi-fi prototypes, such as that hi-fi prototypes take too long to build and that developers resist changes since they are attached to the work. The paper prototype can be thrown away once its purpose is fulfilled.

2.3 User experience & Usability Evaluation

The main goal of a usability evaluation is to improve the system under test. The tests shall be done by subjects that represent the intended end users of the product, and the tasks that the subjects perform during the test shall also be similar to the tasks that real end users will perform in the system [5]. N. Bevan [3] discusses that the current interpretations of UX and usability are very diverse and later proposes a common framework for classifying UX and usability metrics and how they relate to efficiency, effectiveness, satisfaction, accessibility and safety.

User tests

A usual user test starts with letting the user complete a set of tasks in the system under test while being observed by the interviewers. The users can also be told to think aloud while completing the tasks to let the interviewers hear their thoughts. After the tasks are completed, a set of questions regarding the system under test are preferably asked to the user [5]. To measure usability, some metrics should be analyzed from the set of user tests. Examples of metrics are time on task, task success, or the number of issues reported for each user. These metrics can be used for comparing different versions of a program to see the progress made since the last iteration, or to assess the difference in usability between different programs. Once an interview schedule has been developed, a pilot interview can be run to make sure that the questions will be answered in the intended way by the interviewees, that the questions are not leading, and to test the performance of the interviewer in a real situation [22].

Number of subjects

Robert A. Virzi [41] discusses in his paper how many subjects are needed for a usability study. By letting users complete tasks while thinking aloud, problems in a user manual for a voice mail system were uncovered in the study. The results showed that to uncover about 80% of the issues, only five participants are needed. The issues uncovered by the five participants are also the most severe ones. For every user that is added to the study after the first five users, the chance of them revealing a new issue decreases more and more. When it comes to quantitative studies, more users are usually required than in a qualitative test to be able to generalize from the results. Nielsen [27] recommends around 20 users in order to reach a good confidence interval. He also argues that such a study is very expensive compared to a qualitative study because of the high number of participants needed. To simply improve the usability of a system, a few qualitative interviews should suffice.

Instant Data Analysis

Instant Data Analysis (IDA) is a quick method for assessing the usability of a system based on user tests. The motivation for creating the method was to reduce the time spent on analysing the data gathered from the user tests, while still maintaining a high rate of found usability flaws in the systems. IDA is supposed to be used on data gathered from around 4-6 think-aloud sessions done in a single day. Aside from presenting IDA, Kjeldskov et al.
[20] also ran a comparative test against another method, namely video analysis, where the entire sessions were recorded and analysed afterwards.

Their findings were that IDA captured almost as many usability issues with only 10% of the time spent. A downside of IDA, however, is that it cannot be done by one person. It is also mentioned that they did not test the method with non-usability experts.

Time on task

Time on task [39] is a performance metric that can be used when assessing the usability of a product. If a user needs to perform a specific task many times, the time it takes to perform that task tells a lot about the efficiency of the product. It can be measured by simply starting a clock when the user starts performing the task and stopping the clock when the user completes it. Very often when comparing the values given by time on task, a mean value is calculated. Since a single user's time can differ greatly from the vast majority, strictly looking at the mean value can be misleading. It is therefore important to show the variability when presenting the data. A choice in this method is whether only successful tasks are to be included in the measurement or whether all performed tasks should be measured. If only measurements of successful tasks are included, a cleaner efficiency measurement is obtained. However, if all time measurements are included, a truer reflection of the efficiency is obtained.

2.4 Related Work

There exist quite a few similar tools for segmenting or annotating images that have influenced the tool presented in this thesis. Below are a few examples of those tools.

Cytomine

With the aim to allow for collaborative work when analysing large images, a software called Cytomine [23] has been developed. The software contains functionality to create and share annotations in gigapixel resolution images online, along with machine learning algorithms. What is relevant to this project is the software's functionality for creating annotations. In order to create annotations using Cytomine's online demo [24], various tools exist, ranging from fully manual tools for creating circles, squares and ambiguous polygons to more automatic tools like the MagicWand, which automatically creates an area for the user given a single click with the mouse pointer. In its current state, the tools are heavily weighted towards fully manual interactions, with the exception of the MagicWand.

GrabCut

Rother et al. [32] present an algorithm called GrabCut which allows a user to draw a rectangle around the region of interest and hopefully separate the region of interest from the rest of the image. If the algorithm separates the wrong regions, it is possible to include or exclude some parts, which makes this algorithm semi-automated. The user can affect the drawn area a lot through interaction: by drawing the region of interest, and by including and excluding. The algorithm extracts the foreground from the background of the image, meaning that the encased area would ideally contain the foreground object of the image in order to get a good segmentation. The interaction that includes regions makes it possible to add parts of the image to the foreground, while the interaction that excludes allows for removing parts of the image from the foreground. This being said, the idea behind GrabCut is to separate the background from the foreground. This algorithm is not optimal for segmenting the several regions of interest that histology images most often contain, since it basically only separates foreground from background.
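For reference, a minimal sketch of the rectangle-based GrabCut interaction using OpenCV's cv2.grabCut is shown below; it is not tied to the thesis, and the rectangle coordinates and file name are made up.

import cv2
import numpy as np

image = cv2.imread("colon.png")              # hypothetical file name (BGR image)

mask = np.zeros(image.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)    # internal model state for background
fgd_model = np.zeros((1, 65), np.float64)    # internal model state for foreground

# User interaction: a rectangle (x, y, width, height) drawn around the region of
# interest; everything outside it is treated as definite background.
rect = (50, 50, 400, 300)                    # hypothetical coordinates
cv2.grabCut(image, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Pixels marked as definite or probable foreground form the extracted segment;
# further include/exclude strokes could refine the mask in a second pass.
foreground = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
segment = image * foreground[:, :, None]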

Live Wire and Live Lane

With the goal of giving the user control over the segmentations while requiring as little effort as possible, Falcão et al. [8] present two paradigms, live wire and live lane. The concept behind live wire and live lane is to let the user make a relatively rough or inaccurate annotation while the algorithms snap the boundary of the segment to what is classified as the natural boundary. To simplify the algorithms used to achieve this effect, input points are sampled and the optimal paths between the points are calculated. The optimal paths are what gives the resulting boundary. In the paper, the algorithm is evaluated on grayscale images from MRI scans. Interacting with live wire and live lane requires the user to surround the region of interest, just like when creating completely manual segments. It gives the benefit of letting the user create more accurate segments with less effort.

Ilastik

C. Sommer et al. [36] present an interactive method for segmenting images which uses the interactions from the user to teach the algorithm. The user provides input in the form of a stroke with a color and can then see a live result of how the algorithm segments the image. The strokes put each hit pixel in a category, corresponding to the selected color, that is later used in the classification of every pixel in the image. The algorithm classifies each pixel in the image separately, which can result in scatter, meaning that individual pixels can be islands on their own. Every pixel in the image is always classified to a specific color as long as the user has made a stroke with a color. Ilastik can classify pixels in various images since it is trained live by the user, which makes the algorithm good in a general sense. The fact that the algorithm classifies each pixel by itself and can create islands is bad when segmenting histology images, since most regions of interest are coherent. The regions might also have different characteristics in different parts, which would most likely result in an over-segmentation of that region.

Quantification of necrosis

In a paper written by Homeyer et al. [17], quantification of necrosis is done in large gigapixel images in roughly a minute. The authors present a method for dividing a large image into tiles, followed by doing color and pattern analysis on each tile. With the analysis they could classify the tiles as belonging to one of the classes "background", "viable" or "necrotic" and then successfully approximate the ratio between sick and healthy tissue. This method is suitable for quantification since errors made in one tile should be canceled out by errors in another tile. An error in this case is a tile containing multiple types of tissue. For example, a tile classified as "necrotic" might contain some "viable" tissue while another tile classified as "viable" contains some "necrotic" tissue; the errors in these two tiles would then work towards canceling each other out. However, when it comes to segmenting or annotating an image, the quality of the segmentation is highly dependent on the resolution of the tiles, meaning that a high number of tiles is needed for high quality segments.
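In the spirit of this tile-based approach (but not Homeyer et al.'s implementation), the sketch below splits an RGB image into square tiles and extracts one mean-HSV feature vector per tile; the tile size, features and file name are assumptions.

import numpy as np
from skimage import io, color

def tile_features(image, tile_size=128):
    """Split an RGB image into square tiles and return one mean-HSV feature
    vector per tile, together with the tile positions."""
    hsv = color.rgb2hsv(image)
    h, w = hsv.shape[:2]
    feats, positions = [], []
    for y in range(0, h - tile_size + 1, tile_size):
        for x in range(0, w - tile_size + 1, tile_size):
            tile = hsv[y:y + tile_size, x:x + tile_size]
            feats.append(tile.reshape(-1, 3).mean(axis=0))
            positions.append((y, x))
    return np.array(feats), positions

# The feature vectors would then be fed to a classifier (e.g. a random forest)
# trained on tiles that a user has labeled as background, viable or necrotic;
# the resulting class proportions give the quantification.
features, positions = tile_features(io.imread("wsi_region.png"))  # hypothetical file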

3 Method

This chapter describes how the method in this project was deployed. A timeline illustrating how the work was carried out can be seen in Figure 3.1. First, a prestudy was conducted in order to establish a good understanding of the problem and to find good approaches to solving it. After that, a requirements analysis was done in order to collect requirements from the stakeholders. The rest of the project was divided into three separate iterations, each initiated with the development of a prototype and ended with an evaluation of that prototype.

The initial interview in the requirements analysis phase and the evaluations of the first two iterations were conducted with only one user. This user was the only pathologist currently annotating images with the current software, and had the most valuable experience with manual image annotation. This provided a good reference when testing new software, since the user could compare it with the current software. Other pathologists that could have been interviewed would likely be biased by their own work, which was not to annotate histology images for ground truth purposes, and interviewing them would most likely not have generated great feedback for the software being developed.

Figure 3.1: Timeline showing the iterations during the project.

3.1 Prestudy

Since none of the authors had any previous experience developing image analysis algorithms, most of the prestudy was spent reading literature in the field. Additional information was gathered by talking to other stakeholders.

3.2 Requirements analysis

At the start of the project, an initial interview was held with the user to establish current workflows, problems with the current software that allowed for manual annotation, and the tasks that the user had to perform. Using the material from the interview, a user profile and a task analysis were made. Through discussions with other stakeholders in the project, the constraints and capabilities of the platform were made clear. Some general design principles could be taken from the current software, along with competing software, to establish what works and what does not. From this, usability goals could be developed. These usability goals worked as requirements for the product that was to be developed. CMIV also had some expectations regarding the product as requirements. A list of issues was created from the usability goals, since these described what features the user wanted that were missing from the current software. After this, the prototype development could begin.

3.3 Lo-Fi Prototype Development

A brainstorming session was held in order to generate ideas for how to solve different issues with the current software. Ideas were put together in order to build concepts. A concept meant having ideas which solved or tried to solve all issues found with the current software. After this, one concept was chosen based on a discussion with experts within digital pathology. The concept was realized as an initial mockup. The initial mockup was created collaboratively by the developers on a whiteboard to allow for discussion and easy fixes. Later, the prototype was drawn on paper for further discussion of what had been agreed upon in the whiteboard mockup. Ideas that were good were sketched down on paper so that they would not be lost. As Rettig [31] proposes, paper prototyping can be quite useful since it is fast and does not require much knowledge; the prototype can change vastly by the flick of a pen, and it can be thrown away when it is no longer used. However, a decision was made to build the prototype on the computer, since it would be easy to bring along to a meeting. The prototype was made in Balsamiq, a software in which mockups and wireframes can be created. A wireframe is simply a series of images that make up a prototype. A wireframe of the best ideas was created, with click functionality. Every frame in the prototype described the product in discrete steps after every user input. The wireframe was created to show the user how the product could be interacted with, how the structure of the toolbox would look and what the workflow would look like. The user could give feedback on how these were organized or on their appearance.

A number of segmentation algorithms were implemented and tested in Matlab on numerous histology images. They were then compared against each other. When comparing the algorithms, one of the main aspects was how much effort was needed in order to make an algorithm perform well on a new, different image. Performing well meant either getting a good segmentation or getting a good over-segmentation of the image.
Here it was important that the algorithm either performed equally well on numerous images without any parameter changes, or that it was very easy and not computationally heavy to correct the parameters. Another main aspect was how easily the user could take the resulting segmentation from the algorithm and assemble a full annotation out of it.

This involved either combining or correcting segments until they corresponded to the wanted regions.

Watershed was one of the main contestants since the segments it produced adapted well to the natural borders in the image. A downside was however the heavy over-segmentation that was created in some images. The segments were also unevenly sized, since the characteristics of the image determined the size and shape of the basins. To counter this, some post-processing was needed. An algorithm that performed better without post-processing was SLIC, which had a parameter for the number of segments to create. Additionally, the segments were rather equally sized and in most cases adapted well to the characteristics of the image. Yet another algorithm that often produces an over-segmented image is the one created by Felzenszwalb [10]. That algorithm produces superpixels with much greater variance in size, where one superpixel can span the whole image and contain many more pixels than the others. One problem with this is that the image becomes partly over-segmented and partly under-segmented, since one superpixel might contain several different areas of interest while others might contain only a few pixels. If this algorithm were to be used, more focus would need to lie on how to make the big superpixels smaller and how to select the small superpixels without a cumbersome interaction.

Based on the above reasoning, superpixels created with SLIC were used as a core concept in the prototype, since it was thought that they enabled the best user interactions. The user could fill in superpixels and get good borders from them in the image, so the user would not have to create the borders themselves. It would be easy to include and exclude superpixels from a drawn area, and the approach would cover a broader range of images since it is possible to create any number of superpixels for every image given a certain resolution. It was important that the annotation toolbox could cover a range of various histology images. A valuable quality of this approach is that a prototype with understandable parameters could easily be created. This is important since it makes it easier to let the user change the algorithm's segmentation performance without having a difficult time understanding complex settings. This would make the prototype much easier to understand, and focus could be put on how usable it is. Figure 3.2 shows how superpixels could be used by a user to draw a region in the color red.

Figure 3.2: Superpixels created by SLIC in white borders and a stroke interaction in red. Note that this was not how the interface looked in the hi-fi prototype later on.

3.4 Lo-Fi Prototype Evaluation

In order to evaluate the prototype, it had to be tested with an actual user. The test consisted of a prototype demonstration and a brief interview with a user. A meeting was held at Linköping University Hospital. A pathologist introduced how the work was carried out now and what the annoyances were in the current software. After the introduction the prototype was shown. The pathologist did not interact with the prototype since it was a wireframe with limited functionality. Instead, the wireframe was presented by the developers along with a description of the functionality that the end product might have. A discussion was held about the issues and potential benefits found in the wireframe. The purpose of this was to verify or kill any assumptions gathered from the initial meeting that had been transferred into the concept via the brainstorming session.
A concept consisted in this case of ideas that were put together to solve different kinds of issues. The entire meeting was recorded as audio, to ensure that no details were missed.

3.5 Lo-Fi Prototype Refinement

The second iteration started with gathering the concepts from the initial prototype that were considered solid enough to keep for the next prototype. Since the goal of the prototype was to speed up annotation, which is quite tricky to test on a lo-fi prototype, a decision was made to diverge from the standard rapid prototyping on paper and make the next lo-fi prototype in Matlab.

This prototype could mimic the behaviour of the existing product it was to be compared with in terms of functionality. Since the developers were both experienced programmers, the development process was fast as well. To be able to test more functionality without making the prototype overly complex, two prototypes were developed in parallel. The concept of both prototypes revolved around using superpixels. Both prototypes contained some functions for creating areas by manually marking and selecting superpixels, and some functions that automatically selected areas for the user with a simple click. To achieve automatic selection and grouping of superpixels, the clustering algorithm k-means was used with a 16-value feature vector. The feature vector had three RGB values, three HSV values and 10 values based on the superpixel's LBP values. To save time during development, both prototypes relied heavily on libraries containing the complex algorithms, such as k-means and SLIC. In order not to have to implement a complex GUI for the prototypes, simple paper guides were made containing all the hotkeys to press and their functionality. This was to remove focus from remembering which keys to press and let the user focus more on the task of marking areas with the tool. Note that the borders of the created superpixels were not shown visually in the prototype, since showing the borders might obscure the pathologist's vision, making it harder to see the natural regions of the image.
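As a hedged Python sketch of such a 16-value feature vector (the Matlab prototype's exact LBP encoding is not specified, so a 10-bin histogram of uniform LBP codes is assumed here; with 8 neighbors the uniform variant has exactly 10 distinct codes):

import numpy as np
from skimage import color
from skimage.feature import local_binary_pattern

def superpixel_features(image, superpixels):
    """Return one 16-value feature vector per superpixel: mean RGB (3 values),
    mean HSV (3 values) and a 10-bin LBP histogram (10 values).
    Assumes an 8-bit RGB image and a 2D superpixel label map."""
    hsv = color.rgb2hsv(image)
    gray = (color.rgb2gray(image) * 255).astype(np.uint8)
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")  # codes 0..9
    feats = []
    for lab in np.unique(superpixels):
        m = superpixels == lab
        rgb_mean = image[m].mean(axis=0) / 255.0
        hsv_mean = hsv[m].mean(axis=0)
        lbp_hist, _ = np.histogram(lbp[m], bins=10, range=(0, 10), density=True)
        feats.append(np.concatenate([rgb_mean, hsv_mean, lbp_hist]))
    return np.array(feats)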

System anatomy

The system was designed to have separate steps for every process, like modules, as shown in Figure 3.3. The functionality of each module is explained below.

Read image
Reads an image in some format.

SLIC superpixeling
Generates superpixels from an image using the SLIC algorithm. The SLIC implementation had two parameters affecting the outcome: how many superpixels would be generated and how much they could diverge from their original square form. The user was not allowed to change the parameters, since this was done during the preprocessing stage and took time to perform. If the user could change the parameters, less focus would be put on the interactions.

Calculate features for each superpixel
Calculates features for each superpixel. These features were based on the RGB, HSV and LBP values of the pixels that the superpixel contained.

k-means with features
Generates k clusters using the k-means algorithm with the features from the previous module.

Wait for user input
Module that waits for and registers user input. The user interacts with the prototype by clicking or drawing lines on the screen with some chosen tool, which is registered in this module. This was the only stage that allowed the user to affect the outcome of using the prototype.

Calculate neighbors iteratively for the selected superpixels
Calculates which neighbors the selected superpixels have by doing a breadth-first search. A breadth-first search is, in this case, done by checking the connected neighbors and selecting those that belong to the same cluster, then selecting these neighbors and checking their local neighbors, and so on. This is explained in Algorithm 1.

Algorithm 1 Calculate neighbors iteratively for a clicked superpixel
  Add ClickedSuperpixel to Queue
  ClickedSuperpixel.Marked := True
  while True do
    if Queue is Empty then
      return GlobalNeighbors
    end if
    superpixel := Queue.pop()
    Add superpixel to GlobalNeighbors
    LocalNeighbors := findLocalNeighbors(superpixel)
    for all Neighbor in LocalNeighbors do
      if Neighbor.Cluster = ClickedSuperpixel.Cluster and Neighbor.Marked = False then
        Add Neighbor to Queue
        Neighbor.Marked := True
      end if
    end for
  end while
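A minimal Python rendering of Algorithm 1 is sketched below (it is not the prototype's Matlab code). It assumes that superpixels is a 2D integer label image and that clusters is an array mapping each superpixel label to its k-means cluster; adjacency is derived from the label image with 4-connectivity.

from collections import deque
import numpy as np

def local_neighbors(superpixels, label):
    """Superpixel labels directly adjacent (4-connectivity) to the given label."""
    mask = superpixels == label
    grown = np.zeros_like(mask)
    grown[:-1] |= mask[1:];  grown[1:] |= mask[:-1]      # shift up / down
    grown[:, :-1] |= mask[:, 1:];  grown[:, 1:] |= mask[:, :-1]  # shift left / right
    return set(np.unique(superpixels[grown & ~mask]))

def same_cluster_region(superpixels, clusters, clicked):
    """Breadth-first search from a clicked superpixel, collecting every connected
    superpixel that belongs to the same cluster (a rendering of Algorithm 1)."""
    target = clusters[clicked]
    queue, marked, region = deque([clicked]), {clicked}, []
    while queue:
        sp = queue.popleft()
        region.append(sp)
        for nb in local_neighbors(superpixels, sp):
            if clusters[nb] == target and nb not in marked:
                marked.add(nb)
                queue.append(nb)
    return region

# A real implementation would precompute the adjacency once instead of recomputing
# a mask per superpixel; this version favours clarity over speed.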

Show segment
Groups the superpixels from the previous steps and draws a line around the group, which corresponds to the segment that has been selected.

Figure 3.3: Block schedule of the Matlab prototype.

3.6 Refined Lo-Fi Prototype Evaluation

A user guide was printed out along with questions, so that the user test could go more smoothly without the developers having to answer a lot of questions regarding keyboard shortcuts, and so that they would remember what to ask. A user test was held with a pathologist who tested the two different Matlab prototypes. This test was held to ensure that the tools worked as the pathologist wanted, not to show GUI parts etc. The pathologist was first shown the tools by the developer behind the prototype, then the pathologist was given an image to annotate herself while speaking freely about her experience using the tools. During that time, one of the developers sat quietly taking notes while the other gave minor instructions if needed and kept a dialogue with the pathologist. After the task was completed, questions that had been prepared beforehand were asked to the pathologist. The first part of the questionnaire had questions specific to one tool. After that came questions about the toolkit as a whole, also comparing it with the current software. After the pathologist had tried both prototypes, questions were asked comparing the two prototypes to see which tools were liked the most.

After the test had been conducted, the notes were gone through, first trying to identify all the obvious problems with the current prototypes. After that, the tools were compared against each other to see which tools had the best performance and would be most likely to make it into the next prototype. Some of the tools were completely discarded due to their poor performance, while others could be kept with some adjustments to their functionality. Another aspect that was discussed was whether a combination of tools from both prototypes could be sufficient for making the final product, or if something important was missing. One thing that was missing was the ability to draw areas completely manually, which could be problematic for very small areas. An idea came up on how to solve this by using layers of superpixels, letting the user access a finer layer by zooming in on the image. This was tested by increasing the number of superpixels in the Matlab prototype, and the results were sent to and approved by a pathologist.

Lastly, a discussion was held on whether another lo-fi prototype should be created in the next iteration, or if the recent prototype was good enough that the next prototype could be hi-fi. To decide this, it was discussed what more proof or extra information another lo-fi prototype

To decide this, it was discussed what more proof or extra information another lo-fi prototype could bring and whether it was worth the time, and this was compared with what additional tests could be done if the prototype was made in a more advanced fashion. During the prototype test a number of issues were brought to light which could endanger the entire concept. These issues had to be fixed either by making the automatic algorithms better or by redesigning the tools. To verify that the issues could indeed be fixed, some time was spent on improving the automatic algorithms for creating superpixels and clustering them.

3.7 Hi-Fi Prototype Development
The third and last iteration started right after the usability evaluation of iteration two. Taking the positive and negative feedback received on each tool from the two Matlab prototypes, a session was held where it was decided which tools to keep and which tools to discard, and the kept tools were then combined into one concept. In order to verify that the new concept would be sound, and for the developers not to lose sight of how to develop the new prototype during iteration three, the two Matlab prototypes were quickly merged into one, keeping only the chosen tools. This prototype was used as a reference during development if the developers felt they had lost track of their goal.

The development of the hi-fi prototype was done in an agile fashion, implementing one functionality at a time at both the server and the client level. After a functionality was implemented, all of its components were tested before moving on to the next task on the backlog. At the start of each week, a meeting was held where the backlog was updated with the current issues. When it was suitable, the developers worked in parallel on the server and the client, but sometimes the development had to be done using pair programming to solve difficult problems or create efficient algorithms.

In the middle of the third iteration, a session was held with two of the intended users of the prototype. Before the session, the prototype under development was branched into a fully functioning prototype. The prototype was demonstrated to the users and then interacted with by them. The users were encouraged to give their feedback on the system and also give ideas on how to solve issues. After the session, the feedback and ideas received were analysed and discussed, and the ones deemed good ended up on the backlog to be realized.

3.8 Quantitative testing of Hi-Fi prototype
To be able to compare the manual tools that are being used today with the new toolbox, both were tested in a quantitative test. The test involved having users complete the same task of annotating an image using both the manual tool and the prototype. The time it took for each user to complete an annotation, the time on task, was recorded for later comparison. As mentioned in section 2.3.2, 20 users should be required in order to be able to generalize from the results. After running 8 tests, it was however concluded that the effect that could be proven by completing the remaining 12 tests would not be worth the time taken. Thus, no more quantitative tests were run after that point.

The agenda of the test was to first show the user how to use the manual tool, then let the user play around with it until feeling comfortable. Once the user felt comfortable with the tool, the time it took for the user to complete a task was measured, both using the prototype and doing it manually. The task to complete in this test was to fully annotate a colon image with four segments.
The histology image that was annotated is shown in Figure 3.4 and the reference image is shown in Figure 3.5. While annotating, the user could always look at the reference image and thus always knew how to annotate the image. One of the developers had the role of test leader while the other took time measurements during the test.

To compare the results from the two tasks, a paired t-test was done on the measured times. Paired t-tests are suitable for comparing results from two related observations, such as the manual and the assisted annotation of the same image by the same user. In order to get a valid result, the paired differences need to be approximately normally distributed, which was checked before doing the actual t-test.

Figure 3.4: Histology image that was annotated during quantitative tests. It represents a piece of colon.

Figure 3.5: Reference image used in both qualitative and quantitative tests in order to keep the user informed of what to segment. The annotations in this image were created using the prototype. The label attached to each segment is irrelevant in this case.
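A minimal sketch of this analysis in Python with SciPy is shown below: a Shapiro-Wilk test as a rough normality check on the paired differences, followed by the paired t-test. The time values are placeholders for illustration only and are not the measurements from the thesis.

```python
from scipy import stats

# Placeholder times in seconds, one pair per test user (NOT the thesis data).
manual_times    = [240, 250, 230, 260, 220, 245, 235, 232]
prototype_times = [210, 215, 205, 230, 200, 220, 212, 214]

differences = [m - p for m, p in zip(manual_times, prototype_times)]

# Shapiro-Wilk test on the differences as an approximate normality check.
w_stat, normality_p = stats.shapiro(differences)
print(f"Shapiro-Wilk p = {normality_p:.3f}")

# Paired t-test comparing the two related samples.
t_stat, p_value = stats.ttest_rel(manual_times, prototype_times)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```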

3.9 Qualitative testing of Hi-Fi prototype
Before the tests were conducted, roles were defined for the two interviewers. One role was the observer, whose task was to listen and observe while taking notes during the whole session. The other role was the facilitator, who had the responsibility of letting the user know what the agenda was, holding the interview session and helping out during the test if needed.

The structure of the interviews was to first give a short background description of what the software should be used for. The GUI of the software was explained and a short introduction of how to interact with it was given. This was followed by letting the users interact with the software by themselves while thinking aloud. They were given the task of annotating two histology images with a time limit of 5 minutes per image. There were two reference images showing an example of how the image was supposed to be annotated, which the users could look at if they did not know how to annotate the image. During that time both interviewers tried to interfere as little as possible in order to capture as much of the users' experience as possible. Once the tasks were completed, or aborted due to the time limit, a semi-structured interview with 10 questions was held. At the end of the interview the observer was invited to ask questions if anything was unclear.

Before the user tests at the hospital were performed, pilot tests were performed with volunteers at the company. These pilot tests were held since they would give good practice for the interviewers and also solve some basic issues with the software or the interviewing technique. The real user tests were held at the hospital with pathologists.

3.10 Hi-Fi Prototype Evaluation
The data from the qualitative usability tests were analysed using the Instant Data Analysis method [20]. A summary of all the found observations, which can be found in the results section, was written down immediately after the test sessions. From this summary a list was made which contained all the found usability issues. The issues were derived from the observations during the test. Once the list was complete, each item was analysed in sequence, putting them into categories and writing a short description of them. The observer's and interviewer's perception of a found flaw sometimes differed, which opened up for a discussion.


4 Results

4.1 Initial interview
Before the process of developing prototypes started, an initial interview was conducted with one of the intended users of the system. The subject was a pathologist whose task with the system under development would be to annotate ground-truth data for a future machine learning system. This would be done by encasing all the tissues in an image and labeling them. The encasing was done using a mouse or a Wacom pen with an electronic drawing table. A region was labeled by writing e.g. "1 N Ca", which meant that it was the first layer on a normal image and of malignant nature. These abbreviations were used since the pathologist thought that the real names were too long. When a region was encased and labeled, the information was moved to an Excel sheet where each tissue was mapped to an ontology together with the Latin term for it. The subject already had a working system that was sufficient, but the process of completely annotating an image was slow and laborious, which was the reason for developing a new tool set.

During the interview, the subject demonstrated how she currently annotated an image, speaking freely about flaws and annoyances but also about the good parts of the system. After the demonstration, a set of predefined questions was asked. The full interview is presented in Appendix A.1. A summary of the interview is given below.

The software that the pathologist used today was rather new and used daily, and was to be used to annotate approximately 50 images per tissue. The pathologist uses computers daily at work, but had no prior experience of using a Wacom pen or a mouse to draw. A feature which was liked by the pathologist was the hotkeys, since they speed up the work. There were however some disliked things about the software, such as that:
1. The tool has to be rechosen for every new consecutive use.
2. Similar areas have to be labeled separately.
3. It was slow to draw regions by hand, especially with the mouse.
4. The pen interaction sometimes made area creation finish prematurely, forcing the user to restart the interaction.
5. The manual transfer of information to the Excel sheet was unnecessarily complex.

The pathologist had some ideas about what was missing in the current software. These missing features were that:
1. Borders that were adjacent should be merged into one border so that the borders would not have to be drawn twice.
2. It should be possible to group drawn regions and annotate them together.
3. The transfer to Excel should be automatic.

The pathologist would not mind being assisted by a computer when annotating, as long as the control of the decisions being made still belongs to the user. The pathologist thought that the most valued qualities in a new system would be speed and robustness.

The interview results were analysed and problems with the current software were collected. The problems are listed below.
1. It is hard to draw a good border with the existing tools.
2. It is time-consuming to label drawn areas.
3. It is problematic to export to MS Excel.
4. It is hard to see actual borders on screen.
5. It is annoying to re-choose the tool for every new region.
6. The user has to physically move between input devices.
7. It is hard to finish drawing a region; a double click is needed.
8. The tool can not be chosen with the Wacom pen.

The third problem, regarding the export to Excel, was disregarded since it was not in the scope of this project.

4.2 Requirements analysis
A requirements analysis was made after the interview. The derived user profile, contextual task analysis, platform and usability goals are presented in Table 4.1. The usability goals worked as requirements for the product that was to be developed. The most important usability goal was for the system to allow for efficient annotations. The user had the task of annotating a large number of images, and time was highly valuable. While the annotations had to be done quickly, they also had to be correct, i.e. the system had to be effective. When working with medical image data, it is crucial that the data is not in any way altered or corrupted. With that in mind, safety was added to the list. When combining automatic and manual segmentation in a tool, it is easy for the system to become complicated and unintuitive, which might repel the user. Therefore, designing an intuitive system is important.

Table 4.1: The user profile, contextual task analysis, platform and usability goals derived from the requirements analysis phase.

User profile
Doctors
Normal computer users
Inexperienced at drawing electronically
Some weeks of experience with the current software

Contextual task analysis
Annotate areas in microscopic images by doing the following steps.
1. Draw around area
2. Label area
3. Transfer data to MS Excel sheet using an ontology

Platform
Windows
Mouse and keyboard
Electronic drawing board and Wacom pen

Usability goals
Efficient (time/annotation)
Effective (correct annotations)
Safety (images are intact, correct segmentation every time)
Intuitive (time to learn)

4.3 Iteration I
Iteration I started with a brainstorming session followed by concept development in order to find a suitable solution to the problems found in the requirements analysis phase. From the concept, a prototype was developed and presented to the user for feedback.

Prototype development
From the list of problems derived from the initial interview, categories were brought forth to start the brainstorming session. These categories were meant to address the problems listed above but still be open to any ideas that might occur. The categories are listed below.
1. Draw regions. How will the user interact with the program to segment the image?
2. Label areas. How will the user interact with the program to create labels for the segments?
3. See borders. How will the program assist the user in order to better see the natural borders in the image?
4. Choose tool to use for segmenting. How will the user interact with the program to choose the segmenting tool to use?
5. Finish creating segment. How will the user end the interaction for creating a segment?

For each of these categories, a 5 minute brainstorming session was held. New ideas on how to solve some of these problems were thought up. The result of the brainstorming session consisted of a collection of several items for each category. The items that were found in the brainstorming session were written on post-it notes in different colors depending on which category they addressed. The post-it notes were put on a whiteboard, as seen in Figure 4.1.

Figure 4.1: Post-its containing ideas of how to solve problems, structured on a whiteboard.

After the post-it notes were written down, the concept development began. Both developers created concepts as they saw fit with the given post-its. A concept meant having at least one item per category. A total of two concepts were developed. The two concepts with the solutions for the problems are presented in Table 4.2.

From concept to prototype
When creating a prototype from the concepts, what was considered the best idea of how to draw regions was selected, which was from concept 2, namely: a stroke with a color will search the image for similar regions and mark them with that color. Using the idea from the concept, further brainstorming was performed in order to figure out how to realize it in a feasible way. A common approach, presented by Homeyer et al. [17], is to divide the image into several smaller regions, which lets the algorithms cluster larger pieces of data rather than single pixels. In the paper, they divide the image into tiles, which is efficient, but a drawback is that the natural regions in medical images are seldom square, which means that a tile most likely will contain parts of multiple tissue types. The only way to improve the accuracy of the tiles is to decrease their size, which however will make the algorithms run slower, and manual marking of pixels will take more time. To counter this, the prototype would use superpixels, which are more similar to the natural regions of the image, making the end result closer to ground truth and allowing for larger segments than the tiles. Moreover, the workflow in the prototype was divided into multiple steps, which were shown in a sidebar on the screen.

The prototype was a simple, clickable solution which showed basic ideas about what the process of annotating images could look like. Figure 4.2 shows one of several slides in the Balsamiq prototype of what it could look like when a user marks superpixels with a red line.
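To make the difference between the two partitioning strategies mentioned above concrete, the following Python sketch builds both a fixed square-tile label map and a SLIC superpixel label map for the same image. The file name, tile size and SLIC parameters are illustrative assumptions, and scikit-image is assumed as the SLIC implementation.

```python
import numpy as np
from skimage import io
from skimage.segmentation import slic

image = io.imread("histology.png")[:, :, :3]   # assumed input file
h, w = image.shape[:2]

# Fixed square tiles: every pixel gets the id of the tile it falls in,
# regardless of image content.
tile_size = 32
tiles_per_row = (w + tile_size - 1) // tile_size
tile_labels = (np.arange(h)[:, None] // tile_size) * tiles_per_row \
              + (np.arange(w)[None, :] // tile_size)

# SLIC superpixels: roughly the same number of regions, but their borders
# adapt to the local colour structure of the tissue.
n_regions = (h // tile_size) * (w // tile_size)
superpixel_labels = slic(image, n_segments=n_regions, compactness=10)
```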

Table 4.2: Comparison of concepts.

Draw regions
Concept 1: A new tool that can move borders better than the existing way of doing it; this tool will push the borders. A touch on the Wacom screen will draw the area that was clicked. Clicking on an area will automatically draw that area. When "hovering" an area, the program will suggest a region which will be drawn on click.
Concept 2: A new tool that can move borders better than the existing way of doing it; this tool will push the borders. A stroke with a color will search the image for similar regions and mark them with that color. "Ctrl-click" can mark several areas.

Label regions
Concept 1: Be able to group drawn regions that will be labeled at the same time with the same label. Have a sort of palette with predefined labels that can be dragged onto the screen and dropped on the region to label it. Have a category for each image that the user sets before working, which contains all the labels that the user has entered for that category. Auto-complete labels when entering them.
Concept 2: Have a sort of palette with predefined labels that can be dragged onto the screen and dropped on the region to label it. Have a category for each image that the user sets before working, which contains all the labels that the user has entered for that category.

See borders
Concept 1: Tool that enhances the chosen color on the screen.
Concept 2: Implement a contrast or brightness tool. Sample a color on screen and enhance some of the colors.

Choose tool
Concept 1: The chosen tool is set until the user changes it. Use arrow keys on the keyboard to change tool. Have a binding that toggles between the latest two tools.
Concept 2: Sidebar containing tools which expands on mouse-over. Have a static footer on the screen with tools which is always visible.

Finish drawing
Concept 1: A region can only be completed by connecting the end point to the start point. Remove the auto-complete line that exists in the current tool.
Concept 2: Press the Enter key.

Figure 4.2: Slide from the Balsamiq prototype. Superpixels are marked with white borders and a possible user interaction is shown as a line in red. The sidebar currently shows the "Mark regions" step.

Prototype Demonstration
The first prototype demonstration was done to confirm or disprove any ideas pertaining to the first concept. Since this prototype was not interactive, the developers showed its core concept while explaining it to the subjects. After the demonstration a series of questions was asked. The full questionnaire with answers is listed in Appendix A.2 and the following paragraph contains a summarized version.

The result from the prototype demonstration showed that the user liked how the workflow was implemented in the toolbox, and that the prototype had included all the steps needed to annotate an image. One important aspect that came up during the demonstration was that it was very time-consuming to correct drawn regions. The user stated that if the algorithm, with more manual pre-work, would perform a good segmentation, that would be preferred. Another interesting finding was that the palette with the "drag and drop" feature for labeling areas would be better implemented if it was interacted with through clicks, since the screen is large. The user also stated that if labeling could be done automatically, that would be preferred.

4.4 Iteration II
During the second iteration, two Matlab prototypes were developed in parallel. The design of the prototypes was based on information gathered during the first iteration. The prototypes were later tested by letting a user complete tasks in both of them. The gathered information was used to evaluate the tools' usability.

Matlab Prototypes
Since the users stated that the superpixels seemed to have good enough borders in the Balsamiq prototype, and it would be hard to test the functionality without having an interactive prototype, two different prototypes were created, both containing a set of different tools for marking areas in an image.

The core of the prototypes was to divide the image into a set of superpixels, which were used both for automatic clustering and for letting the user manually select or remove superpixels from the created segments with some interactions that are explained later. When the prototypes were started, the image underwent preprocessing where it first was divided into superpixels. The chosen amount of superpixels allowed the user to create segments with the intended interactions without getting too many errors. With too many superpixels, the efficiency dropped, since a segment would consist of many superpixels which would be time-consuming to group together into one segment. With too few superpixels, the user's options were limited, since a large segment could consist of a few superpixels with a lot of errors. This was followed by calculation of a feature vector for each superpixel, consisting of three RGB features, three HSV features and 10 LBP features. Based on these feature vectors, the superpixels were clustered using k-means, with k = 20. The clusters created small groups of similar superpixels next to each other, forming slightly larger superpixels. With a high k-value, more and smaller groups were created, while a smaller k would result in larger groups. Tools for selecting these groups of superpixels were implemented along with tools for selecting single superpixels. It was tested by the developers and found that a k-value of 20 worked, since the user could use the tools to create segments out of the larger superpixel groups while still not getting too many errors in the segments.

The process that the prototype was supposed to illustrate was that the user selected a label with a corresponding color. The user then used the interaction of the chosen tool to select superpixels that would be assigned to that chosen label. The prototypes, however, only had functionality for creating segments and not for labeling them. Instead, different regions were represented by different colors.

Prototype 1
The first prototype consisted of tools for drawing lines and creating enclosed areas that both selected superpixels within their regions. The thought behind this was to let the user have more control over the segmentation process at the cost of more work needed. In order to segment different tissue types, the user was able to pick different colors using the number keys on the keyboard.

Line Tool The first and most basic tool in the prototype was a line drawing tool that selected every superpixel that intersected the line.

Smart Line Tool There was also a more advanced version of the line drawing tool, which not only selected the intersected superpixels but also their neighboring superpixels in the same cluster, as described above. The spread was however limited to a set radius around each of the intersected superpixels. Since this tool was prone to spreading into surrounding tissue, a version of the tool existed that locked the already created segments. This meant that segments could not be overridden by the newly created segment.

Smart Area Tool Lastly, there was a tool for selecting larger areas called the Area tool. Visually it looked the same as the line tool, but when the user finished drawing the line, the start and end points were connected with a new line, creating an area.
Every superpixel intersected by the line was added to the segment, as well as every superpixel within the drawn area. In addition to this, the neighbors of the intersected superpixels were also added to the segment. Just like with the line tool, there was a possibility to restrict the tool from spreading to already segmented superpixels.

Erasing The prototype did not have a dedicated erasing tool. Instead the user was able to select a "background color", which worked with all the tools presented above. Superpixels marked with this color would be removed from their current segments and were effectively erased.

Undo Simple functionality for undoing the latest action, which allowed the user to commit errors more freely without having to worry about it.

Prototype 2
The basic idea behind the second prototype was to have simple click functionality: one click that would select a very large area of superpixels, with the possibility of getting small "islands", and one click that only selected coherent superpixels and grouped them, which could not result in any "islands". An "island" occurs when superpixels that belong together are not coherent, which makes them look like islands. The click functions took away some control from the user, but could prove great when wanting to select large regions quickly.

Magic wand tool This tool included the two click functions stated above. Its purpose was to segment large areas with a simple interaction, which was to click somewhere in the region that the user wanted to segment. The left click only segmented a coherent group by checking the cluster that the clicked superpixel belonged to, then iteratively searching for neighbors that belonged to the same cluster, adding neighbors with the same cluster to the segment. This click function was called "local select". The right click checked the cluster that the clicked superpixel belonged to, then created a segment of all the superpixels in the image that belonged to that cluster. This was called "global select".

Smart line The smart line tool was a line drawing tool which segmented all the superpixels that were hit by the line, then added superpixels to the segment by checking if nearby superpixels were in the same cluster as the ones that were hit by the line.

Remove tool The remove tool was a line drawing tool which removed all hit superpixels from any segment that they belonged to.

Lock feature The lock feature enabled the user to lock all the segments that were completed, rendering the user incapable of destroying finished segments. The user could toggle between locked and unlocked.

Segmenting using the prototypes
Figure 4.3 shows how to interact with the prototype in order to segment the image, and Figure 4.4 shows the result of the interaction. As seen in the figures, the effort to make the interaction is small in relation to the quality of the result given by the algorithm. Figure 4.5 illustrates what a complete segmentation looked like in the prototype.
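As a concrete illustration of the preprocessing both Matlab prototypes relied on (SLIC superpixels, a per-superpixel feature vector built from RGB, HSV and LBP values, and k-means with k = 20), a minimal Python sketch is given below. The library choices (scikit-image, scikit-learn), the file name and all parameter values except k are assumptions for this sketch; the thesis prototypes were written in Matlab.

```python
import numpy as np
from skimage import io, color
from skimage.segmentation import slic
from skimage.feature import local_binary_pattern
from sklearn.cluster import KMeans

image = io.imread("histology.png")[:, :, :3]               # assumed input file
sp_labels = slic(image, n_segments=2000, compactness=10)   # superpixel id per pixel

hsv = color.rgb2hsv(image)
gray = color.rgb2gray(image)
# Uniform LBP with 8 neighbours gives 10 possible values per pixel.
lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")

features = []
for sp in np.unique(sp_labels):
    mask = sp_labels == sp
    rgb_mean = image[mask].mean(axis=0) / 255.0             # 3 RGB features
    hsv_mean = hsv[mask].mean(axis=0)                        # 3 HSV features
    lbp_hist, _ = np.histogram(lbp[mask], bins=10, range=(0, 10), density=True)  # 10 LBP features
    features.append(np.concatenate([rgb_mean, hsv_mean, lbp_hist]))
features = np.asarray(features)

# Group the superpixels into k = 20 clusters of visually similar superpixels.
clusters = KMeans(n_clusters=20, n_init=10, random_state=0).fit_predict(features)
```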

Figure 4.3: Example of a smart-line interaction in the two prototypes. The line is drawn in the white tissue of the image.

Figure 4.4: Results from the interaction in Figure 4.3. The result is marked in red.

Figure 4.5: Complete segmentation of colon with different tissues in red, green and blue after using the Matlab prototypes. Both prototypes could achieve this result.

Usability Test
Since there were two different prototypes developed in the second iteration, two separate tests were run in succession. Each test was initialized with a short demonstration of the prototype by one of the developers. After the demonstration, the user tried to complete a task similar to what is done with the software they have today. This task was to mark the different regions in the image as well as possible by only using the provided tools. When the task was complete, a series of questions was asked following a pre-defined questionnaire. The level of frustration was asked about as well, with the possibility of answering 1-5, where 1 is no frustration and 5 is maximum frustration.

Prototype 1
The results from the test of Matlab prototype 1 are presented in this section. This is a summarized version of the test. The full, transcribed version is available in Appendix A.3.

Line tool The user liked the interaction of the tool, however the user thought that the tool could be better if it was possible to make finer adjustments. The user tried to draw areas close to perfect initially, which made the Line tool that could override other areas superfluous. The tool got a frustration grade of 4.

Area tool The user thought it was easier to understand than the Line tool. It was better since it was able to snap to other segments. The user did not like that some leakage occurred when a line was drawn slightly outside the intended region, and sometimes the tool did not encase as much as expected. The Area tool that could override other areas was not liked. The tool got a frustration grade of 2.

Prototype as a whole The user got the feeling that the software sometimes understood the natural borders in the image, but it did not follow the natural borders exactly when drawing. The user liked the feature of having different colors for different types of tissue. It was found that instead of drawing around regions, it was more liked to draw inside of them. It was also appreciated that the user did not have to fill in the entire area, but that the algorithm helped. An issue with the prototype was leakage when drawing outside the region of interest.

The prototype as is would probably save the user some arm motion, but the frustration could however be worse than before. A set of issues and opportunities was derived from the test.

Issues derived from the test:
1. Hard to make fine adjustments.
2. Having an "erasing color" for all tools was superfluous.
3. Leakage when drawing slightly outside the region of interest.
4. Did not encase as much as expected sometimes.
5. Did not follow natural borders exactly when drawing.

Opportunities found in the test:
1. Good interactions by drawing lines.
2. The Area tool was easy to understand.
3. Sometimes understood natural borders.
4. Having different colors when creating different segments.
5. The algorithm helped by filling in areas.
6. Saving arm movement.

Prototype 2
The results from the test of Matlab prototype 2 are presented in this section. This is a summarized version of the test. The full, transcribed version is available in Appendix A.4.

Magic wand tool The click interaction was appreciated by the user and the user felt like the tool understood what the user wanted to do when using the local select. However, sometimes the created segment was too small. The global select was fun and had potential according to the user, with the negative effect of adding a lot of scatter. The user saw a use for the local select in the current work. The frustration grade of using the local select was 2 and of using the global select was 5.

Smart line The user felt like it was a good tool for correcting scattered areas. The user had a feeling of control when using it and thought it was fun when the user did not really know what it would do. It was however too weak and it felt like it was in the shadow of the "Magic wand tool". The smart line got a 4 on the frustration scale.

Remove tool The user thought the tool had good response when removing, but was bothered by the fact that it did not remove everything within a drawn circle. It got a 5 on the frustration scale.

Lock feature This feature was appreciated together with the "Magic wand tool", since the lock feature made it respect the already created segments. The user felt safe from destroying previously created segments on the image when using it.

Prototype as a whole The prototype was fun to use and the click interaction was more appreciated than drawing lines. However, it did not segment the image as the user wished it would. Some editing of created segments was required. The user felt that the eraser tool did not really work and that the time to annotate an image could potentially be longer than before. However, the user felt that it could save arm movement and that the created segments were better with the prototype than with the current software. A set of issues and opportunities was derived from the test.

Issues derived from the test:
1. Too small segments created when using the "Magic wand tool (local select)" and "Smart line".
2. Scatter with the "Magic wand tool (global select)".
3. The eraser does not delete everything in the encased area, which the user expected it to.
4. Did not always select what the user wished.
5. Could potentially require more time to annotate an image than the current software.

Opportunities found in the test:
1. Good interaction by clicking.
2. Felt like the "Magic wand tool (local select)" understood what the user wanted.
3. Felt safe while using the "Lock feature".
4. Saving arm movement.
5. Created segments are better than with the current software.

Comparing the two prototypes and general thoughts about the whole product
The user would prefer to have a dedicated eraser tool instead of an erasing color, although with the added functionality to erase areas. The user liked having a lock feature instead of two versions of the same tool, since it was hard to keep track of all the tools. The user liked both drawing inside areas and clicking interactions. It was better to paint with colors inside regions than to draw around them. The drawn areas were better than what could be done manually in some cases, but nothing could be said about time saving. A note from the user was that if there existed several regions of the same tissue that were not coherent, the user would like to be able to mark them one by one by clicking, while not getting scatter. The asked questions along with the answers can be found in Appendix A.5 and Appendix A.6.

Usability Evaluation
After the prototype was demonstrated and some questions had been answered by the pathologist, it was found that the click interaction was much appreciated, along with the idea of drawing lines inside regions of interest instead of around them. Another interesting fact that was mentioned was that it was more fun to use the prototypes than the current manual program. The level of detail of the created segments obtained by prototype 1 was worse than before, and in prototype 2 it was better than before. There existed some frustration when the tools did not do what the user intended. Much of the frustration was generated because the user tried to mark an area which sometimes became much larger than intended or encapsulated parts which it was not supposed to include. The most common comment when this happened was that the program had a mind of its own and did not understand what the user really wanted.

4.5 Iteration III
In the third iteration, a hi-fi prototype was developed to address the issues found in the usability evaluation of the two Matlab prototypes. The prototype was tested with a number of pathologists. This section describes how the prototype worked and the results of the usability evaluation.

Hi-Fi Prototype
The hi-fi prototype used SLIC for creating superpixels, similar to the Matlab prototypes. A change that was made was that the feature vector for each superpixel was calculated differently, along with how superpixels were grouped. Now superpixels were grouped based on similarities at the time of interaction and not in the preprocessing stage. The HSV values of each pixel were now used to cluster every pixel in the image into 15 clusters. The cluster value of k = 15 was chosen since the histology images often contain several regions of interest, ranging from around four to eight. These regions of interest might contain pixels with different characteristics such as color. The k-value of 15 was tested by the developers and made a good enough separation between the pixels while still keeping uniform regions relatively intact. A k-value lower than 15 would have the effect that some characteristics that needed to be kept apart might have been merged together. If a k-value bigger than 15 was chosen, it would result in some pixels being classified into different clusters while naturally belonging to the same regions of interest. For each superpixel, the count of pixels in each cluster was saved as a vector. The same was done for the LBP features, where the LBP feature for each pixel was first calculated on the entire image and each pixel was given a value from 0 to 35 based on the 36 rotation-invariant possibilities, as mentioned in section . The same procedure of counting the pixels' LBP values for each superpixel was then performed, and the two vectors were combined into a vector of 51 features, which was normalized. This vector was later used to measure Euclidean distances between superpixels, allowing a similarity measurement to be made between them. Note that the feature vector for each superpixel is a histogram of the cluster and rotation-invariant LBP value of each pixel pertaining to that superpixel.

The solution was divided into a server that did all the calculations and a client that only showed the results of the calculations. The server was written in Python using Flask and had functionality for preprocessing the image, drawing segments with the various tools and fetching the drawn segment data. The client had functionality to visually show the created segments along with their labels, as seen in Figure 4.6. The user was able to select or create a new category that they wanted to annotate, for example colon, using a drop-down list. Inside that category was a set of labels, for example mucosa, that showed up in a scrollable list below the category list. It was also possible to create new labels. Upon selecting a label, the user could interact with one of the available tools to draw regions on the image that would then be labeled with the selected label. Figure 4.10 illustrates the graphical interface for the toolbox that was used during the usability tests. Since the test did not involve creating new categories or labels, those functionalities were removed from the graphical interface. When a user selected a tool, the cursor was set according to Figure 4.7.
This design was used since the pilot tests showed that it was hard to know which tool was chosen; the user often erased when trying to draw a line. These cursors were assumed to solve that problem.
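The following Python sketch illustrates how such a 51-dimensional per-superpixel feature vector (15 HSV cluster bins plus 36 rotation-invariant LBP bins) can be built. Function names, library choices (scikit-image, scikit-learn) and parameters such as the LBP neighbourhood are assumptions made for this sketch, not the thesis implementation.

```python
import numpy as np
from skimage import color
from skimage.feature import local_binary_pattern
from sklearn.cluster import KMeans

def ri_lbp_index(P=8):
    """Map every rotation-invariant LBP code to an index 0..35 (for P = 8)."""
    codes = set()
    for v in range(2 ** P):
        rotations = [((v >> i) | (v << (P - i))) & (2 ** P - 1) for i in range(P)]
        codes.add(min(rotations))
    lut = np.zeros(2 ** P, dtype=int)
    for idx, code in enumerate(sorted(codes)):
        lut[code] = idx
    return lut, len(codes)

def superpixel_histograms(image_rgb, sp_labels, k=15):
    # Cluster every pixel's HSV value into k clusters (slow on large images;
    # a real implementation could subsample the pixels first).
    hsv = color.rgb2hsv(image_rgb).reshape(-1, 3)
    pixel_clusters = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(hsv)
    pixel_clusters = pixel_clusters.reshape(sp_labels.shape)

    # Rotation-invariant LBP code per pixel, remapped to indices 0..35.
    gray = color.rgb2gray(image_rgb)
    lut, n_codes = ri_lbp_index(8)
    lbp_idx = lut[local_binary_pattern(gray, P=8, R=1, method="ror").astype(int)]

    feats = {}
    for sp in np.unique(sp_labels):
        mask = sp_labels == sp
        cluster_hist = np.bincount(pixel_clusters[mask], minlength=k)
        lbp_hist = np.bincount(lbp_idx[mask], minlength=n_codes)
        vec = np.concatenate([cluster_hist, lbp_hist]).astype(float)
        feats[sp] = vec / vec.sum()          # normalized, 15 + 36 = 51 values
    return feats

# Similarity between two superpixels a and b can then be measured as
# np.linalg.norm(feats[a] - feats[b]).
```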

Figure 4.6: Example annotations using the prototype, in red and blue.

Figure 4.7: Cursors used in the prototype. To the left is the Magic wand, in the middle is the Eraser and to the right is Navigation. Credits to plainicon.com for the magic wand and eraser icons.

Magic Wand Tool
To counteract the issue of creating too large areas or getting "leakage" when using the magic wand tool, as discovered during the user test in iteration two, the new magic wand tool did not use the preprocessed superpixel cluster data. Instead, it performed a live neighbor search from each superpixel hit by the drawn line. To decide whether to include a superpixel in the result, the algorithm compared the distance between the feature vectors of the first superpixel hit by the line and each neighbor. If a neighbor had a small enough distance to the first superpixel, it was included. This design would constrain the spread of the segment to only include superpixels that were directly adjacent to the superpixels hit by the interaction. The first superpixel was chosen since it was thought to represent the tissue the user wanted to include. The possible interaction methods for the magic wand were either a single click on the image or drawing a line. If the line's start and end points were close enough, they would snap and create an area, which meant that the returned segment would also include all superpixels inside the area.

The draw-line interaction is shown in Figure 4.8 and the result is shown in Figure 4.9.

Figure 4.8: How the line looks when a user draws a line in blue using the prototype. The start of the line is marked with "Start". The line starts in the white tissue and continues past the tissue and out into the "background" region, which does not contain any tissue. Superpixel borders are shown in green. Note that this is an illustration to show what superpixels look like, not how it was displayed in the final prototype.

Figure 4.9: How the result of the interaction shown in Figure 4.8 can look, in blue. Note that only the white tissue is marked and the background is not marked. Superpixel borders are shown in green. Note that this is an illustration to show what superpixels look like, not how it was displayed in the final prototype.

Eraser Tool
The eraser tool had similar interactions to the magic wand tool, allowing for either clicking, drawing a line or connecting the line to create an area. The eraser, however, did not do an iterative search for similar superpixels and only erased superpixels directly intersected by the line, or inside of the area. This was to give the user more control over the segmentation.

Navigation Tool
In the prototype it was also possible to zoom in and pan. Panning was however not possible while using the magic wand or the eraser, since their interactions clashed. To be able to pan, the user had to choose the navigation tool, which basically meant that no drawing was done when dragging the mouse on the image.
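A minimal sketch of the magic wand's live neighbour search described above is given below. The data structures, the function name and the distance threshold are assumptions made for illustration; the actual implementation lived on the Flask server.

```python
import numpy as np

def magic_wand(hit_superpixels, neighbors_of, feats, threshold=0.15):
    """Return the superpixels hit by the user's line plus directly adjacent
    superpixels whose feature vector is close enough (Euclidean distance)
    to the first superpixel hit.

    hit_superpixels -- ordered list of superpixel ids intersected by the line
    neighbors_of    -- dict: superpixel id -> iterable of adjacent ids
    feats           -- dict: superpixel id -> normalized feature vector
    threshold       -- illustrative distance threshold, not the thesis value
    """
    if not hit_superpixels:
        return set()
    seed_vec = feats[hit_superpixels[0]]   # assumed to represent the wanted tissue
    segment = set(hit_superpixels)
    for sp in hit_superpixels:
        for nb in neighbors_of[sp]:
            if nb not in segment and np.linalg.norm(feats[nb] - seed_vec) <= threshold:
                segment.add(nb)
    return segment
```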

Figure 4.10: The graphical interface of the prototype. From top to bottom, the first element is a drop-down list where the user can select a category, followed by an input field where the name of the category can be edited. Next comes a list of all the labels in the chosen category and their corresponding colors. At the bottom of the toolbox is a container with the various tools available to the user.

Figure 4.11: The prototype opened in the program, shown under the mini-image to the left.

Usability Evaluation
The feedback given by the users during the usability tests was mostly positive and they saw a lot of potential in the current solution.

The users thought that the graphical user interface was very easy and intuitive, which was also noted by the observer during the think-aloud sessions. Once they got used to the interactions, most of the users felt that it was a fun experience using the tools. The users that had been exposed to completely annotating a histology image manually expressed their gratitude for the prototype's ability to do a lot of the work for them, and in a fun way. They also said that with some minor fixes it would probably beat manual annotation as a whole. Aside from manually annotating images, users mentioned other use cases for the toolbox, for instance measuring areas of tissue or counting cells inside a segment.

Instant Data Analysis
After the usability tests, a brainstorming session was held where the results were analysed and compiled into a list of issues. The focus during the session was to localize all issues and not to focus on the positive aspects of the users' feedback; that was instead done separately during another session.

The system does not show the user how to use the tools correctly. There is no indication of where to draw lines or make areas to get the wanted segment.

Users do not know the result of interacting with the tools beforehand. The result is too unpredictable. Users expect the magic wand tool to be smarter and fill natural regions when it does not.

Users can not achieve their goal with segments. The borders of the drawn regions created by the system do not match the natural borders of the image well enough to be satisfactory.

It is hard to see which tool is chosen. Users often make interactions with the wrong tool chosen, for example attempting to create regions with the eraser tool.

It is hard to see which label is chosen. Users mentioned that they had trouble seeing the chosen label and that reading the current label text was cumbersome.

Users do not understand the intended interactions with the tools. Users pick sample points outside the region of interest and draw lines around natural borders instead of inside the regions.

Response time is too long. The system did not provide feedback on issued commands fast enough, which led to errors committed by the user. Users tried to undo the latest action, the system did not respond quickly enough, which led to the user undoing again; the system later undid two actions.

The labeling of regions is confusing. Holes inside regions have their own label, which makes them look like islands to the user. When zoomed in, labels might not be shown for regions at all.

Users do not see the results of an action when at max zoom. The regions created can extend outside the visible part of the image when zoomed in, which may be confusing.

Users expect to abort an ongoing interaction when pressing Esc. Users mentioned this is the usual behaviour in other programs that they use.

Users think selecting a color from the toolbar should automatically select the magic wand tool.

The eraser indicates a specific color but removes all colors when used. The lines created by the eraser tool have the same color as the chosen color, which is otherwise used only for creating segments. Users then expect the eraser to only remove the chosen color.

The selection of the navigation mode is unintuitive. The users are used to navigation being the default mode, which is consistent with the rest of the current software. This causes confusion or errors.

Upon opening the toolbox it is unclear which tool is selected. None of the three tool buttons are highlighted when opening the toolbox, but the magic wand tool is selected.

Issues tend to overshadow the positive aspects, but it was also found that the prototype had some strengths. These positive aspects are listed below.

The GUI was intuitive. Easy to understand what to do. Easy to find what you are looking for.

The prototype can do a lot of work for the pathologists. Saves time. Saves effort.

Potential to become better than manual annotation. Minor changes needed to supersede manual annotation. Some users thought it was better already.

The prototype has other use cases. Measure the area of tissue. Count cells inside an area.

The program offered a fun experience once learned. Can allow users to use it for longer sessions.

Some segments created were better than manually possible. Some users would not be able to reproduce the same results as the prototype with the current software.

Quantitative test
Aside from the qualitative test with pathologists, a quantitative test was held where times were recorded for a set of users completing the task of fully annotating an image using both the prototype and the manual tool available. The results are listed in Table 4.3 and images of the annotations can be found in Figure 4.13.

Table 4.3: The left column shows the time taken using the manual approach and the right column the time taken using the hi-fi prototype. The values represent the time in seconds it took to complete the task. P1 through P8 represent different users.

A paired t-test was conducted using the data in Table 4.3 to compare the time of annotating an image using the two different annotation methods. As seen in Table 4.4, the mean time for completing the task using the manual method was 239 seconds, while it took 213 seconds to complete the same task using the prototype, which gives a difference of roughly 26 seconds in favour of the prototype. The difference was however not statistically significant (p-value = ), meaning that the measured difference in time might be random.

Table 4.4: Mean value and standard deviation for completing the task using the two different methods. The values are presented in seconds.

An interesting finding in the resulting annotations is that the quality of the annotation in the prototype does not depend on the user's skill at drawing, nor on their annotation speed or level of ambition. Comparing the two manual annotations in Figure 4.13, it is quite clear that the slower user did a better job at recreating the natural borders of the tissue. Looking at the time taken for the task, the slow user took almost four times longer than the fast user. If one instead compares the results from using the prototype, one could even argue that the faster user got a better result than the slower user. This suggests that it is not the speed at which the annotation is made that determines its quality, but rather the understanding of the toolbox. It was noted during the tests that the slower user had issues fully understanding how to interact with the tool, while the fast user learned the intended way of using it almost immediately.

Figure 4.12: Graph showing the measured durations by annotation technique.

Figure 4.13: a. Annotation done in prototype by a relatively slow user. Time: 370 seconds. b. Annotation done in prototype by a relatively fast user. Time: 113 seconds. c. Annotation done manually by a relatively slow user. Time: 442 seconds. d. Annotation done manually by a relatively fast user. Time: 180 seconds. All labels attached to the segments are irrelevant in this case.


More information

A Back-End for the SkePU Skeleton Programming Library targeting the Low- Power Multicore Vision Processor

A Back-End for the SkePU Skeleton Programming Library targeting the Low- Power Multicore Vision Processor Linköping University Department of Computer Science Master thesis, 30 ECTS Datateknik 2016 LIU-IDA/LITH-EX-A--16/055--SE A Back-End for the SkePU Skeleton Programming Library targeting the Low- Power Multicore

More information

Implementation and Evaluation of Bluetooth Low Energy as a communication technology for wireless sensor networks

Implementation and Evaluation of Bluetooth Low Energy as a communication technology for wireless sensor networks Linköpings universitet/linköping University IDA HCS Bachelor 16hp Innovative programming Vårterminen/Spring term 2017 ISRN: LIU-IDA/LITH-EX-G--17/015--SE Implementation and Evaluation of Bluetooth Low

More information

Advanced Visualization Techniques for Laparoscopic Liver Surgery

Advanced Visualization Techniques for Laparoscopic Liver Surgery LiU-ITN-TEK-A-15/002-SE Advanced Visualization Techniques for Laparoscopic Liver Surgery Dimitrios Felekidis 2015-01-22 Department of Science and Technology Linköping University SE-601 74 Norrköping, Sweden

More information

Computer-assisted fracture reduction in an orthopaedic pre-operative planning workflow

Computer-assisted fracture reduction in an orthopaedic pre-operative planning workflow LiU-ITN-TEK-A--17/003--SE Computer-assisted fracture reduction in an orthopaedic pre-operative planning workflow Ludvig Mangs 2017-01-09 Department of Science and Technology Linköping University SE-601

More information

Adapting network interactions of a rescue service mobile application for improved battery life

Adapting network interactions of a rescue service mobile application for improved battery life Linköping University Department of Computer and Information Science Bachelor thesis, 16 ECTS Information Technology Spring term 2017 LIU-IDA/LITH-EX-G--2017/068--SE Adapting network interactions of a rescue

More information

Department of Electrical Engineering. Division of Information Coding. Master Thesis. Free Viewpoint TV. Mudassar Hussain.

Department of Electrical Engineering. Division of Information Coding. Master Thesis. Free Viewpoint TV. Mudassar Hussain. Department of Electrical Engineering Division of Information Coding Master Thesis Free Viewpoint TV Master thesis performed in Division of Information Coding by Mudassar Hussain LiTH-ISY-EX--10/4437--SE

More information

Development of a Game Portal for Web-based Motion Games

Development of a Game Portal for Web-based Motion Games Linköping University Department of Computer Science Master thesis, 30 ECTS Datateknik 2017 LIU-IDA/LITH-EX-A--17/013--SE Development of a Game Portal for Web-based Motion Games Ozgur F. Kofali Supervisor

More information

Optimal Coherent Reconstruction of Unstructured Mesh Sequences with Evolving Topology

Optimal Coherent Reconstruction of Unstructured Mesh Sequences with Evolving Topology LiU-ITN-TEK-A-14/040-SE Optimal Coherent Reconstruction of Unstructured Mesh Sequences with Evolving Topology Christopher Birger 2014-09-22 Department of Science and Technology Linköping University SE-601

More information

Calibration of traffic models in SIDRA

Calibration of traffic models in SIDRA LIU-ITN-TEK-A-13/006-SE Calibration of traffic models in SIDRA Anna-Karin Ekman 2013-03-20 Department of Science and Technology Linköping University SE-601 74 Norrköping, Sweden Institutionen för teknik

More information

Institutionen för datavetenskap Department of Computer and Information Science

Institutionen för datavetenskap Department of Computer and Information Science Institutionen för datavetenskap Department of Computer and Information Science Master s Thesis An Approach on Learning Multivariate Regression Chain Graphs from Data by Babak Moghadasin LIU-IDA/LITH-EX-A--13/026

More information

Design Optimization of Soft Real-Time Applications on FlexRay Platforms

Design Optimization of Soft Real-Time Applications on FlexRay Platforms Institutionen för Datavetenskap Department of Computer and Information Science Master s thesis Design Optimization of Soft Real-Time Applications on FlexRay Platforms by Mahnaz Malekzadeh LIU-IDA/LITH-EX-A

More information

Study of Local Binary Patterns

Study of Local Binary Patterns Examensarbete LITH-ITN-MT-EX--07/040--SE Study of Local Binary Patterns Tobias Lindahl 2007-06- Department of Science and Technology Linköpings universitet SE-60 74 Norrköping, Sweden Institutionen för

More information

Multi-Resolution Volume Rendering of Large Medical Data Sets on the GPU

Multi-Resolution Volume Rendering of Large Medical Data Sets on the GPU LITH-ITN-MT-EX--07/056--SE Multi-Resolution Volume Rendering of Large Medical Data Sets on the GPU Ajden Towfeek 2007-12-20 Department of Science and Technology Linköping University SE-601 74 Norrköping,

More information

Design and evaluation of a user interface for a WebVR TV platform developed with A-Frame

Design and evaluation of a user interface for a WebVR TV platform developed with A-Frame Linköping University Department of Computer Science Master thesis, 30 ECTS Information Technology 2017 LIU-IDA/LITH-EX-A--17/006--SE Design and evaluation of a user interface for a WebVR TV platform developed

More information

Extending the Stream Reasoning in DyKnow with Spatial Reasoning in RCC-8

Extending the Stream Reasoning in DyKnow with Spatial Reasoning in RCC-8 Institutionen för Datavetenskap Department of Computer and Information Science Master s thesis Extending the Stream Reasoning in DyKnow with Spatial Reasoning in RCC-8 by Daniel Lazarovski LIU-IDA/LITH-EX-A

More information

Institutionen för datavetenskap Department of Computer and Information Science

Institutionen för datavetenskap Department of Computer and Information Science Institutionen för datavetenskap Department of Computer and Information Science Bachelor thesis A TDMA Module for Waterborne Communication with Focus on Clock Synchronization by Anders Persson LIU-IDA-SAS

More information

Design and evaluation of an educational tool for understanding functionality in flight simulators

Design and evaluation of an educational tool for understanding functionality in flight simulators Linköping University Department of Computer Science Master thesis, 30 ECTS Computer and Information Science 2017 LIU-IDA/LITH-EX-A--17/007--SE Design and evaluation of an educational tool for understanding

More information

Ad-hoc Routing in Low Bandwidth Environments

Ad-hoc Routing in Low Bandwidth Environments Master of Science in Computer Science Department of Computer and Information Science, Linköping University, 2016 Ad-hoc Routing in Low Bandwidth Environments Emil Berg Master of Science in Computer Science

More information

Audial Support for Visual Dense Data Display

Audial Support for Visual Dense Data Display LiU-ITN-TEK-A--17/004--SE Audial Support for Visual Dense Data Display Tobias Erlandsson Gustav Hallström 2017-01-27 Department of Science and Technology Linköping University SE-601 74 Norrköping, Sweden

More information

Statistical flow data applied to geovisual analytics

Statistical flow data applied to geovisual analytics LiU-ITN-TEK-A--11/051--SE Statistical flow data applied to geovisual analytics Phong Hai Nguyen 2011-08-31 Department of Science and Technology Linköping University SE-601 74 Norrköping, Sweden Institutionen

More information

Network optimisation and topology control of Free Space Optics

Network optimisation and topology control of Free Space Optics LiU-ITN-TEK-A-15/064--SE Network optimisation and topology control of Free Space Optics Emil Hammarström 2015-11-25 Department of Science and Technology Linköping University SE-601 74 Norrköping, Sweden

More information

Face detection for selective polygon reduction of humanoid meshes

Face detection for selective polygon reduction of humanoid meshes LIU-ITN-TEK-A--15/038--SE Face detection for selective polygon reduction of humanoid meshes Johan Henriksson 2015-06-15 Department of Science and Technology Linköping University SE-601 74 Norrköping, Sweden

More information

Institutionen för datavetenskap Department of Computer and Information Science

Institutionen för datavetenskap Department of Computer and Information Science Institutionen för datavetenskap Department of Computer and Information Science Final thesis A database solution for scientific data from driving simulator studies By Yasser Rasheed LIU-IDA/LITH-EX-A--11/017

More information

Debug Interface for Clone of DSP. Examensarbete utfört i Elektroniksystem av. Andreas Nilsson

Debug Interface for Clone of DSP. Examensarbete utfört i Elektroniksystem av. Andreas Nilsson Debug Interface for Clone of 56000 DSP Examensarbete utfört i Elektroniksystem av Andreas Nilsson LITH-ISY-EX-ET--07/0319--SE Linköping 2007 Debug Interface for Clone of 56000 DSP Examensarbete utfört

More information

Large fused GPU volume rendering

Large fused GPU volume rendering LiU-ITN-TEK-A--08/108--SE Large fused GPU volume rendering Stefan Lindholm 2008-10-07 Department of Science and Technology Linköping University SE-601 74 Norrköping, Sweden Institutionen för teknik och

More information

Development of water leakage detectors

Development of water leakage detectors LiU-ITN-TEK-A--08/068--SE Development of water leakage detectors Anders Pettersson 2008-06-04 Department of Science and Technology Linköping University SE-601 74 Norrköping, Sweden Institutionen för teknik

More information

Visual Data Analysis using Tracked Statistical Measures within Parallel Coordinate Representations

Visual Data Analysis using Tracked Statistical Measures within Parallel Coordinate Representations Examensarbete LITH-ITN-MT-EX--05/030--SE Visual Data Analysis using Tracked Statistical Measures within Parallel Coordinate Representations Daniel Ericson 2005-04-08 Department of Science and Technology

More information

Distributed Client Driven Certificate Transparency Log

Distributed Client Driven Certificate Transparency Log Linköping University Department of Computer and Information Science Bachelor thesis, 16 ECTS Information Technology 2018 LIU-IDA/LITH-EX-G--18/055--SE Distributed Client Driven Transparency Log Distribuerad

More information

Permissioned Blockchains and Distributed Databases: A Performance Study

Permissioned Blockchains and Distributed Databases: A Performance Study Linköping University Department of Computer and Information Science Master thesis, 30 ECTS Datateknik 2018 LIU-IDA/LITH-EX-A--2018/043--SE Permissioned Blockchains and Distributed Databases: A Performance

More information

Developing a database and a user interface for storing test data for radar equipment

Developing a database and a user interface for storing test data for radar equipment Linköping University IDA- Department of Computer and information Science Bachelor thesis 16hp Educational program: Högskoleingenjör i Datateknik Spring term 2017 ISRN: LIU-IDA/LITH-EX-G--17/006 SE Developing

More information

Hybrid Particle-Grid Water Simulation using Multigrid Pressure Solver

Hybrid Particle-Grid Water Simulation using Multigrid Pressure Solver LiU-ITN-TEK-G--14/006-SE Hybrid Particle-Grid Water Simulation using Multigrid Pressure Solver Per Karlsson 2014-03-13 Department of Science and Technology Linköping University SE-601 74 Norrköping, Sweden

More information

Utilize OCR text to extract receipt data and classify receipts with common Machine Learning

Utilize OCR text to extract receipt data and classify receipts with common Machine Learning Linköping University Department of Computer and Information Science Bachelor thesis, 16 ECTS Programming 2018 LIU-IDA/LITH-EX-G--18/043--SE Utilize OCR text to extract receipt data and classify receipts

More information

OMSI Test Suite verifier development

OMSI Test Suite verifier development Examensarbete LITH-ITN-ED-EX--07/010--SE OMSI Test Suite verifier development Razvan Bujila Johan Kuru 2007-05-04 Department of Science and Technology Linköpings Universitet SE-601 74 Norrköping, Sweden

More information

Towards automatic asset management for real-time visualization of urban environments

Towards automatic asset management for real-time visualization of urban environments LiU-ITN-TEK-A--17/049--SE Towards automatic asset management for real-time visualization of urban environments Erik Olsson 2017-09-08 Department of Science and Technology Linköping University SE-601 74

More information

Institutionen för datavetenskap

Institutionen för datavetenskap Institutionen för datavetenskap Department of Computer and Information Science Final thesis Developing a new 2D-plotting package for OpenModelica by Haris Kapidzic LIU-IDA/LITH-EX-G 11/007 SE 2011-04-28

More information

LunchHero - a student s everyday hero

LunchHero - a student s everyday hero Linköping University Department of Computer Science Bachelor thesis 18 ECTS Industrial Engineering and Management Spring 2018 LIU-IDA/LITH-EX-G--18/034--SE LunchHero - a student s everyday hero - A case

More information

Usability guided development of

Usability guided development of Linköping University Department of Computer and Information Science Bachelor thesis, 16 ECTS Datavetenskap 2018 LIU-IDA/LITH-EX-G--18/004--SE Usability guided development of a par cipant database system

More information

Clustered Importance Sampling for Fast Reflectance Rendering

Clustered Importance Sampling for Fast Reflectance Rendering LiU-ITN-TEK-A--08/082--SE Clustered Importance Sampling for Fast Reflectance Rendering Oskar Åkerlund 2008-06-11 Department of Science and Technology Linköping University SE-601 74 Norrköping, Sweden Institutionen

More information

Evaluating Deep Learning Algorithms

Evaluating Deep Learning Algorithms Linköping University Department of Computer and Information Science Master thesis, 30 ECTS Datateknik 202018 LIU-IDA/LITH-EX-A--2018/034--SE Evaluating Deep Learning Algorithms for Steering an Autonomous

More information

Progressive Web Applications and Code Complexity

Progressive Web Applications and Code Complexity Linköping University Department of Computer and Information Science Master thesis, 30 ECTS Datateknik 2018 LIU-IDA/LITH-EX-A--18/037--SE Progressive Web Applications and Code Complexity An analysis of

More information

A Cycle-Trade Heuristic for the Weighted k-chinese Postman Problem

A Cycle-Trade Heuristic for the Weighted k-chinese Postman Problem Linköping University Department of Computer Science Bachelor thesis, 16 ECTS Computer Science 2018 LIU-IDA/LITH-EX-G--18/073--SE A Cycle-Trade Heuristic for the Weighted k-chinese Postman Problem Anton

More information

Development and piloting of a fully automated, push based, extended session alcohol intervention on university students a feasibility study

Development and piloting of a fully automated, push based, extended session alcohol intervention on university students a feasibility study Department of Computer and Information Science Informationsteknologi LIU-IDA/LITH-EX-A--13/001--SE Development and piloting of a fully automated, push based, extended session alcohol intervention on university

More information

Illustrative Visualization of Anatomical Structures

Illustrative Visualization of Anatomical Structures LiU-ITN-TEK-A--11/045--SE Illustrative Visualization of Anatomical Structures Erik Jonsson 2011-08-19 Department of Science and Technology Linköping University SE-601 74 Norrköping, Sweden Institutionen

More information

Institutionen för datavetenskap. Study of the Time Triggered Ethernet Dataflow

Institutionen för datavetenskap. Study of the Time Triggered Ethernet Dataflow Institutionen för datavetenskap Department of Computer and Information Science Final thesis Study of the Time Triggered Ethernet Dataflow by Niclas Rosenvik LIU-IDA/LITH-EX-G 15/011 SE 2015-07-08 Linköpings

More information

Automatic Clustering of 3D Objects for Hierarchical Level-of-Detail

Automatic Clustering of 3D Objects for Hierarchical Level-of-Detail LiU-ITN-TEK-A--18/033--SE Automatic Clustering of 3D Objects for Hierarchical Level-of-Detail Benjamin Wiberg 2018-06-14 Department of Science and Technology Linköping University SE-601 74 Norrköping,

More information

Implementing a scalable recommender system for social networks

Implementing a scalable recommender system for social networks LiU-ITN-TEK-A--17/031--SE Implementing a scalable recommender system for social networks Alexander Cederblad 2017-06-08 Department of Science and Technology Linköping University SE-601 74 Norrköping, Sweden

More information

Automatic analysis of eye tracker data from a driving simulator

Automatic analysis of eye tracker data from a driving simulator LiU-ITN-TEK-A--08/033--SE Automatic analysis of eye tracker data from a driving simulator Martin Bergstrand 2008-02-29 Department of Science and Technology Linköping University SE-601 74 Norrköping, Sweden

More information

Efficient implementation of the Particle Level Set method

Efficient implementation of the Particle Level Set method LiU-ITN-TEK-A--10/050--SE Efficient implementation of the Particle Level Set method John Johansson 2010-09-02 Department of Science and Technology Linköping University SE-601 74 Norrköping, Sweden Institutionen

More information

Network Intrusion and Detection

Network Intrusion and Detection Linköping University Department of Computer and Information Science Bachelor thesis, 16 ECTS Datateknik 202017 LIU-IDA/LITH-EX-G--2017/085--SE Network Intrusion and Detection An evaluation of SNORT Nätverksintrång

More information

Real-Time Magnetohydrodynamic Space Weather Visualization

Real-Time Magnetohydrodynamic Space Weather Visualization LiU-ITN-TEK-A--17/048--SE Real-Time Magnetohydrodynamic Space Weather Visualization Oskar Carlbaum Michael Novén 2017-08-30 Department of Science and Technology Linköping University SE-601 74 Norrköping,

More information

Institutionen för datavetenskap

Institutionen för datavetenskap Institutionen för datavetenskap Department of Computer and Information Science Final thesis Implementation of a Profibus agent for the Proview process control system by Ferdinand Hauck LIU-IDA/LITH-EX-G--09/004--SE

More information

Towards Automatic Detection and Visualization of Tissues in Medical Volume Rendering

Towards Automatic Detection and Visualization of Tissues in Medical Volume Rendering Examensarbete LITH-ITN-MT-EX--06/012--SE Towards Automatic Detection and Visualization of Tissues in Medical Volume Rendering Erik Dickens 2006-02-03 Department of Science and Technology Linköpings Universitet

More information

Automating the process of dividing a map image into sections using Tesseract OCR and pixel traversing

Automating the process of dividing a map image into sections using Tesseract OCR and pixel traversing Linköping University Department of Computer and Information Science Bachelor thesis, 16 ECTS Innovative programming 2018 LIU-IDA/LITH-EX-G--18/041--SE Automating the process of dividing a map image into

More information

Machine Learning of Crystal Formation Energies with Novel Structural Descriptors

Machine Learning of Crystal Formation Energies with Novel Structural Descriptors Linköping University The Department of Physics, Chemistry, and Biology Master thesis, 30 ECTS Applied Physics and Electrical Engineering - Theory, Modelling, Visualization 2017 LIU-IFM/LITH-EX-A--17/3427--SE

More information

A latency comparison of IoT protocols in MES

A latency comparison of IoT protocols in MES Linköping University Department of Computer and Information Science Master thesis Software and Systems Division Spring 2017 LIU-IDA/LITH-EX-A--17/010--SE A latency comparison of IoT protocols in MES Erik

More information

A user-centered development of a remote Personal Video Recorder prototype for mobile web browsers

A user-centered development of a remote Personal Video Recorder prototype for mobile web browsers LiU-ITN-TEK-G--09/004--SE A user-centered development of a remote Personal Video Recorder prototype for mobile web browsers Johan Collberg Anders Sjögren 2009-01-29 Department of Science and Technology

More information

Interactive GPU-based Volume Rendering

Interactive GPU-based Volume Rendering Examensarbete LITH-ITN-MT-EX--06/011--SE Interactive GPU-based Volume Rendering Philip Engström 2006-02-20 Department of Science and Technology Linköpings Universitet SE-601 74 Norrköping, Sweden Institutionen

More information

Raspberry pi to backplane through SGMII

Raspberry pi to backplane through SGMII LiU-ITN-TEK-A--18/019--SE Raspberry pi to backplane through SGMII Petter Lundström Josef Toma 2018-06-01 Department of Science and Technology Linköping University SE-601 74 Norrköping, Sweden Institutionen

More information

Institutionen för datavetenskap Department of Computer and Information Science

Institutionen för datavetenskap Department of Computer and Information Science Institutionen för datavetenskap Department of Computer and Information Science Final thesis Implementation of a Report Template Editing Tool in Java and JSP by Jacob Matiasson LIU-IDA/LITH-EX-G--14/059--SE

More information

Markörlös Augmented Reality för visualisering av 3D-objekt i verkliga världen

Markörlös Augmented Reality för visualisering av 3D-objekt i verkliga världen LiU-ITN-TEK-A-14/019-SE Markörlös Augmented Reality för visualisering av 3D-objekt i verkliga världen Semone Kallin Clarke 2014-06-11 Department of Science and Technology Linköping University SE-601 74

More information

React Native application development

React Native application development Linköpings universitet Institutionen för datavetenskap Examensarbete på avancerad nivå, 30hp Datateknik 2016 LIU-IDA/LITH-EX-A--16/050--SE React Native application development A comparison between native

More information

Applying Machine Learning to LTE/5G Performance Trend Analysis

Applying Machine Learning to LTE/5G Performance Trend Analysis Master Thesis in Statistics and Data Mining Applying Machine Learning to LTE/5G Performance Trend Analysis Araya Eamrurksiri Division of Statistics Department of Computer and Information Science Linköping

More information

Evaluation of cloud-based infrastructures for scalable applications

Evaluation of cloud-based infrastructures for scalable applications LiU-ITN-TEK-A--17/022--SE Evaluation of cloud-based infrastructures for scalable applications Carl Englund 2017-06-20 Department of Science and Technology Linköping University SE-601 74 Norrköping, Sweden

More information

Implementation of a Program Address Generator in a DSP processor

Implementation of a Program Address Generator in a DSP processor Implementation of a Program Address Generator in a DSP processor Roland Waltersson Reg nr: LiTH-ISY-EX-ET-0257-2003 2003-05-26 Implementation of a Program Address Generator in a DSP processor Departement

More information

Real-time visualization of a digital learning platform

Real-time visualization of a digital learning platform LiU-ITN-TEK-A--17/035--SE Real-time visualization of a digital learning platform Kristina Engström Mikaela Koller 2017-06-20 Department of Science and Technology Linköping University SE-601 74 Norrköping,

More information