Towards automatic asset management for real-time visualization of urban environments


LiU-ITN-TEK-A--17/049--SE

Towards automatic asset management for real-time visualization of urban environments
(Swedish title: Realtidsvisualisering av stadsmiljöer)

Master's thesis in Media Technology, 30 ECTS, carried out at the Department of Science and Technology, Linköping University, Norrköping, Sweden

Erik Olsson

Supervisors: Patric Ljung and Per Larsson
Examiner: Jonas Unger

Norrköping 2017



Copyright

The publishers will keep this document online on the Internet, or its possible replacement, for a period of 25 years starting from the date of publication, barring exceptional circumstances. The online availability of the document implies permanent permission for anyone to read, to download, or to print out single copies for his/her own use and to use it unchanged for non-commercial research and educational purposes. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional upon the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility. According to intellectual property law, the author has the right to be mentioned when his/her work is accessed as described above, and to be protected against infringement. For additional information about Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its WWW home page.

© Erik Olsson

Abstract

This thesis describes how a pipeline was developed to reconstruct an urban environment from terrestrial laser scanning and photogrammetric 3D maps of Norrköping, visualized in first person and in real time. The project was carried out together with Linköping University and the city planning office of Norrköping as a preliminary study, to get an idea of how much work is needed and with what accuracy a few buildings can be recreated. The visualization is intended to demonstrate a new way of exploring the city in virtual reality, as well as to visualize the geometrical and textural details at a higher quality than the 3D map that the Municipality of Norrköping uses today. Previously, the map was only intended to be displayed from a bird's eye view and has poor resolution at closer ranges. In order to improve the resolution, HDR photos were used to texture the laser scanned model and cover a particular area of the low-resolution 3D map. This thesis explains the method used to process a point-based environment for texturing, and how an environment was set up in Unreal using both the 3D map and the laser scanned model.

Acknowledgments

I would like to thank all engineers, PhD students and lecturers from C-research who helped me with guidance and expertise during my master thesis project. Especially, I would like to thank my supervisors Patric Ljung and Per Larsson for all their support. Additionally, I would like to thank my examiner Jonas Unger, who came up with the idea of a real-time rendering of urban environments. Finally, I would like to thank KJ for teaching me the basics of cameras and image projection, and Denny Lindberg for guidance in Unreal Engine.

Contents

Abstract
Acknowledgments
Contents
List of Figures

1 Introduction
1.1 Motivation and aim
1.2 Research questions
1.3 Delimitations

2 Background and Related Work
2.1 Related work
2.2 3D-maps
2.3 Photogrammetry
2.4 Lidar
2.5 Texel Density
2.6 Lens distortion

3 Method
3.1 Laser scanning and photography
3.2 Software Survey
3.3 Point cloud alignment
3.4 Meshing
3.5 Mesh Simplification
3.6 UV-mapping
3.7 HDR-image assembly
3.8 Lens correction
3.9 Perspective warping
3.10 Panorama stitching
3.11 Spherical projection
3.12 Planar projection
3.13 Ptex
3.14 ReMake
3.15 RealityCapture

4 The Pipeline
4.1 Unreal (Environment set-up)

5 Results

6 Discussion
6.1 Method
6.2 Results

7 Conclusion
7.1 Further work

Bibliography

List of Figures

2.1 The environment creation of Rise
2.2 The creation of Scott's apartment
2.3 A small section from the municipality's latest 3D map, produced by Slagboom en Peeters
2.4 Differences in texel density
3.1 (a) Faro Laser Scanner Focus 3D. (b) Canon EOS 5DSR with 8mm circular fisheye lens, mounted on a calibrated Nodal Ninja tripod head
3.2 Eight point clouds, measured from different locations, aligned into one PEM in Recap. The yellow circles show the scanner positions
3.3 Meshed model, divided into 16 cells in Sequoia
3.4 (a) Before simplification. (b) After 30% simplification in Simplygon
3.5 (a) Before redundant geometries have been deleted. (b) After the clean-up
3.6 UV-mapping in Modo
3.7 Comparison between the results of the software packages' UV-projection, generated from one mesh (cell 15)
3.8 (a) The result of one HDR from Photomatix. (b) Tone mapped HDR in Photoshop
3.9 (a) Original photo with perspective distortion. (b) Perspective warped photo in Photoshop
3.10 Divided perspective warping
3.11 Equirectangular projected panorama image, HDR merged and stitched in PTGui
3.12 (a) The black square indicates the camera position of the spherical projection. The green circle indicates the scanner position. (b) An estimation of three camera positions. The projections have been overlapped by painting smooth transitions
3.13 (a) Spherical projection alignment in Mari, adjusted by typing in coordinates. (b) Spherical projection alignment in Sequoia, adjusted by rotating/translating the colored handles
3.14 (a) Paint buffer mapped to wrong UVs after baking. (b) Brush tool noise that became visible after baking
3.15 Differences in texel resolution per faces
3.16 Texture quality in ReMake
3.17 Combined laser scans and photos captured from the ground. Dimensions: 120x31x34m. Polygons: 9.2M. Texture resolution: 8k (1 texture map)
3.18 Combined laser scans and photos captured from a drone. Dimensions: 250x145x48m. Polygons: 3M. Texture resolution: 4k (44 texture maps)
4.1 (a) The laser scanner's ability to capture glass. (b) Hollow windows filled with a plane in Maya
4.2 (a) The green line shows the seam between two cells. (b) An overview of how much texture I was able to paint. The building in the right corner has been left with the base channel color
4.3 Asset creation in Unreal
4.4 The Pipeline
5.1 Comparison between the 3D map and laser scanned model with HDR textures
5.2 Screenshot of the architect model
5.3 The gap between the green lines is missing resolution
5.4 Spherical projections. (a) View pointed from the correct camera position. (b) The view has been translated from the camera position

1 Introduction

Today, the municipality of Norrköping uses two types of platforms to visualize the city in 3D. One is a web-based interface called City Planner, developed by the Swedish company Agency9. This interface is intended for the public to share ideas about urban planning by adding comments to particular destinations. The other platform is a multi-touch table called Urban Explorer Table 2, developed by Rise Interactive. This map is provided with CAD models of upcoming building projects that have been arbitrarily placed over the ground. Since the 3D map is created from oblique aerial photographs, the level of detail becomes very limited in a close view. It is meant to visualize the city from a bird's eye view rather than a street view with accurate resolution.

To improve the visual experience, we wanted to perform a real-time rendering of the city in first person, with higher geometry and texture quality than the existing 3D map. To achieve this, a combination of terrestrial laser scanning (TLS) and HDR photos was used to recreate Yllefabriken and a smaller area of Strömparken. The procedure of texturing a scanned model with High Dynamic Range (HDR) reference images has been done earlier by the professional VFX artist Scott Metzger [19]. Proceeding from his study, we wanted to investigate how much of his pipeline could be used for our purpose, examine other software to gain more experience, and finally determine my own pipeline for a real-time rendering.

1.1 Motivation and aim

The purpose of the project has been to demonstrate for the City Planning Office of Norrköping how the 3D map and Yllefabriken can be visualized in higher resolution, as well as replacing the old Yllefabriken with a new apartment model created by an architect. The geometries and the textures in the 3D map look a bit like melted ice cream at close range, which makes the visualization uninteresting in first person view. Therefore, terrestrial laser scanning had to be performed, as well as photographing the site. It had to be investigated how the textures should be applied on the measured geometry with the best possible resolution, and what accuracy the geometry required in order not to overload the rendering.

In a web-based visualization like City Planner, it takes a lot of time to load the details of the models, and the user can only navigate through the scene with the mouse. In order to make the visualization faster and more like a computer game, Unreal Engine was used as the game engine. A game engine like Unreal is suitable for creating environments including visual effects such as animations, illumination, procedural materials and other assets. It also has good support for running the game in virtual reality.

The aim of the project was to obtain a pipeline that could be used for processing the scanned data into a highly detailed mesh, texturing it with HDRs, and rendering it together with the 3D map in virtual reality. It was also desirable to reduce as much manual work as possible and rather rely on automatic software tools. The pipeline would not only be useful for visualizing urban planning in real time; it could also be used for a short demo video, pre-rendered with a physical camera in V-Ray, for example.

1.2 Research questions

The following questions were the focus of the thesis:

1. Is it worth the time and work to manually texture a laser scanned environment to achieve high texture quality, compared to doing it automatically by photogrammetry?

2. Can the scanned data be processed through the pipeline with reasonable performance, without manually resurfacing the mesh?

3. With what accuracy is it possible to align projected images with the geometry directly in a texturing software, without carefully estimating the camera position in an external application?

4. Which factors are the most important to take into consideration to achieve as high texture resolution as possible when the model is rendered in first person?

1.3 Delimitations

Yllefabriken was the main object to measure, but lots of other geometries around it were also recorded during the scanning which could be interesting to visualize. Hence it was decided to keep 3-4 buildings plus some of the terrain, but no more. The visualization did not have to run online like City Planner; that is why Unreal Engine was used and we could spend more time on the visual result. There was no requirement on rendering efficiency, as long as Unreal could run without perceivable lag. The geometries would be textured using HDR photos at as high a resolution as the game engine was capable of; using 8-bit jpg files as textures was not acceptable. The texture projection should be done with a combination of spherical projections with equirectangular images and planar projections with ordinary rectangular images. The placement of new models in Unreal did not have to align perfectly with the 3D map, just arbitrarily. The report will not describe any advanced theory about image projection, texturing, HDR images or memory management, instead focusing on the practical work.

2 Background and Related Work

2.1 Related work

The idea of recreating an environment from laser scanned data and texturing it with HDRs came from a presentation by the visual effects artist Scott Metzger, where he demonstrates his pipeline [20] for visual effects used in the short film RISE (figure 2.1), as well as his demonstration of the recreation of his apartment (figure 2.2). His pipeline consisted of:

1. Creating a mesh from a laser scanned point cloud in Geomagic
2. Resurfacing the mesh in Modo
3. Merging HDRs in Photomatix
4. Lining up cameras in Maya
5. Texture painting HDRs in Mari
6. Rendering in V-Ray

In the apartment project, all textures were projected using only spherical HDRs and painted on a UV-less mesh by using the texture mapping system Ptex. In the Rise project, the textures were baked onto the geometry by first UV-unwrapping the mesh and then texturing it by projecting both spherical and planar images. What differentiates Scott's project from ours is that his purpose was to render a short movie sequence, while we wanted to do it in real time like a computer game, which means that the resolution has to be optimized everywhere the user can view the surface at close distance.

Figure 2.1: The environment creation of Rise
Figure 2.2: The creation of Scott's apartment

2.2 3D-maps

Today, there are a variety of map services that mainly focus on route planning and location information. Only a few applications have the ability to visualize the map in full 3D, where Google Maps and Apple Maps are the most dominant. When Google introduced Google Maps in 2005, the visualization consisted of satellite images together with oblique aerial photos projected from a bird's eye view. A couple of years later, some cities were provided with modeled buildings made by enthusiasts in SketchUp. Since there were only a few 3D buildings in each city, of varying quality and poorly aligned with the aerial photos, new technologies were investigated to automate the recreation of a three-dimensional world.

In 2008, hitta.se started a collaboration with C3 Technologies and Agency9, where they together created an online 3D visualization of cities in Sweden. C3 Technologies was a subsidiary of SAAB Dynamics which had developed a measuring technology for missile targeting, and the same measurement technology was used for the 3D maps [2]; hence the maps received very good precision. The maps are created by stereophotogrammetry, where oblique photos have been taken from aircraft mounted with multiple cameras at different angles. Hitta.se's 3D maps had such good quality that C3 Technologies was bought by Apple, and more cities all over the world were photographed and are used in Apple Maps today. In 2012, Google Maps also began using stereophotogrammetry and photographed cities from aircraft. In the last few years, a whole series of other companies working with 3D visualizations of urban environments has emerged.

Today, the municipality of Norrköping uses a 3D map from 2013, made by SAAB Dynamics, for the visualization of the city planning project Let's create Norrköping [22]. For our project, we were provided with a newer map from 2016, created by the Dutch company Slagboom en Peeters [24], with much better resolution than the earlier maps from SAAB, shown in figure 2.3.

2.3 Photogrammetry

Photogrammetry is a technique for creating a 3D model based on a set of overlapping 2-dimensional images taken from multiple positions. In a single 2D image there is no information about depth; we have no idea how far the light has traveled from the object to the camera sensor. The fundamental idea of photogrammetry is to compute the camera position and the depth, in order to recreate the exterior as a 3D model. This is done by identifying keypoints in the pictures with common features (also called tie points). Rays are then drawn from the center of the camera lens through the image planes in the direction of the keypoints. Where the rays intersect corresponds to a point located in 3D space. This procedure is called exterior orientation; mathematically it is based on solving two collinearity equations that relate 3D coordinates to the image coordinates, described in [27].

Figure 2.3: A small section from the municipality's latest 3D map, produced by Slagboom en Peeters

Before the 3D coordinates can be determined, the direction and position of the cameras have to be known, given by x, y, z for the position and ω, φ, κ for the rotation. Some cameras are equipped with GPS receivers and are able to record the camera position from GPS coordinates; otherwise the camera position has to be mathematically estimated. It is a non-trivial problem and requires different solutions depending on whether the camera is calibrated or not [7]. Basically, the calculation is done by forming two calibration matrices K₁ and K₂ from the cameras' known intrinsic parameters, such as focal length, optical center and pixel scaling factors, and then computing the fundamental matrix F from corresponding points x₁ ↔ x₂ in the two images. We can then compute the essential matrix E, which stores the rotation and translation of the camera, as E = K₂ᵀ F K₁. The camera location is finally estimated by extracting R and t from E through SVD decomposition, since E = [t]ₓ R. The whole process of solving the camera orientation is explained in more detail in chapter 9 of Multiple View Geometry in Computer Vision [12][21].

After the camera positions are calculated for all images, thousands of corresponding points are sampled over the images to obtain a dense point cloud. The matching of corresponding points is done automatically with computer vision algorithms, where SIFT is the most common algorithm; it registers invariant descriptors for each point [15]. The descriptors are generated from local image gradients. Invariant means that the descriptor value will not change if the camera view is translated, scaled or rotated. Each of the registered points is described by a vector which contains values for the pixel location in the image, the image scale, the orientation in world space and the descriptor. The vectors are then compared between two different images to establish the correspondence between points. When two corresponding points are found, rays are projected from the camera positions in the directions where the points are located, and the rays are intersected. Knowing the distance between the intersection point and the camera, 3D coordinates can be calculated and a dense point cloud can be obtained. The point cloud is finally triangulated to a mesh and texture mapped by projecting the images onto the 3D model.
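To make the pose estimation above concrete, the sketch below matches SIFT keypoints between two views with OpenCV, estimates the essential matrix and decomposes it into R and t. The file names and the intrinsic matrix K are placeholder assumptions; the thesis itself relies on the solvers built into the photogrammetry software.

    # Two-view pose estimation sketch (OpenCV); inputs are hypothetical.
    import cv2
    import numpy as np

    img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

    # Detect and match the SIFT keypoints (the invariant descriptors above).
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
    x1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    x2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Assumed intrinsics: focal length and optical center in pixels.
    K = np.array([[2000.0, 0, 960], [0, 2000.0, 540], [0, 0, 1]])

    # E is computed from the matched points (internally via F and K).
    E, mask = cv2.findEssentialMat(x1, x2, K, method=cv2.RANSAC)
    # Decompose E (through SVD) into relative rotation R and translation t.
    _, R, t, _ = cv2.recoverPose(E, x1, x2, K, mask=mask)
    print("Rotation:\n", R, "\nTranslation direction:\n", t)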

Aerial photogrammetry is a good method if a large area is to be recreated, a 3D map for example. But it has two disadvantages. The camera angle is limited from an aircraft flying high above the terrain, which makes it difficult to recreate geometries that are obscured by other geometries: trees, tunnels, narrow alleys and cars, for example. This is because rays from two different images will not intersect in exactly the right orientation, which makes such objects look like they have melted together with the ground. The other disadvantage is that the texture resolution will only be sharp from a distance above.

2.4 Lidar

The best accuracy for measuring geometries is obtained by using LiDAR (Light Detection and Ranging). LiDAR can be performed spaceborne, airborne or terrestrially (TLS), depending on how large an area is going to be measured. The measurement is performed by a scanner that emits thousands of laser beams (IR light) per second in all possible directions; for each beam the scanner records the target position (X, Y, Z), intensity, and color (RGB). Terrestrial laser scanners can use two types of measurement techniques, either time-of-flight or phase based [26]. Time-of-flight scanners measure the distance by calculating the time it took for the light to reflect back. For each beam that is emitted, a point is recorded; the distance between the scanner and the point is distance = 0.5 · v · t, where v is the speed of light and t the round-trip time. Phase based scanners emit constant waves of infrared light; the phase shift between the outgoing waves and the returned waves is then used to calculate the distance between the object and the scanner. The recorded points then build a 3D model, and in that case the geometry becomes more detailed than a 3D model created by a photogrammetric algorithm, since the scanner's position does not have to be estimated. However, a LiDAR instrument is significantly more expensive compared to airborne photogrammetry equipment. A scanner costs around one million kr, which is still significantly less than manned aircraft operations. So both methods have pros and cons. A shortcoming of both methods is the texture quality for close-up views.

2.5 Texel Density

Texel density is how uniformly texture pixels are spread across a 3D surface, i.e. a value of how much texture resolution will be represented on the mesh. The larger the UVs are, the more texels they can hold, which allows higher resolution in the rendering. In order to keep a uniform texel density over the entire mesh, the UV-shells should always be scaled in relation to each other and not individually [32]. If the texel density is badly distributed it will result in shifting resolution when the texture is applied, shown in figure 2.4. This should not be a problem if the UVs are mapped automatically using a software tool; then all UV-shells will be scaled with the same ratio.

An important rule is to never use a texture with higher (or smaller) resolution in relation to the object's texel density, which L. Iezzi describes in [13]. For example, if the object is determined to have a texel density of 1024 px/m, the texture map's resolution should also be 1024x1024 px. If a 2k texture map is used instead, the UV coordinates will not correspond to the pixels in the texture, which results in a magnified texture when it is applied. To achieve higher resolution in the rendering, both the texel density and the texture map resolution have to be increased.
However, the texel density is limited in relation to the UV space: if we scale up the UV-shells just to get higher texel density, the UV coordinates will start ranging outside the (0-1) UV space. The solution is to use multiple UV-tiles; then we have a larger UV space in which to distribute the UVs, and the mesh can hold more resolution.
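To make the density rule concrete, the sketch below computes a per-face texel density for a triangulated mesh; a wide spread in the returned values corresponds to the shifting resolution in figure 2.4a. The input array layout is an assumption.

    # Per-face texel density check (texels per meter); numpy arrays assumed:
    # positions (N,3), uvs (N,2) in 0-1 space, triangles (M,3) vertex indices.
    import numpy as np

    def area(p0, p1, p2):
        e1, e2 = p1 - p0, p2 - p0
        if p0.shape[-1] == 2:  # 2D (UV) triangle
            return 0.5 * abs(e1[0] * e2[1] - e1[1] * e2[0])
        return 0.5 * np.linalg.norm(np.cross(e1, e2))  # 3D (world) triangle

    def texel_density(positions, uvs, triangles, map_size=1024):
        densities = []
        for i0, i1, i2 in triangles:
            uv_px = area(uvs[i0], uvs[i1], uvs[i2]) * map_size ** 2
            world = area(positions[i0], positions[i1], positions[i2])
            densities.append(np.sqrt(uv_px / world))
        return np.array(densities)  # uniform mesh => near-constant values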

(a) No uniform texel density: varying resolution. (b) Uniform texel density: equal resolution.
Figure 2.4: Differences in texel density

2.6 Lens distortion

Lens distortion arises due to the spherical shape of the lens and flaws in the optical elements, which means that the recorded image will not be perfectly projected onto the image plane [6]. Radial distortion and tangential distortion are the most important types of lens distortion to correct if the image should have the same perspective as reality.

Radial distortion creates a displacement of a given point in the image in relation to its real location, which produces unwanted effects such as straight lines becoming curved, as well as disproportionate magnifications. It becomes more notable farther away from the center of the lens, because the rays are bent most there. Depending on what lens is used, different types of radial distortion appear. Distortion is typically classified into pincushion and barrel distortion. Pincushion distortion makes straight lines curve inwards, and barrel distortion does the opposite. Lenses with longer focal lengths, like telephoto lenses, are more prone to pincushion distortion, while wide angle lenses with short focal lengths cause barrel distortion, since they have more curved glass elements. Tangential distortion is caused by the image sensor not being completely parallel to the camera lens, which results in a tilted image.

To remove lens distortion, the camera can be calibrated by calculating the lens distortion parameters. Using the distortion parameters and the known camera intrinsic parameters, the distortion can be removed by recalculation of pixel coordinates. Instead of going into too much detail on how lens distortion is corrected, I will explain how I used software to correct it automatically. The mathematical formulas for computing the distortion parameters, and how they are applied to obtain a distortion-free image, are described in [14].

Perspective effects

When the camera is tilted up or down, or angled to the side, another type of distortion appears, called perspective distortion. The distortion makes objects look like they are falling backward or forward (shown in figure 3.9a), because the principal axis (center of the lens) has been pointed above or below the horizon. To avoid perspective distortion, the picture must be taken with the principal axis level with the horizon. That keeps the image plane parallel to the vertical lines of the object. Considering figure 3.9a, this is the way the eyes actually perceive the building, but when we see the picture on a screen our mind knows that we are looking at a picture from a different position, and the perspective feels distorted. The advantage of removing the perspective distortion is that it simplifies the image projection during texturing, since it is much easier to project images in an orthogonal view where the camera angle is always perpendicular to the object.

There are two ways of removing such distortion. One is using a tilt-shift lens, which is a movable lens that removes the distortion optically. The photographer manually shifts the lens in the vertical/horizontal directions, or tilts it, until the perspective looks straight. The other way is to correct the perspective in a post-processing software. Simply explained, four source points are selected by the user in the source image, forming a rectangular grid. The points in the source image are parameterized by x = (x, y) and are mapped into the planar shape of the grid, parameterized by u = (u, v); the correspondence is defined by two functions u(x, y) and v(x, y) [11]. When the corner points of the grid are moved to their target destinations, the grid shape changes (figure 3.9b), and the transformation can be obtained by computing the corresponding points in the image through the warp functions. This calculation is sampled in discrete points U_ij over the whole image, as Robert Carroll et al. describe in [25]. To generate new pixel values, the discretized points are then multiplied with computed interpolation coefficients. If the picture is stretched too much, the image quality will deteriorate because of the interpolation. Using a tilt-shift lens does not cause any quality deterioration, but it costs about kr, and that is why I used Photoshop.
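For the four-corner case, the grid warp described above reduces to a plane homography. The sketch below reproduces the idea with OpenCV as a stand-in for Photoshop's tool; the corner coordinates are made-up examples.

    # Four-point perspective correction sketch (OpenCV).
    import cv2
    import numpy as np

    img = cv2.imread("facade.jpg")  # hypothetical keystone-distorted photo

    # User-picked source points along the leaning facade edges ...
    src = np.float32([[410, 120], [1620, 95], [1750, 1900], [300, 1880]])
    # ... and the perpendicular rectangle they should map to.
    dst = np.float32([[300, 100], [1700, 100], [1700, 1900], [300, 1900]])

    # The 3x3 homography plays the role of the warp functions u(x,y), v(x,y);
    # warpPerspective resamples pixels by interpolation, which is why heavy
    # stretching degrades the image quality.
    H = cv2.getPerspectiveTransform(src, dst)
    straight = cv2.warpPerspective(img, H, (img.shape[1], img.shape[0]))
    cv2.imwrite("facade_straight.jpg", straight)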

3 Method

This chapter will describe how the capture and reconstruction pipeline works and which methods are used in each step. Before a pipeline could be determined, a variety of software packages were compared and evaluated to find a structured workflow, which is explained after the Laser scanning section.

3.1 Laser scanning and photography

The laser scanner measurements were performed with a FARO Focus 3D laser scanner, figure 3.1a. The scanner is phase based and distributes laser beams by deflecting them against a rotating mirror in the vertical direction. Simultaneously as the mirror rotates, the scanner itself rotates horizontally to distribute beams in all possible directions. When a beam hits an opaque surface, the light reflects back to the scanner and the phase shift is estimated to determine the distance, while the vertical and horizontal angles of each point are measured using angle encoders. A point cloud is then created by transforming the polar coordinates to Cartesian coordinates [31]. The scanner was set to its maximum measurement speed (points/sec). It has a maximal field of view of 305° in the vertical range and 360° in the horizontal range, and is able to register points at distances between 0.6m and 120m with a ranging error of 2mm (more technical details can be found in the Faro Laser Scanner Focus 3D user manual [18]). We scanned Yllefabriken from eight different positions in Strömparken to be sure that the most important angles of Yllefabriken were captured.

The laser scanner has a built-in color camera, but the sensor is only 8MP and can only capture photos at one exposure. Therefore, additional photographs were captured with a 50MP Canon EOS 5DSR preset to five different exposures, from 1/2000s to 1/2s (3 EV apart). For spherical HDRs we used an 8mm circular fisheye lens; the camera was mounted on a calibrated tripod head and, for each camera position, images were taken in four different directions by rotating the camera 90 degrees after each shot, see figure 3.1b. The photographs were taken from almost the same positions where the laser scanner had been placed, to capture the same field of view as the scanner. For normal (non-spherical) HDRs we used a normal 35mm lens and photographed Yllefabriken plus some other buildings in Strömparken with overlapping directions.
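For illustration, the polar-to-Cartesian conversion the scanner performs for each recorded point could look like the sketch below; the angle conventions are an assumption and vary between scanner models.

    # Convert one scanner measurement to a Cartesian point.
    import numpy as np

    def polar_to_cartesian(distance, theta, phi):
        # distance in meters; theta (vertical, from the vertical axis) and
        # phi (horizontal) in radians, read from the angle encoders.
        x = distance * np.sin(theta) * np.cos(phi)
        y = distance * np.sin(theta) * np.sin(phi)
        z = distance * np.cos(theta)
        return np.array([x, y, z])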

(a) (b)
Figure 3.1: (a) Faro Laser Scanner Focus 3D. (b) Canon EOS 5DSR with 8mm circular fisheye lens, mounted on a calibrated Nodal Ninja tripod head

3.2 Software Survey

After the data was captured, it had to be processed through several steps before it could be rendered with textures. The fundamental steps of the pipeline are listed below:

Point cloud alignment
Meshing
Simplification
UV-mapping
HDR-merging
Texturing

Each step required new software to be examined to find its pros and cons. Most of the software was installed as 30-day trials, and some could be activated as full versions with my supervisor's licenses. By performing basic comparisons of usability and performance, the software with the best results was adopted into the pipeline.

3.3 Point cloud alignment

For each new scan session, the scanner creates a new point cloud and stores it on the memory card. Identical geometries will not have exactly the same point coordinates if they are measured from different positions, but the scans can be aligned into a single point-based environment model (PEM [31]) by interpolating common points. This was done in Recap, either by using "Auto registering" or the manual point matching tool, seen in figure 3.2. The PEM was then exported, ready to be meshed in another software.
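Recap's registration is proprietary, but the pairwise step is conceptually close to ICP. Below is a minimal sketch with Open3D, assuming two overlapping scans exported as PLY files and a coarse initial transform from manual point matching; it is not the workflow used in the project.

    # Pairwise scan registration sketch (Open3D); file names are assumptions.
    import numpy as np
    import open3d as o3d

    source = o3d.io.read_point_cloud("scan_pos2.ply")
    target = o3d.io.read_point_cloud("scan_pos1.ply")

    init = np.identity(4)  # replace with a coarse manual alignment
    result = o3d.pipelines.registration.registration_icp(
        source, target, 0.05, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())

    source.transform(result.transformation)
    o3d.io.write_point_cloud("pem.ply", source + target)  # merged PEM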

Figure 3.2: Eight point clouds, measured from different locations, have been aligned into one PEM in Recap. The yellow circles show the scanner positions

3.4 Meshing

There are several different software packages for creating meshes from point clouds, and each uses its own method to compute the mesh. These packages are also used to remove redundant data in the point cloud, captured during scanning. Meshlab [1] was initially used for cleaning and meshing the point cloud. In total, the point cloud consisted of points and covered an area of approximately m², almost the entire Strömparken including Yllefabriken and Gammelbron. First, we decided to only keep Yllefabriken and delete everything else in Strömparken, but later realised that it was interesting to visualize some other buildings in Strömparken. Since it was too difficult to do a proper clean-up without deleting interesting geometries, we decided to keep all data from the point cloud and perform the clean-up after the point cloud was meshed. This made Meshlab an unnecessary software in the pipeline, and it was replaced with Sequoia, which does not offer tools to erase points but had other useful tools like image projection and cell division.

Before the meshing is executed, the user has to decide how densely the vertices should be sampled. In Sequoia this is done by adjusting a parameter called Radius, which represents the average distance between neighboring points. A high radius will sample the vertices sparsely, which makes the mesh thicker, and details can be lost. A low radius will sample the vertices more densely, which increases the amount of polygons at a cost of memory usage. Sequoia calculates a suggested radius (about 8cm), which was too large to keep an accurate level of detail; hence it was important to try out what looked good. Sampling the vertices at millimeter precision was impossible, since Sequoia always ran out of memory, which led to process failure. A point radius of 5cm was enough to emphasize details like windowsills, downspouts and fences.

Which meshing method is used has a big impact on how accurately detailed the reconstruction will be. Sequoia has three different methods to choose from: Union of Spheres, Metaballs and Zhu/Bridson, where the last one was preferred. More about which parameters can be adjusted for the selected method can be found on the software developer's website: Thinkbox Software [29]. The meshing algorithm creates a two-sided mesh, because the method is based on a fluid simulation (developed by Yongning Zhu et al. [33]) where the thickness becomes approximately equal to the meshing radius times points in depth. A two-sided mesh is of course unwanted, since we are never going to inspect the mesh from below or behind. Changing the mesh from double sided to single sided can simply be done by enabling "Conform to points".

3.5 Mesh Simplification

A reduction of the polygons should always be done before exporting a meshed point cloud, to make it less complex and memory consuming. Dealing with 10-million polygon models during the editing was too much for reasonable performance. I found that models of 1-2 million polygons were a more reasonable size to work with. Usually in computer games, multiple versions of a model are created at different resolutions (LODs), which saves a lot of memory during the rendering. I decided not to use LODs, because it is a lot of extra work to export and import several different models, and the goal was not to perform a memory-efficient visualization.

The simplification was first performed in Sequoia, where the user adjusts how many percent of the mesh's polygons will be retained. Sequoia claims that: "As a rule of thumb, a value of 10.0 percent is often a very good". If the mesh is reduced too much, details will disappear and sharp edges will be smoothed out. It is important to inspect how the mesh looks afterwards and determine an appropriate reduction value. I found that retaining 25 percent was enough to still preserve the geometry's shape. Exporting the model as a single mesh would be too much data for the clean-up and texture mapping process later on, given that the model consisted of 15 million polygons after the first reduction. Sequoia's Hacksaw tool makes it possible to divide the mesh into multiple cubic cells and export them as individual fbx files. This means that each cell will have its own texture map, which increases the texture resolution for the entire model. I chose to split up the model into 16 cells (70x50x40m each), seen in figure 3.3.

Figure 3.3: Meshed model, divided into 16 cells in Sequoia

The simplification tool in Sequoia works only for a rough reduction. For a better result I used the Maya plug-in Simplygon, which is especially good at simplifying flat surfaces while preserving complex surfaces. A further percentage (depending on the size of the mesh) was reduced. Figure 3.4 shows Simplygon's ability to reduce a 3.7 million polygon model to 1.1 million polygons. After the cells were simplified, the process of cleaning up unwanted geometries went much faster. Figure 3.5 shows how much of the redundant geometry was removed afterwards in Maya.

(a) (b)
Figure 3.4: (a) Before simplification. (b) After 30% simplification in Simplygon

(a) (b)
Figure 3.5: (a) Before redundant geometries have been deleted. (b) After the clean-up
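For readers without Sequoia or Simplygon licenses, the same mesh-then-decimate flow can be approximated with open-source tools. The sketch below uses Open3D's Poisson reconstruction and quadric decimation as stand-ins; parameter values and file names are assumptions, not the settings used in this project.

    # Meshing and simplification sketch (Open3D).
    import open3d as o3d

    pcd = o3d.io.read_point_cloud("pem.ply")
    pcd.estimate_normals()  # Poisson reconstruction needs oriented normals

    # depth controls sampling density, playing a role similar to Sequoia's
    # Radius parameter (finer detail = more polygons = more memory).
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=10)

    # Retain ~25% of the polygons, as in the Sequoia pass described above.
    target = int(len(mesh.triangles) * 0.25)
    simple = mesh.simplify_quadric_decimation(target_number_of_triangles=target)
    o3d.io.write_triangle_mesh("cell.obj", simple)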

3.6 UV-mapping

Normally, UV-mapping should be done manually to achieve the best texture resolution. This is a very time consuming moment where the user has to manually mark seams and divide groups of polygons into UV-islands (also called UV-shells). This can be done automatically in most 3D applications, as can packing the UVs into multiple tiles. The disadvantage is that some algorithms distribute the UV-islands very sparsely, which results in poor texture resolution. The applications I tested were Zbrush, Mudbox, Modo and Sequoia.

Since Zbrush uses a rather unique user interface, unlike other software, it presented a difficult learning threshold, so I left that program for the moment to see if the other programs suited me better. Learning Mudbox went quite fast, since it is developed by Autodesk and the user interface is reminiscent of Maya. The disadvantage was that it took over 5 hours to generate 8 UV-tiles for a 0.4M polygon model. The UVs also became extremely downscaled and no UV-shells were generated; all polygons were mapped individually. Another problem is that it was not possible to assign material-ids for individual UV-tiles directly in Mudbox; it has to be done in Maya or 3DS Max afterwards. Maya provides a UV-mapping tool that automatically projects the polygons to UVs and packs them into a UV-map. The problem is that there is no tool for packing the UVs into multiple tiles automatically; it can only be done manually by translating the UV-shells outside the (0,0)-(1,1) UV space, and the single tile it generated was sparsely packed.

I decided to use Modo for UV-mapping because the software was easy to use, it is developed by the same company as the texture painting program Mari, and there are many good tutorials about the workflow between the two.

Automatic UV creation in Modo is done by selecting Atlas as the projection type. As the developers of Modo explain: Atlas projection maps every single polygon of the mesh into a UV-map while maintaining relative scale based on the 3D volume of the polygons [16]. This is exactly what we want for maintaining a uniform texel density. One drawback with automatic UV-mapping was that UVs were created from the model's back and underside. These two sides will never be shown during the visualization; hence these UVs are unnecessary information that takes up space in the UV-map. Another UV-projection type is projection from view. This method creates UVs only from polygons that are facing the camera, which optimizes the space in the UV-map. It is a good method for projecting polygons from a flat wall, for example, but with the disadvantage that polygons hidden in the camera view never become UV-mapped.

Multi-tiling can be done automatically in Modo by using the Pack UV tool. This tool packs the UV-islands automatically into an arbitrary number of tiles, as the user determines. The model that was UV-mapped with view projection was better packed manually, by increasing the number of tiles and translating the UV-islands into the empty tiles without scaling the UVs, which the UV-packer does. Performing the atlas projection and the automatic UV-packing took about 3.5 hours for a 0.7M model. The two different UV-mapping methods are shown in figure 3.6.

(a) Projection from view (b) Atlas projection
Figure 3.6: UV-mapping in Modo

One other advantage of using Modo is the material assignment. It is simply done by selecting all the UVs in a tile, right clicking and assigning them a so-called "Unreal Material", and naming the materials something like u1_v1, u2_v1, u3_v1 etc. Naming the materials with unique ids is an important step to later know which texture file belongs to which material slot in Unreal. Unreal does NOT number the material slots in the same order as the tiles are numbered in Modo, due to a bug in Unreal, which Epic Games claims was fixed in version 4.15, but it was not [9]. It is worth mentioning that each material results in a separate draw call, and each draw call has some cost [8]. In conclusion, the number of material slots/tiles should not be more than absolutely necessary.

Figure 3.7 shows Mudbox, Sequoia, Modo and Maya's capabilities of distributing UVs into one tile. Sequoia did not support multi-tiling, but I still did a test to compare the result of one tile with the other software, seen in figure 3.7b. It actually seems like Sequoia's UV algorithm is the best, considering how tightly it packs the UVs. For example, the straight lines in Sequoia's UV-map correspond to the small-scale UV area in Maya's UV-map, figure 3.7d. For this reason, it was also worth evaluating Sequoia for image projection.

3.7 HDR-image assembly

All the images captured with different exposures had to be combined into single images to achieve the high dynamic range. This operation can be done automatically for all images in HDR-merging software like PTGui, Photoshop or Photomatix Pro. Photomatix was used to perform the HDR-merging, mostly because the basic version is free, it merges HDRs faster than Photoshop, and all images can be merged in one batch.
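The merging step itself is standard; below is a sketch of the same operation with OpenCV's Debevec merge as a stand-in for Photomatix. File names and exposure times are illustrative.

    # HDR merging sketch (OpenCV); five bracketed exposures per view.
    import cv2
    import numpy as np

    files = ["e1.jpg", "e2.jpg", "e3.jpg", "e4.jpg", "e5.jpg"]  # hypothetical
    times = np.array([1/2000, 1/250, 1/30, 1/4, 1/2], dtype=np.float32)
    imgs = [cv2.imread(f) for f in files]

    # Recover the camera response curve, then merge to a 32-bit radiance map.
    response = cv2.createCalibrateDebevec().process(imgs, times)
    hdr = cv2.createMergeDebevec().process(imgs, times, response)
    cv2.imwrite("merged.hdr", hdr)

    # Optional tone mapping for display (the project ultimately skipped this
    # step to retain all information, as discussed below).
    ldr = cv2.createTonemapReinhard(gamma=2.2).process(hdr)
    cv2.imwrite("tonemapped.png", np.clip(ldr * 255, 0, 255).astype(np.uint8))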

(a) Mudbox (b) Sequoia (c) Modo (d) Maya
Figure 3.7: Comparison between the results of the software packages' UV-projection, generated from one mesh (cell 15)

After the images were merged, the result looked really dull/matte, because the monitor cannot properly display a 32-bit HDR image. This is why HDRs must be tone mapped before they are rendered, i.e. colors from a high dynamic range are mapped to a more limited dynamic range that is adapted to the screen, which also brings out details and balances the brightness locally. To get rid of the dullness I tone mapped some images in Photoshop using the default filter, see figure 3.8b. The problem was that the images had been captured under different cloudiness, so the tone mapping gave different results for different lighting conditions. I therefore decided not to tone map the HDRs, to be sure that all information was retained.

(a) (b)
Figure 3.8: (a) The result of one HDR from Photomatix. (b) Tone mapped HDR in Photoshop

3.8 Lens correction

The 35mm lens generated a small amount of barrel distortion, which complicates the projection process. Lens correction of HDR images can be done in Photoshop using the Camera Raw Filter, but then it has to be adjusted manually on each image. To apply lens correction to all images in one batch, the node system in Nuke was utilized. An image of a checker pattern was captured with the Canon camera and imported into an image node in Nuke. The checker pattern was analyzed with a Lens Distortion node that calculated the camera's radial distortion parameters. The Lens Distortion node was then connected to all HDRs, which removed the distortion. White balancing was also performed in Nuke, using one image as a reference and applying its white point value over the whole image batch.
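The same checker-pattern idea can be sketched with OpenCV instead of Nuke; the board dimensions and file names are assumptions.

    # Radial/tangential distortion removal sketch (OpenCV).
    import glob
    import cv2
    import numpy as np

    pattern = (9, 6)  # inner corners of the printed checkerboard
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    gray = cv2.imread("checker.jpg", cv2.IMREAD_GRAYSCALE)
    ok, corners = cv2.findChessboardCorners(gray, pattern)
    _, K, dist, _, _ = cv2.calibrateCamera(
        [objp], [corners], gray.shape[::-1], None, None)

    # Apply the recovered distortion coefficients to the whole batch.
    for f in glob.glob("hdr_batch/*.hdr"):
        photo = cv2.imread(f, cv2.IMREAD_UNCHANGED)
        cv2.imwrite(f.replace(".hdr", "_undist.hdr"),
                    cv2.undistort(photo, K, dist))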

3.9 Perspective warping

Perspective distortion was an inescapable fact when capturing photographs of Yllefabriken, because trees obstructed the view and they could only be avoided by angling the camera. To remove perspective effects, Photoshop's perspective warp tool was used. First, the warp grid was aligned with the vertical lines in the image, and then it was transformed into a perpendicular rectangle, which also made the image perspective perpendicular, see figure 3.9b. The image with the round part of Yllefabriken could not be perspective warped properly without dividing the image into smaller parts with some overlap and straightening them separately, see figure 3.10.

(a) (b)
Figure 3.9: (a) Original photo with perspective distortion. (b) Perspective warped photo in Photoshop

Figure 3.10: Divided perspective warping

3.10 Panorama stitching

The photos that were captured with the fisheye lens had to be stitched together into spherical panoramas (also called lat/long images or equirectangular projections) before they could be used for a spherical projection. This could be done in Photoshop, but it became too much manual work, since the images need to be HDR-merged first and then stitched into panorama images. Instead, PTGui was used, which performs circular cropping, HDR-merging, lens correction, stitching and tone mapping in one procedure. PTGui does not require much input from the user to create a panorama, since all the information about the camera and the lens is stored in the image file's EXIF-data. The only settings that have to be changed are the projection type (equirectangular for spherical panoramas) and the export format. Because the tripod head had been calibrated in advance, the result became very good, without notable seams (stitching errors) or differing brightness between overlaps, see figure 3.11.

Figure 3.11: Equirectangular projected panorama image, HDR merged and stitched in PTGui

3.11 Spherical projection

In the demo video of Rise [20], Metzger used the spherical images as a base layer, because they fill a larger area with texture compared to planar images. The further the light has traveled from the objects to the lens, the worse the resolution will be in the equirectangular image. This means that the resolution will be highest on the geometries closest to the camera location. Hence, spherical projections are very well suited for texturing the ground.

For projections of spherical panoramas we evaluated two applications, Sequoia and Mari. Mari is reminiscent of Photoshop in its structure. It supports multiple channels, which determine the paint resolution and color depth, and stores multiple layers. Layers in Mari are categorized into paint layers, layer masks, adjustments and procedurals. Especially the adjustment layers were beneficial for changing the shading of the painted texture. One good feature in Mari is that the user can choose to paint into a texture buffer, which means that the image projection is not baked onto the geometry until the B-key is pressed. If the painting goes completely wrong, the user can simply clear the buffer and start over instead of painting over the old layer. Like other texture painting software, images can be painted onto the geometry with a soft brush, which makes the images smoothly overlap each other.

For spherical projection there are two types of parameters to adjust for the alignment, translation and rotation, which change the location of the camera's projection relative to the model's pivot point. The transformation units in the interface are relative to the model's scale; if the model is scaled in centimeters, the projection will also be translated/rotated in centimeters. Since no information about the camera positions had been recorded, apart from the image view, a position could be estimated by translating the projection to its corresponding view in the scene. Then the camera rotation was estimated, to aim the projection at a certain surface. The Z and X rotations were almost always zero, because the camera tripod had a built-in bull's eye level that made it possible to keep the camera perpendicular in the x-direction. When the camera estimation began to approach its proper position, the lower black area of the panorama appeared as a square, because a nadir shot [5] was never taken. This black square could instead be used as a reference for the camera. Each scanner position is indicated by a hollow circle, because the scanner is not able to emit rays just below the tripod.

Translating the black square next to a circle was a good method to find an approximate camera position, shown in figure 3.12b. Figure 3.12a shows one of the best attempts to align a spherical HDR with a building in Strömparken. It was still too much work to align even a single image; hence, Sequoia was investigated to see if it had better facilities for spherical projection.

(a) (b)
Figure 3.12: (a) The black square indicates the camera position of the spherical projection. The green circle indicates the scanner position. (b) An estimation of three camera positions. The projections have been overlapped by painting smooth transitions

Sequoia offers two features that make spherical projection a little bit easier than Mari. The first is that the projection location is shown in the scene and can be transformed by dragging the colored handles (displayed in figure 3.13b), which is much more intuitive than Mari's interface. The second good feature is Sequoia's alignment tool, which is based on point matching. The user needs to mark at least three points in the projection image and then mark the corresponding points on the geometry; from this information the program calculates the perspective plus the camera position and projects the image from that location. Sequoia did not always align the image correctly, but in some cases the result became surprisingly good. In cases where the projection did not align very well it had to be manually corrected, but sometimes it was impossible to get the alignment right, no matter how much the projection was translated/rotated. At first I thought that the equirectangular projection had been incorrectly created in PTGui and posted a thread on Sequoia's forum regarding spherical projection [28], but later realized that everything was correct. The projection cannot perfectly align with all geometries from one angle; the projection area is limited by the camera location. Hence, the projected texture has to be smoothly overlapped with the next projection. This can only be done by painting the new texture over the old. Since Sequoia is not equipped with any painting tool, and the program stopped responding very often during the image alignment, there was no point in continuing to use Sequoia for image projection. I went back to Mari and focused more on painting smooth overlaps between the projections, but it took too much time to finish even one overlap, so I stopped texturing with spherical projections for a while and went over to planar projections to see if they were easier. I also examined other texture painting software, in case Mari was not optimal for the pipeline.
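At its core, the spherical projection maps each surface point to a pixel in the equirectangular panorama through the estimated camera position, as in the sketch below. The axis conventions are assumptions and must match how the panorama was stitched; because the mapping ignores parallax, it only lines up when the projection origin matches the true camera position, which is why the alignment above is so sensitive.

    # Equirectangular lookup for a surface point, given a camera estimate.
    import numpy as np

    def equirect_lookup(point, cam_pos, cam_yaw, width, height):
        d = point - cam_pos
        d = d / np.linalg.norm(d)
        azimuth = np.arctan2(d[2], d[0]) - cam_yaw  # longitude, y assumed up
        elevation = np.arcsin(d[1])                 # latitude
        u = (azimuth / (2 * np.pi) + 0.5) % 1.0
        v = 0.5 - elevation / np.pi
        return u * width, v * height  # panorama pixel to sample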

(a) (b)
Figure 3.13: (a) Spherical projection alignment in Mari, adjusted by typing in coordinates. (b) Spherical projection alignment in Sequoia, adjusted by rotating/translating the colored handles

3.12 Planar projection

Unlike spherical projection, the camera position for planar projection is not estimated by typing in coordinates. The projected image appears as a half-transparent image plane in screen space, and the camera position can then be found by navigating the camera until the image plane aligns with the geometry. For fine tuning, the image plane can also be rotated, translated and scaled.

To perform planar image projection, three different texture painting applications were examined: Mari, Zbrush and Mudbox. The requirements for adopting a software into the pipeline were: easy to use, support for multi-tiles, and good painting tools for overlapping images. Since the user interface in Zbrush was difficult to manoeuvre, and it took much time to learn the basics of texture painting, Zbrush was excluded from the project. However, it has plenty of useful tools for both sculpting and texture painting, which are rather more suitable for painting game characters than architecture. The worst issue with Mudbox was that it is not possible to perform a non-uniform scaling of the image projection, which is very important for aligning images. Some artists even do the alignment in Photoshop beforehand. Another problem with Mudbox is that the texture is baked directly (without storing the pixels in a buffer) when the user starts painting. Since 50MP HDR images were to be painted, it became too much information for the program to process in real time, which made the program stop responding very often. Finally, Mari was adopted into the pipeline, because it met all requirements.

Mari has three different view modes for texture painting: perspective, orthogonal and UV-painting. The orthogonal view was preferable, because it was much easier to find the camera location when the view was always orthogonal to the mesh. This required that the images had been perspective warped beforehand. The other option was to project the images from the perspective view. This was much harder, since the camera position had to be located at almost the exact same position from which the photo was taken. One option is to align the camera positions in another 3D application with more camera adjustments (Maya or PhotoScan for example), export the cameras as an FBX file and then import it into Mari. But that was a process I wanted to avoid, to save time.

Unfortunately, there were some issues in Mari when the painting was baked in perspective view. The colors in the paint buffer could in some cases be mapped to wrong UVs after baking, see figure 3.14a. Sometimes it also produced noise in the textures (figure 3.14b) during the painting, but this could be avoided by using the stamp tool instead of the brush tool.

What is important to remember is to set the correct resolution for each texture tile before painting. If the size is set to 2k, the resolution will not be increased if it is changed to 8k afterwards, which happened to me, and I had to start over with the painting.

(a) (b)
Figure 3.14: (a) Paint buffer mapped to wrong UVs after baking. (b) Brush tool noise that became visible after baking

Much time was spent getting all windows to align with the projection and painting soft overlaps between the pictures. It is desirable that one image covers a wall as much as possible, to get as few seams as possible. The manufacturer of Mari states: "If you are using patch resolutions higher than 4K, we recommend that you zoom in to the surface when painting, to keep the resolution sharp". It was really challenging to align the images in a zoomed view, since much of the model and the projection was outside the screen range. When the painting was done, all the texture tiles (called UDIMs in Mari) could be exported as either 16/32-bit exr or 8-bit png files, depending on what color depth the texture channel was set to.

3.13 Ptex

Ptex has several major advantages over traditional UV-mapping. No UV-unwrapping has to be done before the mesh can be textured, and the resolution can be radically increased compared to UV-maps. Avoiding the unwrapping process saves lots of time, and no artifacts from seams will be visible, since Ptex uses a seamless technique [30]. Each face of the geometry has an individual texture patch; hence, the texture resolution is determined by individual face resolutions. For example, one region of faces can have 16x16 texels per face and another region 128x128 texels per face, or even higher depending on what software is used. A demonstration of different texel resolutions per face is shown in figure 3.15. In Mari there are usually two options for deciding the face resolution, either Uniform Face Size or Worldspace Density. Uniform Face Size uses a fixed square size for all faces, regardless of the model's face dimensions. It can cause non-smooth transitions between the faces due to the differences in texel density, so it is not a good option for complex models. For Worldspace Density, the software determines each face's resolution based on a given number of texels per unit of world space; in other words, the resolution depends on the size (longest edge) of the face [17].

Another advantage of Ptex is that the texture channels are not exported as separate image files. If the mesh has been painted with multiple channels (diffuse, dirt, specular, luminescence, displacement etc.), all textures will be stored in one Ptex file. In the beginning of the project, Ptex was used because no good method for UV-unwrapping had been evaluated yet. The texture resolution became almost as good as the image itself at a really close zoom. The problem is that today's game engines do not support Ptex, since the format was intended for rendering models in animated movies rather than real-time rendering. It would save lots of work to remove UV-mapping from the pipeline, but for now, you just have to wait for the game engines and the graphics cards to be developed for Ptex, as Neil Blevins explains in [4].

Figure 3.15: Differences in texel resolution per faces
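As an illustration of the Worldspace Density rule, the sketch below assigns each face a power-of-two resolution from its longest edge and a texels-per-meter target. The rounding scheme is an assumption, not Mari's exact implementation.

    # Per-face Ptex resolution from a world-space density target.
    import numpy as np

    def face_resolution(corners, texels_per_meter=1024, max_res=2048):
        # corners: (n,3) array of a face's vertices, in meters.
        edges = np.roll(corners, -1, axis=0) - corners
        longest = np.linalg.norm(edges, axis=1).max()
        res = 2 ** int(round(np.log2(max(texels_per_meter * longest, 1))))
        return min(res, max_res)  # e.g. a 0.12m face at 1024 tx/m -> 128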

Figure 3.15: Differences in texel resolution per face.

In order to investigate whether the texturing process could be simplified into a few steps, I examined two packages that calculate the camera, project the images and bake the texture automatically.

3.14 ReMake

An attempt to recreate only Yllefabriken, using only the photographs, was done in ReMake. ReMake recreates models by photogrammetry, performing UV-unwrapping and texturing automatically, and the model can simply be exported as an FBX file with one associated texture file. The application is really easy to use, but for our purpose it did not meet the requirements. It only supports jpg files, it did not manage to align the images properly, and both the geometry and texture quality became very low at close distances, because the software interpolates new colors from the images that have been aligned, see figure 3.17b. The second problem was that images captured from positions where trees obscured the view were also projected onto the mesh. ReMake seems to be a better choice for recreating smaller objects that can be photographed at a closer distance and from more angles, considering what they demonstrate in their tutorials.

Figure 3.16: Texture quality in ReMake.

3.15 RealityCapture

RealityCapture is a program for creating accurate 3D models from images through photogrammetry and Lidar data. To obtain both high texture quality and geometric resolution, laser scans and photos can be combined, which also saves a lot of time considering the manual work during UV mapping and texture painting. Since RealityCapture cannot import HDR images, I imported a RAW-format image set taken with the same exposure. After the images and the point cloud are imported, there are basically six steps to go through before the model is done:

1. Alignment (aligns all the imported laser data into one point cloud)
2. Reconstruction (meshes the point cloud)
3. Alignment (aligns all the images)
4. Texture (textures the mesh with the aligned images)
5. Simplify (reduces the number of polygons)
6. Mesh (exports the mesh and the texture)

The result after the alignment can vary depending on how much data RealityCapture is able to align into one component. Generally, several components are created in the first alignment. These components then have to be manually realigned by point-matching a few scanner images from different components. Just like in Sequoia, the accuracy of the vertex sampling can be adjusted by changing the Minimal sample distance. By default the accuracy is set to 2mm, which they also recommend in their tutorial, so I adopted this value. For some reason RealityCapture never managed to complete the mesh over the entire scanned area; it crashed every time in the middle of the process, probably because there was too much data to process and the program ran out of memory, like Sequoia did (a rough estimate below illustrates why). The only way to complete the process was to import a smaller region of the scanned area, so I chose to recreate only Yllefabriken. The texturing was not particularly successful either. The program seemed to overlap the textures too much and interpolate a mix between two images, instead of just applying image by image, a problem that is avoided when the texture is done manually. In contrast to Sequoia there is no Hacksaw feature in RealityCapture; everything has to be exported as one mesh, or as a specific area that the user marks. This means that the exported texture file will have a very limited resolution if the model is exported with a single texture. RealityCapture supports multiple texture tiles, but for some reason it only generated one, even though I changed the setting to multiple. Exporting a single 8k texture file was the best option. The result is seen in figure 3.17, where the model is rendered in Unreal.

For a further comparison, the city planning office had sent me a new 3D model of an industrial site located at the harbor in Norrköping. This model had also been processed in RealityCapture, with a combination of Lidar data and photographs captured from a drone. Its area was almost as big as my model, and the surface was divided into 44 UV-tiles, each provided with a 4k texture image. The model was imported into Unreal and applied with its material; the result is shown in figure 3.18.
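To see why a denser sample distance overwhelms memory, a rough back-of-the-envelope estimate is enough: sampling a surface on a regular grid gives a point count that grows with the inverse square of the sample distance. The surface area below is a made-up example figure, not measured from the data set.

    def approx_points(surface_area_m2, sample_distance_m):
        # Points on a regular grid over a surface: one per d x d cell.
        return surface_area_m2 / sample_distance_m ** 2

    # Hypothetical 1000 m2 of scanned surface:
    print(f"{approx_points(1000, 0.05):,.0f}")   # 5 cm grid ->     400,000 points
    print(f"{approx_points(1000, 0.002):,.0f}")  # 2 mm grid -> 250,000,000 points

Going from 5cm to 2mm multiplies the amount of data by a factor of 625, which is consistent with both Sequoia and RealityCapture running out of memory at the denser settings.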

Figure 3.17: Combined laser scans and photos captured from the ground. Dimensions: 120x31x34m. Polygons: 9.2M. Texture resolution: 8k (1 texture map).

Figure 3.18: Combined laser scans and photos captured from a drone. Dimensions: 250x145x48m. Polygons: 3M. Texture resolution: 4k (44 texture maps).

4 The Pipeline

This chapter describes the pipeline for the recreation of the scanned site, step by step, including how the scene was arranged in Unreal.

Recap (point cloud alignment)
The scanned data set was imported into Recap and automatically aligned into a single PEM.

Sequoia (meshing and hacksaw)
The PEM was imported into Sequoia and meshed with an accuracy of 5cm. The mesh was then reduced to keep 25% of the polygons and divided into 16 cells. After the reduction, all cells together contained 15 million polygons. All 16 cells were then exported as individual FBX files. Only the most central cells of the model were processed through the pipeline, which included 6 cells; the other cells contained too much redundant geometry.

Simplygon (simplification)
In Simplygon, the cells were reduced by a further 15-30%.

Maya (clean up)
Incoherent geometries like bushes and detached "islands" were removed in Maya, by simply selecting the polygons and deleting them. After this, each cell consisted of far fewer polygons.

Maya (creating windows)
The scanner is not capable of capturing transparent materials like water or glass, since the laser beams do not reflect back until they hit an opaque material, see figure 4.1a. With no texture in the windows, the model would look very incomplete. Planes were placed inside the walls to fill up the hollow windows, see figure 4.1b and the sketch below. The planes were merged with the entire mesh (using the combine tool) and the new models were exported as OBJ files.
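This window in-fill step is easy to script. Below is a minimal, hypothetical sketch using Maya's Python API (maya.cmds); the object name, window dimensions and position are made-up examples, not values from the actual scene.

    import maya.cmds as cmds

    def fill_window(building, width, height, position, rotation_y=0):
        # Create an upright plane behind a hollow window opening
        # and merge it into the building mesh.
        plane = cmds.polyPlane(w=width, h=height, sx=1, sy=1)[0]
        cmds.rotate(90, rotation_y, 0, plane)              # stand the plane up
        cmds.move(position[0], position[1], position[2], plane)
        merged = cmds.polyUnite(building, plane)[0]        # the combine step
        cmds.delete(merged, constructionHistory=True)      # clean up before export
        return merged

    # Hypothetical example: a 1.2 x 1.8 m window in cell 10's mesh.
    fill_window('cell_10_mesh', 1.2, 1.8, (4.0, 2.5, 0.1))

In practice each opening still has to be located by eye, so a script like this mainly removes the repetitive create/rotate/move/combine clicking.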

Figure 4.1: (a) The laser scanner's ability to capture glass. (b) Hollow windows filled with a plane in Maya.

Modo (UV mapping)
Cell number 10 (the eastern part of Yllefabriken) was UV-mapped by view projection and packed into 5 tiles. The other cells were UV-mapped by atlas projection and packed into 4x2 tiles. Material IDs were assigned to each tile and the models were then exported as FBX files. The cell with the most polygons took about 3.5h to pack the UVs; the cell with the fewest polygons took about 30 minutes.

Photomatix (HDR merging)
All exposure images were imported into Photomatix and an HDR batch was performed. The HDR images were exported as 32-bit uncompressed OpenEXR with retained pixel dimensions and without any tone mapping. The merging process took about 1.5 hours to complete 84 HDRs. (A sketch of what this step does is shown below.)

Nuke (lens correction)
The HDRs were imported into Nuke and globally white balanced and exposure adjusted. Lens distortion was removed by applying a lens correction node to all images.

Photoshop (perspective warping)
A few non-spherical images of Yllefabriken were perspective warped in Photoshop using the perspective warping tool.
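For readers without Photomatix, the exposure-merging step can be sketched with OpenCV's Debevec method; this is a stand-in for illustration, not the algorithm Photomatix actually uses, and the file names and exposure times below are hypothetical.

    import cv2
    import numpy as np

    paths = ["exp_short.jpg", "exp_mid.jpg", "exp_long.jpg"]   # hypothetical bracket
    times = np.array([1/250, 1/60, 1/15], dtype=np.float32)    # exposure times (s)

    images = [cv2.imread(p) for p in paths]
    response = cv2.createCalibrateDebevec().process(images, times)
    hdr = cv2.createMergeDebevec().process(images, times, response)

    # Radiance .hdr keeps the linear range; as in the pipeline above,
    # no tone mapping is applied so the dynamic range is preserved.
    cv2.imwrite("merged.hdr", hdr)

The essential property, for this pipeline, is that the output stays linear and unclamped, so that highlights and shadows survive into the texture painting stage.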

MARI 3 (texture projection and image baking)
After all models had been reduced, cleaned and UV-mapped, they were ready to be textured in Mari. All cells were imported at the same time, which facilitates the painting when an image covers two cells, see figure 4.2a. All 32-bit HDR images were imported, the paint buffer was set to 8k resolution and 16-bit depth, and the texture-map resolution was changed to 8k for all tiles. Yllefabriken was the first object to be textured, by aligning and painting the warped images in ortho view. These pictures did not cover the entire building, so they were supplemented with regular (non-warped) images painted in the perspective view. The pictures with lower or higher brightness had their light settings adjusted to blend better with overlaying images. Much of the area in Strömparken could not be textured, since we had a limited set of images taken from only a few angles. These areas were left with the basic gray channel color, shown in figure 4.2b. After all the painting work was done, each cell's texture tiles were exported as 16-bit EXR files into individual folders for each cell.

Figure 4.2: (a) The green line shows the seam between two cells. (b) An overview of how much texture I was able to paint. The building in the right corner has been left with the base channel color.

4.1 Unreal (Environment set-up)

The municipality had sent 9 tiles of their 3D map over central Norrköping. The files were sent in Collada format and had to be converted to FBX format before they could be imported into Unreal. A new project was created with a first-person character. New projects always have a start level which contains:

SkyDome
SkySphere
Directional light
A character with a gun
Fog
A floor

The floor was deleted and replaced by the nine map models. When a model is imported into Unreal, the user is always asked if LODs should be created. I chose to never create LODs, since it took too much time for Unreal to go through that process. For texture assignment, it is enough to check import textures and import materials, and the engine will automatically import the texture, create a material from it and assign it to the model's material slots, as long as the files are located in the same folder. Each map tile is stored with SWEREF coordinates, which made the map appear several kilometers away from the origin. For simplicity, the tiles were grouped and translated to the origin, as the sketch below illustrates.
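The translation to the origin matters for more than convenience. Game engines store positions as 32-bit floats, and at SWEREF-sized coordinates the representable steps are about half a metre, so geometry placed that far out jitters and loses detail. A small numpy sketch, with only roughly approximated coordinates for Norrköping:

    import numpy as np

    # SWEREF 99 TM puts Norrköping roughly at (easting 569000, northing 6495000).
    # At that magnitude a 32-bit float can only step in ~0.5 m increments:
    print(np.spacing(np.float32(6_495_000.0)))   # 0.5

    # Rebasing all tiles around a shared local anchor restores precision:
    anchor   = np.array([569_000.0, 6_495_000.0, 0.0])
    tile_pos = np.array([569_123.4, 6_495_456.7, 12.3])
    print(tile_pos - anchor)                     # [123.4 456.7 12.3]

After the subtraction, positions are small enough that sub-millimetre precision is available again.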

To enable collision detection from every polygon, the collision settings were changed to Use complex collision as simple.

Static lights
Since the map was photographed in daylight, shadows and highlights were already baked into the texture; hence the map should not be lit with other light sources. But the baked shadows were too dark to look realistic in a close view and had to be illuminated. The dark shadows were reduced by adding a Skylight and increasing its intensity. This also increased the reflection/shine of each polygon. To make diffuse textures look more realistic, the material should be 100 percent matte with no reflections, which was done by changing the roughness parameter from 0 to 1. Light sources that are added to the scene will also add shadows on top of the shadows baked into the texture. To avoid this, the checkbox cast shadows was disabled. The shadows in the texture tell where the sun was located when the images were taken, and the sun's position was changed to a more accurate position.

Water
In the original map, Strömmen (the river) looked like a bumpy black snowpack, because of inaccuracies in the photogrammetry. Hence, the water surface was flattened in Maya by marking all vertices in Strömmen and translating them to exactly the same height. On top of the water surface an oblong plane was placed with a water material assigned. In essence, two time-dependent functions move the texture coordinates of two normal maps, which induces waves from two different directions; a sketch of this is shown below. To increase the reflections, a parameter node was connected to the metallic input and changed from 0 to 1. The base color was set to shift between dark green and dark blue, which the real water in Strömmen does. The water material is shown in figure 4.3a.
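The two-direction panning that drives the waves is compact enough to sketch. The Python below mirrors what the material's node graph does conceptually; the directions, speeds and the sample() lookup are illustrative placeholders, not the actual node values.

    import numpy as np

    def panned_uv(uv, time, direction, speed):
        # Slide texture coordinates along a direction over time, wrapping at 1.
        return (np.asarray(uv) + np.asarray(direction) * speed * time) % 1.0

    def water_normal(uv, time, sample):
        # sample(uv) stands in for a normal-map texture lookup.
        n1 = sample(panned_uv(uv, time, ( 1.0,  0.3), 0.02))
        n2 = sample(panned_uv(uv, time, (-0.4,  1.0), 0.05))
        n = n1 + n2                      # crude blend of the two wave layers
        return n / np.linalg.norm(n)     # renormalize the perturbed normal

Because the two layers drift in different directions at different speeds, their interference never visibly repeats, which is what sells the surface as moving water.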

Trees
During the clean-up process in the pipeline, all the trees were removed; only a small part of each stump was left. Six different tree models were downloaded from free internet sites and imported. The tree models were placed in the same locations where the scanned trees had been cut off. Instead of forming the leaves with polygons, which takes an unnecessary amount of memory, it is better to create trees from square planes with a transparent leaf texture. The leaf texture was masked with a black-and-white image to make the edges transparent and then connected to the opacity input in the material editor, see figure 4.3b. All of the downloaded trees were applied with texture masks, and some were also provided with bump maps for the tree trunk.

Walking people
A walking character (Remy) was imported into the scene, figure 4.3c. The mesh and the walk animation were downloaded free from mixamo.com. The animation was mapped to the skeleton of Unreal's humanoid character and the Remy mesh was applied. A Nav Mesh Bounds Volume (UE component) was also added to the scene to control which region he could move in.

Architect model
The municipality had sent an OBJ file and textures of the architect model that would demonstrate the new apartment, figure 4.3e. The model contained 119 different components; for simplicity, all components were combined into one static mesh. One half of the model was removed since it did not align very well with the rest of Yllefabriken. Another problem was that none of the 119 components found their textures, even though the model was imported with textures. The texture files attached to the model were two different wooden textures, two different brick textures and two different concrete textures. The only way to assign these textures correctly was to do it manually for all 119 components, and the only hint was each component's name. Components with names like oak or maple were assigned the wood texture files, and components with names like chrome or glass were assigned Unreal's own materials. The architect model was placed by moving it until it was edge-to-edge with Yllefabriken. Before that, the area above the tunnel had to be flattened (just like the water in Strömmen), because it consisted of many bushes and trees that would otherwise stick straight up through the architect model. A direct light source was added to illuminate the apartment's material and rotated to match the light direction with the shadows in the map.

The scanned data
The six laser-scanned cells were imported into Unreal with their projected texture files, which were assigned to the models' material slots. For each material, the texture sample node was connected to the Emissive input to keep the HDR range [10]. If the texture sample node is connected to the Base Color input, Unreal will clamp the color values to the range 0-1. The cells were placed in the same manner as the architect model, a little bit in front of the buildings and above the ground of the map to hide the low-res geometry, see figure 4.3d.

Normal maps
Normal maps were generated from Yllefabriken's texture images in CrazyBump, figure 4.3f, and connected to the Normal input in the material editor. Details from bump maps are only visible if a light source points at the surface. Hence, an additional directional light source was placed in front of the wall to highlight the details.

Texture streaming and mipmaps
Unreal Engine has a built-in texture streaming system that increases and decreases the resolution of each texture with respect to the memory budget and the viewing distance. The system recalculates the initial texture resolution and stores it in multiple mipmaps; for example, one 8k texture is recalculated to 14 mipmap levels (see the sketch below). The memory budget for the textures is limited by the texture streaming pool. If the system starts to reach the limit, more textures will be rendered at the lowest mipmap level. To be sure that all textures were rendered at the highest resolution, I turned off the mipmapping for all textures and increased the texture streaming pool to 5.5 GB, because that was the total size of all textures. The default value is 1GB; increasing the streaming pool too much will waste memory, and setting it too low will slow down the streaming speed. How it should be balanced properly is discussed later.

VR connection
Finally, HTC Vive VR glasses were connected to the game engine and the first-person player was replaced by a VR player that is navigated with two hand controllers instead of the keyboard. The user moves through the scene by aiming at a location and pressing the trackpad, and the actor is transported to that position.
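The 14-level figure and the pool size follow directly from the texture dimensions, as this back-of-the-envelope Python sketch shows. It assumes square power-of-two textures and, for the memory estimate, an uncompressed 16-bit RGBA format at 8 bytes per texel, which is an assumption rather than Unreal's exact internal layout.

    import math

    def mip_levels(res):
        # Full mip chain for a square power-of-two texture: res, res/2, ..., 1.
        return int(math.log2(res)) + 1

    def chain_mib(res, bytes_per_texel=8):
        # Total memory of the full chain; the sum converges to ~4/3 of mip 0.
        total = sum((res >> i) ** 2 * bytes_per_texel for i in range(mip_levels(res)))
        return total / 2**20

    print(mip_levels(8192))    # 14 levels, matching what Unreal reports
    print(chain_mib(8192))     # ~683 MiB per 8k texture with its full chain

In UE4 the streaming budget can be raised with the r.Streaming.PoolSize console variable; at roughly 683 MiB per fully resident 8k texture of this kind, a handful of such textures already approaches the 5.5 GB pool used here.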

Figure 4.3: Asset creation in Unreal. (a) Water material assigned to a plane. (b) Leaf material and texture mask. (c) Rigged character. (d) The cells placed over the map. (e) Architect model. (f) Bump map from CrazyBump.

5 Results

Visual result
The environment in Unreal consists of the 3D map as base geometry; on top of it, the scanned area around Yllefabriken is placed, divided into 6 cells. The cells contain between and polygons, and all together the model has polygons. All cells are texture mapped by atlas projection and divided into 8 tiles, except for the cell with the eastern part of Yllefabriken (texture mapped by view projection). Each texture has a resolution of 8192x8192 pixels and 16-bit color depth, texture painted in Mari with a combination of spherical and planar HDR projections. The file size of the textures varies from MB, and altogether they take up 5.56GB. Everything is visualized in real time and in first person in Unreal Engine, rendered on a GeForce GTX 1070, Xeon 3.50GHz, and 32GB RAM. The visualization is integrated with a VR connection, and navigation is done with two hand controllers. The right column of figure 5.2 demonstrates the result in screenshots of the real-time rendering. Figure 5.3 shows the final architect model rendered with a mix of Unreal materials and the attached textures.

The pipeline
The pipeline was designed to achieve accurate texture quality with highly detailed geometry. Scanning the area took about 3-4h. From the time the data had been measured, it took about one week to process the entire model to its final rendering in Unreal. No advanced knowledge of the software is required, only a basic understanding. Figure 5.1 shows the resulting pipeline with the most important procedures. Note that it only includes the basic steps; processes like creating window planes and bump maps are not included, since they are not required to just visualize the environment. For each cell to be processed, steps 2 to 10 must be repeated. The packages that were examined for image projection and UV mapping but did not qualify for the pipeline were:

1. Mudbox
2. Sequoia
3. Maya
4. ZBrush
5. ReMake
6. RealityCapture

Figure 5.1: The Pipeline

Figure 5.2: Comparison between the 3D map and the laser-scanned model with HDR textures.

Figure 5.3: Screenshot of the architect model.

6 Discussion

6.1 Method

The pipeline
The reason why several software packages were investigated, instead of directly adopting the software in Metzger's pipeline, was to gain more experience of UV mapping, HDR merging and texturing, in order to more easily determine which tools were best suited for our purpose. The software evaluation took the most time of the project, but was necessary for a customized pipeline. Many decisions regarding texture and geometry resolution were made by embracing what looked good. It would therefore have helped me to have more experience of game development, which would have simplified the choices regarding the mesh and texture division.

Meshing
The clean-up should be done directly in either Recap or MeshLab, before the point cloud is meshed, because it takes significantly longer for Maya to delete thousands of polygons than it takes to clean up a point cloud. For example, the trees and windows could potentially be erased immediately in the point cloud. The more I worked with the data, the easier it was to determine which geometries were worth saving and which should be erased. At the beginning of the project I had no idea how accurately the texturing could be done; hence, there was a point in not erasing too many geometries in the beginning. A vertex sampling accuracy of 5cm was too sparse to restore sharp edges; some geometries almost looked like clay in comparison to the result of RealityCapture, sampled with 2mm accuracy. But it was my only choice, since Sequoia did not manage to sample the vertices more densely without failure. A better option would be to use RealityCapture instead of Sequoia, divide the point cloud into smaller regions and recreate each part individually with 2mm sampling accuracy.

UV mapping
The most time-consuming step in the pipeline was packing UVs, because of the high number of polygons to be processed. Metzger shows in his demo [20] how he used Modo for manual resurfacing by drawing quads on the mesh.

Basically, he built a brand-new model with the point cloud as reference, which makes the UV mapping more flexible and the texture baking considerably faster. This step would have been unreasonable for me to perform, as it requires much time and modeling experience. Therefore, Simplygon was used instead. The choice of how many UV tiles each model would be divided into was made with a rule of thumb: as many tiles should be applied as there were photos covering a cell. Approximately four photos were enough to cover a cell, and given that atlas projection mapped about twice as many UVs as necessary (the backside and bottom), I also doubled the number of tiles. It was a decision that worked quite well considering the results. The fact that I reconstructed a double-sided model was a huge mistake, and it would be interesting to see how much it affected the UV mapping, both in UV distribution and texture resolution. To check whether the estimate of the number of texture tiles was excessive, a tiling formula from L. Iezzi's compendium [13] was used, where the number of tiles is calculated by multiplying the length of the model in meters with its texel density, divided by the texture resolution:

    Meters × PixelsPerMeter / TextureResolution = TilingValue
    -> 71 m × 317 px/m / 8192 px = 3.3    (6.1)

The texel density was calculated via a Modo plugin developed by James O'hare [23], which calculates the mesh's average texel density based on the ratio between the polygons and the UVs' mean area in meters, multiplied by the texture resolution to be used. The result for the 71m wide model was 317 pixels per meter. Based on equation 6.1 this resulted in 3.3 tiles. Thus I had exaggerated the number of tiles.
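Equation 6.1 is simple enough to fold into a helper when planning tile counts up front. The sketch below uses made-up example numbers, not the thesis's measurements, and rounds up since only whole tiles can be allocated:

    import math

    def tiles_needed(length_m, texels_per_meter, texture_res):
        # Equation 6.1: tiles = model length x texel density / tile resolution.
        return length_m * texels_per_meter / texture_res

    # Hypothetical example: an 80 m facade at 400 px/m with 8k tiles.
    print(tiles_needed(80, 400, 8192))             # ~3.9
    print(math.ceil(tiles_needed(80, 400, 8192)))  # 4 whole tiles

Running this kind of estimate before UV packing would have flagged the over-allocation described above.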
Performance
In order to retain as much of the photos' resolution as possible, I chose to export each texture map at Unreal's highest accepted resolution, namely 8k. Using 8k 16-bit textures meant that the average file size per texture became 90MB. Using 90MB texture files would never work for a future visualization with more models, because it would overload the graphics card's memory and impair performance. Since only a small part of Strömparken was rendered, all textures could be rendered in full resolution without any lag, provided that the graphics card has at least 8GB of video RAM. Of course, mipmapping should not be turned off if performance matters, but Unreal failed to switch between mipmap levels properly when the player moved closer to the textures; hence, the best solution was to turn it off at the cost of performance. 8k textures are a rather exaggerated resolution to use for all materials, as the player can only view certain geometries from far distances. The best option for performance would be to carefully plan the model's texel density: surfaces rendered far from the player would be divided into separate UV tiles and assigned low-res textures, as Luke Ahearn describes in 3D Game Textures [3]. I found a way to downsample the texture resolution directly in Unreal from 8k to 4k, which halved the resource memory and caused no visual impairment.

The choice of the textures' color depth can also be discussed. One solution could be to tone map each HDR and save it as an 8-bit PNG file, paint it in Mari and export it as 8-bit textures. The file sizes would be significantly smaller, and details such as highlights would be more visible. But on the other hand, I would have worse conditions for experimenting with the textures' shading. This is why I wanted to keep as much of the dynamic range as possible and did not tone map any images; I also could not find a good way to smoothly tone map all images. Exporting the textures in 32-bit color depth generated way too large files, about 800MB each. Regarding the normal map, it was more of an experiment to see if the details of the brick joints could be improved. It turned out slightly better, but at the same time it is wrong to add light sources to precomputed, baked lighting.

Image projection
Considering the difficulty of aligning the spherical projections, mainly due to the inaccuracy of estimating the camera position, some kind of indication of the camera positions during the photo shoot would have been necessary. The difficulty was also due to the fact that there were too few captures and too large a distance between the camera and the surface to be textured. When Metzger texture painted the warehouse in Rise, he took considerably more spherical images with wider overlaps, about 5m apart between camera locations. When we photographed spherical images, we moved the camera about 15-20m between shots. That resulted in too little information to texture paint high-resolution overlaps, shown in figure 6.1.

Figure 6.1: The gap between the green lines is missing resolution.

The reason the resolution deteriorates between the overlaps is that Mari interpolates color values when a region is textured for which there is actually no information. Thus, the projection is only good enough from the view where the images were taken, seen in figure 6.2.

Figure 6.2: Spherical projections. (a) View pointed from the correct camera position. (b) The view has been translated from the camera position.

For planar image projection, the biggest problem was also camera alignment. Metzger did it in Maya by applying an overlaying image plane in the screen space and turning down the opacity. When the camera was lined up, the location was stored as a keyframe animation, the


More information

Efficient Simulation and Rendering of Sub-surface Scattering

Efficient Simulation and Rendering of Sub-surface Scattering LiU-ITN-TEK-A--13/065-SE Efficient Simulation and Rendering of Sub-surface Scattering Apostolia Tsirikoglou 2013-10-30 Department of Science and Technology Linköping University SE-601 74 Norrköping, Sweden

More information

Developing a database and a user interface for storing test data for radar equipment

Developing a database and a user interface for storing test data for radar equipment Linköping University IDA- Department of Computer and information Science Bachelor thesis 16hp Educational program: Högskoleingenjör i Datateknik Spring term 2017 ISRN: LIU-IDA/LITH-EX-G--17/006 SE Developing

More information

2D rendering takes a photo of the 2D scene with a virtual camera that selects an axis aligned rectangle from the scene. The photograph is placed into

2D rendering takes a photo of the 2D scene with a virtual camera that selects an axis aligned rectangle from the scene. The photograph is placed into 2D rendering takes a photo of the 2D scene with a virtual camera that selects an axis aligned rectangle from the scene. The photograph is placed into the viewport of the current application window. A pixel

More information