Math 4020 3D Printing Group Final Report

Gayan Abeynanda, Brandon Oubre, Jeremy Tillay, Dustin Wright

Wednesday 29th April, 2015

Introduction

3D printers are a growing technology, but there is currently no objective and widely-used method of comparing the quality of 3D printers. The goal of our project is to develop a good metric for measuring the quality of these printers. The essential task is to compare an STL file's ideal representation of an object, which is the file sent to the 3D printer, to the real-world printed object. If the two are very similar, the printer is essentially very good. If there are significant differences, due to low resolution or because parts of the object fall off, then it is a bad printer. Our task, mathematically, is to compare an STL file to a real-world scan of the printed object. We have to ensure that our comparison is "fair" in the sense that the objects are compared only once they are aligned and scaled appropriately. If someone scans the object but misplaces or slightly rotates it, yet the print itself is a great representation of the ideal object, the misplacement should not skew the metric. Similarly, a badly printed object should not be placed in such a way that the metric fails to identify how truly different the printed and ideal objects are. Once the objects are aligned and scaled appropriately, we can define a metric to compare them.

Approach

Our approach has two inputs: an STL file representing an object to be 3D printed and the 3D scan of the actual printed object. The scan will be provided to our algorithm either as a series of 2D cross sections of the actual object or as a 3D intensity matrix. We will refer to this set of cross sections as the actual image series and to the 3D intensity matrix as the actual intensity matrix. From the STL file we will generate another 3D intensity matrix that we will refer to as the ideal intensity matrix.
In order to align the actual matrix (or series) to the ideal matrix, we use an algorithm we developed that is described in a later section. Once the two objects have been aligned, we can apply a metric to them in order to determine how close the objects are and assess the printer's quality.

Ideal Matrix Creation

One method that we have proposed for creating an ideal intensity matrix is as follows. First, we want all values of both matrices to lie on a predefined interval. To achieve this, we simply apply the following transformation to the actual 3D intensity matrix, where A_{i,j,k} is the (i, j, k)th entry of the actual intensity matrix and A_max is the largest value in the matrix:

    A_{i,j,k} ← A_{i,j,k} / A_max
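As a minimal sketch of this rescaling (the array contents below are a toy stand-in for a real scan, not data from the report):

```python
import numpy as np

def normalize(A):
    """Rescale an intensity matrix so all entries lie in [0, 1].

    Implements A_{i,j,k} <- A_{i,j,k} / A_max; assumes A_max > 0.
    """
    return A / A.max()

# toy stand-in for a scanned 3D intensity matrix
A = np.random.default_rng(0).integers(0, 256, size=(8, 8, 8)).astype(float)
A_norm = normalize(A)
```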

This transformation guarantees that all entries of the actual intensity matrix are scaled down to the interval [0, 1]. Note that the intensity matrix is a negative image: intensity values are lower where the object existed in the scan. For intuitive purposes and for later calculations, we transform it into a positive image, where higher values represent object presence, by applying the following transformation after the previous one:

    A_{i,j,k} ← 1 − A_{i,j,k}

Now that we have applied these transformations to the actual intensity matrix, we can proceed with creating an ideal intensity matrix. We propose accomplishing this by creating a 3D matrix that is a pixelated version of the STL file. That is, we create a matrix of the same dimensions as the actual intensity matrix and, given the scale information from the actual matrix, assume the same scale for the ideal matrix. Each cell in the ideal matrix then represents a point in 3-space. One way to fill this matrix is to test, for each cell, whether the associated point is on the interior or exterior of the STL file object. If it is on the interior, we assign the cell the value 1; if on the exterior, we assign it 0.

Matrix Alignment

Our method of alignment is based on the assumption that we are printing objects that can be laid flat. We also assume that when the object is scanned, the person placing the object on the scanner has placed it on the correct side and with an angular orientation close to that of our ideal file. Under these assumptions, we generate a matrix of "ideal" values, where A_{i,j,k} = 1 if matter is supposed to be printed at that point and 0 otherwise. The size of this matrix corresponds to the number of pixels in our scanner. We also have the 3D matrix that comes from the scan of the actual object, which we will denote B_{i,j,k}.
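The interior/exterior fill can be sketched as follows. The `inside` function here is an analytic sphere standing in for a real point-in-STL-mesh test (which would use, e.g., ray casting or winding numbers against the mesh triangles); everything below is illustrative, not code from the report:

```python
import numpy as np

def make_ideal_matrix(shape, inside):
    """Fill a 3D 0/1 matrix: cell (i, j, k) gets 1 if the associated
    point in 3-space lies inside the object, 0 otherwise."""
    A = np.zeros(shape)
    for i in range(shape[0]):
        for j in range(shape[1]):
            for k in range(shape[2]):
                A[i, j, k] = 1 if inside(i, j, k) else 0
    return A

# stand-in inside() test: a sphere of radius 5 centred in a 16^3 grid
def inside(i, j, k):
    return (i - 8) ** 2 + (j - 8) ** 2 + (k - 8) ** 2 <= 25

ideal = make_ideal_matrix((16, 16, 16), inside)
```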
Now we create functions defined over R^3 so that we can perform continuous operations such as scaling and rotating in order to properly align the objects. Let

    f(x, y, z) = A_{round(x), round(y), round(z)}

So if the matrix A is essentially a cloud of points recording whether there is mass at a discrete set of points, our function f fills in 3-space with the values 1 or 0 taken at these integer points. Let g(x, y, z) = 1 if B_{round(x), round(y), round(z)} is above a certain numerical threshold σ that we determine, and 0 otherwise. This threshold exists because B is a 3D matrix corresponding to a scan of a physical object: we must set σ to a value that would realistically come from a scan of air, so that any value above it presumably corresponds to a point with matter.

Now recall that we can find the center of mass (x̄_f, ȳ_f, z̄_f) of a function f(x, y, z) by calculating:

    x̄_f = ∫ f(x, y, z) x dV / ∫ f(x, y, z) dV
    ȳ_f = ∫ f(x, y, z) y dV / ∫ f(x, y, z) dV
    z̄_f = ∫ f(x, y, z) z dV / ∫ f(x, y, z) dV

We first shift each function by its center of mass, so that the center of mass of each object is moved to the origin:

    f_s(x, y, z) = f(x + x̄_f, y + ȳ_f, z + z̄_f)
    g_s(x, y, z) = g(x + x̄_g, y + ȳ_g, z + z̄_g)

We do this because our assumption is that when the objects are properly aligned, their centers of mass coincide. Moving this common point to the origin also means that when we rotate around the z-axis, the center of mass stays fixed; this way we avoid having to iterate between shifting, rotating, re-shifting, rotating, and so on. Finally, we rotate the object. A rotation of g_s by an angle θ clockwise can be expressed as

    g_s(x cos θ − y sin θ, x sin θ + y cos θ, z)

Now we must find the rotation that maximizes the overlap between the objects, aligning them as closely as possible. We do this by finding

    max_θ ∫ f_s(x, y, z) · g_s(x cos θ − y sin θ, x sin θ + y cos θ, z) dV

This maximum occurs when there is maximal overlap between the two functions, and the maximizing θ is the angle by which we must rotate g_s to properly align the functions. We can search within a small range around 0, since our assumption is that a human carefully placed the object and got somewhat close to maximal alignment. This search is computationally slow, but with enough time it can get arbitrarily close to perfect alignment of the ideal and real objects. There is potential for improvement, perhaps via a more iterative process we found in our research but did not have time to implement.

Alternative Approach to Alignment: Cross-Correlation of Radon Transforms

In this approach, the alignment of the actual intensity matrix (corresponding to the printed object) to the ideal intensity matrix (the reference, corresponding to the ideal object) is achieved with a method developed for 3D rotational alignment of particles in microscopy [1, 2]. The method is based on cross-correlations of two- and three-dimensional Radon transforms. The technique can essentially be described as follows: all possible two-dimensional projections of the reference object are calculated and used as references for alignment of the experimental projections of the printed object by cross-correlation techniques. The Radon transform provides the necessary mathematical tool for the alignment, since the three-dimensional Radon transform contains the same information as a complete set of two-dimensional projections in a much smaller data set, and it facilitates simultaneous translational and rotational alignment.
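Before turning to that alternative, the first alignment method (center-of-mass shift plus brute-force rotation search) can be sketched on a toy voxel grid. This is an illustrative discretisation, not the report's code: both arrays are assumed already thresholded to 0/1, angles are in radians, and nearest-neighbour rounding plays the role of f(x, y, z) = A_{round(x), round(y), round(z)}:

```python
import numpy as np

def center_of_mass(F):
    # (x̄, ȳ, z̄) = ∫ F·x dV / ∫ F dV, discretised as weighted index averages
    idx = np.indices(F.shape)
    m = F.sum()
    return np.array([(F * idx[d]).sum() / m for d in range(3)])

def overlap(A, B, theta):
    # Discretisation of ∫ f_s(x,y,z) g_s(x cosθ − y sinθ, x sinθ + y cosθ, z) dV:
    # both grids are shifted so their centres of mass sit at the origin before
    # B's coordinates are rotated about the z-axis.
    ca, cb = center_of_mass(A), center_of_mass(B)
    nx, ny, nz = A.shape
    total = 0.0
    for i, j, k in zip(*np.nonzero(A)):
        x, y, z = i - ca[0], j - ca[1], k - ca[2]
        xr = x * np.cos(theta) - y * np.sin(theta)
        yr = x * np.sin(theta) + y * np.cos(theta)
        bi = int(round(xr + cb[0]))
        bj = int(round(yr + cb[1]))
        bk = int(round(z + cb[2]))
        if 0 <= bi < nx and 0 <= bj < ny and 0 <= bk < nz:
            total += B[bi, bj, bk]
    return total

def best_rotation(A, B, thetas):
    # grid search over a small range of angles around 0
    return max(thetas, key=lambda t: overlap(A, B, t))

# toy example: an off-centre rectangular block; "scan" equals the ideal object,
# so the best rotation should be 0
A = np.zeros((16, 16, 8))
A[4:12, 6:9, 2:6] = 1
theta_star = best_rotation(A, A, np.linspace(-0.4, 0.4, 5))
```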
The translation and orientation of the printed object relative to the reference (ideal object) are determined simultaneously by calculating a 5-dimensional cross-correlation function over the three Euler angles α, β, γ (the orientation) and the magnitude r and argument η of the shift vector. The values of α, β, γ, r and η for which the cross-correlation function assumes its maximum define the alignment of the printed object relative to the reference.

Definition: Given a 3-dimensional function f(r), r = (x, y, z), its Radon transform is defined as

    R(f(r)) ≡ f̂(p, ξ) = ∫ f(r) δ(p − ξ·r) dr

Here δ(p − ξ·r) determines the plane over which the integration is carried out, and ξ = (cos θ sin φ, sin θ sin φ, cos φ) is the unit vector orthogonal to that plane. Analogously, given a 2-dimensional function g(r), r = (x, y), its Radon transform is defined as

    R(g(r)) ≡ ĝ(p, ζ) = ∫ g(r) δ(p − ζ·r) dr

where δ(p − ζ·r) determines the line over which the integration is carried out and ζ = (sin ε, cos ε) is the unit vector orthogonal to that line.

The two-dimensional Radon transform can be calculated by computing all one-dimensional projections (in small angular increments) of a two-dimensional image. A three-dimensional Radon transform can be calculated in two steps: first calculating all two-dimensional projections perpendicular to the x-z plane, followed by a two-dimensional Radon transform of each of those projections.

Implementation: The three-dimensional Radon transform of the reference (ideal object) is represented as f̂(p, φ, θ), identified as a set of integrals over lines defined by the angle θ of the two-dimensional projection and a second angle φ defined by the projecting direction within the plane at angle θ. The two-dimensional Radon transform of a projection, ĝ(p, ε), is identified as the set of integrals over lines at angles ε. The method is implemented through a series of computations. First, the two-dimensional Radon transform of a projection at Euler angles (α, β, γ) is extracted from the three-dimensional Radon transform f̂(p, φ, θ). Let r̂_{β,γ}(p, α′) be this extracted two-dimensional Radon transform and let ĝ(p, ε) be the two-dimensional Radon transform of one projection, where α = α′ + ε. Then the angles φ, θ in the three-dimensional Radon transform belong to the line at angle ε in the two-dimensional Radon transform of the projection at angles α, β, γ.

To find the orientational and translational relationship of the printed object relative to the reference, the cross-correlation function of the Radon transforms is defined. To achieve this, the shifting property of the Radon transform has to be used:

    R(f(r − a)) = ∫ f(r − a) δ(p − ξ·r) dr = ∫ f(s) δ((p − ξ·a) − ξ·s) ds = f̂(p − ξ·a, ξ)

The cross-correlation function, depending on the three Euler angles α, β, γ and a shift vector of magnitude r and argument η, is then defined by

    c(α, β, γ, r, η) = ∫∫ r̂_{β,γ}(p, ε) ĝ(p − r sin(ε + η), ε + α) dp dε

For higher computational efficiency, these cross-correlations are calculated by multiplying the Fourier transforms of the corresponding Radon transforms and then applying an inverse radial Fourier transform.
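The first of these steps, computing a two-dimensional Radon transform as one-dimensional projections at small angular increments, can be sketched as below. This is an illustrative nearest-neighbour version (a production implementation would interpolate and pad the image, as in standard Radon/sinogram routines):

```python
import numpy as np

def radon2d(img, angles):
    """Return a sinogram: one row per angle, each row being the 1-D
    projection of `img` after rotating the sampling grid by that angle."""
    n = img.shape[0]                      # assumes a square image
    c = (n - 1) / 2.0
    ii, jj = np.indices((n, n))
    x, y = ii - c, jj - c                 # coordinates about the centre
    sino = np.zeros((len(angles), n))
    for a, th in enumerate(angles):
        # nearest-neighbour sample of the image on a grid rotated by th
        xr = np.rint(x * np.cos(th) - y * np.sin(th) + c).astype(int)
        yr = np.rint(x * np.sin(th) + y * np.cos(th) + c).astype(int)
        ok = (xr >= 0) & (xr < n) & (yr >= 0) & (yr < n)
        rot = np.where(ok, img[xr.clip(0, n - 1), yr.clip(0, n - 1)], 0.0)
        sino[a] = rot.sum(axis=0)         # integrate along one axis
    return sino

# projections of a centred square at 0, 45 and 90 degrees
img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0
sino = radon2d(img, [0.0, np.pi / 4, np.pi / 2])
```

At angle 0 the projection reduces exactly to a column sum of the image, which makes the routine easy to sanity-check.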
The angles α₀, β₀, γ₀, η₀ and the value r₀ for which the cross-correlation function assumes its maximum provide the alignment of the printed object relative to the reference.

Metric

Once the objects are aligned, there are a fair number of ways to define a metric evaluating the printer. First, we can print a set of objects that adequately test the printer's capabilities. Such test objects might include something with a long extraneous edge that might easily break off, a very bumpy object, or a simple sphere or cube to test its capabilities at a very basic level. Then we can take

    ∫ |i(x) − r(x)| dx

where i(x) is a function of the ideal object that has been aligned and scaled: it is 0 where there is empty air and 1 where there is supposed to be matter. Likewise, r(x) is a function of the scan of the printed structure that has been aligned and scaled; it is again 0 where there is empty air and 1 where there is matter. Alternatively, instead of 1, we can assign different values that highlight the importance of certain parts of the structure. For example, the center of a sphere or cube could have a higher value if we care more about the 3D printer ensuring that the core of the object is printed; if the center is hollow, the integral will then be much larger. If we print an object with bumps, the bumps could have relatively higher values if we really want to highlight whether the printer reproduces them adequately.
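On the voxel grids, this integral becomes a weighted sum of absolute differences. A minimal sketch (array names and the toy data are illustrative):

```python
import numpy as np

def print_quality_metric(ideal, actual, weights=None):
    """Discrete version of ∫ w(x)·|i(x) − r(x)| dx over the voxel grid.

    `ideal` and `actual` are aligned 0/1 matrices; `weights` optionally
    emphasises regions we care about more (defaults to 1 everywhere)."""
    if weights is None:
        weights = np.ones_like(ideal, dtype=float)
    return float(np.sum(weights * np.abs(ideal - actual)))

# toy example: the "print" is missing one voxel of the ideal cube
ideal = np.zeros((8, 8, 8))
ideal[2:6, 2:6, 2:6] = 1.0
actual = ideal.copy()
actual[2, 2, 2] = 0.0
score = print_quality_metric(ideal, actual)
```

A perfect print scores 0; every misplaced or missing voxel adds its weight to the score.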

Conclusion

Our group has developed a methodology that will assist in evaluating the quality of 3D printers. Given an object's STL file and data from the 3D scan of a printed object, we can align the scanned data to match the data from the STL file and then compare the actual object to its idealized version. This comparison can then be used to assess the quality of the print and, in turn, the overall quality of the printer.

References

[1] Radermacher, M. (1997). Radon Transform Techniques for Alignment and Three-Dimensional Reconstruction from Random Projections. Scanning Microscopy, Vol. 11, pp. 171-177.

[2] Radermacher, M. (1994). Three-dimensional reconstruction from random projections: orientational alignment via Radon transforms. Ultramicroscopy, Vol. 53, pp. 121-136.