FINAL REPORT

Prepared for Evergreen Unmanned Systems and Shell International Exploration and Production Inc.

Center for Collaborative Control of Unmanned Vehicles
University of California, Berkeley

PI: Prof. Raja Sengupta
Project Manager: John Connors
Project Team: Ben Kehoe, Zu Kim, Tom Kuhn and Jared Wood

Contains Proprietary Information belonging to: UC Berkeley and Evergreen Unmanned Systems

Table of Contents

Executive Summary
Search and Rescue
System Overview
Assessment of the ScanEagle as a platform for Autonomous Intelligent Control
Computer Vision Detection
Probabilistic Search
Path Planning and Control
Cursor-on-Target Interface
Simulation Environment
Flight Experiments
    Experiment Setup
    Day 1 (December 8th, 2009)
    Day 2 (December 9th, 2009)
    Day 3 (December 10th, 2009)
    Day 4 (December 14th, 2009)
Conclusions and Analysis
Future Work

Executive Summary

The Center for Collaborative Control of Unmanned Vehicles (C3UV) at the University of California, Berkeley is an interdisciplinary research group focused on making unmanned vehicles operate autonomously, without extensive monitoring or intervention by human operators. C3UV, in conjunction with Evergreen Unmanned Systems (EUS), demonstrated a new technology to autonomously search for and detect targets in search and rescue applications. An autonomously controlled, fixed-wing aircraft performed detection and localization of an orange kayak. The system maintained a probability density function (PDF) that expressed the likely location of the target. Video data from the aircraft was analyzed in real time to detect the target and update the PDF. The aircraft and camera sensor paths were generated from the PDF to maximize the likelihood of detecting the kayak. Following this method, the aircraft searched the given space to detect and localize the target in an efficient manner.

A series of flight tests was conducted at McMillan Airfield, Camp Roberts, CA on December 8th-14th. The aircraft was launched and operated by EUS. The automation technology ran on a standard desktop computer operated by C3UV. The results of the experiment show positive advancement for autonomous search and rescue. Once the autonomous intelligence was adapted to the ScanEagle system, the search and rescue target, i.e., an orange kayak, could be autonomously detected and localized to approximately 40 meters. The UAS flew at an altitude of 100 meters. We have also identified a few simple ways in which the ScanEagle can be improved to make it a robust platform for intelligent autonomous operation. These are summarized in the section titled Future Work.

Search and Rescue

Search and Rescue (SAR) refers to missions performed by emergency crews to locate and retrieve people or equipment from dangerous situations. Typical scenarios include aircraft/vehicle accidents, natural disasters, or acts of war. The two-fold challenge in search and rescue is to find and locate those in need of rescue, and to access those individuals and provide support for their rescue.

In a traditional SAR mission, numerous human searchers require long periods of time to locate a target. Though the process of searching is not technically challenging to a human being, it can be very demanding and requires precision. Due to the repetitive and accuracy-critical nature of the exercise, it is an ideal problem to be solved through computer-based techniques.

Current manned SAR techniques, as employed by groups such as the Coast Guard, involve flying predefined patterns over a given search area. Examples of these patterns are outwardly travelling spirals, or transect paths. The limited number of patterns limits the search team's ability to adapt to new scenarios, and may not incorporate information specific to each search.

The system developed here can integrate prior knowledge of the search area and continuously reevaluates flight plans to produce the highest likelihood of detection. The system can also revisit areas of interest without interfering with pre-defined flight plans.

In this work, the goal is to detect and localize an orange kayak (representing a life raft) within a 2 sq. km area. A fixed-wing aircraft, the Insitu ScanEagle, is automated to fly above the area with an electro-optical (EO) camera and search for the target. The system detects and localizes the target in order to aid rescuers in the SAR task.

System Overview

The SAR system, shown in Figure 1, draws on numerous disciplines, including computer vision, mathematical modeling, and control. An EO camera on board the aircraft broadcasts an analog video signal to the system ground station. This video feed is digitized, and each frame is made available to the C3UV system. Computer vision techniques analyze each video frame to detect the presence of the target. These algorithms map the image to the surface of the ground and estimate the location of the target. A likelihood function expresses the probability that the target exists at that location. These functions are merged into a larger map of the search area. The result is a probability density function (PDF) that expresses the probability that the target exists at any given location. As the target is not equally likely to be everywhere in the search area, some aircraft and sensor paths will be more likely to find the target than others. A desired aircraft and sensor path is created from the PDF and current aircraft telemetry in order to maximize the probability of detection. These paths are sent to the ground station through the Cursor-on-Target (CoT) interface, and broadcast to the aircraft. The aircraft follows the paths, and returns the new video stream as described above, closing the control loop on the SAR application.
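The closed loop above can be summarized in a short sketch. This is a schematic only, not the C3UV implementation; the component interfaces (vision, search, planner, cot, frame_grabber) are hypothetical stand-ins:

```python
# Schematic of the ground-station SAR control loop described above.
# All component objects and their methods are illustrative placeholders.

def run_sar_loop(vision, search, planner, cot, frame_grabber):
    """One iteration per video frame: frame in, waypoints out."""
    while not search.target_localized():
        frame = frame_grabber.next_frame()     # digitized analog video
        telemetry = cot.latest_telemetry()     # aircraft + gimbal state

        # Vision: detect target candidates and build a likelihood function.
        likelihood = vision.process(frame, telemetry)

        # Probabilistic search: fuse the likelihood into the PDF (Bayes update).
        search.update_pdf(likelihood)

        # Planning: choose aircraft/sensor paths that maximize detection odds.
        waypoints, camera_target = planner.plan(search.pdf(), telemetry)

        # Cursor-on-Target: command the aircraft and gimbal via the ground station.
        cot.send_waypoints(waypoints)
        cot.point_camera(camera_target)
```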

Figure 1: System Architecture for SAR Experiment

Assessment of the ScanEagle as a platform for Autonomous Intelligent Control

The ScanEagle system presents various restrictions that limit performance or flexibility. The majority of these issues are due to limitations of the CoT interface. On the ScanEagle aircraft, this interface is the only automated way to exert control over the aircraft. A direct interface to the autopilot is available on board the aircraft and may be a more appropriate way for future systems to interact.

The telemetry data sent by the ground station exhibited erratic behavior that made the integration process more difficult. Data packets were occasionally sent out of order or were duplicated. Additionally, some packets contained identical information with different timestamps. Because accurate vision detection relies on precise synchronization of telemetry with video, the lack of reliable telemetry reduces the accuracy with which the synchronization problem can be addressed.

Control of the aircraft is performed through a limited CoT interface. Two modes of control are available: path following, or fixed-radius orbits. The current path-following technique of the ScanEagle's autopilot presents the greatest obstacle to increasing performance. The aircraft fails to implement a sufficient planning horizon, often flying direct line paths to each waypoint. Only after passing a waypoint does the aircraft attempt to turn toward the next. For this reason, the aircraft consistently overshoots the desired path, which reduces the effectiveness of third-party aircraft control. Because placement of the aircraft is a key factor in placing the sensor footprint, these behaviors limit the effectiveness of autonomous path planning.
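A practical consequence of the erratic telemetry is that a consumer must filter the stream before use. A minimal sketch of such a filter follows; the packet structure (a timestamp plus a payload dictionary) is a hypothetical simplification, not the actual CoT schema:

```python
# Minimal telemetry sanitizer: drops out-of-order packets and packets that
# repeat identical data under a new timestamp, the two failure modes above.

def sanitize(packets):
    """Yield packets in timestamp order, skipping duplicates and stale arrivals."""
    last_time = float("-inf")
    seen_payloads = set()  # unbounded here; a real filter would age entries out
    for pkt in packets:
        if pkt["timestamp"] <= last_time:
            continue                      # out of order or repeated timestamp
        key = tuple(sorted(pkt["payload"].items()))
        if key in seen_payloads:
            continue                      # identical data re-sent with a new timestamp
        seen_payloads.add(key)
        last_time = pkt["timestamp"]
        yield pkt
```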

The alternate control method is through the use of orbit points. Given a center point and radius, the aircraft will circle the point at the given radius. This feature can be a very useful way to gain lower-level control of the aircraft, as the path follower does not execute tight turns well. Unfortunately, the CoT interface implemented on the ScanEagle does not allow the rotational direction of the orbit to be specified. The aircraft will therefore always orbit in the same direction, drastically decreasing the effectiveness of this control input.

The ScanEagle provides a two-axis, inertially stabilized gimbal system. The system provides accurate and stable images of a specified location. The system is limited, however, by the commands available through CoT. Ideally the interface would provide access to a lower level of control of the sensor. At present, the system is only capable of pointing the camera at a specific location. Due to the high rate at which the computer can process the vision data, the camera must be constantly re-commanded to point at a new location. The ground station limits the rate at which these updates may happen. The gimbal provides other operation modes that could be used to combat some of these issues. However, those capabilities are not available through the CoT interface.

In addition to the control limitations, the gimbal provides poor telemetry feedback. The ground station reports gimbal orientation in terms of Earth coordinates. For computation purposes, this data must be translated into relative gimbal angles. This is an unnecessary step, as the true angles are available on the aircraft but not provided through CoT.

Computer Vision Detection

Airborne sensors present unique challenges in real-time vision detection due to their perspective and relative speed. C3UV has focused on developing accurate vision-based detection algorithms for airborne systems. It is not possible to achieve perfect detection or localization performance, due to the intrinsic difficulty of the detection problem and limits on sensor accuracy. The system accounts for the possibility of detection and localization errors by using target likelihood estimation algorithms to best approximate the target position given the expected error range. These likelihood functions are used to update the probabilistic search framework that generates the PDF.

The vision framework consists of two core components: target detection and target localization. First, each video frame is compared to a target model with a color- and shape-based target detection algorithm. The model is designed to detect a bright orange life boat in an ocean. The raft's distinctive color is a useful cue for target detection. A detection example is shown in Figure 2, where a color filter is applied for both the target and background. The dark background simulates the look of the Arctic Ocean from above, and provides good color contrast. A pixel grouping and shape analysis algorithm is used to filter out noise. The raft model is compared to real images by scaling the model based on altitude and zoom level, thus filtering out objects that are not of a reasonable size. The use of these filters provides a more robust algorithm and greater accuracy.
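The color-filter-plus-grouping stage can be illustrated with a short sketch. This is an illustrative reimplementation, not the C3UV detector; the HSV threshold values and the expected raft size are hypothetical placeholders:

```python
import cv2
import numpy as np

def detect_orange_target(frame_bgr, expected_area_px, tolerance=0.5):
    """Color filter + pixel grouping + size gate, as described above.

    expected_area_px: predicted raft area in pixels, derived from altitude
    and zoom level (the calibration behind this value is not shown here).
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)

    # Color filter: keep bright orange pixels (threshold values are illustrative).
    mask = cv2.inRange(hsv, (5, 120, 120), (20, 255, 255))

    # Pixel grouping: connected components stand in for the grouping step.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)

    detections = []
    for i in range(1, n):  # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        # Size gate: reject blobs inconsistent with the scaled raft model.
        if abs(area - expected_area_px) <= tolerance * expected_area_px:
            detections.append(tuple(centroids[i]))
    return detections
```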

Figure 2: Color-based life boat detection algorithm: Original image (left), after color filter for target (middle), result of grouping (right).

The real-world coordinates of the detected target are calculated from the location and orientation of the sensor at the time the image was captured. Instead of specifying the target location as a single point, a likelihood map is created that can account for detection errors and sensor inaccuracies. Based on the PDF generated from it, a flight path is generated to find an efficient path for the search.

Ground-based real-time image processing for target search using the ScanEagle presents several challenges:

- Video quality: The wireless video downlink degrades video quality and produces various types of noise affecting color processing. At the flight altitude, the ScanEagle video resolution also makes shape-based detection difficult.
- Time synchronization: There exists an unknown time difference between the video processing computer and the ScanEagle.
- Inaccurate sensor readings: The attitude reading given by ScanEagle telemetry is not accurate. The attitude estimate carries an error greater than several degrees, due to both sensor error and delay between different sensors. In addition, the ScanEagle telemetry only provides GPS-based altitude, which is less accurate than barometric altitude. The GPS-based altitude estimate can have errors as large as 100 meters.

We introduced new components to the image processing system to address these problems. The vision detection portion of the system uses improved color filters, and adds a layer of background detection. As an additional measure, sensor readings are incorporated to constrain the size of the projected target in the image.

To adjust for time synchronization issues, a timing verification tool is used to tune the offset and delay between the aircraft and ground computers. A virtual point is superimposed on the video image based on aircraft telemetry. When synchronization is correct, the virtual point stays fixed on its ground location even when the camera is in motion; with a synchronization offset, the point precedes or follows the camera motion. We compensate for the offset by manually manipulating the camera and observing the relative motion of this virtual point. This ensures the video images are correctly paired with telemetry data.

Upon detecting a target, the vision components return an estimate of the target's location, calculated from the aircraft's telemetry data. When significant errors are present in this data, the localization of the target will not be accurate.
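The virtual-point tool and the localization step both rest on projecting a known ground coordinate into the image using the reported aircraft and gimbal pose. A minimal pinhole-projection sketch follows; the flat-ground assumption, the local east-north-up frame, the rotation convention, and all names are illustrative simplifications of the actual geometry:

```python
import numpy as np

def project_ground_point(p_enu, cam_pos_enu, yaw, pitch, roll, f_px, cx, cy):
    """Project a ground point (east-north-up, meters) into pixel coordinates.

    yaw/pitch/roll: combined aircraft+gimbal orientation of the camera (radians).
    f_px: focal length in pixels; (cx, cy): principal point.
    A flat-earth, pinhole-camera simplification for illustration only.
    """
    # Rotation from the world (ENU) frame into the camera frame
    # (Z axis forward along the boresight), using Z-Y-X Euler angles.
    cy_, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy_, -sy, 0], [sy, cy_, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    R_world_to_cam = (Rz @ Ry @ Rx).T

    v = R_world_to_cam @ (np.asarray(p_enu) - np.asarray(cam_pos_enu))
    if v[2] <= 0:
        return None  # point is behind the camera
    u = cx + f_px * v[0] / v[2]
    w = cy + f_px * v[1] / v[2]
    return (u, w)
```

If the projected point drifts ahead of or behind the true ground feature as the gimbal slews, the telemetry and video clocks are misaligned, which is exactly the cue the timing verification tool uses.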

The location estimate is fused with the PDF based on the likelihood of that observation. The likelihood increases the probability of the target being in that location, while simultaneously decreasing the probability that the target is anywhere else. When error is present in the localization estimate, the likelihood functions place probability in the incorrect locations, as well as eliminating probability that may have been in the correct location. To reduce the effects of sensor error, the likelihoods returned from the vision detection are broader than in our on-board systems and represent greater uncertainty in the location of the target. The overlap of multiple target detections sharpens the localization of the target without eliminating previous detection results. Multiple detections of the same target produce more accurate localization of the target.

Probabilistic Search

The purpose of a search is to find an object referred to as the target. When a search begins, the location of the target is unknown. Utilizing prior information about the last known location of the target is a key step in a successful search. Given an approximate location within some search space at some initial time, a distribution of the target's probable location, called the prior, can be constructed. For example, in Search and Rescue, the last known position of a ship may be given. A conservative distribution of the probable target location would assume the worst-case propagation from the last known location (e.g., model the probable target location as a Gaussian). Consequently, a search is initialized with a highly uncertain estimate of the target location. The goal of the search is then to decrease this uncertainty to the point that the target can be considered localized (i.e., found).

The uncertainty in the target location distribution is decreased by making observations within the search space. An attribute of large-scale probabilistic searches is that each observation covers a relatively small area compared to the entire search area, so most observations will report that the target was not detected. Consequently there are two types of observations: target detected and target not detected. Each type of observation is modeled according to an observation likelihood function p(y | x), where y is the observation (target detected or target not detected) and x is the candidate target location. Figure 3 shows sample observation likelihood functions for both types of observation. The likelihood function is used to update the target's distribution of probable locations as

    p(x | y) = p(y | x) p(x) / ∫ p(y | x') p(x') dx'

which is known as Bayes' Rule. Figure 4 is a representation of the distribution of probable target locations after a sequence of no-detection observations has been made. Note that the distribution is represented as a contour plot, with red being the highest level of probability and blue being the lowest. The data shown in Figure 4 and Figure 5 is from flight experiments performed on the ScanEagle at Camp Roberts. The distribution is used to direct the effort of the search to highly probable target locations. How the aircraft is guided through the search space is determined by a path planner.
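A discretized version of this update is straightforward. The sketch below maintains the PDF on a grid and applies Bayes' Rule for a no-detection observation over a sensor footprint; the detection probability value is a hypothetical placeholder:

```python
import numpy as np

def bayes_update_no_detection(pdf, footprint_mask, p_detect=0.8):
    """Update a gridded target PDF after a no-detection observation.

    pdf: 2-D array summing to 1 (per-cell target probabilities).
    footprint_mask: boolean array, True where the sensor observed.
    p_detect: probability the sensor detects the target when looking
              straight at it (0.8 is an illustrative value).
    """
    likelihood = np.ones_like(pdf)
    likelihood[footprint_mask] = 1.0 - p_detect   # p(no detection | x)
    posterior = likelihood * pdf                   # Bayes numerator
    return posterior / posterior.sum()             # normalize

# Example: uniform prior over a 100x100 grid, one observed patch.
pdf = np.full((100, 100), 1e-4)
mask = np.zeros((100, 100), dtype=bool)
mask[40:60, 40:60] = True
pdf = bayes_update_no_detection(pdf, mask)  # mass shifts away from the patch
```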

Figure 3: Observation likelihood functions for target not detected (left) and target detected (right).

Figure 4: Distribution of probable target locations before a target has been detected.

Figure 5: Distribution of probable target locations after the target has been detected.

Path Planning and Control

The purpose of the path planner is to find an optimal path for the aircraft and sensor. Ideally, a path would be chosen whose estimated reduction in entropy is greater than that of any other path, while accounting for the dynamics of the aircraft so that the path is feasible to follow. However, it is not feasible to determine an optimal path for an infinite time horizon in real time. This leads to minimizing the cumulative predicted entropy over a path of finite length. The predicted entropy of a distribution p_t at some time t into the future is defined as

    H(p_t) = -∫ p_t(x) log p_t(x) dx

The exact cumulative entropy cannot be minimized in real time. As such, a suitable alternative objective must be minimized such that the path produced approximates the entropy-optimal path. The objective we have considered is the probability of not detecting the target at any step in the path. Minimizing this is equivalent to maximizing the probability swept out by the path. For a path of N sensor positions s_1, ..., s_N, the probability of not detecting the target over the entire path is

    P(no detection) = ∫ [ ∏_{k=1}^{N} p(ȳ | x, s_k) ] p(x) dx

where ȳ denotes a no-detection observation.
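A greedy discretization of this objective illustrates the idea. The sketch below enumerates a few candidate fixed-length headings and picks the one sweeping the most probability; the candidate generation and footprint model are deliberate simplifications, not the C3UV planner:

```python
import numpy as np

def swept_probability(pdf, cells):
    """Probability mass covered by a sequence of observed grid cells."""
    return sum(pdf[r, c] for r, c in set(cells))

def plan_greedy(pdf, start, headings, horizon=10):
    """Choose the straight-line heading whose swept cells maximize detection
    probability (equivalently, minimize the no-detection probability).
    A toy stand-in for the finite-horizon planner described above."""
    best, best_mass = None, -1.0
    rows, cols = pdf.shape
    for dr, dc in headings:
        cells = []
        r, c = start
        for _ in range(horizon):
            r, c = r + dr, c + dc
            if 0 <= r < rows and 0 <= c < cols:  # stay inside the search space
                cells.append((r, c))
        mass = swept_probability(pdf, cells)
        if mass > best_mass:
            best, best_mass = cells, mass
    return best

# Example: eight compass headings over a toy random PDF.
pdf = np.random.dirichlet(np.ones(400)).reshape(20, 20)
path = plan_greedy(pdf, start=(10, 10),
                   headings=[(-1, 0), (1, 0), (0, -1), (0, 1),
                             (-1, -1), (-1, 1), (1, -1), (1, 1)])
```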

One side effect of using a finite-horizon path is that special cases arise. The ScanEagle is a fixed-wing aircraft, and as such it has a maximum bank angle, resulting in a minimum orbit radius. Consider the scenario in which the point of interest is to the left of the aircraft. If the path length is not long enough for the aircraft to perform a full turn-around, then the path planner will command the aircraft to make a hard left turn. If the point of interest is within the minimum orbit radius of the aircraft, the aircraft will end up in an orbit around the point, unable to fly over it. Let this area be termed the unreachable area. In reality this unreachable area is in fact reachable, but not with a constant bank angle; with a short path planning horizon, points within this area are essentially unreachable. However, points close to the plane can still be imaged using the gimbaled camera. Figure 6 shows an approximation of the area. It is necessary to account for these special cases and prevent them from occurring.

Figure 6: Unreachable set.

For our application, the point of interest is the distribution peak, the best estimate of the target location. To account for the peak, the path planner not only minimizes the probability of not detecting the target over the path, but also guarantees that the peak will always be outside of the unreachable area (thus ensuring that the aircraft will never enter an infinite orbit around the peak). In typical flight conditions the planner will determine a finite-horizon path through the probability distribution while heading roughly toward the peak. The planner guarantees that the aircraft will head roughly toward the peak by checking whether the peak is within a cone about the heading of the aircraft. If the peak happens to leave this heading cone, then the planner will initiate a hard turn toward the peak. The path planning algorithm is shown in Figure 7. Notice that there are additional cases that the path planner checks. These cases are:

1. The entropy of the distribution is low enough to consider the target localized.
2. The aircraft will head outside of the search space unless it makes a hard turn.
3. The peak is unreachable and the aircraft is heading outside of the search space.

For case 1, if the target can be considered localized, then the path of the aircraft should be directed to orbit the target so that it can be kept in sight; this is a "tracking" path. The target is considered localized when the entropy of the estimate distribution drastically decreases. Figure 8 shows this decrease for one of the experiments.

For case 2, most real searches will involve airspace restrictions. These restrictions must be accounted for and checked by the path planner. In order to guarantee that the aircraft will not cross the airspace boundary, the planner must check whether part of the unreachable area crosses the airspace boundary. Figure 9 represents how the unreachable area is related to the aircraft potentially crossing the airspace boundary: the unreachable areas to the left and right of the aircraft are shown, with the left unreachable area crossing the airspace boundary. In this situation, the aircraft is commanded to make a hard left turn toward the peak. This is what is meant by "head to PDF peak".

Case 3 is a combination of the peak being unreachable and the aircraft heading outside of the search space. In this case the aircraft cannot simply make a hard turn toward the peak, because the peak will be unreachable. This is what is meant by "unreachable turn-around path". In normal conditions, if the peak is unreachable, then the control is simply relaxed, allowing the aircraft to travel forward and remove itself from the neighborhood of the peak. This is what is meant by "straight forward path".

Figure 7: High level view of path planning algorithm.
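The case structure of Figure 7 can be expressed compactly, together with the standard coordinated-turn radius R = v^2 / (g tan(bank)) that sizes the unreachable area. In this sketch the geometric checks are precomputed booleans supplied by the caller, and all thresholds are hypothetical:

```python
import math

G = 9.81  # m/s^2

def min_orbit_radius(speed_mps, max_bank_rad):
    """Coordinated-turn radius at the maximum bank angle; this bounds
    the unreachable area on either side of the aircraft."""
    return speed_mps ** 2 / (G * math.tan(max_bank_rad))

def choose_path(entropy, exiting_search_space, peak_unreachable, peak_in_cone,
                localized_entropy=1.0):
    """Mode selection mirroring the cases above; the boolean inputs are the
    geometric checks of Figure 7, evaluated elsewhere."""
    if entropy < localized_entropy:
        return "tracking_orbit"               # case 1: orbit the localized target
    if exiting_search_space:
        if peak_unreachable:
            return "unreachable_turn_around"  # case 3
        return "hard_turn_to_peak"            # case 2: head to PDF peak
    if peak_unreachable:
        return "straight_forward"             # relax control, leave the peak
    if not peak_in_cone:
        return "hard_turn_to_peak"            # re-point toward the peak
    return "finite_horizon_search"            # normal operation

# Example: a 25 m/s aircraft limited to a 30-degree bank can orbit no
# tighter than roughly 110 m.
print(round(min_orbit_radius(25.0, math.radians(30))))
print(choose_path(entropy=3.2, exiting_search_space=False,
                  peak_unreachable=False, peak_in_cone=True))
```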

Figure 8: A drop in target estimate standard deviation (squared-error sense) due to target detection.

Figure 9: Detection of airspace boundary crossing.

Cursor-on-Target Interface

The ScanEagle provides a subset of the Cursor-on-Target (CoT) interface to allow automated control and telemetry exchange. To implement the CoT standard, a new component was added to the system to handle the interface to the ScanEagle. The interface component receives a desired sensor path from the path planning component, and turns this sensor path into a series of waypoints for the aircraft and a target for the camera. The data is sent over IP to the ScanEagle ground station via CoT. This component also receives telemetry data from the ground station, parses and translates the data, and makes it available to the other components in the system.
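CoT messages are small XML documents delivered over IP. The sketch below builds and sends a schematic CoT event carrying a commanded point; the uid, event type string, address, and port are placeholders, and the element set is a simplified subset of the general CoT schema rather than the exact messages used in this experiment:

```python
import socket
from datetime import datetime, timedelta, timezone
from xml.etree.ElementTree import Element, SubElement, tostring

def make_cot_point(lat, lon, hae_m, uid="waypoint-1", event_type="b-m-p-w"):
    """Build a minimal CoT event XML for a commanded point.
    Field values are illustrative, not the exact ScanEagle dialect."""
    now = datetime.now(timezone.utc)
    stamp = lambda t: t.strftime("%Y-%m-%dT%H:%M:%SZ")
    event = Element("event", version="2.0", uid=uid, type=event_type,
                    time=stamp(now), start=stamp(now),
                    stale=stamp(now + timedelta(minutes=2)), how="m-g")
    SubElement(event, "point", lat=str(lat), lon=str(lon),
               hae=str(hae_m), ce="10", le="10")
    return tostring(event)

def send_cot(payload, host="192.0.2.10", port=4242):
    """Send the event to the ground station over UDP (address is a placeholder)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))

send_cot(make_cot_point(35.72, -120.77, 100.0))  # illustrative coordinates
```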

Simulation Environment

Due to the high cost of flight testing, extensive simulations are run to maximize the likelihood of success and minimize required flight time. To provide the most accurate results, simulations are kept as close to a real scenario as possible. The system was run on the same computer used for flight tests, with as much of the real system in the loop as possible. The vision detection portions were simulated, as actual video data is not available indoors. Software from the manufacturer of the ScanEagle simulated the flight characteristics of the aircraft, as well as the CoT interface. Running the system with the simulated aircraft provided the most realistic testing environment that could be achieved without access to an aircraft. Each component of the system was tested for errors and was tuned to increase performance for the ScanEagle.

Flight Experiments

Experiment Setup

Flight tests for this project were conducted at McMillan Airfield at the Camp Roberts National Guard base in California. The maintenance, launch, and recovery of the aircraft were performed by EUS. As stated previously, the system developed here communicated with the ScanEagle ground station through the CoT interface over IP. All software for this system resided on a standard off-the-shelf desktop computer, connected through a single Ethernet cable to the ScanEagle ground station. The only additional hardware was a USB frame grabber used to digitize the video downlink from the aircraft.

The search area was defined by a two-square-kilometer rectangle approximately centered about the west end of the runway. The target was an orange inflatable kayak approximately ten feet in length, located toward the west end of the runway. Numerous locations were tested, including brush and dirt, but ultimately the dark runway best simulated the look of the arctic sea. The aircraft flew at a height of approximately 100 m AGL. Video data was transmitted to the system through a wireless radio and USB frame grabber. Control commands and aircraft telemetry were communicated through the CoT interface.

Figure 10: Search space above Camp Roberts, CA.

Day 1 (December 8th, 2009)

The primary goal of the first flight day was to establish proper communication with the aircraft, understand the movement of the gimbal, and gather images of the life raft from the aircraft. Simulation testing had established a working communication link with the ScanEagle system, but some time was required to verify and adjust gimbal commands that were not testable in simulation. Several experiment trials were performed without the use of vision techniques. The purpose of these tests was to understand the behavior of the aircraft and gimbal system. The path planning algorithms and vision control were updated and tuned to compensate for the behavior of the ScanEagle command center.

At multiple times throughout the day, test data was gathered of the life raft. The aircraft flew manually set flight plans while the camera was focused on the target. These tests were performed at the same altitude and zoom level as would be used during detection scenarios. Data was gathered throughout the day in order to have training data in various lighting conditions.

The results of the first day of testing proved positive, verifying all underlying mechanisms of the system. The C3UV system was able to control the aircraft and gimbal, receive and process vision data, and perform target estimation and path planning. The test data gathered that day provided the basis for the life raft classifier used throughout the experiment.

Day 2 (December 9th, 2009)

Having established that all system components were functioning, the second set of flight experiments aimed to test the vision algorithms developed from the test data. Using video data collected on Dec. 8th, a computer vision classifier was developed to detect the orange kayak provided for this experiment. The real-time detection performed very well, consistently identifying the target. Some false-positive detections were seen due to features of the terrain, such as shrubs and ground effects. Under the assumed conditions of an arctic search and rescue, these features would not be present and would not contribute to the false detection rate. The way in which the PDF stores and updates information protects against false detections corrupting the system. The system will attempt to revisit a detection to verify the accuracy and location of that detection. Generally, false positives are not consistently generated by these ground anomalies and will not be re-detected when the system images the area again. The PDF update will then remove that target likelihood and restore the relative probability mass from before the false detection.

Having established accurate vision detection, the entire system was tested using a search and rescue scenario. The system performed well, searching the space, updating the PDF, and detecting the target. The localization of the target proved to be less accurate than desired, due to several issues inherent in the ScanEagle system. The main source of error was the clock skew between the aircraft's system clock and that of the C3UV ground-based system. In order to localize a target, the video frame containing the target detection is compared to telemetry information describing the location and orientation of the aircraft and sensor at the time the image was taken. The ground location of the video frame can then be extrapolated, giving the ground location of the target. Because the telemetry data is time-stamped using the aircraft clock, and the video data is linked to the ground clock, an offset between the clocks will result in video frames being matched with incorrect telemetry data. The target detection is thereby localized using the incorrect set of data, and the location becomes inaccurate. These errors vary based on the behavior of the aircraft at the time of detection and therefore do not result in consistent localization errors. Therefore, when the system attempts to verify the location of the target, the miscalculated target location will fail to produce an additional detection, and a new location may be found if the system detects the true target with new system error. Additional inaccuracies in the aircraft's gimbal angle readings add further error to the localization process. The results of this testing indicated a need for additional tools and techniques to compensate for the timing and measurement-angle errors present in the system.

Day 3 (December 10th, 2009)

While no flight tests were performed on this day, the issues of clock offset and data delays were investigated. Of principal interest was whether these timing issues were deterministic and consistent. A test aircraft was used for ground-based tests. The gimbal was held at a fixed angle to establish a steady-state point. By commanding a quick change in the gimbal position, an event was created that would appear in both the video and data streams. The event was recorded in both data sets using the local clock. By locating these events in both data sets, the relative time of each could be compared to determine the offset between the two clock domains. Several sets of data were gathered, consisting of 4 or 5 events each and covering two different aircraft. An analysis of the data indicated that the offset remained fairly consistent over time, but that transmission delays, coupled with idiosyncrasies in the data rate from the aircraft, introduced jitter on the order of several hundred milliseconds.

This conclusion suggested that the offset could be accounted for, but precise synchronization would not be possible in the short term. The clocks could be compensated to account for the offset, while the jitter could be dealt with by modifying the likelihood functions returned from the vision detection component. A broader detection likelihood expresses greater uncertainty in the location of the detection and produces less defined peaks in the PDF. With broader peaks, a previous detection will not be eliminated if the same location fails to produce a new detection during the next visit. Multiple broad detections in the same general area (limited in size by the errors in the system) overlap to create a more accurate prediction of the target location.

Figure 11: Timing measurements show that the timing error remains fairly consistent over time. Tests 1 and 2 were conducted on a different aircraft than test 3, showing variance from one airframe to another.

Figure 12: Timing measurements, centered about the average delay, indicate clock jitter of several hundred milliseconds.
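The event-matching procedure reduces to comparing timestamps of the same physical event in the two clock domains. A minimal sketch, assuming the matched event times have already been extracted from the video and telemetry logs (the example timestamps are made up):

```python
import statistics

def estimate_clock_offset(video_times, telemetry_times):
    """Estimate the offset between ground (video) and aircraft (telemetry)
    clocks from paired timestamps of the same gimbal-step events.

    Returns (offset, jitter): the median per-event difference and its spread.
    The median resists outliers from mis-matched events.
    """
    diffs = [v - t for v, t in zip(video_times, telemetry_times)]
    return statistics.median(diffs), statistics.pstdev(diffs)

# Example with fabricated event timestamps (seconds):
# median offset 1.15 s, jitter roughly 90 ms.
video = [10.00, 25.40, 41.10, 58.90, 73.30]
telemetry = [8.85, 24.10, 39.95, 57.55, 72.15]
print(estimate_clock_offset(video, telemetry))
```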

Day 4 (December 14th, 2009)

Having established the functionality of the system, the goal of these flights was to increase the performance of the localization process by compensating for the timing issues discovered in previous tests. A new technique was implemented to measure and compensate for the timing offset. Using the telemetry data, a fixed GPS point was projected onto the video stream. As the aircraft and gimbal move, the projection of the point within the image moves accordingly. These movements also result in changes in the video data, and correlating the two provides a convenient way to adjust and verify the offset. The process produced reasonable results, but had to be performed several times throughout the day due to drift in the clocks.

After adjusting for the timing offset and modifying the likelihood functions, the performance of the target localization increased dramatically and provided consistent results. Detections were localized accurately enough to reinforce previous detections and provide sufficient probability of target presence and location. Several tests were performed of the full search and rescue scenario. The aircraft was tasked to search for the target within the specified area, and it consistently detected and located that target with enough accuracy to allow a rescue team to reach the location.

Conclusions and Analysis

The data fusion algorithm performed very well and demonstrated robustness in the presence of localization errors. Throughout the final flight testing week, the precision in the localization of the search target improved. We believe it finally reached the best accuracy possible with the ScanEagle system, i.e., we could search for and localize the target with an error of less than 40 meters. The GPS location of the target was recorded before the experiments so as to properly evaluate the accuracy of the target location estimated by the system.

The process iterated between 1) flight testing the target estimation and path planning algorithms, analyzing the behavior of the aircraft, and determining how the algorithms should be modified; and 2) implementing those modifications into a new version of the algorithms. After each set of flights, the error in the estimated target location was calculated and the performance of the path planning algorithm analyzed. Figure 13 shows the evolution of the target estimation algorithm's accuracy as it was enhanced throughout the experiments. In Figure 13, each point is an average of several target detections made over several experiments for a given algorithm version. The maximum and minimum detection errors are represented by the error bars. Figure 13 shows that the error in the target estimate for the last version of the algorithm was reduced to approximately 40 meters.

Figure 14 shows the path of the aircraft during one of the experiments flying the final version of the path planning algorithm. In Figure 14 you can see that the aircraft heads directly toward the areas of highest probability of finding the target. In its first pass through the area, the target is not detected. The aircraft then returns to observe the new highest-probability area and detects the target. Finally, the aircraft comes back to track the target. The accuracy of the target position estimate was ultimately dependent on 1) the communication delay of the camera orientation and aircraft state packets (received at less than 5 Hz), as well as the synchronization of video time with ScanEagle autopilot time and the calibration of the camera; and 2) false detections.

The use of a gimbaled sensor instead of a fixed sensor provides increased search coverage and greater planning flexibility, but the movement of the sensor introduced additional errors in the measured camera orientation. The primary source of error is the low frequency of aircraft and camera position and orientation data from the ground station. Due to the low frequency of data packets, the error in camera orientation could be as high as 10 degrees; this error is determined by the speed of the gimbal and the frequency of the telemetry packets. When applied to the projection of the image, this orientation error leads to significant localization errors. Consequently, the accuracy of the target localization estimates is degraded.

Figure 13: Improvement of error in the target location estimate over successive flight experiments.
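The scale of these numbers can be checked with a small worked example: at a 5 Hz telemetry rate (0.2 s between packets), a gimbal slewing at 50 deg/s (an assumed rate for illustration; the report does not state one) can move 10 degrees between reports, and from 100 m altitude that pointing error displaces the projected ground point by tens of meters:

```python
import math

telemetry_dt = 1 / 5.0   # 5 Hz packet rate -> 0.2 s between updates
gimbal_rate = 50.0       # deg/s slew rate (illustrative assumption)
altitude = 100.0         # m AGL, as flown in the experiments

angle_err = gimbal_rate * telemetry_dt  # orientation can be 10 deg stale
# Ground displacement for a near-nadir view: roughly altitude * tan(error).
ground_err = altitude * math.tan(math.radians(angle_err))
print(f"{angle_err:.0f} deg -> ~{ground_err:.0f} m ground error")  # ~18 m
```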

Figure 14: Path of the aircraft for the final version of the algorithm.

Future Work

We find the ScanEagle a promising platform for autonomous Search and Rescue. To realize this promise we have identified a few areas for a phase 2 of this project. These are listed below.

Onboard Computing

For the purposes of this experiment, all detection, planning, and control was performed on a ground-based computer. Transitioning this computing to the aircraft would provide faster control feedback and aid in the resolution of clock synchronization issues. Low-latency access to the aircraft's clock would allow synchronization of the sensor and telemetry data for greater accuracy in localization. Furthermore, direct access to the autopilot, or different autopilot hardware, could eliminate many of the limitations imposed by CoT. This would allow greater control over the aircraft and sensor, minimize errors in aircraft data, and lead to large improvements in performance. The use of onboard computing would also eliminate the need for a wireless video link, resulting in a clean video image and allowing for the use of higher-resolution cameras.

Multiple Aircraft

The focus of much of C3UV's work is the coordination and management of multiple aircraft. The technologies deployed for these experiments were originally developed for scenarios requiring multiple autonomous agents. Distributed Data Fusion allows multiple aircraft to participate in the same search by sharing likelihood functions. Each aircraft maintains its own PDF, but becomes aware of the observations made by other agents from likelihood functions broadcast by each plane. In this way, an area can be searched more efficiently, as multiple sensors aid in the same search. Because aircraft do not rely on the information of other agents, the system is robust to failure, communication issues, or the loss of an aircraft.

Search and Rescue with a Multi-Sensor Payload

The aim here is to enhance the ocean search and localization module by fusing SAR and infrared sensors for robust detection in actual ocean operating environments and higher-altitude flight. The C3UV control and sensor fusion algorithms could be transitioned to multi-sensor payloads such as those developed by the University of Alaska. The multi-sensor autonomy would drive the Cursor-on-Target implementation and deliver more precise gimbal control.

Likelihood Functions and Multi-Target Distributions

The data fusion work used for this experiment can be expanded to include several new features. One addition would modify the likelihood functions to account for the error in gimbal orientation measurement. The certainty of the likelihoods should adapt in real time based on the sequence of gimbal orientation measurements that have recently been received. If the orientation hasn't changed much over a couple of time steps, then the likelihood should be fairly certain. However, if there is significant difference in recent orientation measurements, suggesting greater gimbal movement, then the likelihood should have less certainty.
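This adaptive-certainty idea can be sketched directly: widen the detection likelihood when recent gimbal readings disagree. The scaling constants and the use of a Gaussian likelihood are hypothetical choices for illustration:

```python
import math

def adaptive_likelihood_sigma(recent_angles_deg, base_sigma_m=20.0,
                              gain_m_per_deg=5.0):
    """Broaden the detection likelihood when the gimbal has been moving.

    recent_angles_deg: the last few gimbal orientation readings (degrees).
    Returns a standard deviation (meters) for a Gaussian likelihood;
    base_sigma_m and gain_m_per_deg are illustrative tuning constants.
    """
    mean = sum(recent_angles_deg) / len(recent_angles_deg)
    spread = math.sqrt(sum((a - mean) ** 2 for a in recent_angles_deg)
                       / len(recent_angles_deg))
    return base_sigma_m + gain_m_per_deg * spread

print(adaptive_likelihood_sigma([30.1, 30.0, 30.2]))  # steady gimbal -> ~20 m
print(adaptive_likelihood_sigma([25.0, 31.0, 38.0]))  # slewing -> broader
```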

Another area of development involves extending the data fusion to generalize the probability distribution to multiple targets. This extension would make preliminary localization of targets faster, and the overall distribution would be less affected by false target detections. Additionally, it would allow searching for a potentially unspecified number of targets. This extension would require modification of the current distribution estimation algorithm and would restrict the likelihood functions' coverage to be within the camera image.

Path Planning Algorithm Improvement

The final path planning algorithm performed very robustly; however, the algorithm could be improved in a couple of ways. One approach would add multiple high-level behavioral modes performing explore, search, and track functions. In explore mode, the path would be optimized for searching less-observed areas. In search mode, the path would be optimized for observing areas with a high probability of detection. In track mode, the path would be optimized to keep the most probable area in view as much as possible. These modes could even be dynamically switched among a team of aircraft if multiple aircraft were used for the search.

Vision-Based Object Detection

The video feed used for this work was transmitted from the aircraft through an analog video transmitter, which introduced various types of noise, such as color distortion, degrading the contrast of the life raft. Additionally, the pixel color of an object in each image varies greatly under different illumination conditions and varying background colors. In many cases, the color of the life raft is not sufficiently distinctive for vision detection to be effective based entirely on color filters. In the original image in Figure 2, the color of the raft is very similar to the surrounding earth. Therefore, although color is a good cue for fast prototyping and is useful in general as a supplementary cue, it should not be the main cue for object detection. For future work, advanced methods using wavelets or histograms of oriented gradients (HOG) should be used for shape-based object recognition.
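As one illustration of the HOG direction, the sketch below uses scikit-image's hog function to compare a candidate window against a raft template via descriptor similarity; the window size, HOG parameters, and similarity threshold are hypothetical, and random arrays stand in for real imagery:

```python
import numpy as np
from skimage.feature import hog

def hog_descriptor(gray_window):
    """HOG descriptor of a grayscale image window (64x64 assumed here)."""
    return hog(gray_window, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

def matches_template(window, template_desc, threshold=0.7):
    """Cosine similarity between window and template descriptors.
    The 0.7 threshold is an illustrative placeholder."""
    d = hog_descriptor(window)
    sim = np.dot(d, template_desc) / (
        np.linalg.norm(d) * np.linalg.norm(template_desc) + 1e-9)
    return sim >= threshold

# Example with random data standing in for real imagery.
template = hog_descriptor(np.random.rand(64, 64))
print(matches_template(np.random.rand(64, 64), template))
```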


More information

Image restoration. Restoration: Enhancement:

Image restoration. Restoration: Enhancement: Image restoration Most images obtained by optical, electronic, or electro-optic means is likely to be degraded. The degradation can be due to camera misfocus, relative motion between camera and object,

More information

Using surface markings to enhance accuracy and stability of object perception in graphic displays

Using surface markings to enhance accuracy and stability of object perception in graphic displays Using surface markings to enhance accuracy and stability of object perception in graphic displays Roger A. Browse a,b, James C. Rodger a, and Robert A. Adderley a a Department of Computing and Information

More information

3D Fusion of Infrared Images with Dense RGB Reconstruction from Multiple Views - with Application to Fire-fighting Robots

3D Fusion of Infrared Images with Dense RGB Reconstruction from Multiple Views - with Application to Fire-fighting Robots 3D Fusion of Infrared Images with Dense RGB Reconstruction from Multiple Views - with Application to Fire-fighting Robots Yuncong Chen 1 and Will Warren 2 1 Department of Computer Science and Engineering,

More information

A New Protocol of CSI For The Royal Canadian Mounted Police

A New Protocol of CSI For The Royal Canadian Mounted Police A New Protocol of CSI For The Royal Canadian Mounted Police I. Introduction The Royal Canadian Mounted Police started using Unmanned Aerial Vehicles to help them with their work on collision and crime

More information

UAV Autonomous Navigation in a GPS-limited Urban Environment

UAV Autonomous Navigation in a GPS-limited Urban Environment UAV Autonomous Navigation in a GPS-limited Urban Environment Yoko Watanabe DCSD/CDIN JSO-Aerial Robotics 2014/10/02-03 Introduction 2 Global objective Development of a UAV onboard system to maintain flight

More information

Stable Vision-Aided Navigation for Large-Area Augmented Reality

Stable Vision-Aided Navigation for Large-Area Augmented Reality Stable Vision-Aided Navigation for Large-Area Augmented Reality Taragay Oskiper, Han-Pang Chiu, Zhiwei Zhu Supun Samarasekera, Rakesh Teddy Kumar Vision and Robotics Laboratory SRI-International Sarnoff,

More information

DYNAMIC POSITIONING CONFERENCE September 16-17, Sensors

DYNAMIC POSITIONING CONFERENCE September 16-17, Sensors DYNAMIC POSITIONING CONFERENCE September 16-17, 2003 Sensors An Integrated acoustic positioning and inertial navigation system Jan Erik Faugstadmo, Hans Petter Jacobsen Kongsberg Simrad, Norway Revisions

More information

Multi-Camera Calibration, Object Tracking and Query Generation

Multi-Camera Calibration, Object Tracking and Query Generation MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com Multi-Camera Calibration, Object Tracking and Query Generation Porikli, F.; Divakaran, A. TR2003-100 August 2003 Abstract An automatic object

More information

Trimble Engineering & Construction Group, 5475 Kellenburger Road, Dayton, OH , USA

Trimble Engineering & Construction Group, 5475 Kellenburger Road, Dayton, OH , USA Trimble VISION Ken Joyce Martin Koehler Michael Vogel Trimble Engineering and Construction Group Westminster, Colorado, USA April 2012 Trimble Engineering & Construction Group, 5475 Kellenburger Road,

More information

Calibration of IRS-1C PAN-camera

Calibration of IRS-1C PAN-camera Calibration of IRS-1C PAN-camera Karsten Jacobsen Institute for Photogrammetry and Engineering Surveys University of Hannover Germany Tel 0049 511 762 2485 Fax -2483 Email karsten@ipi.uni-hannover.de 1.

More information

Motion Estimation for Video Coding Standards

Motion Estimation for Video Coding Standards Motion Estimation for Video Coding Standards Prof. Ja-Ling Wu Department of Computer Science and Information Engineering National Taiwan University Introduction of Motion Estimation The goal of video compression

More information

EVOLUTION OF POINT CLOUD

EVOLUTION OF POINT CLOUD Figure 1: Left and right images of a stereo pair and the disparity map (right) showing the differences of each pixel in the right and left image. (source: https://stackoverflow.com/questions/17607312/difference-between-disparity-map-and-disparity-image-in-stereo-matching)

More information

Gaussian Process-based Visual Servoing Framework for an Aerial Parallel Manipulator

Gaussian Process-based Visual Servoing Framework for an Aerial Parallel Manipulator Gaussian Process-based Visual Servoing Framework for an Aerial Parallel Manipulator Sungwook Cho, David Hyunchul Shim and Jinwhan Kim Korea Advanced Institute of Science and echnology (KAIS), Daejeon,

More information

ENY-C2005 Geoinformation in Environmental Modeling Lecture 4b: Laser scanning

ENY-C2005 Geoinformation in Environmental Modeling Lecture 4b: Laser scanning 1 ENY-C2005 Geoinformation in Environmental Modeling Lecture 4b: Laser scanning Petri Rönnholm Aalto University 2 Learning objectives To recognize applications of laser scanning To understand principles

More information

4. Describe the correlation shown by the scatter plot. 8. Find the distance between the lines with the equations and.

4. Describe the correlation shown by the scatter plot. 8. Find the distance between the lines with the equations and. Integrated Math III Summer Review Packet DUE THE FIRST DAY OF SCHOOL The problems in this packet are designed to help you review topics from previous mathematics courses that are essential to your success

More information

Robert Collins CSE598G. Intro to Template Matching and the Lucas-Kanade Method

Robert Collins CSE598G. Intro to Template Matching and the Lucas-Kanade Method Intro to Template Matching and the Lucas-Kanade Method Appearance-Based Tracking current frame + previous location likelihood over object location current location appearance model (e.g. image template,

More information

Simple and Robust Tracking of Hands and Objects for Video-based Multimedia Production

Simple and Robust Tracking of Hands and Objects for Video-based Multimedia Production Simple and Robust Tracking of Hands and Objects for Video-based Multimedia Production Masatsugu ITOH Motoyuki OZEKI Yuichi NAKAMURA Yuichi OHTA Institute of Engineering Mechanics and Systems University

More information

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS Cognitive Robotics Original: David G. Lowe, 004 Summary: Coen van Leeuwen, s1460919 Abstract: This article presents a method to extract

More information

Requirements and Execution Plan

Requirements and Execution Plan Requirements and Execution Plan David E Smith Mike Kasper Ryan Raub 2008/02/13 Rev 1.1 Page 2 Table of Contents Introduction..3 Problem Statement... 3 Solution Statement... 3 Functional Specification..

More information

Revising Stereo Vision Maps in Particle Filter Based SLAM using Localisation Confidence and Sample History

Revising Stereo Vision Maps in Particle Filter Based SLAM using Localisation Confidence and Sample History Revising Stereo Vision Maps in Particle Filter Based SLAM using Localisation Confidence and Sample History Simon Thompson and Satoshi Kagami Digital Human Research Center National Institute of Advanced

More information

Segmentation and Tracking of Partial Planar Templates

Segmentation and Tracking of Partial Planar Templates Segmentation and Tracking of Partial Planar Templates Abdelsalam Masoud William Hoff Colorado School of Mines Colorado School of Mines Golden, CO 800 Golden, CO 800 amasoud@mines.edu whoff@mines.edu Abstract

More information

SIGHTLINE PRODUCT OVERVIEW. June 2016

SIGHTLINE PRODUCT OVERVIEW. June 2016 SIGHTLINE PRODUCT OVERVIEW June 2016 SightLine Applications provides onboard video processing to integrators. Make your system AMAZING! 2 APPLICATIONS Advanced Video Processing for a Wide Range of End

More information

Inertial Navigation Static Calibration

Inertial Navigation Static Calibration INTL JOURNAL OF ELECTRONICS AND TELECOMMUNICATIONS, 2018, VOL. 64, NO. 2, PP. 243 248 Manuscript received December 2, 2017; revised April, 2018. DOI: 10.24425/119518 Inertial Navigation Static Calibration

More information

Image Processing Fundamentals. Nicolas Vazquez Principal Software Engineer National Instruments

Image Processing Fundamentals. Nicolas Vazquez Principal Software Engineer National Instruments Image Processing Fundamentals Nicolas Vazquez Principal Software Engineer National Instruments Agenda Objectives and Motivations Enhancing Images Checking for Presence Locating Parts Measuring Features

More information

The intelligent mapping & inspection drone

The intelligent mapping & inspection drone The intelligent mapping & inspection drone 3 reasons to choose exom 1 flight, 3 types of imagery With exom you can switch between capturing video, still and thermal imagery during the same flight, without

More information

cse 252c Fall 2004 Project Report: A Model of Perpendicular Texture for Determining Surface Geometry

cse 252c Fall 2004 Project Report: A Model of Perpendicular Texture for Determining Surface Geometry cse 252c Fall 2004 Project Report: A Model of Perpendicular Texture for Determining Surface Geometry Steven Scher December 2, 2004 Steven Scher SteveScher@alumni.princeton.edu Abstract Three-dimensional

More information

VisionGauge OnLine Spec Sheet

VisionGauge OnLine Spec Sheet VisionGauge OnLine Spec Sheet VISIONx INC. www.visionxinc.com Powerful & Easy to Use Intuitive Interface VisionGauge OnLine is a powerful and easy-to-use machine vision software for automated in-process

More information

ALGORITHMS FOR DETECTING DISORDERS OF THE BLDC MOTOR WITH DIRECT CONTROL

ALGORITHMS FOR DETECTING DISORDERS OF THE BLDC MOTOR WITH DIRECT CONTROL Journal of KONES Powertrain and Transport, Vol. 23, No. 4 2016 ALGORITHMS FOR DETECTING DISORDERS OF THE BLDC MOTOR WITH DIRECT CONTROL Marcin Chodnicki, Przemysław Kordowski Mirosław Nowakowski, Grzegorz

More information

Vision-Based Navigation Solution for Autonomous Indoor Obstacle Avoidance Flight

Vision-Based Navigation Solution for Autonomous Indoor Obstacle Avoidance Flight Vision-Based Navigation Solution for Autonomous Indoor Obstacle Avoidance Flight Kirill E. Shilov 1, Vladimir V. Afanasyev 2 and Pavel A. Samsonov 3 1 Moscow Institute of Physics and Technology (MIPT),

More information

Attack Resilient State Estimation for Vehicular Systems

Attack Resilient State Estimation for Vehicular Systems December 15 th 2013. T-SET Final Report Attack Resilient State Estimation for Vehicular Systems Nicola Bezzo (nicbezzo@seas.upenn.edu) Prof. Insup Lee (lee@cis.upenn.edu) PRECISE Center University of Pennsylvania

More information

IRIS SEGMENTATION OF NON-IDEAL IMAGES

IRIS SEGMENTATION OF NON-IDEAL IMAGES IRIS SEGMENTATION OF NON-IDEAL IMAGES William S. Weld St. Lawrence University Computer Science Department Canton, NY 13617 Xiaojun Qi, Ph.D Utah State University Computer Science Department Logan, UT 84322

More information

CS4758: Moving Person Avoider

CS4758: Moving Person Avoider CS4758: Moving Person Avoider Yi Heng Lee, Sze Kiat Sim Abstract We attempt to have a quadrotor autonomously avoid people while moving through an indoor environment. Our algorithm for detecting people

More information

Autonomous Ground Vehicle (AGV) Project

Autonomous Ground Vehicle (AGV) Project utonomous Ground Vehicle (GV) Project Demetrus Rorie Computer Science Department Texas &M University College Station, TX 77843 dmrorie@mail.ecsu.edu BSTRCT The goal of this project is to construct an autonomous

More information

Aerial Visual Intelligence for GIS

Aerial Visual Intelligence for GIS Aerial Visual Intelligence for GIS Devon Humphrey Geospatial Consultant copyright 2013 waypoint mapping LLC 1 Just a few definitions (Pop quiz at the end of presentation...) Unmanned Aerial wing or rotor

More information

A Plexos International Network Operating Technology May 2006

A Plexos International Network Operating Technology May 2006 A Plexos International Network Operating Technology May 2006 BY 4664 Jamestown Ave, Suite 325 Baton Rouge, LA 70808 225.218.8002 1.0 Introduction. is a software environment comprised of proven technologies

More information

Objective. Introduction A More Practical Model. Introduction A More Practical Model. Introduction The Issue

Objective. Introduction A More Practical Model. Introduction A More Practical Model. Introduction The Issue Taming the Underlying Challenges of Reliable Multihop Routing in Sensor Networks By Byron E. Thornton Objective We now begin to build a Wireless Sensor Network model that better captures the operational

More information

Motion Analysis. Motion analysis. Now we will talk about. Differential Motion Analysis. Motion analysis. Difference Pictures

Motion Analysis. Motion analysis. Now we will talk about. Differential Motion Analysis. Motion analysis. Difference Pictures Now we will talk about Motion Analysis Motion analysis Motion analysis is dealing with three main groups of motionrelated problems: Motion detection Moving object detection and location. Derivation of

More information

Outline Sensors. EE Sensors. H.I. Bozma. Electric Electronic Engineering Bogazici University. December 13, 2017

Outline Sensors. EE Sensors. H.I. Bozma. Electric Electronic Engineering Bogazici University. December 13, 2017 Electric Electronic Engineering Bogazici University December 13, 2017 Absolute position measurement Outline Motion Odometry Inertial systems Environmental Tactile Proximity Sensing Ground-Based RF Beacons

More information

3.2 Level 1 Processing

3.2 Level 1 Processing SENSOR AND DATA FUSION ARCHITECTURES AND ALGORITHMS 57 3.2 Level 1 Processing Level 1 processing is the low-level processing that results in target state estimation and target discrimination. 9 The term

More information

Chapters 1 9: Overview

Chapters 1 9: Overview Chapters 1 9: Overview Chapter 1: Introduction Chapters 2 4: Data acquisition Chapters 5 9: Data manipulation Chapter 5: Vertical imagery Chapter 6: Image coordinate measurements and refinements Chapters

More information

IMAGINE Objective. The Future of Feature Extraction, Update & Change Mapping

IMAGINE Objective. The Future of Feature Extraction, Update & Change Mapping IMAGINE ive The Future of Feature Extraction, Update & Change Mapping IMAGINE ive provides object based multi-scale image classification and feature extraction capabilities to reliably build and maintain

More information

Bridging Link Power Asymmetry in Mobile Whitespace Networks Sanjib Sur and Xinyu Zhang

Bridging Link Power Asymmetry in Mobile Whitespace Networks Sanjib Sur and Xinyu Zhang Bridging Link Power Asymmetry in Mobile Whitespace Networks Sanjib Sur and Xinyu Zhang University of Wisconsin - Madison 1 Wireless Access in Vehicles Wireless network in public vehicles use existing infrastructure

More information

Sensor Integration and Image Georeferencing for Airborne 3D Mapping Applications

Sensor Integration and Image Georeferencing for Airborne 3D Mapping Applications Sensor Integration and Image Georeferencing for Airborne 3D Mapping Applications By Sameh Nassar and Naser El-Sheimy University of Calgary, Canada Contents Background INS/GPS Integration & Direct Georeferencing

More information

Experiments with Edge Detection using One-dimensional Surface Fitting

Experiments with Edge Detection using One-dimensional Surface Fitting Experiments with Edge Detection using One-dimensional Surface Fitting Gabor Terei, Jorge Luis Nunes e Silva Brito The Ohio State University, Department of Geodetic Science and Surveying 1958 Neil Avenue,

More information

A Path Planning Algorithm to Enable Well-Clear Low Altitude UAS Operation Beyond Visual Line of Sight

A Path Planning Algorithm to Enable Well-Clear Low Altitude UAS Operation Beyond Visual Line of Sight A Path Planning Algorithm to Enable Well-Clear Low Altitude UAS Operation Beyond Visual Line of Sight Swee Balachandran National Institute of Aerospace, Hampton, VA Anthony Narkawicz, César Muñoz, María

More information

Marcel Worring Intelligent Sensory Information Systems

Marcel Worring Intelligent Sensory Information Systems Marcel Worring worring@science.uva.nl Intelligent Sensory Information Systems University of Amsterdam Information and Communication Technology archives of documentaries, film, or training material, video

More information

ESTGIS 2012 Priit Leomar 1

ESTGIS 2012 Priit Leomar 1 5.12.2012 ESTGIS 2012 Priit Leomar 1 ELI 1995 Haljas tee 25 Tallinn 12012 Estonia +372 6480242 eli@eli.ee 5.12.2012 ESTGIS 2012 Priit Leomar 2 MAIN ACTIVITIES ELI ELI MILITARY SIMULATIONS ELI ENGINEERING

More information

Development of 3D Positioning Scheme by Integration of Multiple Wiimote IR Cameras

Development of 3D Positioning Scheme by Integration of Multiple Wiimote IR Cameras Proceedings of the 5th IIAE International Conference on Industrial Application Engineering 2017 Development of 3D Positioning Scheme by Integration of Multiple Wiimote IR Cameras Hui-Yuan Chan *, Ting-Hao

More information

Gianluca Palermo Sapienza - Università di Roma. Paolo Gaudenzi Sapienza - Università di Roma

Gianluca Palermo Sapienza - Università di Roma. Paolo Gaudenzi Sapienza - Università di Roma Optimization of satellite s onboard data processing workload and related system resources, by offloading specific CPUintensive tasks onto external computational nodes Gianluca Palermo Sapienza - Università

More information