Relative Pose of IMU-Camera Calibration Based on BP Network


Modern Applied Science; Vol. 11, No. 10; 2017
ISSN 1913-1844   E-ISSN 1913-1852
Published by Canadian Center of Science and Education

Relative Pose of IMU-Camera Calibration Based on BP Network

Shasha Guo, Jing Xu, Ming Fang & Ying Tian

School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, China

Correspondence: Ming Fang, School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, China. Tel: 86-431-8558-333. E-mail: fangming@cust.edu.cn

Received: August 4, 2017   Accepted: August 26, 2017   Online Published: September 9, 2017

doi:10.5539/mas.v11n10p15   URL: https://doi.org/10.5539/mas.v11n10p15

Abstract

The combination of an IMU (Inertial Measurement Unit) and a camera is widely applied in electronic image stabilization, augmented reality, and navigation, where camera-IMU relative pose calibration is one of the key technologies; it can effectively cope with cases of insufficient feature points, unclear texture, blurred images, and the like. In this paper, a new camera-IMU relative pose calibration method is proposed that establishes a BP neural network model, from which we obtain the transform from IMU inertial measurements to images and thereby achieve camera-IMU relative pose calibration. The advantage of our method is that the BP neural network is trained with the Levenberg-Marquardt algorithm, avoiding more complex calculations in the whole process and making the camera-IMU combination system convenient to apply. Meanwhile, nonlinearities and noise are compensated during training, and the impact of gravity can be ignored. Our experimental results demonstrate that this method achieves camera-IMU relative pose calibration with accurate quaternion estimation.

Keywords: BP network, camera-IMU, pose, calibration

1. Introduction

A single visual sensor cannot recover the correct camera motion trajectory when image features are scarce, texture is not obvious, the image is blurred, or the illumination changes, all of which degrade image stabilization and tracking. An inertial sensor (IMU), however, can still provide attitude measurements and predictive information about its own motion without regard to the image or the external scene. Therefore, in fields such as electronic image stabilization, augmented reality, and navigation, a combined system of a visual sensor and an inertial sensor is used: the IMU data reflect the motion trajectory of the camera through the estimated rigid-body transformation between the IMU and the camera, for which camera-IMU relative pose calibration is essential.

Traditional camera-IMU relative pose calibration methods are mostly based on the Extended Kalman Filter (EKF) (Li & Mourikis, 2012; Jia & Evans, 2012; Li, Kim, & Mourikis, 2013; Jia & Evans, 2014) or the Unscented Kalman Filter (UKF) (Julier & Uhlmann, 2004). The motion estimate is mainly obtained from image features and is predicted and updated from IMU data when visual tracking fails. Kelly and Sukhatme (2011) employed an unscented Kalman filter to estimate the relative pose between the sensors, implemented with supporting hardware devices. They achieved camera-IMU calibration with translation errors of 0.43 cm, 0.23 cm, and 0.24 cm in the x, y, and z directions respectively, and Euler angle errors of about 0.6 degree on all three axes.
Though the method is effective, its accuracy is not high, and external hardware support is needed. A BP network (Chen, 2013) can implement a mapping function from input to output and has been proved able to realize arbitrarily complex nonlinear mappings, so it is suitable for problems with complex internal mechanisms. Considering this, we propose a new algorithm based on a BP neural network to achieve camera-IMU calibration; the framework is shown in Figure 1. When feature points are scarce, the IMU data are fed into the trained model, and the predicted quaternion quat_IMU is taken as finalquat, the relative rotation between adjacent frames. When feature points are adequate, the quaternion estimated from the images, quat_img, is compared with the trained model output quat_IMU; if the error exceeds a certain threshold, the image quaternion is fed back to update the trained model. So far we have implemented offline camera-IMU calibration; the comparison and feedback modules will be implemented in future work.
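To make this comparison-and-feedback loop concrete, here is a minimal Python sketch of the runtime logic just described. The threshold values, the helper callables, and all function names are illustrative assumptions, not interfaces from the paper.

```python
import numpy as np

# Assumed thresholds; the paper does not give concrete values.
FEATURE_THRESHOLD = 50   # minimum feature points for a reliable image estimate
ERROR_THRESHOLD = 0.05   # quaternion disagreement that triggers model feedback

def quat_distance(q1, q2):
    """Distance between unit quaternions; q and -q encode the same rotation."""
    q1, q2 = np.asarray(q1, float), np.asarray(q2, float)
    return min(np.linalg.norm(q1 - q2), np.linalg.norm(q1 + q2))

def final_quat(model_predict, model_update, imu_window, quat_img, n_features):
    """Select finalquat from the model prediction quat_IMU and the image
    estimate quat_img, mirroring the comparison/feedback loop of Figure 1."""
    quat_imu = model_predict(imu_window)       # quat_IMU from the trained BP model
    if quat_img is None or n_features < FEATURE_THRESHOLD:
        return quat_imu                        # too few features: trust the model
    if quat_distance(quat_img, quat_imu) > ERROR_THRESHOLD:
        model_update(imu_window, quat_img)     # feed the image quaternion back
    return quat_img
```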

The main contribution of our work is the model based on the BP algorithm, trained with the nonlinear least-squares Levenberg-Marquardt (LM) algorithm (Moré, 1978), which maps the time-domain IMU data to the spatial domain while preserving their correspondence; through it we obtain accurate predictions, i.e., accurate camera-IMU calibration. Moreover, the relative pose of the IMU and camera is calibrated by training, avoiding the complex traditional calibration used in industrial applications, such as extracting corners on a calibration plate, so even users without specialized knowledge can employ an IMU and apply the method to image stabilization.

The remainder of the paper is organized as follows. In Section 2, we introduce our lab's prior work on time synchronization. In Sections 3 and 4, we introduce our algorithm, present our experimental results, and analyze them; the results show excellent performance. Section 5 concludes the paper.

[Figure 1. Framework of the proposed calibration method: IMU acceleration and angular velocity feed the trained model, which outputs quat_IMU; a comparison module checks it against quat_img, obtained by feature point extraction from the camera images, feeds the error back to the model, and outputs finalquat.]

2. Prior Work

Given that the start times, sampling frequencies, and transmission delays of the visual sensor and the inertial sensor are not the same (Gupta, Dalal, & Mishra, 2015), data acquired at the same instant are often asynchronous, so time registration is needed to ensure a matched relationship between the sensors. We employ a Newton interpolation method to achieve time synchronization between the camera and the IMU (Tian & Fang, 2016). First, both frequencies are extended to a common frequency given by the least common multiple of the two sensor frequencies, and the central difference method is used to interpolate the extension points so that the acquisition frequencies are synchronized. After that, minimum-error matching of the data from the static part to the motion part of the rigidly fixed camera-IMU pair achieves time synchronization of the two datasets.

3. Algorithm

As the IMU acquisition frequency is much higher than the camera's, there are multiple IMU measurements during the interval between two adjacent frames captured by the camera. These IMU data changes may reflect the transform between the two adjacent frames; that is, after a suitable transformation, the camera pose is likely obtainable from the multiple sets of IMU data in the same time interval, translation aside. We therefore propose a new method, based on the BP algorithm, to calibrate the transform between camera and IMU. The camera and IMU are fixed rigidly together and begin collecting data at the same time. The rotation matrix between adjacent images is computed, represented by a quaternion, and taken as the output; the IMU data acquired during the same time interval, comprising acceleration (Modh, Dabhi, Mishra, & Mishra, 2015) and angular velocity, are taken as the input. A quaternion (Liu, 2014) is composed of one real unit and three imaginary units, and a three-dimensional rotation can be represented as

q = q_0 + q_1 i_1 + q_2 i_2 + q_3 i_3    (1)

where q_0 is the real part of the quaternion, q_1, q_2, and q_3 are the three imaginary parts, and i_1, i_2, and i_3 can be interpreted geometrically as rotations.
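Because the pipeline repeatedly converts rotation matrices estimated from image pairs into unit quaternions of the form in equation (1), a conversion routine is sketched below. This is the standard branch-on-largest-diagonal method, not code from the paper.

```python
import numpy as np

def rotation_matrix_to_quaternion(R):
    """Convert a 3x3 rotation matrix to a unit quaternion [q0, q1, q2, q3]
    with q0 the real part, matching the convention of equation (1)."""
    R = np.asarray(R, dtype=float)
    tr = np.trace(R)
    if tr > 0:
        s = 2.0 * np.sqrt(tr + 1.0)
        q = [0.25 * s,
             (R[2, 1] - R[1, 2]) / s,
             (R[0, 2] - R[2, 0]) / s,
             (R[1, 0] - R[0, 1]) / s]
    else:
        # Branch on the largest diagonal element for numerical stability.
        i = int(np.argmax(np.diag(R)))
        if i == 0:
            s = 2.0 * np.sqrt(1.0 + R[0, 0] - R[1, 1] - R[2, 2])
            q = [(R[2, 1] - R[1, 2]) / s, 0.25 * s,
                 (R[0, 1] + R[1, 0]) / s, (R[0, 2] + R[2, 0]) / s]
        elif i == 1:
            s = 2.0 * np.sqrt(1.0 + R[1, 1] - R[0, 0] - R[2, 2])
            q = [(R[0, 2] - R[2, 0]) / s, (R[0, 1] + R[1, 0]) / s,
                 0.25 * s, (R[1, 2] + R[2, 1]) / s]
        else:
            s = 2.0 * np.sqrt(1.0 + R[2, 2] - R[0, 0] - R[1, 1])
            q = [(R[1, 0] - R[0, 1]) / s, (R[0, 2] + R[2, 0]) / s,
                 (R[1, 2] + R[2, 1]) / s, 0.25 * s]
    q = np.array(q)
    return q / np.linalg.norm(q)
```

Branching on the largest diagonal element keeps the square root well away from zero, which avoids numerical blow-up for rotations near 180 degrees.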

For the images, the relative rotation between adjacent frames is estimated by extracting ORB (Oriented FAST and Rotated BRIEF) feature points. The relative rotation is represented by a unit quaternion, which is taken as the training label. A BP neural network model is then established to learn the relationship between the IMU data and the camera pose estimated from the images. For the hidden layers we use the tan-sigmoid activation function, shown in equation (2), to compress the data between -1 and 1, and a linear output layer is then added to the network.

tansig(n) = 2 / (1 + e^(-2n)) - 1    (2)

[Figure 2. BP network model of the proposed method: the inputs I_t, ..., I_{t+N} enter the input layer, pass through two hidden layers, and the output layer produces q_0(t:t+N), q_1(t:t+N), q_2(t:t+N), q_3(t:t+N), the rotation from frame t to frame t+N.]

The basic idea of the BP algorithm is forward signal transmission combined with error back-propagation during learning. The learning rule of a BP network uses steepest descent to continuously adjust the weights and thresholds of the network through back-propagation so as to minimize the squared error. A BP neural network topology comprises an input layer, hidden layers, and an output layer, and the number of hidden layers is not fixed. We establish a three-layer BP network, shown in Figure 2, to find the relationship F in equation (3) between the IMU data and the images. The quaternion represents the rotation matrix from frame t to frame t+N, and I_t is a six-dimensional vector of the IMU inertial measurements acquired at time t, comprising three acceleration values [a_x, a_y, a_z] and three angular velocity values [w_x, w_y, w_z]:

[q_0, q_1, q_2, q_3]_(t : t+N) = F(I_t : I_{t+N})    (3)

I_t = [a_xt, a_yt, a_zt, w_xt, w_yt, w_zt]    (4)

Assume the frequency of the camera is f_c and the frequency of the IMU is f_i; then the number of IMU measurements per captured frame is N = f_i / f_c. We regard training the three-layer BP neural network as a numerical optimization problem whose error surface is a highly nonlinear function of the weights (Mishra, 2017; Deepmala, 2014). Here we employ the Levenberg-Marquardt algorithm, a gradient-based method for seeking the extremum of a function, to find the optimal parameter vector that minimizes the objective; it is a compromise between the Newton method and the gradient method.

For evaluation, we use the Mean Squared Error (MSE) to describe the accuracy of the prediction model. The MSE is the mean expected squared difference between the estimated values and the true values over all groups; the smaller the MSE, the higher the accuracy of the prediction model.

MSE = (1/n) * sum_{i=1}^{n} (true_i - predicted_i)^2    (5)

In equation (5), i indexes the groups of data, true_i is the quaternion obtained from frame i to frame i+1, and predicted_i is the value estimated from the IMU data of group i.
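As a concrete reading of equations (3)-(5), the following sketch groups the N IMU samples measured between two frames into one flattened input vector and computes the MSE of equation (5). The array layout and the function names are our own assumptions.

```python
import numpy as np

def build_training_pairs(imu_samples, frame_quats, n_per_frame=10):
    """Pair each adjacent-frame quaternion label with the N = f_i / f_c IMU
    samples measured in between (equations (3) and (4)).

    imu_samples: (M, 6) rows [ax, ay, az, wx, wy, wz], one per IMU tick
    frame_quats: (K, 4) rows [q0, q1, q2, q3], rotation from frame k to k+1
    """
    imu_samples = np.asarray(imu_samples, dtype=float)
    X, Y = [], []
    for k, quat in enumerate(frame_quats):
        window = imu_samples[k * n_per_frame:(k + 1) * n_per_frame]
        if window.shape[0] < n_per_frame:
            break                       # drop an incomplete trailing window
        X.append(window.reshape(-1))    # flatten to a 6*N-dimensional input
        Y.append(quat)
    return np.asarray(X), np.asarray(Y)

def mse(true_q, pred_q):
    """Mean squared error over all groups and components, as in equation (5)."""
    diff = np.asarray(true_q, float) - np.asarray(pred_q, float)
    return float(np.mean(diff ** 2))
```

With the 10-fps camera and 100-Hz IMU used in the experiments below, each input row is a 60-dimensional vector, matching the network input described in Section 4.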

4. Experiments

4.1 Experimental Platform

The IMU and the camera are fixed rigidly together to form a visual-inertial system, as shown in Figure 3, and images and inertial measurement data are then collected synchronously, based on our preliminary work.

[Figure 3. Combination system of IMU and camera]

In our experiments we use an industrial GX9 camera as the visual sensor, with a resolution of 1920*1080 and an acquisition frequency kept at 10 fps. For the inertial sensor we use an MPU6050, which provides three-axis angular rate and linear acceleration measurements at 100 Hz; its baud rate is set to 9600.

4.2 Experimental Dataset

The combined camera-IMU system is started at one time instant. We collect two datasets, a fast sequence and a slow sequence. For the fast sequence, we shake the combination system intensely and quickly, so the IMU data during the interval between two frames may change significantly; for the slow sequence, the IMU changes within each group of N samples are not obvious, and sometimes there is no change at all. From Section 4.1, N = f_i / f_c = 10, so for each sequence there are ten sets of IMU data, comprising acceleration and angular velocity, corresponding to each pair of adjacent images; the training input for each group is thus a 60-dimensional vector. The training source data must be acquired in scenes with clear and plentiful feature points. The optical flow method is then used to track the image feature points and compute the rotation matrix between adjacent images, which generates the training labels, i.e., the camera pose, automatically. For convenience, we use quaternions instead of rotation matrices, so the training output is in fact a 4-dimensional vector. We captured 2000 frames of images and, correspondingly, 20000 sets of IMU data for each sequence.

[Figure 4. Network established in the experiment]

Figure 4 shows the three-layer network established in the experiment: sixty nodes in the first hidden layer, thirty nodes in the second hidden layer, and four nodes in the output layer, with the tan-sigmoid activation function used in each layer. During training, the Levenberg-Marquardt algorithm is used to obtain the optimal parameters.

4.3 Experimental Results

The IMU data and images are preprocessed into the required training inputs and outputs. After preprocessing, we obtain 2000 sets of training source data for each sequence, which are divided sequentially into two parts, for training and for final testing, in the proportions 80% and 20%. The training sequence is used to train and tune the model.
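The architecture of Figure 4 can be written down directly; the sketch below uses PyTorch, where nn.Tanh() is mathematically identical to the tansig of equation (2), since 2/(1 + e^(-2n)) - 1 = tanh(n). The paper trains with Levenberg-Marquardt (as in MATLAB's trainlm); PyTorch has no built-in LM optimizer, so this sketch substitutes Adam on the same mean-squared-error objective, a deliberate simplification.

```python
import torch
import torch.nn as nn

class CalibNet(nn.Module):
    """The 60-60-30-4 layout described for Figure 4."""
    def __init__(self, n_input=60):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_input, 60), nn.Tanh(),   # first hidden layer, 60 nodes
            nn.Linear(60, 30), nn.Tanh(),        # second hidden layer, 30 nodes
            nn.Linear(30, 4),                    # output layer: [q0, q1, q2, q3]
        )

    def forward(self, x):
        return self.net(x)

def train(model, X, Y, epochs=200, lr=1e-3):
    """Stand-in training loop: Adam instead of Levenberg-Marquardt."""
    X = torch.as_tensor(X, dtype=torch.float32)
    Y = torch.as_tensor(Y, dtype=torch.float32)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), Y)
        loss.backward()
        opt.step()
    return model
```

The output layer here is linear, following Section 3; Section 4 states that every layer uses tan-sigmoid, so a final nn.Tanh() could be appended instead — the source is ambiguous on this point.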

During training, the input data are further divided into three parts, training, validation, and test data, in the ratio 4:1:1, in order to achieve the optimal model. The final test sequence is used to test the trained model. Figure 5 shows the convergence tendency of the error goal for both the fast sequence and the slow sequence.

[Figure 5. Error goal convergence tendency: (a) error curve for the fast sequence; (b) error curve for the slow sequence. Each panel plots the mean squared error of the train, validation, and test subsets against the training epoch.]

Figure 5(a) shows that the optimal error goal for the fast sequence is .875, and Figure 5(b) shows that the optimal error for the slow sequence is .684; these results demonstrate that the precision of the trained model is almost the same for the fast and the slow sequence. Next, the training IMU data and test IMU data of both sequences are fed into their respective trained models to predict. Figures 6(a) and 6(b) show the predicted and true quaternions of the training data for the fast and slow sequences, and Figures 7(a) and 7(b) show the predicted and true quaternions of the test data. Here the true quaternions are those obtained by extracting image feature points. In Figures 6 and 7, the horizontal axis indexes the data groups in order and the vertical axes show the values of the quaternion components. Each group of data comprises a 4-dimensional quaternion vector obtained from adjacent frames and a 60-dimensional IMU data vector composed of the ten groups of IMU data corresponding to the two frames.
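Table 1 below reports a per-component "mean orientation error". The paper does not define this metric explicitly, so the sketch below assumes it is the mean absolute difference between the predicted and true quaternion components; treat that interpretation as an assumption.

```python
import numpy as np

def mean_orientation_error(true_q, pred_q):
    """Assumed Table 1 metric: mean absolute error per quaternion component.

    true_q, pred_q: (n_groups, 4) arrays of [q0, q1, q2, q3] rows.
    Returns a length-4 array, one error value per component.
    """
    true_q = np.asarray(true_q, dtype=float)
    pred_q = np.asarray(pred_q, dtype=float)
    return np.mean(np.abs(true_q - pred_q), axis=0)
```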

[Figure 6. Quaternion components (q_0 to q_3) of the training data, true versus predicted: (a) fast sequence; (b) slow sequence.]

[Figure 7. Quaternion components (q_0 to q_3) of the test data, true versus predicted: (a) fast sequence; (b) slow sequence.]

By comparing Figures 6(a) and 6(b), and Figures 7(a) and 7(b), we can see that the method works better on the slow sequence, where the predicted quaternion better reflects the true changes. A sudden change exceeding a certain threshold indicates a false prediction; in practical applications of a visual-inertial combination system, such a value can be replaced by the measurement from the images, with the threshold determined by the error convergence of the training model.

Table 1. Mean orientation error comparison

                      training data                      test data
  sequence        q_0    q_1    q_2    q_3        q_0    q_1    q_2    q_3
  fast sequence   .9     .38    .95    .23        .4     .29    .26    .44
  slow sequence   .      .4     .37    .7         .      .4     .9     .8

For each data sequence, the mean orientation error between the predicted quaternion and the true quaternion obtained from the images is computed; Table 1 compares the mean orientation error of each data sequence. Taking into account the amount of input data and the probability of erroneous values, and in conjunction with Figures 6 and 7, the table clearly shows that the error on the training data is lower than that on the test data. Moreover, the four components of the quaternion are interrelated, and their combination reflects an actual pose. In short, the errors in Table 1 indicate that a BP neural network can indeed be trained to achieve camera-IMU relative pose calibration and to predict the relative camera pose from IMU data.

5. Conclusion

In this paper, a new approach is proposed in which a BP neural network trained with the Levenberg-Marquardt algorithm estimates the relationship from IMU data to relative camera pose. The presented experimental results show that the method is reliable and that, after training, the predicted rotation is accurate. In future work, we will fuse the visual and inertial measurements to achieve online calibration in real time, and then apply the method to real-time electronic image stabilization.

Acknowledgments

The authors are very grateful to the editorial board members and the reviewers of the esteemed journal for their valuable suggestions and constructive comments, which greatly helped us improve this paper. This work is supported by the program of the Jilin Science and Technology Development Plan (No. 27372GX).

References

Chen, M. (2013). Principle and Example of MATLAB Neural Network. Tsinghua University Press.

Deepmala. (2014). A Study on Fixed Point Theorems for Nonlinear Contractions and its Applications (Ph.D. thesis). Pt. Ravishankar Shukla University, Raipur, Chhattisgarh, India.

Gupta, S., Dalal, U., & Mishra, V. N. (2015). Performance on ICI self cancellation in FFT-OFDM and DCT-OFDM system. Journal of Function Spaces, 2015, Article ID 854753, 7 pages.

Jia, C., & Evans, B. L. (2012). Probabilistic 3-D motion estimation for rolling shutter video rectification from visual and inertial measurements. IEEE International Workshop on Multimedia Signal Processing (pp. 23-28). https://doi.org/10.1109/MMSP.2012.634344

Jia, C., & Evans, B. L. (2014). Online calibration and synchronization of cellphone camera and gyroscope. IEEE Global Conference on Signal and Information Processing (pp. 73-734). https://doi.org/10.1109/GlobalSIP.2013.6736995

Julier, S. J., & Uhlmann, J. K. (2004). Unscented filtering and nonlinear estimation. Proceedings of the IEEE, 92(3), 401-422. https://doi.org/10.1109/JPROC.2004.837637

Kelly, J., & Sukhatme, G. S. (2011). Visual-inertial sensor fusion: Localization, mapping and sensor-to-sensor self-calibration. International Journal of Robotics Research, 30(1), 56-79.

Li, M., & Mourikis, A. I. (2012). Improving the accuracy of EKF-based visual-inertial odometry. IEEE International Conference on Robotics and Automation. https://doi.org/10.1109/ICRA.2012.6225229

Li, M., Kim, B. H., & Mourikis, A. I. (2013). Real-time motion tracking on a cellphone using inertial sensing and a rolling-shutter camera. IEEE International Conference on Robotics and Automation (pp. 4712-4719). https://doi.org/10.1109/ICRA.2013.663248

Liu, J. F. (2014). Three dimensional rotation represented by quaternion.

Mishra, L. N. (2017). On existence theorems for some generalized nonlinear functional-integral equations with applications. Filomat. https://doi.org/10.2298/fil778n

Modh, A., Dabhi, M., Mishra, L. N., & Mishra, V. N. (2015). Wireless network controlled robot using a website, Android application or simple hand gestures. Journal of Computer Networks, 3(1), 1-5. https://doi.org/10.12691/jcn-3-1-1

Moré, J. J. (1978). The Levenberg-Marquardt algorithm: Implementation and theory. In Numerical Analysis, Lecture Notes in Mathematics 630 (pp. 105-116). Springer.

Tian, Y., & Fang, M. (2016). A time synchronization method for inertial sensor and visual sensor. World Congress on Intelligent Control and Automation (WCICA), 836-839. https://doi.org/10.1109/WCICA.2016.757856

Copyrights

Copyright for this article is retained by the author(s), with first publication rights granted to the journal. This is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).