Identification of Multisensor Conversion Characteristic Using Neural Networks


Sensors & Transducers, 2013 by IFSA (http://www.sensorsportal.com)

Iryna TURCHENKO and Volodymyr KOCHAN
Research Institute of Intelligent Computer Systems, Ternopil National Economic University,
3 Peremoga Square, Ternopil, Ukraine
E-mail: itu@tneu.edu.ua, vk@tneu.edu.ua

Received: 5 May 2013 / Accepted: 6 August 2013 / Published: August 2013

Abstract: A method of identifying the individual conversion characteristic of a multisensor using a reduced number of its calibration/testing results is described in this paper. The proposed method is based on the neural-network reconstruction (approximation or prediction) of the surface of the multisensor conversion characteristic. Each neural network module reconstructs a separate point of the surface. Our results show that the use of a Support Vector Machine (SVM) model improves the reconstruction accuracy of the multisensor conversion characteristic. The reconstruction results obtained by the SVM are compared with the results obtained by a multilayer perceptron (MLP). Copyright 2013 IFSA.

Keywords: Multisensor, Conversion characteristic, Neural networks.

1. Introduction

Interest in sensors with an output signal that deliberately depends on several physical quantities (multisensors, MS [1]) has increased significantly in the last decade. Such sensors are successfully used for simultaneous data acquisition of several physical quantities in chemical engineering, safety systems, ecological monitoring and other fields. The main advantages of such MSs are (i) measurement of a large number of physical quantities that cannot be measured by traditional sensors, (ii) ease of use and (iii) relatively low price.

One of the disadvantages of an MS is a substantial deviation of its conversion characteristic from the nominal one [2]. In most cases the nominal conversion characteristic is specified with low accuracy, so MS-based measurement devices are not accurate. An improvement is possible by using an individual conversion characteristic, which is defined by calibration/testing results obtained in the real exploitation conditions of the MS [3]. This approach entails considerable labour in fulfilling the calibration/testing procedures, so their number should be reduced as much as possible.

In our previous work [4] we developed a neural-based identification method of the MS conversion characteristic that uses a reduced number (only nine) of real calibration results to reconstruct all 49 points of the surface of its conversion characteristic. In that paper we (i) specified the MS conversion characteristic surface in a three-dimensional coordinate system on the example of a gas multisensor TGS-83 [2], (ii) overviewed existing approaches and showed that neural network (NN)-based methods provide better reconstruction accuracy of the surfaces of MS conversion characteristics, (iii) presented the development of the NN-based method consisting of three phases and (iv) showed the high reconstruction accuracy (the relative reconstruction error does not exceed 0.36 %) of the first phase of the method, i.e. the reconstruction of the surface between the real calibration points by the approximating neural network.

In our previous paper [5] we presented the reconstruction results for the second phase of the proposed method [4], i.e. the reconstruction accuracy of the surface outside the real calibration points by the predicting neural network. This paper is an extended version of [5]: along with the experimental results for the second phase, we present the experimental results of the third, final phase of the proposed method.

Considering the good generalization abilities of a standard MLP [6], its good approximating abilities [7] and its good predicting properties on small input training data sets [8], we have chosen this widely used model for fulfilling both the approximation and the prediction tasks. Meanwhile, the SVM model has been successfully applied to many real-world applications and has shown good predicting abilities on a number of prediction tasks [9]. Therefore we have also applied this model to our task and compared the results of both approaches.

The rest of the paper is organized as follows. Since we will be referring to the assigned codes of the predicted calibration points during our experiments, we present the developed neural-based identification method [4] of the MS conversion characteristic in Section 2. Subsection 3.1 presents the comparison of the reconstruction results of the MS conversion characteristic by the MLP and the SVM for the second phase of the method, and subsection 3.2 presents these results for the third phase of the method. Section 4 concludes the paper.

2. Development of the NN-based Method of Identification of the Individual Conversion Characteristic of a Multisensor

The basic idea of the proposed method of identifying an individual MS conversion characteristic from a reduced number of real calibration/testing results can be explained with the help of Fig. 1: the values of the white points are calculated on the basis of the values of the black points [4]. As an example, in Fig. 1 the 49 calibration points for identification of an individual conversion characteristic of an MS measuring two physical quantities are encoded by two digits: the first digit describes a point along the vertical axis of the physical quantity B, the second digit describes a point along the horizontal axis of the physical quantity A. The nine black points 22, 24, 26, 42, 44, 46, 62, 64, 66 are the real MS calibration/testing points; the other 40 points should be reconstructed by the proposed NN-based method.

The reconstruction of all 40 points consists of three phases. The first phase is to reconstruct calibration points by approximation (interpolation) between the real calibration points using an Approximating Neural Network (ANN). The codes of the reconstructed points are shown in the third column of Table 1, the codes of the real calibration points which serve as the input of the approximation are shown in the second column of Table 1, and the codes of the NNs performing this approximation are shown in the first column of Table 1. To provide transparent coding of all NN modules used in the proposed method, the code of an NN consists of three digits: the first digit is the code of the phase, the second digit is the code of the variant within this phase and the third digit is the own code of the NN module. As a result of the first-phase execution we reconstruct 16 new points, obtained directly from the real calibration results.
Thus, at the end of this phase we have 25 points in total, which belong to the surface of the individual MS conversion characteristic.

Fig. 1. A placement of the real and reconstructed calibration points in the coordinates of the physical quantities (grid positions 1-7 along the axes of the physical quantities A and B; legend: reconstructed calibration points, real calibration points).

The second phase is to reconstruct calibration points by prediction (extrapolation) on the basis of the real calibration points using a Predicting Neural Network (PNN). Two variants are possible: (i) without including the points reconstructed in the first phase in the training set and (ii) with these points included. The assignment of NNs for the second phase, first variant, is shown in Table 2; the assignment for the second phase, second variant, is shown in Table 3. As a result of the second-phase execution we reconstruct 16 new points according to the third column of Table 2 or the fourth column of Table 3. Thus, at the end of this phase, we have 41 points in total, which belong to the surface of the individual MS conversion characteristic.
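To make the point coding and the per-phase bookkeeping easier to follow, the short sketch below (an editorial illustration in Python; the paper's own routines were written in C) enumerates the 7x7 grid of two-digit point codes and the groups of points produced in each phase. The printed totals reproduce the 25, 41 and 49 points stated above, and the last set lists the eight border points that remain for the third phase described below.

```python
# Editorial sketch (not from the paper): enumerate the 7x7 grid of calibration
# point codes and the groups of points handled in each phase, to make the
# bookkeeping (9 real + 16 + 16 + 8 = 49 points) explicit.

# Point code "BA": first digit = physical quantity B (row), second = A (column).
all_points = {10 * b + a for b in range(1, 8) for a in range(1, 8)}

# Nine real calibration/testing points (the black points in Fig. 1).
real = {22, 24, 26, 42, 44, 46, 62, 64, 66}

# First phase: interpolation between real points (outputs listed in Table 1 below).
phase1 = {23, 25, 43, 45, 63, 65, 32, 52, 34, 54, 36, 56, 33, 55, 53, 35}

# Second phase: extrapolation towards the borders (outputs of Tables 2/3 below).
phase2 = {21, 27, 41, 47, 61, 67, 12, 72, 14, 74, 16, 76, 11, 77, 71, 17}

# Third phase: the remaining border points (outputs of Table 4 below).
phase3 = all_points - real - phase1 - phase2

if __name__ == "__main__":
    print("after phase 1:", len(real | phase1))                       # 25 points
    print("after phase 2:", len(real | phase1 | phase2))              # 41 points
    print("after phase 3:", len(real | phase1 | phase2 | phase3))     # 49 points
    print("third-phase points:", sorted(phase3))  # [13, 15, 31, 37, 51, 57, 73, 75]
```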

Table 1. First phase.

NN     Input of ANN (real calibration points)     Output of ANN (reconstructed calibration points)
111    22, 24, 26                                 23, 25
112    42, 44, 46                                 43, 45
113    62, 64, 66                                 63, 65
114    22, 42, 62                                 32, 52
115    24, 44, 64                                 34, 54
116    26, 46, 66                                 36, 56
117    22, 44, 66                                 33, 55
118    26, 44, 62                                 53, 35

Table 2. Second phase, first variant.

NN     Input of PNN (real calibration points)     Output of PNN (reconstructed calibration points)
211    22, 24, 26                                 21, 27
212    42, 44, 46                                 41, 47
213    62, 64, 66                                 61, 67
214    22, 42, 62                                 12, 72
215    24, 44, 64                                 14, 74
216    26, 46, 66                                 16, 76
217    22, 44, 66                                 11, 77
218    26, 44, 62                                 71, 17

Table 3. Second phase, second variant.

NN     Input of PNN (real calibration points)     Input of PNN (points reconstructed in the first phase)     Output of PNN (points reconstructed in the second phase)
221    22, 24, 26     23, 25     21, 27
222    42, 44, 46     43, 45     41, 47
223    62, 64, 66     63, 65     61, 67
224    22, 42, 62     32, 52     12, 72
225    24, 44, 64     34, 54     14, 74
226    26, 46, 66     36, 56     16, 76
227    22, 44, 66     33, 55     11, 77
228    26, 44, 62     53, 35     71, 17

The third phase is to reconstruct calibration points by prediction (extrapolation) based on the points reconstructed in the first phase, using a PNN. The assignment of NNs for the third phase is shown in Table 4. As a result of the third-phase execution we reconstruct 8 new points according to the third column of Table 4. Thus, at the end of this last phase we have all 49 points, which belong to the surface of the individual MS conversion characteristic. Therefore, the proposed NN-based method allows reconstructing 40 points of the surface based on 9 real calibration/testing points.

Table 4. Third phase.

NN     Input of PNN (points reconstructed in the first phase by interpolation)     Output of PNN
311    32, 33, 34, 35, 36     31, 37
312    52, 53, 54, 55, 56     51, 57
313    23, 33, 43, 53, 63     13, 73
314    25, 35, 45, 55, 65     15, 75

A graphical interpretation of the NN-based method of individual MS conversion characteristic identification is presented in Fig. 2. As mentioned above, the NNs with the codes 111-118 fulfil the approximation tasks, so we have called them Approximating Neural Networks (ANNs). The NNs with the codes 211-228 and 311-314 fulfil the prediction tasks, and we have called them Predicting Neural Networks (PNNs). The well-known back-propagation method [5] with an adaptive learning rate [10] is used for training both types of NNs (in the case of the MLP).

3. Simulation Modeling Results

Our previous research showed the high reconstruction accuracy of the first phase of the considered method [4], i.e. the reconstruction of the 16 surface points between the real calibration points by the ANNs. The maximum relative error of approximation did not exceed 0.36 % (reached only for the two points 33 and 55), while the average relative error of approximation was smaller still.

3.1. Results for the Second Phase of the Method

In this section we present the reconstruction results for the second phase of the proposed method, i.e. the reconstruction accuracy of the surface points outside the real calibration points by the PNNs. We provide the experimental results for the second phase, second variant, i.e. predicting the points from the fourth column of Table 3, because this variant corresponds better to the real exploitation conditions of an MS. Preliminary experiments showed that:
- One PNN, for example the one with code 221 (see Table 3), cannot predict the two points 21 and 27 simultaneously, because they are located at opposite ends relative to the available real data (Fig. 1). Therefore we have to apply different PNN modules for prediction, for example predicting point 27 on the basis of points 22, 23, 24, 25, 26 and point 21 on the basis of points 26, 25, 24, 23, 22;
- Only the 5 available training points for each predicted point are not enough for appropriate PNN training to provide a high-accuracy prediction. Therefore we artificially increased the quantity of input data for each predicted point by applying an additional ANN (a rough illustration of this step is sketched below).
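The augmentation-plus-prediction idea just outlined can be pictured with the following sketch. It is an editorial illustration only: the authors' ANN and PNN routines were written in C and trained with back-propagation and an adaptive learning rate, whereas the sketch uses scikit-learn's MLPRegressor as a stand-in. The sample values, the density of the interpolated set, the hidden-layer sizes and the window-based input encoding of the predictor are assumptions made for illustration, not details taken from the paper.

```python
# Hedged illustration (assumed details; scikit-learn stand-in for the authors' C code):
# (1) densify the five known points along one grid line with a tiny MLP approximator,
# (2) train a 3-input MLP predictor on sliding windows of the densified curve,
# (3) extrapolate beyond the last known point towards a border point of the grid.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Five known values along one line of the grid (positions 2..6); the numbers are
# invented for illustration -- in the paper they come from real/approximated points.
pos_known = np.array([2.0, 3.0, 4.0, 5.0, 6.0])
val_known = np.array([1.10, 1.25, 1.45, 1.70, 2.00])

# (1) Approximating network: position -> value (cf. the paper's small ANN with a few
# sigmoid hidden neurons; scikit-learn's regressor uses a linear output unit).
ann = MLPRegressor(hidden_layer_sizes=(3,), activation="logistic",
                   solver="lbfgs", max_iter=20000, random_state=0)
ann.fit(pos_known.reshape(-1, 1), val_known)

# Densified training curve between positions 2 and 6.
pos_dense = np.linspace(2.0, 6.0, 21)
val_dense = ann.predict(pos_dense.reshape(-1, 1))

# (2) Predicting network: a window of 3 consecutive values -> the next value
# (cf. the PNN with three inputs and one output; window encoding is an assumption).
X = np.array([val_dense[i:i + 3] for i in range(len(val_dense) - 3)])
y = val_dense[3:]
pnn = MLPRegressor(hidden_layer_sizes=(10,), activation="logistic",
                   solver="lbfgs", max_iter=20000, random_state=0)
pnn.fit(X, y)

# (3) Roll the prediction forward until grid position 7 (the border point) is reached.
window = list(val_dense[-3:])
pos = 6.0
step = pos_dense[1] - pos_dense[0]
while pos < 7.0 - 1e-9:
    nxt = pnn.predict(np.array(window[-3:]).reshape(1, -1))[0]
    window.append(nxt)
    pos += step
print(f"extrapolated value at the border grid position: {window[-1]:.3f}")
```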

Fig. 2. Graphical interpretation of the NN-based method of individual MS conversion characteristic identification (the first-, second- and third-phase NN modules with their input and output point codes, together with the reconstructed surface: CO and CH4 concentrations on the horizontal axes, conductivity in uS on the vertical axis).

Thus, in order to prepare an input training set for each PNN, an additional ANN, an MLP with a single hidden layer of 3 sigmoid neurons and a sigmoid output neuron, was used. With it we increased the quantity of input data for each PNN from the 5 available points to a denser interpolated set. The limit on the number of training iterations was fixed; the results of the approximation are presented in Table 5.

Table 5. Approximation results of input data preparation for the PNNs: for each pair of predicted points (21/27, 41/47, 61/67, 12/72, 14/74, 16/76, 11/77, 71/17), the reached SSE and the relative approximation errors, %, at the first, second and third real calibration point of the case.

We evaluated the relative approximation errors only for the 3 real calibration points of each case (see Table 3, second column), because these data are known for each approximation case. For example, for the approximation of points 21 and 27 we evaluated the approximation errors at points 22, 24, 26; for the case of points 41 and 47, at points 42, 44, 46; and so on (Table 5). As we can see, the MLP, as a universal approximator [7], showed very low relative approximation errors: the maximal error does not exceed 0.39 %, and the average error is even lower. We then used the obtained points as the training set for the appropriate PNN to predict the value of each point in the fourth column of Table 3.

As the PNN we used an MLP model with three inputs, one hidden layer and one output, with sigmoid neurons in the hidden and output layers. The ANN and PNN routines were developed in C. All experiments were fulfilled on a computer with an Intel Core Duo processor and 3 GB of RAM. The MLP training parameters and the prediction results are collected in Table 6.

As another model of the PNN we used a nu-SVR model, an SVM working in regression mode, available within the LIBSVM library [11]. All the input parameters and the prediction results of the SVM model are collected in Table 7. Using the input parameters s=4 and t=1 we chose the nu-SVR type and the polynomial kernel of the SVM, respectively.
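For reference, LIBSVM's option -s 4 selects nu-SVR and -t 1 selects the polynomial kernel, and c, g, d, r are LIBSVM's cost, gamma, polynomial degree and coef0 parameters. The sketch below reproduces that setup through scikit-learn's NuSVR class, which wraps LIBSVM. It is an illustration only, simplified to a one-dimensional position-to-value regression; the training data and parameter values are placeholders, not the values of Table 7.

```python
# Hedged sketch: nu-SVR with a polynomial kernel, mirroring the LIBSVM setup
# "-s 4 -t 1" used in the paper (data and parameter values below are placeholders).
import numpy as np
from sklearn.svm import NuSVR  # scikit-learn's NuSVR is backed by LIBSVM's nu-SVR

# Toy stand-in for the densified training data of one predicting module:
# grid position along one line of the surface -> sensor output value.
X_train = np.linspace(2.0, 6.0, 21).reshape(-1, 1)
y_train = 0.5 + 0.1 * X_train.ravel() ** 1.5

# c, g, d, r in Table 7 correspond to LIBSVM's cost (-c), gamma (-g),
# polynomial degree (-d) and coef0 (-r); nu (-n) is left at its default here.
svr = NuSVR(kernel="poly", C=4.0, gamma=0.5, degree=2, coef0=0.5, nu=0.5)
svr.fit(X_train, y_train)

# Extrapolate towards the border grid position (position 7 on this line).
print("predicted value at the border position:", float(svr.predict([[7.0]])[0]))
```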

All other input parameters c, d, g, r were chosen empirically. The training time of the SVM model does not exceed several milliseconds for each predicted calibration point on the same computer. The maximum relative error of prediction is 0.69 % for the SVM model versus about 5 % for the MLP model, and the average error of the SVM model is correspondingly lower. The comparative analysis of the relative errors of prediction is depicted in Fig. 3 and Fig. 4. As can be seen, the SVM model showed much better prediction results than the MLP model for 14 of the cases; only for two of the calibration points did the MLP model outperform the SVM model.

Table 6. MLP training parameters and prediction results for the second phase (for each predicted point: the reached SSE, the training time in s, and the relative error of prediction, %).

Table 7. SVM training parameters and prediction results for the second phase (for each predicted point: the c, g, d, r parameters, the obtained MSE and the relative error of prediction, %).

Fig. 3. Comparison of the prediction results by the MLP and the SVM for the first part of the calibration points: second phase (relative error of prediction, %, per calibration point).

Fig. 4. Comparison of the prediction results by the MLP and the SVM for the second part of the calibration points: second phase (relative error of prediction, %, per calibration point).

3.2. Results for the Third Phase of the Method

In this section we present the reconstruction results for the third phase of the proposed method, i.e. the reconstruction accuracy of the surface points from the third column of Table 4. We used the same methodology, hardware and software as in subsection 3.1. We also used the same approximation approach, described for Table 5, to improve the prediction accuracy of the PNNs, again increasing the quantity of input data for each PNN from the 5 available points. Similarly to the results of the second phase, the reconstruction (prediction) results for the third phase are collected in Tables 8 and 9 for the MLP and the SVM respectively. The maximum relative error of prediction is 0.54 % for the SVM model, and both its maximum and average errors are lower than those of the MLP model. The comparative analysis of the relative errors of prediction is depicted in Fig. 5. As can be seen, the SVM model showed much better prediction results than the MLP model for all points of the third phase. These high-accuracy prediction results provided by the SVM model allow applying the proposed method in real measurement conditions to reconstruct an individual conversion characteristic of a multisensor.
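The comparisons in Tables 6-9 and Figs. 3-5 are reported as relative prediction errors in percent. The exact normalisation is not spelled out in the text; the small helper below assumes the common definition relative to the reference value of each calibration point, which is an assumption rather than the authors' formula, and the numbers in the example are placeholders only.

```python
# Hedged helper: relative prediction error in percent, assuming normalisation by
# the reference value of each calibration point (the paper does not state the
# exact formula). Used here only to compare two sets of predictions.
import numpy as np

def relative_error_percent(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.abs(y_pred - y_true) / np.abs(y_true) * 100.0

if __name__ == "__main__":
    reference = [1.80, 2.10]   # placeholder reference values for two points
    mlp_pred  = [1.86, 2.02]   # placeholder MLP predictions
    svm_pred  = [1.81, 2.09]   # placeholder SVM predictions
    print("MLP errors, %:", relative_error_percent(reference, mlp_pred))
    print("SVM errors, %:", relative_error_percent(reference, svm_pred))
```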

Table 8. MLP training parameters and prediction results for the third phase (for each predicted point 31, 73, 51, 75, 13, 37, 15, 57: the MLP architecture with sigmoid hidden and output neurons, the reached SSE, the training time in s, and the relative error of prediction, %).

Table 9. SVM training parameters and prediction results for the third phase (for each predicted point: the s, t, c, g, d, r parameters, the mean squared error and the relative error of prediction, %).

Fig. 5. Comparison of the prediction results by the MLP and the SVM for the third phase (relative error of prediction, %, for the points 31, 73, 51, 75, 13, 37, 15, 57).

4. Conclusions

A neural network based method of identifying the individual conversion characteristic of a multisensor using a reduced number of its calibration/testing results is considered in this paper. The proposed NN-based method allows reconstructing 40 points of the surface based on only 9 real calibration/testing points. We have evaluated the reconstruction accuracy of the points of the multisensor conversion characteristic surface outside the real calibration points, predicted by a multilayer perceptron and a Support Vector Machine. Our results show the good potential of a Support Vector Machine model to provide high-accuracy prediction of a multisensor conversion characteristic surface from a small quantity of input data for neural network training. The maximum relative prediction error during the reconstruction of the surface points does not exceed 0.7 %. These high-accuracy prediction results provided by the SVM model allow applying the proposed method in real exploitation conditions to improve the measurement accuracy of multisensors.

References

[1]. A. H. Taner and J. E. Brignell, Virtual instrumentation and intelligent sensors, Sensors and Actuators A: Physical, Vol. 61, No. 1-3, 1997, pp. 427-430.
[2]. http://www.figarosensor.com/products/83pdf.pdf
[3]. J. E. Brignell, Digital compensation of sensors, Scientific Instruments, Vol. 20, No. 9, 1987, pp. 1097-1102.
[4]. I. Turchenko, O. Osolinsky, V. Kochan, A. Sachenko, R. Tkachenko, V. Svyatnyy and M. Komar, Approach to neural-based identification of multisensor conversion characteristic, in Proceedings of the 5th IEEE International Workshop on Intelligent Data Acquisition and Advanced Computing Systems (IDAACS 2009), Rende, Italy, 2009.
[5]. I. Turchenko, V. Kochan, Improvement of identification accuracy of multisensor conversion characteristic using SVM, in Proceedings of the 6th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems (IDAACS 2011), Prague, Czech Republic, 2011.
[6]. S. Haykin, Neural Networks: A Comprehensive Foundation, 2nd edition (Russian translation, ed. by N. N. Kussul), Williams, Moscow, 2006, 1104 p. (in Russian).
[7]. K. Hornik, M. Stinchcombe and H. White, Multilayer feedforward networks are universal approximators, Neural Networks, Vol. 2, 1989, pp. 359-366.
[8]. A. Sachenko, V. Kochan and V. Turchenko, Instrumentation for gathering data, IEEE Instrumentation & Measurement Magazine, Vol. 6, Issue 3, 2003, pp. 34-40.
[9]. Support Vector Machine application list, http://www.clopinet.com/isabelle/projects/svm/applist.html
[10]. V. Golovko, Neural Networks: Training, Models and Applications, Radiotechnika, Moscow, 2001 (in Russian).
[11]. Chih-Chung Chang and Chih-Jen Lin, LIBSVM: a library for support vector machines, 2001. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm

Copyright 2013, International Frequency Sensor Association (IFSA). All rights reserved. (http://www.sensorsportal.com)