
doi: 10.21311/002.31.6.09

Application of New Neural Network Technology in Traffic Volume Prediction

HU Sheng-neng*
School of Civil Engineering & Communication, North China University of Water Resources and Electric Power, Zhengzhou 450011, China

Abstract

In view of the disadvantages of traditional neural network applications, neural network ensemble technology is applied to traffic volume forecasting for the first time. A neural network ensemble uses a finite number of neural networks to learn the same problem and synthesizes the outputs of the individual networks, which significantly improves the generalization ability of the learning system. In addition to the Boosting and Bagging ensemble methods, a neural network ensemble method based on a divide-and-conquer strategy is proposed, and the algorithm for allocating the network weights is discussed. Using these three ensemble prediction models, the real-time traffic volume at an intersection in Zhengzhou city is predicted, and the results are better than those of a single neural network forecasting method. The experiments show that neural network ensembles are well suited to traffic volume forecasting.

Keywords: weight, traffic volume prediction, neural network.

1. INTRODUCTION

In recent years, many researchers have conducted in-depth research on traffic flow and proposed a number of effective methods; among these, neural network technology is regarded as one of the better model-free methods and has been widely used (Zhang et al., 2013; Lelitha et al., 2014; Park, 2012). Although the literature (Hornik et al., 2015) has proved that a feed-forward network with a single nonlinear hidden layer can approximate functions of arbitrary complexity to arbitrary precision, configuring and training such a network is an NP-hard problem. In practical applications, because prior knowledge of the problem is usually lacking, a great deal of laborious and time-consuming experimentation is often needed to determine an appropriate network model, algorithm and parameter settings, and the outcome depends entirely on the user's experience (Kennedy et al., 2013; Kirby et al., 2014; Zhu et al., 2008; Zhu et al., 2015); this limits improvement of the network's generalization ability. Hansen and Salamon put forward a creative method, the neural network ensemble, which provides a simple and feasible solution to the above problems. Research results show that the neural network ensemble method is not only easy to use, but can also significantly improve the generalization ability of a learning system at a very small computational cost (Dougherty et al., 2012; Smith et al., 2013; Corinne, 2015). The technique has therefore been applied successfully in many fields.

In this paper, neural network ensemble technology is introduced into traffic flow prediction for the first time and is used to predict the real-time traffic flow at the Agricultural Road - Huayuan Road intersection in Zhengzhou city. Three different ensemble methods are used to predict the real-time traffic flow, and the experimental results show that the accuracy of the models and the prediction results are satisfactory. The neural network ensemble is feasible and effective for real-time traffic prediction, and its predictions are superior to those of a single neural network model.

2. NEURAL NETWORK ENSEMBLE METHOD

At present there is no uniform definition of the neural network ensemble. A widely accepted definition is the following: a neural network ensemble learns the same problem with a finite number of neural networks, and the ensemble's output for an input sample is decided jointly by the outputs of the constituent networks on that sample (Zhu et al., 2012). There are two key problems in constructing a neural network ensemble: one is how to generate the individual networks, and the other is how to combine the outputs of the multiple networks. This section first outlines how the well-known Boosting and Bagging algorithms address these two problems, then discusses a new effectiveness-based method for determining the weights of the individual networks, and finally puts forward a neural network ensemble method based on a divide-and-conquer strategy.

2.1 Individual network generation method

In generating the individual networks of an ensemble, the most classical and important techniques are the Boosting and Bagging methods. In the Boosting algorithm, the training set of each network is determined by the performance of the networks generated before it: examples that the existing networks handle poorly appear in the training set of the new network with high probability. In this way, the new network becomes able to handle well the examples that are very difficult for the existing networks (Hansen et al., 2010; Breiman, 2012; Silveira et al., 2013).

Bagging is similar to Boosting; its basis is repeated sampling with replacement. In this method, the training set of each neural network is randomly drawn from the original training set, and the size of each training set is usually equal to that of the original one. As a result, some samples of the original training set may appear several times in a new training set, while others may not appear at all. By repeatedly re-selecting the training set, the Bagging method increases the degree of difference among the networks in the ensemble, which improves the generalization ability (Ghasemi et al., 2010; Despotovic et al., 2012; Kobayakawa et al., 2009). The main differences between Bagging and Boosting are that in Bagging the selection of the training sets is random and the training sets are independent of one another, so the individual networks can be generated in parallel, whereas in Boosting the selection of the training sets is not independent, each training set depends on the learning results of the preceding networks, and the individual networks can only be generated sequentially.

The theoretical study of Krogh et al. showed that the larger the difference among the networks in an ensemble, the better the ensemble performs, and that networks which are very similar to one another may contribute nothing to improving the generalization ability of the ensemble (Yakubov, 2009; Bibik, 2007; Liu et al., 2016). When the generalization error of the member networks is kept constant, the error of the neural network ensemble can be reduced effectively by increasing the degree of difference among them. The literature (Wu et al., 2010) uses a genetic algorithm to select neural networks with large mutual differences to form the ensemble, which is a good way of selecting the individual networks.
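To make the difference between the two generation schemes concrete, the following sketch (Python with NumPy, not taken from the paper) draws Bagging training sets as independent bootstrap samples, and draws a Boosting-style training set with sampling probabilities biased toward the examples that the previously trained networks predict poorly. The relative-error-based sampling rule is an illustrative assumption, not the paper's exact selection procedure.

```python
import numpy as np

def bagging_training_sets(X, y, n_networks, rng=np.random.default_rng(0)):
    """Bagging: each network gets an independent bootstrap sample,
    the same size as the original set, drawn with replacement."""
    n = len(X)
    return [(X[idx], y[idx])
            for idx in (rng.integers(0, n, size=n) for _ in range(n_networks))]

def boosting_training_set(X, y, prev_predict, rng=np.random.default_rng(0)):
    """Boosting-style training set for the next network: samples that the
    previously generated networks predict badly are drawn with higher probability."""
    rel_err = np.abs((y - prev_predict(X)) / np.maximum(np.abs(y), 1e-9))
    prob = rel_err / rel_err.sum()        # bias sampling toward hard samples
    idx = rng.choice(len(X), size=len(X), replace=True, p=prob)
    return X[idx], y[idx]
```

Because the Bagging sets are mutually independent, the member networks can be trained in parallel; a Boosting set depends on the previously trained networks, so its members can only be produced one after another.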

2.2 Conclusion generation method

The ensemble output is usually generated by a simple average (equal weights) or a weighted average of the outputs of the individual networks. Bagging uses the simple average, while Boosting uses the weighted average; on whether weights should be used at all, views differ. One view is that the weighted average can achieve better generalization than the simple average; the other is that optimizing the weights leads to over-fitting, which reduces the generalization ability of the ensemble. If equal weighting is regarded as a special case of weighting, combining the results of the individual networks becomes the problem of how to determine the weights.

Conclusion generation is a form of combination forecasting, except that here the individual prediction methods being combined are all neural networks. Therefore, the series of methods and models for determining the weights in combination forecasting can be used here. The commonly used methods for solving the combined forecasting weights are linear programming, nonlinear dynamic programming and neural networks. In the literature (Chen, 2015), the concept of forecasting effectiveness is proposed to measure the accuracy of a forecasting method, which is reasonable. Taking the maximum effectiveness of the combined prediction as the optimization objective and solving a mathematical model for the weight coefficients is a relatively new approach, but the solution process is very complex; the literature (Wang, 2014) gives an approximate method for finding the optimal solution of this optimization model. In this paper, based on the concept of effectiveness, a more direct and simple method with a clear physical meaning is used to determine the weights: the greater the effectiveness of an individual network, that is, the higher its prediction accuracy, the larger the weight assigned to it. Considering the normalization constraint on the weight coefficients, the normalized effectiveness of each network can be used as its weighting coefficient.

Assume there are m neural networks and let $\hat{y}_{it}$ ($i = 1, 2, \ldots, m$; $t = 1, 2, \ldots, N$) denote the prediction of the i-th neural network for the true value $y_t$. Let $A_{it}$ be the prediction accuracy of the i-th neural network at time t:

$$A_{it} = 1 - \left| \frac{y_t - \hat{y}_{it}}{y_t} \right| \qquad (1)$$

The mean $E_i$ and mean square deviation $\sigma_i$ of the accuracy sequence are

$$E_i = \frac{1}{N}\sum_{t=1}^{N} A_{it}, \qquad \sigma_i = \left[ \frac{1}{N}\sum_{t=1}^{N} \left( A_{it} - E_i \right)^2 \right]^{1/2} \qquad (2)$$

The effectiveness of the i-th neural network is then defined as

$$S_i = E_i \left( 1 - \sigma_i \right) \qquad (3)$$

Let $k_i$ be the weight of the i-th neural network; $S_i$ is normalized to obtain the weighting coefficient

$$k_i = \frac{S_i}{\sum_{j=1}^{m} S_j}, \qquad i = 1, 2, \ldots, m \qquad (4)$$
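Equations (1)-(4) translate directly into code. The sketch below (NumPy assumed, array names illustrative) returns the normalized effectiveness weights for m member networks given the actual series and each network's predictions.

```python
import numpy as np

def effectiveness_weights(y_true, y_pred):
    """Normalized effectiveness weights k_i of equations (1)-(4).

    y_true : shape (N,)   actual values y_t
    y_pred : shape (m, N) predictions of the m member networks
    """
    A = 1.0 - np.abs((y_true - y_pred) / y_true)   # accuracy A_it, eq. (1)
    E = A.mean(axis=1)                             # mean accuracy E_i, eq. (2)
    sigma = A.std(axis=1)                          # mean square deviation, eq. (2)
    S = E * (1.0 - sigma)                          # effectiveness S_i, eq. (3)
    return S / S.sum()                             # weights k_i, eq. (4)

# Combining the member outputs with the learned weights:
#   k = effectiveness_weights(y_true, y_pred)
#   ensemble_prediction = k @ y_pred
```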

2.3 Neural network ensemble based on a divide-and-conquer strategy

Building on the classical Boosting and Bagging methods, the author proposes a neural network ensemble method based on divide and conquer. The sample space is divided according to the temporal characteristics of traffic flow, and each subset of the training set is used to train one individual network, so that each sub-network handles its own part of the input space well; the results are then combined by a weighted ensemble, with the weights calculated from the fuzzy membership of the prediction time. To simplify the description, the original training set is divided into two groups, daytime and night, and two neural networks are trained. Daytime and night are two fuzzy sets, whose membership degree curves are shown in Figures 1 and 2.

Figure 1. Membership degree of the daytime fuzzy set.
Figure 2. Membership degree of the night fuzzy set.

Assume the prediction point is t and its daytime and night membership degrees are $f_1(t)$ and $f_2(t)$; the memberships are normalized to give the weights

$$k_i = \frac{f_i(t)}{f_1(t) + f_2(t)}, \qquad i = 1, 2 \qquad (5)$$

With the samples grouped in this way, each network only has to learn part of the problem, which reduces the network scale, shortens the learning time, and gives fast convergence and good learning ability. The experiments below show the effectiveness of this approach.
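The combination step of equation (5) can be sketched as follows. The piecewise-linear daytime membership function used here is only a stand-in for the curves of Figures 1 and 2, which are not reproduced; the transition windows are assumptions made for illustration.

```python
def daytime_membership(hour):
    """Illustrative daytime membership f1(t); the actual curve is given in
    Figure 1 of the paper. The 7-9 h and 17-19 h transition windows are
    assumptions for this sketch."""
    if 9.0 <= hour <= 17.0:
        return 1.0
    if 7.0 < hour < 9.0:
        return (hour - 7.0) / 2.0        # night -> day transition
    if 17.0 < hour < 19.0:
        return (19.0 - hour) / 2.0       # day -> night transition
    return 0.0

def divide_and_conquer_predict(hour, x, nn_day, nn_night):
    """Weighted combination of the day and night sub-networks, equation (5)."""
    f1 = daytime_membership(hour)
    f2 = 1.0 - f1                         # night membership, complementary here
    k1, k2 = f1 / (f1 + f2), f2 / (f1 + f2)
    return k1 * nn_day(x) + k2 * nn_night(x)
```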

3. EXAMPLE OF TRAFFIC VOLUME PREDICTION

As an application of neural network ensembles, three schemes, Boosting, Bagging and the divide-and-conquer (partitioned) ensemble, are used to predict the real-time traffic flow at the Agricultural Road - Huayuan Road intersection in Zhengzhou city. The 24 h of traffic volume recorded at the intersection on November 27, 2014 (96 data points in total, with a sampling interval of 15 min) are used as the training samples. Let S = {x_i, i = 1, 2, ..., N} be the initial training set, with |S| = N and N = 96. The traffic volume data of November 28th are used as the independent test sample.

3.1 Neural network ensemble based on Boosting

Boosting is used to train two networks. All training samples in S are used to train the first network NN1, a 4x4x1 BP network trained with the back-propagation learning algorithm; the maximum number of learning iterations is 25000, the learning rate is lr = 0.01, and the learning goal is a sum-squared error of err_goal = 0.1. The initial values of the network connection weights are random numbers in [-1, 1].

After NN1 has been trained, let S2 = {x | the fitting error of x is greater than 0.10}, and use all samples in S2 as the training set of the second network NN2. A lower limit M = 0.75N is set for the size of S2: when the number of NN2 training samples is less than M, samples whose fitting error is less than 0.10 are randomly selected from the sample set and added to S2 until it contains M samples. The structure and parameters of NN2 are set the same as those of NN1.

The outputs of NN1 and NN2 are combined by a weighted average, with the weights calculated from the networks' effectiveness. The initial weights are k1 = k2 = 0.5, and the weighted average of the outputs of NN1 and NN2 is taken as the prediction. After each prediction, the effectiveness of the two networks is recalculated and new values of k1 and k2 are obtained for combining the networks' next outputs. The traffic volume forecasts for the period 7:00-9:30 on November 28th are shown in Table 1, together with the outputs of NN1 and NN2 for comparison; the relative errors are given in %.

3.2 Neural network ensemble based on Bagging

Two networks are trained using the Bagging method. Training sets S1 and S2 are randomly drawn from the initial sample set S, with |S1| = 0.75N and |S2| = 0.75N; S1 is used to train NN1 and S2 to train NN2. NN1 is a 4x4x1 feed-forward network and NN2 is a 4x6x1 feed-forward network, and the other parameter settings are the same as above. The outputs are combined by a simple average. The traffic volume predictions and relative errors for the period 2:30-5:00 on November 28th are given in Table 2.
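Before turning to the results, the Boosting scheme of Section 3.1 can be sketched as follows. This is a hypothetical reconstruction rather than the authors' code: scikit-learn's MLPRegressor stands in for the 4x4x1 BP network (the sum-squared-error goal of 0.1 has no direct equivalent there, so a tolerance is used instead), while the 0.10 fitting-error threshold and the lower limit M = 0.75N follow the description above.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def make_bp_net():
    # Stand-in for the 4x4x1 BP network: 4 inputs, one hidden layer of 4 units.
    return MLPRegressor(hidden_layer_sizes=(4,), max_iter=25000,
                        learning_rate_init=0.01, tol=1e-6, random_state=0)

def train_boosting_pair(X, y, threshold=0.10):
    """Train NN1 on all samples, then NN2 on the samples that NN1 fits poorly,
    padded with randomly chosen well-fitted samples up to M = 0.75 * N."""
    N = len(X)
    nn1 = make_bp_net().fit(X, y)
    rel_err = np.abs((y - nn1.predict(X)) / y)

    hard = np.where(rel_err > threshold)[0]       # fitting error greater than 0.10
    easy = np.where(rel_err <= threshold)[0]
    M = int(0.75 * N)                             # lower limit on the size of S2
    if len(hard) < M:
        pad = rng.choice(easy, size=M - len(hard), replace=False)
        s2 = np.concatenate([hard, pad])
    else:
        s2 = hard
    nn2 = make_bp_net().fit(X[s2], y[s2])
    return nn1, nn2
```

At prediction time the two outputs are combined with the effectiveness weights of Section 2.2 (see the effectiveness_weights sketch above), starting from k1 = k2 = 0.5 and recalculating the weights after each new observation.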

Table 1. Boosting neural network ensemble prediction results (prediction, with relative error in % in parentheses)

Time period | Actual | NN1             | NN2             | Boosting
------------|--------|-----------------|-----------------|----------------
7:00~7:15   | 1444   | 1293.8 (10.4)   | 1791.5 (-24.06) | 1542.6 (-6.83)
7:15~7:30   | 1736   | 1713.3 (1.31)   | 1523.1 (12.26)  | 1626.1 (6.33)
7:30~7:45   | 1524   | 1624.9 (-6.62)  | 1925 (-26.32)   | 1764.5 (-15.78)
7:45~8:00   | 1684   | 2153 (-27.86)   | 1583.8 (5.95)   | 1892.8 (-12.4)
8:00~8:15   | 2148   | 1886.9 (12.15)  | 1881.5 (12.41)  | 1884.3 (12.28)
8:15~8:30   | 1528   | 1609.5 (-5.33)  | 1921.1 (-25.76) | 1761.2 (-15.26)
8:30~8:45   | 2112   | 2191.9 (-3.79)  | 1796.9 (14.92)  | 2002.7 (5.18)
8:45~9:00   | 1776   | 1600.6 (9.88)   | 1426.8 (19.66)  | 1517.6 (14.55)
9:00~9:15   | 1948   | 1893.4 (2.8)    | 1819.3 (6.6)    | 1858.1 (4.62)
9:15~9:30   | 1634   | 2015 (-23.32)   | 1936.3 (-18.5)  | 1977.4 (-21.01)
Average relative error /%  |  | 10.35 | 16.64 | 11.42
Root mean square error     |  | 71.67 | 94.43 | 67.22
Maximum relative error /%  |  | 27.85 | 26.32 | 21.02
Minimum relative error /%  |  | 1.31  | 5.95  | 4.62

3.3 Neural network ensemble based on divide and conquer

Following the sample-division idea introduced above, one network is trained on the night traffic volumes and one on the daytime traffic volumes. Both networks are 4x1x4 feed-forward networks, and the other parameters are set as above. The outputs of NN1 and NN2 are combined by a weighted average, with the weights calculated from the fuzzy memberships of the prediction point as described in Section 2.3. The traffic volume forecasts and relative errors for the period 4:45-7:15 on November 28th are shown in Table 3.

Table 2. Bagging neural network ensemble prediction results (prediction, with relative error in % in parentheses)

Time period | Actual | NN1             | NN2             | Bagging
------------|--------|-----------------|-----------------|----------------
2:30~2:45   | 542    | 764.58 (-41.07) | 617.18 (-13.87) | 690.88 (-27.47)
2:45~3:00   | 664    | 771.42 (-16.18) | 562.75 (15.25)  | 667.08 (-0.46)
3:00~3:15   | 656    | 710.10 (-8.25)  | 514.26 (21.61)  | 612.18 (6.68)
3:15~3:30   | 628    | 751.69 (-19.70) | 645.08 (-2.72)  | 698.39 (-11.21)
3:30~3:45   | 584    | 771.05 (-32.03) | 552.72 (5.36)   | 661.39 (-13.34)
3:45~4:00   | 754    | 751.97 (0.27)   | 530.20 (29.68)  | 641.08 (14.98)
4:00~4:15   | 582    | 754.38 (-29.62) | 645.13 (-10.85) | 699.75 (-20.23)
4:15~4:30   | 664    | 777.90 (-17.15) | 582.99 (12.50)  | 679.44 (-2.33)
4:30~4:45   | 568    | 741.5 (-30.55)  | 524.96 (7.58)   | 633.23 (-11.48)
4:45~5:00   | 766    | 757.04 (1.17)   | 571.30 (25.42)  | 664.17 (13.29)
Average relative error /%  |  | 19.60 | 14.48 | 12.15
Root mean square error     |  | 43.26 | 37.15 | 27.68
Maximum relative error /%  |  | 41.07 | 29.38 | 27.47
Minimum relative error /%  |  | 0.27  | 2.72  | 0.47
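The summary rows at the bottom of Tables 1-3 can be reproduced from an actual/predicted series as in the sketch below (NumPy assumed). The relative-error statistics match the definitions used in the tables; the exact definition behind the tables' root-mean-square figure is not stated in the text, so the raw-volume RMSE computed here is only one plausible reading.

```python
import numpy as np

def error_statistics(y_true, y_pred):
    """Average, maximum and minimum relative error (in %) and RMSE."""
    rel_err = 100.0 * np.abs((y_true - y_pred) / y_true)
    return {
        "average relative error %": rel_err.mean(),
        "maximum relative error %": rel_err.max(),
        "minimum relative error %": rel_err.min(),
        # The paper does not define its root-mean-square row; a raw-volume
        # RMSE is assumed here.
        "root mean square error": float(np.sqrt(np.mean((y_true - y_pred) ** 2))),
    }
```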

Table 3. Neural network ensemble prediction results based on divide and conquer (prediction, with relative error in % in parentheses)

Time period | Actual | Night NN1       | Day NN2         | Divide and conquer | Weight k1 | Weight k2
------------|--------|-----------------|-----------------|--------------------|-----------|----------
7:00~7:15   | 1444   | 1293.8 (10.4)   | 1791.5 (-24.06) | 1542.6 (-6.83)     | 1         | 0
7:15~7:30   | 1736   | 1713.3 (1.31)   | 1523.1 (12.26)  | 1626.1 (6.33)      | 0.88      | 0.13
7:30~7:45   | 1524   | 1624.9 (-6.62)  | 1925 (-26.32)   | 1764.5 (-15.78)    | 0.75      | 0.25
7:45~8:00   | 1684   | 2153 (-27.86)   | 1583.8 (5.95)   | 1892.8 (-12.4)     | 0.63      | 0.38
8:00~8:15   | 2148   | 1886.9 (12.15)  | 1881.5 (12.41)  | 1884.3 (12.28)     | 0.5       | 0.5
8:15~8:30   | 1528   | 1609.5 (-5.33)  | 1921.1 (-25.76) | 1761.2 (-15.26)    | 0.38      | 0.63
8:30~8:45   | 2112   | 2191.9 (-3.79)  | 1796.9 (14.92)  | 2002.7 (5.18)      | 0.25      | 0.75
8:45~9:00   | 1776   | 1600.6 (9.88)   | 1426.8 (19.66)  | 1517.6 (14.55)     | 0.13      | 0.88
9:00~9:15   | 1948   | 1893.4 (2.8)    | 1819.3 (6.6)    | 1858.1 (4.62)      | 0         | 1
9:15~9:30   | 1634   | 2015 (-23.32)   | 1936.3 (-18.5)  | 1977.4 (-21.01)    | 0         | 1
Average relative error /%  |  | 10.35 | 16.64 | 11.42 |  |
Root mean square error     |  | 71.67 | 94.43 | 67.22 |  |
Maximum relative error /%  |  | 27.85 | 26.32 | 21.02 |  |
Minimum relative error /%  |  | 1.31  | 5.95  | 4.62  |  |

4. CONCLUSION

The analysis of the above examples shows that applying neural network ensemble technology achieves better results than a single neural network. In practical applications it cannot be known in advance which single network has the smallest generalization error, which is exactly why the neural network ensemble is of practical value. The effect of the ensemble method is remarkable, and even ordinary engineers who lack experience with neural computing can operate it properly; the technique is therefore a very effective method of engineering neural computation.

The essence of the neural network ensemble method is combination forecasting, and it has been proved that the precision of a combination forecast is certainly not lower than that of the forecasting models taking part in the combination. The literature (Wu et al., 2010) has shown that the weighted generalization error of a neural network ensemble is not greater than the average generalization error of its member networks; that is, in any case the ensemble's performance can reach or exceed the average performance of the networks that compose it. The generalization error can be reduced further by increasing the differences among the networks. Therefore, training the networks as independently as possible, and using different training sets, network structures and learning algorithms to create differences between them, can improve the prediction accuracy of the neural network ensemble. In the applications here, each ensemble used two networks with similar structures, but in the ensemble based on the divide-and-conquer strategy the samples could also be divided according to peak and off-peak traffic flow to generate the individual networks.

In addition, the author compared the results of simple-average and weighted-average combination within the same scheme; optimizing the weights can sometimes lead to over-fitting. For the specific traffic flow prediction problem, how to determine the structure and number of the sub-networks, whether to use weights, how to assign the weights, and so on, still need further study.

5. ACKNOWLEDGMENTS

Thanks to the Zhengzhou city road management department for providing the relevant data, and thanks to the colleagues in the project team for processing a large amount of data.

6. REFERENCES

Bibik Y.V. (2007). The second Hamiltonian structure for a special case of the Lotka-Volterra equations. Computational Mathematics and Mathematical Physics, 47(8): 1285-1294.

Breiman L. (2012). Bagging predictors. Machine Learning, 24(2): 123-140.

Chen H.Y. (2015). Research on the combination forecasting model based on forecasting effectiveness. Forecast, 20(3): 72-73.

Corinne Ledoux. (2015). An urban traffic flow model integrating neural networks. Transportation Research Part C: Emerging Technologies, 13(1): 21-31.

Despotovic V., Goertz N., Peric Z. (2012). Nonlinear long-term prediction of speech based on truncated Volterra series. IEEE Transactions on Audio, Speech, and Language Processing, 20(3): 1069-1073.

Dougherty M.S., Cobbett M.R. (2012). Short-term inter-urban traffic forecasts using neural networks. International Journal of Forecasting, 13(1): 21-31.

Ghasemi M., Tavassoli K.M., Babolian E. (2010). Numerical solutions of the nonlinear Volterra-Fredholm integral equations by using homotopy perturbation method. Applied Mathematics and Computation, 188(1): 446-449.

Hansen L.K., Salamon P. (2010). Neural network ensembles. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(10): 993-1001.

Hornik K.M., Stinchcombe M., White H. (2015). Multilayer feedforward networks are universal approximators. Neural Networks, 2(2): 259-366.

Kennedy R.E. (2013). Particle swarm optimization. Proc. IEEE International Conference on Neural Networks (Perth, Australia). IEEE Service Center, Piscataway, NJ, 5: 1942-1948.

Kirby H., Watson S., Dougherty M. (2014). Should we use neural networks or statistical models for short-term motorway traffic forecasting? International Journal of Forecasting, Elsevier, 13: 43-50.

Kobayakawa S., Yokoi H. (2009). Evaluation of prediction capability of non-recursion type 2nd-order Volterra neuron network for electrocardiogram. In: Proceedings of the 15th International Conference on Neuro-Information Processing of the Asia Pacific Neural Network Assembly, Lecture Notes in Computer Science. Berlin, Heidelberg: Springer, 5507: 679-686.

Lelitha V., Laurence R.R. (2014). A comparison of the performance of artificial neural networks and support vector machines for the prediction of traffic speed. 2004 IEEE Intelligent Vehicles Symposium. Parma, Italy: Institute of Electrical and Electronics Engineers, 194-199.

Liu L.L., Li K.K., Liu F. (2016). Dynamic simulation modeling of inking system based on elastohydrodynamic lubrication. International Journal of Heat and Technology, 34(3): 124-128.

Park B. (2012). Hybrid neural-fuzzy application in short-term freeway traffic volume forecasting. Transportation Research Record 1802, 190-196.

Silveira D.D., Gilabert P.L., Dos-Santos A.B., Gadringer M. (2013). Analysis of variations of Volterra series models for RF power amplifiers. IEEE Microwave and Wireless Components Letters, 23(8): 442-444.

Smith B.L., Demetsky M.J. (2013). Short-term traffic flow prediction: neural network approach. Transportation Research Record 1453, 98-104.

Wang M.T. (2014). Research on the method of determining the optimal solution of the combined forecasting weight coefficient. System Engineering Theory and Practice, (3): 104-109.

Wu J.X., Zhou Z.H., Shen X.H. (2010). A selective neural network ensemble construction method. Computer Research and Development, 37(9): 1039-1044.

Yakubov Y.A. (2009). On nonlinear Volterra equations of convolution type. Differential Equations, 45(9): 1326-1336.

Yin H.B., Xu J.M., Huang H. (2013). Research on intersection traffic volume prediction method based on fuzzy neural network. China Journal of Highway and Transport, 13(3): 78-81.

Zhu Z., Yang Z.S. (2015). Prediction model of real-time traffic flow based on artificial neural network. China Journal of Highway and Transport, 11(4): 89-92.

Zhou Z.H., Chen S.F. (2012). Neural network ensembles. Chinese Journal of Computers, 25(1): 1-8.