Computing Cloud Cover Fraction in Satellite Images using Deep Extreme Learning Machine


Li-guo WENG, Wei-bin KONG, Min XIA
College of Information and Control, Nanjing University of Information Science & Technology, Nanjing, Jiangsu 210044, China

Abstract - Cloud cover fraction computation is an essential application of meteorological satellite images. Current research cannot take full advantage of the characteristics and optical parameters of satellite cloud images, so cloud detection and cloud fraction estimation are ineffective. To solve this problem, this paper uses a deep extreme learning machine to detect and classify the clouds in satellite images, dividing them into thick cloud, thin cloud and clear sky, and then uses the spatial correlation method to calculate the total cloud fraction. The results show that the deep extreme learning machine can extract features of cloud images effectively and can distinguish the boundary between thick and thin cloud well. The cloud classification and cloud fraction accuracy of the deep extreme learning machine are better than those of the traditional threshold method, the extreme learning machine and the convolutional neural network.

Keywords - cloud fraction; deep extreme learning machine; cloud detection; spatial correlation; satellite imagery.

I. INTRODUCTION

Cloud covers more than 50% of the surface of the earth, and it is one of the important meteorological and climatic factors. To obtain an accurate distribution of cloudiness, we should first detect and classify the clouds in a satellite image and then calculate the cloud fraction on the basis of that classification. Current international satellite cloud image calculation methods include ISCCP [1], CLAVR [2] and APOLLO [3,4], among others. The ISCCP algorithm assumes that the observed radiance value comes from only one of two sources, cloud or clear sky: comparing a pixel's observed radiance with the clear-sky radiance value, if the difference between the two is greater than the maximum amplitude of the clear-sky radiation's own variation, the pixel is determined to be cloud.
The CLAVR algorithm uses 2x2 pixel blocks as its detection unit: when none of the four pixels passes the cloud tests, the block is judged cloudless; when all four pass, it is judged cloudy; otherwise it is considered mixed. For a mixed block in which clear sky and cloud co-exist, if the block meets additional ice/snow and clear-sky decision conditions, it is re-classified as clear sky. The APOLLO algorithm uses a logical AND form: only when a pixel meets all threshold detection conditions is it labelled clear sky; otherwise, the pixel is cloud. In addition, there are other methods, such as MODIS [5] and NIR/VIS [6]. The cloud fraction computing methods described above can be divided into two categories. The first computes the cloud fraction as the ratio of cloud pixels to total pixels in the region [7,8]; the second computes an equivalent cloud fraction from the ratio of the pixel's radiance to a reference cloud reflectance [9,10]. The first method is convenient, but since it cannot analyse sub-pixel cloud, it tends to overestimate the cloud fraction; the second method solves the sub-pixel cloud problem to some extent, but in some cases it is still not applicable, for example when the region contains multilayer cloud or the surface type changes dramatically. Cloud detection is the foundation of cloud fraction computing; therefore, to improve the accuracy of cloud fraction computing, we must first obtain better cloud detection results. Current cloud detection technologies can be divided into two categories: threshold methods and cluster analysis methods. Threshold methods [11,12] mainly adopt infrared temperature thresholds, visible thresholds, etc., but because satellite images are very complex, detection with a fixed global threshold produces relatively large errors. Cluster analysis [13,14] mainly includes histogram clustering, adaptive threshold clustering and dynamic threshold clustering, but a cloud category usually has many features, and research at this stage mainly focuses on a single characteristic of cloud, so it does not effectively extract the useful information in cloud images. Therefore, the performance of the above cloud detection methods is not very good.
In addition, cloud detection methods based on machine learning have been widely used, including support vector machines [15], K-nearest neighbours [16], fuzzy strategies [17] and neural networks [18]. The detection accuracy of neural networks is better than that of the other methods, but there are still shortcomings: the main problem is that they do not make full use of cloud features and cannot extract enough effective information. In recent years, deep learning [19] has developed rapidly, including the deep extreme learning machine developed from the traditional extreme learning machine [20], which has shown strong adaptability, robustness and high learning speed in many application areas. The deep extreme learning machine, as a specially designed multilayer perceptron with multiple hidden layers, can fully extract effective features for classifying clouds. After analysing the problems of existing cloud fraction computing methods, this paper uses an optimized deep extreme learning machine to detect clouds in satellite images. On the basis of the cloud detection result, we use the spatial correlation method to calculate the total cloud fraction. Experimental results show that, compared with the traditional threshold method, the traditional extreme learning machine and the convolutional neural network, the D-ELM method in this paper performs better and is more suitable for cloud fraction computing research on satellite images.

DOI 10.5013/IJSSST.a.17.48.37 37.1 ISSN: 1473-804x online, 1473-8031 print

II. CLOUD DETECTION MODEL AND OPTIMIZATION BASED ON DEEP EXTREME LEARNING MACHINE

This paper uses HJ-1A/B satellite images as experimental data. HJ-1A/B are small satellites used mainly for environment and disaster monitoring and forecasting. Of the two, HJ-1A is equipped with CCD cameras and a hyperspectral imager, while HJ-1B carries a CCD camera and an infrared camera. The samples used in this paper all come from a collection made by atmospheric experts. The training samples comprise thick cloud, thin cloud and clear sky, with 300 samples per category; the testing samples comprise the same three categories, with 100 samples per category. Each sample is a 28x28-pixel image. We perform cloud detection on whole satellite images with the trained deep extreme learning machine; since its inputs must all be the same size, every satellite image is divided into numerous 28x28-pixel patches, which serve as the inputs of the deep extreme learning machine.

The extreme learning machine (ELM) algorithm was proposed by Huang for training single-hidden-layer feedforward neural networks. Its most notable feature is that, compared with traditional learning algorithms, it guarantees learning accuracy while being much faster: for a single-hidden-layer network, ELM randomly initializes the input weights and biases and then solves directly for the corresponding output weights. Assume there are N training samples (X_j, t_j), where X_j = [x_{j1}, x_{j2}, ..., x_{jn}]^T \in R^n and t_j = [t_{j1}, t_{j2}, ..., t_{jm}]^T \in R^m. A single-hidden-layer neural network with L hidden nodes can be expressed as

\sum_{i=1}^{L} \beta_i g(W_i \cdot X_j + b_i) = o_j,  j = 1, ..., N,

where g(x) is the activation function, W_i = [w_{i1}, w_{i2}, ..., w_{in}]^T is the input weight vector, \beta_i is the output weight vector, b_i is the bias of the i-th hidden unit, and W_i \cdot X_j denotes the inner product of W_i and X_j. The learning goal of the single-hidden-layer network is to make the output error as small as possible, i.e. \sum_{j=1}^{N} \| o_j - t_j \| = 0; that is, there exist \beta_i, W_i and b_i such that

\sum_{i=1}^{L} \beta_i g(W_i \cdot X_j + b_i) = t_j,  j = 1, ..., N    (1)

In matrix form this can be written as

H \beta = T    (2)

where H is the hidden-layer output matrix, \beta is the output weight matrix and T is the expected output:

H(W_1, ..., W_L, b_1, ..., b_L, X_1, ..., X_N) =
\begin{bmatrix} g(W_1 \cdot X_1 + b_1) & \cdots & g(W_L \cdot X_1 + b_L) \\ \vdots & & \vdots \\ g(W_1 \cdot X_N + b_1) & \cdots & g(W_L \cdot X_N + b_L) \end{bmatrix}_{N \times L}    (3)

\beta = [\beta_1^T, ..., \beta_L^T]^T_{L \times m},   T = [t_1^T, ..., t_N^T]^T_{N \times m}    (4)

To train the single-hidden-layer network, we wish to find \hat{W}, \hat{b} and \hat{\beta} such that

\| H(\hat{W}, \hat{b}) \hat{\beta} - T \| = \min_{W, b, \beta} \| H(W, b) \beta - T \|    (5)

which is equivalent to minimizing the loss

E = \sum_{j=1}^{N} \left( \sum_{i=1}^{L} \beta_i g(W_i \cdot X_j + b_i) - t_j \right)^2    (6)

Traditional gradient-descent-based algorithms can be used to solve this problem, but gradient-based learning generally has to adjust all parameters in an iterative process. In the ELM algorithm, once the input weights and biases are set at random, the hidden-layer output matrix H is uniquely determined, and training the single-hidden-layer network is transformed into solving the linear system H\beta = T. The output weights can be determined by

\hat{\beta} = H^{\dagger} T    (7)

where H^{\dagger} is the Moore-Penrose generalized inverse of the matrix H; it can be proved that the norm of the obtained solution is smallest and that the solution is unique.

Before unsupervised feature learning, the raw input data should be transformed into an ELM random feature space, which can help to exploit hidden information among the training samples. Then N-layer unsupervised learning is performed to eventually obtain the high-level sparse features. Mathematically, the output of each hidden layer can be represented as

H_i = g(H_{i-1} \beta_i)    (8)

where H_i is the output of the i-th layer, H_{i-1} is the output of the (i-1)-th layer, g is the activation function of the hidden layer, and \beta_i represents the output weights. Each layer of the D-ELM is a separate module and functions as a separate feature extractor; as the number of layers increases, the resulting features become more compact. Once the features of the previous hidden layer are extracted, the weights of the current hidden layer are fixed and need not be fine-tuned. From Fig. 2 we can see that, after unsupervised hierarchical training in the D-ELM, the outputs H_K of the K-th layer are viewed as the high-level features extracted from the input data. When used for classification, they are randomly perturbed and then used as the inputs of the supervised ELM-based regression to obtain the final result of the whole network; the reason for perturbing H_K is that random projection of the inputs is required to maintain the universal approximation capability of ELM.
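As a concrete illustration of the ELM training procedure above (random input weights and biases, hidden-layer matrix H, output weights from the Moore-Penrose pseudo-inverse), a minimal NumPy sketch follows; the sigmoid activation, layer width and random data are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def elm_train(X, T, n_hidden, rng=None):
    """Train a single-hidden-layer ELM: random (W, b), then beta = H^+ T."""
    rng = rng or np.random.default_rng(0)
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = sigmoid(X @ W + b)                           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                     # Moore-Penrose solution of H beta = T
    return W, b, beta

def elm_predict(X, W, b, beta):
    return sigmoid(X @ W + b) @ beta

# Tiny usage example with random data (3 classes, one-hot targets)
rng = np.random.default_rng(1)
X = rng.standard_normal((90, 16))
T = np.eye(3)[rng.integers(0, 3, 90)]
W, b, beta = elm_train(X, T, n_hidden=32)
pred = elm_predict(X, W, b, beta)
print(pred.shape)  # (90, 3)
```

Note that the only learned parameters are the output weights beta; this is what makes ELM training a single linear solve rather than an iterative optimization.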
Fig. 1. Network structure of the extreme learning machine.

The deep extreme learning machine (D-ELM) is built as a multi-layer network structure, mainly divided into two separate phases: unsupervised hierarchical feature representation and supervised feature classification. As shown in Fig. 2, the first phase is an ELM-based autoencoder for extracting multi-layer sparse features of the input data, while the second phase uses the original ELM to make the final classification. In theory, using random mapping features as the inputs of the output-weight layer, the hierarchical network can approximate or classify any input data. This part mainly introduces the autoencoder. The autoencoder, as a kind of feature extractor, is used in multi-layer network learning frameworks; it uses the encoded output to restore the original input by minimizing the reconstruction error. Mathematically, the autoencoder maps the input data x, through the specific mapping y = h(x) = g(Ax + b), to a latent representation y parameterized by {A, b}.

Here g is the activation function, A is a d' x d weight matrix, and b is the bias vector. The resulting latent representation y is mapped back to a reconstructed vector z in the input space by z = h'(y) = g(A'y + b'), parameterized by {A', b'}. By using a randomly mapped output as the latent representation y, one can easily build the autoencoder based on ELM: the reconstruction of x can be regarded as an ELM learning process, in which the output mapping is obtained by solving a regularized least-mean-square optimization.

We first optimize the number of layers of the deep extreme learning machine. If the network has too few layers, it may not extract image features effectively, and the feature information may be expressed too redundantly. If it has too many layers, feature extraction is relatively more effective, but each layer loses part of the effective information; as this loss accumulates, the network can no longer extract the relatively useful feature information. The structural optimization of deep neural networks has always been an important and active issue in the field of machine learning. Currently, academia and industry optimize deep networks mainly based on experience, obtaining the optimal structure and parameters from the experience of practitioners and a large number of simulation experiments. Through a large number of experiments, we selected a three-layer network structure: the first two layers perform unsupervised feature extraction, and the third layer performs supervised feature classification. After determining the number of layers, we choose the number of hidden nodes in each layer of the deep extreme learning machine. If the number of nodes is too small, fewer feature maps are extracted, the obtained feature information is insufficient, and the classification results may suffer; if the number of nodes is too large, more feature maps are extracted, but some of the extracted information may not be the needed characteristics, and the excess redundant data may also harm the classification result.
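The ELM autoencoder and the layer stacking described above can be sketched as follows; the ridge parameter lam stands in for the regularized least-squares step, and, like the layer widths, it is an assumption of this sketch rather than the paper's setting:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def elm_ae_layer(H_prev, n_hidden, lam=1e-3, rng=None):
    """One ELM autoencoder layer: random projection to a latent y, then
    regularized least-squares reconstruction of the input; returns the
    learned output weights beta (shape: n_hidden x input_dim)."""
    if rng is None:
        rng = np.random.default_rng(0)
    d = H_prev.shape[1]
    A = rng.standard_normal((d, n_hidden))       # random encoding weights
    b = rng.standard_normal(n_hidden)            # random biases
    Y = sigmoid(H_prev @ A + b)                  # random latent representation y
    # Solve min ||Y beta - H_prev||^2 + lam ||beta||^2 (ridge regression)
    beta = np.linalg.solve(Y.T @ Y + lam * np.eye(n_hidden), Y.T @ H_prev)
    return beta

def delm_features(X, widths, rng=None):
    """Stack ELM-AE layers; each new layer output is H_i = g(H_{i-1} beta_i^T)."""
    if rng is None:
        rng = np.random.default_rng(0)
    H = X
    for n_hidden in widths:
        beta = elm_ae_layer(H, n_hidden, rng=rng)
        H = sigmoid(H @ beta.T)                  # features of the next layer
    return H

X = np.random.default_rng(2).standard_normal((60, 64))
feats = delm_features(X, widths=[32, 32])
print(feats.shape)  # (60, 32)
```

Using the transposed reconstruction weights beta as the encoder of the next layer is the usual ELM-autoencoder stacking trick; no layer is fine-tuned after its weights are solved, which matches the fixed-weight property described above.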
From a large number of experiments we obtained Table I, using 900 training samples and 300 testing samples. Analysis of Table I shows that the number of nodes in each hidden layer has a great effect on classification accuracy. When the number of nodes is relatively small, relatively few feature maps are extracted, causing under-fitting, and the classification accuracy is relatively low. When there are relatively many nodes, more feature maps are extracted, but some of the information may not be the required feature data and becomes redundant, causing over-fitting, so the difference between training accuracy and testing accuracy grows large. Through the above analysis, we selected 200 hidden nodes in the first layer of the deep extreme learning machine, 200 hidden nodes in the second layer, and 500 hidden nodes in the third layer.

With the above optimization, as shown in Fig. 3, the steps of feature extraction and cloud classification with the deep extreme learning machine are as follows:
1) Divide the satellite image into many 28x28 patches, take these 28x28 images as the input of the deep extreme learning machine, and initialize the input weights randomly.
2) Use an extreme learning machine autoencoder with 200 hidden nodes to encode the input image.
3) Use a second extreme learning machine autoencoder with 200 hidden nodes to encode the output of step 2.
4) Feed the feature vector obtained in step 3 into an original extreme learning machine with 500 hidden nodes, and take its output as the input of the classifier in the final layer. Finally, the cloud classification is obtained from the softmax probabilities.
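The four steps above can be outlined in code as follows; `encode1`, `encode2` and `classify` are hypothetical stand-ins for the two trained ELM autoencoders and the final 500-node ELM, and the toy 84x84 image and identity encoders are purely illustrative:

```python
import numpy as np

def split_into_patches(image, patch=28):
    """Step 1: tile the satellite image into non-overlapping patch x patch blocks."""
    h, w = image.shape[:2]
    return [image[i:i + patch, j:j + patch]
            for i in range(0, h - patch + 1, patch)
            for j in range(0, w - patch + 1, patch)]

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def classify_patches(image, encode1, encode2, classify):
    """Steps 2-4: encode each flattened patch twice, then score it; the
    softmax row gives the class probabilities for every patch."""
    patches = np.stack([p.ravel() for p in split_into_patches(image)])
    feats = encode2(encode1(patches))       # two ELM-AE encodings
    return softmax(classify(feats))         # per-patch class probabilities

# Usage with stand-in identity encoders on a toy 84x84 "image"
img = np.random.default_rng(3).random((84, 84))
probs = classify_patches(img, lambda x: x, lambda x: x,
                         lambda x: x[:, :3])  # dummy 3-class scores
print(probs.shape)  # (9, 3)
```

With real trained components, each softmax row would give the thick cloud, thin cloud and clear sky probabilities for one 28x28 patch.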

Fig. 2. Network structure of the deep extreme learning machine.
Fig. 3. Schematic diagram of feature learning.

TABLE I. SELECTION OF THE NUMBER OF HIDDEN NODES

Combination of Hidden Nodes | Training Accuracy /% | Test Accuracy /% | Training Time /s | Testing Time /s
N1=100; N2=100; N3=200   | 92.3  | 89.95 | 8.85 | 3.27
N1=100; N2=100; N3=500   | 92.33 | 90.42 | .    | 3.8
N1=100; N2=100; N3=1000  | 92.63 | 89.47 | .59  | 4.24
N1=200; N2=200; N3=200   | 9.82  | 89.58 | 9.   | 3.6
N1=200; N2=200; N3=500   | 93.23 | 90.47 | .23  | 3.83
N1=200; N2=200; N3=1000  | 92.83 | 89.36 | .65  | 4.37
N1=500; N2=500; N3=200   | 9.72  | 87.9  | 9.56 | 3.7
N1=500; N2=500; N3=500   | 94.45 | 85.23 | .73  | 4.8
N1=500; N2=500; N3=1000  | 94.85 | 86.9  | 2.3  | 4.6

From the softmax probabilities we can determine the neural network's classification, using the maximum probability to decide which kind of cloud a patch is. However, the overlapping portion of thick and thin cloud is difficult to determine this way, so we use the difference between the thick-cloud and thin-cloud probability values to determine the overlap of thick and thin cloud. The computation formula is as follows:

| S_h - S_b | \le 0.2    (9)

In the formula, S_h represents the probability value of thick cloud after detection and S_b represents the probability value of thin cloud; with this formula we can approximately determine the overlapping part of thick and thin cloud.

III. CLOUD FRACTION COMPUTING BASED ON CLOUD DETECTION

Based on the characteristics of the CCD data and on the basis of cloud detection, we adopt a reflectance-detection-based algorithm to calculate the total cloud fraction, using the HJ-1A/B satellites as the data source for the research on total cloud fraction calculation. In order to handle partial cloud cover, we use the spatial correlation method to calculate the total cloud fraction. The basic principle of the spatial correlation method is to obtain the total cloud fraction from the way the radiance of a single pixel is composed of the detected thick-cloud and clear-sky radiance levels.
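Returning to the classification step, the softmax maximum-probability rule together with the thick/thin overlap test above (a patch is overlap when the thick and thin probabilities differ by at most 0.2) can be sketched as follows; the class ordering and the extra check that clear sky does not dominate are assumptions of this sketch, not rules stated in the paper:

```python
import numpy as np

def label_patch(probs, thresh=0.2):
    """probs = softmax output [P_thick, P_thin, P_clear] for one patch.
    If the thick and thin probabilities are within `thresh` of each other
    (and clear sky does not dominate, an added assumption), mark the patch
    as thick/thin overlap; otherwise take the most probable class."""
    s_h, s_b, s_c = probs                    # thick, thin, clear
    if abs(s_h - s_b) <= thresh and max(s_h, s_b) >= s_c:
        return "overlap"
    return ["thick", "thin", "clear"][int(np.argmax(probs))]

print(label_patch(np.array([0.45, 0.40, 0.15])))  # overlap
print(label_patch(np.array([0.10, 0.15, 0.75])))  # clear
```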

The formula is as follows:

(1 - A) I_{clr} + A I_{cld} = I    (10)

In formula (10), I is the radiance received by the pixel and A is the total cloud fraction; I_{clr} is the upper bound of the brightness of a clear-sky pixel and I_{cld} is the lower bound of the brightness of a thick-cloud pixel. The formula for the total cloud fraction is therefore:

A = (I - I_{clr}) / (I_{cld} - I_{clr})    (11)

In this paper, the cloud fraction computing model mainly addresses the thin cloud and the thick/thin overlap regions: the cloud fraction of thick cloud defaults to 1, and that of clear sky to 0. Specific cloud fraction computing results based on satellite image detection are given below.

IV. ANALYSIS OF EXPERIMENTAL RESULTS

A. Cloud Detection and Analysis of Satellite Cloud Images Based on the Optimized Deep Extreme Learning Machine

Cloud detection is the basis of cloud fraction computing, and this paper uses the deep extreme learning machine for cloud detection. Fig. 4 shows the results of cloud detection under different methods: Fig. 4(a) is the original satellite cloud image, Fig. 4(b) is the result of the traditional threshold method, Fig. 4(c) the result of the extreme learning machine, Fig. 4(d) the result of the convolutional neural network, and Fig. 4(e) the result of the deep extreme learning machine. In the figures, the thick cloud region is shown in red, the overlap of thin and thick cloud in white, the clear sky region in black, and the thin cloud region in blue. Analysing the detection images in Fig. 4, we can observe that in Fig. 4(b) the white area is clearly excessive, i.e. the boundary drawn between thick and thin cloud is too wide, indicating that the boundaries between thick cloud, thin cloud and clear sky are not shown clearly. In Fig. 4(c), the red area produced by the extreme learning machine is larger than that of the deep extreme learning machine: too much thin cloud was treated as thick cloud.
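As a concrete illustration of the spatial correlation computation above, A = (I - I_clr)/(I_cld - I_clr) with thick cloud defaulting to 1 and clear sky to 0, a per-pixel sketch might look like the following; the bound values I_clr and I_cld are hypothetical here, whereas in practice they would come from the detection step:

```python
import numpy as np

def cloud_fraction(I, I_clr, I_cld, labels):
    """Per-pixel total cloud fraction via the spatial correlation method:
    A = (I - I_clr) / (I_cld - I_clr), clipped to [0, 1]; thick-cloud
    pixels default to 1 and clear-sky pixels to 0."""
    A = np.clip((I - I_clr) / (I_cld - I_clr), 0.0, 1.0)
    A = np.where(labels == "thick", 1.0, A)
    A = np.where(labels == "clear", 0.0, A)
    return A

# Hypothetical radiances and detection labels for four pixels
I = np.array([0.10, 0.35, 0.60, 0.80])
labels = np.array(["clear", "thin", "overlap", "thick"])
A = cloud_fraction(I, I_clr=0.2, I_cld=0.7, labels=labels)
print(A)  # [0.  0.3 0.8 1. ]
```

The total cloud fraction of an image would then be the mean of A over all pixels.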
From Fig. 4(d) we can see that the convolutional neural network handles the boundary between thick and thin cloud relatively crudely, and at this boundary the white area is too large, while Fig. 4(e) reflects the original image more clearly and looks more delicate and rich. The deep extreme learning machine detects the satellite cloud image according to the training samples of thick cloud, thin cloud and clear sky, so the junction area, i.e. the white part, is smaller. Since we use the spatial correlation method to calculate the cloud fraction, we only need a clear delineation of the thick cloud; for the junction and thin-cloud sections we use a linear calculation, which better estimates the cloud fraction there. Therefore, the result in Fig. 4(e) meets our requirements better than Fig. 4(b), Fig. 4(c) and Fig. 4(d).

(a) Satellite image (b) Threshold method (c) ELM model (d) CNN model (e) Proposed method
Fig. 4. Comparison of different cloud detection methods.

Another set of experimental results is given below. As shown in Fig. 5, the convolutional neural network does not handle the boundary between thick and thin cloud very well, which leads to too much white area at the junction of thick and thin cloud, while the proposed method clearly shows the distribution of each kind of cloud.

(a) Satellite image (b) CNN model (c) Proposed method
Fig. 5. Comparison of different cloud detection methods.

For an effective quantitative analysis, we compare the deep extreme learning machine (D-ELM) with the extreme learning machine (ELM), the convolutional neural network (CNN) and the traditional threshold method (TT). The comparison chart is shown in Fig. 6; the results are based on the testing results of 35 satellite cloud images covering different regions. From the analysis of Fig. 6, we can see that the traditional threshold method requires threshold values drawn from expert experience, so its accuracy is the lowest, and since its accuracy depends on constantly adjusting the threshold, the method is also clumsy. Next comes the ELM model: ELM is strong at learning the characteristics of the samples and can distinguish the features of different sample types more accurately, so its accuracy is much higher than that of the traditional threshold method. Then comes the CNN model: the D-ELM and CNN models both have clear advantages over the other two methods, and compared with the CNN model, the classification accuracy of D-ELM is slightly better.

Fig. 6. Accuracy comparison of different cloud detection methods (accuracy rate, %, for TT, ELM, CNN and D-ELM).

Following the accuracy comparison in Fig. 6, we focus on the time efficiency of the deep extreme learning machine (D-ELM) and the convolutional neural network (CNN). Table II shows the time efficiency of the D-ELM and CNN models on the same sample set, with 900 training samples and 300 testing samples in total, each a 28x28-pixel image block. The CNN model needs constant iteration and parameter adjustment of the network structure, so its training and learning are very slow and time-consuming. D-ELM is a multi-layer neural network model based on the extreme learning machine, which randomly initializes the weights and biases and requires no adjustment during training, giving it a speed advantage.
The D-ELM inherits this speed advantage: its training and testing times are very short, so compared with the CNN model, D-ELM is much faster.

TABLE II. CNN AND D-ELM TIME CONSUMPTION COMPARISON

Methods | Training Time /s | Testing Time /s
CNN     | 258              | 55.83
D-ELM   | .35              | 3.92

B. Analysis of Cloud Fraction Computing

Based on the cloud detection results, we use the spatial correlation method to calculate the cloud fraction in the cloud image. Fig. 7 shows a set of cloud fraction computing results; the colour scale runs from black to white, with the bar on the right showing values from 0 to 1. From the cloud fraction charts of Fig. 7, we can see that Fig. 7(b), (c), (d) and (e) are progressively richer in colour overall. Fig. 7(d) and Fig. 7(e) are smoother and more vivid in pseudo-colour than Fig. 7(b) and Fig. 7(c), which shows that the details of the original image are well represented by these algorithms, so the overall cloud fraction computation is more accurate. Another set of experimental results is given below: from Fig. 8 we can see the advantages of the convolutional neural network and the deep extreme learning machine, namely that the two algorithms better capture the detailed changes in the cloud image and calculate the overall cloud fraction more accurately. However, the colour of Fig. 8(d) is too bright, so its cloud fraction estimate is too high, and the white parts of Fig. 8(b) and Fig. 8(c) are

excessive. The proposed method has more advantages than these two. Fig. 9 shows the accuracy comparison of several cloud fraction computing methods; we again compare the deep extreme learning machine (D-ELM) with the extreme learning machine (ELM), the convolutional neural network (CNN) and the traditional threshold method (TT). The testing data in Fig. 9 are consistent with those in Fig. 6. It is clear from the figure that the accuracy of TT is still the lowest because of its poor cloud detection results. It is followed by the ELM model: the cloud fraction accuracy of the extreme learning machine is significantly improved over TT, and the gap between the CNN model and the ELM model is not very large. Compared with the CNN model, the accuracy of D-ELM is slightly higher, and the computational efficiency of D-ELM is much better than that of CNN; therefore, the deep extreme learning machine is more suitable for computations on large amounts of data such as satellite cloud images.

(a) Satellite image (b) Threshold method (c) ELM model (d) CNN model (e) Proposed method
Fig. 7. Comparison of different cloud fraction computing methods (cloud fraction scale from 0 to 1).

(a) Satellite image (b) Threshold method (c) ELM model (d) CNN model (e) Proposed method
Fig. 8. Comparison of different cloud fraction computing methods (cloud fraction scale from 0 to 1).

Fig. 9. Accuracy comparison chart of different cloud fraction computing methods (accuracy rate, %, for TT, ELM, CNN and D-ELM).

V. CONCLUSION AND OUTLOOK

In this paper, to solve the problem that existing methods cannot take full advantage of the characteristics and optical parameters of satellite cloud images, we use the deep extreme learning machine to detect and classify the clouds in satellite images.
Firstly, we optimize the number of layers of the deep extreme learning machine and the number of hidden nodes in each layer; then we use the optimized deep extreme learning machine to divide the cloud images into thick cloud, thin cloud and clear sky, and on this basis use the spatial correlation method to calculate the total cloud fraction. We use HJ-1A/B satellite images as the data to verify the reliability of the deep extreme learning machine for cloud fraction computing. Testing results show that in cloud classification, thick cloud and thin cloud are distinguished well by the deep extreme learning machine, with a clear boundary area. The cloud classification and cloud fraction computing accuracy of the deep extreme learning machine are better than those of the traditional threshold method, the extreme learning machine and the convolutional neural network; the cloud fraction computing accuracy of the deep extreme learning machine can reach more than 89%, with higher efficiency. This paper mainly applies the deep extreme learning machine to satellite images. Although some progress has been made, the application of neural networks in the cloud field is still at an early stage and there are still deficiencies requiring further in-depth study, mainly in the following aspects: 1) In the cloud classification process, although the training and recognition speed of the model is fast, the recognition accuracy is not particularly high, so the network structure should be further optimized, and the characteristics of

satellite images should be researched in more depth. 2) This paper mainly classifies the thick clouds, thin clouds, clear sky and the overlap of thick and thin clouds in the satellite image; however, in satellite images some glaciers and haze would have a great impact on the classification research, so we will also focus on improving the anti-jamming capability of the detection. 3) In the future, the cloud fraction computing of satellite images needs to be faster and more in line with the needs of the meteorological service; therefore, we should not only improve the performance of the model, but also combine it with the actual hardware devices to improve the efficiency of cloud fraction computing.
DOI 10.5013/IJSSST.a.17.48.37 ISSN: 1473-804x online, 1473-8031 print