Numerical solution of hybrid fuzzy differential equations by fuzzy neural network
Available online at http://ijim.srbiau.ac.ir/

Int. J. Industrial Mathematics, Vol. 6, No. 2, 2014, Article ID IJIM-00371, 15 pages

Research Article

Numerical solution of hybrid fuzzy differential equations by fuzzy neural network

M. Otadi, M. Mosleh
Department of Mathematics, Firoozkooh Branch, Islamic Azad University, Firoozkooh, Iran
Corresponding author: otadi@iaufb.ac.ir

Abstract

Hybrid fuzzy differential equations have a wide range of applications in science and engineering. We consider the problem of finding their numerical solutions by a novel hybrid method based on a fuzzy neural network. Here the neural network is viewed as part of the larger field called neural computing, or soft computing. The proposed algorithm is illustrated by numerical examples, and the results obtained with the scheme presented here agree well with the analytical solutions.

Keywords: Hybrid fuzzy differential equations; Fuzzy neural networks; Learning algorithm

1 Introduction

The topic of Fuzzy Differential Equations (FDEs) has been growing rapidly in recent years. The concept of the fuzzy derivative was first introduced by Chang and Zadeh [11]; it was followed up by Dubois and Prade [13], who used the extension principle in their approach. Other methods have been discussed by Puri and Ralescu [35] and by Goetschel and Voxman [14]. Fuzzy differential equations were first formulated by Kaleva [20] and Seikkala [37] in time-dependent form; Kaleva formulated them in terms of the Hukuhara derivative [20]. Buckley and Feuring [10] gave a very general formulation of a fuzzy first-order initial value problem: they first find the crisp solution, make it fuzzy, and then check whether it satisfies the FDE. The fuzzy initial value problem has also been studied by several authors [1, 2, 6, 7, 8, 9, 12, 25, 36].

A hybrid system is a dynamic system that exhibits both continuous and discrete dynamic behavior. Hybrid systems are devoted to the modeling, design,
and validation of interactive systems of computer programs and continuous systems. Differential equations containing fuzzy-valued functions and interacting with a discrete-time controller are called hybrid fuzzy differential equations (HFDEs) [32]. In this work, we propose a new approach to the approximate solution of HFDEs using neural-like systems of computation based on the Hukuhara derivative. This hybrid method can lead to improved numerical methods for solving HFDEs. In the proposed method, a fuzzy neural network model (FNNM) is applied as a universal approximator. The main aim of this paper is to illustrate how fuzzy connection weights are adjusted in the learning of fuzzy neural networks by back-propagation-type learning algorithms [17, 19] for the approximate solution of HFDEs. In this paper, our fuzzy neural network is a three-layer feedforward neural network whose connection weights and biases are fuzzy
numbers. Further, we show that the solutions obtained by the fuzzy neural network are more accurate and agree well with the exact solutions. For many years this technology has been applied successfully to a wide variety of real-world applications [34]. Recently, the fuzzy neural network model (FNNM) has been used successfully for solving fuzzy polynomial equations and systems of fuzzy polynomials [3, 4], approximating fuzzy coefficients of fuzzy regression models [27, 28, 29], approximating solutions of fuzzy linear systems and fully fuzzy linear systems [30, 31], and fuzzy differential equations [26].

2 Preliminaries

Definition 2.1. A fuzzy number u is a fuzzy subset of the real line with a normal, convex and upper semicontinuous membership function of bounded support.

Definition 2.2. [20] A fuzzy number u is a pair (u̲, ū) of functions u̲(r), ū(r), 0 ≤ r ≤ 1, which satisfy the following requirements:

i. u̲(r) is a bounded monotonic increasing left-continuous function on (0, 1] and right-continuous at 0;
ii. ū(r) is a bounded monotonic decreasing left-continuous function on (0, 1] and right-continuous at 0;
iii. u̲(r) ≤ ū(r), 0 ≤ r ≤ 1.

The set of all fuzzy numbers is denoted by E. As shown in [38], this fuzzy number space can be embedded into the Banach space B = C[0, 1] × C[0, 1], where the norm is usually defined as

∥(u, v)∥ = max{ sup_{0≤r≤1} |u(r)|, sup_{0≤r≤1} |v(r)| },

for arbitrary (u, v) ∈ C[0, 1] × C[0, 1]. Before describing the fuzzy neural network architecture, we denote real numbers and fuzzy numbers by lowercase letters (e.g., a, b, c, ...) and uppercase letters (e.g., A, B, C, ...), respectively.

2.1 Operations of fuzzy numbers

We briefly mention the fuzzy number operations defined by the extension principle [39]. Since the input vector of the feedforward neural network is fuzzy in this paper, the following addition, multiplication and nonlinear mapping of fuzzy numbers are necessary for defining our fuzzy neural network:

μ_{A+B}(z) = max{ μ_A(x) ∧ μ_B(y) | z = x + y },   (2.1)

μ_{A·B}(z) = max{ μ_A(x) ∧ μ_B(y) | z = xy },   (2.2)

μ_{f(Net)}(z) =
max{ μ_Net(x) | z = f(x) },   (2.3)

where A, B, Net are fuzzy numbers, μ_(·) denotes the membership function of each fuzzy number, ∧ is the minimum operator, and f(·) is a continuous activation function (such as the sigmoid) inside the hidden neurons.

The above operations of fuzzy numbers are performed numerically on level sets (i.e., α-cuts). The α-level set of a fuzzy number A is defined as

[A]_α = { x ∈ R | μ_A(x) ≥ α } for 0 < α ≤ 1,   (2.4)

and [A]_0 = ∪_{α∈(0,1]} [A]_α. Since the level sets of fuzzy numbers are closed intervals, we write

[A]_α = [ [A]_α^L, [A]_α^U ],   (2.5)

where [A]_α^L and [A]_α^U are the lower and upper limits of the α-level set [A]_α, respectively. From interval arithmetic [5], the above operations of fuzzy numbers are written for α-level sets as follows:

[A]_α + [B]_α = [ [A]_α^L + [B]_α^L, [A]_α^U + [B]_α^U ],   (2.6)

[A]_α · [B]_α = [ min{ [A]_α^L[B]_α^L, [A]_α^L[B]_α^U, [A]_α^U[B]_α^L, [A]_α^U[B]_α^U }, max{ [A]_α^L[B]_α^L, [A]_α^L[B]_α^U, [A]_α^U[B]_α^L, [A]_α^U[B]_α^U } ],   (2.7)

f([Net]_α) = f([ [Net]_α^L, [Net]_α^U ]) = [ f([Net]_α^L), f([Net]_α^U) ],   (2.8)

where f is an increasing function. In the case 0 ≤ [B]_α^L ≤ [B]_α^U, Eq. (2.7) simplifies to

[A]_α · [B]_α = [ min{ [A]_α^L[B]_α^L, [A]_α^L[B]_α^U }, max{ [A]_α^U[B]_α^L, [A]_α^U[B]_α^U } ].
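The level-set operations (2.6)-(2.8) translate directly into interval arithmetic on pairs of endpoints. The following is an illustrative sketch (ours, not the authors' code): an α-cut is a pair (lo, hi), and the sigmoid stands in for the increasing activation f.

```python
import math

def iv_add(a, b):
    # Eq. (2.6): endpoint-wise addition of two alpha-cuts
    return (a[0] + b[0], a[1] + b[1])

def iv_mul(a, b):
    # Eq. (2.7): min/max over the four endpoint products
    p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(p), max(p))

def iv_map(f, a):
    # Eq. (2.8): valid because f is an increasing function
    return (f(a[0]), f(a[1]))

sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))

A, B = (1.0, 2.0), (-3.0, 0.5)
print(iv_add(A, B))  # (-2.0, 2.5)
print(iv_mul(A, B))  # (-6.0, 1.0)
```

When 0 ≤ [B]_α^L, only two of the four products can be extreme, which is exactly the simplification noted after Eq. (2.7).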
2.2 Input-output relation of each unit

Our fuzzy neural network is a three-layer feedforward neural network where connection weights, biases and targets are given as fuzzy numbers and inputs are given as real numbers. For convenience, an FNNM with an input layer, a single hidden layer, and an output layer is taken as the basic architecture. The dimension of the FNNM is denoted by the number of neurons in each layer, that is, n × m × s, where n, m and s are the numbers of neurons in the input layer, the hidden layer and the output layer, respectively. The architecture of the model shows how the FNNM transforms the n inputs (x_1, ..., x_i, ..., x_n) into the s outputs (Y_1, ..., Y_k, ..., Y_s) through the m hidden neurons (Z_1, ..., Z_j, ..., Z_m), where the circles represent the neurons in each layer.

Let B_j be the bias for neuron Z_j, C_k be the bias for neuron Y_k, W_ji be the weight connecting neuron x_i to neuron Z_j, and W_kj be the weight connecting neuron Z_j to neuron Y_k. In order to derive a crisp learning rule, we restrict connection weights and biases to four types (real numbers, symmetric triangular fuzzy numbers, asymmetric triangular fuzzy numbers and asymmetric trapezoidal fuzzy numbers), while any type of fuzzy number may be used for the fuzzy targets. For example, an asymmetric triangular fuzzy connection weight is specified by its three parameters as W_kj = (w_kj^L, w_kj^C, w_kj^U).

When an n-dimensional input vector (x_1, ..., x_i, ..., x_n) is presented to our fuzzy neural network, its input-output relation can be written as follows, where f : R^n → E^s:

Input units:
o_i = x_i, i = 1, 2, ..., n.   (2.9)

Hidden units:
Z_j = f(Net_j), j = 1, 2, ..., m,   (2.10)
Net_j = Σ_{i=1}^n o_i W_ji + B_j.   (2.11)

Output units:
Y_k = f(Net_k), k = 1, 2, ..., s,   (2.12)
Net_k = Σ_{j=1}^m W_kj Z_j + C_k,   (2.13)

where connection weights, biases and targets are fuzzy and inputs are real numbers. The input-output relation in Eqs. (2.9)-(2.13) is defined by the extension principle.

2.3 Calculation of fuzzy output

The fuzzy output
from each unit in Eqs. (2.9)-(2.13) is numerically calculated for real inputs and level sets of fuzzy weights and fuzzy biases. The input-output relations of our fuzzy neural network can be written for the α-level sets as follows.

Input units:
o_i = x_i, i = 1, 2, ..., n.   (2.14)

Hidden units:
[Z_j]_α = f([Net_j]_α), j = 1, 2, ..., m,   (2.15)
[Net_j]_α = Σ_{i=1}^n o_i [W_ji]_α + [B_j]_α.   (2.16)

Output units:
[Y_k]_α = f([Net_k]_α), k = 1, 2, ..., s,   (2.17)
[Net_k]_α = Σ_{j=1}^m [W_kj]_α [Z_j]_α + [C_k]_α.   (2.18)

From Eqs. (2.14)-(2.18), we can see that the α-level sets of the fuzzy outputs Y_k are calculated from those of the fuzzy weights, fuzzy biases and crisp inputs. From Eqs. (2.6)-(2.8), the above relations can be rewritten as follows when the inputs x_i are nonnegative, i.e., 0 ≤ x_i.

Input units:
o_i = x_i.   (2.19)

Hidden units:
[Z_j]_α = [ [Z_j]_α^L, [Z_j]_α^U ] = [ f([Net_j]_α^L), f([Net_j]_α^U) ],

where f is an increasing function, and

[Net_j]_α^L = Σ_{i=1}^n o_i [W_ji]_α^L + [B_j]_α^L,   (2.20)
[Net_j]_α^U = Σ_{i=1}^n o_i [W_ji]_α^U + [B_j]_α^U.   (2.21)

Output units:
[Y_k]_α = [ [Y_k]_α^L, [Y_k]_α^U ] =
[ f([Net_k]_α^L), f([Net_k]_α^U) ],   (2.22)

where f is an increasing function, and

[Net_k]_α^L = Σ_{j∈a} [W_kj]_α^L [Z_j]_α^L + Σ_{j∈b} [W_kj]_α^L [Z_j]_α^U + [C_k]_α^L,
[Net_k]_α^U = Σ_{j∈c} [W_kj]_α^U [Z_j]_α^U + Σ_{j∈d} [W_kj]_α^U [Z_j]_α^L + [C_k]_α^U,   (2.23)

for [Z_j]_α^U ≥ [Z_j]_α^L ≥ 0, where a = { j | [W_kj]_α^L ≥ 0 }, b = { j | [W_kj]_α^L < 0 }, c = { j | [W_kj]_α^U ≥ 0 }, d = { j | [W_kj]_α^U < 0 }, a ∪ b = {1, ..., m} and c ∪ d = {1, ..., m}.

3 Hybrid fuzzy differential equations

In this paper, we study the HFDE

dy(x)/dx = f(x, y(x), λ_k(y_k)), y(a) = y_0, x ∈ [x_k, x_{k+1}], k = 0, 1, 2, ...,   (3.24)

where y is a fuzzy function of x, y_k denotes y(x_k), f : [x_0, ∞) × E × E → E is continuous, each λ_k : E → E is continuous, and {x_k}_{k≥0} is strictly increasing and unbounded. A solution to Eq. (3.24) is a fuzzy function y : [x_0, ∞) → E satisfying Eq. (3.24). For k = 0, 1, 2, ..., let f_k : [x_k, x_{k+1}] × E → E, where f_k(x, y_k(x)) = f(x, y_k(x), λ_k(y_k)). The HFDE (3.24) can then be written in the expanded form

dy_0(x)/dx = f(x, y_0(x), λ_0(y_0)) ≡ f_0(x, y_0(x)), x_0 ≤ x ≤ x_1,
dy_1(x)/dx = f(x, y_1(x), λ_1(y_1)) ≡ f_1(x, y_1(x)), x_1 ≤ x ≤ x_2,
...
dy_k(x)/dx = f(x, y_k(x), λ_k(y_k)) ≡ f_k(x, y_k(x)), x_k ≤ x ≤ x_{k+1},
...   (3.25)

and a solution of (3.24) can be expressed piecewise as

y(x) = y_k(x), x_k ≤ x ≤ x_{k+1}, k = 0, 1, 2, ....   (3.26)

A solution y of (3.24) is continuous and piecewise differentiable over [x_0, ∞), and differentiable on each interval (x_k, x_{k+1}) for k = 0, 1, 2, ....

Theorem 3.1. [33] Consider the HFDE (3.24), expanded as (3.25), where for k = 0, 1, 2, ... each f_k : [x_k, x_{k+1}] × E → E is such that

(i) [f_k(x, y)]_α = [ [f_k(x, [y]_α^L, [y]_α^U)]^L, [f_k(x, [y]_α^L, [y]_α^U)]^U ];
(ii) [f_k]^L and [f_k]^U are equicontinuous and uniformly bounded on any bounded set;
(iii) there exists an L_k > 0 such that |[f_k(x, y_2, z_2)]^{L,U} − [f_k(x, y_1, z_1)]^{L,U}| ≤ L_k max{ |y_2 − y_1|, |z_2 − z_1| }.

Then (3.24) is equivalent to the hybrid system of ODEs

[dy_k(x)/dx]_α^L = [f_k(x, [y_k]_α^L, [y_k]_α^U)]^L,
[dy_k(x)/dx]_α^U = [f_k(x, [y_k]_α^L, [y_k]_α^U)]^U,
[y_k(x_k)]_α^L = [y_{k−1}(x_k)]_α^L if k > 0, [y_0(x_0)]_α^L = [y_0]_α^L,
[y_k(x_k)]_α^U = [y_{k−1}(x_k)]_α^U if k > 0, [y_0(x_0)]_α^U = [y_0]_α^U.

Let us assume that a general approximate solution to Eq. (3.24) has the form y_{T,k}(x, P_k) on [x_k, x_{k+1}], k = 0, 1, 2, ..., where y_{T,k} depends on x and on P_k, the vector of adjustable parameters (the weights and biases in the structure of the three-layer feedforward fuzzy neural network, see Fig. 1). The fuzzy trial solution y_{T,k} approximates y_k for the optimized values of the unknown weights and biases. Thus the problem of finding approximate fuzzy solutions of Eq. (3.24) over a set of M_k + 1 regularly spaced grid points in [x_k, x_{k+1}] (including the endpoints) reduces to computing the function y_{T,k}(x, P_k). The grid points on [x_k, x_{k+1}] are x_{k,n} = x_k + n h_k, where h_k = (x_{k+1} − x_k)/M_k and 0 ≤ n ≤ M_k. In order to obtain the fuzzy approximate solution y_{T,k}(x, P_k), we solve an unconstrained optimization problem that is simpler to
deal with. We define the fuzzy trial function to be of the form

y_{T,k}(x, P_k) = α_k(x) + β_k[x, N_k(x, P_k)],   (3.27)

where the first term on the right-hand side involves no adjustable parameters and satisfies the fuzzy initial conditions, and the second term is a feedforward three-layer fuzzy neural network with input x and output N_k(x, P_k).

Figure 1: Three-layer fuzzy neural network with one input and one output.

In the next subsection, this FNNM with its weights and biases is trained in order to compute the approximate solutions of problem (3.25). Let us consider a three-layer FNNM with one input unit x, one hidden layer consisting of m activation functions, and one output unit N_k(x, P_k). In this paper, we use the sigmoid activation function for the hidden units of our fuzzy neural network; the dimension of the FNNM is thus 1 × m × 1. For every entry x the input neuron makes no change to its input, so the input to the hidden neurons is

Net_{j,k} = x W_{j,k} + B_{j,k}, j = 1, ..., m, k = 0, 1, 2, ...,   (3.28)

where W_{j,k} is the weight parameter from the input layer to the j-th unit in the hidden layer, and B_{j,k} is the bias of the j-th hidden unit. The output of the hidden neurons is

Z_{j,k} = s(Net_{j,k}), j = 1, ..., m, k = 0, 1, 2, ...,   (3.29)

where s is the activation function, normally a nonlinear function; the usual choice [15] is the sigmoid transfer function. The output neuron makes no change to its input, so the network output is

N_k = V_{1,k} Z_{1,k} + ... + V_{j,k} Z_{j,k} + ... + V_{m,k} Z_{m,k},   (3.30)

where V_{j,k} is the weight parameter from the j-th hidden unit to the output layer. From Eqs. (2.19)-(2.23), the α-level sets of Eqs. (3.28)-(3.30) are calculated from those of the fuzzy weights, fuzzy biases and crisp inputs. For our fuzzy neural network, we can derive the learning algorithm without assuming that the input x is
non-negative. To reduce the complexity of the learning algorithm, the input is usually assumed non-negative in fully fuzzy neural networks, i.e., 0 ≤ x [17]:

Input unit:
o = x.   (3.31)

Hidden units:
[Z_{j,k}]_α = [ [Z_{j,k}]_α^L, [Z_{j,k}]_α^U ] = [ s([Net_{j,k}]_α^L), s([Net_{j,k}]_α^U) ],
[Net_{j,k}]_α^L = o [W_{j,k}]_α^L + [B_{j,k}]_α^L,
[Net_{j,k}]_α^U = o [W_{j,k}]_α^U + [B_{j,k}]_α^U.

Output unit:
[N_k]_α = [ [N_k]_α^L, [N_k]_α^U ],   (3.32)
[N_k]_α^L = Σ_{j∈a} [V_{j,k}]_α^L [Z_{j,k}]_α^L + Σ_{j∈b} [V_{j,k}]_α^L [Z_{j,k}]_α^U,
[N_k]_α^U = Σ_{j∈c} [V_{j,k}]_α^U [Z_{j,k}]_α^U + Σ_{j∈d} [V_{j,k}]_α^U [Z_{j,k}]_α^L,

for [Z_{j,k}]_α^U ≥ [Z_{j,k}]_α^L ≥ 0, where a = { j | [V_{j,k}]_α^L ≥ 0 }, b = { j | [V_{j,k}]_α^L < 0 }, c = { j | [V_{j,k}]_α^U ≥ 0 }, d = { j | [V_{j,k}]_α^U < 0 }, a ∪ b = {1, ..., m} and c ∪ d = {1, ..., m}.

An FNN4 (a fuzzy neural network with crisp input signals, fuzzy number weights and fuzzy number output) [16] solution to Eq. (3.25) is given in Figure 1. How is the FNN4 going to solve the fuzzy differential equations? The training data in [x_k, x_{k+1}] are the inputs x_k < x_k + h_k < ... < x_k + M_k h_k = x_{k+1}. We propose a learning algorithm based on a cost function for adjusting the weights.
Consider the fuzzy initial value problem for the first-order differential equation (3.25); the related trial functions are of the form

y_{T,0}(x, P_0) = y_0 + (x − x_0) N_0(x, P_0), x_0 ≤ x ≤ x_1,
y_{T,k}(x, P_k) = y_{k−1}(x_k) + (x − x_k) N_k(x, P_k), x_k ≤ x ≤ x_{k+1}, k = 1, 2, ....   (3.33)

In [17], the learning of the fuzzy neural network minimizes the difference between the fuzzy target vector B = (B_1, ..., B_s) and the actual fuzzy output vector O = (O_1, ..., O_s). The following cost function was used in [3, 17, 26] for measuring the difference between B and O:

e = Σ_α e_α, e_α = α { Σ_{k=1}^s ([B_k]_α^L − [O_k]_α^L)²/2 + Σ_{k=1}^s ([B_k]_α^U − [O_k]_α^U)²/2 },   (3.34)

where e_α is the cost for the α-level sets of B and O. The squared errors between the α-level sets of B and O are weighted by the value of α in (3.34). In [18], it is shown by computer simulations that the fitting of fuzzy outputs to fuzzy targets is not good for the α-level sets with small values of α when the cost function in (3.34) is used; this is because the squared errors for the α-level sets are weighted by α in (3.34). Krishnamraju et al. [22] used the cost function without the weighting scheme:

e = Σ_α e_α, e_α = Σ_{k=1}^s ([B_k]_α^L − [O_k]_α^L)²/2 + Σ_{k=1}^s ([B_k]_α^U − [O_k]_α^U)²/2.   (3.35)

In the computer simulations included in this paper, we mainly use the cost function in (3.35) without the weighting scheme. The error function that must be minimized for problem (3.25) has the form

e_k = Σ_n e_{n,k}, e_{n,k} = Σ_α e_{n,α,k}, e_{n,α,k} = e_{n,α,k}^L + e_{n,α,k}^U,   (3.36)

where

e_{n,α,k}^L = ( [dy_{T,k}(x_{k,n}, P_k)/dx]_α^L − [f_k]_α^L )²/2,   (3.37)
e_{n,α,k}^U = ( [dy_{T,k}(x_{k,n}, P_k)/dx]_α^U − [f_k]_α^U )²/2,   (3.38)

and {x_{k,n}} are discrete points in the interval [x_k, x_{k+1}]. In the cost function (3.36), e_{n,α,k}^L and e_{n,α,k}^U can be viewed as the squared errors for the lower and upper limits of the α-level sets, respectively. It is easy to express the first derivative of N_k(x, P_k) in terms of the derivative of the sigmoid function, i.e.,

∂[N_k]_α^L/∂x = Σ_{j∈a} [V_{j,k}]_α^L (∂[Z_{j,k}]_α^L/∂[Net_{j,k}]_α^L)(∂[Net_{j,k}]_α^L/∂x)
+ Σ_{j∈b} [V_{j,k}]_α^L (∂[Z_{j,k}]_α^U/∂[Net_{j,k}]_α^U)(∂[Net_{j,k}]_α^U/∂x),   (3.39)

∂[N_k]_α^U/∂x = Σ_{j∈c} [V_{j,k}]_α^U (∂[Z_{j,k}]_α^U/∂[Net_{j,k}]_α^U)(∂[Net_{j,k}]_α^U/∂x) + Σ_{j∈d} [V_{j,k}]_α^U (∂[Z_{j,k}]_α^L/∂[Net_{j,k}]_α^L)(∂[Net_{j,k}]_α^L/∂x),   (3.40)

where a = { j | [V_{j,k}]_α^L ≥ 0 }, b = { j | [V_{j,k}]_α^L < 0 }, c = { j | [V_{j,k}]_α^U ≥ 0 }, d = { j | [V_{j,k}]_α^U < 0 }, a ∪ b = {1, ..., m}, c ∪ d = {1, ..., m}, and

∂[Net_{j,k}]_α^L/∂x = [W_{j,k}]_α^L,
∂[Z_{j,k}]_α^L/∂[Net_{j,k}]_α^L = [Z_{j,k}]_α^L (1 − [Z_{j,k}]_α^L),
∂[Net_{j,k}]_α^U/∂x = [W_{j,k}]_α^U,
∂[Z_{j,k}]_α^U/∂[Net_{j,k}]_α^U = [Z_{j,k}]_α^U (1 − [Z_{j,k}]_α^U).

Now, differentiating the trial function y_{T,k}(x, P_k) in (3.37) and (3.38), we obtain

∂[y_{T,k}(x, P_k)]_α^L/∂x = [N_k(x, P_k)]_α^L + (x − x_k) ∂[N_k(x, P_k)]_α^L/∂x,
∂[y_{T,k}(x, P_k)]_α^U/∂x = [N_k(x, P_k)]_α^U + (x − x_k) ∂[N_k(x, P_k)]_α^U/∂x,

so the expressions in (3.39) and (3.40) are applicable here. A learning algorithm is derived in Appendix A.
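For concreteness, the unweighted error (3.36)-(3.38) for one subinterval can be sketched as follows; `dyT` and `fk` are stand-in callables (illustrative only) that return the lower and upper endpoints at a point x and level α.

```python
def cost(dyT, fk, grid, alphas):
    # Eq. (3.36): sum the squared endpoint errors (3.37)-(3.38)
    # over the grid points x_{k,n} and the chosen alpha levels.
    e = 0.0
    for x in grid:
        for a in alphas:
            d_lo, d_hi = dyT(x, a)   # endpoints of [dy_T/dx]_alpha
            f_lo, f_hi = fk(x, a)    # endpoints of [f_k]_alpha
            e += (d_lo - f_lo) ** 2 / 2 + (d_hi - f_hi) ** 2 / 2
    return e

# sanity check: the cost vanishes exactly when the trial derivative matches f_k
f = lambda x, a: (a * x, x + 1.0)
assert cost(f, f, [0.0, 0.5, 1.0], [0.0, 0.5, 1.0]) == 0.0
```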
Partially fuzzy neural networks

One drawback of fully fuzzy neural networks with fuzzy connection weights is the long computation time; another is that the learning algorithm is complicated. To reduce the complexity of the learning algorithm, we propose a partially fuzzy neural network (PFNN) architecture in which the connection weights to the output unit are fuzzy numbers, while the connection weights and biases to the hidden units are real numbers [18, 26]. Since we had good simulation results even from partially fuzzy three-layer neural networks, we do not think that the extension of our learning algorithm to neural networks with more than three layers is an attractive research direction.

The input-output relation of each unit of our partially fuzzy neural network in (3.31)-(3.32) can be rewritten for α-level sets as follows. Let x ∈ [x_k, x_{k+1}].

Input unit:
o = x.

Hidden units:
z_{j,k} = s(net_{j,k}), net_{j,k} = o w_{j,k} + b_{j,k}, j = 1, ..., m, k = 0, 1, ....

Output unit:
[N_k]_α = [ [N_k]_α^L, [N_k]_α^U ] = [ Σ_{j=1}^m [V_{j,k}]_α^L z_{j,k}, Σ_{j=1}^m [V_{j,k}]_α^U z_{j,k} ].

The error function that must be minimized for problem (3.25) has the form

e_k = Σ_n e_{n,k}, e_{n,k} = Σ_α e_{n,α,k}, e_{n,α,k} = e_{n,α,k}^L + e_{n,α,k}^U,   (3.41)

where {x_{k,n}} are discrete points in [x_k, x_{k+1}], and e_{n,α,k}^L and e_{n,α,k}^U can be viewed as the squared errors for the lower and upper limits of the α-level sets, respectively. It is easy to express the first derivative of N_k(x, P_k) in terms of the derivative of the sigmoid function, i.e.,

∂[N_k]_α^L/∂x = Σ_{j=1}^m [V_{j,k}]_α^L (∂z_{j,k}/∂net_{j,k})(∂net_{j,k}/∂x) = Σ_{j=1}^m [V_{j,k}]_α^L z_{j,k}(1 − z_{j,k}) w_{j,k},   (3.42)

∂[N_k]_α^U/∂x = Σ_{j=1}^m [V_{j,k}]_α^U (∂z_{j,k}/∂net_{j,k})(∂net_{j,k}/∂x) = Σ_{j=1}^m [V_{j,k}]_α^U z_{j,k}(1 − z_{j,k}) w_{j,k}.   (3.43)

Now, differentiating the trial function y_{T,k}(x, P_k), we obtain

∂[y_{T,k}(x, P_k)]_α^L/∂x = [N_k(x, P_k)]_α^L + (x − x_k) ∂[N_k(x, P_k)]_α^L/∂x,
∂[y_{T,k}(x, P_k)]_α^U/∂x = [N_k(x, P_k)]_α^U + (x − x_k) ∂[N_k(x, P_k)]_α^U/∂x,

so the expressions in (3.42) and (3.43) are applicable here. A learning algorithm is derived in Appendix B.

Pederson and Sambandam [32, 33] numerically solved HFDEs using the Euler, Runge-Kutta, improved Euler,
Adams-Bashforth and Adams-Moulton methods. A similar example was recently considered in [21] for HFDEs using an improved predictor-corrector method. The FNNM implemented in this paper gives a better approximation.

Consider the following HFDE [32]:

dy(x)/dx = y(x) + m(x) λ_k(y(x_k)), y(0) = (0.75, 1, 1.125), x ∈ [x_k, x_{k+1}], x_k = k, k = 0, 1, 2, ...,   (3.44)

where

m(x) = 2(x mod 1) if x mod 1 ≤ 0.5, and m(x) = 2(1 − (x mod 1)) if x mod 1 > 0.5,

λ_k(μ) = 0̂ if k = 0, and λ_k(μ) = μ if k ∈ {1, 2, ...},

with 0̂ the fuzzy singleton at zero: 0̂(x) = 1 if x = 0 and 0̂(x) = 0 if x ≠ 0.
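The pieces of (3.44) translate directly into code. The following sketch (our helper names) encodes the triangular-wave coefficient m, the switch λ_k, and the α-cuts of the triangular initial value (0.75, 1, 1.125) that appears with the trial function for this example:

```python
def m(x):
    # the triangular-wave coefficient in Eq. (3.44)
    t = x % 1.0
    return 2.0 * t if t <= 0.5 else 2.0 * (1.0 - t)

def lam(k, mu_level):
    # lambda_k: the zero singleton for k = 0, the identity afterwards;
    # mu_level is an alpha-cut (lo, hi)
    return (0.0, 0.0) if k == 0 else mu_level

def y0_level(a):
    # alpha-cut of the triangular number (0.75, 1, 1.125)
    return (0.75 + 0.25 * a, 1.125 - 0.125 * a)

assert m(0.25) == 0.5 and m(0.75) == 0.5 and m(1.0) == 0.0
assert y0_level(1.0) == (1.0, 1.0)  # the crisp core at alpha = 1
```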
By applying Example 6.1 of Kaleva [20], (3.44) has a unique solution. By [23], for x ∈ [0, 1] the exact solution of (3.44) is given by

[y(x)]_α = [ (0.75 + 0.25α) e^x, (1.125 − 0.125α) e^x ].

By [32], for x ∈ [1, 2] the exact solution of (3.44) is given by

[y(x)]_α = [y(1)]_α (3e^{x−1} − 2x), x ∈ [1, 1.5],
[y(x)]_α = [y(1)]_α (2x − 2 + e^{x−1.5}(3√e − 4)), x ∈ [1.5, 2].

Using the FNNM developed in Section 3, we solve the HFDE (3.44) to obtain a numerical solution. Suppose k = 0 and x ∈ [x_0, x_1]. Then (3.44) is equivalent to

dy(x)/dx = y(x), [y(0)]_α = [0.75 + 0.25α, 1.125 − 0.125α].   (3.45)

The fuzzy trial function for this problem is

y_{T,0}(x, P_0) = (0.75, 1, 1.125) + x N_0(x, P_0).

Here the dimension of the PFNNM is 1 × 5 × 1. In the computer simulations of this section, we use the following specifications of the learning algorithm:

(1) M_0 = 5.
(2) Number of hidden units: five.
(3) Stopping condition: 100 iterations of the learning algorithm.
(4) Learning constant: η_0 = 0.3.
(5) Momentum constant: α_0 = 0.2.
(6) α-levels: α = 0, 0.2, ..., 1.
(7) The initial values of the weights and biases of the PFNNM are shown in Table 1, where we write V_{i,0} = (v_{i,0}^(1), v_{i,0}^(2), v_{i,0}^(3)) for i = 1, ..., 5.

We apply the proposed method to the approximate realization of the solution of problem (3.45). The analytical solution and the fuzzy trial function are shown in Table 2 and Table 3 for x = 0.1.

Suppose k = 1 and x ∈ [x_1, x_2]. Then (3.44) is equivalent to

dy(x)/dx = y(x) + m(x) y(1), [y(1)]_α = [ [y(1)]_α^L, [y(1)]_α^U ].   (3.46)

The fuzzy trial function for this problem is

[y_{T,1}(x, P_1)]_α = [y(1)]_α + (x − x_1)[N_1(x, P_1)]_α.

Here the dimension of the PFNNM is 1 × 5 × 1. In the computer simulations of this section, we use the above conditions for the learning algorithm. The analytical solution and the fuzzy trial function are shown in Table 4 and Table 5 for x = 1.1.

4 Conclusion

Solving HFDEs by using an FNNM is presented in this paper. To obtain the best approximate solution of HFDEs, the adjustable parameters of the FNNM are systematically adjusted by using a learning algorithm for fuzzy weights whose input-output relations are defined by the extension principle. The effectiveness of
the derived learning algorithm was demonstrated by computer simulation on numerical examples. The computer simulations in this paper were performed for three-layer feedforward neural networks using the back-propagation-type learning algorithm. Since we had good simulation results even from partially fuzzy three-layer neural networks, we do not think that the extension of our learning algorithm to neural networks with more than three layers is an attractive research direction. Good simulation results were obtained by this neural network in shorter computation times than fully fuzzy neural networks in our computer simulations.

Appendix A. Derivation of a learning algorithm in fuzzy neural networks

Let us denote the fuzzy connection weight V_{j,k} by its parameter values as V_{j,k} = (v_{j,k}^(1), ..., v_{j,k}^(q), ..., v_{j,k}^(r)), where V_{j,k} is the weight parameter from the j-th hidden unit to the output layer. The amount of modification of each parameter value is written as [16, 26]

v_{j,k}^(q)(t + 1) = v_{j,k}^(q)(t) + Δv_{j,k}^(q)(t),
Table 1: The initial values of the weights (v_{i,0}^(1), v_{i,0}^(2), v_{i,0}^(3), w_{i,0}, b_{i,0}, i = 1, ..., 5).

Table 2: Comparison of the exact and approximate solutions at x = 0.1.

Table 3: Comparison of the exact and approximate solutions at x = 0.1.

Table 4: Comparison of the exact and approximate solutions at x = 1.1.

Table 5: Comparison of the exact and approximate solutions at x = 1.1.

Δv_{j,k}^(q)(t) = −η ∂e_{n,α,k}/∂v_{j,k}^(q) + α Δv_{j,k}^(q)(t − 1),

where t indexes the number of adjustments, η is a learning rate and α is a momentum term constant. Thus our problem is to calculate the derivatives ∂e_{n,α,k}/∂v_{j,k}^(q), which we rewrite as

∂e_{n,α,k}/∂v_{j,k}^(q) = (∂e_{n,α,k}/∂[V_{j,k}]_α^L)(∂[V_{j,k}]_α^L/∂v_{j,k}^(q)) + (∂e_{n,α,k}/∂[V_{j,k}]_α^U)(∂[V_{j,k}]_α^U/∂v_{j,k}^(q)).

In this formulation, ∂[V_{j,k}]_α^L/∂v_{j,k}^(q) and ∂[V_{j,k}]_α^U/∂v_{j,k}^(q) are easily calculated from the membership function of the fuzzy connection weight V_{j,k}. For example, when the fuzzy connection weight V_{j,k} is trapezoidal (i.e., V_{j,k} = (v^(1), v^(2), v^(3), v^(4))), its α-level limits are

[V_{j,k}]_α^L = (1 − α)v^(1) + αv^(2),
[V_{j,k}]_α^U = αv^(3) + (1 − α)v^(4),

so the derivatives are calculated as follows:

∂[V_{j,k}]_α^L/∂v^(1) = 1 − α, ∂[V_{j,k}]_α^L/∂v^(2) = α, ∂[V_{j,k}]_α^L/∂v^(3) = 0, ∂[V_{j,k}]_α^L/∂v^(4) = 0,
∂[V_{j,k}]_α^U/∂v^(1) = 0, ∂[V_{j,k}]_α^U/∂v^(2) = 0, ∂[V_{j,k}]_α^U/∂v^(3) = α, ∂[V_{j,k}]_α^U/∂v^(4) = 1 − α.

These derivatives follow from the relation between the α-level set of the fuzzy connection weight V_{j,k} and its parameter values (see Fig. 3). When the fuzzy connection weight V_{j,k} is a symmetric triangular fuzzy number, the following relations hold for its α-level set [V_{j,k}]_α = [ [V_{j,k}]_α^L, [V_{j,k}]_α^U ]:

[V_{j,k}]_α^L = (1 − α/2)v^(1) + (α/2)v^(3),
[V_{j,k}]_α^U = (α/2)v^(1) + (1 − α/2)v^(3).

Therefore

∂[V_{j,k}]_α^L/∂v^(1) = 1 − α/2, ∂[V_{j,k}]_α^U/∂v^(1) = α/2,
∂[V_{j,k}]_α^L/∂v^(3) = α/2, ∂[V_{j,k}]_α^U/∂v^(3) = 1 − α/2,

and v^(2)(t + 1) is updated by the following rule:

v^(2)(t + 1) = ( v^(1)(t + 1) + v^(3)(t + 1) ) / 2.

On the other hand, the derivatives ∂e_{n,α,k}/∂[V_{j,k}]_α^L and ∂e_{n,α,k}/∂[V_{j,k}]_α^U are independent of the shape of the fuzzy connection weight. They can be calculated from the cost function e_{n,α,k} using the input-output relation of our fuzzy neural network for the α-level sets. When we use the cost function with the weighting scheme in (3.36), ∂e_{n,α,k}/∂[V_{j,k}]_α^L and ∂e_{n,α,k}/∂[V_{j,k}]_α^U are calculated as follows.

[Calculation of ∂e_{n,α,k}/∂[V_{j,k}]_α^L]

(i) If [V_{j,k}]_α^L ≥ 0, then

∂e_{n,α,k}/∂[V_{j,k}]_α^L = δ_k^L ( [Z_{j,k}]_α^L + (x_{k,n} − x_k)[Z_{j,k}]_α^L(1 − [Z_{j,k}]_α^L)[W_{j,k}]_α^L − (∂[f_k(x, y_{T,k}(x_{k,n}, P_k))]_α^L/∂[y_{T,k}(x_{k,n}, P_k)]_α^L)(x_{k,n} − x_k)[Z_{j,k}]_α^L ),

where δ_k^L = [dy_{T,k}(x_{k,n}, P_k)/dx]_α^L − [f_k]_α^L.

(ii) If [V_{j,k}]_α^L < 0, then

∂e_{n,α,k}/∂[V_{j,k}]_α^L = δ_k^L ( [Z_{j,k}]_α^U + (x_{k,n} − x_k)[Z_{j,k}]_α^U(1 − [Z_{j,k}]_α^U)[W_{j,k}]_α^U − (∂[f_k]_α^L/∂[y_{T,k}(x_{k,n}, P_k)]_α^L)(x_{k,n} − x_k)[Z_{j,k}]_α^U ).

[Calculation of ∂e_{n,α,k}/∂[V_{j,k}]_α^U]

(i) If [V_{j,k}]_α^U ≥ 0, then

∂e_{n,α,k}/∂[V_{j,k}]_α^U = δ_k^U ( [Z_{j,k}]_α^U + (x_{k,n} − x_k)[Z_{j,k}]_α^U(1 − [Z_{j,k}]_α^U)[W_{j,k}]_α^U − (∂[f_k(x, y_{T,k}(x_{k,n}, P_k))]_α^U/∂[y_{T,k}(x_{k,n}, P_k)]_α^U)(x_{k,n} − x_k)[Z_{j,k}]_α^U ),
where δ_k^U = [dy_{T,k}(x_{k,n}, P_k)/dx]_α^U − [f_k]_α^U.

(ii) If [V_{j,k}]_α^U < 0, then

∂e_{n,α,k}/∂[V_{j,k}]_α^U = δ_k^U ( [Z_{j,k}]_α^L + (x_{k,n} − x_k)[Z_{j,k}]_α^L(1 − [Z_{j,k}]_α^L)[W_{j,k}]_α^L − (∂[f_k]_α^U/∂[y_{T,k}(x_{k,n}, P_k)]_α^U)(x_{k,n} − x_k)[Z_{j,k}]_α^L ).

In our fuzzy neural network, the connection weights and biases to the hidden units are updated in the same manner as the parameter values of the fuzzy connection weights V_{j,k}:

w_{j,k}^(q)(t + 1) = w_{j,k}^(q)(t) + Δw_{j,k}^(q)(t),
Δw_{j,k}^(q)(t) = −η ∂e_{n,α,k}/∂w_{j,k}^(q) + α Δw_{j,k}^(q)(t − 1).

Thus our problem is to calculate the derivatives ∂e_{n,α,k}/∂w_{j,k}^(q), which we rewrite as

∂e_{n,α,k}/∂w_{j,k}^(q) = (∂e_{n,α,k}/∂[W_{j,k}]_α^L)(∂[W_{j,k}]_α^L/∂w_{j,k}^(q)) + (∂e_{n,α,k}/∂[W_{j,k}]_α^U)(∂[W_{j,k}]_α^U/∂w_{j,k}^(q)).

In this formulation, ∂[W_{j,k}]_α^L/∂w_{j,k}^(q) and ∂[W_{j,k}]_α^U/∂w_{j,k}^(q) are easily calculated from the membership function of the fuzzy connection weight W_{j,k}. The derivatives ∂e_{n,α,k}/∂[W_{j,k}]_α^L and ∂e_{n,α,k}/∂[W_{j,k}]_α^U can be calculated from the cost function e_{n,α,k} using the input-output relation of our fuzzy neural network for the α-level sets. When we use the cost function with the weighting scheme in (3.36), ∂e_{n,α,k}/∂[W_{j,k}]_α^L is calculated as follows.

[Calculation of ∂e_{n,α,k}/∂[W_{j,k}]_α^L]

(i) If [V_{j,k}]_α^L ≥ 0, then

∂e_{n,α,k}/∂[W_{j,k}]_α^L = δ_k^L [ [V_{j,k}]_α^L [Z_{j,k}]_α^L(1 − [Z_{j,k}]_α^L)x_{k,n} + (x_{k,n} − x_k)[V_{j,k}]_α^L( [Z_{j,k}]_α^L(1 − [Z_{j,k}]_α^L) + x_{k,n}[W_{j,k}]_α^L [Z_{j,k}]_α^L(1 − [Z_{j,k}]_α^L)(1 − 2[Z_{j,k}]_α^L) ) − (∂[f_k(x, y_{T,k}(x_{k,n}, P_k))]_α^L/∂[y_{T,k}(x_{k,n}, P_k)]_α^L)(x_{k,n} − x_k)[V_{j,k}]_α^L [Z_{j,k}]_α^L(1 − [Z_{j,k}]_α^L)x_{k,n} ].

(ii) If [V_{j,k}]_α^U < 0, then

∂e_{n,α,k}/∂[W_{j,k}]_α^L = δ_k^U [ [V_{j,k}]_α^U [Z_{j,k}]_α^L(1 − [Z_{j,k}]_α^L)x_{k,n} + (x_{k,n} − x_k)[V_{j,k}]_α^U( [Z_{j,k}]_α^L(1 − [Z_{j,k}]_α^L) + x_{k,n}[W_{j,k}]_α^L [Z_{j,k}]_α^L(1 − [Z_{j,k}]_α^L)(1 − 2[Z_{j,k}]_α^L) ) − (∂[f_k(x, y_{T,k}(x_{k,n}, P_k))]_α^U/∂[y_{T,k}(x_{k,n}, P_k)]_α^U)(x_{k,n} − x_k)[V_{j,k}]_α^U [Z_{j,k}]_α^L(1 − [Z_{j,k}]_α^L)x_{k,n} ].

In the other cases, ∂e_{n,α,k}/∂[W_{j,k}]_α^U is calculated in the same manner, and the fuzzy biases to the hidden units are updated in the same manner as the fuzzy connection weights to the hidden units and the fuzzy connection weights to the output unit.

Appendix B. Derivation of a learning algorithm in partially fuzzy neural networks

Let us denote the fuzzy connection weight V_{j,k} to the output unit by its parameter values as V_{j,k} = (v_{j,k}^(1), ..., v_{j,k}^(q), ..., v_{j,k}^(r)). The amount of modification of each parameter value is written as [16, 26]

v_{j,k}^(q)(t + 1) = v_{j,k}^(q)(t) + Δv_{j,k}^(q)(t),
Δv_{j,k}^(q)(t) = −η ∂e_{n,α,k}/∂v_{j,k}^(q) + α Δv_{j,k}^(q)(t − 1).
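This update rule is ordinary gradient descent with momentum applied to each parameter value v^(q). A toy sketch, with the defaults mirroring the learning and momentum constants η_0 = 0.3 and α_0 = 0.2 used in the simulations (the gradient values below are hypothetical placeholders):

```python
def update(v, grad, prev_delta, eta=0.3, alpha=0.2):
    # delta v(t) = -eta * de/dv + alpha * delta v(t - 1)
    delta = -eta * grad + alpha * prev_delta
    return v + delta, delta

v, prev = 1.0, 0.0
for g in [0.5, 0.4, 0.3]:  # three successive (hypothetical) gradients
    v, prev = update(v, g, prev)
print(round(v, 6))  # 0.58
```

The momentum term reuses the previous step Δv(t − 1), which smooths successive adjustments of the same parameter.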
Here t indexes the number of adjustments, η is a learning rate (a positive real number) and α is a momentum term constant (a positive real number). Thus our problem is to calculate the derivatives ∂e_{n,α,k}/∂v_{j,k}^(q), which we rewrite as

∂e_{n,α,k}/∂v_{j,k}^(q) = (∂e_{n,α,k}/∂[V_{j,k}]_α^L)(∂[V_{j,k}]_α^L/∂v_{j,k}^(q)) + (∂e_{n,α,k}/∂[V_{j,k}]_α^U)(∂[V_{j,k}]_α^U/∂v_{j,k}^(q)).

In this formulation, ∂[V_{j,k}]_α^L/∂v_{j,k}^(q) and ∂[V_{j,k}]_α^U/∂v_{j,k}^(q) are easily calculated from the membership function of the fuzzy connection weight V_{j,k}. On the other hand, the derivatives ∂e_{n,α,k}/∂[V_{j,k}]_α^L and ∂e_{n,α,k}/∂[V_{j,k}]_α^U are independent of the shape of the fuzzy connection weight; they can be calculated from the cost function e_{n,α,k} using the input-output relation of our partially fuzzy neural network for the α-level sets. When we use the cost function with the weighting scheme in (3.41), they are calculated as follows.

[Calculation of ∂e_{n,α,k}/∂[V_{j,k}]_α^L]

∂e_{n,α,k}/∂[V_{j,k}]_α^L = δ_k^L ( ∂[N_k(x_{k,n}, P_k)]_α^L/∂[V_{j,k}]_α^L + (x_{k,n} − x_k) z_{j,k}(1 − z_{j,k})w_{j,k} − (∂[f_k]_α^L/∂[y_{T,k}(x_{k,n}, P_k)]_α^L) ∂[y_{T,k}(x_{k,n}, P_k)]_α^L/∂[V_{j,k}]_α^L ),

where

δ_k^L = [dy_{T,k}(x_{k,n}, P_k)/dx]_α^L − [f_k(x_{k,n}, y_{T,k}(x_{k,n}, P_k))]_α^L,
∂[N_k(x_{k,n}, P_k)]_α^L/∂[V_{j,k}]_α^L = z_{j,k},
∂[y_{T,k}(x_{k,n}, P_k)]_α^L/∂[V_{j,k}]_α^L = (x_{k,n} − x_k) z_{j,k}.

[Calculation of ∂e_{n,α,k}/∂[V_{j,k}]_α^U]

∂e_{n,α,k}/∂[V_{j,k}]_α^U = δ_k^U ( ∂[N_k(x_{k,n}, P_k)]_α^U/∂[V_{j,k}]_α^U + (x_{k,n} − x_k) z_{j,k}(1 − z_{j,k})w_{j,k} − (∂[f_k]_α^U/∂[y_{T,k}(x_{k,n}, P_k)]_α^U) ∂[y_{T,k}(x_{k,n}, P_k)]_α^U/∂[V_{j,k}]_α^U ),

where

δ_k^U = [dy_{T,k}(x_{k,n}, P_k)/dx]_α^U − [f_k]_α^U,
∂[N_k(x_{k,n}, P_k)]_α^U/∂[V_{j,k}]_α^U = z_{j,k},
∂[y_{T,k}(x_{k,n}, P_k)]_α^U/∂[V_{j,k}]_α^U = (x_{k,n} − x_k) z_{j,k}.

In our partially fuzzy neural network, the connection weights and biases to the hidden units are real numbers. The non-fuzzy connection weight w_{j,k} to the j-th hidden unit is updated in the same manner as the parameter values of the fuzzy connection weight V_{j,k}:

w_{j,k}(t + 1) = w_{j,k}(t) + Δw_{j,k}(t),
Δw_{j,k}(t) = −η ∂e_{n,α,k}/∂w_{j,k} + α Δw_{j,k}(t − 1).

The derivative ∂e_{n,α,k}/∂w_{j,k} can be calculated from the cost function e_{n,α,k} using the input-output relation of our partially fuzzy neural network for the α-level sets. When we use the cost function with the weighting scheme in (3.41), it is calculated as

∂e_{n,α,k}/∂w_{j,k} = δ_k^L [ ∂[N_k(x_{k,n}, P_k)]_α^L/∂w_{j,k} + (x_{k,n} − x_k)[V_{j,k}]_α^L( z_{j,k}(1 − z_{j,k}) + x_{k,n} w_{j,k} z_{j,k}(1 − z_{j,k})(1 − 2z_{j,k}) ) − (∂[f_k]_α^L/∂[y_{T,k}(x_{k,n}, P_k)]_α^L) ∂[y_{T,k}(x_{k,n}, P_k)]_α^L/∂w_{j,k} ] + δ_k^U [ ∂[N_k(x_{k,n}, P_k)]_α^U/∂w_{j,k} + (x_{k,n} − x_k)[V_{j,k}]_α^U( z_{j,k}(1 − z_{j,k}) + x_{k,n} w_{j,k} z_{j,k}(1 − z_{j,k})(1 − 2z_{j,k}) ) − (∂[f_k(x_{k,n}, y_{T,k}(x_{k,n}, P_k))]_α^U/∂[y_{T,k}(x_{k,n}, P_k)]_α^U) ∂[y_{T,k}(x_{k,n}, P_k)]_α^U/∂w_{j,k} ],

where

∂[N_k(x_{k,n}, P_k)]_α^L/∂w_{j,k} = (∂[N_k(x_{k,n}, P_k)]_α^L/∂z_{j,k})(∂z_{j,k}/∂net_{j,k})(∂net_{j,k}/∂w_{j,k}) = [V_{j,k}]_α^L z_{j,k}(1 − z_{j,k})x_{k,n},
∂[N_k(x_{k,n}, P_k)]_α^U/∂w_{j,k} = (∂[N_k(x_{k,n}, P_k)]_α^U/∂z_{j,k})(∂z_{j,k}/∂net_{j,k})(∂net_{j,k}/∂w_{j,k}) = [V_{j,k}]_α^U z_{j,k}(1 − z_{j,k})x_{k,n},
∂[y_{T,k}(x_{k,n}, P_k)]_α^L/∂w_{j,k} = (x_{k,n} − x_k) ∂[N_k(x_{k,n}, P_k)]_α^L/∂w_{j,k},
∂[y_{T,k}(x_{k,n}, P_k)]_α^U/∂w_{j,k} = (x_{k,n} − x_k) ∂[N_k(x_{k,n}, P_k)]_α^U/∂w_{j,k}.

The non-fuzzy biases to the hidden units are updated in the same manner as the non-fuzzy connection weights to the hidden units.

References

[1] S. Abbasbandy, J. J. Nieto and M. Alavi, Tuning of reachable set in one dimensional fuzzy differential inclusions, Chaos, Solitons & Fractals 26 (2005).
[2] S. Abbasbandy, T. Allahviranloo, O. Lopez-Pouso and J. J. Nieto, Numerical methods for fuzzy differential inclusions, Computers & Mathematics with Applications 48 (2004).
[3] S. Abbasbandy and M. Otadi, Numerical solution of fuzzy polynomials by fuzzy neural network, Appl. Math. Comput. 181 (2006).
[4] S. Abbasbandy, M. Otadi and M. Mosleh, Numerical solution of a system of fuzzy polynomials by fuzzy neural network, Inform. Sci. 178 (2008).
[5] G. Alefeld and J. Herzberger, Introduction to Interval Computations, Academic Press, New York, 1983.
[6] T. Allahviranloo, E. Ahmady, N. Ahmady, Nth-order fuzzy linear differential equations, Inform. Sci. 178 (2008).
[7] T. Allahviranloo, N. Ahmady, E. Ahmady, Numerical solution of fuzzy differential equations by predictor-corrector method, Inform. Sci. 177 (2007).
[8] T. Allahviranloo, N. A. Kiani, N. Motamedi, Solving fuzzy differential equations by differential transformation method, Inform. Sci. 179 (2009).
[9] B. Bede, I. J. Rudas, A. L. Bencsik, First order linear fuzzy differential equations under generalized differentiability, Inform. Sci. 177 (2007).
[10] J. J. Buckley and T. Feuring, Fuzzy differential equations, Fuzzy Sets and Systems 110 (2000).
[11] S. L. Chang, L. A. Zadeh, On fuzzy mapping and control, IEEE Trans. Systems Man Cybernet. 2 (1972).
[12] M. Chen, C. Wu, X. Xue, G. Liu, On fuzzy boundary value problems, Inform. Sci. 178 (2008).
[13] D. Dubois, H. Prade, Towards fuzzy differential calculus: Part 3, differentiation, Fuzzy Sets and Systems 8 (1982).
[14] R. Goetschel, W. Voxman, Elementary fuzzy calculus, Fuzzy Sets and Systems 18 (1986).
[15] M. T. Hagan, H. B. Demuth, M. Beale, Neural Network Design, PWS Publishing Company, Massachusetts, 1996.
[16] H. Ishibuchi, K. Kwon and H. Tanaka, A learning algorithm of fuzzy neural networks with triangular fuzzy weights, Fuzzy Sets and Systems 71 (1995).
[17] H. Ishibuchi, K. Morioka, I. B. Turksen, Learning by fuzzified neural networks, Int. J. Approximate Reasoning 13 (1995).
[18] H. Ishibuchi, M. Nii, Numerical analysis of the learning of fuzzified neural networks from fuzzy if-then rules, Fuzzy Sets and Systems 120 (2001).
[19] H. Ishibuchi, H. Tanaka, H. Okada, Fuzzy neural networks with fuzzy weights and fuzzy biases, Proceedings of 1993 IEEE International Conference on Neural Networks, 1993.
[20] O. Kaleva, Fuzzy differential equations, Fuzzy Sets and Systems 24 (1987).
[21] H. Kim and R. Sakthivel, Numerical solution of hybrid fuzzy differential equations using improved predictor-corrector method, Commun. Nonlinear Sci. Numer. Simulat. 17 (2012).
[22] P. V. Krishnamraju, J. J. Buckley, K. D. Reilly, Y. Hayashi, Genetic learning algorithms for fuzzy neural nets, Proceedings of 1994 IEEE International Conference on Fuzzy Systems, 1994.
[23] M. Ma, M. Friedman and A. Kandel, Numerical solutions of fuzzy differential equations, Fuzzy Sets and Systems 105 (1999).
[24] A. J. Meade Jr., A. A. Fernandez, Solution of nonlinear ordinary differential equations by feedforward neural networks, Mathematical and Computer Modelling 20 (1994).
[25] M. Mosleh, Numerical solution of fuzzy differential equations under generalized differentiability by fuzzy neural network, Int. J. Industrial Mathematics 5 (2013).
[26] M. Mosleh and M. Otadi, Simulation and evaluation of fuzzy differential equations by fuzzy neural network, Applied Soft Computing 12 (2012).
[27] M. Mosleh, M. Otadi and S. Abbasbandy, Evaluation of fuzzy regression models by fuzzy neural network, Journal of Computational and Applied Mathematics 234 (2010).
[28] M. Mosleh, M. Otadi and S. Abbasbandy, Fuzzy polynomial regression with fuzzy neural networks, Applied Mathematical Modelling 35 (2011).
[29] M. Mosleh, T. Allahviranloo and M. Otadi, Evaluation of fully fuzzy regression models by fuzzy neural network, Neural Comput. and Applications 21 (2012).
[30] M. Otadi and M. Mosleh, Simulation and evaluation of dual fully fuzzy linear systems by fuzzy neural network, Applied Mathematical Modelling 35 (2011).
[31] M. Otadi, M. Mosleh and S. Abbasbandy, Numerical solution of fully fuzzy linear systems by fuzzy neural network, Soft Computing 15 (2011).
[32] S. Pederson, M. Sambandham, Numerical solution to hybrid fuzzy systems, Mathematical and Computer Modelling 45 (2007).
[33] S. Pederson, M. Sambandham, Numerical solution of hybrid fuzzy differential equation IVPs by a characterization theorem, Inform. Sci. 179 (2009).
[34] P. Picton, Neural Networks, second ed., Palgrave, Great Britain, 2000.
[35] M. L. Puri, D. A. Ralescu, Differentials of fuzzy functions, J. Math. Anal. Appl. 91 (1983).
[36] O. Sedaghatfar, P. Darabi and S. Moloudzadeh, A method for solving first order fuzzy differential equation, Int. J. Industrial Mathematics 5 (2013).
[37] S. Seikkala, On the fuzzy initial value problem, Fuzzy Sets and Systems 24 (1987).
[38] Wu Congxin, Ma Ming, On embedding problem of fuzzy number space, Fuzzy Sets and Systems 44 (1991) 33-38.
[39] L. A. Zadeh, The concept of a linguistic variable and its application to approximate reasoning: Parts 1-3, Inform. Sci. 9 (1975).

Mahmood Otadi was born in 1978 in Iran. He received the B.S., M.S., and Ph.D. degrees in applied mathematics from Islamic Azad University, Iran, in 2001, 2003 and 2008, respectively. He is currently an Associate Professor in the Department of Mathematics, Firoozkooh Branch, Islamic Azad University, Firoozkooh, Iran. He is actively pursuing research in fuzzy modeling, fuzzy neural networks, fuzzy linear systems, fuzzy differential equations and fuzzy integral equations. He has published numerous papers in these areas.

Maryam Mosleh was born in 1979 in Iran. She received the B.S., M.S., and Ph.D. degrees in applied mathematics from Islamic Azad University, Iran, in 2001, 2003 and 2009, respectively. She is currently an Associate Professor in the Department of Mathematics, Firoozkooh Branch, Islamic Azad University, Firoozkooh, Iran. She is actively pursuing research in fuzzy modeling, fuzzy neural networks, fuzzy linear systems, fuzzy differential equations and fuzzy integral equations. She has published numerous papers in these areas.
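The gradient formulas in the learning algorithm above have a compact crisp analogue. The sketch below is not the authors' fuzzy implementation: it trains an ordinary one-hidden-layer sigmoid network on a crisp initial value problem y' = f(x, y), using the same trial-solution form y_T(x) = y_0 + (x - x_0)N(x) and the same chain-rule derivatives (e.g. dN/dw = V z(1-z) x for an input weight w). All names and hyperparameters are illustrative; the fuzzy method applies the identical updates to the lower [.]^L and upper [.]^U level-set endpoints separately.

```python
# Crisp (non-fuzzy) sketch of the trial-solution / gradient-descent idea.
import math
import random

def solve_ode(f, dfdy, y0, x0=0.0, x1=1.0, n_hidden=5, n_points=10,
              lr=0.02, epochs=3000, seed=0):
    """Train y_T(x) = y0 + (x - x0) * N(x) so that y_T' ~ f(x, y_T)."""
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1) for _ in range(n_hidden)]   # input weights
    b = [rng.uniform(-1, 1) for _ in range(n_hidden)]   # hidden biases
    v = [rng.uniform(-1, 1) for _ in range(n_hidden)]   # output weights V
    xs = [x0 + (x1 - x0) * i / (n_points - 1) for i in range(n_points)]

    def sig(t):
        return 1.0 / (1.0 + math.exp(-t))

    for _ in range(epochs):
        for x in xs:
            z = [sig(w[j] * x + b[j]) for j in range(n_hidden)]
            N = sum(v[j] * z[j] for j in range(n_hidden))
            dNdx = sum(v[j] * z[j] * (1 - z[j]) * w[j] for j in range(n_hidden))
            yT = y0 + (x - x0) * N
            dyT = N + (x - x0) * dNdx           # d/dx of the trial solution
            r = dyT - f(x, yT)                  # residual of y' = f(x, y)
            fy = dfdy(x, yT)
            for j in range(n_hidden):
                zp = z[j] * (1 - z[j])          # z'(net), as in the paper
                zpp = zp * (1 - 2 * z[j])       # derivative of z(1-z) w.r.t. net
                dN_dv, dN_dw, dN_db = z[j], v[j] * zp * x, v[j] * zp
                ddx_dv = zp * w[j]
                ddx_dw = v[j] * (zp + w[j] * x * zpp)
                ddx_db = v[j] * w[j] * zpp
                # chain rule through both y_T' and f(x, y_T)
                g_v = 2 * r * (dN_dv + (x - x0) * ddx_dv - fy * (x - x0) * dN_dv)
                g_w = 2 * r * (dN_dw + (x - x0) * ddx_dw - fy * (x - x0) * dN_dw)
                g_b = 2 * r * (dN_db + (x - x0) * ddx_db - fy * (x - x0) * dN_db)
                v[j] -= lr * g_v
                w[j] -= lr * g_w
                b[j] -= lr * g_b

    def y(x):
        z = [sig(w[j] * x + b[j]) for j in range(n_hidden)]
        return y0 + (x - x0) * sum(v[j] * z[j] for j in range(n_hidden))
    return y

# y' = y, y(0) = 1, whose exact solution is exp(x)
y = solve_ode(f=lambda x, t: t, dfdy=lambda x, t: 1.0, y0=1.0)
print("error at x = 0.5:", abs(y(0.5) - math.exp(0.5)))
```

By construction the trial solution satisfies the initial condition exactly (y_T(x_0) = y_0), so the network only has to drive the residual of the differential equation to zero at the training points.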