NestedNet: Learning Nested Sparse Structures in Deep Neural Networks

Eunwoo Kim    Chanho Ahn    Songhwai Oh
Department of ECE and ASRI, Seoul National University, South Korea
{kewoo15, mychahn, ...}

Abstract

Recently, there have been increasing demands to construct compact deep architectures to remove unnecessary redundancy and to improve the inference speed. While many recent works focus on reducing the redundancy by eliminating unneeded weight parameters, it is not possible to apply a single deep network to multiple devices with different resources. When a new device or circumstantial condition requires a new deep architecture, it is necessary to construct and train a new network from scratch. In this work, we propose a novel deep learning framework, called a nested sparse network, which exploits an n-in-1-type nested structure in a neural network. A nested sparse network consists of multiple levels of networks with a different sparsity ratio associated with each level, and higher level networks share parameters with lower level networks to enable stable nested learning. The proposed framework realizes a resource-aware versatile architecture, as the same network can meet diverse resource requirements, i.e., the anytime property. Moreover, the proposed nested network can learn different forms of knowledge in its internal networks at different levels, enabling multiple tasks using a single network, such as coarse-to-fine hierarchical classification. In order to train the proposed nested network, we propose efficient weight connection learning and channel and layer scheduling strategies. We evaluate our network on multiple tasks, including adaptive deep compression, knowledge distillation, and learning class hierarchy, and demonstrate that nested sparse networks perform competitively, but more efficiently, compared to existing methods.

1. Introduction

Deep neural networks have recently become a standard architecture due to their significant performance improvement over traditional machine learning models in a number of fields, such as image recognition [10, 13, 18], object detection [24], image generation [7], and natural language processing [4, 26]. The successful outcomes are derived from the availability of massive labeled data and computational power to process such data. What is more, many studies have been conducted toward very deep and dense models [10, 13, 25, 32] to achieve further performance gain. Despite this success, the remarkable progress has been accomplished at the expense of intensive computational and memory requirements, which can limit deep networks in practical use, especially on mobile devices with low computing capability. In particular, if the size of a network architecture is designed to be colossal, it may be problematic for the network to achieve mission-critical tasks on a commercial device which requires real-time operation. Fortunately, it is well known that there exists much redundancy in most deep architectures, i.e., a small number of network parameters can represent the whole deep network in substance [20].

Figure 1. Conceptual illustration of the nested sparse network with n nested levels (n-in-1 network, n=4 here). A nested sparse network consists of internal networks from the core level (with the sparsest parameters) to the full level (with all parameters), and an internal network shares its parameters with higher level networks, making a network-in-network structure. Since the nested sparse network produces multiple different outputs, it can be leveraged for multiple tasks as well as multiple devices with different resources.
This motivates many researchers to exploit the redundancy from multiple points of view. The concept of sparse representation is to elucidate the redundancy by representing a network with a small number of representative parameters. Most sparse deep networks prune connections with insignificant contributions [3, 8, 9, 23, 30, 34],

prune the number of channels [11, 21, 23], or prune the number of layers [28] by sparse regularization. Another regularization strategy to eliminate network redundancy is low-rank approximation, which approximates weight tensors by minimizing the reconstruction error between the original network and the reduced network [14, 16, 27, 33, 34]. Weight tensors can be approximated by decomposing them into tensors of pre-specified sizes [14, 16, 27, 33] or by solving a nuclear-norm regularized optimization problem [34]. Obviously, developing a compact deep architecture is beneficial to satisfy the specification of a device with low capacity. However, it is difficult for a learned compact network to be adjusted to different hardware specifications (e.g., different sparsity levels), since a deep neural network normally learns parameters for a given task. When a new model or a device with a different computing budget is required, we usually define a new network again manually by trial and error. Likewise, if a different form of knowledge is required from a trained network, it is hard to keep the learned knowledge while training the same network again or training a new network [12]. In general, to perform multiple tasks, we need multiple networks, at the cost of considerable computation and memory footprint.

In this work, we aim to exploit a nested structure in a deep neural architecture which realizes an n-in-1 versatile network that conducts multiple tasks within a single neural network (see Figure 1). In a nested structure, network parameters are assigned to multiple sets of nested levels, such that a low level set is a subset of the parameters of a higher level set. Different sets can capture different forms of knowledge according to the type (or amount) of information, making it possible to perform multiple tasks using a single network. To this end, we propose a nested sparse network, termed NestedNet, which consists of multiple levels of networks with different sparsity ratios (nested levels), where an internal network with a lower nested level (higher sparsity) shares its parameters with the internal networks of higher nested levels in a network-in-network fashion. Thus, a lower level internal network can learn common knowledge while a higher level internal network can learn task-specific knowledge. It is well known that early layers of deep neural networks capture general knowledge and later layers learn task-specific knowledge. A nested network learns a more systematic hierarchical representation and has an effect of grouping analogous filters for each nested level, as shown in Section 5.1. NestedNet also enjoys another useful property, called the anytime property [19, 35]: it can produce an early (coarse) prediction with a low level network and a more accurate (fine) answer with a higher level network. Furthermore, unlike existing networks, the nested sparse network can learn different forms of knowledge in its internal networks at different levels. Hence, the same network can be applied to multiple tasks satisfying different resource requirements, which can reduce the effort to train separate existing networks. In addition, consensus of different knowledge in a nested network can further improve the performance of the overall network. In order to exploit the nested structure, we present several pruning strategies which can be used to learn parameters from scratch using off-the-shelf deep learning libraries. We also provide applications in which the nested structure can be applied, such as adaptive deep compression, knowledge distillation, and hierarchical classification.
Experimental results demonstrate that NestedNet performs competitively compared to popular baselines and other existing sparse networks. In particular, our results in each application (and on each dataset) are produced from a single nested network, making NestedNet highly efficient compared with currently available approaches. In summary, the main contributions of this work are:

- We present an efficient connection pruning method, which learns sparse connections from scratch. We also provide channel and layer pruning by scheduling to exploit the nested structure and to avoid the need to train multiple different networks.

- We propose an n-in-1 nested sparse network to realize the nested structure in a deep network. The nested structure enables not only resource-aware anytime prediction but also knowledge-aware adaptive learning for various tasks which are not compatible with existing deep architectures. Besides, consensus of multiple forms of knowledge can improve the prediction of NestedNet.

- The proposed nested networks are evaluated on various applications in order to demonstrate their efficiency and versatility at comparable performance.

2. Related Work

A naïve approach to compress a deep neural network is to prune network connections by sparse approximation. Han et al. [9] proposed an iterative prune-and-retrain approach using $\ell_1$- or $\ell_2$-norm regularization. Zhou et al. [34] proposed a forward-backward splitting method to solve the $\ell_1$-norm regularized optimization problem. Note that it is difficult for weight pruning methods with non-structured sparsity to achieve a valid speed-up on standard machines due to their irregular memory access [28]. Channel pruning approaches were proposed based on structured sparsity regularization [28] and channel selection methods [11, 23]. Since they reduce the actual number of parameters, they have benefits in computational and memory resources compared to the weight connection pruning methods. Layer pruning [28] is another viable approach for compression when the parameters associated with a layer have little contribution in a deep neural network using short-cut connections [10].

There is another line of work that compresses deep networks by low-rank approximation, where weight tensors are approximated by low-rank tensor decomposition [14, 16, 27, 33] or by solving a nuclear-norm regularized optimization problem [34]. It can save memory and enable a valid speed-up for both training and inference. The low-rank approximation approaches, however, normally require a pre-trained model when optimizing parameters to reduce the reconstruction error with respect to the originally learned parameters.

It is important to note here that networks learned using the above compression approaches are difficult to utilize for different tasks, such as different compression ratios, since the learned parameters are trained for a single task (or a given compression ratio). If a new compression ratio is required, one can train a new network with manual model parameter tuning from scratch, or further tune the trained network to suit the new demand¹, and this procedure will be conducted continually whenever the form of the model is changed, requiring additional resources and effort. This difficulty can be fully addressed using the proposed nested sparse network. It can embody multiple internal networks within a network and perform different tasks at the cost of learning a single network. Furthermore, since the nested sparse network is constructed from scratch, the effort to learn a baseline network is not needed.

¹ Additional tuning on a trained network with a new requirement can be accompanied by forgetting the learned knowledge [22].

There have been studies to build a compact network from a learned large network, called knowledge distillation, while maintaining the knowledge of the large network [2, 12]. It shares its intention with the deep compression approaches but utilizes the teacher-student paradigm to ease the training of networks [2]. Since it constructs a separate student network from a learned teacher network, its efficiency is also limited, similar to deep compression models.

The proposed nested structure is also related to tree-structured deep architectures. Hierarchical structures in a deep neural network have recently been exploited for improved learning [15, 19, 29]. Yan et al. [29] proposed a hierarchical architecture that outputs coarse-to-fine predictions using different internal networks. Kim et al. [15] proposed a structured deep network that enables model parallelization and a more compact model compared with previous hierarchical deep networks. However, their networks do not have a nested structure, since parameters in their networks form independent groups in the hierarchy, so they cannot enjoy the benefit of nested learning for sharing knowledge from coarse- to fine-level sets of parameters. This limitation is discussed further in Section 5.3.

3. Compressing a Neural Network

Given a set of training examples X = [x_1, x_2, ..., x_n] and labels Y = [y_1, y_2, ..., y_n], where n is the number of samples, a neural network learns a set of parameters W by minimizing the following optimization problem:

$$\min_{\mathcal{W}} \; \mathcal{L}\big(Y, f(X, \mathcal{W})\big) + \lambda \mathcal{R}(\mathcal{W}), \tag{1}$$

where L is a loss function between the network output and the ground-truth label, R is a regularizer which constrains the weight parameters, and λ is a weighting factor balancing the loss and the regularizer. f(X, W) produces outputs according to the purpose of the task, such as classification (binary number) or regression (real number), through a chain of linear and nonlinear operations using the parameters W.
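For concreteness, problem (1) with a cross-entropy loss L and an $\ell_1$ regularizer R could look as follows. This is a minimal NumPy sketch under assumed toy shapes, not the implementation used in the paper:

```python
import numpy as np

def objective(W, logits, labels, lam=1e-4):
    """Eq. (1): loss L (softmax cross-entropy) plus lambda * R(W), with R = l1."""
    # L(Y, f(X, W)): the network outputs f(X, W) are taken as precomputed `logits`
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    ce = -np.log(p[np.arange(len(labels)), labels]).mean()
    # R(W): l1 penalty summed over all layer weights (one possible regularizer)
    return ce + lam * sum(np.abs(w).sum() for w in W)

W = [np.random.randn(8, 4)]            # toy one-layer parameter set
logits = np.random.randn(5, 4)         # f(X, W) for a batch of 5 samples
print(objective(W, logits, labels=np.array([0, 1, 2, 3, 0])))
```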
A set of parameters is represented by W = {W_l}_{1≤l≤L}, where L is the number of layers in the network, and W_l ∈ R^{k_w × k_h × c_i × c_o} for a convolutional weight or W_l ∈ R^{c_i × c_o} for a fully-connected weight in popular deep learning architectures such as AlexNet [18], VGG networks [25], and residual networks [10]. Here, k_w and k_h are the width and height of a convolutional kernel, and c_i and c_o are the numbers of input and output channels (or activations²), respectively. In order to exploit a sparse structure in a neural network, many studies try to enforce constraints on W, such as sparsity using $\ell_1$ [9, 34] or $\ell_2$ weight decay [9] and low-rank-ness using the nuclear norm [34] or tensor factorization [14, 31]. However, many previous studies utilize a pre-trained network and then prune connections in the network to develop a parsimonious network, which usually requires significant additional computation.

² Activations denote neurons in fully-connected layers.

4. Nested Sparse Networks

4.1. Sparsity learning by pruning

We investigate three pruning approaches for sparse deep learning: (entry-wise) weight connection pruning, channel pruning, and layer pruning, which are used for the nested sparse networks described in Section 4.2. To achieve weight connection pruning, pruning strategies were previously proposed to reduce learned parameters using a pre-defined threshold [9] or a subgradient method [34]. However, they require additional pruning steps to sparsify a learned dense network. As an alternative, we propose an efficient sparse connection learning approach which learns from scratch, without additional pruning steps, using standard optimization tools. The problem formulation is as follows:

$$\min_{\mathcal{W}} \; \mathcal{L}\big(Y, f(X, P_{\Omega_{\mathcal{M}}}(\mathcal{W}))\big) + \lambda \mathcal{R}\big(P_{\Omega_{\mathcal{M}}}(\mathcal{W})\big) \quad \text{s.t.} \;\; \mathcal{M} = \sigma\big(\alpha(\mathcal{W}) - \tau\big), \tag{2}$$

where P(·) is the projection operator, Ω_M denotes the support set of M = {M_l}_{1≤l≤L}, α(·) is the element-wise absolute operator, σ(·) is an activation function that encodes a binary output (such as the unit-step function), and τ is a pre-defined threshold value for pruning.

Since the discontinuity of the unit-step function makes learning problem (2) by standard backpropagation difficult, we present a simple approximated unit-step function:

$$\sigma(x) = \tanh(\gamma \, x_+), \tag{3}$$

where x_+ = max(x, 0), tanh(·) is the hyperbolic tangent function, and γ is a large value chosen to mimic the slope of the unit-step function.³ Note that M acts as an implicit mask of W to reveal a sparse weight tensor. Once an element of M becomes 0, its corresponding weight is no longer updated in the optimization procedure, making no further contribution to the network. By solving (2), we can construct a sparse deep network with off-the-shelf deep learning libraries without additional effort.

³ We set γ = 10^5; in our empirical experience, the result is not sensitive to the initial values of the parameters when applying the popular initialization method [6].

To achieve channel or layer pruning⁴, we consider the following weight (or channel) scheduling problem:

$$\min_{\mathcal{W}} \; \mathcal{L}\big(Y, f(X, P_{\Omega_{\bar{\mathcal{M}}}}(\mathcal{W}))\big) + \lambda \mathcal{R}\big(P_{\Omega_{\bar{\mathcal{M}}}}(\mathcal{W})\big), \tag{4}$$

where $\bar{\mathcal{M}} = \{\bar{M}_l\}_{1\le l\le L}$ consists of binary weight tensors whose numbers of input and output channels (or activations for fully-connected layers) are reduced to numbers smaller than those of the baseline architecture, so as to fulfill the demanded sparsity. In other words, we model a network with a single set of scheduled channel numbers using $\bar{\mathcal{M}}$ and then optimize W in the network from scratch. Achieving multiple sparse networks by scheduling multiple numbers of channels is described in the following section. Similar to channel pruning, implementing layer pruning is straightforward: we reduce the number of layers in repeated blocks [10]. In addition, the pruning approaches can be combined to form various nested structures, as described in the next section.

⁴ Layer pruning is applicable to structures in which weights of the same size are repeated, such as residual networks [10].
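To make the masking machinery concrete, the following is a minimal NumPy sketch of the approximated mask of eq. (3) and the projection P_{Ω_M}(W) of eq. (2). The paper implements this with standard deep learning ops so that gradients can flow through the soft mask; the toy tensor shape and threshold here are assumptions.

```python
import numpy as np

GAMMA = 1e5  # large slope so tanh(GAMMA * x) mimics the unit step (footnote 3)

def soft_mask(W, tau):
    """Approximated unit-step mask of eq. (3): sigma(x) = tanh(gamma * max(x, 0)),
    applied to alpha(W) - tau, where alpha is the element-wise absolute value."""
    return np.tanh(GAMMA * np.maximum(np.abs(W) - tau, 0.0))

def masked_weights(W, tau):
    """Projection P_Omega_M(W) of eq. (2): entries below the threshold are zeroed."""
    return soft_mask(W, tau) * W

W = np.random.randn(3, 3, 16, 32) * 0.05   # toy conv kernel (k_w, k_h, c_i, c_o)
W_sparse = masked_weights(W, tau=0.05)
print("density:", (W_sparse != 0).mean())  # fraction of surviving connections
```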
4.2. Nested sparse networks

The goal of a nested sparse network is to represent an n-in-1 nested structure of parameters in a deep neural network, allowing n_l nested internal networks as shown in Figure 1. In a nested structure, an internal network with a lower (resp., higher) nested level gives higher (resp., lower) sparsity on parameters, where higher sparsity means a smaller number of non-zero entries. In addition, the internal network of the core level (i.e., the lowest level) is the most compact sub-network among the internal networks, and the internal network of the full level (i.e., the highest level) is the fully dense network. Between them, there can be other internal networks with intermediate sparsity ratios. Importantly, an internal network of a lower nested level shares its parameters with the internal networks of higher nested levels.

Figure 2. A graphical representation of nested parameters W_l by masks with three levels between fully-connected layers.

Given a set of masks M, a nested sparse network, in which network parameters are assigned to multiple sets of nested levels, can be learned by optimizing the following problem:

$$\min_{\mathcal{W}} \; \sum_{j=1}^{n_l} \mathcal{L}\big(Y, f(X, P_{\Omega_{\mathcal{M}^j}}(\mathcal{W}))\big) + \lambda \mathcal{R}\big(P_{\Omega_{\mathcal{M}^{n_l}}}(\mathcal{W})\big) \quad \text{s.t.} \;\; \mathcal{M}^j \subseteq \mathcal{M}^k, \; \forall j \le k, \; j, k \in [1, \dots, n_l], \tag{5}$$

where n_l is the number of nested levels. Since a set of masks M^j = {M_l^j}_{1≤l≤L} represents the set of j-th nested level weights by its binary values, $P_{\Omega_{\mathcal{M}^j}}(\mathcal{W}) \subseteq P_{\Omega_{\mathcal{M}^k}}(\mathcal{W})$ for all j ≤ k, j, k ∈ [1, ..., n_l]. A simple graphical illustration of nested parameters between fully-connected layers is shown in Figure 2.

By optimizing (5), we can build a nested sparse network whose nested levels are given by M. In order to find the set of masks M, we apply the three pruning approaches described in Section 4.1. First, to realize a nested structure in entry-wise weight connections, masks are estimated by solving the weight connection pruning problem (2) with n_l different thresholds iteratively. Specifically, once the mask M^k consisting of the k-th nested level weights is obtained in a network⁵, we further train the network from the masked weights $P_{\Omega_{\mathcal{M}^k}}(\mathcal{W})$ using a higher threshold value to get another mask giving higher sparsity, and this procedure is repeated until reaching the sparsest mask of the core level. This strategy is helpful for finding sparse dominant weights [9], and in Section 5.1 a network trained with this strategy performs better than a network whose sparse mask is obtained randomly, as well as other sparse networks. For a nested sparse structure in convolutional channels or layers, we schedule a set of masks M according to the type of pruning.

⁵ Since we use the approximation in (3), an actual binary mask is obtained by additional thresholding after the mask is estimated.
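A minimal sketch of this iterative schedule is given below, reusing the soft-mask idea above with hard thresholding (footnote 5). The threshold values are those of the experiment in Section 5.1; the retraining step is elided, and everything else is an illustrative assumption rather than the authors' code.

```python
import numpy as np

def estimate_nested_masks(W, thresholds=(0.0, 0.015, 0.025)):
    """Iteratively estimate nested masks M^{n_l} >= ... >= M^1 by raising tau.
    Each pass thresholds only inside the current support, so masks stay nested."""
    masks = []
    mask = np.ones_like(W)
    for tau in thresholds:                 # from full level down to the core level
        mask = mask * (np.abs(W) > tau)    # shrink the support of the previous mask
        # ... here one would retrain W restricted to `mask` before the next pass ...
        masks.append(mask.astype(np.float32))
    return masks[::-1]                     # core level first, full level last
```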

In channel-wise scheduling, the number of channels in convolutional layers and the dimensions in fully-connected layers are scheduled to pre-specified numbers for all scheduled layers. The scheduled weights are learned by solving (5) without performing the mask estimation phase of (2). Mathematically, we represent the weights between the l-th and (l+1)-th convolutional layers, from the first (core) level weight $W_l^1 \in \mathbb{R}^{k_w \times k_h \times i_1 \times o_1}$ to the full level weight $W_l = W_l^{n_l} \in \mathbb{R}^{k_w \times k_h \times c_i \times c_o}$, where $c_i = \sum_m i_m$ and $c_o = \sum_m o_m$, as

$$W_l^1 = W_l^{1,1}, \quad W_l^2 = \begin{bmatrix} W_l^{1,1} & W_l^{1,2} \\ W_l^{2,1} & W_l^{2,2} \end{bmatrix}, \quad \dots, \quad W_l = W_l^{n_l} = \begin{bmatrix} W_l^{1,1} & W_l^{1,2} & \cdots & W_l^{1,n_l} \\ W_l^{2,1} & W_l^{2,2} & \cdots & W_l^{2,n_l} \\ \vdots & \vdots & \ddots & \vdots \\ W_l^{n_l,1} & W_l^{n_l,2} & \cdots & W_l^{n_l,n_l} \end{bmatrix}. \tag{6}$$

Figure 3 illustrates the nested sparse network with channel scheduling, where each color represents the weights of a different nested level, excluding its shared sub-level weights. For the first input layer, the observation data is not scheduled in this work (i.e., c_i is not divided). Unlike the nested sparse network with the weight pruning method, which holds a whole-size network structure at every nested level, channel scheduling only keeps and learns the parameters corresponding to the number of scheduled channels associated with a nested level, enabling a valid speed-up, especially in the inference phase. Likewise, we can schedule the number of layers and their corresponding weights in a repeated network block and learn the parameters by solving (5). Note that for a residual network consisting of 6n_b + 2 layers [10], where n_b is the number of residual blocks, if we schedule n_b = {2, 3, 5}, our single nested residual network with n_l = 3 contains three residual networks of sizes 14, 20, and 32. Among them, the full level network with n_b = 5 has the same number of parameters as a conventional residual network of size 32, without introducing further parameters.

Figure 3. A graphical representation of channel scheduling using the nested parameters between the l-th and (l+1)-th convolutional layers in a nested sparse network where both layers are scheduled. i_k and o_k denote the additional numbers of input and output channels in the k-th level, respectively.
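In code, the nesting of eq. (6) amounts to the level-k internal network using the leading sub-tensor of the full-level weight, so lower levels are literally contained in higher ones. A minimal NumPy sketch with assumed channel counts:

```python
import numpy as np

# full level weight between layers l and l+1: (k_w, k_h, c_i, c_o)
W = np.random.randn(3, 3, 64, 128)

# cumulative scheduled channels per level, e.g. quarter / half / full (assumed)
c_in  = [16, 32, 64]
c_out = [32, 64, 128]

def level_weight(W, k):
    """Level-k weight W_l^k of eq. (6): the top-left block of the full tensor,
    so every lower level is shared (nested) inside every higher level."""
    return W[:, :, :c_in[k], :c_out[k]]

W1 = level_weight(W, 0)             # core level, sparsest
W3 = level_weight(W, 2)             # full level, identical to W
assert np.shares_memory(W1, W)      # lower levels are views into the full weight
```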
4.3. Applications

Adaptive deep compression. Since a nested sparse network is constructed using the weight connection learning, it can be applied to deep compression [8]. Furthermore, the nested sparse network realizes adaptive deep compression thanks to its anytime property [35], which makes it possible to provide various sparse networks and to infer adaptively using a learned internal network whose sparsity level suits the required specification. For this problem, we apply the weight pruning and channel scheduling presented in Section 4.2.

Knowledge distillation. Knowledge distillation is used to represent knowledge compactly in a network [12]. Here, we apply the channel and layer scheduling approaches to make small sub-networks, as shown in Figure 4. We train all internal networks, one full-level and n_l − 1 sub-level networks, simultaneously from scratch, without pre-training the full-level network. Note that the nested structure in sub-level networks need not coincide with the combination of channel and layer scheduling (e.g., the subset constraint is not satisfied between Figure 4 (b) and (c)), depending on the design choice.

Hierarchical classification. In a hierarchical classification problem [29], a hierarchy can be modeled as an internal network with a nested level. For example, we model a nested network with two nested levels for the CIFAR-100 dataset [17], as it has 20 super classes, each with 5 subclasses (a total of 100 subclasses). This enables nested learning to perform coarse-to-fine representation and inference. We apply the channel pruning method since it can handle different output dimensionality.

5. Experiments

We have evaluated NestedNet based on popular deep architectures, ResNet-n [10] and WRN-n-k [32], where n is the number of layers and k is the scale factor on the number of convolutional channels, for the three applications described above. Since a fair comparison with other baseline and sparse networks is difficult due to their non-nested structure, we provide a one-to-one comparison between the internal networks in NestedNet and their corresponding independent baselines or other published networks of the same network structure. NestedNet was evaluated on three benchmark datasets: CIFAR-10, CIFAR-100 [17], and ImageNet [18].

Table 1. Deep compression results using ResNet-56 on the CIFAR-10 dataset. (k×) denotes the compression rate of parameters relative to the baseline network. Baseline results are obtained with the authors' implementations without pruning (1×).

Method                              Accuracy   Baseline
Filter pruning [21] (2×)            91.5%      92.8%
Channel pruning [11] (2×)           91.8%      92.8%
NestedNet - channel pruning (2×)    92.9%      93.4%
Network pruning [9] (2×)            93.4%      93.4%
NestedNet - random mask (2×)        91.2%      93.4%
NestedNet - weight pruning (2×)     93.4%      93.4%
Network pruning [9] (3×)            92.6%      93.4%
NestedNet - random mask (3×)        85.1%      93.4%
NestedNet - weight pruning (3×)     92.8%      93.4%

Figure 4. NestedNet with four nested levels using channel and layer scheduling for knowledge distillation.

The test time is computed for a batch of the same size as in the training phase. All NestedNet variants and the compared baselines were implemented using the TensorFlow library [1] and run on an NVIDIA TITAN X graphics card. Implementation details of our models are described in the supplementary material.

5.1. Adaptive deep compression

We applied the weight connection and channel pruning approaches described in Section 4.1, based on ResNet-56, to compare with the state-of-the-art network (weight connection) pruning [9] and channel pruning approaches [11, 21]. We implemented the iterative network pruning method of [9] under our experimental environment, obtaining the same baseline accuracy; the results of the channel pruning approaches [11, 21] under the same baseline network were taken from [11]. To compare with the channel pruning approaches, our nested network was constructed with two nested levels, full level (1× compression) and core level (2× compression). To compare with the network pruning method, we constructed another NestedNet with three internal networks (1×, 2×, and 3× compression) by setting τ = (τ_1, τ_2, τ_3) = (0, 0.015, 0.025), where the full-level network gives the same result as the baseline network, ResNet-56 [10]. In the experiment, we also provide results of the three-level nested network (1×, 2×, and 3× compression) learned using random sparse masks, in order to verify the effectiveness of the proposed weight connection learning method of Section 4.1.

Table 1 shows the classification accuracy of the compared networks on the CIFAR-10 dataset. For channel pruning, NestedNet incurs a smaller performance loss from the baseline than recently proposed channel pruning approaches [11, 21] under the same parameter reduction (2×), even though the baseline performance is not identical due to different implementation strategies. For weight connection pruning, ours performs better than network pruning [9] on average: both show no accuracy compromise under 2× compression, but ours gives better accuracy than [9] under 3× compression. Here, the weight connection pruning approaches outperform the channel pruning approaches, including our channel scheduling based network, under 2× compression, since they prune unimportant connections element-wise, whereas channel pruning approaches eliminate connections group-wise (thus the dimensionality itself is reduced), which can produce information loss. Note that random connection pruning gives poor performance, confirming the benefit of the proposed connection learning approach for learning the nested structure.
Figure 5(a) shows learned filters (brighter represents more intense) of the nested network with the channel pruning approach using ResNet-56 with three levels (1×, 2×, and 4× compression), where the size of the first convolutional filters was set to 7×7 to obtain observably large filters under the same performance. As shown in the figure, the connections in the core-level internal network are dominant, and upper-level filters, which do not include their sub-level filters (when drawing the filters), have lower importance than core-level filters and may learn side information of the dataset. We also provide quantitative results for the filters of the three levels using averaged normalized mutual information over all levels: 0.38 ± 0.09 within levels and 0.08 ± 0.03 between levels. This reveals that the nested network learns more diverse filters between nested levels than within levels, so it has an effect of grouping analogous filters at each level. Figure 5(b) shows the activation maps (layer outputs) of an image at different layers. We provide additional activation maps for both train and test images in the supplementary material.
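The paper does not spell out the estimator behind these mutual information statistics. One plausible reading, sketched below purely as an assumption, quantizes each first-layer filter into discrete bins and averages pairwise normalized mutual information within a level and across levels:

```python
import numpy as np
from itertools import combinations
from sklearn.metrics import normalized_mutual_info_score

def pairwise_nmi(filters_a, filters_b=None, bins=8):
    """Average NMI between quantized filters; one set -> within-level,
    two sets -> between-level. Each filter is flattened and binned."""
    def quantize(f):
        # map each weight to one of `bins` labels via histogram bin edges
        return np.digitize(f.ravel(), np.histogram(f, bins=bins)[1][1:-1])
    if filters_b is None:
        pairs = combinations([quantize(f) for f in filters_a], 2)
    else:
        qa = [quantize(f) for f in filters_a]
        qb = [quantize(f) for f in filters_b]
        pairs = ((a, b) for a in qa for b in qb)
    return float(np.mean([normalized_mutual_info_score(a, b) for a, b in pairs]))

filters_lv1 = np.random.randn(16, 7, 7)   # toy 7x7 first-layer filters, level 1
filters_lv2 = np.random.randn(16, 7, 7)   # toy filters, level 2
print(pairwise_nmi(filters_lv1))                 # within-level statistic
print(pairwise_nmi(filters_lv1, filters_lv2))    # between-level statistic
```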

Figure 5. Results from NestedNet on the CIFAR-10 dataset. (a) Learned filters from the first convolutional layer. (b) Activation feature maps from a train image (upper left). Each column represents a different layer (the layer number increases from left to right). Each row represents the images in each internal network, from core level (top) to full level (bottom). Best viewed in color.

5.2. Knowledge distillation

To show the effectiveness of nested structures, we evaluated NestedNet using channel and layer scheduling for knowledge distillation, where we learn all internal networks jointly rather than learning a distilled network from a pre-trained model as in the literature [12]. The proposed network was constructed under the WRN architecture [32] (here, WRN-32-4). We set the full-level network to WRN-32-4 and applied (1) channel scheduling with a 1/4 scale factor (WRN-32-1, i.e., ResNet-32), (2) layer scheduling with a 2/5 scale factor (WRN-14-4), and (3) combined scheduling of both channel and layer (WRN-14-1, i.e., ResNet-14). In this scenario, we did not apply the nested structure to the first convolutional layer and the final output layer. We applied the proposed network to CIFAR-10 and CIFAR-100.

Figure 6 shows the comparison between NestedNet with four internal networks and the corresponding baseline networks learned independently on the CIFAR-10 dataset. We also provide the test time of every internal network.⁶ As observed in the figure, NestedNet performs competitively compared to its baseline networks at most density ratios. Even though the total number of parameters used to construct the nested sparse network is smaller than that needed to learn its independent baseline networks, the knowledge shared among the multiple internal networks can compensate for this handicap and gives performance competitive with the baselines. As for test time, we achieve a valid speed-up for the internal networks with reduced parameters, from about 1.5× (37% density) to 8.3× (2.3% density).

⁶ Since the baseline networks require the same number of parameters and time as our networks, we present only the test time of our networks.

Figure 6. Knowledge distillation results w.r.t. accuracy (left) and test time (right) using NestedNet on the CIFAR-10 dataset.

Table 2 shows the performance of NestedNet, under the same baseline structure as in the previous example, on the CIFAR-100 dataset. N_C and N_P denote the numbers of classes and parameters, respectively. In this problem, NestedNet is still comparable to its corresponding baseline networks on average, while requiring resources similar to a single full-level baseline.

Table 2. Knowledge distillation results on the CIFAR-100 dataset (N_C = 100). The full-level row indicates the approximate total resource for the nested network.

Network     Architecture   N_P   Density   Memory    Accuracy   Baseline   Test time
NestedNet   WRN-32-4       –     100%      28.2MB    75.5%      75.7%      52ms (1×)
            WRN-14-4       –     37%       10.3MB    74.3%      73.8%      35ms (1.5×)
            WRN-32-1       –     6.4%      1.8MB     67.1%      67.5%      10ms (5.2×)
            WRN-14-1       –     2.4%      0.7MB     64.3%      64.0%      7ms (7.4×)
Consensus (N_C = 100): NestedNet-A 77.4%, NestedNet-L 78.1%
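These density figures are consistent with a rough parameter model for WRN-d-k in which the parameter count grows linearly in depth and quadratically in width. The following back-of-the-envelope sketch, under that assumed model, roughly reproduces the reported ratios:

```python
def wrn_relative_params(depth, width, base_depth=32, base_width=4):
    """Rough parameter count of WRN-d-k relative to WRN-32-4 (assumed model:
    params ~ (depth - 2) * width^2, ignoring the first conv and output layers)."""
    return ((depth - 2) * width ** 2) / ((base_depth - 2) * base_width ** 2)

for d, k in [(32, 4), (14, 4), (32, 1), (14, 1)]:
    print(f"WRN-{d}-{k}: {100 * wrn_relative_params(d, k):.1f}% density")
# -> 100.0%, 40.0%, 6.2%, 2.5%  (close to the reported 100 / 37 / 6.4 / 2.4%)
```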
5.3. Hierarchical classification

We evaluated the nested sparse network for hierarchical classification. We first constructed a two-level nested network for the CIFAR-100 dataset, which has a two-level hierarchy of classes, and channel scheduling was applied to handle the different dimensionality in the hierarchical structure of the dataset. We compared with the state-of-the-art architecture SplitNet [15], which can address class hierarchy. Following the practice in [15], NestedNet was constructed under WRN-14-8, and we adopted WRN-14-4 as the core internal network (4× compression). Since the number of parameters in SplitNet is reduced to nearly 68% of the baseline, we constructed another NestedNet based on the WRN-32-4 architecture, which has almost the same number of parameters as SplitNet. Table 3 shows the performance comparison among the compared networks.

Table 3. Hierarchical classification results on the CIFAR-100 dataset.

Network         Architecture   N_C   Accuracy
Baseline        WRN-14-4       20    82.4%
                WRN-14-8       100   75.8%
NestedNet       WRN-14-4       20    83.9%
                WRN-14-8       100   77.3%
NestedNet-A     WRN-14-8       100   78.3%
NestedNet-L     WRN-14-8       100   78.0%
SplitNet [15]   WRN-14-8       100   74.9%
Baseline        WRN-32-2       20    82.1%
                WRN-32-4       100   75.7%
NestedNet       WRN-32-2       20    83.7%
                WRN-32-4       100   76.6%
NestedNet-A     WRN-32-4       100   78.0%
NestedNet-L     WRN-32-4       100   77.7%

Overall, our two NestedNets based on different architectures give better performance than their baselines in all cases, since ours can learn rich knowledge not only by learning the specific classes but also by learning their abstract-level (super-class) knowledge within the nested network, compared to merely learning an independent class hierarchy. NestedNet also outperforms SplitNet for both architectures. While SplitNet learns parameters divided into independent sets, NestedNet learns shared knowledge for the different tasks, which can further improve performance through the combined knowledge obtained from its multiple internal networks. The experiment shows that the nested structure can encompass multiple semantic forms of knowledge in a single network to accelerate learning. Note that as the number of internal networks increases to cover more of the hierarchy, the amount of resources saved increases.

We also provide experimental results on the ImageNet (ILSVRC 2012) dataset [5]. From the dataset, we collected a subset consisting of 100 diverse classes including natural objects, plants, animals, and artifacts.

We constructed a three-level hierarchy, where the numbers of super and intermediate classes are 4 and 11, respectively (with a total of 100 subclasses). The taxonomy of the dataset is summarized in the supplementary material. The numbers of train and test images are 128,768 and 5,000, respectively, collected from the original ImageNet dataset [5]. NestedNet was constructed based on the ResNet-18 architecture, following the instructions in [10] for the ImageNet dataset, where the numbers of channels in the core and intermediate level networks were set to a quarter and a half of the number of all channels, respectively, for every convolutional layer. Table 4 summarizes the hierarchical classification results on the ImageNet dataset. The table shows that NestedNet, whose internal networks are learned simultaneously in a single network, outperforms its corresponding baseline networks at all nested levels.

Table 4. Hierarchical classification results on ImageNet.

Network       N_C   N_P    Accuracy
Baseline      4     0.7M   92.8%
              11    –      89.2%
              100   –      79.8%
NestedNet     4     0.7M   94.0%
              11    –      90.2%
              100   –      79.9%
NestedNet-A   100   –      80.2%
NestedNet-L   100   –      80.3%

5.4. Consensus of multiple knowledge

One important benefit of NestedNet is the ability to leverage the multiple forms of knowledge in the internal networks of a nested structure. To utilize this benefit, we appended another layer at the end, which we call a consensus layer, to combine the outputs from all nested levels for more accurate prediction, by 1) averaging (NestedNet-A) or 2) learning (NestedNet-L). For NestedNet-L, we simply added a fully-connected layer on the concatenated vector of all outputs of NestedNet, where for hierarchical classification we additionally collected the fine class output of the core-level network. See the supplementary material for more details. While the overhead of combining the outputs of the different levels of NestedNet is negligible, as shown in the results for knowledge distillation and hierarchical classification, the two consensus approaches outperform the existing structures, including the full-level NestedNet, under a similar number of parameters. Notably, the full-level NestedNet in hierarchical classification gives better performance than that in knowledge distillation under the same architecture, WRN-32-4, since it gains rich knowledge by incorporating coarse class information in its architecture without introducing additional structures.
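As an illustration, the two consensus variants could be sketched as follows on top of per-level class probabilities. Treating NestedNet-L as a single dense layer over the concatenated outputs follows the description above, but the helper names, shapes, and initialization are assumptions:

```python
import numpy as np

def consensus_average(level_probs):
    """NestedNet-A: average the class predictions of all nested levels."""
    return np.mean(level_probs, axis=0)

def consensus_learned(level_probs, V, b):
    """NestedNet-L (sketch): a fully-connected layer on the concatenated
    per-level outputs; V and b would be trained jointly with the network."""
    z = np.concatenate(level_probs, axis=-1) @ V + b
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)   # softmax over fine classes

# toy usage: 3 levels, batch of 2, 100 fine classes
probs = [np.random.dirichlet(np.ones(100), size=2) for _ in range(3)]
V = np.random.randn(300, 100) * 0.01
out = consensus_learned(probs, V, b=np.zeros(100))
print(consensus_average(probs).shape, out.shape)
```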
6. Conclusion

We have proposed a nested sparse network, named NestedNet, to realize an n-in-1 nested structure in a neural network, where several networks with different sparsity ratios are contained in a single network and learned simultaneously. To exploit such a structure, novel weight pruning and scheduling strategies have been presented. NestedNet is an efficient architecture for incorporating multiple forms of knowledge or additional information within a neural network, whereas such a structure is difficult to embody in existing networks. NestedNet has been extensively tested on various applications, demonstrating that it performs competitively, but more efficiently, compared to existing deep architectures.

Acknowledgements: This research was supported in part by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (NRF-2017R1A2B...), by the Next-Generation Information Computing Development Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (2017M3C4A...), and by the Brain Korea 21 Plus Project in 2018.

References

[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint, 2016.
[2] A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio. FitNets: Hints for thin deep nets. In International Conference on Learning Representations (ICLR), 2015.
[3] J. M. Alvarez and M. Salzmann. Learning the number of neurons in deep networks. In Advances in Neural Information Processing Systems (NIPS), 2016.
[4] Y. Bengio, R. Ducharme, P. Vincent, and C. Jauvin. A neural probabilistic language model. Journal of Machine Learning Research (JMLR), 3(Feb), 2003.
[5] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.
[6] X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS), 2010.
[7] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems (NIPS), 2014.
[8] S. Han, H. Mao, and W. J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. In International Conference on Learning Representations (ICLR), 2016.
[9] S. Han, J. Pool, J. Tran, and W. Dally. Learning both weights and connections for efficient neural networks. In Advances in Neural Information Processing Systems (NIPS), 2015.
[10] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[11] Y. He, X. Zhang, and J. Sun. Channel pruning for accelerating very deep neural networks. In International Conference on Computer Vision (ICCV), 2017.
[12] G. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network. arXiv preprint, 2015.
[13] G. Huang, Z. Liu, K. Q. Weinberger, and L. van der Maaten. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
[14] M. Jaderberg, A. Vedaldi, and A. Zisserman. Speeding up convolutional neural networks with low rank expansions. arXiv preprint, 2014.
[15] J. Kim, Y. Park, G. Kim, and S. J. Hwang. SplitNet: Learning to semantically split deep networks for parameter reduction and model parallelization. In International Conference on Machine Learning (ICML), 2017.
[16] Y.-D. Kim, E. Park, S. Yoo, T. Choi, L. Yang, and D. Shin. Compression of deep convolutional neural networks for fast and low power mobile applications. In International Conference on Learning Representations (ICLR), 2016.
[17] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Technical report, 2009.
[18] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS), 2012.
[19] G. Larsson, M. Maire, and G. Shakhnarovich. FractalNet: Ultra-deep neural networks without residuals. arXiv preprint, 2016.
[20] Y. LeCun, J. S. Denker, and S. A. Solla. Optimal brain damage. In Advances in Neural Information Processing Systems (NIPS), 1990.
[21] H. Li, A. Kadav, I. Durdanovic, H. Samet, and H. P. Graf. Pruning filters for efficient ConvNets. In International Conference on Learning Representations (ICLR), 2017.
[22] Z. Li and D. Hoiem. Learning without forgetting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.
[23] Z. Liu, J. Li, Z. Shen, G. Huang, S. Yan, and C. Zhang. Learning efficient convolutional networks through network slimming. In IEEE International Conference on Computer Vision (ICCV), 2017.
[24] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems (NIPS), 2015.
[25] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations (ICLR), 2015.
[26] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems (NIPS), 2014.
[27] C. Tai, T. Xiao, Y. Zhang, X. Wang, et al. Convolutional neural networks with low-rank regularization. arXiv preprint, 2015.
[28] W. Wen, C. Wu, Y. Wang, Y. Chen, and H. Li. Learning structured sparsity in deep neural networks. In Advances in Neural Information Processing Systems (NIPS), 2016.
[29] Z. Yan, H. Zhang, R. Piramuthu, V. Jagadeesh, D. DeCoste, W. Di, and Y. Yu. HD-CNN: Hierarchical deep convolutional neural networks for large scale visual recognition. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2015.
[30] T.-J. Yang, Y.-H. Chen, and V. Sze. Designing energy-efficient convolutional neural networks using energy-aware pruning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
[31] X. Yu, T. Liu, X. Wang, and D. Tao. On compressing deep models by low rank and sparse decomposition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
[32] S. Zagoruyko and N. Komodakis. Wide residual networks. arXiv preprint, 2016.
[33] X. Zhang, J. Zou, K. He, and J. Sun. Accelerating very deep convolutional networks for classification and detection. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 38(10), 2016.
[34] H. Zhou, J. M. Alvarez, and F. Porikli. Less is more: Towards compact CNNs. In European Conference on Computer Vision (ECCV), 2016.
[35] S. Zilberstein. Using anytime algorithms in intelligent systems. AI Magazine, 17(3):73, 1996.


ACTIVE LEARNING ON WEIGHTED GRAPHS USING ADAPTIVE AND NON-ADAPTIVE APPROACHES. Eyal En Gad, Akshay Gadde, A. Salman Avestimehr and Antonio Ortega ACTIVE LEARNING ON WEIGHTED GRAPHS USING ADAPTIVE AND NON-ADAPTIVE APPROACHES Eya En Gad, Akshay Gadde, A. Saman Avestimehr and Antonio Ortega Department of Eectrica Engineering University of Southern

More information

A Design Method for Optimal Truss Structures with Certain Redundancy Based on Combinatorial Rigidity Theory

A Design Method for Optimal Truss Structures with Certain Redundancy Based on Combinatorial Rigidity Theory 0 th Word Congress on Structura and Mutidiscipinary Optimization May 9 -, 03, Orando, Forida, USA A Design Method for Optima Truss Structures with Certain Redundancy Based on Combinatoria Rigidity Theory

More information

AN EVOLUTIONARY APPROACH TO OPTIMIZATION OF A LAYOUT CHART

AN EVOLUTIONARY APPROACH TO OPTIMIZATION OF A LAYOUT CHART 13 AN EVOLUTIONARY APPROACH TO OPTIMIZATION OF A LAYOUT CHART Eva Vona University of Ostrava, 30th dubna st. 22, Ostrava, Czech Repubic e-mai: Eva.Vona@osu.cz Abstract: This artice presents the use of

More information

DETERMINING INTUITIONISTIC FUZZY DEGREE OF OVERLAPPING OF COMPUTATION AND COMMUNICATION IN PARALLEL APPLICATIONS USING GENERALIZED NETS

DETERMINING INTUITIONISTIC FUZZY DEGREE OF OVERLAPPING OF COMPUTATION AND COMMUNICATION IN PARALLEL APPLICATIONS USING GENERALIZED NETS DETERMINING INTUITIONISTIC FUZZY DEGREE OF OVERLAPPING OF COMPUTATION AND COMMUNICATION IN PARALLEL APPLICATIONS USING GENERALIZED NETS Pave Tchesmedjiev, Peter Vassiev Centre for Biomedica Engineering,

More information

arxiv: v1 [cs.cv] 28 Sep 2015

arxiv: v1 [cs.cv] 28 Sep 2015 Fast Non-oca Stereo Matching based on Hierarchica Disparity Prediction Xuan Luo, Xuejiao Bai, Shuo Li, Hongtao Lu Shanghai Jiao Tong University, No. 800, Dongchuan Road, Shanghai, China {roxanneuo, yukiaya,

More information

Topology-aware Key Management Schemes for Wireless Multicast

Topology-aware Key Management Schemes for Wireless Multicast Topoogy-aware Key Management Schemes for Wireess Muticast Yan Sun, Wade Trappe,andK.J.RayLiu Department of Eectrica and Computer Engineering, University of Maryand, Coege Park Emai: ysun, kjriu@gue.umd.edu

More information

Deep Quantization Network for Efficient Image Retrieval

Deep Quantization Network for Efficient Image Retrieval Proceedings of the Thirtieth AAAI Conference on Artificia Inteigence (AAAI-16) Deep Quantiation Network for Efficient Image Retrieva Yue Cao, Mingsheng Long, Jianmin Wang, Han Zhu and Qingfu Wen Schoo

More information

Arithmetic Coding. Prof. Ja-Ling Wu. Department of Computer Science and Information Engineering National Taiwan University

Arithmetic Coding. Prof. Ja-Ling Wu. Department of Computer Science and Information Engineering National Taiwan University Arithmetic Coding Prof. Ja-Ling Wu Department of Computer Science and Information Engineering Nationa Taiwan University F(X) Shannon-Fano-Eias Coding W..o.g. we can take X={,,,m}. Assume p()>0 for a. The

More information

University of Illinois at Urbana-Champaign, Urbana, IL 61801, /11/$ IEEE 162

University of Illinois at Urbana-Champaign, Urbana, IL 61801, /11/$ IEEE 162 oward Efficient Spatia Variation Decomposition via Sparse Regression Wangyang Zhang, Karthik Baakrishnan, Xin Li, Duane Boning and Rob Rutenbar 3 Carnegie Meon University, Pittsburgh, PA 53, wangyan@ece.cmu.edu,

More information

Semi-Supervised Learning with Sparse Distributed Representations

Semi-Supervised Learning with Sparse Distributed Representations Semi-Supervised Learning with Sparse Distributed Representations David Zieger dzieger@stanford.edu CS 229 Fina Project 1 Introduction For many machine earning appications, abeed data may be very difficut

More information

Further Optimization of the Decoding Method for Shortened Binary Cyclic Fire Code

Further Optimization of the Decoding Method for Shortened Binary Cyclic Fire Code Further Optimization of the Decoding Method for Shortened Binary Cycic Fire Code Ch. Nanda Kishore Heosoft (India) Private Limited 8-2-703, Road No-12 Banjara His, Hyderabad, INDIA Phone: +91-040-3378222

More information

Deep Hashing Network for Efficient Similarity Retrieval

Deep Hashing Network for Efficient Similarity Retrieval Proceedings of the Thirtieth AAAI Conference on Artificia Inteigence (AAAI-16) Deep Hashing Network for Efficient Simiarity Retrieva Han Zhu, Mingsheng Long, Jianmin Wang and Yue Cao Schoo of Software,

More information

Collinearity and Coplanarity Constraints for Structure from Motion

Collinearity and Coplanarity Constraints for Structure from Motion Coinearity and Copanarity Constraints for Structure from Motion Gang Liu 1, Reinhard Kette 2, and Bodo Rosenhahn 3 1 Institute of Information Sciences and Technoogy, Massey University, New Zeaand, Department

More information

As Michi Henning and Steve Vinoski showed 1, calling a remote

As Michi Henning and Steve Vinoski showed 1, calling a remote Reducing CORBA Ca Latency by Caching and Prefetching Bernd Brügge and Christoph Vismeier Technische Universität München Method ca atency is a major probem in approaches based on object-oriented middeware

More information

LEARNING causal structures is one of the main problems

LEARNING causal structures is one of the main problems cupc: CUDA-based Parae PC Agorithm for Causa Structure Learning on PU Behrooz Zare, Foad Jafarinejad, Matin Hashemi, and Saber Saehkaeybar arxiv:8.89v [cs.dc] Dec 8 Abstract The main goa in many fieds

More information

Extracting semistructured data from the Web: An XQuery Based Approach

Extracting semistructured data from the Web: An XQuery Based Approach EurAsia-ICT 2002, Shiraz-Iran, 29-31 Oct. Extracting semistructured data from the Web: An XQuery Based Approach Gies Nachouki Université de Nantes - Facuté des Sciences, IRIN, 2, rue de a Houssinière,

More information

CORRELATION filters (CFs) are a useful tool for a variety

CORRELATION filters (CFs) are a useful tool for a variety Zero-Aiasing Correation Fiters for Object Recognition Joseph A. Fernandez, Student Member, IEEE, Vishnu Naresh Boddeti, Member, IEEE, Andres Rodriguez, Member, IEEE, B. V. K. Vijaya Kumar, Feow, IEEE arxiv:4.36v

More information

Adaptive 360 VR Video Streaming: Divide and Conquer!

Adaptive 360 VR Video Streaming: Divide and Conquer! Adaptive 360 VR Video Streaming: Divide and Conquer! Mohammad Hosseini *, Viswanathan Swaminathan * University of Iinois at Urbana-Champaign (UIUC) Adobe Research, San Jose, USA Emai: shossen2@iinois.edu,

More information

5940 IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, VOL. 13, NO. 11, NOVEMBER 2014

5940 IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, VOL. 13, NO. 11, NOVEMBER 2014 5940 IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, VOL. 13, NO. 11, NOVEMBER 014 Topoogy-Transparent Scheduing in Mobie Ad Hoc Networks With Mutipe Packet Reception Capabiity Yiming Liu, Member, IEEE,

More information

A Comparison of a Second-Order versus a Fourth- Order Laplacian Operator in the Multigrid Algorithm

A Comparison of a Second-Order versus a Fourth- Order Laplacian Operator in the Multigrid Algorithm A Comparison of a Second-Order versus a Fourth- Order Lapacian Operator in the Mutigrid Agorithm Kaushik Datta (kdatta@cs.berkeey.edu Math Project May 9, 003 Abstract In this paper, the mutigrid agorithm

More information

Learning Dynamic Guidance for Depth Image Enhancement

Learning Dynamic Guidance for Depth Image Enhancement Learning Dynamic Guidance for Depth Image Enhancement Shuhang Gu 1, Wangmeng Zuo 2, Shi Guo 2, Yunjin Chen 3, Chongyu Chen 4,1, Lei Zhang 1, 1 The Hong Kong Poytechnic University, 2 Harbin Institute of

More information

PROOF COPY JOE. Hierarchical feed-forward network for object detection tasks PROOF COPY JOE. 1 Introduction

PROOF COPY JOE. Hierarchical feed-forward network for object detection tasks PROOF COPY JOE. 1 Introduction 45 6, 1 June 2006 Hierarchica feed-forward network for object detection tasks Ingo Bax Gunther Heidemann Hege Ritter Bieefed University Neuroinformatics Group Facuty of Technoogy P.O. Box 10 01 31 D-33501

More information

MULTITASK MULTIVARIATE COMMON SPARSE REPRESENTATIONS FOR ROBUST MULTIMODAL BIOMETRICS RECOGNITION. Heng Zhang, Vishal M. Patel and Rama Chellappa

MULTITASK MULTIVARIATE COMMON SPARSE REPRESENTATIONS FOR ROBUST MULTIMODAL BIOMETRICS RECOGNITION. Heng Zhang, Vishal M. Patel and Rama Chellappa MULTITASK MULTIVARIATE COMMON SPARSE REPRESENTATIONS FOR ROBUST MULTIMODAL BIOMETRICS RECOGNITION Heng Zhang, Visha M. Pate and Rama Cheappa Center for Automation Research University of Maryand, Coage

More information

CNN Fixations: An unraveling approach to visualize the discriminative image regions

CNN Fixations: An unraveling approach to visualize the discriminative image regions 1 CNN Fixations: An unraveing approach to visuaize the discriminative image regions Konda Reddy Mopuri*, Utsav Garg*, R. Venkatesh Babu, Senior Member, IEEE arxiv:1708.06670v3 [cs.cv] 12 Dec 2018 Abstract

More information

arxiv: v2 [cs.cv] 15 Mar 2017

arxiv: v2 [cs.cv] 15 Mar 2017 Viraiency: Pooing Loca Viraity 1 Xavier Aameda-Pineda1,2, Andrea Pizer1, Dan Xu1, Nicu Sebe1, Eisa Ricci3,4 University of Trento, 2 Perception Team, INRIA Grenobe, 3 University of Perugia, 4 Fondazione

More information

Application of Intelligence Based Genetic Algorithm for Job Sequencing Problem on Parallel Mixed-Model Assembly Line

Application of Intelligence Based Genetic Algorithm for Job Sequencing Problem on Parallel Mixed-Model Assembly Line American J. of Engineering and Appied Sciences 3 (): 5-24, 200 ISSN 94-7020 200 Science Pubications Appication of Inteigence Based Genetic Agorithm for Job Sequencing Probem on Parae Mixed-Mode Assemby

More information

arxiv: v1 [cs.cv] 7 Feb 2013

arxiv: v1 [cs.cv] 7 Feb 2013 Fast Image Scanning with Deep Max-Pooing Convoutiona Neura Networks arxiv:1302.1700v1 [cs.cv] 7 Feb 2013 Aessandro Giusti Dan C. Cireşan Jonathan Masci Luca M. Gambardea Jürgen Schmidhuber Technica Report

More information

Robust Multimodal Recognition via Multitask Multivariate Low-Rank Representations

Robust Multimodal Recognition via Multitask Multivariate Low-Rank Representations Robust Mutimoda Recognition via Mutitask Mutivariate Low-Rank Representations Heng Zhang, Visha M. Pate and Rama Cheappa Center for Automation Research, UMIACS, University of Maryand, Coege Park, MD 074

More information

Learning to Generate with Memory

Learning to Generate with Memory Chongxuan Li LICX14@MAILS.TSINGHUA.EDU.CN Jun Zhu DCSZJ@TSINGHUA.EDU.CN Bo Zhang DCSZB@TSINGHUA.EDU.CN Dept. of Comp. Sci. & Tech., State Key Lab of Inte. Tech. & Sys., TNList Lab, Center for Bio-Inspired

More information

WEAVER: A KNOWLEDGE-BASED ROUTING EXPERT

WEAVER: A KNOWLEDGE-BASED ROUTING EXPERT WEAVER: A KNOWLEDGE-BASED ROUTING EXPERT Rostam Joobbani, Danie P. Siewiorek Department of Eectrica and Computer Engineering Carnegie-Meon University Scheney Park Pittsburgh, PA 15213 ABSTRACT In this

More information

RDF Objects 1. Alex Barnell Information Infrastructure Laboratory HP Laboratories Bristol HPL November 27 th, 2002*

RDF Objects 1. Alex Barnell Information Infrastructure Laboratory HP Laboratories Bristol HPL November 27 th, 2002* RDF Objects 1 Aex Barne Information Infrastructure Laboratory HP Laboratories Bristo HPL-2002-315 November 27 th, 2002* E-mai: Andy_Seaborne@hp.hp.com RDF, semantic web, ontoogy, object-oriented datastructures

More information

Incremental Discovery of Object Parts in Video Sequences

Incremental Discovery of Object Parts in Video Sequences Incrementa Discovery of Object Parts in Video Sequences Stéphane Drouin, Patrick Hébert and Marc Parizeau Computer Vision and Systems Laboratory, Department of Eectrica and Computer Engineering Lava University,

More information

Comparative Analysis of Relevance for SVM-Based Interactive Document Retrieval

Comparative Analysis of Relevance for SVM-Based Interactive Document Retrieval Comparative Anaysis for SVM-Based Interactive Document Retrieva Paper: Comparative Anaysis of Reevance for SVM-Based Interactive Document Retrieva Hiroshi Murata, Takashi Onoda, and Seiji Yamada Centra

More information

Interpreting Individual Classifications of Hierarchical Networks

Interpreting Individual Classifications of Hierarchical Networks Interpreting Individua Cassifications of Hierarchica Networks Wi Landecker, Michae D. Thomure, Luís M. A. Bettencourt, Meanie Mitche, Garrett T. Kenyon, and Steven P. Brumby Department of Computer Science

More information

Deep Fisher Networks for Large-Scale Image Classification

Deep Fisher Networks for Large-Scale Image Classification Deep Fisher Networs for Large-Scae Image Cassification Karen Simonyan Andrea Vedadi Andrew Zisserman Visua Geometry Group, University of Oxford {aren,vedadi,az}@robots.ox.ac.u Abstract As massivey parae

More information

Fast Methods for Kernel-based Text Analysis

Fast Methods for Kernel-based Text Analysis Proceedings of the 41st Annua Meeting of the Association for Computationa Linguistics, Juy 2003, pp. 24-31. Fast Methods for Kerne-based Text Anaysis Taku Kudo and Yuji Matsumoto Graduate Schoo of Information

More information

Providing Hop-by-Hop Authentication and Source Privacy in Wireless Sensor Networks

Providing Hop-by-Hop Authentication and Source Privacy in Wireless Sensor Networks The 31st Annua IEEE Internationa Conference on Computer Communications: Mini-Conference Providing Hop-by-Hop Authentication and Source Privacy in Wireess Sensor Networks Yun Li Jian Li Jian Ren Department

More information

Transformation Invariance in Pattern Recognition: Tangent Distance and Propagation

Transformation Invariance in Pattern Recognition: Tangent Distance and Propagation Transformation Invariance in Pattern Recognition: Tangent Distance and Propagation Patrice Y. Simard, 1 Yann A. Le Cun, 2 John S. Denker, 2 Bernard Victorri 3 1 Microsoft Research, 1 Microsoft Way, Redmond,

More information

Enhanced continuous, real-time detection, alarming and analysis of partial discharge events

Enhanced continuous, real-time detection, alarming and analysis of partial discharge events DMS PDMG-RH Partia discharge monitor for GIS Enhanced continuous, rea-time detection, aarming and anaysis of partia discharge events Automatic PD faut cassification High resoution Seectabe UHF fiters and

More information

A study of comparative evaluation of methods for image processing using color features

A study of comparative evaluation of methods for image processing using color features A study of comparative evauation of methods for image processing using coor features FLORENTINA MAGDA ENESCU,CAZACU DUMITRU Department Eectronics, Computers and Eectrica Engineering University Pitești

More information

DETECTION OF OBSTACLE AND FREESPACE IN AN AUTONOMOUS WHEELCHAIR USING A STEREOSCOPIC CAMERA SYSTEM

DETECTION OF OBSTACLE AND FREESPACE IN AN AUTONOMOUS WHEELCHAIR USING A STEREOSCOPIC CAMERA SYSTEM DETECTION OF OBSTACLE AND FREESPACE IN AN AUTONOMOUS WHEELCHAIR USING A STEREOSCOPIC CAMERA SYSTEM Le Minh 1, Thanh Hai Nguyen 2, Tran Nghia Khanh 2, Vo Văn Toi 2, Ngo Van Thuyen 1 1 University of Technica

More information

Endoscopic Motion Compensation of High Speed Videoendoscopy

Endoscopic Motion Compensation of High Speed Videoendoscopy Endoscopic Motion Compensation of High Speed Videoendoscopy Bharath avuri Department of Computer Science and Engineering, University of South Caroina, Coumbia, SC - 901. ravuri@cse.sc.edu Abstract. High

More information

file://j:\macmillancomputerpublishing\chapters\in073.html 3/22/01

file://j:\macmillancomputerpublishing\chapters\in073.html 3/22/01 Page 1 of 15 Chapter 9 Chapter 9: Deveoping the Logica Data Mode The information requirements and business rues provide the information to produce the entities, attributes, and reationships in ogica mode.

More information

Interpreting Individual Classifications of Hierarchical Networks

Interpreting Individual Classifications of Hierarchical Networks Portand State University PDXSchoar Computer Science Facuty Pubications and Presentations Computer Science 2013 Interpreting Individua Cassifications of Hierarchica Networks Wi Landecker Portand State University,

More information

Quality Assessment using Tone Mapping Algorithm

Quality Assessment using Tone Mapping Algorithm Quaity Assessment using Tone Mapping Agorithm Nandiki.pushpa atha, Kuriti.Rajendra Prasad Research Schoar, Assistant Professor, Vignan s institute of engineering for women, Visakhapatnam, Andhra Pradesh,

More information

arxiv: v1 [cs.cv] 21 Aug 2018

arxiv: v1 [cs.cv] 21 Aug 2018 Text-to-image Synthesis via Symmetrica Distiation Networks Mingkuan Yuan and Yuxin Peng* Institute of Computer Science and Technoogy, Peking University Beijing, China pengyuxin@pku.edu.cn arxiv:1808.06801v1

More information

Constellation Models for Recognition of Generic Objects

Constellation Models for Recognition of Generic Objects 1 Consteation Modes for Recognition of Generic Objects Wei Zhang Schoo of Eectrica Engineering and Computer Science Oregon State University zhangwe@eecs.oregonstate.edu Abstract Recognition of generic

More information

CSE120 Principles of Operating Systems. Prof Yuanyuan (YY) Zhou Scheduling

CSE120 Principles of Operating Systems. Prof Yuanyuan (YY) Zhou Scheduling CSE120 Principes of Operating Systems Prof Yuanyuan (YY) Zhou Scheduing Announcement Homework 2 due on October 25th Project 1 due on October 26th 2 CSE 120 Scheduing and Deadock Scheduing Overview In discussing

More information

Space-Time Trade-offs.

Space-Time Trade-offs. Space-Time Trade-offs. Chethan Kamath 03.07.2017 1 Motivation An important question in the study of computation is how to best use the registers in a CPU. In most cases, the amount of registers avaiabe

More information

Automated Segmentation of Whole Cardiac CT Images based on Deep Learning

Automated Segmentation of Whole Cardiac CT Images based on Deep Learning Vo. 9, No., 08 Automated Segmentation of Whoe Cardiac CT Images based on Deep Learning Rajpar Suhai Ahmed, Dr. Jie Liu* Schoo of Computer & Information Technoogy Beijing Jiaotong University, Beijing, China

More information

A NEW APPROACH FOR BLOCK BASED STEGANALYSIS USING A MULTI-CLASSIFIER

A NEW APPROACH FOR BLOCK BASED STEGANALYSIS USING A MULTI-CLASSIFIER Internationa Journa on Technica and Physica Probems of Engineering (IJTPE) Pubished by Internationa Organization of IOTPE ISSN 077-358 IJTPE Journa www.iotpe.com ijtpe@iotpe.com September 014 Issue 0 Voume

More information

Community-Aware Opportunistic Routing in Mobile Social Networks

Community-Aware Opportunistic Routing in Mobile Social Networks IEEE TRANSACTIONS ON COMPUTERS VOL:PP NO:99 YEAR 213 Community-Aware Opportunistic Routing in Mobie Socia Networks Mingjun Xiao, Member, IEEE Jie Wu, Feow, IEEE, and Liusheng Huang, Member, IEEE Abstract

More information