Adapting Instance Weights For Unsupervised Domain Adaptation Using Quadratic Mutual Information And Subspace Learning
M.N.A. Khan, Douglas R. Heisterkamp
Department of Computer Science, Oklahoma State University, Stillwater, OK

Abstract — Domain adaptation (DA) algorithms utilize a label-rich old dataset (domain) to build a machine learning model (classification, detection, etc.) in a label-scarce new dataset with a different data distribution. Recent approaches transform cross-domain data into a shared subspace by minimizing the shift between their marginal distributions. In this paper, we propose a novel iterative method to learn a common subspace based on non-parametric quadratic mutual information (QMI) between data and corresponding class labels. We extend a prior work of discriminative subspace learning based on maximization of QMI and integrate instance weighting into the QMI formulation. We propose an adaptive weighting model to identify relevant samples that share underlying similarity across domains and to ignore irrelevant ones. Due to the difficulty of applying cross-validation, an alternative strategy is integrated with the proposed algorithm to set up model parameters. A set of comprehensive experiments on benchmark datasets is conducted to prove the efficacy of our proposed framework over state-of-the-art approaches.

I. INTRODUCTION

To build an object recognition or classification model, a sufficient number of labeled images is necessary. Such models are tested using images sampled from the same distribution as the training one. In real-life scenarios, training and testing data distributions might differ. Also, collecting and annotating training data by manual intervention is often expensive; hence building a supervised machine learning model in a new domain becomes challenging. Therefore, the need for transferring knowledge from a related domain (source) to a novel one (target) becomes inevitable.
One practical example is learning a classification model using images captured with canonical viewpoints in a studio environment and then deploying that application to recognize images taken in natural surroundings (see Figure 1). According to the literature, this problem area is widely known as transfer learning, dataset bias, domain shift or domain adaptation [1], [2], [3]. Researchers focus on leveraging a label-rich source domain to build an adaptive classifier for the target domain where (i) label information is in poor supply and (ii) the marginal data distribution is different from the source domain. To deal with the domain adaptation problem, two different settings are usually considered: (i) unsupervised domain adaptation [4], [5], [6], [7], where no labeled data is available in the target domain, and (ii) semi-supervised domain adaptation [8], [9], where only a few labeled data are available in the target domain along with abundant labeled data of the source domain.

Fig. 1: Two domains (source and target) with different data distributions.

In this paper, we focus on the most challenging unsupervised case. Recent works in this area are mainly based on (i) subspace learning, which aims to learn a shared subspace by discovering the underlying common structures across domains, and (ii) instance weighting of source data to match their distribution with the target data. Some works focused on unifying both strategies in a single framework and reported better performance than employing a single one [6], [10]. As an example of the first strategy, Gong et al. [4] proposed a kernel-based method that models the underlying low-dimensional structure along the geodesic path from the source to the target domain. All available source data are utilized in their approach. Practically, not all source data are useful for transfer learning [10], i.e. some source samples share very little structural similarity with target domain data. We propose an iterative framework for common subspace learning incorporated with instance weighting to model this scenario.
An adaptive weight update recipe is applied to down-weight irrelevant source samples and up-weight cross-domain data with shared similarity. Learning a common discriminative subspace by utilizing source information has been proved effective in the domain adaptation context [11]. That work was inspired by [12], [13] and proposed a linear transformation based on maximization of non-parametric quadratic mutual information (QMI) between data and corresponding class labels. Refinement of the linear transformation with updated predictions of target data via soft-labeling (a distribution of class labels for a point) was also integrated with their algorithm. We improve their approach to leverage the benefit of a discriminative subspace and induce instance weighting for source and target data in the QMI formulation.
Furthermore, we propose an unsupervised technique for setting model parameters, as the traditional cross-validation approach is difficult due to the unavailability of labeled data in the target domain. The main contributions of this paper are summarized as follows: (i) learning a linear transformation to create a common discriminative subspace based on maximization of QMI induced with instance weighting; (ii) proposing an adaptive weighting model to identify cross-domain data with shared similarity and assign them higher weights compared to others; (iii) proposing an alternative approach for parameter selection in an unsupervised fashion without requiring labeled data from the target domain. In Figure 2, the initial and final stages of the proposed iterative algorithm are illustrated.

II. RELATED WORK

Domain adaptation algorithms based on subspace learning have become popular in recent years [6], [5], [4], [7]. Most of these methods are developed based on the assumption of a common underlying structure shared across domains. In [7], a common feature representation is proposed in a principled dimensionality reduction process. This work is further enhanced in [6] by introducing re-weighting of source samples to match them with target data. The motivation of our work is similar to [6]. However, we employ non-uniform instance weighting for cross-domain samples, as opposed to [6], where only source samples are weighted. Recent work based on landmark selection also focuses on identifying relevant samples from both source and target domains [10]. Their approach chooses landmarks by matching pair-wise samples across domains in kernel space and uses them to align the two domains. Despite its efficiency, it is observed that landmark selection is a static process and requires rebuilding the subspace from scratch when new data become available. In contrast, we propose the selection of cross-domain similar instances via an adaptive weighting model.
This model is coupled with subspace learning in an iterative feedback loop and leverages two benefits: (i) refinement of the linear transformation through updated instance weights, and (ii) assessment of the cross-domain data similarity in the low-dimensional learned subspace, as opposed to kernel space [10].

A. Prior work on QMI based domain adaptation

Our work adopts the procedure of discriminative subspace learning by maximization of QMI from [11], i.e. the objective function for optimization and its solution process are adopted from that paper. We provide a brief description of the derivation of the objective function. According to the information-theoretic literature, Mutual Information (MI) is defined as a measure of independence between random variables. Assume that X is a random variable representing d-dimensional data x \in R^d and C is a discrete random variable representing class labels c \in {1, 2, ..., N}, where N is the total number of classes.

Fig. 2: (a) Initialization and (b) final learned subspace of the iterative WQMI-S framework. The two marker shapes represent data from the source and target domain respectively, colors represent different classes, and font size is proportional to the individual sample weight. Initially target data are unknown and assigned negligible or zero weights (left sub-figure). In the learned subspace (right sub-figure), data with shared similarity are closely projected, while irrelevant samples are ignored by down-weighting.

Also let p(x) be the marginal density function of x and P(c) the class prior probability. Following Renyi's entropy, a non-parametric quadratic estimation of MI (denoted quadratic mutual information, or QMI) is defined as [13],

I(X, C) = \sum_c \int_x (p(x, c) - P(c) p(x))^2 dx
        = \sum_c \int_x p(x, c)^2 dx + \sum_c P(c)^2 \int_x p(x)^2 dx - 2 \sum_c \int_x p(x, c) P(c) p(x) dx
        = V_in + V_all - 2 V_btw        (1)

The subspace learning algorithm of [11] followed the trace-norm formulation of QMI proposed by Bouzas et al. [12] and extended QMI by inducing soft class labeling, referred to as QMI-S. Now P(c), p(x, c) and p(x) in Eq. (1) are evaluated by a Parzen window method based on a Gaussian kernel.
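To make Eq. (1) concrete: under the Parzen estimates just described, each of the three quadratic terms reduces to a double sum over a Gaussian Gram matrix. The NumPy sketch below (function names are ours, and the common Gaussian normalizing constant is dropped since only relative magnitudes matter for comparing subspaces) estimates the weighted, soft-labeled QMI from data, soft labels and sample weights:

```python
import numpy as np

def gaussian_gram(X, sigma):
    """Pairwise values proportional to N(x_i - x_j; 0, 2*sigma^2 I).

    The constant normalizer is dropped; it scales all three QMI terms equally.
    """
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (4.0 * sigma**2))

def qmi_estimate(X, Z, w, sigma):
    """Non-parametric quadratic MI estimate: V_in + V_all - 2*V_btw.

    Z[i, c] = P(c | x_i) (soft labels), w[i] = sample weight with sum(w) = 1.
    """
    K = gaussian_gram(X, sigma)
    Zw = Z * w[:, None]                       # columns are z_c with z_{c,i} = P(c|x_i) w_i
    pc = Zw.sum(axis=0)                       # class priors P(c) = sum_i P(c|x_i) w_i
    v_in = np.einsum('ic,ij,jc->', Zw, K, Zw)         # sum_c z_c^T K z_c
    v_all = (pc**2).sum() * (w @ K @ w)               # (sum_c P(c)^2) w^T K w
    v_btw = pc @ (Zw.T @ K @ w)                       # sum_c P(c) z_c^T K w
    return v_in + v_all - 2.0 * v_btw
```

Choosing uniform weights w_i = 1/n with hard labels recovers the unweighted soft-label estimate; labelings that align with the data's cluster structure score higher than uninformative ones.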
A Gaussian distribution function N(x; \mu, \Sigma) with mean vector \mu and covariance matrix \Sigma is

N(x; \mu, \Sigma) = (2\pi)^{-d/2} |\Sigma|^{-1/2} exp(-(1/2)(x - \mu)^T \Sigma^{-1} (x - \mu)),

and the Gaussian kernel matrix is K_g with K_g(i, j) = N(x_i - x_j; 0, 2\sigma^2 I). Assume X \in R^{n x d} represents d-dimensional data points of size n and \Phi \in R^{n x m} represents the data mapped from raw feature space to a kernel Hilbert space, where m is the dimension of the kernel space. A linear transformation function (alternatively, a projection matrix) is learned to create a k-dimensional subspace (k < d) by maximizing QMI-S between the projected data and the corresponding class labels. The final optimization objective takes the following form,

A* = argmax_{A^T K A = I} tr(A^T K M~ K A) / tr(A^T K K A)        (2)
M~ = (M + M^T) / 2        (3)

The matrix M is extracted from the formulation of QMI-S (see [12]). Also, W is a projection matrix restricted to be in the range of \Phi, i.e. W = \Phi^T A, where A \in R^{n x k} is a coefficient matrix. A centralized Gaussian kernel K \in R^{n x n} is defined as K = K_g - E_n K_g - K_g E_n + E_n K_g E_n, where E_n \in R^{n x n} consists of elements each equal to 1/n. See [12] for a detailed derivation of this objective function. The trace ratio problem of Eq. (2) can be approximated as a ratio trace optimization and solved for A using the generalized eigenvalue decomposition method [14]. Hence Eq. (2) can be rewritten as K M~ K U = K K U \Lambda, where \Lambda is a diagonal matrix of eigenvalues and U is the matrix of corresponding eigenvectors. Substituting KU with V and multiplying both sides by K^{-1}, it becomes M~ V = V \Lambda. This is a standard eigenproblem where V and \Lambda represent the matrix of eigenvectors and eigenvalues respectively. As M~ has rank N, the N vectors with largest eigenvalues are selected from V. Hence A will be A = K^{-1} V. Finally, the projected data X_p \in R^{n x k} will be computed as X_p = K A.

III. QUADRATIC MUTUAL INFORMATION INTEGRATED WITH INSTANCE WEIGHTING

In [11], soft class labeling is induced in the QMI formulation (Eq. (1)). We further extend it by scaling the soft-labeling of a point with a corresponding weight. We define the weight of a sample x_i as the discrete probability P(x_i) over the training samples X, i.e. w_i = P(x_i). The expressions for P(c), p(x) and p(x, c) of Eq. (1) are evaluated as follows,

p(x) = \sum_{i=1}^n P(x_i) N(x; x_i, \sigma^2 I) = \sum_{i=1}^n w_i N(x; x_i, \sigma^2 I)
P(c) = \sum_{i=1}^n P(c | x_i) w_i
p(x, c) = \sum_{i=1}^n P(c | x_i) w_i N(x; x_i, \sigma^2 I) = \sum_{i=1}^n z_{c,i} N(x; x_i, \sigma^2 I)

where z_{c,i} = P(c | x_i) w_i represents the soft-labeling scaled by the individual sample weight. Now, inspired by [11] and using the expressions for p(x, c), P(c) and p(x) derived above, we can further elaborate Eq. (1) (see Table I), which is rewritten as,

I(X, C) = tr( \Phi^T ( \sum_c z_c z_c^T + (\sum_c P(c)^2) w w^T - 2 \sum_c P(c) w z_c^T ) \Phi ) = tr(\Phi^T M \Phi)

where

M = \sum_c z_c z_c^T + (\sum_c P(c)^2) w w^T - 2 \sum_c P(c) w z_c^T        (4)

Here, w = [w_1, w_2, ..., w_n]^T \in R^n is a weight vector consisting of all sample weights with \sum_{i=1}^n w_i = 1, and z_c = [z_{c,1}, z_{c,2}, ..., z_{c,n}]^T \in R^n. We refer to this expression as WQMI-S. This is a generic formulation of QMI: by choosing w_i = 1/n, we obtain the QMI-S of [11].
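Putting the pieces of Sections II-A and III together, the whole subspace computation — centered Gaussian kernel, M from Eq. (4), symmetrization per Eq. (3), the standard eigenproblem, and X_p = KA with A = K^{-1}V — can be sketched as below. All names are ours; a pseudo-inverse stands in for K^{-1}, since the centered kernel is rank-deficient:

```python
import numpy as np

def centered_gaussian_kernel(X, sigma):
    """K = Kg - En Kg - Kg En + En Kg En with (En)_{ij} = 1/n.

    Kg(i, j) is proportional to N(x_i - x_j; 0, 2*sigma^2 I);
    the constant factor is dropped.
    """
    n = X.shape[0]
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    Kg = np.exp(-d2 / (4.0 * sigma**2))
    En = np.full((n, n), 1.0 / n)
    return Kg - En @ Kg - Kg @ En + En @ Kg @ En

def wqmi_projection(X, Z, w, sigma, k):
    """Learn a WQMI-S subspace and return the projected data X_p (n x k).

    Z[i, c] = P(c | x_i), w = sample weights summing to one.
    """
    K = centered_gaussian_kernel(X, sigma)
    Zw = Z * w[:, None]                          # columns are z_c
    pc = Zw.sum(axis=0)                          # P(c)
    # M from Eq. (4): sum_c z_c z_c^T + (sum_c P(c)^2) w w^T - 2 sum_c P(c) w z_c^T
    M = Zw @ Zw.T + (pc**2).sum() * np.outer(w, w) - 2.0 * np.outer(w, Zw @ pc)
    M = 0.5 * (M + M.T)                          # symmetrize, Eq. (3)
    evals, V = np.linalg.eigh(M)                 # standard eigenproblem M~ V = V Lambda
    V = V[:, np.argsort(evals)[::-1][:k]]        # keep top-k eigenvectors
    A = np.linalg.pinv(K) @ V                    # A = K^{-1} V via pseudo-inverse
    return K @ A                                 # X_p = K A
```

This is one iteration's subspace step; the iterative algorithm of the next section re-invokes it with updated Z and w.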
IV. PROPOSED DA FRAMEWORK BASED ON WQMI-S

We propose an iterative algorithm (referred to as iterative WQMI-S) based on subspace learning by maximization of WQMI-S. Input data X \in R^{n x d} consists of source domain data X_s \in R^{n_s x d} and target domain data X_t \in R^{n_t x d}. The ground-truth labels of the source data are [y_1, y_2, ..., y_{n_s}], where n_s and n_t represent the source and target data sizes respectively (n = n_s + n_t). As an initialization step of the algorithm, the soft-labeling P(c | x_i) of a sample x_i and the weight vector w are set. For a source point x_i \in X_s,

P(c | x_i) = 1 if c = y_i, and 0 otherwise, for each c \in {1, 2, ..., N}.

On the other hand, target data are unlabeled and hence are initialized with a uniform label distribution, i.e. for a target point x_i \in X_t, P(c | x_i) = 1/N for each c \in {1, 2, ..., N}. The initialization of w depends on the choice of an instance weighting model; we propose one such model for w later in this section. Each iteration of the proposed algorithm learns a subspace, updates the label predictions of target data via soft-labeling and adjusts the instance weights. The subspace learning procedure is described in Section II-A, with M taken from Eq. (4). After obtaining the projected data, each target sample's prediction is updated by the weighted average prediction of its neighboring samples in the WQMI-S subspace. Denoting L_K(x) as the set of K neighboring points of a sample x, the update rule is

P(c | x_i) = \sum_{x_j \in L_K(x_i)} w_j P(c | x_j) / \sum_{x_j \in L_K(x_i)} w_j,  for each x_i \in X_t        (5)

Finally, instance weights are updated through a weight adaptation process.

A. Instance weight adaptation

We propose a weight adaptation formula to adjust the weights of source and target samples at each iteration. The weight vector w is initialized as follows: for each source sample x_i \in X_s, w_i = 1/n_s, and for each target sample x_j \in X_t, w_j = 0. With this assignment, the initial learned subspace is dominated by X_s and is further refined throughout the iterations. The goal is to assign higher weights to relevant samples compared to others.
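Before turning to the weighting model itself, the label-side bookkeeping above — one-hot initialization for source points, uniform initialization for target points, and the neighbor update of Eq. (5) — can be sketched as follows. Function names are ours, and the normalization by the neighbors' total weight is an assumption made so each updated row remains a probability distribution:

```python
import numpy as np

def init_soft_labels(y_src, n_tgt, n_classes):
    """Source rows: one-hot P(c|x_i); target rows: uniform 1/N."""
    Z = np.full((len(y_src) + n_tgt, n_classes), 1.0 / n_classes)
    Z[np.arange(len(y_src))] = np.eye(n_classes)[y_src]
    return Z

def update_target_labels(Xp, Z, w, n_src, n_neighbors):
    """Eq. (5): weighted-average prediction over the K nearest
    neighbors of each target point in the learned subspace Xp."""
    Z_new = Z.copy()
    for i in range(n_src, Xp.shape[0]):
        d = np.linalg.norm(Xp - Xp[i], axis=1)
        d[i] = np.inf                          # exclude the point itself
        nb = np.argsort(d)[:n_neighbors]
        wsum = w[nb].sum()
        if wsum > 0:                           # skip if all neighbors have zero weight
            Z_new[i] = (w[nb] @ Z[nb]) / wsum
    return Z_new
```

Source rows are left untouched; only target predictions are refined from one iteration to the next.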
To do this, a fraction of each instance weight is shrunk, and the total shrunk weight is then re-distributed among candidate sets that contain cross-domain data with shared underlying similarity. Three sets of candidate points are defined as follows,

\Omega_s  = { x | (x \in X_s) and (x \in L_K(x_i) for some x_i \in X_t) }
\Omega_ta = { x | x \in X_t }
\Omega_th = { x | x \in X_t and max_c P(c | x) >= \tau }

A threshold \tau is chosen to construct \Omega_th, which is defined as the set of target points with confident label predictions. Source and target data with underlying shared similarity are
TABLE I: Derivation of V_in, V_all and V_btw. Here K = \Phi \Phi^T, z_c = [z_{c,1}, z_{c,2}, ..., z_{c,n}]^T \in R^n and w = [w_1, w_2, ..., w_n]^T \in R^n.

V_in  = \sum_c \int_x p(x, c)^2 dx = \sum_c \sum_i \sum_j z_{c,i} z_{c,j} N(x_i - x_j; 0, 2\sigma^2 I) = \sum_c z_c^T K z_c = \sum_c tr(K z_c z_c^T) = tr( \Phi^T (\sum_c z_c z_c^T) \Phi )

V_all = \sum_c P(c)^2 \int_x p(x)^2 dx = (\sum_c P(c)^2) \sum_i \sum_j w_i w_j N(x_i - x_j; 0, 2\sigma^2 I) = (\sum_c P(c)^2) w^T K w = (\sum_c P(c)^2) tr(K w w^T) = tr( \Phi^T ((\sum_c P(c)^2) w w^T) \Phi )

V_btw = \sum_c P(c) \int_x p(x, c) p(x) dx = \sum_c P(c) \sum_i \sum_j z_{c,i} w_j N(x_i - x_j; 0, 2\sigma^2 I) = \sum_c P(c) z_c^T K w = \sum_c P(c) tr(K w z_c^T) = tr( \Phi^T (\sum_c P(c) w z_c^T) \Phi )

projected in close proximity in the subspace. Hence, members of \Omega_s (the source candidate set) are up-weighted compared to other source data. The remaining two sets consist of target data. Weight shrinking and re-distribution take the following form,

w_i <- w_i - \alpha w_i        (6)

w_i^new = w_i + \eta \alpha / |\Omega_s|,                      if x_i \in \Omega_s
        = w_i + (1 - \eta) \alpha \beta / |\Omega_th|,         if x_i \in \Omega_th        (7)
        = w_i + (1 - \eta) \alpha (1 - \beta) / |\Omega_ta|,   if x_i \in \Omega_ta

Here a fraction \alpha is extracted from each instance weight (weight shrinking), resulting in a total shrunk weight of \sum_{i=1}^n \alpha w_i = \alpha. This shrunk weight \alpha is distributed among the candidate points according to Eq. (7). The parameter \eta controls the partition of \alpha between source and target candidate points, and \beta controls the partition of the allotted (1 - \eta)\alpha between the members of \Omega_th and \Omega_ta. All target points are assigned uniform weights (as members of \Omega_ta), except that some of them also gain bonus weights (as members of \Omega_th). The weight mass is initially concentrated on the source points and is eventually shifted towards candidate points throughout the iterations of the iterative WQMI-S algorithm. This is a generic weighting scheme where \alpha, \beta and \eta are chosen appropriately from 0 < \alpha, \eta, \beta <= 1.

B. Unsupervised parameter selection

To update the weight vector w in the iterative WQMI-S algorithm, the parameters \alpha, \eta, \beta and \tau must be set. For \alpha, we argue that it should be linearly proportional to the ratio r = |\Omega_th| / |\Omega_ta|. A high value of r indicates the growth of confident target points, and hence a larger \alpha should be shrunk for weight re-adjustment in the target domain. In our implementation, \alpha = r is used.
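A minimal sketch of the weight adaptation of Eqs. (6) and (7) follows. The function name is ours, and the handling of empty candidate sets (e.g. no confident target points in early iterations) is an assumption added so the weights continue to sum to one:

```python
import numpy as np

def adapt_weights(w, src_cand, tgt_conf, n_src, alpha, eta, beta):
    """Shrink every weight by a fraction alpha (Eq. 6) and redistribute the
    total shrunk mass alpha among the candidate sets (Eq. 7).

    src_cand : indices of source points in Omega_s (neighbors of target data)
    tgt_conf : indices of target points in Omega_th (confident predictions)
    """
    n = len(w)
    w_new = w * (1.0 - alpha)                        # Eq. (6)
    tgt_all = np.arange(n_src, n)                    # Omega_ta: all target points
    if len(src_cand) > 0:
        w_new[src_cand] += eta * alpha / len(src_cand)
        rest = (1.0 - eta) * alpha
    else:
        rest = alpha                                 # no source candidates: all mass to targets
    if len(tgt_conf) > 0:
        w_new[tgt_conf] += rest * beta / len(tgt_conf)
    else:
        beta = 0.0                                   # no confident targets yet
    w_new[tgt_all] += rest * (1.0 - beta) / len(tgt_all)
    return w_new
```

Because the shrunk mass \alpha is fully redistributed, the total weight remains 1 after each call, so the weight vector stays a valid distribution across iterations.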
Next, η is set to.5 to maintain a balane in weight re-adjustment among soure and target andidate points. he bonus weight assignment for the members of Ω th is ontrolled by β. he intension is to assign onfident target points with higher weights ompared to other target ones. Any funtion haraterizing this behavior would suffie to define β. In this work, β = max(r, e r is used. Note that, when Ω th is small, most points are non-onfident and assigned with very small weights and vie versa. Lastly, τ is used to onstrut Ω th. For tasks with moderate number of ategories, τ =.7 might be a simple hoie. Instead in our work, an alternative strategy is applied. Note that, eah iteration of iterative WQMI- involves a subspae learning, update of label preditions for target data and weight vetor w. he idea is to unify subspae learning and weight update as follows, Choose parameter τ from a set of possible hoies, e.g..55,.6,.65,.7,.75}. Update w using Eq. (6, (7. Compute M, learn a WQMI- subspae and obtain projeted data. K-means lustering is applied to projeted target data for eah WQMI- subspae, where K is set to the number of unique ategories. From eah of these lusterings, a satter metri G = J b J w is omputed with J w and J b representing trae-norm of within-luster satter and between-luster satter matries respetively [5]. he idea is to selet the best subspae generated by using orresponding τ. Assuming projeted data will be tightly lustered around their lass means, the WQMI- subspae (alternatively, projeted data with maximum G is hosen. We refer to this sheme as unsupervised parameter seletion, UP (Figure 3. his projeted data hosen in urrent iteration are utilized in next one for label preditions of target data. he overall approah is summarized in Algorithm. C. ermination riteria he algorithm will terminate when weight re-adjustment in suessive iterations is negligible i.e. 
either of these two criteria is satisfied: (i) the weight change for each target point in successive iterations is negligible, or (ii) \sum_{i=1}^{n_s} w_i \approx \sum_{j=1}^{n_t} w_j with the total weight of the non-candidate source points being negligible. After termination, a target point x is annotated with class c* = argmax_c P(c | x).

V. DATASETS AND EXPERIMENTS

Office is a widely used image database for domain adaptation [16]. It contains three different domains (Amazon, DSLR, Webcam) of images captured with varied settings and imaging conditions. Images of Amazon are downloaded from the Amazon site, DSLR contains images captured with a high-resolution DSLR camera, and Webcam contains images captured with a low-resolution web camera. Additionally, a popular dataset for object recognition, Caltech-256 [17], is used. The experiments are conducted using these 4 domains with 10 common categories selected from each of them (Bike, BackPack, Calculator, Headphone, Keyboard, Laptop, Monitor, Mouse, Mug,
Projector).

Fig. 3: Block diagram for unsupervised parameter selection: each candidate \tau[i] yields a weight vector w[i] and matrix M[i]; projected data X_p[i] is obtained from the WQMI-S subspace learned with M[i], and the X_p[i] for which G is maximized is chosen.

Algorithm 1 Iterative WQMI-S: subspace learning algorithm with maximization of WQMI-S.
1: Input: Data matrix X = [X_s, X_t] \in R^{n x d}, where source domain data X_s \in R^{n_s x d} and target domain data X_t \in R^{n_t x d} with n = n_s + n_t; label vector for source data [y_1, ..., y_{n_s}].
2: Output: Projected data in the final learned subspace, X_p \in R^{n x k}.
3: Initialize weight vector w and P(c | x).
4: Compute a centralized Gaussian kernel matrix K \in R^{n x n}.
5: Call eigenm(w, P(c | x)) to obtain projected data X_p.
6: repeat
7:   for all projected target points x_i do
8:     Update P(c | x_i) using Eq. (5).
9:   end for
10:  Choose possible values for \tau, e.g. \tau \in {0.55, 0.6, 0.65, 0.7, 0.75}.
11:  for each \tau[i] do
12:    Construct \Omega_s, \Omega_th and \Omega_ta.
13:    Apply Eqs. (6) and (7) to update w and assign w[i] <- w.
14:    Call eigenm(w[i], P(c | x)) to obtain projected data X_p[i].
15:    Apply K-means clustering to the projected target data.
16:    Compute scatter metric G[i].
17:  end for
18:  Choose X_p[j] and w[j] such that j = argmax_i G[i].
19:  Assign X_p <- X_p[j] and w <- w[j].
20: until termination criteria (Section IV-C) satisfied
21: procedure eigenm(w, P(c | x))
22:   Compute matrix M using Eqs. (4) and (3).
23:   Solve the standard eigenproblem M~ V = V \Lambda.
24:   Compute projected data X_p = K A with A = K^{-1} V.
25: end procedure

In total, 7 DA sub-problems are created, each containing one source and one target domain with n_s > n_t or n_s \approx n_t. Our algorithm assumes a balance between source and target data sizes, as a smaller source domain is impractical for knowledge transfer.

Experimental setup. The image representations published by Gong et al. [4] are used in our experiments, and the experimental protocol of [6], [7] is followed. For the other methods, a nearest-neighbor classifier was trained with source data to classify target points. For a fair comparison, in the iterative WQMI-S algorithm each target point is annotated by the label of its nearest point in the learned subspace.
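The UPS selection step of the algorithm — K-means on the projected target data of each candidate subspace, scored by G = J_b / J_w — can be sketched as follows. All names are ours, and a plain Lloyd's iteration stands in for any K-means implementation:

```python
import numpy as np

def scatter_ratio(X, labels):
    """G = tr(S_b) / tr(S_w): between- over within-cluster scatter."""
    mu = X.mean(axis=0)
    jw = jb = 0.0
    for c in np.unique(labels):
        Xc = X[labels == c]
        mc = Xc.mean(axis=0)
        jw += ((Xc - mc)**2).sum()               # within-cluster scatter
        jb += len(Xc) * ((mc - mu)**2).sum()     # between-cluster scatter
    return jb / jw

def kmeans(X, k, n_iter=50, seed=0):
    """Plain Lloyd's algorithm with random initial centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None, :, :])**2).sum(-1)
        labels = d.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels

def select_subspace(projections, n_classes):
    """UPS: index of the candidate projection with maximum G."""
    scores = [scatter_ratio(Xp, kmeans(Xp, n_classes)) for Xp in projections]
    return int(np.argmax(scores))
```

Subspaces whose projected target data form tight, well-separated clusters receive a large G and are preferred, without ever consulting target labels.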
Input data are whitened with PCA, preserving 95% of the data variance. Following a common heuristic, \sigma of the Gaussian kernel is set to the median of the pairwise distances of the data in raw feature space [11]. Also, K is set to floor(log(n_s)) + 1 [18] to construct L_K. The iterative WQMI-S algorithm is compared with the other methods listed in Table II. They can be categorized as follows:

TABLE II: Comparative results in terms of classification accuracy (%) of target data for the 7 sub-problems of the Office+Caltech dataset (C->A, C->W, C->D, A->C, A->W, A->D, W->D, and the average). Each sub-problem is of the form source -> target, where C (Caltech-256), A (Amazon), W (Webcam) and D (DSLR) indicate the four domains. Methods compared: Origfeat, PCA, GFK, TFL, TJM, QMI-S, WQMI-S.

Without adaptation: Origfeat and PCA indicate the classification accuracy of target data in the original feature space and the PCA subspace respectively.
Adaptation based on subspace alignment: this includes the geodesic flow kernel (GFK) [4] and transfer feature learning (TFL) [7].
Adaptation based on subspace alignment + instance re-weighting: this includes transfer joint matching (TJM) [6].

PASCAL-SUN-Caltech dataset. Another experiment is conducted with three different datasets, namely PASCAL 2007 (Ps), Caltech-101 (Cl) and SUN09 (Sn) [19]. Five common object classes (Bird, Car, Chair, Dog, Person) are selected from each of them. The image representations released by [19] are used, where each image is encoded with a 5376-dimensional feature vector. Classification accuracy in the target domain is reported in Table III.

Analysis. For the Office+Caltech dataset, the iterative WQMI-S method shows improved performance compared to the other methods and outperforms them in 4 out of 7 sub-problems by a significant margin. The average accuracy over all sub-problems is also higher than the second-best performance. QMI-S is essentially a variant of WQMI-S, based on QMI maximization using soft-labeling with unweighted instances; the accuracy of QMI-S is quoted from [11]. The accuracy data for GFK, TFL and TJM are quoted from [6] and [7].
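The two setup heuristics above can be sketched as below; the function names are ours, and the exact rounding in the neighborhood-size rule is our reading of K = floor(log(n_s)) + 1:

```python
import numpy as np

def median_sigma(X):
    """Kernel width: median of the pairwise distances in raw feature space."""
    d = np.sqrt(((X[:, None, :] - X[None, :, :])**2).sum(-1))
    iu = np.triu_indices(len(X), k=1)        # distinct pairs only
    return float(np.median(d[iu]))

def neighborhood_size(n_src):
    """K for the neighbor sets L_K: floor(log(n_src)) + 1."""
    return int(np.floor(np.log(n_src))) + 1
```

Both quantities depend only on the source/input data, so they fit the unsupervised setting where no target labels are available for tuning.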
It is noted that using both instance weighting and soft-labeling is very effective in learning a domain-adaptive discriminative subspace. Similar behavior is observed for the PASCAL-SUN-Caltech dataset (Table III). In this case, for GFK, TFL and TJM, classification accuracies are computed using different subspace dimensions and the best results are reported. For both experiments, because of the unsupervised clustering involved in the UPS technique of our iterative WQMI-S algorithm, each sub-problem is run 5 times and the average accuracy is reported. According to the proposed weight update model, source points residing in the neighborhood of target data in the WQMI-S space are assigned higher weights compared to other source points. As non-candidate source points are projected far from the target data cloud, they are down-weighted throughout the iterations. This phenomenon is analyzed by measuring the minimum of the distances between each source point and all the target points, i.e. for a source point x_i \in X_s,
d_m(x_i) = min_{x_j \in X_t} dist(x_i, x_j),

where dist(., .) measures the distance between two points.

TABLE III: Comparative results in terms of classification accuracy (%) of target data for the 4 sub-problems of the PASCAL-SUN-Caltech dataset (Sn->Cl, Sn->Ps, Ps->Cl, Ps->Sn, and the average). Each sub-problem is of the form source -> target, where Cl (Caltech-101), Sn (SUN09) and Ps (PASCAL 2007) indicate the three domains. Methods compared: Origfeat, PCA, GFK, TFL, TJM, QMI-S, WQMI-S.

The weight of x_i and d_m(x_i) are negatively correlated, i.e. an increase of d_m(x_i) results in the down-weighting of the corresponding x_i, and vice versa. This behavior is illustrated in Figure 4 for two sub-problems (A->W, A->D). The line fitted in the d_m(x_i) vs. weight scatter plot indicates that source candidate points are weighted higher and projected in close proximity to the target data (hence d_m(x_i) is lower). Weight decreases with the growth of d_m(x_i) for non-candidate points. Finally, we used the visualization tool t-SNE [20] to plot the projected source and target samples in 2-d space (Figure 5). Target data are plotted with their corresponding predicted labels. It is observed that data in the WQMI-S subspace are tightly clustered compared to TJM. This also supports the unsupervised clustering of the UPS process as a reasonable alternative strategy for parameter selection.

Fig. 4: Scatter plots of weight vs. d_m value for each source point x_i \in X_s in two sub-problems, (a) A->W and (b) A->D, where d_m(x_i) = min dist(x_i, x_j) over x_j \in X_t. Candidate and non-candidate source points are shown together with a fitted curve.

Fig. 5: 2-d visualization of source and target data in the learned subspace using t-SNE [20] for sub-problem A->W using two different methods, (a) TJM and (b) WQMI-S (G = 96.6). Values of the scatter metric G are also provided.

VI. CONCLUSION

We proposed a subspace learning framework for domain adaptation based on maximization of quadratic mutual information between data and class labels.
A generic formulation of QMI is proposed, incorporating soft class labeling and instance weights. Both source and target data are weighted based on their underlying shared similarity. Source data that share little similarity with the target distribution are down-weighted to reduce their impact on subspace learning. Through an iterative refinement, the linear transformation used to build a common subspace is optimized using relevant samples across domains. An adaptive instance weighting model is provided to identify such samples automatically. We also propose an alternative strategy to deal with model parameter setup without requiring labeled data from the target domain. In the future, we plan to extend this work to the detection of novel categories not present in the training phase.

REFERENCES
[1] A. Torralba and A. Efros, "Unbiased look at dataset bias," in Proc. of IEEE Conference on CVPR, 2011.
[2] J. Huang, A. Gretton, K. M. Borgwardt, B. Schölkopf, and A. J. Smola, "Correcting sample selection bias by unlabeled data," in Proc. of Advances in Neural Information Processing Systems, 2006.
[3] S. J. Pan and Q. Yang, "A survey on transfer learning," IEEE Trans. on Knowledge and Data Engineering, vol. 22, no. 10, pp. 1345-1359, 2010.
[4] B. Gong, Y. Shi, F. Sha, and K. Grauman, "Geodesic flow kernel for unsupervised domain adaptation," in Proc. of IEEE Conference on CVPR, 2012.
[5] B. Fernando, A. Habrard, M. Sebban, and T. Tuytelaars, "Unsupervised visual domain adaptation using subspace alignment," in Proc. of IEEE ICCV, 2013.
[6] M. Long, J. Wang, G. Ding, J. Sun, and P. S. Yu, "Transfer joint matching for unsupervised domain adaptation," in Proc. of IEEE Conference on CVPR, 2014.
[7] M. Long, J. Wang, G. Ding, J. Sun, and P. S. Yu, "Transfer feature learning with joint distribution adaptation," in Proc. of IEEE ICCV, 2013.
[8] J. Donahue, J. Hoffman, E. Rodner, K. Saenko, and T. Darrell, "Semi-supervised domain adaptation with instance constraints," in Proc. of IEEE Conference on CVPR, 2013.
[9] H. Daumé III, A. Kumar, and A. Saha, "Frustratingly easy semi-supervised domain adaptation," in Proc. of the Workshop on Domain Adaptation for Natural Language Processing, 2010.
[10] R. Aljundi, R. Emonet, D. Muselet, and M. Sebban, "Landmarks-based kernelized subspace alignment for unsupervised domain adaptation," in Proc. of IEEE Conference on CVPR, 2015.
[11] M. N. A. Khan and D. R. Heisterkamp, "Domain adaptation by iterative improvement of soft-labeling and maximization of non-parametric mutual information," in Proc. of IEEE ICIP, 2016.
[12] D. Bouzas, N. Arvanitopoulos, and A. Tefas, "Graph embedded nonparametric mutual information for supervised dimensionality reduction," IEEE Trans. on Neural Networks and Learning Systems, vol. 26, 2015.
[13] K. Torkkola, "Feature extraction by non-parametric mutual information maximization," The Journal of Machine Learning Research, vol. 3, pp. 1415-1438, 2003.
[14] Y. Jia, F. Nie, and C. Zhang, "Trace ratio problem revisited," IEEE Trans. on Neural Networks, vol. 20, no. 4, 2009.
[15] L. Rokach and O. Maimon, Data Mining and Knowledge Discovery Handbook. Springer US, 2005, ch. Clustering Methods.
[16] K. Saenko, B. Kulis, M. Fritz, and T. Darrell, "Adapting visual category models to new domains," in Proc. of ECCV. Springer, 2010.
[17] G. Griffin, A. Holub, and P. Perona, "Caltech-256 object category dataset," California Institute of Technology, Tech. Rep. 7694, 2007.
[18] U. von Luxburg, "A tutorial on spectral clustering," Statistics and Computing, vol. 17, no. 4, pp. 395-416, 2007.
[19] A. Khosla, T. Zhou, T. Malisiewicz, A. A. Efros, and A. Torralba, "Undoing the damage of dataset bias," in Proc. of ECCV. Springer, 2012.
[20] L. van der Maaten and G. Hinton, "Visualizing data using t-SNE," Journal of Machine Learning Research, vol. 9, pp. 2579-2605, 2008.
More informationExtracting Partition Statistics from Semistructured Data
Extrating Partition Statistis from Semistrutured Data John N. Wilson Rihard Gourlay Robert Japp Mathias Neumüller Department of Computer and Information Sienes University of Strathlyde, Glasgow, UK {jnw,rsg,rpj,mathias}@is.strath.a.uk
More informationA scheme for racquet sports video analysis with the combination of audio-visual information
A sheme for raquet sports video analysis with the ombination of audio-visual information Liyuan Xing a*, Qixiang Ye b, Weigang Zhang, Qingming Huang a and Hua Yu a a Graduate Shool of the Chinese Aadamy
More information3-D IMAGE MODELS AND COMPRESSION - SYNTHETIC HYBRID OR NATURAL FIT?
3-D IMAGE MODELS AND COMPRESSION - SYNTHETIC HYBRID OR NATURAL FIT? Bernd Girod, Peter Eisert, Marus Magnor, Ekehard Steinbah, Thomas Wiegand Te {girod eommuniations Laboratory, University of Erlangen-Nuremberg
More informationSelf-Adaptive Parent to Mean-Centric Recombination for Real-Parameter Optimization
Self-Adaptive Parent to Mean-Centri Reombination for Real-Parameter Optimization Kalyanmoy Deb and Himanshu Jain Department of Mehanial Engineering Indian Institute of Tehnology Kanpur Kanpur, PIN 86 {deb,hjain}@iitk.a.in
More informationDetection and Recognition of Non-Occluded Objects using Signature Map
6th WSEAS International Conferene on CIRCUITS, SYSTEMS, ELECTRONICS,CONTROL & SIGNAL PROCESSING, Cairo, Egypt, De 9-31, 007 65 Detetion and Reognition of Non-Oluded Objets using Signature Map Sangbum Park,
More informationAutomatic Physical Design Tuning: Workload as a Sequence Sanjay Agrawal Microsoft Research One Microsoft Way Redmond, WA, USA +1-(425)
Automati Physial Design Tuning: Workload as a Sequene Sanjay Agrawal Mirosoft Researh One Mirosoft Way Redmond, WA, USA +1-(425) 75-357 sagrawal@mirosoft.om Eri Chu * Computer Sienes Department University
More informationVideo Data and Sonar Data: Real World Data Fusion Example
14th International Conferene on Information Fusion Chiago, Illinois, USA, July 5-8, 2011 Video Data and Sonar Data: Real World Data Fusion Example David W. Krout Applied Physis Lab dkrout@apl.washington.edu
More informationLearning Discriminative and Shareable Features. Scene Classificsion
Learning Disriminative and Shareable Features for Sene Classifiation Zhen Zuo, Gang Wang, Bing Shuai, Lifan Zhao, Qingxiong Yang, and Xudong Jiang Nanyang Tehnologial University, Singapore, Advaned Digital
More informationGray Codes for Reflectable Languages
Gray Codes for Refletable Languages Yue Li Joe Sawada Marh 8, 2008 Abstrat We lassify a type of language alled a refletable language. We then develop a generi algorithm that an be used to list all strings
More informationGraph-Based vs Depth-Based Data Representation for Multiview Images
Graph-Based vs Depth-Based Data Representation for Multiview Images Thomas Maugey, Antonio Ortega, Pasal Frossard Signal Proessing Laboratory (LTS), Eole Polytehnique Fédérale de Lausanne (EPFL) Email:
More informationKERNEL SPARSE REPRESENTATION WITH LOCAL PATTERNS FOR FACE RECOGNITION
KERNEL SPARSE REPRESENTATION WITH LOCAL PATTERNS FOR FACE RECOGNITION Cuiui Kang 1, Shengai Liao, Shiming Xiang 1, Chunhong Pan 1 1 National Laboratory of Pattern Reognition, Institute of Automation, Chinese
More informationAlgorithms, Mechanisms and Procedures for the Computer-aided Project Generation System
Algorithms, Mehanisms and Proedures for the Computer-aided Projet Generation System Anton O. Butko 1*, Aleksandr P. Briukhovetskii 2, Dmitry E. Grigoriev 2# and Konstantin S. Kalashnikov 3 1 Department
More informationA Coarse-to-Fine Classification Scheme for Facial Expression Recognition
A Coarse-to-Fine Classifiation Sheme for Faial Expression Reognition Xiaoyi Feng 1,, Abdenour Hadid 1 and Matti Pietikäinen 1 1 Mahine Vision Group Infoteh Oulu and Dept. of Eletrial and Information Engineering
More informationMulti-Piece Mold Design Based on Linear Mixed-Integer Program Toward Guaranteed Optimality
INTERNATIONAL CONFERENCE ON MANUFACTURING AUTOMATION (ICMA200) Multi-Piee Mold Design Based on Linear Mixed-Integer Program Toward Guaranteed Optimality Stephen Stoyan, Yong Chen* Epstein Department of
More informationEvolutionary Feature Synthesis for Image Databases
Evolutionary Feature Synthesis for Image Databases Anlei Dong, Bir Bhanu, Yingqiang Lin Center for Researh in Intelligent Systems University of California, Riverside, California 92521, USA {adong, bhanu,
More informationDetecting Outliers in High-Dimensional Datasets with Mixed Attributes
Deteting Outliers in High-Dimensional Datasets with Mixed Attributes A. Koufakou, M. Georgiopoulos, and G.C. Anagnostopoulos 2 Shool of EECS, University of Central Florida, Orlando, FL, USA 2 Dept. of
More informationTUMOR DETECTION IN MRI BRAIN IMAGE SEGMENTATION USING PHASE CONGRUENCY MODIFIED FUZZY C MEAN ALGORITHM
TUMOR DETECTION IN MRI BRAIN IMAGE SEGMENTATION USING PHASE CONGRUENCY MODIFIED FUZZY C MEAN ALGORITHM M. Murugeswari 1, M.Gayathri 2 1 Assoiate Professor, 2 PG Sholar 1,2 K.L.N College of Information
More informationOutline: Software Design
Outline: Software Design. Goals History of software design ideas Design priniples Design methods Life belt or leg iron? (Budgen) Copyright Nany Leveson, Sept. 1999 A Little History... At first, struggling
More informationSystem-Level Parallelism and Throughput Optimization in Designing Reconfigurable Computing Applications
System-Level Parallelism and hroughput Optimization in Designing Reonfigurable Computing Appliations Esam El-Araby 1, Mohamed aher 1, Kris Gaj 2, arek El-Ghazawi 1, David Caliga 3, and Nikitas Alexandridis
More information2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media,
2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any urrent or future media, inluding reprinting/republishing this material for advertising
More informationApproximate logic synthesis for error tolerant applications
Approximate logi synthesis for error tolerant appliations Doohul Shin and Sandeep K. Gupta Eletrial Engineering Department, University of Southern California, Los Angeles, CA 989 {doohuls, sandeep}@us.edu
More informationPipelined Multipliers for Reconfigurable Hardware
Pipelined Multipliers for Reonfigurable Hardware Mithell J. Myjak and José G. Delgado-Frias Shool of Eletrial Engineering and Computer Siene, Washington State University Pullman, WA 99164-2752 USA {mmyjak,
More informationOne Against One or One Against All : Which One is Better for Handwriting Recognition with SVMs?
One Against One or One Against All : Whih One is Better for Handwriting Reognition with SVMs? Jonathan Milgram, Mohamed Cheriet, Robert Sabourin To ite this version: Jonathan Milgram, Mohamed Cheriet,
More informationA New RBFNDDA-KNN Network and Its Application to Medical Pattern Classification
A New RBFNDDA-KNN Network and Its Appliation to Medial Pattern Classifiation Shing Chiang Tan 1*, Chee Peng Lim 2, Robert F. Harrison 3, R. Lee Kennedy 4 1 Faulty of Information Siene and Tehnology, Multimedia
More informationFace and Facial Feature Tracking for Natural Human-Computer Interface
Fae and Faial Feature Traking for Natural Human-Computer Interfae Vladimir Vezhnevets Graphis & Media Laboratory, Dept. of Applied Mathematis and Computer Siene of Mosow State University Mosow, Russia
More informationWeak Dependence on Initialization in Mixture of Linear Regressions
Proeedings of the International MultiConferene of Engineers and Computer Sientists 8 Vol I IMECS 8, Marh -6, 8, Hong Kong Weak Dependene on Initialization in Mixture of Linear Regressions Ryohei Nakano
More informationShape Outlier Detection Using Pose Preserving Dynamic Shape Models
Shape Outlier Detetion Using Pose Preserving Dynami Shape Models Chan-Su Lee Ahmed Elgammal Department of Computer Siene, Rutgers University, Pisataway, NJ 8854 USA CHANSU@CS.RUTGERS.EDU ELGAMMAL@CS.RUTGERS.EDU
More informationFacility Location: Distributed Approximation
Faility Loation: Distributed Approximation Thomas Mosibroda Roger Wattenhofer Distributed Computing Group PODC 2005 Where to plae ahes in the Internet? A distributed appliation that has to dynamially plae
More informationAn Optimized Approach on Applying Genetic Algorithm to Adaptive Cluster Validity Index
IJCSES International Journal of Computer Sienes and Engineering Systems, ol., No.4, Otober 2007 CSES International 2007 ISSN 0973-4406 253 An Optimized Approah on Applying Geneti Algorithm to Adaptive
More informationTime delay estimation of reverberant meeting speech: on the use of multichannel linear prediction
University of Wollongong Researh Online Faulty of Informatis - apers (Arhive) Faulty of Engineering and Information Sienes 7 Time delay estimation of reverberant meeting speeh: on the use of multihannel
More informationMulti-Channel Wireless Networks: Capacity and Protocols
Multi-Channel Wireless Networks: Capaity and Protools Tehnial Report April 2005 Pradeep Kyasanur Dept. of Computer Siene, and Coordinated Siene Laboratory, University of Illinois at Urbana-Champaign Email:
More informationAn Alternative Approach to the Fuzzifier in Fuzzy Clustering to Obtain Better Clustering Results
An Alternative Approah to the Fuzziier in Fuzzy Clustering to Obtain Better Clustering Results Frank Klawonn Department o Computer Siene University o Applied Sienes BS/WF Salzdahlumer Str. 46/48 D-38302
More informationCleanUp: Improving Quadrilateral Finite Element Meshes
CleanUp: Improving Quadrilateral Finite Element Meshes Paul Kinney MD-10 ECC P.O. Box 203 Ford Motor Company Dearborn, MI. 8121 (313) 28-1228 pkinney@ford.om Abstrat: Unless an all quadrilateral (quad)
More informationRelevance for Computer Vision
The Geometry of ROC Spae: Understanding Mahine Learning Metris through ROC Isometris, by Peter A. Flah International Conferene on Mahine Learning (ICML-23) http://www.s.bris.a.uk/publiations/papers/74.pdf
More informationWe don t need no generation - a practical approach to sliding window RLNC
We don t need no generation - a pratial approah to sliding window RLNC Simon Wunderlih, Frank Gabriel, Sreekrishna Pandi, Frank H.P. Fitzek Deutshe Telekom Chair of Communiation Networks, TU Dresden, Dresden,
More informationUnsupervised Stereoscopic Video Object Segmentation Based on Active Contours and Retrainable Neural Networks
Unsupervised Stereosopi Video Objet Segmentation Based on Ative Contours and Retrainable Neural Networks KLIMIS NTALIANIS, ANASTASIOS DOULAMIS, and NIKOLAOS DOULAMIS National Tehnial University of Athens
More informationDeep Rule-Based Classifier with Human-level Performance and Characteristics
Deep Rule-Based Classifier with Human-level Performane and Charateristis Plamen P. Angelov 1,2 and Xiaowei Gu 1* 1 Shool of Computing and Communiations, Lanaster University, Lanaster, LA1 4WA, UK 2 Tehnial
More informationA {k, n}-secret Sharing Scheme for Color Images
A {k, n}-seret Sharing Sheme for Color Images Rastislav Luka, Konstantinos N. Plataniotis, and Anastasios N. Venetsanopoulos The Edward S. Rogers Sr. Dept. of Eletrial and Computer Engineering, University
More informationFast Rigid Motion Segmentation via Incrementally-Complex Local Models
Fast Rigid Motion Segmentation via Inrementally-Complex Loal Models Fernando Flores-Mangas Allan D. Jepson Department of Computer Siene, University of Toronto {mangas,jepson}@s.toronto.edu Abstrat The
More informationVolume 3, Issue 9, September 2013 International Journal of Advanced Research in Computer Science and Software Engineering
Volume 3, Issue 9, September 2013 ISSN: 2277 128X International Journal of Advaned Researh in Computer Siene and Software Engineering Researh Paper Available online at: www.ijarsse.om A New-Fangled Algorithm
More informationDiscrete sequential models and CRFs. 1 Case Study: Supervised Part-of-Speech Tagging
0-708: Probabilisti Graphial Models 0-708, Spring 204 Disrete sequential models and CRFs Leturer: Eri P. Xing Sribes: Pankesh Bamotra, Xuanhong Li Case Study: Supervised Part-of-Speeh Tagging The supervised
More informationNew Fuzzy Object Segmentation Algorithm for Video Sequences *
JOURNAL OF INFORMATION SCIENCE AND ENGINEERING 24, 521-537 (2008) New Fuzzy Obet Segmentation Algorithm for Video Sequenes * KUO-LIANG CHUNG, SHIH-WEI YU, HSUEH-JU YEH, YONG-HUAI HUANG AND TA-JEN YAO Department
More informationHEXA: Compact Data Structures for Faster Packet Processing
Washington University in St. Louis Washington University Open Sholarship All Computer Siene and Engineering Researh Computer Siene and Engineering Report Number: 27-26 27 HEXA: Compat Data Strutures for
More informationAccommodations of QoS DiffServ Over IP and MPLS Networks
Aommodations of QoS DiffServ Over IP and MPLS Networks Abdullah AlWehaibi, Anjali Agarwal, Mihael Kadoh and Ahmed ElHakeem Department of Eletrial and Computer Department de Genie Eletrique Engineering
More informationSmooth Trajectory Planning Along Bezier Curve for Mobile Robots with Velocity Constraints
Smooth Trajetory Planning Along Bezier Curve for Mobile Robots with Veloity Constraints Gil Jin Yang and Byoung Wook Choi Department of Eletrial and Information Engineering Seoul National University of
More informationContents Contents...I List of Tables...VIII List of Figures...IX 1. Introduction Information Retrieval... 8
Contents Contents...I List of Tables...VIII List of Figures...IX 1. Introdution... 1 1.1. Internet Information...2 1.2. Internet Information Retrieval...3 1.2.1. Doument Indexing...4 1.2.2. Doument Retrieval...4
More informationSuperpixel Tracking. School of Information and Communication Engineering, Dalian University of Technology, China 2
Superpixel Traking Shu Wang1, Huhuan Lu1, Fan Yang1, and Ming-Hsuan Yang2 1 Shool of Information and Communiation Engineering, Dalian University of Tehnology, China 2 Eletrial Engineering and Computer
More informationSpatial-Aware Collaborative Representation for Hyperspectral Remote Sensing Image Classification
Spatial-Aware Collaborative Representation for Hyperspetral Remote Sensing Image ifiation Junjun Jiang, Member, IEEE, Chen Chen, Member, IEEE, Yi Yu, Xinwei Jiang, and Jiayi Ma Member, IEEE Representation-residual
More informationPARAMETRIC SAR IMAGE FORMATION - A PROMISING APPROACH TO RESOLUTION-UNLIMITED IMAGING. Yesheng Gao, Kaizhi Wang, Xingzhao Liu
20th European Signal Proessing Conferene EUSIPCO 2012) Buharest, Romania, August 27-31, 2012 PARAMETRIC SAR IMAGE FORMATION - A PROMISING APPROACH TO RESOLUTION-UNLIMITED IMAGING Yesheng Gao, Kaizhi Wang,
More informationUsing Augmented Measurements to Improve the Convergence of ICP
Using Augmented Measurements to Improve the onvergene of IP Jaopo Serafin, Giorgio Grisetti Dept. of omputer, ontrol and Management Engineering, Sapienza University of Rome, Via Ariosto 25, I-0085, Rome,
More informationGradient based progressive probabilistic Hough transform
Gradient based progressive probabilisti Hough transform C.Galambos, J.Kittler and J.Matas Abstrat: The authors look at the benefits of exploiting gradient information to enhane the progressive probabilisti
More informationSimulation of Crystallographic Texture and Anisotropie of Polycrystals during Metal Forming with Respect to Scaling Aspects
Raabe, Roters, Wang Simulation of Crystallographi Texture and Anisotropie of Polyrystals during Metal Forming with Respet to Saling Aspets D. Raabe, F. Roters, Y. Wang Max-Plank-Institut für Eisenforshung,
More informationA Partial Sorting Algorithm in Multi-Hop Wireless Sensor Networks
A Partial Sorting Algorithm in Multi-Hop Wireless Sensor Networks Abouberine Ould Cheikhna Department of Computer Siene University of Piardie Jules Verne 80039 Amiens Frane Ould.heikhna.abouberine @u-piardie.fr
More informationA Novel Bit Level Time Series Representation with Implication of Similarity Search and Clustering
A Novel Bit Level Time Series Representation with Impliation of Similarity Searh and lustering hotirat Ratanamahatana, Eamonn Keogh, Anthony J. Bagnall 2, and Stefano Lonardi Dept. of omputer Siene & Engineering,
More informationSemi-Supervised Affinity Propagation with Instance-Level Constraints
Semi-Supervised Affinity Propagation with Instane-Level Constraints Inmar E. Givoni, Brendan J. Frey Probabilisti and Statistial Inferene Group University of Toronto 10 King s College Road, Toronto, Ontario,
More informationCluster Centric Fuzzy Modeling
10.1109/TFUZZ.014.300134, IEEE Transations on Fuzzy Systems TFS-013-0379.R1 1 Cluster Centri Fuzzy Modeling Witold Pedryz, Fellow, IEEE, and Hesam Izakian, Student Member, IEEE Abstrat In this study, we
More informationContour Box: Rejecting Object Proposals Without Explicit Closed Contours
Contour Box: Rejeting Objet Proposals Without Expliit Closed Contours Cewu Lu, Shu Liu Jiaya Jia Chi-Keung Tang The Hong Kong University of Siene and Tehnology Stanford University The Chinese University
More informationMultiple-Criteria Decision Analysis: A Novel Rank Aggregation Method
3537 Multiple-Criteria Deision Analysis: A Novel Rank Aggregation Method Derya Yiltas-Kaplan Department of Computer Engineering, Istanbul University, 34320, Avilar, Istanbul, Turkey Email: dyiltas@ istanbul.edu.tr
More informationSVC-DASH-M: Scalable Video Coding Dynamic Adaptive Streaming Over HTTP Using Multiple Connections
SVC-DASH-M: Salable Video Coding Dynami Adaptive Streaming Over HTTP Using Multiple Connetions Samar Ibrahim, Ahmed H. Zahran and Mahmoud H. Ismail Department of Eletronis and Eletrial Communiations, Faulty
More informationNONLINEAR BACK PROJECTION FOR TOMOGRAPHIC IMAGE RECONSTRUCTION. Ken Sauer and Charles A. Bouman
NONLINEAR BACK PROJECTION FOR TOMOGRAPHIC IMAGE RECONSTRUCTION Ken Sauer and Charles A. Bouman Department of Eletrial Engineering, University of Notre Dame Notre Dame, IN 46556, (219) 631-6999 Shool of
More informationBioTechnology. An Indian Journal FULL PAPER. Trade Science Inc. Improvement of low illumination image enhancement algorithm based on physical mode
[Type text] [Type text] [Type text] ISSN : 0974-7435 Volume 10 Issue 22 BioTehnology 2014 An Indian Journal FULL PAPER BTAIJ, 10(22), 2014 [13995-14001] Improvement of low illumination image enhanement
More informationFlow Demands Oriented Node Placement in Multi-Hop Wireless Networks
Flow Demands Oriented Node Plaement in Multi-Hop Wireless Networks Zimu Yuan Institute of Computing Tehnology, CAS, China {zimu.yuan}@gmail.om arxiv:153.8396v1 [s.ni] 29 Mar 215 Abstrat In multi-hop wireless
More informationHyperspectral Images Classification Using Energy Profiles of Spatial and Spectral Features
journal homepage: www.elsevier.om Hyperspetral Images Classifiation Using Energy Profiles of Spatial and Spetral Features Hamid Reza Shahdoosti a a Hamedan University of ehnology, Department of Eletrial
More informationExploring the Commonality in Feature Modeling Notations
Exploring the Commonality in Feature Modeling Notations Miloslav ŠÍPKA Slovak University of Tehnology Faulty of Informatis and Information Tehnologies Ilkovičova 3, 842 16 Bratislava, Slovakia miloslav.sipka@gmail.om
More informationA Descriptive Framework for the Multidimensional Medical Data Mining and Representation
Journal of Computer Siene 7 (4): 519-55, 011 ISSN 1549-3636 011 Siene Publiations A Desriptive Framework for the Multidimensional Medial Data Mining and Representation Veeramalai Sankaradass and Kannan
More informationA DYNAMIC ACCESS CONTROL WITH BINARY KEY-PAIR
Malaysian Journal of Computer Siene, Vol 10 No 1, June 1997, pp 36-41 A DYNAMIC ACCESS CONTROL WITH BINARY KEY-PAIR Md Rafiqul Islam, Harihodin Selamat and Mohd Noor Md Sap Faulty of Computer Siene and
More informationTowards Optimal Naive Bayes Nearest Neighbor
Towards Optimal Naive Bayes Nearest Neighbor Régis Behmo 1, Paul Marombes 1,2, Arnak Dalalyan 2,andVéronique Prinet 1 1 NLPR / LIAMA, Institute of Automation, Chinese Aademy of Sienes 2 IMAGINE, LIGM,
More informationA Load-Balanced Clustering Protocol for Hierarchical Wireless Sensor Networks
International Journal of Advanes in Computer Networks and Its Seurity IJCNS A Load-Balaned Clustering Protool for Hierarhial Wireless Sensor Networks Mehdi Tarhani, Yousef S. Kavian, Saman Siavoshi, Ali
More informationCluster-based Cooperative Communication with Network Coding in Wireless Networks
Cluster-based Cooperative Communiation with Network Coding in Wireless Networks Zygmunt J. Haas Shool of Eletrial and Computer Engineering Cornell University Ithaa, NY 4850, U.S.A. Email: haas@ee.ornell.edu
More informationProbabilistic Classification of Image Regions using an Observation-Constrained Generative Approach
9 Probabilisti Classifiation of mage Regions using an Observation-Constrained Generative Approah Sanjiv Kumar, Alexander C. Loui 2, and Martial Hebert The Robotis nstitute, Carnegie Mellon University,
More informationDrawing lines. Naïve line drawing algorithm. drawpixel(x, round(y)); double dy = y1 - y0; double dx = x1 - x0; double m = dy / dx; double y = y0;
Naïve line drawing algorithm // Connet to grid points(x0,y0) and // (x1,y1) by a line. void drawline(int x0, int y0, int x1, int y1) { int x; double dy = y1 - y0; double dx = x1 - x0; double m = dy / dx;
More informationNaïve Bayesian Rough Sets Under Fuzziness
IJMSA: Vol. 6, No. 1-2, January-June 2012, pp. 19 25 Serials Publiations ISSN: 0973-6786 Naïve ayesian Rough Sets Under Fuzziness G. GANSAN 1,. KRISHNAVNI 2 T. HYMAVATHI 3 1,2,3 Department of Mathematis,
More informationMicro-Doppler Based Human-Robot Classification Using Ensemble and Deep Learning Approaches
Miro-Doppler Based Human-Robot Classifiation Using Ensemble and Deep Learning Approahes Sherif Abdulatif, Qian Wei, Fady Aziz, Bernhard Kleiner, Urs Shneider Department of Biomehatroni Systems, Fraunhofer
More informationAn Edge-based Clustering Algorithm to Detect Social Circles in Ego Networks
JOURNAL OF COMPUTERS, VOL. 8, NO., OCTOBER 23 2575 An Edge-based Clustering Algorithm to Detet Soial Cirles in Ego Networks Yu Wang Shool of Computer Siene and Tehnology, Xidian University Xi an,77, China
More information13.1 Numerical Evaluation of Integrals Over One Dimension
13.1 Numerial Evaluation of Integrals Over One Dimension A. Purpose This olletion of subprograms estimates the value of the integral b a f(x) dx where the integrand f(x) and the limits a and b are supplied
More informationContext-Aware Activity Modeling using Hierarchical Conditional Random Fields
Context-Aware Ativity Modeling using Hierarhial Conditional Random Fields Yingying Zhu, Nandita M. Nayak, and Amit K. Roy-Chowdhury Abstrat In this paper, rather than modeling ativities in videos individually,
More informationSpatio-Temporal Naive-Bayes Nearest-Neighbor (ST-NBNN) for Skeleton-Based Action Recognition
Spatio-Temporal Naive-Bayes Nearest-Neighbor () for Skeleton-Based Ation Reognition Junwu Weng Chaoqun Weng Junsong Yuan Shool of Eletrial and Eletroni Engineering Nanyang Tehnologial University, Singapore
More informationFigure 1. LBP in the field of texture analysis operators.
L MEHODOLOGY he loal inary pattern (L) texture analysis operator is defined as a gray-sale invariant texture measure, derived from a general definition of texture in a loal neighorhood. he urrent form
More informationDistributed Resource Allocation Strategies for Achieving Quality of Service in Server Clusters
Proeedings of the 45th IEEE Conferene on Deision & Control Manhester Grand Hyatt Hotel an Diego, CA, UA, Deember 13-15, 2006 Distributed Resoure Alloation trategies for Ahieving Quality of ervie in erver
More informationParametric Abstract Domains for Shape Analysis
Parametri Abstrat Domains for Shape Analysis Xavier RIVAL (INRIA & Éole Normale Supérieure) Joint work with Bor-Yuh Evan CHANG (University of Maryland U University of Colorado) and George NECULA (University
More informationSegmentation of brain MR image using fuzzy local Gaussian mixture model with bias field correction
IOSR Journal of VLSI and Signal Proessing (IOSR-JVSP) Volume 2, Issue 2 (Mar. Apr. 2013), PP 35-41 e-issn: 2319 4200, p-issn No. : 2319 4197 Segmentation of brain MR image using fuzzy loal Gaussian mixture
More informationRecommendation Subgraphs for Web Discovery
Reommation Subgraphs for Web Disovery Arda Antikaioglu Department of Mathematis Carnegie Mellon University aantika@andrew.mu.edu R. Ravi Tepper Shool of Business Carnegie Mellon University ravi@mu.edu
More informationDr.Hazeem Al-Khafaji Dept. of Computer Science, Thi-Qar University, College of Science, Iraq
Volume 4 Issue 6 June 014 ISSN: 77 18X International Journal of Advaned Researh in Computer Siene and Software Engineering Researh Paper Available online at: www.ijarsse.om Medial Image Compression using
More information