Supplementary Material
ECO: Efficient Convolution Operators for Tracking

Martin Danelljan, Goutam Bhat, Fahad Shahbaz Khan, Michael Felsberg
Computer Vision Laboratory, Department of Electrical Engineering, Linköping University, Sweden
{martin.danelljan, goutam.bhat, fahad.khan,

This supplementary document of [1] first provides the computational complexity analysis of the learning step in C-COT. Section 2 provides additional details of the numerical optimization procedure used for learning the factorized convolution operator (section 3.1 in the paper). We report relevant hardware specifications in section 3. Lastly, we provide additional results on the VOT16 and OTB-15 benchmarks in sections 4 and 5, respectively.

1. Complexity Analysis of the Learning

Here, we derive the computational complexity of the learning step in the baseline C-COT [2]. The learning itself is completely dominated by the problem of solving the normal equations (eq. (5) in the paper),

$$(A^H \Gamma A + W^H W)\hat{f} = A^H \Gamma \hat{y}. \qquad (1)$$

Here, $^H$ denotes the conjugate transpose. This linear system is iteratively solved using the Conjugate Gradient (CG) method [5, 6]. The dominating computation in CG is the evaluation of the left-hand side of (1), which is performed once per CG iteration. This computation is performed as

$$A^H \big(\Gamma (A\hat{f})\big) + W^H (W\hat{f}), \qquad (2)$$

where the parentheses indicate the order in which the operations are performed. Since the conjugate symmetry in the filter $\hat{f}$ is preserved by the operations in (2), only half of the spectrum needs to be processed. We can therefore regard $\hat{f}$ as a complex vector of $\sum_d K_d = D\bar{K}$ elements, where $K_d$ is the bandwidth of channel $d$ in the filter (see section 2 in the paper), $\bar{K} = \frac{1}{D}\sum_d K_d$ is the average number of Fourier coefficients per channel, and $D$ is the number of feature channels $d$. The matrix $A$ contains a diagonal block $A_{j,d}$ of size $K \times K_d$ for each sample $j \in \{1,\ldots,M\}$ and channel $d \in \{1,\ldots,D\}$. Here, we have defined $K = \max_d K_d$. The diagonal of $A_{j,d}$ consists of the elements $\{X_j^d[k]\hat{b}_d[k]\}_{k=0}^{K_d}$. As previously shown for the discrete DCF case [3], the matrix $A$ can be permuted to a block diagonal matrix $\tilde{A} = \bigoplus_{k=0}^{K} \tilde{A}_k$, where $\tilde{A}_k$ contains the elements $(\tilde{A}_k)_{j,d} = X_j^d[k]\hat{b}_d[k]$. The operations $\hat{f} \mapsto A\hat{f}$ and $\hat{v} \mapsto A^H\hat{v}$ can thus be implemented as block-wise dense matrix-vector multiplications, with a total of $O(DM\bar{K})$ operations. Moreover, $\Gamma$ is a diagonal matrix containing the weights $\alpha_j$, giving $O(M\bar{K})$ operations.

In the second term of (2), arising from the spatial regularization in the loss (eq. (3) in the paper), $W$ and $W^H$ are convolution matrices with the kernel $\hat{w}[k]$. These operations have a complexity of $O(D\bar{K}K_w)$, where $K_w$ is the number of non-zero coefficients in $\hat{w}$ (i.e. the size of the kernel). In practice however, the kernel $\hat{w}[k]$ is small (typically $5 \times 5$), having a lesser impact on the overall complexity. To simplify the complexity expression, we therefore disregard this term. By taking the number of CG iterations $N_{\mathrm{CG}}$ into account, we thus obtain the final expression $O(N_{\mathrm{CG}} D M \bar{K})$ for the complexity of the learning step.

The preprocessing steps needed for the CG optimization only have a marginal impact on the overall learning time. The most significant of these is the Fast Fourier Transform (FFT) of the feature map, with $O(\sum_d N_d \log N_d)$ complexity, where $N_d$ is the resolution of feature channel $d$. But since the FFT computations correspond to roughly 1% of the total time in C-COT, we exclude this part.
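To make the cost argument above concrete, the following NumPy sketch evaluates the left-hand side (2) once in the permuted domain, assuming the per-frequency blocks $\tilde{A}_k$ are stacked into a single complex array and, for brevity, treating the spatial regularization as a small 1-D convolution per channel. The array names, shapes and the 1-D treatment of $\hat{w}$ are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cg_lhs(f_hat, A_blocks, gamma, w_hat):
    """One evaluation of (A^H Gamma A + W^H W) f_hat, cf. eqs. (1)-(2).

    A_blocks : complex (K+1, M, D) array, per-frequency blocks (A_k)_{j,d}
    f_hat    : complex (K+1, D) array, filter coefficients (half spectrum)
    gamma    : real (M,) array, sample weights alpha_j (diagonal of Gamma)
    w_hat    : complex (Kw,) array, small regularization kernel (1-D here)
    """
    # Data term: one dense (M x D) matrix-vector product per frequency k,
    # i.e. O(M D) each and O(D M K) in total, as in the text.
    Af = np.einsum('kmd,kd->km', A_blocks, f_hat)        # A f_hat
    Af *= gamma[None, :]                                  # Gamma (A f_hat)
    data = np.einsum('kmd,km->kd', A_blocks.conj(), Af)   # A^H (Gamma A f_hat)

    # Regularization term: W^H (W f_hat), a short convolution per channel,
    # costing only O(D K Kw) and therefore dropped from the final complexity.
    reg = np.empty_like(f_hat)
    for d in range(f_hat.shape[1]):
        Wf = np.convolve(f_hat[:, d], w_hat, mode='same')
        reg[:, d] = np.convolve(Wf, w_hat[::-1].conj(), mode='same')
    return data + reg
```

Each CG iteration performs exactly one such evaluation, which is what gives the $O(N_{\mathrm{CG}} D M \bar{K})$ total derived above.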

2. Factorized Convolution Operator Optimization

Here, we provide more details regarding the optimization procedure for learning the factorized convolution operator (section 3.1 in the paper). We consider the case of learning the factorized operator $S_{Pf}\{x\}$ (eq. (6) in the paper) based on a single sample $(x, y)$,

$$E(f, P) = \big\| S_{f,P}\{x\} - y \big\|_{L^2}^2 + \sum_{c=1}^{C} \big\| w f^c \big\|_{L^2}^2 + \lambda \| P \|_F^2. \qquad (3)$$

The loss is obtained by employing the factorized operator $S_{f,P}\{x\}$ in the data term of the original loss (eq. (3) in the paper) and adding a regularization on the Frobenius norm $\|P\|_F^2$ of $P$. By applying Parseval's formula to the first two terms of (3) and utilizing the linearity and convolution properties of the Fourier series coefficients, we obtain the equivalent loss (same as eq. (7) in the paper),

$$E(f, P) = \big\| \hat{z}^T P \hat{f} - \hat{y} \big\|_{\ell^2}^2 + \sum_{c=1}^{C} \big\| \hat{w} * \hat{f}^c \big\|_{\ell^2}^2 + \lambda \| P \|_F^2. \qquad (4)$$

Here, we have defined the interpolated feature map as $z = J\{x\}$ to simplify notation. Note that the matrix-vector products in (4) are performed point-wise,

$$\big(\hat{z}^T P \hat{f}\big)[k] = \sum_{d=1}^{D} \sum_{c=1}^{C} \hat{z}^d[k]\, p_{d,c}\, \hat{f}^c[k], \quad k \in \mathbb{Z}. \qquad (5)$$

We use the Gauss-Newton method [5] to optimize the non-linear least squares problem (4). In each iteration $i$, the residual in the data term is linearized by performing a first order Taylor expansion at the current estimate $(\hat{f}_i, P_i)$ (eq. (8) in the paper). This gives the following quadratic sub-problem (eq. (9) in the paper),

$$\tilde{E}(\hat{f}_{i,\Delta}, \Delta P) = \big\| \hat{z}^T P_i \hat{f}_{i,\Delta} + (\hat{f}_i \otimes \hat{z})^T \mathrm{vec}(\Delta P) - \hat{y} \big\|_{\ell^2}^2 + \sum_{c=1}^{C} \big\| \hat{w} * \hat{f}^c_{i,\Delta} \big\|_{\ell^2}^2 + \lambda \| P_i + \Delta P \|_F^2. \qquad (6)$$

To derive a simple formula for the normal equations of (6), we first introduce some notation. Let $\hat{f}$ be the vectorization of $\hat{f}_{i,\Delta}$, analogously to (1), and define $\Delta p = \mathrm{vec}(\Delta P)$. Further, let $p^c$ denote the $c$th column in $P_i$ and set $p = \mathrm{vec}(P_i)$. We then define the matrices,

$$A_P = \begin{bmatrix} 0_{(K-K_1) \times (2K_1+1)} & & 0_{(K-K_C) \times (2K_C+1)} \\ \mathrm{diag}\big(\hat{z}[-K_1]^T p^1, \ldots, \hat{z}[K_1]^T p^1\big) & \cdots & \mathrm{diag}\big(\hat{z}[-K_C]^T p^C, \ldots, \hat{z}[K_C]^T p^C\big) \\ 0_{(K-K_1) \times (2K_1+1)} & & 0_{(K-K_C) \times (2K_C+1)} \end{bmatrix}, \quad B_f = \begin{bmatrix} (\hat{f}_i \otimes \hat{z})[-K]^T \\ \vdots \\ (\hat{f}_i \otimes \hat{z})[K]^T \end{bmatrix}. \qquad (7)$$

Here, $A_P$ has a structure very similar to the matrix $A$ in (1), but contains only a single training sample. Note that the diagonal blocks in $A_P$ are padded with zero matrices $0_{M \times N}$ along the columns to achieve the same number of $2K + 1$ rows. The Gauss-Newton sub-problem (6) can then be expressed as,

$$\tilde{E}(\hat{f}, \Delta p) = \big\| A_P \hat{f} + B_f \Delta p - \hat{y} \big\|_2^2 + \big\| W \hat{f} \big\|_2^2 + \lambda \| \Delta p + p \|_2^2. \qquad (8)$$

Here, the convolution matrix $W$ and the vectorization $\hat{y}$ are defined as in (1). The normal equations of (8) are obtained by setting the gradient to zero,

$$\begin{bmatrix} A_P^H A_P + W^H W & A_P^H B_f \\ B_f^H A_P & B_f^H B_f + \lambda I \end{bmatrix} \begin{bmatrix} \hat{f} \\ \Delta p \end{bmatrix} = \begin{bmatrix} A_P^H \hat{y} \\ B_f^H \hat{y} - \lambda p \end{bmatrix}. \qquad (9)$$

We employ the Conjugate Gradient method to iteratively solve the sub-problem (9).
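As a complement to the derivation above, the sketch below shows how one Gauss-Newton step could be organised around the normal equations (9): assemble the linearization at the current estimate, apply the block system matrix-free, and run a few inner CG iterations. Assembling $A_P$ and $B_f$ as in (7) is left outside the sketch, and the variable names and the use of SciPy's generic CG solver are assumptions made for illustration; this is not the authors' Matlab implementation.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def gauss_newton_step(A_P, B_f, W, y_hat, P, lam, n_cg=5):
    """One Gauss-Newton step for (4): solve the normal equations (9) with CG.

    A_P, B_f : matrices of (7), assembled at the current estimate (f_hat_i, P_i)
    W        : spatial-regularization convolution matrix, as in (1) and (8)
    y_hat    : vectorized label spectrum;  P : current projection matrix P_i
    """
    p = P.reshape(-1, order='F')               # p = vec(P_i), column stacking
    n_f, n_p = A_P.shape[1], B_f.shape[1]

    def lhs(v):
        # Left-hand side of (9) applied to v = [f_hat; delta_p], matrix-free.
        vf, vp = v[:n_f], v[n_f:]
        r = A_P @ vf + B_f @ vp                # A_P f + B_f (delta p)
        top = A_P.conj().T @ r + W.conj().T @ (W @ vf)
        bot = B_f.conj().T @ r + lam * vp
        return np.concatenate([top, bot])

    op = LinearOperator((n_f + n_p, n_f + n_p), matvec=lhs, dtype=complex)
    rhs = np.concatenate([A_P.conj().T @ y_hat,
                          B_f.conj().T @ y_hat - lam * p])
    sol, _ = cg(op, rhs, maxiter=n_cg)         # a few inner CG iterations
    f_new = sol[:n_f]                          # updated filter coefficients
    P_new = P + sol[n_f:].reshape(P.shape, order='F')   # P_{i+1} = P_i + delta P
    return f_new, P_new
```

Because the system is only ever applied matrix-free, each inner CG iteration again reduces to a handful of products with $A_P$, $B_f$ and $W$, mirroring the complexity analysis of section 1.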

[Figure 1: per-attribute EAO plot comparing ECO, ECO-HC, C-COT, TCNN, SSAT, MLDF, Staple, DDC, EBT and SRBT over the attributes camera motion, illumination change, occlusion, size change, motion change and no attribute.]
Figure 1. Expected Average Overlap (EAO) scores for each attribute on the VOT16 dataset. Here, No Attribute denotes frames with no labeled attribute.

3. Hardware Specifications

Our tracker is implemented in Matlab and uses MatConvNet [7] for deep feature extraction. The frame-rate measurements of our CPU implementation were performed on a desktop computer with a 4-core Intel Core i7-6700 CPU at 3.4 GHz. The frame-rate measurements of our GPU implementation were performed on a Tesla K40 GPU.

4. Additional Results on VOT16

Here, we provide further experimental evaluation on the VOT16 dataset [4] with 60 videos. The videos and the evaluation toolkit can be obtained from the official VOT challenge website. In the VOT16 dataset, each frame is labeled with five different attributes: camera motion, illumination change, occlusion, size change and motion change. Figure 1 visualizes the EAO of each attribute individually. Our approach achieves the best results on all five attributes.

5. Additional Results on OTB-15

Here, we report additional results on the OTB-15 dataset [8] with 100 videos. The ground truth annotations and videos are available from the benchmark website. In the OTB-15 dataset, each video is annotated with 11 different attributes: scale variation, background clutter, out-of-plane rotation, in-plane rotation, illumination variation, motion blur, fast motion, deformation, occlusion, out of view and low resolution. The success plots for all attributes are shown in figure 2. Our ECO tracker achieves the best performance on 8 out of 11 attributes. Further, our method improves over the baseline C-COT [2] on 9 out of 11 attributes. For a fair comparison, we employ the same combination of deep and hand-crafted features (Conv1, Conv5 and HOG) in the baseline C-COT as in our ECO tracker on the OTB, TempleColor and UAV123 datasets. Note that this set of features provides substantially improved performance in C-COT compared to the original results reported in [2], where only deep convolutional features are used.

[Figure 2: success plots for the full OTB-15 dataset (top-left) and for each of the 11 attributes: scale variation (64), background clutter (31), out-of-plane rotation (63), in-plane rotation (51), illumination variation (38), motion blur (29), fast motion (39), deformation (44), occlusion (49), out of view (14) and low resolution (9).]
Figure 2. Success plots on the OTB-15 dataset [8]. The total success plot (top-left) is displayed along with the plots for all 11 attributes. The title text indicates the name of the attribute and the number of videos associated with it. The area-under-the-curve scores for the top 10 trackers are shown in the legend.

References

[1] M. Danelljan, G. Bhat, F. Shahbaz Khan, and M. Felsberg. ECO: Efficient convolution operators for tracking. In CVPR, 2017.
[2] M. Danelljan, A. Robinson, F. Shahbaz Khan, and M. Felsberg. Beyond correlation filters: Learning continuous convolution operators for visual tracking. In ECCV, 2016.
[3] H. K. Galoogahi, T. Sim, and S. Lucey. Multi-channel correlation filters. In ICCV, 2013.
[4] M. Kristan, A. Leonardis, J. Matas, M. Felsberg, R. Pflugfelder, L. Čehovin, T. Vojíř, G. Häger, et al. The visual object tracking VOT2016 challenge results. In ECCV Workshops, 2016.
[5] J. Nocedal and S. J. Wright. Numerical Optimization. Springer, 2nd edition, 2006.
[6] J. R. Shewchuk. An introduction to the conjugate gradient method without the agonizing pain. Technical report, Carnegie Mellon University, Pittsburgh, PA, USA, 1994.
[7] A. Vedaldi and K. Lenc. MatConvNet: Convolutional neural networks for Matlab. CoRR, abs/1412.4564, 2014.
[8] Y. Wu, J. Lim, and M.-H. Yang. Object tracking benchmark. TPAMI, 37(9):1834-1848, 2015.
