2D Recognition and Tracking


2D Recognition and Tracking

1 Theme:
2 Subject:

List of Abbreviations
AR - Augmented Reality
6 DoF - Six Degrees of Freedom

3 Background
One of the key elements of AR is 2D recognition and tracking technology: the ability to identify 2D images captured by the device's camera and to determine their position in space. AR is the enhancement of the view of the real world with CG overlays such as graphics, text, video, or sound. Most AR applications use 2D images to trigger a pre-defined 3D visualization, animation, video, or soundtrack; in other words, they use 2D recognition and tracking to determine what relevant information should be added to the real world. 2D recognition and tracking therefore plays an essential role in the field of augmented reality.

4 Scope
2D Recognition and Tracking
When the target image appears in the camera view, the image should be recognized and its pose (6 DoF) estimated. After recognition, the image should be tracked and its pose (6 DoF) updated while the image moves.
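
To make the scope concrete (an illustration added here, not part of the original call), the Python/OpenCV sketch below shows a minimal planar-target recognition step with 6-DoF pose estimation, assuming calibrated camera intrinsics and a known physical target size; a deliverable for Huawei phones would be native code, and frame-to-frame tracking (e.g. optical flow on the matched keypoints) would then update the pose at lower cost than re-detection.

```python
# Minimal sketch: recognize a known planar target in a camera frame and
# estimate its 6-DoF pose. Camera intrinsics (K, dist) and the physical
# target width/height are assumed to be known/calibrated (illustrative).
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def register_target(target_bgr):
    gray = cv2.cvtColor(target_bgr, cv2.COLOR_BGR2GRAY)
    kp, desc = orb.detectAndCompute(gray, None)
    return kp, desc, gray.shape[::-1]            # (width, height) in pixels

def estimate_pose(frame_bgr, target, K, dist, target_size_m):
    kp_t, desc_t, (w_px, h_px) = target
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    kp_f, desc_f = orb.detectAndCompute(gray, None)
    if desc_f is None:
        return None
    matches = bf.match(desc_t, desc_f)
    if len(matches) < 15:                        # not enough evidence for a pose
        return None
    # Map matched 2D target keypoints to 3D points on the target's z=0 plane (metres).
    sx, sy = target_size_m[0] / w_px, target_size_m[1] / h_px
    obj_pts = np.float32([[kp_t[m.queryIdx].pt[0] * sx,
                           kp_t[m.queryIdx].pt[1] * sy, 0.0] for m in matches])
    img_pts = np.float32([kp_f[m.trainIdx].pt for m in matches])
    ok, rvec, tvec, _ = cv2.solvePnPRansac(obj_pts, img_pts, K, dist)
    return (rvec, tvec) if ok else None          # rotation + translation = 6 DoF
```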

5 Expected Outcome and Deliverables
- Source code of the 2D recognition and tracking software, usable on Huawei mobile phones;
- Source code of the 2D image preprocessing software (if needed), usable on Huawei mobile phones and PCs;
- API documentation for the software and a demo running on a Huawei mobile phone;
- The copyright of the source code of the 2D recognition and tracking software and the 2D image preprocessing software shall belong to Huawei.

6 Acceptance Criteria
1) On-device 2D recognition and tracking must be stable, and recognition time must be below 1 second;
2) Recognition accuracy of at least 90%;
3) Recognition works for target images occupying as little as 5% of the screen area;
4) Positional error of at most 0.25% of the image size;
5) Tracking works over a viewing-angle range of 150° (±75°);
6) Tracking withstands sideways (constant linear) movement of 0.38 m/s;
7) Tracking withstands constant rotation of 180°/s;
8) When recognizing and tracking on a Full HD (1080p) camera image, CPU (Kirin 970) usage must stay below 15%. Only recognition and tracking are measured, excluding rendering. The test device is the Mate 10 (CPU: 4x Cortex-A73 2.36 GHz + 4x Cortex-A53 1.8 GHz, RAM: 6 GB).

7 Phased Project Plan
T denotes the starting time of the project.
Phase 1 (T to T + 3 months): Finish the basic 2D recognition and tracking software. Deliverables: source code of the 2D recognition and tracking software, source code of the 2D image preprocessing software, API documentation, and the demo.
Phase 2 (T + 3 months to T + 5 months): Optimize the performance of the 2D recognition and tracking software. Deliverables: the new version of the 2D recognition and tracking software, source code of the 2D image preprocessing software, API documentation, and the demo. Optimization should meet acceptance criteria 1) to 7) of Section 6.
Phase 3 (T + 5 months to T + 6 months): Optimize the CPU usage of the 2D recognition and tracking software. Deliverables: the final version of the 2D recognition and tracking software, source code of the 2D image preprocessing software, API documentation, and the demo. Optimization should meet acceptance criterion 8) of Section 6.

Copyright © Huawei Technologies Co., Ltd. All rights reserved.
No part of this document may be reproduced or transmitted in any form or by any means without prior written consent of Huawei Technologies Co., Ltd.

Trademarks and Permissions
Huawei and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd. All other trademarks and trade names mentioned in this document are the property of their respective holders.

Confidentiality
All information in this document (including, but not limited to, interface protocols, parameters, flowcharts and formulas) is the confidential information of Huawei Technologies Co., Ltd. and its affiliates. Any and all recipients shall keep this document in confidence with the same degree of care as used for their own confidential information, and shall not publish or disclose it, wholly or in part, to any other party without the prior written consent of Huawei Technologies Co., Ltd.

Notice
Unless otherwise agreed by Huawei Technologies Co., Ltd., all the information in this document is subject to change without notice. Every effort has been made in the preparation of this document to ensure the accuracy of its contents, but all statements, information, and recommendations in this document do not constitute a warranty of any kind, express or implied.

Distribution
Without the written consent of Huawei Technologies Co., Ltd., this document may not be distributed except for the purpose of Huawei Innovation R&D Projects and only among those who participate in Huawei Innovation R&D Projects.

Project name: Architecture and/or Compiler for Deep Learning

1 Theme:
2 Subject: Computer Architecture

List of Abbreviations
NPU - Neural Processing Unit
CNN - Convolutional Neural Network
ASIC - Application Specific Integrated Circuit
FPGA - Field Programmable Gate Array
GPU - Graphics Processing Unit
EDA - Electronic Design Automation

3 Background
Deep learning has become ubiquitous and indispensable. To meet its high computational demand, specialized architectures have been proposed, such as the DianNao series and the MIT Eyeriss accelerator. An industrial example of this innovation is the NPU in the Huawei Kirin 970 processor. Designing a deep learning architecture is a multi-objective optimization that must balance accuracy, speed, reliability, and energy efficiency, as well as software tool-chain support and application flexibility.

Furthermore, there is a rising need for compiler technology for deep learning systems that compiles front-end framework workloads directly to hardware backends, whether a custom processor or various other backends such as GPUs. Generally speaking, both architecture and compiler research proposals for deep learning are welcome.

4 Scope
Deep learning architecture
Investigate architectures for deep learning acceleration (e.g. CNN, neuromorphic). The realization platform and style are not restricted: it can be a specialized datapath, a many-core processor, etc., in ASIC, FPGA, or EDA simulation.
Compiler for deep learning systems
Deploy front-end deep learning framework workloads (e.g. Caffe, TensorFlow) to the hardware backend, e.g. by scheduling the deep learning accelerator and optimizing the instruction stream. A simplified sketch of such a lowering pass follows the deliverables list below.

5 Expected Outcome and Deliverables
- Technical reports on deep learning architecture and/or the compiler for deep learning systems;
- Source code, scripts, other design files and descriptions;
- Discussions, including face-to-face and/or online conference calls;
- 1 invention/patent, depending on the characteristics of the detailed research topic.
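
To illustrate the compiler track (a toy sketch added here, not part of the original call, and not tied to any particular framework or accelerator): lowering a framework-level operator typically means tiling it to fit on-chip buffers and emitting an instruction sequence for the backend. All instruction names and sizes below are hypothetical.

```python
# Toy sketch of a compiler lowering pass: a framework-level matmul node is
# tiled to fit a hypothetical accelerator buffer and emitted as a flat list
# of pseudo-instructions (LOAD / COMPUTE / STORE). All names are illustrative.
from dataclasses import dataclass

@dataclass
class MatMulNode:          # simplified front-end graph node
    M: int
    K: int
    N: int

def lower_matmul(node, tile=64):
    """Emit pseudo-instructions for C[M,N] = A[M,K] @ B[K,N], tiled by `tile`."""
    prog = []
    for m in range(0, node.M, tile):
        for n in range(0, node.N, tile):
            prog.append(("LOAD_ACC_ZERO", m, n, tile))
            for k in range(0, node.K, tile):
                prog.append(("LOAD_A", m, k, tile))       # DMA an A tile into the buffer
                prog.append(("LOAD_B", k, n, tile))       # DMA a B tile into the buffer
                prog.append(("COMPUTE_MACC", m, n, k))    # tile-level multiply-accumulate
            prog.append(("STORE_C", m, n, tile))          # write the result tile back
    return prog

instrs = lower_matmul(MatMulNode(M=256, K=256, N=256), tile=64)
print(len(instrs), "instructions;", instrs[:4])
```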

6 Acceptance Criteria
Realization of the design in an accepted language or format (e.g. C++, Python, Verilog, layout, etc.) and demonstration of the idea in computer simulation.

7 Phased Project Plan
Phase 1 (~3 months): Survey the state of the art and provide the technical plan for the research work.
Phase 2 (~6 months): Technical research, development, and testing of the design. Provide the technical report.
Phase 3 (~3 months): Apply for the patent and summarize the research work. Provide all technical materials.

Conversion of Artificial Neural Networks to Spiking Neural Networks

1. Theme:
2. Subject: Brain-Inspired Computing

List of Abbreviations:
ANN - Artificial Neural Networks
DNN - Deep Neural Networks
CNN - Convolutional Neural Networks
SNN - Spiking Neural Networks

3. Background
In recent years, spiking deep neural networks have become an increasingly active field of research. This has been driven both by the interest in building more biologically realistic neural network models and by recent improvements in, and the availability of, larger-scale neuromorphic computing platforms, which are optimized for emulating brain-like spike-based computation in dedicated analog or digital hardware. Neuromorphic platforms can be orders of magnitude more power-efficient than conventional CPUs or GPUs for running spiking networks, and often permit distributed, asynchronous, event-based computation, thereby improving scalability and reducing latency.

Furthermore, event-driven neuromorphic systems focus their computational effort on the currently active parts of the network, effectively saving power on the rest of the network.

Training of spiking deep networks typically does not use spike-based learning rules; instead it starts from a conventional ANN, fully trained with backpropagation, followed by a conversion of the rate-based model into a model consisting of simple spiking neurons. Theory has shown that SNNs are at least as computationally powerful as their analog counterparts, but in practice it has proven difficult to come up with equivalent solutions. One approach, used by O'Connor (2013), is to train spiking DBNs by using the Siegert mean-firing-rate approximation of leaky integrate-and-fire (LIF) neurons to approximate probabilities during training. Another approach, used by Perez-Carrasco (2013), requires tuning of parameters such as the leak and refractory period in the spiking network. In both cases the spiking network suffers considerable losses in classification accuracy compared to a non-spiking network of similar architecture. We therefore look for a new method that can convert any ANN to an SNN architecture, so that it maintains accuracy while being computationally much more efficient than the ANN.

4. Scope
1) Research on methodology: Investigate the state-of-the-art research on conversion from ANN to SNN; propose a novel method with high accuracy and low energy consumption.
2) Research on application: Investigate scenarios in which the aforementioned SNN architecture shows outstanding advantages. Run the experiments on a simulation platform or on real scenarios.
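
As a point of reference for the methodology track (an illustration added here, not part of the original call), the plain-NumPy sketch below shows the common rate-based conversion idea: a trained ReLU layer's weights are reused by integrate-and-fire neurons whose firing rates approximate the ReLU activations. Layer sizes, thresholds, and the reset-by-subtraction scheme are illustrative assumptions.

```python
# Minimal sketch of rate-based ANN-to-SNN conversion for one fully connected
# ReLU layer: reuse the trained weights in integrate-and-fire (IF) neurons and
# check that spike rates over T timesteps approximate the ReLU activations.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0, 0.3, size=(10, 20))      # "trained" weights (illustrative)
b = rng.normal(0, 0.1, size=10)
x = rng.random(20)                          # input, read as firing probabilities in [0, 1]

ann_out = np.maximum(0.0, W @ x + b)        # ReLU activations of the source ANN

T = 500                                     # simulation timesteps
v_th = 1.0                                  # IF firing threshold
v = np.zeros(10)                            # membrane potentials
spike_count = np.zeros(10)

for _ in range(T):
    in_spikes = (rng.random(20) < x).astype(float)   # Poisson-like rate coding of the input
    v += W @ in_spikes + b                           # integrate weighted input spikes
    fired = v >= v_th
    spike_count += fired
    v[fired] -= v_th                                 # "reset by subtraction" keeps rate ~ activation

snn_rate = spike_count / T                  # approximates the ReLU output, clipped at 1 spike/step
print("ANN activations :", np.round(ann_out, 2))
print("SNN firing rates:", np.round(snn_rate, 2))
```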

5. Expected Outcome and Deliverables
- At least one novel conversion algorithm proposed and tested.
- 1-2 papers published in top-tier (CCF-A/B) journals and conferences.
- 1-2 patents, which need to pass Huawei's review.
- Regular meetings and notes (once per month).

6. Phased Project Plan
Phase 1 (~T+3 months): Investigate the state of the art in conversion from ANN to SNN.
Phase 2 (~T+9 months): Propose the new method and algorithm, run experiments on real applications, and show competitive performance on MNIST or CIFAR-10 (higher accuracy and lower energy consumption).
Phase 3 (~T+12 months): Experiments on deep networks (e.g. ResNet18-SSD), showing competitive performance. Experiments on SNN chips.

Distributed parallel modelling of the simulation model based on C/C++

1 Theme:
2 Subject: Simulation Model

3 Background
Even with today's high-speed processors, it is common for simulation jobs to take hours, days, or even weeks to complete. To overcome the speed limitation of individual processors, parallel or distributed computation over multiple servers is required: the simulation model is divided into several parts, and each part is simulated on one of the servers. In this way, a speedup proportional to the number of servers can be achieved. A sketch of this partition-and-message pattern appears after the project plan below.

4 Expected Outcome and Deliverables
1. An accurate, reliable distributed parallel framework that encapsulates the task of distributing a simulation model over several partitions (servers) and handles messaging between these partitions. The framework provides a speedup proportional to the number of servers used.
2. The source code and the algorithms for partitioning, communication, and synchronization between the different servers.
3. A technical report giving a full description of the above distributed parallel framework.

5 Acceptance Criteria
An accurate, reliable distributed parallel framework that encapsulates the task of distributing a simulation model over several partitions (servers) and handles messaging between these partitions. The framework provides a speedup proportional to the number of servers used.

The distributed parallel framework needs to be tested and approved by the expert team.

6 Phased Project Plan
Phase 1 (T ~ T+2 months): Analysis of the state of the technology.
Phase 2 (T+3 ~ T+7 months): Provide the corresponding technical analysis report and a detailed design proposal.
Phase 3 (T+8 ~ T+12 months): Simulation and verification of the results.
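
The deliverables above target a C/C++ framework; purely to illustrate the partition-and-message pattern referred to in the background, here is a Python/mpi4py sketch in which each rank simulates one partition and exchanges boundary values every timestep. All model details, sizes, and the diffusion-style update are illustrative assumptions, not the actual simulation model.

```python
# Sketch of the partition-and-message pattern: each MPI rank simulates one
# partition of a 1-D model and exchanges boundary values with its neighbours
# every timestep (a simple synchronous scheme).
# Run with e.g.:  mpiexec -n 4 python partitioned_sim.py   (filename illustrative)
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N_LOCAL, STEPS, ALPHA = 1000, 100, 0.1                  # illustrative sizes/parameters
state = np.random.default_rng(rank).random(N_LOCAL)     # this rank's partition of the model

for step in range(STEPS):
    left, right = state[0], state[-1]                   # reflective boundaries at the ends
    if rank > 0:                                        # exchange boundary cells with neighbours
        left = comm.sendrecv(state[0], dest=rank - 1, source=rank - 1)
    if rank < size - 1:
        right = comm.sendrecv(state[-1], dest=rank + 1, source=rank + 1)
    # Local update (toy diffusion step standing in for the real model logic).
    padded = np.concatenate(([left], state, [right]))
    state = state + ALPHA * (padded[:-2] - 2 * state + padded[2:])

print(f"rank {rank}: mean state after {STEPS} steps = {state.mean():.4f}")
```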

Improving the Utilization Rate of GPU Cores for Multi-stage Task Applications

1 Theme:
2 Subject: Heterogeneous Computing

List of Abbreviations
NA

3 Background
GPUs are now widely used in applications driven by deep learning technology, such as video and image processing, and serve as the workhorse of modern AI. However, GPUs are not fully utilized: the utilization rate is usually lower than 60% in most cases. The low utilization rate has two main causes: one is the limitation of Nvidia's hardware structure, which exposes little controllability to programmers; the other is the characteristics of the tasks themselves, such as task granularity and data dependence, which prevent multi-stage tasks or multiple tasks from executing concurrently and efficiently on the GPU. Bo Wu et al. [1] proposed and developed FLEP, a software system that enables flexible kernel preemption and kernel scheduling on commodity GPUs. A versatile programming framework for pipelined computing on GPUs was developed by Jidong Zhai et al. [2]. These works aim to impose more controllability on GPU kernel execution and to improve the utilization of GPU cores and resources. However, how to bypass the hardware limitations of Nvidia GPUs and alleviate the burden that task granularity and data dependence place on concurrent task execution, so as to improve GPU utilization, is still a novel and tough research topic for accelerating applications such as video analysis and image retrieval.
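
For background on the scheduling side (an illustration added here, not part of the original call, assuming a CUDA-capable GPU with CuPy installed): independent kernels can be issued on separate CUDA streams so that the hardware scheduler may overlap them instead of serializing them on the default stream. The persistent-threads and preemption techniques cited above go further, but this is the basic mechanism they build on.

```python
# Illustration: launching independent work on separate CUDA streams so the
# GPU can overlap kernels instead of serializing them on the default stream.
# Matrix sizes are arbitrary.
import cupy as cp

a = [cp.random.random((2048, 2048)).astype(cp.float32) for _ in range(4)]
b = [cp.random.random((2048, 2048)).astype(cp.float32) for _ in range(4)]
streams = [cp.cuda.Stream(non_blocking=True) for _ in range(4)]
results = [None] * 4

for i, s in enumerate(streams):
    with s:                          # kernels launched here are queued on stream s
        results[i] = cp.matmul(a[i], b[i])

for s in streams:
    s.synchronize()                  # wait for all streams to finish
print([float(r.sum()) for r in results])
```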

4 Scope
1) Research on GPU resource management in the CPU+GPU heterogeneous programming model: investigate efficient GPU memory management methods for the heterogeneous programming model;
2) Research on GPU task schedule management in the CPU+GPU heterogeneous programming model: investigate how to schedule GPU kernels to execute concurrently and reduce waiting time caused by implicit synchronization, task granularity, and data dependence, so as to achieve higher utilization;
3) Research on efficient solution design: based on 1) and 2), design one or more efficient solutions that resolve the low GPU utilization of most applications in the heterogeneous programming model, especially the kernelized correlation filters (KCF) algorithm for object tracking in video analysis applications.

5 Expected Outcome and Deliverables
- Technical reports on efficient GPU memory management methods for the heterogeneous programming model;
- Technical reports on scheduling GPU kernels to execute concurrently, e.g. the persistent-threads programming model, GPU preemption model, and GPU pipeline programming model, including theoretical analysis of optimal system parameters;
- Efficient solutions for GPU utilization, with source code and descriptions;
- 1~2 inventions/patents.

6 Acceptance Criteria
The project proposal is accepted by the Huawei evaluation team.
The project deliverables are accepted by the Huawei evaluation team.

GPU utilization rate > 95% for the KCF algorithm.

7 Phased Project Plan
Phase 1 (~3 months): Survey the state of the art in GPU memory management methods and in concurrent GPU kernel scheduling methods. Analyze and provide the related technical reports.
Phase 2 (~5 months): Research on solution design based on 1) and 2); provide a state-of-the-art solution that achieves a higher GPU utilization rate in typical scenarios, and provide the related technical report.
Phase 3 (~4 months): Evaluate the solutions on a real scenario provided by Huawei; provide the related algorithms, code, simulation results, and patents.

References:
[1] Wu B, Liu X, Zhou X, et al. FLEP: Enabling flexible and efficient preemption on GPUs. In: Proceedings of the Twenty-Second International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS). ACM, 2017.
[2] Zheng Z, Oh C, Zhai J, et al. VersaPipe: a versatile programming framework for pipelined computing on GPU. In: Proceedings of the 50th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO). ACM, 2017.

Key Technical Progress and Challenges of Topological Quantum Computing

1. Theme:
2. Subject: Topological Quantum Computing

3. Background
Quantum computing is the art of controlling and exploiting the time evolution of highly complex, entangled quantum states of physical hardware registers for the purpose of computation and simulation. It has already been shown that difficult problems, such as the factorization of numbers, can be solved efficiently on a quantum computer. Topological quantum computing is one potential route to future quantum computers. A topological quantum computer is a theoretical quantum computer that employs two-dimensional quasiparticles called anyons, whose world lines pass around one another to form braids in a three-dimensional spacetime (i.e., one temporal plus two spatial dimensions). These braids form the logic gates that make up the computer. The advantage of a quantum computer based on quantum braids over one using trapped quantum particles is that the former is much more stable: small, cumulative perturbations can cause quantum states to decohere and introduce computational errors, but such small perturbations do not change the braids' topological properties. While the elements of a topological quantum computer originate in a purely mathematical realm, experiments in fractional quantum Hall systems indicate that these elements may be created in the real world using gallium arsenide semiconductors at temperatures near absolute zero and subjected to strong magnetic fields. This project is dedicated to acquiring knowledge and opinions on the state of the art of topological quantum computing, especially the experimental feasibility of some up-to-date theoretical and technical proposals.

4. Scope
The project involves analyzing and integrating information about topological quantum computing in areas including, but not limited to, the following: the basic theory of topological quantum computing, up-to-date progress, the partner's research results, breakthrough technologies, and future trends.

5. Expected Outcome and Deliverables
- Survey reports on the up-to-date progress of topological quantum computing;
- Survey reports on the state of the art of technical proposals and experimental feasibility analysis;
- Reports on the partner's research results and an analysis of future trends;
- At least 6 seminar courses in the field of topological quantum computing, and a workshop.

6. Acceptance Criteria
Tutorial courses, survey reports, and conference papers are to be reviewed and accepted by the assigned acceptance team.

7. Phased Project Plan
Phase 1 (~6 months): Consolidate the theory of topological quantum computing and give 6 seminar courses. Survey the state of the art of proposals, including physical materials, experimental methods, etc.
Phase 2 (~4 months): Report the partner's research results in the field of topological quantum computing.
Phase 3 (~2 months): Hold a workshop in the field of topological quantum computing. Survey the state of the art of technical development and future trends. Analyze the experimental feasibility of theoretical and technical proposals.

Location Technology Research for Underground Parking

1 Theme:
2 Subject: Indoor location system

List of Abbreviations
RSSI - Received Signal Strength Indication
UWB - Ultra-wideband
RFID - Radio Frequency Identification
SLAM - Simultaneous Localization and Mapping

3 Background
Location technology is an important research topic for autonomous driving. An accurate location algorithm provides a robust and reliable position for path planning and control algorithms. Indoor location technology, combined with high-definition maps of parks or underground parking lots, solves the problems of "Where am I?" and "Where do I go?". At present, indoor location technology falls into two categories. One is independent of infrastructure and relies on the vehicle's own sensors, such as laser SLAM, visual SLAM, and inertial odometry; the other relies on infrastructure and uses RSSI, fingerprinting, and other information for location, e.g. RFID, ultra-wideband (UWB), Wi-Fi, and Bluetooth. The former is less robust to dynamic environments and is affected by conditions such as lighting, and for an underground parking lot with few distinctive features it is difficult to build the feature maps.

The latter depends on the deployment of infrastructure; its location accuracy is higher and its robustness stronger, but it is difficult to deploy the infrastructure widely because of the high cost. On the one hand, driverless cars are equipped with sensors such as cameras, ultrasonic sensors, millimeter-wave radars, and lidar, so it is easy to obtain rich environmental information for indoor location. On the other hand, infrastructure costs, deployment practices, and algorithms have greatly improved indoor location accuracy and robustness. Therefore, it is time to start applied research on indoor location technology for autonomous driving.

4 Scope
Vehicle-based location scheme
Investigate the advantages and disadvantages of various indoor location solutions based on in-vehicle sensors, such as lidar-based, visual, and ultrasonic location; investigate location schemes based on V2I, such as UWB, RFID, and other solutions; evaluate the advantages and disadvantages of the above solutions; compare multiple location schemes and determine the project's location scheme accordingly.
Indoor location implementation
Based on the chosen location scheme, and provided that the vehicle sensors and infrastructure are installed, carry out algorithm development to achieve basic vehicle location accuracy.

5 Expected Outcome and Deliverables
- Investigation report on vehicle-based indoor location;
- Indoor location algorithm design instructions;
- Indoor location demonstration with source code;
- 1~2 inventions.
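
To make the infrastructure-based branch of the scope concrete (an illustration added here, not part of the original call): with UWB or similar ranging anchors at known positions, the vehicle position can be estimated by nonlinear least squares over the range residuals. The anchor layout, noise level, and 2-D simplification below are assumptions.

```python
# Illustration of the infrastructure-based branch: estimate a 2-D vehicle
# position from UWB-style range measurements to anchors at known positions,
# via nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares

anchors = np.array([[0.0, 0.0], [30.0, 0.0], [30.0, 20.0], [0.0, 20.0]])  # anchor positions (m)
true_pos = np.array([12.0, 7.5])                                           # ground truth (m)

rng = np.random.default_rng(1)
ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 0.10, len(anchors))  # ~10 cm noise

def residuals(p):
    # difference between predicted and measured anchor distances
    return np.linalg.norm(anchors - p, axis=1) - ranges

est = least_squares(residuals, x0=np.mean(anchors, axis=0)).x
print("estimate:", np.round(est, 3), " error (m):", round(float(np.linalg.norm(est - true_pos)), 3))
```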

6 Acceptance Criteria
The accuracy and robustness of the indoor location algorithm reach centimeter or decimeter level, sufficient for path planning.

7 Phased Project Plan
Phase 1 (~3 months): Complete the investigation of relevant location solutions and finish the basic design instructions manual.
Phase 2 (~6 months): Simulate the basic algorithm and produce a basic simulation description; finish the basic hardware deployment solution with experimental reports.
Phase 3 (~3 months): Implement the location algorithm on Huawei's vehicle in underground parking lot scenarios.

Methodology and Framework for Automated Dependable Software Design and Implementation in Edge Computing and Embedded Systems

1 Theme:
2 Subject: Methodology and Framework

List of Abbreviations
MEC - Multi-access Edge Computing

3 Background
Edge computing and modern embedded computing systems are becoming more and more relevant as IoT and 5G/MEC connect the physical and digital worlds. Emerging applications in IoT and 5G/MEC are becoming more complex and expanding to a much larger scale. In these applications, unpredictable events may occur in real time and must also be reacted to in real time. In addition, the interactions between these systems/applications and the physical world are more intricate. These changes make the correctness of such applications much more difficult to assure than before, since their complexity grows exponentially with the scale of the system, and traditional verification/validation methods and programming paradigms tend to produce high development cost. The foremost common question for these applications is therefore how to assure their correctness to meet criticality requirements of certain levels at minimum cost. We believe that automating the process of building a trustworthy system is a promising direction.

4 Scope
We are seeking proposals for a methodology and corresponding software framework for this automated building of trustworthy systems in edge computing and next-generation embedded computing systems.

The methodology and framework should address the following issues:
1. Flexible customization and composable services in edge/embedded computing
2. Automated end-to-end application lifecycle management in edge/embedded computing
3. Automated software optimization in edge/embedded computing
4. Dependability and safety in edge/embedded computing

5 Expected Outcome and Deliverables
We expect the following outcome and deliverables:
1. Methodology, theoretical model and framework: a methodology, theoretical model, and corresponding software framework that can be used in IoT, 5G/MEC, video surveillance, and other edge computing scenarios in which the openness of the embedded system is one of the major concerns. The framework should support Huawei hardware/software platforms;
2. Embedded domain knowledge base/expertise system:
   a) Framework/toolsets to build an embedded domain knowledge base/expertise system
   b) Auto-optimization technologies for embedded software
3. Transparent application deployment: technologies to enable transparent application deployment regardless of whether it is deployed at the edge or in the cloud.

6 Acceptance Criteria
1. Methodology, theoretical model and framework:
   a) A well-defined methodology and theoretical models that pass Huawei review
   b) A framework that can run a demo in an environment agreed by both parties and pass Huawei review

2. Embedded domain knowledge base/expertise system:
   a) A framework/toolsets that can run a demo in an environment agreed by both parties and pass Huawei review
   b) Auto-optimization technologies for embedded software that show their advantages in a demo run in an environment agreed by both parties and pass Huawei review
3. Transparent application deployment: should show its advantages in a demo run in an environment agreed by both parties and pass Huawei review.

7 Phased Project Plan
Phase 1 - Design and review of the proposal. Time: 2~4 months. Output: investigation report and design documents of the proposal. Standard to be achieved: the proposal is accepted and approved by Huawei.
Phase 2 - Implementation and delivery. Time: 6 months or more. Main task content: document preparation, software implementation, and delivery. Output: documents and implementations that meet the above acceptance criteria.

Performance Optimization Method for a Quantum Circuit Simulator

1 Theme:
2 Subject: Quantum Computing

List of Abbreviations
N/A

3 Background
Until mature and easily accessible quantum computing chips exist, the quantum circuit simulator is very important for quantum computer research: for the development and verification of quantum software, algorithms, hardware, and architectures. It can also be useful for the classical simulation of material, chemical, and drug quantum systems, and it is the foundation of quantum algorithm and quantum software research. The main challenge for quantum circuit simulation is that memory resources and communication overhead grow exponentially with the number of qubits; this is also the original motivation for the quantum computer. This project is to develop optimization techniques for a quantum simulator in the case of maximum entanglement, supporting the simulation of 50-qubit circuit systems and algorithms on a high-performance server cluster. A single-node illustration of the underlying state-vector and gate-application structure appears after the project plan below.

4 Scope
Distributed shared memory technology
Distributed shared memory technology and algorithms for large-scale matrix-vector multiplication in quantum circuit simulation. The memory overhead should be reduced by more than 5 times while the communication overhead is not significantly larger, or the simulation should be more than 5 times faster.

Distributed communication technology
Distributed communication optimization technology for large-scale matrix-vector multiplication in quantum circuit simulation: communication efficiency should be increased by more than 3 times on the premise that the memory cost is not significantly increased.
Physical qubit simulation
Support simulation of physical qubits, noise, and error correction protocols.
Other
Optimization methods such as qubit scheduling and mapping.

5 Expected Outcome and Deliverables
- Investigation and performance analysis report on simulators.
- Documentation and source code for distributed shared memory and communication optimization technology for quantum circuit simulation.
- Design document and code supporting noise and error correction protocol simulation.
- 1-2 papers.
- 1-2 patents.

6 Acceptance Criteria
The report and design document pass Huawei review. The paper is submitted for conference or journal review. The patent idea passes Huawei review. An improvement of an existing method should be better than the known one; a new method should have some innovative features. Methods should be verified on a simulator or theoretically proved.

7 Phased Project Plan
Phase 1 (~T+3 months): A survey of quantum circuit simulators and a bottleneck analysis.

Phase 2 (~T+6 months): Design document on distributed shared memory and communication algorithms supporting 50 qubits; design document on the physical-qubit simulator (noise and error correction protocol simulation). One patent passes review.
Phase 3 (~T+9 months): Implementation of the distributed simulator and the first version of the physical-qubit simulator.
Phase 4 (~T+12 months): Optimize the algorithms and deliver the final version of the code; submit 1 paper (CCF-B or a mainstream quantum-related conference) and one more patent.
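
For orientation (an illustration added here, not part of the original call): an n-qubit state vector holds 2^n complex amplitudes, and applying a single-qubit gate is a batched 2x2 matrix-vector product over strided pairs of amplitudes, which is the operation the distributed shared memory and communication work above must scale. At 50 qubits the state alone needs 2^50 amplitudes, roughly 18 PB at 16 bytes each, hence the need for a server cluster. The single-node NumPy sketch below shows the structure.

```python
# Single-node sketch: an n-qubit pure state is a vector of 2**n complex
# amplitudes, and a single-qubit gate on qubit q is a 2x2 matrix applied to
# pairs of amplitudes whose indices differ only in bit q.
import numpy as np

def apply_single_qubit_gate(state, gate, q, n):
    """Apply 2x2 `gate` to qubit q of an n-qubit state vector (qubit 0 = least significant bit)."""
    psi = state.reshape(2 ** (n - q - 1), 2, 2 ** q)   # middle axis = qubit q
    return np.einsum('ab,ibj->iaj', gate, psi).reshape(-1)

n = 20                                                  # 2**20 amplitudes = 16 MiB at complex128
state = np.zeros(2 ** n, dtype=np.complex128)
state[0] = 1.0                                          # |00...0>

H = np.array([[1, 1], [1, -1]], dtype=np.complex128) / np.sqrt(2)
for q in range(n):                                      # Hadamard on every qubit
    state = apply_single_qubit_gate(state, H, q, n)

print(np.allclose(np.abs(state) ** 2, 1 / 2 ** n))      # uniform superposition -> True
```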

Quantum Error Correction Method Research

1 Theme:
2 Subject: Quantum Computing

List of Abbreviations
N/A

3 Background & Scope
Recently, big progress has been made in quantum computing chips, and chips with 100+ qubits are on the way. However, the quantum volume of such quantum computers is heavily restricted by noise and gate errors. Quantum error correction is key to a universal fault-tolerant quantum computer. It has been an important research topic since the beginning of quantum information processing research, and many new codes, techniques, and methodologies have been developed. The goal of this project is to support innovative research on quantum error correction, taking into account hardware progress, the noisy intermediate-scale quantum computers of the near future, and fault-tolerant large-scale quantum computers and algorithms. A toy illustration of the basic error-correction workflow appears after the project plan below.

4 Expected Outcome and Deliverables
- Survey report and trend analysis on quantum error correction research.
- Design document and source code for a new quantum error correction code.
- 1-2 papers.
- 1-2 patents.

5 Acceptance Criteria
The report and design document pass Huawei review. The paper is submitted for conference or journal review. The patent idea passes Huawei review.

An improvement of an existing method should be better than the known method; a new method should have some innovative features. The method should be verified on a simulator or a real quantum computer platform, or theoretically proved.

6 Phased Project Plan
Phase 1 (~T+3 months): Survey and analysis of quantum error correction research and development.
Phase 2 (~T+6 months): One new quantum error correction method and one patent idea.
Phase 3 (~T+9 months): Quantum error correction source code and test report; one paper draft.
Phase 4 (~T+12 months): Final design and test report, and source code. One more patent idea.
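
For orientation, the smallest example of the error-correction idea is the 3-qubit bit-flip repetition code; the NumPy sketch below (an illustration added here, not part of the original call) encodes one logical bit, applies random single-qubit X errors, reads the two parity-check syndromes, and corrects. Research-grade QEC targets codes such as surface codes with full quantum noise models, which this toy classical simulation does not capture.

```python
# Toy classical simulation of the 3-qubit bit-flip repetition code:
# logical 0 -> 000, logical 1 -> 111; parity checks on (q0,q1) and (q1,q2)
# locate any single bit-flip error, which is then corrected.
import numpy as np

rng = np.random.default_rng(42)

def encode(logical_bit):
    return np.array([logical_bit] * 3, dtype=int)           # repetition encoding

def apply_random_bit_flips(codeword, p=0.2):
    errors = rng.random(3) < p                               # independent X errors
    return codeword ^ errors.astype(int)

def syndrome(codeword):
    return (codeword[0] ^ codeword[1], codeword[1] ^ codeword[2])  # parity checks

CORRECTION = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}       # syndrome -> flipped qubit

def decode(codeword):
    flip = CORRECTION[syndrome(codeword)]
    corrected = codeword.copy()
    if flip is not None:
        corrected[flip] ^= 1
    return int(np.round(corrected.mean()))                   # majority vote of the corrected word

trials, failures = 10000, 0
for _ in range(trials):
    noisy = apply_random_bit_flips(encode(1), p=0.2)
    failures += (decode(noisy) != 1)
print(f"logical error rate: {failures / trials:.3f} (vs physical error rate 0.2)")
```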

Quantum Logic Circuit Optimization Techniques

1 Theme:
2 Subject: Quantum Computing

List of Abbreviations
N/A

3 Background
The quantum volume of current and near-future quantum computer hardware is severely limited by both the number of qubits and noise; how to make the fullest possible use of quantum hardware resources, so as to achieve efficient and large-scale quantum algorithms, is an important and challenging research topic. A quantum algorithm or program is compiled into a quantum logic circuit (an assembly instruction sequence), and the logic circuit is then implemented on a physical chip. For the same quantum algorithm there may be several different but equivalent logic circuits, so intermediate-representation algorithms are important for reducing the number of gate operations in the logic circuit.

4 Scope
Develop efficient quantum circuit optimization methods and improve the related compiler optimizer. The goal is to reduce the number of quantum logic gates by 50% on average. We recommend implementing the optimization method in the open-source quantum programming framework ProjectQ. A baseline peephole-optimization sketch appears after the project plan below.

5 Expected Outcome and Deliverables
- Technical reports on optimizing quantum logic circuits.
- The algorithm design document and the implementation code (based on ProjectQ); compared with the existing quantum compiler, the performance is improved by more than 50%.

- 1-2 papers.
- 1-2 patents.

6 Acceptance Criteria
The report and design document pass Huawei review. The methods and related source code pass testing and meet the performance and functional requirements. The paper is submitted for conference or journal review. The patent idea passes Huawei review.

7 Phased Project Plan
Phase 1 (~T+3 months): Investigate techniques for the automated optimization of quantum logic circuits.
Phase 2 (~T+6 months): Implement an existing method for optimizing quantum logic circuits; propose a new idea for quantum logic circuit optimization.
Phase 3 (~T+9 months): Code and test report for optimizing quantum logic circuits; submit a paper.
Phase 4 (~T+12 months): The design document and the final version of the code; submit one paper (CCF-B or a mainstream quantum-related conference) and one patent.
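
As a baseline for the kind of gate-count reduction targeted here, the sketch below (added for illustration; ProjectQ itself ships a LocalOptimizer compiler engine that performs similar peephole passes) cancels adjacent self-inverse gates and merges consecutive rotations on the same qubit in a simple gate-list representation. A single pass is shown; a real optimizer would iterate and also exploit commutation rules.

```python
# Peephole sketch: cancel adjacent self-inverse gates and merge adjacent Rz
# rotations acting on the same qubit(s). Gates are (name, qubits, angle).
import math

SELF_INVERSE = {"H", "X", "Y", "Z", "CNOT"}

def optimize(circuit):
    out = []
    for gate in circuit:
        name, qubits, angle = gate
        if out:
            pname, pqubits, pangle = out[-1]
            # Case 1: two identical self-inverse gates in a row cancel out.
            if name in SELF_INVERSE and (name, qubits) == (pname, pqubits):
                out.pop()
                continue
            # Case 2: consecutive Rz on the same qubit merge into one rotation.
            if name == "Rz" and pname == "Rz" and qubits == pqubits:
                merged = (pangle + angle) % (2 * math.pi)
                out.pop()
                if not math.isclose(merged, 0.0, abs_tol=1e-12):
                    out.append(("Rz", qubits, merged))
                continue
        out.append(gate)
    return out

circuit = [("H", (0,), None), ("H", (0,), None),            # cancels
           ("Rz", (1,), 0.3), ("Rz", (1,), 0.4),            # merges to Rz(0.7)
           ("CNOT", (0, 1), None), ("CNOT", (0, 1), None),  # cancels
           ("X", (2,), None)]
print(optimize(circuit))   # -> Rz(~0.7) on qubit 1 and X on qubit 2
```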

Quantum Machine Learning Algorithms

1 Theme:
2 Subject: Quantum Computing

List of Abbreviations
N/A

3 Background & Scope
Quantum computing research is progressing rapidly, and many important advances in quantum hardware have been made in recent years. Machine learning is one of the potential killer applications for future quantum computers, and machine learning quantum algorithms are needed for such applications. Some progress has been made in the development of quantum machine learning algorithms, but we are still far from practically useful algorithms that can run on a real quantum computer and produce practical results. The main goal of this project is to make innovative progress on quantum machine learning algorithms.

4 Expected Outcome and Deliverables
- 2 innovative quantum algorithms for machine learning applications, with source code implemented on an open-source quantum simulator or a public quantum cloud, plus the algorithm design and description document;
- Survey report on quantum machine learning algorithms;
- 1~2 papers;
- 1~2 patents.

5 Acceptance Criteria
The report and design document pass Huawei review;
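
To illustrate one widely studied direction in this space (added here for orientation, not part of the original call): a small variational quantum classifier can be simulated classically with a state vector. The sketch below encodes one feature into a qubit rotation, applies a trainable rotation, and reads a Z-expectation value as the classifier output; the dataset, the crude parameter sweep standing in for training, and all parameter choices are illustrative assumptions.

```python
# Minimal state-vector sketch of a variational quantum classifier on one qubit:
# encode feature x as Ry(x), apply trainable Ry(theta), output <Z> in [-1, 1].
import numpy as np

def ry(angle):
    c, s = np.cos(angle / 2), np.sin(angle / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

Z = np.array([[1, 0], [0, -1]], dtype=complex)

def predict(x, theta):
    psi = np.array([1, 0], dtype=complex)          # |0>
    psi = ry(theta) @ (ry(x) @ psi)                # data encoding, then trainable layer
    return float(np.real(psi.conj() @ (Z @ psi)))  # expectation value <psi|Z|psi>

def loss(theta, xs, labels):                       # labels in {-1, +1}
    return np.mean([(predict(x, theta) - y) ** 2 for x, y in zip(xs, labels)])

# Tiny illustrative dataset: small angles -> +1, angles near pi -> -1.
xs = np.array([0.1, 0.2, 2.9, 3.0])
labels = np.array([1, 1, -1, -1])

# Crude parameter sweep standing in for gradient-based training.
thetas = np.linspace(-np.pi, np.pi, 201)
best = thetas[np.argmin([loss(t, xs, labels) for t in thetas])]
print("best theta:", round(best, 3),
      " predictions:", [round(predict(x, best), 2) for x in xs])
```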


More information

F5 Reference Architecture for Cisco ACI

F5 Reference Architecture for Cisco ACI F5 Reference Architecture for Cisco ACI Today s businesses face complex challenges to stay efficient and competitive. Together, F5 and Cisco enable organizations to dramatically reduce time to value on

More information

NVIDIA DLI HANDS-ON TRAINING COURSE CATALOG

NVIDIA DLI HANDS-ON TRAINING COURSE CATALOG NVIDIA DLI HANDS-ON TRAINING COURSE CATALOG Valid Through July 31, 2018 INTRODUCTION The NVIDIA Deep Learning Institute (DLI) trains developers, data scientists, and researchers on how to use artificial

More information

DEEP LEARNING ACCELERATOR UNIT WITH HIGH EFFICIENCY ON FPGA

DEEP LEARNING ACCELERATOR UNIT WITH HIGH EFFICIENCY ON FPGA DEEP LEARNING ACCELERATOR UNIT WITH HIGH EFFICIENCY ON FPGA J.Jayalakshmi 1, S.Ali Asgar 2, V.Thrimurthulu 3 1 M.tech Student, Department of ECE, Chadalawada Ramanamma Engineering College, Tirupati Email

More information

Huawei OceanStor ReplicationDirector Software Technical White Paper HUAWEI TECHNOLOGIES CO., LTD. Issue 01. Date

Huawei OceanStor ReplicationDirector Software Technical White Paper HUAWEI TECHNOLOGIES CO., LTD. Issue 01. Date Huawei OceanStor Software Issue 01 Date 2015-01-17 HUAWEI TECHNOLOGIES CO., LTD. 2015. All rights reserved. No part of this document may be reproduced or transmitted in any form or by any means without

More information

How to Build Optimized ML Applications with Arm Software

How to Build Optimized ML Applications with Arm Software How to Build Optimized ML Applications with Arm Software Arm Technical Symposia 2018 ML Group Overview Today we will talk about applied machine learning (ML) on Arm. My aim for today is to show you just

More information

ASYNCHRONOUS SHADERS WHITE PAPER 0

ASYNCHRONOUS SHADERS WHITE PAPER 0 ASYNCHRONOUS SHADERS WHITE PAPER 0 INTRODUCTION GPU technology is constantly evolving to deliver more performance with lower cost and lower power consumption. Transistor scaling and Moore s Law have helped

More information

The OpenVX Computer Vision and Neural Network Inference

The OpenVX Computer Vision and Neural Network Inference The OpenVX Computer and Neural Network Inference Standard for Portable, Efficient Code Radhakrishna Giduthuri Editor, OpenVX Khronos Group radha.giduthuri@amd.com @RadhaGiduthuri Copyright 2018 Khronos

More information

The HP 3PAR Get Virtual Guarantee Program

The HP 3PAR Get Virtual Guarantee Program Get Virtual Guarantee Internal White Paper The HP 3PAR Get Virtual Guarantee Program Help your customers increase server virtualization efficiency with HP 3PAR Storage HP Restricted. For HP and Channel

More information

Huawei HiAI DDK User Manual

Huawei HiAI DDK User Manual Huawei HiAI DDK User Manual Issue: V100.150.10 Date: 2018-03-09 Huawei Technologies Co., Ltd. Copyright Huawei Technologies Co., Ltd. 2018. All rights reserved. No part of this document may be reproduced

More information

1 Publishable Summary

1 Publishable Summary 1 Publishable Summary 1.1 VELOX Motivation and Goals The current trend in designing processors with multiple cores, where cores operate in parallel and each of them supports multiple threads, makes the

More information

Research on Applications of Data Mining in Electronic Commerce. Xiuping YANG 1, a

Research on Applications of Data Mining in Electronic Commerce. Xiuping YANG 1, a International Conference on Education Technology, Management and Humanities Science (ETMHS 2015) Research on Applications of Data Mining in Electronic Commerce Xiuping YANG 1, a 1 Computer Science Department,

More information

Watchmaker precision for robotic placement of automobile body parts

Watchmaker precision for robotic placement of automobile body parts FlexPlace Watchmaker precision for robotic placement of automobile body parts Staff Report ABB s commitment to adding value for customers includes a constant quest for innovation and improvement new ideas,

More information

How to Build Optimized ML Applications with Arm Software

How to Build Optimized ML Applications with Arm Software How to Build Optimized ML Applications with Arm Software Arm Technical Symposia 2018 Arm K.K. Senior FAE Ryuji Tanaka Overview Today we will talk about applied machine learning (ML) on Arm. My aim for

More information

HOW TO BUILD A MODERN AI

HOW TO BUILD A MODERN AI HOW TO BUILD A MODERN AI FOR THE UNKNOWN IN MODERN DATA 1 2016 PURE STORAGE INC. 2 Official Languages Act (1969/1988) 3 Translation Bureau 4 5 DAWN OF 4 TH INDUSTRIAL REVOLUTION BIG DATA, AI DRIVING CHANGE

More information

Index. Springer Nature Switzerland AG 2019 B. Moons et al., Embedded Deep Learning,

Index. Springer Nature Switzerland AG 2019 B. Moons et al., Embedded Deep Learning, Index A Algorithmic noise tolerance (ANT), 93 94 Application specific instruction set processors (ASIPs), 115 116 Approximate computing application level, 95 circuits-levels, 93 94 DAS and DVAS, 107 110

More information

SDACCEL DEVELOPMENT ENVIRONMENT. The Xilinx SDAccel Development Environment. Bringing The Best Performance/Watt to the Data Center

SDACCEL DEVELOPMENT ENVIRONMENT. The Xilinx SDAccel Development Environment. Bringing The Best Performance/Watt to the Data Center SDAccel Environment The Xilinx SDAccel Development Environment Bringing The Best Performance/Watt to the Data Center Introduction Data center operators constantly seek more server performance. Currently

More information

IQ for DNA. Interactive Query for Dynamic Network Analytics. Haoyu Song. HUAWEI TECHNOLOGIES Co., Ltd.

IQ for DNA. Interactive Query for Dynamic Network Analytics. Haoyu Song.   HUAWEI TECHNOLOGIES Co., Ltd. IQ for DNA Interactive Query for Dynamic Network Analytics Haoyu Song www.huawei.com Motivation Service Provider s pain point Lack of real-time and full visibility of networks, so the network monitoring

More information

Facilitating IP Development for the OpenCAPI Memory Interface Kevin McIlvain, Memory Development Engineer IBM. Join the Conversation #OpenPOWERSummit

Facilitating IP Development for the OpenCAPI Memory Interface Kevin McIlvain, Memory Development Engineer IBM. Join the Conversation #OpenPOWERSummit Facilitating IP Development for the OpenCAPI Memory Interface Kevin McIlvain, Memory Development Engineer IBM Join the Conversation #OpenPOWERSummit Moral of the Story OpenPOWER is the best platform to

More information

OceanStor 9000 InfiniBand Technical White Paper. Issue V1.01 Date HUAWEI TECHNOLOGIES CO., LTD.

OceanStor 9000 InfiniBand Technical White Paper. Issue V1.01 Date HUAWEI TECHNOLOGIES CO., LTD. OceanStor 9000 Issue V1.01 Date 2014-03-29 HUAWEI TECHNOLOGIES CO., LTD. Copyright Huawei Technologies Co., Ltd. 2014. All rights reserved. No part of this document may be reproduced or transmitted in

More information

VMworld 2015 Track Names and Descriptions

VMworld 2015 Track Names and Descriptions Software- Defined Data Center Software- Defined Data Center General VMworld 2015 Track Names and Descriptions Pioneered by VMware and recognized as groundbreaking by the industry and analysts, the VMware

More information

Deep learning prevalence. first neuroscience department. Spiking Neuron Operant conditioning First 1 Billion transistor processor

Deep learning prevalence. first neuroscience department. Spiking Neuron Operant conditioning First 1 Billion transistor processor WELCOME TO Operant conditioning 1938 Spiking Neuron 1952 first neuroscience department 1964 Deep learning prevalence mid 2000s The Turing Machine 1936 Transistor 1947 First computer science department

More information

GPU ACCELERATED DATABASE MANAGEMENT SYSTEMS

GPU ACCELERATED DATABASE MANAGEMENT SYSTEMS CIS 601 - Graduate Seminar Presentation 1 GPU ACCELERATED DATABASE MANAGEMENT SYSTEMS PRESENTED BY HARINATH AMASA CSU ID: 2697292 What we will talk about.. Current problems GPU What are GPU Databases GPU

More information

Standardization Activities in ITU-T

Standardization Activities in ITU-T Standardization Activities in ITU-T Nozomu NISHINAGA and Suyong Eum Standardization activities for Future Networks in ITU-T have produced 19 Recommendations since it was initiated in 2009. The brief history

More information

Ten Reasons to Optimize a Processor

Ten Reasons to Optimize a Processor By Neil Robinson SoC designs today require application-specific logic that meets exacting design requirements, yet is flexible enough to adjust to evolving industry standards. Optimizing your processor

More information

Featured Articles AI Services and Platforms A Practical Approach to Increasing Business Sophistication

Featured Articles AI Services and Platforms A Practical Approach to Increasing Business Sophistication 118 Hitachi Review Vol. 65 (2016), No. 6 Featured Articles AI Services and Platforms A Practical Approach to Increasing Business Sophistication Yasuharu Namba, Dr. Eng. Jun Yoshida Kazuaki Tokunaga Takuya

More information

Value-driven Synthesis for Neural Network ASICs

Value-driven Synthesis for Neural Network ASICs Value-driven Synthesis for Neural Network ASICs Zhiyuan Yang University of Maryland, College Park zyyang@umd.edu ABSTRACT In order to enable low power and high performance evaluation of neural network

More information

ALCATEL-LUCENT ENTERPRISE DATA CENTER SWITCHING SOLUTION Automation for the next-generation data center

ALCATEL-LUCENT ENTERPRISE DATA CENTER SWITCHING SOLUTION Automation for the next-generation data center ALCATEL-LUCENT ENTERPRISE DATA CENTER SWITCHING SOLUTION Automation for the next-generation data center For more info contact Sol Distribution Ltd. A NEW NETWORK PARADIGM What do the following trends have

More information

NVIDIA GPU CLOUD DEEP LEARNING FRAMEWORKS

NVIDIA GPU CLOUD DEEP LEARNING FRAMEWORKS TECHNICAL OVERVIEW NVIDIA GPU CLOUD DEEP LEARNING FRAMEWORKS A Guide to the Optimized Framework Containers on NVIDIA GPU Cloud Introduction Artificial intelligence is helping to solve some of the most

More information

Assignment 5. Georgia Koloniari

Assignment 5. Georgia Koloniari Assignment 5 Georgia Koloniari 2. "Peer-to-Peer Computing" 1. What is the definition of a p2p system given by the authors in sec 1? Compare it with at least one of the definitions surveyed in the last

More information

Mark Sandstrom ThroughPuter, Inc.

Mark Sandstrom ThroughPuter, Inc. Hardware Implemented Scheduler, Placer, Inter-Task Communications and IO System Functions for Many Processors Dynamically Shared among Multiple Applications Mark Sandstrom ThroughPuter, Inc mark@throughputercom

More information

Digital system (SoC) design for lowcomplexity. Hyun Kim

Digital system (SoC) design for lowcomplexity. Hyun Kim Digital system (SoC) design for lowcomplexity multimedia processing Hyun Kim SoC Design for Multimedia Systems Goal : Reducing computational complexity & power consumption of state-ofthe-art technologies

More information

MIOVISION DEEP LEARNING TRAFFIC ANALYTICS SYSTEM FOR REAL-WORLD DEPLOYMENT. Kurtis McBride CEO, Miovision

MIOVISION DEEP LEARNING TRAFFIC ANALYTICS SYSTEM FOR REAL-WORLD DEPLOYMENT. Kurtis McBride CEO, Miovision MIOVISION DEEP LEARNING TRAFFIC ANALYTICS SYSTEM FOR REAL-WORLD DEPLOYMENT Kurtis McBride CEO, Miovision ABOUT MIOVISION COMPANY Founded in 2005 40% growth, year over year Offices in Kitchener, Canada

More information

Networking for a dynamic infrastructure: getting it right.

Networking for a dynamic infrastructure: getting it right. IBM Global Technology Services Networking for a dynamic infrastructure: getting it right. A guide for realizing the full potential of virtualization June 2009 Executive summary June 2009 Networking for

More information

EMBEDDED VISION AND 3D SENSORS: WHAT IT MEANS TO BE SMART

EMBEDDED VISION AND 3D SENSORS: WHAT IT MEANS TO BE SMART EMBEDDED VISION AND 3D SENSORS: WHAT IT MEANS TO BE SMART INTRODUCTION Adding embedded processing to simple sensors can make them smart but that is just the beginning of the story. Fixed Sensor Design

More information

New research on Key Technologies of unstructured data cloud storage

New research on Key Technologies of unstructured data cloud storage 2017 International Conference on Computing, Communications and Automation(I3CA 2017) New research on Key Technologies of unstructured data cloud storage Songqi Peng, Rengkui Liua, *, Futian Wang State

More information

PROBLEM FORMULATION AND RESEARCH METHODOLOGY

PROBLEM FORMULATION AND RESEARCH METHODOLOGY PROBLEM FORMULATION AND RESEARCH METHODOLOGY ON THE SOFT COMPUTING BASED APPROACHES FOR OBJECT DETECTION AND TRACKING IN VIDEOS CHAPTER 3 PROBLEM FORMULATION AND RESEARCH METHODOLOGY The foregoing chapter

More information

AllGoVision Achieves high Performance Optimization for its ANPR Solution with OpenVINO TM Toolkit

AllGoVision Achieves high Performance Optimization for its ANPR Solution with OpenVINO TM Toolkit AllGoVision Achieves high Performance Optimization for its ANPR Solution with OpenVINO TM Toolkit Version.9 Migrating to OpenVINO framework help its deep learning Automatic Number Plate Recognition (ANPR)

More information

Oracle Big Data Connectors

Oracle Big Data Connectors Oracle Big Data Connectors Oracle Big Data Connectors is a software suite that integrates processing in Apache Hadoop distributions with operations in Oracle Database. It enables the use of Hadoop to process

More information

Efficient, Scalable, and Provenance-Aware Management of Linked Data

Efficient, Scalable, and Provenance-Aware Management of Linked Data Efficient, Scalable, and Provenance-Aware Management of Linked Data Marcin Wylot 1 Motivation and objectives of the research The proliferation of heterogeneous Linked Data on the Web requires data management

More information

24th MONDAY. Overview 2018

24th MONDAY. Overview 2018 24th MONDAY Overview 2018 Imagination: your route to success At Imagination, we create and license market-leading processor solutions for graphics, vision & AI processing, and multi-standard communications.

More information

PRIME: A Novel Processing-in-memory Architecture for Neural Network Computation in ReRAM-based Main Memory

PRIME: A Novel Processing-in-memory Architecture for Neural Network Computation in ReRAM-based Main Memory Scalable and Energy-Efficient Architecture Lab (SEAL) PRIME: A Novel Processing-in-memory Architecture for Neural Network Computation in -based Main Memory Ping Chi *, Shuangchen Li *, Tao Zhang, Cong

More information

W3C CASE STUDY. Teamwork on Open Standards Development Speeds Industry Adoption

W3C CASE STUDY. Teamwork on Open Standards Development Speeds Industry Adoption January 2017 W3C CASE STUDY Teamwork on Open Standards Development Speeds Industry Adoption Like driving a long stretch of open road alone, standards development work can be a lonely endeavor. But with

More information

ENABLING MOBILE INTERFACE BRIDGING IN ADAS AND INFOTAINMENT APPLICATIONS

ENABLING MOBILE INTERFACE BRIDGING IN ADAS AND INFOTAINMENT APPLICATIONS ENABLING MOBILE INTERFACE BRIDGING IN ADAS AND INFOTAINMENT APPLICATIONS September 2016 Lattice Semiconductor 111 5 th Ave., Suite 700 Portland, Oregon 97204 USA Telephone: (503) 268-8000 www.latticesemi.com

More information

2018 Report The State of Securing Cloud Workloads

2018 Report The State of Securing Cloud Workloads 2018 Report The State of Securing Cloud Workloads 1 Welcome to our 2018 report on the state of securing cloud workloads A summary of the responses of close to 350 professionals whose primary areas of responsibility

More information

Neuromorphic Hardware. Adrita Arefin & Abdulaziz Alorifi

Neuromorphic Hardware. Adrita Arefin & Abdulaziz Alorifi Neuromorphic Hardware Adrita Arefin & Abdulaziz Alorifi Introduction Neuromorphic hardware uses the concept of VLSI systems consisting of electronic analog circuits to imitate neurobiological architecture

More information

What is 5g? Next generation of wireless networks Will provide higher speeds, greater capacity, and lower latency Will be capable of supporting billions of connected devices and things Distributes intelligence

More information

Deep learning in MATLAB From Concept to CUDA Code

Deep learning in MATLAB From Concept to CUDA Code Deep learning in MATLAB From Concept to CUDA Code Roy Fahn Applications Engineer Systematics royf@systematics.co.il 03-7660111 Ram Kokku Principal Engineer MathWorks ram.kokku@mathworks.com 2017 The MathWorks,

More information

Huawei Railway Communication Service Solution Guide

Huawei Railway Communication Service Solution Guide Huawei Railway Communication Service Solution Guide Huawei Technologies Co., Ltd. Keywords Railway transport, service solution, subsystem, design,, solution implementation Abstract Huawei Railway Communication

More information

Emerging Vision Technologies: Enabling a New Era of Intelligent Devices

Emerging Vision Technologies: Enabling a New Era of Intelligent Devices Emerging Vision Technologies: Enabling a New Era of Intelligent Devices Computer vision overview Computer vision is being integrated in our daily lives Acquiring, processing, and understanding visual data

More information

Portable GPU-Based Artificial Neural Networks For Data-Driven Modeling

Portable GPU-Based Artificial Neural Networks For Data-Driven Modeling City University of New York (CUNY) CUNY Academic Works International Conference on Hydroinformatics 8-1-2014 Portable GPU-Based Artificial Neural Networks For Data-Driven Modeling Zheng Yi Wu Follow this

More information

Deep Learning. Deep Learning. Practical Application Automatically Adding Sounds To Silent Movies

Deep Learning. Deep Learning. Practical Application Automatically Adding Sounds To Silent Movies http://blog.csdn.net/zouxy09/article/details/8775360 Automatic Colorization of Black and White Images Automatically Adding Sounds To Silent Movies Traditionally this was done by hand with human effort

More information

A NEW COMPUTING ERA. Shanker Trivedi Senior Vice President Enterprise Business at NVIDIA

A NEW COMPUTING ERA. Shanker Trivedi Senior Vice President Enterprise Business at NVIDIA A NEW COMPUTING ERA Shanker Trivedi Senior Vice President Enterprise Business at NVIDIA THE ERA OF AI AI CLOUD MOBILE PC 2 TWO FORCES DRIVING THE FUTURE OF COMPUTING 10 7 Transistors (thousands) 10 5 1.1X

More information

An Industrial Employee Development Application Protocol Using Wireless Sensor Networks

An Industrial Employee Development Application Protocol Using Wireless Sensor Networks RESEARCH ARTICLE An Industrial Employee Development Application Protocol Using Wireless Sensor Networks 1 N.Roja Ramani, 2 A.Stenila 1,2 Asst.professor, Dept.of.Computer Application, Annai Vailankanni

More information

Communication Patterns in Safety Critical Systems for ADAS & Autonomous Vehicles Thorsten Wilmer Tech AD Berlin, 5. March 2018

Communication Patterns in Safety Critical Systems for ADAS & Autonomous Vehicles Thorsten Wilmer Tech AD Berlin, 5. March 2018 Communication Patterns in Safety Critical Systems for ADAS & Autonomous Vehicles Thorsten Wilmer Tech AD Berlin, 5. March 2018 Agenda Motivation Introduction of Safety Components Introduction to ARMv8

More information

HETEROGENEOUS COMPUTE INFRASTRUCTURE FOR SINGAPORE

HETEROGENEOUS COMPUTE INFRASTRUCTURE FOR SINGAPORE HETEROGENEOUS COMPUTE INFRASTRUCTURE FOR SINGAPORE PHILIP HEAH ASSISTANT CHIEF EXECUTIVE TECHNOLOGY & INFRASTRUCTURE GROUP LAUNCH OF SERVICES AND DIGITAL ECONOMY (SDE) TECHNOLOGY ROADMAP (NOV 2018) Source

More information

Dynamic Routing Between Capsules

Dynamic Routing Between Capsules Report Explainable Machine Learning Dynamic Routing Between Capsules Author: Michael Dorkenwald Supervisor: Dr. Ullrich Köthe 28. Juni 2018 Inhaltsverzeichnis 1 Introduction 2 2 Motivation 2 3 CapusleNet

More information

A Data Classification Algorithm of Internet of Things Based on Neural Network

A Data Classification Algorithm of Internet of Things Based on Neural Network A Data Classification Algorithm of Internet of Things Based on Neural Network https://doi.org/10.3991/ijoe.v13i09.7587 Zhenjun Li Hunan Radio and TV University, Hunan, China 278060389@qq.com Abstract To

More information

Powering Knowledge Discovery. Insights from big data with Linguamatics I2E

Powering Knowledge Discovery. Insights from big data with Linguamatics I2E Powering Knowledge Discovery Insights from big data with Linguamatics I2E Gain actionable insights from unstructured data The world now generates an overwhelming amount of data, most of it written in natural

More information

Making Sense of Artificial Intelligence: A Practical Guide

Making Sense of Artificial Intelligence: A Practical Guide Making Sense of Artificial Intelligence: A Practical Guide JEDEC Mobile & IOT Forum Copyright 2018 Young Paik, Samsung Senior Director Product Planning Disclaimer This presentation and/or accompanying

More information

Making Mobile 5G a Commercial Reality. Peter Carson Senior Director Product Marketing Qualcomm Technologies, Inc.

Making Mobile 5G a Commercial Reality. Peter Carson Senior Director Product Marketing Qualcomm Technologies, Inc. Making Mobile 5G a Commercial Reality Peter Carson Senior Director Product Marketing Qualcomm Technologies, Inc. Insatiable global data demand First phase of 5G NR will focus on enhanced MBB Enhanced mobile

More information

Neural Network based Energy-Efficient Fault Tolerant Architect

Neural Network based Energy-Efficient Fault Tolerant Architect Neural Network based Energy-Efficient Fault Tolerant Architectures and Accelerators University of Rochester February 7, 2013 References Flexible Error Protection for Energy Efficient Reliable Architectures

More information

A Secure and Dynamic Multi-keyword Ranked Search Scheme over Encrypted Cloud Data

A Secure and Dynamic Multi-keyword Ranked Search Scheme over Encrypted Cloud Data An Efficient Privacy-Preserving Ranked Keyword Search Method Cloud data owners prefer to outsource documents in an encrypted form for the purpose of privacy preserving. Therefore it is essential to develop

More information

Virtualization and Softwarization Technologies for End-to-end Networking

Virtualization and Softwarization Technologies for End-to-end Networking ization and Softwarization Technologies for End-to-end Networking Naoki Oguchi Toru Katagiri Kazuki Matsui Xi Wang Motoyoshi Sekiya The emergence of 5th generation mobile networks (5G) and Internet of

More information

CONCENTRATIONS: HIGH-PERFORMANCE COMPUTING & BIOINFORMATICS CYBER-SECURITY & NETWORKING

CONCENTRATIONS: HIGH-PERFORMANCE COMPUTING & BIOINFORMATICS CYBER-SECURITY & NETWORKING MAJOR: DEGREE: COMPUTER SCIENCE MASTER OF SCIENCE (M.S.) CONCENTRATIONS: HIGH-PERFORMANCE COMPUTING & BIOINFORMATICS CYBER-SECURITY & NETWORKING The Department of Computer Science offers a Master of Science

More information

Financial Analytics Acceleration

Financial Analytics Acceleration Financial Analytics Acceleration Presented By Name GEORGI GAYDADJIEV Title Director of Maxeler IoT-Labs Date Dec 10, 2018 FPGA technology is getting traction among Datacenter providers and is expected

More information

Back propagation Algorithm:

Back propagation Algorithm: Network Neural: A neural network is a class of computing system. They are created from very simple processing nodes formed into a network. They are inspired by the way that biological systems such as the

More information

Comprehensive Arm Solutions for Innovative Machine Learning (ML) and Computer Vision (CV) Applications

Comprehensive Arm Solutions for Innovative Machine Learning (ML) and Computer Vision (CV) Applications Comprehensive Arm Solutions for Innovative Machine Learning (ML) and Computer Vision (CV) Applications Helena Zheng ML Group, Arm Arm Technical Symposia 2017, Taipei Machine Learning is a Subset of Artificial

More information

The Establishment of Large Data Mining Platform Based on Cloud Computing. Wei CAI

The Establishment of Large Data Mining Platform Based on Cloud Computing. Wei CAI 2017 International Conference on Electronic, Control, Automation and Mechanical Engineering (ECAME 2017) ISBN: 978-1-60595-523-0 The Establishment of Large Data Mining Platform Based on Cloud Computing

More information

Deep Learning Basic Lecture - Complex Systems & Artificial Intelligence 2017/18 (VO) Asan Agibetov, PhD.

Deep Learning Basic Lecture - Complex Systems & Artificial Intelligence 2017/18 (VO) Asan Agibetov, PhD. Deep Learning 861.061 Basic Lecture - Complex Systems & Artificial Intelligence 2017/18 (VO) Asan Agibetov, PhD asan.agibetov@meduniwien.ac.at Medical University of Vienna Center for Medical Statistics,

More information

Data safety for digital business. Veritas Backup Exec WHITE PAPER. One solution for hybrid, physical, and virtual environments.

Data safety for digital business. Veritas Backup Exec WHITE PAPER. One solution for hybrid, physical, and virtual environments. WHITE PAPER Data safety for digital business. One solution for hybrid, physical, and virtual environments. It s common knowledge that the cloud plays a critical role in helping organizations accomplish

More information

NEXT-GENERATION DATACENTER MANAGEMENT

NEXT-GENERATION DATACENTER MANAGEMENT NEXT-GENERATION DATACENTER MANAGEMENT From DCIM to DCSO Sometimes described as the operating or ERP system for the datacenter, datacenter infrastructure management (DCIM) is a technology that helps operators

More information