Development of System for Simultaneously Present Multiple Videos That Enables Search by Absolute Time

Journal of Electrical Engineering 6 (2018) 33-39
doi: 10.17265/2328-2223/2018.01.005

Development of System for Simultaneously Present Multiple Videos

Kazuhiro OTSUKI 1 and Yoshihiro FUJITA 2
1. NHK Science & Technology Research Laboratories, Tokyo 157-8510, Japan
2. Department of Electrical and Electronic Engineering and Computer Science, Ehime University, Ehime 790-8577, Japan
Corresponding author: Yoshihiro FUJITA, professor; research fields: media technology and applications.

Abstract: We propose a system that simultaneously presents numerous pieces of video content to which absolute-time metadata are attached, in order to make effective use of the ever-increasing amount of video content. The stored format used by the system is based on MMT (MPEG (moving picture experts group) media transport), a standardized method; it makes searching by absolute time possible and is easy to access over HTTP. We implemented software that can simultaneously present multiple video files in this format. The constructed system was able to provide the service within a feasible processing time.

Key words: MMT, search by absolute time, simultaneous presentation of multiple videos.

1. Introduction

Practical satellite broadcasting of 8K SHV (Super Hi-Vision), which provides super-high-definition video and multi-channel audio, will start in December 2018. MMT (MPEG media transport) [1] was adopted as the media transport scheme for 8K SHV [2]. MMT enables content to be distributed over multiple transmission paths, and by using a UTC-based timestamp as the reference time, content can be transmitted synchronously. We expect to achieve full-scale broadcasting services enhanced with broadband networks, including functions such as viewing multiple pieces of content at the same time, selecting from multiple languages, and viewing content across multiple devices (Fig. 1), as well as video services with a high sense of presence.

Fig. 1  Example of integrated broadcast and broadband services (multi-device, multi-view, and multi-language services over broadcast and broadband).

Meanwhile, delivery services for user-generated video content, typified by YouTube, are becoming widespread on broadband networks. In addition, SVOD (subscription video on demand) services have been introduced by content-delivery companies such as Netflix and Hulu, and video-sharing services for the general public, such as Meerkat and Periscope, have appeared. According to the Cisco Visual Networking Index forecast [3], annual global IP traffic will surpass the 2.3-zettabyte threshold by 2020, when 82 percent of all consumer Internet traffic will be video. Video content that can be accessed from multiple locations and angles will continue to increase.

In this paper, we propose a novel video presentation system for broadcast content and user-generated content. The rest of this paper is organized as follows. Details on MMT technology are given in Section 2. The assumed total system and its requirements are presented in Section 3. The formats stored in the system, the simple format, and the software implementation are described in Sections 4 and 5. Evaluation results are shown in Section 6. Finally, Section 7 concludes the paper.

2. MMT

MMT is a standardized media transport method that does not depend on the transmission channel. Media can be displayed with the same timing by decoding self-decodable units called MPUs (media processing units) on the basis of a coordinated universal time (UTC)-based timestamp described in the signaling information. In other words, we can manage media to which tags of absolute time have been added. MPUs include frame units of video, encoding units of audio, and data files used in applications; video and audio can be decoded from these units by themselves. Smaller units called MFUs (media fragment units) can be extracted by dividing MPUs. MFUs are units such as NAL (network abstraction layer) units, access units, or single files. MPUs and MFUs are stored in an MMTP (MMT protocol) payload for transmission.

The metadata that constitute MPUs are not transmitted under the Association of Radio Industries and Businesses standard ARIB STD-B60 [4]. This is because the information required on the receiving side is described in the header of the MMTP payload, so there is no need to transmit it again. This encapsulation enables transmission with less overhead, more simply, and with lower delay.

Fig. 2 outlines the configuration of an MPU carrying a video/audio signal as it is stored in the MMTP payload. In this example, one MPU is divided into three MMTP payloads. As IP (Internet protocol) packets are generally sized not to exceed about 1,500 bytes, large MPUs are divided into multiple MMTP payloads.

Fig. 2  Outline of the configuration of MPUs (MPU metadata, movie fragment metadata, and GOP sample data composed of NAL units/access units, mapped into MMTP payloads).

The main feature of MMT is multi-protocol support: it supports not only multicast and unicast with UDP (user datagram protocol) but also HTTP (hypertext transfer protocol) over TCP (transmission control protocol) [5]. So far, the use of MMT as a distribution protocol in broadband networks has remained at the level of experimental trials [6]. How to encourage service providers and others to use MMT is an issue that needs to be resolved for MMT to spread in the future.
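To make the MPU-to-payload mapping of Fig. 2 concrete, the sketch below fragments an MPU into MMTP payloads under a 1,500-byte packet budget. The header-size budget and the fragmentation-indicator values are illustrative assumptions, not the exact ARIB STD-B60 header syntax.

```python
# Minimal sketch of MPU fragmentation into MMTP payloads.
# Sizes and indicator values are illustrative assumptions.

MTU = 1500             # typical IP packet budget (bytes)
HEADER_OVERHEAD = 60   # assumed IP/UDP/MMTP header budget (illustrative)
CHUNK = MTU - HEADER_OVERHEAD

def fragment_mpu(mpu: bytes):
    """Split one MPU into MMTP payload fragments.

    Each fragment carries a fragmentation indicator so the receiver can
    reassemble the MPU: 0 = complete, 1 = first, 2 = middle, 3 = last.
    """
    chunks = [mpu[i:i + CHUNK] for i in range(0, len(mpu), CHUNK)] or [b""]
    if len(chunks) == 1:
        return [(0, chunks[0])]                    # fits in one payload
    payloads = [(1, chunks[0])]                    # first fragment
    payloads += [(2, c) for c in chunks[1:-1]]     # middle fragments
    payloads.append((3, chunks[-1]))               # last fragment
    return payloads

# A 4,000-byte MPU yields three payloads, as in the Fig. 2 example.
fragments = fragment_mpu(bytes(4000))
print([(fi, len(data)) for fi, data in fragments])
```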

3. System and Requirements

We studied the practical use of such video content. Video content captures spatial information at a certain time and place in real time, so by gathering information from multiple videos we may discover information about places we have never been to. We assume systems holding innumerable pieces of video content, regardless of whether the senders are broadcasters, delivery companies, or general users. Within these systems, it should be possible to search for scenes (of certain places) at certain times and to synchronize and play back multiple scenes. Various use cases are conceivable, such as offering related videos for news content, updated reports on accidents and disasters, multi-angle views, and replays of sports programs. The functional requirements to achieve such use cases are listed below.

(1) Inclusion of multiple broadcasting systems: A broadcasting system should function as part of the whole system.
(2) Easy upload of video content: Video content should be easy to upload to the network regardless of the user or device.
(3) Search for a required scene: Video content should be handled in a unified form regardless of the sender, or in a form that can easily be interconverted. The video content should also carry metadata on its time and location.
(4) Easy playback of video content: Searching for and playing back video content by using its metadata should be easy via a network.

Fig. 3 outlines the total system we assumed, using a sports program as an example.

Fig. 3  Assumed total system (requirements 1-4 mapped onto live broadcasting, rebroadcasting, and viewing by designating absolute time).

4. Stored Formats

We analyzed and compared three existing formats as candidate stored formats and found that the one suitable for the proposed system was the simple format [7]. A brief description of each format and a comparison are given below.

4.1 PCAP

Fig. 4 outlines the PCAP (packet capture) format [8]. Each packet in this format is stored with a header containing its timestamp. This makes it possible to store a stream in real time so that it can easily be re-sent with the correct timing. When searching stored files with PTSs (presentation timestamps) as a key, however, it is necessary to find the signaling information conveyed in the MMTP packets and to read the PTSs in that signaling information from an MPT (MMT package table).

Fig. 4  Packet capture (PCAP) format (a 24-byte PCAP file header followed by repeated 16-byte PCAP packet headers and stored packets).

4.2 ISOBMFF (ISO Base Media File Format)

Fig. 5 outlines the ISOBMFF-based format. A file in this format has a box structure according to an established standard. Stored files can be searched with PTSs as a key by referring to the content of a specific box and accessing the required data in box units. However, a real-time stream cannot be stored directly in ISOBMFF: it must be converted into the box structure when stored, and the box structure must be converted back into a stream format when a stored file is re-sent.

Fig. 5  ISOBMFF (ISO base media file format)-based format (ftyp, mmpu, sidx, moov, and moof/mdat boxes carrying MPU metadata, fragment metadata, a media track, and an MMT hint track).

4.3 Simple Format

MPUs in the proposed format (Fig. 6), i.e., the simple format, are extracted from the packets of a real-time UDP transmission and stored in a location determined by the source address and the packet ID. Control information (the MPT), with its location descriptions changed accordingly, is stored at the same time. In addition, meta-information (the timestamp_list), i.e., pairs of UTC-based PTSs and the location of each MPU, is stored. As a result, the required MPU can be acquired immediately over HTTP by searching with a PTS as the key.

Fig. 6  Simple format (an MPT (PA) with package_id, plus a timestamp_list pairing each PTS with an MPU location).
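As a minimal sketch of how the timestamp_list enables retrieval by absolute time: the in-memory layout, server URLs, and function name below are hypothetical, chosen only to illustrate the PTS-to-location lookup.

```python
# Minimal sketch of the simple format's timestamp_list lookup.
# Data layout, URLs, and names are hypothetical illustrations.
from bisect import bisect_left
from datetime import datetime, timezone

# timestamp_list: UTC-based PTS -> location (URL) of the MPU on the HTTP server
timestamp_list = [
    (datetime(2018, 1, 15, 10, 0, 0, tzinfo=timezone.utc), "http://server/cam1/mpu_0001"),
    (datetime(2018, 1, 15, 10, 0, 1, tzinfo=timezone.utc), "http://server/cam1/mpu_0002"),
    (datetime(2018, 1, 15, 10, 0, 2, tzinfo=timezone.utc), "http://server/cam1/mpu_0003"),
]

def locate_mpu(pts: datetime) -> str:
    """Return the location of the MPU whose PTS is closest to the requested
    absolute time, so a client can fetch it directly over HTTP."""
    times = [t for t, _ in timestamp_list]
    i = bisect_left(times, pts)
    candidates = timestamp_list[max(0, i - 1):i + 1]
    return min(candidates, key=lambda e: abs((e[0] - pts).total_seconds()))[1]

print(locate_mpu(datetime(2018, 1, 15, 10, 0, 1, 400000, tzinfo=timezone.utc)))
# -> http://server/cam1/mpu_0002
```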

4.4 Comparison

Table 1 compares the three formats. The PCAP format is suitable for storing ARIB profiles and for UDP transmission, but not for scene search or HTTP access by designating absolute times. Conversely, the ISOBMFF-based format is not suitable for storing ARIB profiles or for UDP transmission, but it is suitable for scene search and HTTP access by designating absolute times. The simple format can store ARIB profiles and be transmitted over UDP, and it is also suitable for scene search and HTTP access by designating absolute times.

Table 1  Comparison of stored formats.
- Storage of ARIB profiles: PCAP, suitable (packets are stored as received); ISOBMFF-based, not suitable (metadata needs to be added); simple format, conditionally suitable (original metadata needs to be generated).
- UDP transmission: PCAP, suitable (data can be transmitted as they are); ISOBMFF-based, not suitable (inverse conversion is required); simple format, conditionally suitable (possible by MMTP packetization).
- Scene search by absolute time: PCAP, not suitable (IP packets must be analyzed); ISOBMFF-based, suitable (metadata is analyzed); simple format, suitable (the timestamp_list is analyzed).
- HTTP access: PCAP, difficult; ISOBMFF-based, possible; simple format, easy.

As can be seen from the comparison, the PCAP format is mainly suitable for broadcasting services centered on live delivery with low delay. The simple format, in contrast, is suitable for services that perform scene search by absolute time and HTTP access, as assumed in this paper.

5. Software Implementation

We implemented storage software and a scene search script based on the simple format. The software accumulates MMT real-time streams, and video files from smart devices, in this format. From an MMT stream, the video and audio MPUs are stored together with MPT files, the signaling information that describes the general location of the video and audio. In broadcasting, the packet ID of each medium is described as its general location, but the software rewrites this to the location of the MPUs placed on the HTTP server before storing it. Furthermore, a timestamp_list, which lists pairs of UTC-based PTSs and the location of each MPU, is generated. Similarly, for video files from smart devices, the video and audio MPUs and a timestamp_list are stored. In this case, the stored units are GOPs (groups of pictures) for video and ADTS (audio data transport stream) frames for audio. The PTS described in the timestamp_list is the recording start time plus an offset derived from the duration (recording time), the number of stored video GOPs, and the number of audio ADTS frames.

Fig. 7 outlines the functional configuration of the server side.

Fig. 7  Functional configuration for the server side (an MMT streamer delivers MMTP/UDP/IPv6 streams and a smart device uploads files by HTTP POST to the simple-format software, which stores the MPT/PA, timestamp_lists, video (HEVC, with AVC versions prepared by re-encoding), and audio on the HTTP server).
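As a worked sketch of the PTS assignment for uploaded files described above: assuming a hypothetical recording with one GOP per second of video and 48 kHz AAC audio (1,024 samples per ADTS frame), each stored unit's offset is derived from the recording duration and the unit counts. All numbers, URLs, and names here are illustrative, not taken from the implementation.

```python
# Sketch of PTS assignment for a smart-device upload (hypothetical numbers).
# Each stored unit gets PTS = recording start time + per-unit offset, where
# the offset follows from the duration and the GOP/ADTS-frame counts.
from datetime import datetime, timedelta, timezone

start = datetime(2018, 1, 15, 10, 0, 0, tzinfo=timezone.utc)  # recording start
duration = 30.0   # recording time in seconds
n_gops = 30       # stored video GOPs -> one GOP per second in this sketch
n_adts = 1406     # stored audio ADTS frames (~48,000 / 1,024 per second)

video_step = duration / n_gops   # seconds per GOP
audio_step = duration / n_adts   # seconds per ADTS frame (~21.3 ms)

video_pts = [start + timedelta(seconds=i * video_step) for i in range(n_gops)]
audio_pts = [start + timedelta(seconds=i * audio_step) for i in range(n_adts)]

# The timestamp_list pairs each PTS with the stored unit's location (URL).
timestamp_list = [(pts, f"http://server/upload42/video/gop_{i:04d}")
                  for i, pts in enumerate(video_pts)]
print(timestamp_list[1])  # second GOP starts 1 s after the recording start
```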

The scene search script runs on the HTTP server and enables HTTP access, by designating absolute times, to the files stored in the simple format. The timestamp_lists on the server are searched according to the absolute time, the number of search scenes, and the search duration requested by the client. The script sets the absolute time as the start time, selects only media containing scenes whose PTSs are close to the start time and lie within the search duration, and responds to the client in the form of an MPD_list. We currently group candidate timestamp_lists by source address, which distinguishes cameras, and by package ID, which distinguishes programs, with an algorithm that selects the candidates closest to the required absolute time. More advanced methods will need to be considered in the future, such as using GPS data that indicate the location of the scene. A sketch of this search is given at the end of this section.

The client software needs to present multiple scenes from multiple simple-format files placed on the HTTP server by designating the absolute times requested by the client. From the viewpoint of the user interface, it seems preferable to be able to view multiple screens simultaneously. However, because current terminals can decode only one HEVC (high efficiency video coding) video at a time, client software that receives multiple video images has to switch among them on a common time reference for display. We therefore implemented a client that makes it possible to view multiple screens simultaneously in a browser by converting the HEVC video to AVC (advanced video coding) and the simple format to MPEG-DASH (dynamic adaptive streaming over HTTP) [9].

Fig. 8 outlines the functional configuration obtained with the MPEG-DASH method. A browser is used as the client software. The video MPUs are converted beforehand on the HTTP server from HEVC to AVC so that they can be played in the browser. In response to a client request, the MPD_list, the MPDs (media presentation descriptions), and the DASH segments are generated in real time and returned to the client. The browser is a conventional HTML5 browser; simultaneous viewing of multiple scenes is achieved with an HTML5 application, and dash.js acquires the MPDs and DASH segments according to the MPD_list.

Fig. 8  Functional configuration with the MPEG-DASH method (the client requests an absolute time, a number of search scenes, and a search duration; the scene search script returns the MPD_list, and the HTML5 browser with an HTML5 app and dash.js then requests the MPDs and the video/audio DASH segments, which are created in real time on the HTTP server).

Fig. 9 shows photographs of the display screen on the HTML5 browser. We confirmed that multiple screens can be displayed at the same time, making a more intelligible video presentation possible.

Fig. 9  Display screen on the HTML5 browser.
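The following is a minimal sketch of the scene search described above. The data layout, addresses, and URLs are hypothetical: given an absolute time, a number of scenes, and a search duration, it groups stored media by (source address, package ID), keeps the best in-window PTS per group, and returns an MPD_list of the closest candidates.

```python
# Minimal sketch of the scene search script (hypothetical data layout).
# Request: absolute start time, number of search scenes, search duration.
# Response: MPD_list with one MPD URL per selected camera/program.
from datetime import datetime, timedelta, timezone

# (source_address, package_id) -> list of (UTC PTS, MPD URL); illustrative data
stores = {
    ("192.0.2.10", 1): [(datetime(2018, 1, 15, 10, 0, 0, tzinfo=timezone.utc),
                         "http://server/cam1/prog1.mpd")],
    ("192.0.2.11", 1): [(datetime(2018, 1, 15, 10, 0, 5, tzinfo=timezone.utc),
                         "http://server/cam2/prog1.mpd")],
    ("192.0.2.12", 2): [(datetime(2018, 1, 15, 11, 0, 0, tzinfo=timezone.utc),
                         "http://server/cam3/prog2.mpd")],
}

def search_scenes(start: datetime, n_scenes: int, duration_s: float) -> list:
    """Select up to n_scenes media whose PTSs fall within the search duration,
    preferring the candidates closest to the requested start time."""
    window_end = start + timedelta(seconds=duration_s)
    hits = []
    for (src, pkg), entries in stores.items():   # one group per camera/program
        in_window = [(abs((pts - start).total_seconds()), url)
                     for pts, url in entries if start <= pts <= window_end]
        if in_window:
            hits.append(min(in_window))          # best PTS within this group
    hits.sort()                                  # closest to start time first
    return [url for _, url in hits[:n_scenes]]   # the MPD_list

print(search_scenes(datetime(2018, 1, 15, 10, 0, 0, tzinfo=timezone.utc), 4, 60.0))
# -> ['http://server/cam1/prog1.mpd', 'http://server/cam2/prog1.mpd']
```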

6. Evaluation

We evaluated the feasibility of the implemented system, taking the time before the user can enjoy the service as one measure of feasibility. That is, if uploading content, or searching for and presenting it, takes a long time, the service will not be used by customers. Immediacy is a very important factor in the feasibility of services.

First, since the stream data sent from the MMT streamer are converted in real time, that processing takes only the real-time length of the content. For user-generated content recorded on a smart device, however, the stored file must be converted, so we measured the processing time as a function of the length of the content. Table 2 shows the results. As shown, conversion to the simple format can be completed in about half the length of the content, which is within a feasible range.

Table 2  Processing time of upload.
Length of content:           10 s     20 s     30 s      60 s
Processing time of upload:   4.77 s   8.62 s   12.82 s   25.31 s

Next, the time from a client request to the presentation of the result was also measured. Table 3 shows the results. Drawing performance appeared to differ depending on the capability of the client device. In general, the presentation processing time grows as the reproduction time and the number of presented videos increase.

Table 3  Processing time of presentation.
Reproduction time:      15 s     30 s     45 s      60 s
2-video presentation:   2.57 s   4.71 s   7.19 s    8.73 s
3-video presentation:   3.87 s   7.92 s   13.18 s   20.03 s
4-video presentation:   5.22 s   9.90 s   14.88 s   25.96 s

Regarding the reproduction time, the proposed system appears suited to services that search through videos one after another with short playback times, rather than continuous viewing over a long period. Regarding the number of presented videos, watching many videos at once becomes difficult as the number increases, so displaying about four videos is practical. From these results, videos can be provided within a feasible processing time, and the proposed system with the simple format is sufficiently realizable for the services it suits.

7. Conclusion

We proposed a system that effectively utilizes innumerable pieces of video content. We compared three candidate stored formats, including one based on MMT, the media transport method for SHV satellite broadcasting. We demonstrated that the simple format was suitable for the assumed system, and we tested and verified its performance through a software implementation. We intend to upgrade the search algorithms of the system in the future.
References
[1] ISO/IEC 23008-1:2014: Information Technology - High Efficiency Coding and Media Delivery in Heterogeneous Environments - Part 1: MPEG Media Transport (MMT).
[2] Otsuki, K., Aoki, S., Kawamura, Y., Tsuchida, K., and Kimura, T. 2014. "MMT-based Super Hi-Vision Satellite Broadcasting Systems." NAB Broadcast Engineering Conference Proceedings, 38-44.
[3] Cisco Visual Networking Index: Forecast and Methodology, 2015-2020 White Paper. http://www.cisco.com/c/en/us/solutions/collateral/service-provider/ip-ngn-ip-next-generation-network/white_paper_c11-481360.html.
[4] ARIB STD-B60 (Version 1.8): MMT-based Media Transport Scheme in Digital Broadcasting System. September 2016 (in Japanese).
[5] Aoki, S., Kawamura, Y., Otsuki, K., and Hashimoto, A. 2016. "A Study on Conversion from MMT Protocol to HTTP." The Institute of Electronics, Information and Communication Engineers, Communications Society Conference, B-6-10 (in Japanese).
[6] Kawamura, Y., Otsuki, K., Hashimoto, A., and Endo, Y. 2016. "Functional Evaluation of Hybrid Content Delivery Using MPEG Media Transport." IEEE International Conference on Consumer Electronics 2016, 267-8.
[7] Otsuki, K., Hayami, T., and Fujita, Y. 2016. "A Study of Stored Formats to Enable HTTP Access by Designating Absolute Time." ITE Technical Report 40 (45): 17-20.
[8] Pcap Definitions [WinPcap user's manual]. https://www.winpcap.org/docs/docs_412/html/group__wpcap__def.html.
[9] ISO/IEC 23009-1:2014: Information Technology - Dynamic Adaptive Streaming over HTTP (DASH) - Part 1: Media Presentation Description and Segment Formats.

Kazuhiro Otsuki received his M.E. degree in science and engineering from the Tokyo Institute of Technology in 1997. He joined NHK in 1997 and has since been researching and developing digital broadcasting systems at the NHK Science & Technology Research Laboratories.

Yoshihiro Fujita is currently a professor at Ehime University. From 1976 to 2010 he was with NHK (Japan Broadcasting Corporation), where he conducted research on advanced imaging systems and on HDTV and UHDTV broadcasting systems at the NHK Science & Technology Research Laboratories (NHK STRL). From 2006 to 2010, he was the director of the System Research Division, in charge of broadcasting-communication integration systems, and then the deputy director of the laboratories. He received his B.S. and Ph.D. degrees in 1976 and 1998, respectively, from the University of Tokyo, and he is a fellow of the IEEE and the ITEJ (Institute of Television Engineers of Japan).