Video Annotation and Retrieval Using Vague Shot Intervals


Master Thesis

Video Annotation and Retrieval Using Vague Shot Intervals

Supervisor: Professor Katsumi Tanaka
Department of Social Informatics
Graduate School of Informatics
Kyoto University

Naoki FUKINO

February 6, 2004

Video Annotation and Retrieval Using Vague Shot Intervals

Naoki FUKINO

Abstract

In this paper, we propose a fast content description method for attaching metadata to video that is broadcast live. Multichannel digital broadcasting and hard-disk video recorders have recently become widespread, so the amount of available video content has become enormous, while the time we can spend watching it has not grown. A system therefore needs to support users by automatically extracting the high-value parts of a video. To extract the intervals that meet a user's request, the system has to understand the video's content, and for that it needs metadata describing the content. Describing the content of a video is called video annotation. Annotation is costly, so few video contents come with metadata, and it is especially rare for metadata to accompany a live broadcast. In this paper, we propose a content description method that emphasizes annotation speed, so that more video contents can be given metadata. Concretely, the goal of our research is to describe metadata for a sports video in real time.

There are two existing approaches to video annotation: the segmentation approach and the stratification approach. In the segmentation approach, the cost of annotation is low, but the approach is unsuited to videos, such as sports, that lack a semantic unit like the scene of a movie. In the stratification approach, an interval is specified for each keyword; intervals may overlap and one interval may contain another, which allows keywords to be described in detail, but specifying the boundary of every interval is costly. Neither approach suits describing a sports video in real time. Consequently, we use a new approach that is neither segmentation nor stratification: we describe only the timing at which an event is recognized, without specifying the boundaries of the event's interval precisely. The recognized timing of an event can be described by hand; moreover, since a sports commentator repeatedly recognizes an event and then speaks about it, the recognized timing can also be obtained by applying speech recognition to the commentary.

Our method uses the Vague Shot Interval and the Vague Modifier. A Vague Shot Interval is used to estimate the interval of an event and the relations between events; it expresses the distribution of the event's interval, with the recognized timing of the event as the origin of the distribution. A Vague Modifier is used to estimate which event is modified by a keyword; it expresses the distribution of the modified event's interval, with the recognized timing of the modifying keyword as the origin. The starting point and terminal point of each event's interval are expressed by probability distributions, so the boundaries are described vaguely. Using Vague Shot Intervals and Vague Modifiers, we can reduce the cost of content description. Retrieval over metadata described with Vague Shot Intervals and Vague Modifiers requires an algorithm suited to the form of the annotation, and in this paper we describe such a search algorithm as well.

We implemented a prototype system that uses the proposed annotation and retrieval methods, and annotated a soccer video. During annotation we neither stopped nor rewound the video, on the assumption that the video must be annotated in real time. As a result, we could describe a practical number of keywords in two passes. We then retrieved scenes from the described metadata; the recall was high but the precision was low, and roughly half of the wrongly retrieved scenes were caused by mistakes made during annotation. From this experiment, we verified that two annotators can describe a practical number of keywords in real time, and that retrieval over the resulting metadata is practical for applications in which retrieval accuracy is not critical. In this paper we also describe an application whose demand should grow once real-time annotation becomes common.

Content Description and Retrieval for Video Using Vague Shot Intervals

Naoki FUKINO

Abstract (Japanese)

In this paper, we propose a fast content description method for attaching metadata to live broadcast video. In recent years, thanks to multichannel digital broadcasting, the spread of hard-disk recorders, and the growth of video distribution services enabled by faster Internet connections, an enormous amount of video content has become available. However, the time an individual can spend watching video is limited. For users to watch only the parts they like, whenever they like, out of a huge amount of video, system support for selecting video intervals is necessary. For the system to extract intervals suited to a user's request, it must understand the content of the video, and for that it needs metadata describing what appears in which part of the video. Describing the content of a video is called video annotation. Annotation usually takes a great deal of time, so at present few videos are provided with metadata; in particular, it is extremely rare for a live broadcast to carry metadata. In this research, we propose a content description method that emphasizes speed, so that as many videos as possible can be given metadata. Concretely, our goal is to annotate sports video, which is often broadcast live, in real time.

Annotation methods can be broadly divided into the segmentation approach and the stratification approach. In the segmentation approach, the video is first divided into segments and keywords are attached to each segment; its annotation cost is low, but it is unsuited to videos without clear break points, such as sports video. In the stratification approach, an interval is set for each keyword; it is flexible, but the starting point and terminal point of every interval must be specified, so annotation takes time. Neither method is suited to describing the content of a live sports broadcast in real time. We therefore take a new approach that is neither segmentation nor stratification: we describe only the moment at which an event is recognized, without clearly specifying the starting and terminal points of the event's interval. Describing only the recognition moment can be done by hand; furthermore, since a live commentator repeats a cycle of recognizing an event and speaking about it, the description can also rely on that speech. So that the system can infer event intervals and relations between events from the recognition moments, our method uses two concepts: the Vague Shot Interval and the Vague Modifier. A Vague Shot Interval is used to infer the interval of an event and its relations; it represents where the event's interval is distributed, relative to the moment the event was recognized. A Vague Modifier is used to infer which event a modifier modifies; it represents where the modified event's interval is distributed, relative to the moment the modifier was recognized. The starting point and terminal point of each event interval are expressed by probability distributions, so the boundaries are vague. By describing content as it is recognized, using these concepts, the cost of content description can be greatly reduced. Retrieval over metadata described with Vague Shot Intervals and Vague Modifiers requires an algorithm suited to that form, and we describe the algorithm as well.

We implemented a prototype of the proposed method and annotated a soccer video. Assuming actual real-time annotation, we neither stopped nor rewound the video until it ended. As a result, a practical number of keywords could be described in two passes. We also ran several searches over the metadata produced by this work; recall was high but precision was somewhat low, and about half of the non-relevant results were caused by event recognition mistakes during annotation. We conclude that real-time annotation is possible with two or more annotators, and that although precision is somewhat low, retrieval over the resulting metadata is practical for applications in which retrieval accuracy is not critical. We also touch on applications whose demand should grow once real-time annotation becomes common.

Video Annotation and Retrieval Using Vague Shot Intervals

Contents

Chapter 1  Introduction
Chapter 2  Basic Items and Related Work
  2.1  Basic Items
       Segmentation Approach and Stratification Approach
       MPEG7
  2.2  Related Work
Chapter 3  Vague Descriptors
  3.1  Vague Shot Interval
  3.2  Vague Modifier
  3.3  Vague Scene Structure
Chapter 4  Search Algorithm
  4.1  Overview of Search Process
  4.2  Query Form
  4.3  Use of Thesaurus
  4.4  Fundamental Binary Relations
  4.5  Additional Binary Relations
  4.6  Query Easing for Multi Keywords
Chapter 5  Experiment and Evaluation
  5.1  System Overview
  5.2  Creating Thesaurus
  5.3  Creating Tag Information
  5.4  Annotating Movie
  5.5  Search Result
  5.6  Evaluation
Chapter 6  Discussion
  6.1  Multi-modal Annotation
  6.2  Query Generation from News Article
Chapter 7  Summary and Conclusion
Acknowledgments
References

Chapter 1  Introduction

For several years, multichannel digital broadcasting and hard-disk video recorders have been becoming widespread. Thanks to multichannel digital broadcasting, we can obtain a wide variety of video contents we are interested in; thanks to hard-disk video recorders, we can accumulate video contents and watch them at our leisure. Consequently, the amount of available video content has become enormous. However, the time we can spend watching video has not grown, so we have to select the contents that are most valuable to us, and furthermore extract the high-value parts from each content. It is hard for viewers to extract all the high-value parts from all video contents themselves, so a system needs to support users by extracting high-value parts automatically. However, a video's bit stream cannot be an object of a search engine as it is: the search engine needs metadata about the video content to understand the details of each part. Metadata contains information about what appears in each part of a video content. Metadata technology has recently attracted a great deal of attention, and MPEG7 has been standardized as a metadata format; with MPEG7 we can describe various characteristics of a video content. Even so, the number of video contents with detailed metadata is still small. One reason is thought to be that describing metadata takes much time. At present it is hard to recognize the meaning of a scene by image recognition, speech recognition, and so on, so content description by hand is needed for efficient retrieval. However, manual content description is time-consuming, which makes it hard to describe metadata for video contents that are produced continuously. The metadata of sports video is especially hard to describe, because sports video is often broadcast live and so has to be described in real time.

On the other hand, the importance of a sports video varies greatly from part to part, so sports video is exactly where metadata for watching one's favorite parts is in high demand. We therefore propose a low-cost content description method under the assumption that metadata must be described in real time for sports video contents, especially for soccer video contents. In this method, we describe the events that occur in the video by hand, in text form. Because it costs too much to describe details, we reduce the cost of content description by describing information vaguely when it does not need to be described precisely. When we attach a keyword to a certain part of a video, we normally have to specify the starting point and the terminal point of that part, but doing so costs too much; moreover, in some cases it is impossible to specify the interval in real time, because the starting point of the interval corresponding to an event lies before the moment the event is recognized. We therefore take the following method. In the content description process, we describe only the recognized timing of events. In the retrieval process, the search engine estimates the actual intervals corresponding to the events, and the relations between those intervals, using knowledge arranged in advance. This knowledge consists of two elements: first, the distribution of the past intervals of an event, centered on the time point at which the event is recognized; second, the degree of the impression an event makes on the surrounding time. This knowledge is accumulated in a dictionary in the form of Vague Shot Intervals and Vague Modifiers. In addition, knowledge of the alternative scene structures of an event is accumulated in the form of Vague Scene Structures, to reflect differences in the subjective definition of an event's scene.

The remainder of the paper is organized as follows. In Chapter 2, we describe basic items of content description and related work. In Chapter 3, we describe the Vague Shot Interval and the Vague Modifier, the ideas used for low-cost content description, as well as the Vague Scene Structure, which is used in the search process. In Chapter 4, we describe the search method for metadata described using Vague Shot Intervals and Vague Modifiers. In Chapter 5, we describe a prototype system that implements the search algorithm of Chapter 4, and evaluate the cost of content description and the effectiveness of the search algorithm. In Chapter 6, we discuss an alternative method of content description using Vague Shot Intervals and Vague Modifiers, as well as applications of the proposed technology. In Chapter 7, we conclude.

Chapter 2  Basic Items and Related Work

To retrieve scenes from video, we have to recognize and describe its content. Describing the content of a video is called annotation of the video. In this chapter, we describe basic items and related work on annotation.

2.1  Basic Items

Segmentation Approach and Stratification Approach

There are two approaches to describing video content: the segmentation approach and the stratification approach. They are represented in figure 2.1.

Figure 2.1: Segmentation Approach and Stratification Approach

In the segmentation approach, the video is divided into segments and keywords are attached to each segment. The segments do not overlap. This approach has the advantage of a low annotation cost, but it is not suited to annotating videos that lack a semantic unit such as the scene of a movie. In the stratification approach, an interval is specified for each keyword. These intervals can overlap, and one interval can contain another. This approach can describe keywords in detail, but it is costly because the boundary of every interval has to be specified.

We aim to annotate sports video in real time. Sports video has no distinct semantic unit, so we cannot use the segmentation approach; but if we use the stratification approach directly, we cannot annotate in real time because of its cost. Consequently, we use the stratification approach without specifying the boundaries of intervals precisely.

MPEG7

In response to the growing demand for metadata techniques, MPEG7 [6] was designed. The official name of MPEG7 is Multimedia Content Description Interface, and it defines a standard description tool for multimedia content, that is, a notation method for metadata. The process from creation of metadata to use of metadata is shown in figure 2.2. The methods for creating metadata and for using metadata are not standardized by MPEG7. We propose a method for creating metadata by annotating sports video in real time, and for using that metadata.

Figure 2.2: Standardized part by MPEG7

2.2  Related Work

There are various methods for video annotation. The Informedia project [1][2] at CMU is a famous system that annotates video using image recognition, speech recognition, and natural language processing; it uses the segmentation approach, and the total length of annotated video exceeds 1000 hours. NHK [3] develops real-time speech recognition technology for the closed captioning of live broadcasts; this technology uses manpower to adjust the recognized text in real time. With digital broadcasting, a broadcasting station can send the scenario of a drama; in [8], a matching method between parts of a scenario and parts of a video is proposed, using DP matching and image/speech recognition. A method that describes keywords using a scenario and structures the video by clustering keywords has also been proposed [14][15]. GDA (Global Document Annotation) [4] is an XML tag set used to express natural language so that it is understandable by machines; a tool that converts natural language to GDA is useful in annotation. In the analysis of sports video, Snoek et al. [12] propose a method that uses a support vector machine for multimodal recognition, and Nalagawa et al. [16] propose a method that recognizes plays and camera work using image recognition and stores the results in a score book.
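The contrast between the two annotation approaches of Section 2.1 is essentially a contrast of data models, which a short sketch can make concrete. The class names and the numeric values below are our own invention for illustration, not taken from the thesis:

```python
from dataclasses import dataclass, field

# Segmentation approach: the video is partitioned into disjoint,
# gap-free segments, and keywords are attached to each segment.
@dataclass
class Segment:
    start: float                    # seconds from the start of the video
    end: float
    keywords: list[str] = field(default_factory=list)

# Stratification approach: each keyword gets its own interval;
# intervals may overlap or contain one another freely.
@dataclass
class Stratum:
    keyword: str
    start: float
    end: float

# The same (invented) goal scene annotated both ways:
segmentation = [
    Segment(0.0, 12.0, ["attack"]),
    Segment(12.0, 15.0, ["shot"]),  # segments tile the timeline
]
strata = [
    Stratum("attack", 0.0, 15.0),   # contains the pass and the shot
    Stratum("pass", 4.0, 6.0),
    Stratum("shot", 12.0, 15.0),    # containment/overlap is allowed
]
```

The stratified form can say "the pass lies inside the attack", which the segmented form cannot; the price, as the chapter notes, is that every boundary must be specified.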

Chapter 3  Vague Descriptors

When we annotate video with keywords, or search for scenes in a video by keyword, we run into a problem caused by vagueness. For example, when we want to watch a goal scene in a soccer video, we enter goal into the search engine; but the definition of goal varies from person to person. One person may picture the goal scene as the shot scene, while another pictures it as the whole attack scene. This problem arises whenever the recognition of the annotator and that of the user differ. There is another problem: it costs too much to annotate video. To annotate a video with a keyword, we have to specify the starting point and the terminal point of the interval to which the keyword applies. This problem is especially serious when the object is a sports video, which is broadcast live in many cases; if we want to attach metadata to the broadcast, we have to annotate the video in a short time. To cope with the problems described above, we introduce the following three ideas into video annotation and retrieval:

- Vague Shot Interval
- Vague Modifier
- Vague Scene Structure

The Vague Shot Interval and the Vague Modifier are used to solve the cost problem. The Vague Scene Structure is used to reflect the fact that the definition of a scene differs from person to person. We describe the details in the following sections.

3.1  Vague Shot Interval

When we must annotate video in real time, or nearly so, we cannot specify the starting point and the terminal point of an interval. In particular, depending on the kind of keyword, it is impossible to specify the starting point in real time, because the timing at which the event is recognized comes after the event's starting point. If we describe only the recognized timing of the event, the cost of annotation is mitigated. The recognized timing alone cannot pin down the starting and terminal points of the event, but we can roughly estimate them by considering the characteristics of the event. For example, we expect the starting point and terminal point of the interval corresponding to the event pass to lie near the recognized timing, with a length of one or two seconds; we also expect the interval corresponding to the event long pass to be longer than that of pass. If this knowledge is accumulated in a dictionary, the system can predict the boundaries of an interval from the recognized timing. We accumulate this knowledge in the form of probability distributions over a set of past intervals. The knowledge is generated in the following two steps.

First, we accumulate the starting points and terminal points of past intervals. An example of the result is shown in table 3.1. Starting points and terminal points are measured relative to a standard time point taken as 0 seconds. In real-time annotation, the recognized timing of the event serves as this standard time point; it may not when the video is annotated by another method (e.g., image recognition).

Table 3.1: Past intervals corresponding to long pass
  starting point / terminal point
  Past Interval 1:  … seconds / 3.9 seconds
  Past Interval 2:  … seconds / 3.1 seconds
  Past Interval 3:  … seconds / 5.0 seconds
  Past Interval 4:  … seconds / 4.3 seconds

Secondly, the probability distributions of the starting point and the terminal point are generated from the past intervals (figure 3.1). We call an interval whose starting point and terminal point are expressed by probability distributions a Vague Shot Interval.

Figure 3.1: Making probability distributions from past intervals

In the annotation process, the annotator describes only the recognized timing of the event. Then, in the retrieval process, the search engine predicts the interval corresponding to the event using the event's Vague Shot Interval. This process is shown in figure 3.2. The Vague Shot Interval of the event in the dictionary is placed onto the video's time line, with the timing described by the annotator as its origin. The search engine then determines the starting point and terminal point of the event from the placed Vague Shot Interval. If the probability distribution of the starting point is f_start(t) and the probability distribution of the terminal point is f_end(t), the probability I(t) that the time point t is contained in the interval is calculated by the following formula:

    I(t) = \int_{-\infty}^{t} f_start(\tau) d\tau - \int_{-\infty}^{t} f_end(\tau) d\tau        (3.1)

Next, the system sets a threshold and determines the interval of the event: within the determined interval, the probability that a time point t is contained in the interval is higher than the threshold. If the threshold is set lower, the interval becomes longer.

Figure 3.2: Determining an interval's boundary

The probability distributions of a Vague Shot Interval are used not only to determine an interval, but also to estimate the relation between a pair of intervals expressed by Vague Shot Intervals.

3.2  Vague Modifier

The Vague Modifier is used to predict the keywords that modify the interval of a certain event. The interval corresponding to an event can be modified by various keywords: for example, an interval corresponding to the event pass is modified by the keyword Zidane if the pass is released by Zidane, and an event shot is modified by the keyword goal if the shot leads to a goal. It is difficult to describe these keywords in real time because of the cost of description, so we describe only the recognized timing of the keywords and let the system estimate which event is modified by each keyword. The scene that should be modified by the keyword goal tends to lie before the time point at which goal is recognized, because the timing at which we can recognize a goal is near the end of the goal scene. On the other hand, we can recognize the event corner kick before the scene that should be modified by the keyword corner kick. If this knowledge is kept in the dictionary, the modifying keywords of an event's interval can be predicted. A Vague Modifier is expressed, like a Vague Shot Interval, as an interval whose boundaries are expressed by probability distributions; but in a Vague Modifier the standard point is the recognized timing of the modifier keyword, and the interval is that of the modified event. The process of making a Vague Modifier is shown in figure 3.3.

The system predicts which event is modified by the keyword; this process is shown in figure 3.4. First, the Vague Modifiers of the keywords are placed onto the time line of the video. In the instance in the figure, the Vague Modifier that expresses the distribution of the event pass relative to the recognized timing of the player (Zidane) is placed onto the time line. Second, the Vague Shot Intervals of the events are placed onto the time line of the video.
In the instance in the figure, the Vague Shot Interval that expresses the distribution of the event pass relative to the recognized timing of the pass is placed onto the time line.

Figure 3.3: Vague Modifier of GOAL

Finally, the system determines the event that the keyword modifies: the event whose Vague Shot Interval has the largest overlap with the Vague Modifier is selected. In the instance in the figure, the system estimates that the second pass event is modified by the keyword Zidane, because the Vague Shot Interval of the second pass event and the Vague Modifier of the keyword Zidane have the largest overlap.

Figure 3.4: Vague Modifier of GOAL

By using the Vague Modifier, we reduce the cost of describing the modifiers of events.

3.3  Vague Scene Structure

The Vague Scene Structure represents alternative structures of an event. It is used to reflect differences between users in the definition of a scene. When a user enters a keyword to search for it, the image of the keyword's scene may vary from user to user. For example, when a user searches for scenes with the keyword goal, one user may picture the shot scene, another the shot scene together with the pass that leads to the goal, and yet another the whole attack scene that leads to the goal. The same differences exist among annotators. The differences among annotators can be eliminated if the annotators share a common definition of the scene; however, it is not possible to expect users to share a common definition. For this reason, the alternative structures of a keyword are stored in the dictionary in the form of the graph shown in figure 3.5.

Figure 3.5: Alternative structures of Goal and Zidane

The top-left graph means the event shot modified by the keyword goal. An event in a shaded circle is a component of the scene. In the search process, the search engine uses the alternative structures, and the respective results are shown to the user. If the user selects one structure, the search engine uses that structure from then on. If a user enters multiple keywords, the structure the user has in mind can be guessed from the combination of keywords.
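The interval estimation of Section 3.1 — placing a dictionary distribution at the recognized timing, evaluating formula (3.1), and thresholding — can be made concrete with a minimal runnable sketch. It assumes Gaussian boundary distributions; the numbers in the long pass entry and all function names are hypothetical, not taken from the thesis:

```python
import math

def gauss_cdf(t, mu, sigma):
    """CDF of a normal distribution; models one vague boundary."""
    return 0.5 * (1.0 + math.erf((t - mu) / (sigma * math.sqrt(2.0))))

def membership(t, start, end):
    """I(t) = P(start <= t) - P(end <= t), i.e. formula (3.1).
    start and end are (mu, sigma) pairs on the video time line."""
    return gauss_cdf(t, *start) - gauss_cdf(t, *end)

def crisp_interval(start, end, threshold=0.5, step=0.05):
    """Scan the time line and keep the region where I(t) >= threshold.
    A lower threshold yields a longer interval."""
    lo, hi = start[0] - 4 * start[1], end[0] + 4 * end[1]
    times = [lo + i * step for i in range(int((hi - lo) / step) + 1)]
    inside = [t for t in times if membership(t, start, end) >= threshold]
    return (min(inside), max(inside)) if inside else None

def place(vsi, recognized_at):
    """Shift a dictionary entry so its origin is the recognized timing."""
    (s_mu, s_sd), (e_mu, e_sd) = vsi
    return (s_mu + recognized_at, s_sd), (e_mu + recognized_at, e_sd)

# Invented dictionary entry for "long pass": boundary distributions
# relative to the recognized timing (the start lies before it).
long_pass_vsi = ((-1.5, 0.5), (4.0, 0.7))

# An annotator recorded "long pass" at 63.0 s into the video.
start, end = place(long_pass_vsi, recognized_at=63.0)
iv = crisp_interval(start, end)
```

The same machinery extends naturally to the Vague Modifier of Section 3.2: placing the modifier's distribution on the time line and picking the event whose Vague Shot Interval overlaps it most.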

Chapter 4  Search Algorithm

In this chapter we describe the method for retrieving video scenes from the metadata. In section 4.1, we describe the flow of retrieval. In section 4.2, we describe the query form that expresses the user's request. In section 4.3, we explain the thesaurus, which is used in retrieval to enhance its accuracy. In section 4.4, we describe the fundamental relations between a pair of intervals expressed by Vague Shot Intervals. In section 4.5, we describe additional relations.

4.1  Overview of Search Process

The search process can be divided into the following steps.

1. Enter a query.
2. Generate a query graph from the entered query.
3. Extract combinations of events from the metadata.
4. Calculate the goodness of each combination.
5. Show the extracted scenes to the user.

In step 1, the user enters a query in textual form. In step 2, the search engine generates a query graph, as shown in section 4.2, from the entered text. In step 3, the search engine extracts events from the metadata of the video. The metadata is a set of tags; a tag contains the following data:

- the time instant at which the tag was recorded
- tag information

The tag information in turn contains:

- a name
- a type (Modifier or Interval)

If the type is Modifier, the tag information contains a membership function. If the type is Interval, the tag information contains the time distributions of the boundaries; the interval is expressed by a Vague Shot Interval. The search engine extracts combinations of tags in accordance with the query. In step 4, the search engine calculates the goodness of each combination extracted in step 3. The goodness is the probability that the combination of tags meets

the conditions described in the query. In step 5, the search engine shows the candidate scenes to the user, sorted in descending order of goodness.

4.2  Query Form

The query form must be able to represent the following information:

- the semantic units that the user wants
- the conditions that the scene has to meet

In ordinary web page retrieval, users do not need to specify the semantic units, because the semantic unit the user gets after retrieval is essentially one page. In video scene retrieval, however, no such common semantic unit exists. Almost all videos have cuts, but there is no guarantee that a cut in a video carries any meaning; especially in sports video, unlike in movies, it is rare that a cut contains enough information. Therefore the semantic units in the video need to be specified by the query. A semantic unit specified in the query is an event that has an interval, or a set of such events. In addition to the semantic units, the conditions the scene has to meet need to be specified; conditions are described as relations between events.

As an example, consider the case where a user wants an attack scene that contains a pass by Zidane, where this pass leads to a goal. The semantic unit the user wants is the following event:

- Attack

The conditions the scene has to meet are the following:

- the Attack event contains a Pass event
- the Pass event is modified by Zidane
- the Pass event is modified by goal

A graph representing these conditions is shown in figure 4.1. In figure 4.1, an ellipse denotes a semantic unit, that is, an interval corresponding to an event. An arrow denotes a condition between events; in this instance, the arrow means that the interval of the Attack event contains the interval of the Pass event. Keywords surrounded by rectangles denote keywords that modify events. In

Figure 4.1: Query Form (Query Graph)

this instance, the Pass event is modified by the keyword goal and the keyword Zidane, in the sense that the pass is by Zidane and the pass leads to a goal. Thus the query form is expressed by a graph with the following elements.

Node: expresses an event. The information of a node consists of an event keyword, modifying keywords, and a flag; the flag specifies whether this event is to be played or not.

Edge: expresses a relation between events. The information of an edge consists of the relation only; the available relations are shown in the following sections.

In the search process, the search engine finds the combination of events that meets the conditions of the query, and plays the interval of each event that is to be played. When more than two events are to be played and there are gaps between them, the system has to determine whether each gap is played or not; this depends on the length of the gap and the kind of the events. In the current system, if the total length of the gaps is larger than the total length of the event intervals, the gaps are not played.

4.3  Use of Thesaurus

In real-time video annotation, it is hard to describe abundant metadata because of the limited time, so it is all the more important to absorb differences between the keywords of a query and the keywords of the metadata by using a thesaurus. A thesaurus is a dictionary that holds information about synonyms, hypernyms, and hyponyms. A thesaurus can

be expressed as the graph structure shown in figure 4.2.

Figure 4.2: Structure of thesaurus

In this graph, each node is a set of synonyms: Cross, Crossing Pass, and Center have the same meaning. Each arrow denotes a relation between nodes: the node pointed to by an arrow is a hypernym of the node at the root of the arrow. Using the synonym information in the thesaurus, the search engine can extract a cross tag even if the entered keyword is center; using the hypernym information, it can extract a cross tag even if the entered keyword is pass. If there is no tag whose keyword is a synonym or hyponym of the entered keyword, the search engine extracts tags whose keyword is a hypernym of the entered keyword, because an event tagged with the hypernym may in fact be an instance of the entered keyword.

4.4  Fundamental Binary Relations

We express a relation between events as a temporal relation between the intervals corresponding to the events. There are thirteen fundamental relations, known as the Allen primitives, between pairs of time intervals; these relations are shown in figure 4.3. Every relation between a pair of time intervals belongs to exactly one of the Allen primitives, and the Allen primitives are mutually exclusive. The relation between a pair of intervals depends on the positions of each interval's starting point and terminal point. For example, if the starting point of interval i1 is before the starting point of interval i2, and the terminal point of i1 is after the terminal point of i2, then the relation between i1 and i2 is Contains, which means

Figure 4.3: Allen Primitives

the interval i1 contains i2. If the interval of the event Attack is i1 and the interval of the event Pass is i2, the relation between Attack and Pass is Contains. However, if the starting point and terminal point are represented by a Vague Shot Interval, the relation between the intervals cannot be specified, because the starting point and the terminal point themselves cannot be specified. An instance is shown in figure 4.4.

Figure 4.4: Possible relations between interval 1 and interval 2

In this case three relations are possible. The relation depends on the actual starting point and actual end point of each interval expressed by a Vague Shot Interval: if the actual starting point of interval i1 is after the actual terminal point of interval i2, the relation turns out to be After. Since the relation cannot be determined from the possible relations alone, we use the probability of meeting each relation. In the case shown in figure 4.4, if the probability that the actual interval i1 is after the actual interval i2 is 0.2, the relation is expressed in the following form:

    After(i1, i2) = 0.2        (4.1)

In the case shown in figure 4.4, the possible relations are After, Contains, and Overlapped

by. So the following formula has to be met:

    After(i1, i2) + Contains(i1, i2) + OverlappedBy(i1, i2) = 1        (4.2)

Combinations of intervals whose probability of meeting the conditions is high are shown to the user preferentially. The relation between a pair of intervals is determined by the relations among the starting points and terminal points of the intervals. For example, the relation between intervals A and B turns out to be Overlaps if the starting point of A is before the starting point of B, the terminal point of A is after the starting point of B, and the terminal point of A is before the terminal point of B. This decision process is shown in figure 4.5.

Figure 4.5: Binary tree for determining relation

In this figure, A_start < B_start expresses that the starting point of A is before the starting point of B. By following the graph, the relation between a pair of intervals is determined from the relations between pairs of time points. The graph omits the case in which two time points are simultaneous, because the probability that time points distributed according to continuous probability distributions coincide is 0. Therefore the relation between a pair of intervals expressed by Vague Shot Intervals is expressed by the six probabilities

28 of Af ter, Bef ore, Contains, During, Overlaps, Overlapped by. A sum of 6 probabilities must be 1. For calculate these six probabilities, the probability that a time point is before or after another time point have to be calculated. As shown in figure 4.6, if a time point T 1 follows the probability distribution f 1 (t) andatimepoint T 2 follows the probability distribution f 2 (t), the probability P (T 1 <T 2 )that the time point T 1 is before T 2 is calculated by following formula. P (T 1 <T 2 ) = (P (T 1 >T 2 ) = Z Z Z T f 2 (T ) f 2 (T ) Z = 1 P (T 1 <T 2 )) T f 1 (t)dtdt f 1 (t)dtdt Figure 4.6: Relation between time point and time point expressed by probability distribution Using this probability, each relation between a pair of interval is calculated. Each probability is calculated following formula. Before(i1,i2) = P (i1 start <i2 start ) P (i1 end <i2 start ) Overlaps(i1,i2) = P (i1 start <i2 start ) P (i1 end >i2 start ) P (i1 end <i2 end ) Contains(i1,i2) = P (i1 start <i2 start ) P (i1 end >i2 start ) P (i1 end >i2 end ) 21

After(i1, i2)        = P(i1_start > i2_start) · P(i1_start > i2_end)
OverlappedBy(i1, i2) = P(i1_start > i2_start) · P(i1_start < i2_end) · P(i1_end > i2_end)
During(i1, i2)       = P(i1_start > i2_start) · P(i1_start < i2_end) · P(i1_end < i2_end)

In these formulas, i1 and i2 denote intervals expressed by Vague Shot Intervals, and i1_start denotes the starting point of interval i1. For describing conditions between events, we use these relations as the Fundamental Binary Relations between intervals expressed by Vague Shot Intervals.

4.5 Additional Binary Relations

To express the relation between a pair of intervals, the Fundamental Binary Relations were described in the previous section. In the search process, however, other relations may be required. In this section, we describe additional binary relations. Additional binary relations are created by the following operations on the Fundamental Binary Relations:

1. Combine relations
2. Introduce error

By combining relations, we create new relations. The relations created by this operation are the following; they treat the two intervals symmetrically.

ContainsOrDuring
HaveOverlap
OverlapsOrOverlappedBy
StandOff

ContainsOrDuring means that one interval includes the other; in this relation the two intervals are not distinguished. HaveOverlap means that the intervals overlap. OverlapsOrOverlappedBy means that the intervals overlap but neither includes the other. StandOff means that the two intervals do not overlap at all. The probabilities that intervals i1 and i2 satisfy these relations are calculated by the following formulas:

ContainsOrDuring(i1, i2) = Contains(i1, i2) + During(i1, i2)
HaveOverlap(i1, i2) = Contains(i1, i2) + During(i1, i2) + Overlaps(i1, i2) + OverlappedBy(i1, i2)
OverlapsOrOverlappedBy(i1, i2) = Overlaps(i1, i2) + OverlappedBy(i1, i2)
StandOff(i1, i2) = Before(i1, i2) + After(i1, i2)

Next, we describe a method to introduce error into the relations. In section 4.4, we did not consider seven of Allen's primitive relations: Meets, MetBy, Starts, StartedBy, Finishes, FinishedBy, and Cotemporal. This is because the probability that a pair of time points expressed by continuous probability distributions is exactly simultaneous is 0. But if an error tolerance is introduced into the relations, these seven relations can be used. For example, if an error of 1 second is allowed, the intervals i1 and i2 in figure 4.7 do not satisfy the relation Meets(i1, i2); but if an error of 3 seconds is allowed, the intervals i1 and i2 in figure 4.7 do satisfy the relation Meets(i1, i2).

Figure 4.7: Gap between intervals

In this instance, the relation of the intervals depends on the relation between the terminal point of i1 and the starting point of i2. The other relations likewise depend on relations between starting points and terminal points. So the relation between a pair of time points needs to be decided with the error tolerance. When an error of e seconds is allowed, the relation between the time points T1 and T2 is determined by the following rules.

If the time points satisfy the following formula, the time point T1 is before T2:

T1 < T2 + e    (4.3)

If the time points satisfy the following formula, the time point T1 is simultaneous with T2:

(T1 + e > T2) ∧ (T1 < T2 + e)    (4.4)

If the time points satisfy the following formula, the time point T1 is after T2:

T1 + e > T2    (4.5)

If the time points are expressed by probability distributions, the probability that a pair of time points satisfies each relation is calculated by the following formulas. Here P(T1 < T2, e) means the probability that T1 is before T2 when an error of e seconds is allowed.

P(T1 < T2, e) = ∫_{-∞}^{+∞} f2(T − e) [ ∫_{-∞}^{T} f1(t) dt ] dT
P(T1 > T2, e) = ∫_{-∞}^{+∞} f1(T − e) [ ∫_{-∞}^{T} f2(t) dt ] dT
P(T1 = T2, e) = P(T1 < T2, e) + P(T1 > T2, e) − 1

By using these relations, the relation between a pair of intervals is determined. The seven relations that involve simultaneous time points are determined by the following rules.

meets: If the terminal point of interval i1 is simultaneous with the starting point of interval i2, the relation i1 meets i2 holds.

met-by: If the starting point of interval i1 is simultaneous with the terminal point of interval i2, the relation i1 met-by i2 holds.

starts: If the starting point of interval i1 is simultaneous with the starting point of interval i2 and the terminal point of interval i1 is before the terminal point of interval i2, the relation i1 starts i2 holds.

started-by: If the starting point of interval i1 is simultaneous with the starting point of interval i2 and the terminal point of interval i1 is after the terminal point of interval i2, the relation i1 started-by i2 holds.

finishes: If the starting point of interval i1 is after the starting point of interval i2 and the terminal point of interval i1 is simultaneous with the terminal point of interval i2, the relation i1 finishes i2 holds.

finished-by: If the starting point of interval i1 is before the starting point of interval i2 and the terminal point of interval i1 is simultaneous with the terminal point of interval i2, the relation i1 finished-by i2 holds.

cotemporal: If the starting point of interval i1 is simultaneous with the starting point of interval i2 and the terminal point of interval i1 is simultaneous with the terminal point of interval i2, the relation i1 cotemporal i2 holds.

Therefore, when the error is introduced, the probabilities of the seven relations are calculated by the following formulas. Here e means the allowed error, i1 and i2 mean intervals expressed by Vague Shot Intervals, i1_start means the starting point of interval i1, and P(T1 < T2, e) means the probability that the time point T1 is before the time point T2 when an error of e seconds is allowed.

Meets(i1, i2, e)     = P(i1_end = i2_start, e)
MetBy(i1, i2, e)     = P(i1_start = i2_end, e)
Starts(i1, i2, e)    = P(i1_start = i2_start, e) · P(i1_end < i2_end, e)
StartedBy(i1, i2, e) = P(i1_start = i2_start, e) · P(i1_end > i2_end, e)

Finishes(i1, i2, e)   = P(i1_start > i2_start, e) · P(i1_end = i2_end, e)
FinishedBy(i1, i2, e) = P(i1_start < i2_start, e) · P(i1_end = i2_end, e)
Cotemporal(i1, i2, e) = P(i1_start = i2_start, e) · P(i1_end = i2_end, e)

Similarly, the error can be introduced into the Fundamental Binary Relations:

Before(i1, i2, e)       = P(i1_start < i2_start, e) · P(i1_end < i2_start, e)
Overlaps(i1, i2, e)     = P(i1_start < i2_start, e) · P(i1_end > i2_start, e) · P(i1_end < i2_end, e)
Contains(i1, i2, e)     = P(i1_start < i2_start, e) · P(i1_end > i2_start, e) · P(i1_end > i2_end, e)
After(i1, i2, e)        = P(i1_start > i2_start, e) · P(i1_start > i2_end, e)
OverlappedBy(i1, i2, e) = P(i1_start > i2_start, e) · P(i1_start < i2_end, e) · P(i1_end > i2_end, e)
During(i1, i2, e)       = P(i1_start > i2_start, e) · P(i1_start < i2_end, e) · P(i1_end < i2_end, e)

4.6 Query Easing for Multiple Keywords

When a user enters a query to retrieve scenes, the search engine can be rigorous about retrieving the scenes the user wants if the user uses the query graph described in section 4.2. But it is better to reduce the cost of entering the query, so a mechanism is needed by which the search engine decides the query automatically without the user specifying it. The easiest way to enter a query is to line up keywords without specifying any conditions. In web search, an information unit (one web page) that contains all the keywords is extracted. In video scene search there is no such information unit, so the search engine has to extract a scene using only the relations between keywords. Consequently the search engine selects scenes whose intervals are related to each other, and shows the united intervals to the user. In this research, the degree of relationship of a pair of intervals is estimated in the following order:

1. the pair of intervals is in an inclusive relation
2. the pair of intervals has an overlap
3. the pair of intervals does not have any overlap

Using this order, when the user requests a scene by lining up keywords, the search engine retrieves scenes by the following process:

1. Set the query so that all keywords are in an inclusive relation with each other.
2. Search for scenes with the query.
3. If the number of extracted scenes is not sufficient, relax the query and return to step 2.

To determine the order in which to relax the query, a score is set for each relation. In the prototype, the score is 3 for the relation ContainsOrDuring, 1 for the relation HaveOverlap, and 0 for the relation BeforeOrAfter. For example, if ContainsOrDuring(k1, k2) and HaveOverlap(k2, k3) are true, the score of a scene which contains k1, k2, and k3 is 4. The search engine relaxes the query in descending order of score. The process of query relaxation is shown in figure 4.8. In this figure, the tightest query is Query 1. If a sufficient number of scenes is not extracted by this query, the search engine relaxes it to Query 2, Query 3, and Query 4; the score of these queries is 7. The query is relaxed until the number of extracted scenes is sufficient.

After the number of scenes is sufficient, the search engine sorts the extracted scenes. The order depends on the goodness of the scene and the strictness of the query. The rate used to order the scenes is calculated by the following formula:

rate = ScoreOfQuery × GoodnessOfScene    (4.6)

An instance is shown in figure 4.9. The search engine shows the user the extracted scenes in order of rate.
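As an illustration, the scoring and relaxation order described above can be sketched as follows. This is a sketch under the assumption that a query assigns one of the three scored relations to every keyword pair; the relation names follow section 4.5, while the function names and data shapes are hypothetical, not the prototype's actual implementation.

```python
from itertools import combinations, product

# Scores from the prototype: 3 for ContainsOrDuring,
# 1 for HaveOverlap, 0 for BeforeOrAfter.
RELATION_SCORES = {"ContainsOrDuring": 3, "HaveOverlap": 1, "BeforeOrAfter": 0}

def query_score(query):
    """Score of a query: the sum of the scores of its pairwise relations.
    `query` maps a keyword pair to a relation name."""
    return sum(RELATION_SCORES[rel] for rel in query.values())

def relaxation_order(keywords):
    """All assignments of a scored relation to each keyword pair, ordered
    from the tightest query (all ContainsOrDuring) to the loosest,
    i.e. in descending score order."""
    pairs = list(combinations(keywords, 2))
    queries = [dict(zip(pairs, combo))
               for combo in product(RELATION_SCORES, repeat=len(pairs))]
    queries.sort(key=query_score, reverse=True)
    return queries

def rate(query, goodness):
    """Formula (4.6): rate = ScoreOfQuery * GoodnessOfScene."""
    return query_score(query) * goodness
```

For three keywords this enumerates 27 candidate queries; the tightest has score 9, and the three score-7 queries correspond to Query 2 through Query 4 in figure 4.8.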

Figure 4.8: Process of query relaxation

Figure 4.9: Calculate rate for each result

Table 4.1: Scenes and their rates
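Before moving to the prototype, the relation probabilities of section 4.4 can be sketched numerically. The sketch below assumes the vague starting and terminal points follow normal distributions (the prototype builds the distributions from past intervals; Gaussians are an assumption here), which gives P(T1 < T2 + e) in closed form; the six Fundamental Binary Relations then follow the product formulas of section 4.4.

```python
import math

def p_before(mu1, s1, mu2, s2, e=0.0):
    """P(T1 < T2 + e) for independent normal time points
    T1 ~ N(mu1, s1^2) and T2 ~ N(mu2, s2^2):
    T1 - T2 is N(mu1 - mu2, s1^2 + s2^2), so this is a normal CDF."""
    sigma = math.sqrt(s1 * s1 + s2 * s2)
    z = (mu2 + e - mu1) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def fundamental_relations(i1, i2):
    """Six relation probabilities for intervals given as
    ((mu_start, sigma_start), (mu_end, sigma_end))."""
    (s1, e1), (s2, e2) = i1, i2
    P = lambda a, b: p_before(a[0], a[1], b[0], b[1])  # P(a < b)
    return {
        "Before":       P(s1, s2) * P(e1, s2),
        "Overlaps":     P(s1, s2) * (1 - P(e1, s2)) * P(e1, e2),
        "Contains":     P(s1, s2) * (1 - P(e1, s2)) * (1 - P(e1, e2)),
        "After":        (1 - P(s1, s2)) * (1 - P(s1, e2)),
        "OverlappedBy": (1 - P(s1, s2)) * P(s1, e2) * (1 - P(e1, e2)),
        "During":       (1 - P(s1, s2)) * P(s1, e2) * P(e1, e2),
    }
```

Because the product form treats the point comparisons as independent, the six values sum only approximately to 1; the error-tolerant relations of section 4.5 can reuse `p_before` with a nonzero `e`.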

Chapter 5 Experiment and Evaluation

In this chapter, we describe a prototype system. After giving an overview of the system in section 5.1, we describe the details of the system. In section 5.6, we evaluate the effectiveness of the system.

5.1 System Overview

Figure 5.1 represents an overview of the system. The system can be divided into two parts: the annotation part and the search part.

The annotation part can be divided into 3 steps:

1. Create a thesaurus
2. Create tag information
3. Annotate the movie

At first, we created a thesaurus. There are thesauri we can get from the web, but we need a domain-specific thesaurus; in this prototype, we created a thesaurus for soccer video. Secondly, we created tag information. Tags are used in annotating the movie. The information of a tag consists of a keyword and a tag type. The keyword is expressed by specifying a node of the thesaurus, and the tag information contains additional information depending on the tag type. Tag types and the additional information are described in section 5.3. Finally, we annotated the movie by using the tag information. We simulated a situation in which we have to annotate the movie in real time.

The search part can be divided into 3 steps:

1. Input a query
2. Generate a query graph
3. Match the query graph against the metadata

At first, the user inputs a query by lining up keywords. Secondly, the system generates a query graph from the keywords; the user can also input a query graph directly. After the query graph is input, the system matches it against the metadata and shows the extracted scenes to the user in order of the goodness of the scene. The environment of implementation is as shown below.

Figure 5.1: Overview of System

Development software: Visual C#.NET
OS: Windows XP SP1
CPU: Pentium 4, 2.53 GHz
Memory: 512 MB

5.2 Creating the Thesaurus

We created a thesaurus for soccer video. A captured image taken while we were creating the thesaurus is shown in figure 5.2. The created thesaurus has a tree structure: each node has synonyms, and each edge represents a hypernym/hyponym relation. In this process, we made 37 nodes for common soccer keywords and 16 nodes for player names and team names. We made player and team nodes for the Japan team only. The number of synonyms of each node is from 1 to 5.

Figure 5.2: Captured image in the process of making the thesaurus

5.3 Creating Tag Information

Figure 5.3 is a captured image taken while we were creating tag information. Tag information is used to simplify the process of annotation: in annotation, we describe only the ID of the tag information at each time point, and in the search process, the search engine looks up the tag information by ID. Tag information consists of the following elements:

ID
keyword
tag type
additional data

Figure 5.3: Captured image in the process of making tag information

The keyword is expressed by a node of the thesaurus, so the keyword has to be in the thesaurus. There are the following tag types:

Vague Interval
Vague Modifier
Starting Point
Terminal Point
Separator

If the tag type is Vague Interval, the tag is used to describe an event interval as a Vague Interval. The tag information contains the probability distributions of the starting point and the terminal point, stored as additional data. We created the probability distributions from past intervals in other soccer videos.

If the tag type is Vague Modifier, the tag is used to describe a modifying keyword as a Vague Modifier. The tag information contains, as additional data, the distribution of the interval of the event which is modified by the modifier.

If the tag type is Separator, the tag is used to separate the video; for example, it is used to describe cut timing. The interval of the keyword of this tag information is the section enclosed by Separator tags. The tag information contains an accuracy as additional data. In the prototype, the interval of the event Attack is described by a Separator tag.

The tags whose types are Starting Point and Terminal Point are used to describe the starting point and the terminal point of an interval separately. The tag information contains an accuracy as additional data.

5.4 Annotating the Movie

We annotated the movie by using the created tag information. Figure 5.4 is a captured image taken while we were annotating the movie. In the prototype, we described the second half of Japan vs. Belgium in World Cup 2002. The length of the annotated segment is about 45 minutes. This process consists of the following steps. At first, tags are assigned to keyboard keys; for example, the Pass tag is assigned to the key P, and the Goal tag to the key G. Then we play the movie and describe tags by pushing the corresponding keyboard key when we recognize an event. To simulate the situation in which we have to annotate in real time, once we started playing the movie we did not stop it. But we annotated the movie in two passes, because it is hard to describe enough tags at once. We assigned different tags to each pass, on the assumption that the movie is annotated in real time by two annotators.
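The tag information and the keyboard-driven annotation above might be modeled as follows. This is a minimal sketch: the field names and the key map are illustrative assumptions, not the prototype's actual C# types.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TagInfo:
    """One entry of tag information (section 5.3). `additional` holds
    type-specific data, e.g. start/end distributions for a Vague
    Interval tag or an accuracy value for a Separator tag."""
    tag_id: int
    keyword: str   # must name a node of the thesaurus
    tag_type: str  # "VagueInterval", "VagueModifier", "StartingPoint",
                   # "TerminalPoint", or "Separator"
    additional: Optional[dict] = None

def annotate(key_map, key_events):
    """Turn timed key presses into (time, tag ID) annotations.
    `key_map` maps a keyboard key to a TagInfo (e.g. 'P' -> Pass);
    unmapped keys are ignored."""
    annotations = []
    for t, key in key_events:
        tag = key_map.get(key)
        if tag is not None:
            annotations.append((t, tag.tag_id))
    return annotations
```

During annotation only the tag ID and the time point are recorded, which is what keeps the per-event cost low; the distributions attached to each tag are consulted later, at search time.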
In the first pass, we described events. The tags used whose type is Vague Interval are Pass, Long Pass, Side Change, Clear, Cross, and Foul. The tags used whose type is Vague Modifier are Goal, Free Kick, Corner Kick, Off Side, Throw In, Goal Kick, Yellow Card, and Red Card. The tag used whose type is Separator is Attack. In this experiment we did not use tags whose types are Starting Point and Terminal Point.

Figure 5.4: Captured image in annotation

In the second pass, we described player names: Suzuki, Yanagisawa, Nakata, and so on. We described the players of the Japan team only; if an annotator knows both teams well, describing both teams by one annotator is thought to be possible. We described 450 tags in the first pass and 208 tags in the second pass. The second pass was harder than the first because of the difficulty of recognizing players.

5.5 Search Results

We evaluate the effectiveness of the search algorithm by using the metadata created in section 5.4. A captured image of the search process is shown in figure 5.5. In this figure, the user enters the query graph directly: the user makes nodes by entering keywords and sets relations between the nodes. The results are shown in a list view, and a scene selected by the user is played.

Figure 5.5: Captured image of the search process

At first, we searched for goal scenes. The entered query is shown in figure 5.6 and the result in table 5.1. The query graph for searching goal scenes has a node whose keyword is Shot and a modifier of Goal. As a result of the search, 5 scenes were extracted. There are 4 goal scenes in the video, so the recall ratio was 100% and the precision ratio was 80%. One scene was not a goal scene: the annotator described a goal because the ball was in the goal, but the goal was cancelled by the referee because of a foul.

Figure 5.6: Query 1

Table 5.1: Result 1 (hits: 5, recall ratio: 4/4, precision ratio: 4/5)

Secondly, we searched for cross scenes which lead to a shot. The entered query is shown in figure 5.7 and the result in table 5.2. To search for a cross scene which leads to a shot, we set the condition that the relation between the cross interval and the shot interval is Meets. The query graph has two nodes, cross and shot, and one edge; the edge expresses the Meets relation, with the error set to 2 seconds. As a result of the search, 5 scenes were extracted. The precision ratio is 60%. Two unsuccessful scenes were extracted: in one scene the annotator mistook a shot scene for a cross scene (the rank of this scene is 3rd), and in the other scene the gap between the cross and the shot was too wide (the rank of this scene is 5th).

Figure 5.7: Query 2

Table 5.2: Result 2 (hits: 5, recall ratio: 3/3, precision ratio: 3/5, misses in annotation: 1, misses in retrieval: 1)

Finally, we searched for Inamoto's pass scenes. The entered query is shown in figure 5.8. The number of extracted scenes is 62, among which 10 scenes have a goodness higher than 0.4. The result for the top 10 scenes is shown in table 5.3: 2 scenes are unsuccessful because of misses in annotation and 2 scenes are unsuccessful because of misses in retrieval. The scenes ranked lower than 11th contain many overlapping scenes.

Figure 5.8: Query 3

Table 5.3: Result 3 (hits: 10, precision ratio: 6/10, misses in annotation: 2, misses in retrieval: 2)

5.6 Evaluation

As for the cost of annotation, we nearly reached our goal. We made the metadata in two passes. In the first pass, we described 450 tags in 45 minutes; these tags mainly represent occurrences of events. The tagging pace is about 6 seconds per tag, with margin left to describe rare events. In the second pass, we described 208 tags in 45 minutes; these tags mainly represent the names of the Japan team's players. The tagging pace is about 13 seconds per tag, but this pass was harder than the first because of the difficulty of recognizing players. Describing both teams by one annotator is thought to be difficult, but one reason for the difficulty is the small window size of the video player, so if the window is larger or the annotation is done in the stadium, the difficulty of describing player names is thought to be reduced. In each pass we did not stop or rewind the movie, so it is thought that two annotators can make metadata of practical use in real time.

As for the accuracy of annotation, there are some mistakes. Among the scenes extracted by retrieval, roughly half of the wrong scenes come from misses in annotation. Misunderstanding a scene is more serious than a deviation in the timing at which a tag is described. We often recognized a misunderstanding immediately, so an operation which erases the adjacent tag is needed.

As for the accuracy of retrieval, there is room for improvement. Roughly half of the wrong scenes come from misses in the search algorithm. The problems in the results are the following:

In some scenes the Vague Modifier did not work well.
Among the extracted scenes with lower ranks, some scenes overlapped.

The calculated relations between Vague Shot Intervals were reasonable, but in some scenes the Vague Modifier did not modify the intervals well. The cause is thought to be that the membership function of the Vague Modifier was determined subjectively; the membership function needs to be determined using some sort of statistical method. Overlapping scenes have to be combined if their ranks are near each other; if their ranks differ greatly, the lower-ranked scenes should be eliminated from the extracted result.

Chapter 6 Discussion

In this chapter, we discuss methods to reduce the annotation cost. For this purpose we consider using image recognition and speech recognition. In addition, we discuss an application which will become more important when the technique of real-time annotation grows popular.

6.1 Multi-modal Annotation

In the prototype system, we annotate the movie by using a keyboard and attach an event tag at the time the event is recognized. But this is not the only way to attach tags; the cost of annotation can be reduced by using other methods. Typical methods and their natures are shown below.

Table 6.1: Description methods and their natures
Description Method | Reliability of Timing | Reliability of Meaning
Button and Keyboard | middle | high
Image Recognition | high | low
Speech Recognition | low | middle

By using image recognition, we can recognize some information automatically. For example, the boundary of a cut can be recognized precisely. In addition, captions can be recognized automatically, which helps the understanding of the situation. By using audio recognition, we can estimate the importance of a scene from the sound volume of the audience or the announcer, and we can recognize the content of what the announcer says. There are various annotation techniques which use multi-modal methods [12][13].

In the technique we propose, using audio recognition will be effective. In our method, we annotate the movie by describing the timing at which an event is recognized. In sports video, announcers often say a keyword when they recognize an event, so a part of the annotation process can be substituted by recognizing the announcer's voice. When we use only the announcer's voice, we can search for important scenes, because the announcer often explains a scene if it is important. But there is a gap between the timing at which the announcer speaks and the timing at which the announcer recognizes the event, so the Vague Shot Interval has to be stretched to absorb the gap.

6.2 Query Generation from News Articles

With the growth of available digital video content, the demand for video summarization is increasing. There are various techniques to summarize video content by using metadata. In Informedia [11], a video-skimming technique is achieved by using TF-IDF (term frequency / inverse document frequency) weighting. In this technique, if a certain keyword appears frequently in a segment and rarely in other segments, the keyword is considered to represent the segment. After getting the representative keywords of the segments, the intervals in which the announcer talks about a keyword are found by using image recognition and speech recognition, and then the skimmed video is generated by extracting the important segments of the video. These methods are successful in summarizing video so that an overview can be understood in a shorter time, but the summarized video is inferior in scene organization to a summary produced by hand. Owing to this, we discuss a summarizing method using news articles in this section.

We often learn the result of a sports game from a news article. Once we know the result, we rarely watch the whole content even if it is recorded in our recorder. Incidentally, a sports article is a good summary produced by a writer. So if a system can specify the scenes corresponding to the parts of a news article, playing the extracted scenes along the article will make a good summarized video. The structure of a common news article is shown in figure 6.1.

In the overview part, the overview of the article is described. In sports news about a match, this part consists of the result, location, date, and so on of the match. The video scenes corresponding to the overview part should form an overview of the match.
So the corresponding scenes are extracted in order of level of importance; for example, the scenes of goals, shots, red cards, and so on.

In the body text part, the details of important scenes are described.

Figure 6.1: Common structure of a news article

The body text part has the structure shown in figure 6.2. A paragraph often has the following natures:

In a paragraph, the order of the sentences matches the order of the scenes in the video.
In a paragraph, the sentences have a common element.

By using the keywords in a sentence and these natures of the paragraph, the scene corresponding to a sentence is extracted. A query used for specifying the scene corresponding to a sentence can be generated by the following process:

1. Extract keywords which express an event from the sentence.
2. Extract scenes using the query which is the set of keywords extracted in step 1.
3. If too many scenes are extracted, narrow them using the common element of the paragraph. The thesaurus (described in section 4.3) is used to extract the common element.
4. Select the combination of scenes which meets the order and in which the goodness of each scene is as high as possible. The order of the scenes corresponding to the sentences in the video has to match the order of the sentences in the paragraph.
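Step 4 above — choosing one scene per sentence so that the scene order matches the sentence order while the total goodness is maximal — can be sketched with a small dynamic program. The data shapes (a list of (time, goodness) candidates per sentence) are illustrative assumptions, not structures from the prototype.

```python
def select_scenes(candidates):
    """Pick one scene per sentence so scene times are strictly increasing
    and total goodness is maximal. `candidates[k]` is a list of
    (time, goodness) pairs for sentence k. Returns the chosen candidate
    index per sentence, or None if no order-consistent choice exists."""
    prev = {}  # candidate index in sentence k-1 -> (total goodness, picks)
    for k, cands in enumerate(candidates):
        cur = {}
        for i, (t, g) in enumerate(cands):
            if k == 0:
                cur[i] = (g, [i])
            else:
                best = None
                for j, (score, picks) in prev.items():
                    if candidates[k - 1][j][0] < t:  # keep temporal order
                        if best is None or score > best[0]:
                            best = (score, picks)
                if best is not None:
                    cur[i] = (best[0] + g, best[1] + [i])
        prev = cur
        if not prev:
            return None  # no combination respects the sentence order
    score, picks = max(prev.values(), key=lambda v: v[0])
    return picks
```

For example, with two sentences whose best candidates happen to appear in the right temporal order, the program keeps them; a candidate that would reverse the order is skipped even if its goodness is higher.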


MetaSMIL : A Description Language for Dynamic Integration of Multimedia Content Master Thesis MetaSMIL : A Description Language for Dynamic Integration of Multimedia Content Supervisor Professor Katsumi TANAKA Department of Social Informatics Graduate School of Informatics Kyoto University

More information

Relaxed Consistency models and software distributed memory. Computer Architecture Textbook pp.79-83

Relaxed Consistency models and software distributed memory. Computer Architecture Textbook pp.79-83 Relaxed Consistency models and software distributed memory Computer Architecture Textbook pp.79-83 What is the consistency model? Coherence vs. Consistency (again) Coherence and consistency are complementary:

More information

Yamaha Steinberg USB Driver V for Windows Release Notes

Yamaha Steinberg USB Driver V for Windows Release Notes Yamaha Steinberg USB Driver V1.10.4 for Windows Release Notes Contents System Requirements for Software Main Revisions and Enhancements Legacy Updates System Requirements for Software - Note that the system

More information

Vehicle Calibration Techniques Established and Substantiated for Motorcycles

Vehicle Calibration Techniques Established and Substantiated for Motorcycles Technical paper Vehicle Calibration Techniques Established and Substantiated for Motorcycles モータサイクルに特化した車両適合手法の確立と実証 Satoru KANNO *1 Koichi TSUNOKAWA *1 Takashi SUDA *1 菅野寛角川浩一須田玄 モータサイクル向け ECU は, 搭載性をよくするため小型化が求められ,

More information

~ ソフトウエア認証への取り組みと課題 ~

~ ソフトウエア認証への取り組みと課題 ~ 第 1 回航空機装備品認証技術オープンフォーラム ~ ソフトウエア認証への取り組みと課題 ~ 2019 年 3 月 14 日 The information in this document is the property of Sumitomo Precision Products Co.,LTD.(SPP) and may not be duplicated, or disclosed to any

More information

Agilent. IO Libraries Suite 16.3/16.2 簡易取扱説明書. [ IO Libraries Suite 最新版 ]

Agilent. IO Libraries Suite 16.3/16.2 簡易取扱説明書. [ IO Libraries Suite 最新版 ] Agilent IO Libraries Suite 16.3/16.2 簡易取扱説明書 この簡易取扱説明書は Agilent IO Libraries Suite 16.3 / 16.2 ( 以後 IO Lib. ) の簡易説明書です 詳細につきましては各 Help や下記の弊社 web をご参照ください [ IO Libraries Suite 最新版 ] http://www.agilent.com/find/iolib

More information

UB-U01III/U02III/U03II User s Manual

UB-U01III/U02III/U03II User s Manual English UB-U01III/U02III/U03II User s Manual Standards and Approvals Copyright 2003 by Seiko Epson Corporation Printed in China The following standards are applied only to the boards that are so labeled.

More information

Studies of Large-Scale Data Visualization: EXTRAWING and Visual Data Mining

Studies of Large-Scale Data Visualization: EXTRAWING and Visual Data Mining Chapter 3 Visualization Studies of Large-Scale Data Visualization: EXTRAWING and Visual Data Mining Project Representative Fumiaki Araki Earth Simulator Center, Japan Agency for Marine-Earth Science and

More information

JASCO-HPLC Operating Manual. (Analytical HPLC)

JASCO-HPLC Operating Manual. (Analytical HPLC) JASCO-HPLC Operating Manual (Analytical HPLC) Index A) Turning on Equipment and Starting ChromNav... 3 B) For Manual Measurement... 6 (1) Making Control Method... 7 (2) Preparation for Measurement... 9

More information

Lecture 4 Branch & cut algorithm

Lecture 4 Branch & cut algorithm Lecture 4 Branch & cut algorithm 1.Basic of branch & bound 2.Branch & bound algorithm 3.Implicit enumeration method 4.B&B for mixed integer program 5.Cutting plane method 6.Branch & cut algorithm Slide

More information

Rechargeable LED Work Light

Rechargeable LED Work Light Rechargeable LED Work Light 充電式 LED 作業灯 Model:SWL-150R1 Using LED:LG innotek SMD, HI-POWER(150mA 15 position) Color Temperature:5,700 kelvin Using Battery:LG chemical Li-ion Battery(2,600mA 1set) Brightness

More information

Computer Programming I (Advanced)

Computer Programming I (Advanced) Computer Programming I (Advanced) 7 th week Kazumasa Yamamoto Dept. Comp. Sci. & Eng. Computer Programming I (Adv.) 7th week 1 Exercise of last week 1. Sorting by bubble sort Compare the bubble sort with

More information

Zabbix ログ解析方法. 2018/2/14 サイバートラスト株式会社 Linux/OSS 事業部技術統括部花島タケシ. Copyright Cybertrust Japan Co., Ltd. All rights reserved.

Zabbix ログ解析方法. 2018/2/14 サイバートラスト株式会社 Linux/OSS 事業部技術統括部花島タケシ. Copyright Cybertrust Japan Co., Ltd. All rights reserved. Zabbix ログ解析方法 2018/2/14 サイバートラスト株式会社 Linux/OSS 事業部技術統括部花島タケシ Zabbix ログ解析方法 サイバートラスト株式会社 Linux/OSS 事業部技術統括部花島タケシ 2 自己紹介 MIRACLE ZBXサポート担当 Zabbixソースコード調査 ドキュメント作成 ( 当社ブログも執筆 ) ときどき新規機能追加もしたりします 4.0 へ向けての機能紹介等

More information

Snoop cache. AMANO, Hideharu, Keio University Textbook pp.40-60

Snoop cache. AMANO, Hideharu, Keio University Textbook pp.40-60 cache AMANO, Hideharu, Keio University hunga@am.ics.keio.ac.jp Textbook pp.40-60 memory A small high speed memory for storing frequently accessed data/instructions. Essential for recent microprocessors.

More information

WD/CD/DIS/FDIS stage

WD/CD/DIS/FDIS stage ISO #### All rights reserved ISO TC ###/SC ##/WG # Secretariat: XXXX テンプレート中 解説に相当する部分の和訳を黄色ボックスにて示します 一般財団法人日本規格協会 Title (Introductory element Main element Part #: Part title) WD/CD/DIS/FDIS stage Warning

More information

PRODUCT DESCRIPTIONS AND METRICS

PRODUCT DESCRIPTIONS AND METRICS PRODUCT DESCRIPTIONS AND METRICS 1. Multiple-User Access. 1.1 If On-Premise Software licensed on a per-user basis is installed on a Computer accessible by more than one User, then the total number of Users

More information

サンプル. NI TestStand TM I: Introduction Course Manual

サンプル. NI TestStand TM I: Introduction Course Manual NI TestStand TM I: Introduction Course Manual Course Software Version 4.1 February 2009 Edition Part Number 372771A-01 NI TestStand I: Introduction Course Manual Copyright 2009 National Instruments Corporation.

More information

BMW Head Up Display (HUD) Teardown BMW ヘッドアップディスプレイティアダウン

BMW Head Up Display (HUD) Teardown BMW ヘッドアップディスプレイティアダウン BMW Head Up Display (HUD) Teardown BMW ヘッドアップディスプレイティアダウン FEATURES: 製品の特徴 Head Up Display Socionext MB88F333BA 3.15-inch WVGA IPS LCD Techno Solutions Manufacturer Nippon Seiki Model Number 6230-9 367

More information

振込依頼書記入要領 Entry Guide for Direct Deposit Request Form

振込依頼書記入要領 Entry Guide for Direct Deposit Request Form 振込依頼書記入要領 Entry Guide for Direct Deposit Request Form 国立大学法人名古屋大学 National University Corporation Nagoya University この振込依頼書は 本学が貴社にお支払いする代金をご指定の金融機関口座に銀行振込するためのものです 新規に登録される場合 あるいは内容を一部変更される場合はその都度 この申出書を提出していただくよう

More information

Preparing Information Design-Oriented. Posters. easy to. easy to. See! Understand! easy to. Convey!

Preparing Information Design-Oriented. Posters. easy to. easy to. See! Understand! easy to. Convey! Preparing Information Design-Oriented Posters easy to Convey! easy to See! easy to Understand! Introduction What is the purpose of a presentation? It is to convey accurately what you want to convey to

More information

Quick Install Guide. Adaptec SCSI RAID 2120S Controller

Quick Install Guide. Adaptec SCSI RAID 2120S Controller Quick Install Guide Adaptec SCSI RAID 2120S Controller The Adaptec SCSI Raid (ASR) 2120S Controller is supported on the HP Workstation xw series with Microsoft Windows 2000 and Windows XP operating systems

More information

IP Network Technology

IP Network Technology IP Network Technology IP Internet Procol QoS Quality of Service RPR Resilient Packet Ring FLASHWAVE2700 Abstract The Internet procol (IP) has made it possible to drastically broaden the bandwidth of networks

More information

マルチビットアップセット耐性及びシングルビットアップセット耐性を備えた

マルチビットアップセット耐性及びシングルビットアップセット耐性を備えた マルチビットアップセット耐性及びシングルビットアップセット耐性を備えた 8T SRAM セルレイアウト 吉本秀輔神戸大学博士課程 1 年 E-mail : yoshipy@cs28.cs.kobe-u.ac.jp 1 Outline 背景 提案 8T SRAM cell layout ソフトエラーシミュレーション結果 消費電力比較結果 まとめ 2 Outline 背景 提案 8T SRAM cell

More information

Certificate of Accreditation

Certificate of Accreditation PERRY JOHNSON LABORATORY ACCREDITATION, INC. Certificate of Accreditation Perry Johnson Laboratory Accreditation, Inc. has assessed the Laboratory of: NOISE LABORATORY CO., LTD. Customer Service Center

More information

フラクタル 1 ( ジュリア集合 ) 解説 : ジュリア集合 ( 自己平方フラクタル ) 入力パラメータの例 ( 小さな数値の変化で模様が大きく変化します. Ar や Ai の数値を少しずつ変化させて描画する. ) プログラムコード. 2010, AGU, M.

フラクタル 1 ( ジュリア集合 ) 解説 : ジュリア集合 ( 自己平方フラクタル ) 入力パラメータの例 ( 小さな数値の変化で模様が大きく変化します. Ar や Ai の数値を少しずつ変化させて描画する. ) プログラムコード. 2010, AGU, M. フラクタル 1 ( ジュリア集合 ) PictureBox 1 TextBox 1 TextBox 2 解説 : ジュリア集合 ( 自己平方フラクタル ) TextBox 3 複素平面 (= PictureBox1 ) 上の点 ( に対して, x, y) 初期値 ( 複素数 ) z x iy を決める. 0 k 1 z k 1 f ( z) z 2 k a 写像 ( 複素関数 ) (a : 複素定数

More information

DürrConnect the clever connection. The quick connection with the Click

DürrConnect the clever connection. The quick connection with the Click DürrConnect the clever connection The quick connection with the Click 90d Elbow Securing clip 45d Elbow O-rings Double plug Plug D36 Double socket Double socket with valve カチッ と接続早い 確実 便利 新しく開発された接続システム

More information

Motion Path Searches for Maritime Robots

Motion Path Searches for Maritime Robots Journal of National Fisheries University 59 ⑷ 245-251(2011) Motion Path Searches for Maritime Robots Eiji Morimoto 1, Makoto Nakamura 1, Dai Yamanishi 1 and Eiki Osaki 2 Abstract : A method based on genetic

More information

UML. A Model Trasformation Environment for Embedded Control Software Design with Simulink Models and UML Models

UML. A Model Trasformation Environment for Embedded Control Software Design with Simulink Models and UML Models Simulink UML 1,a) 1, 1 1 1,b) 1,c) 2012 3 5, 2012 9 10 Simulink UML 2 MATLAB/Simulink Simulink UML Simulink UML UML UML Simulink Simulink MATLAB/Simulink UML A Model Trasformation Environment for Embedded

More information

Centralized (Indirect) switching networks. Computer Architecture AMANO, Hideharu

Centralized (Indirect) switching networks. Computer Architecture AMANO, Hideharu Centralized (Indirect) switching networks Computer Architecture AMANO, Hideharu Textbook pp.92~130 Centralized interconnection networks Symmetric: MIN (Multistage Interconnection Networks) Each node is

More information

Nonfinancial Reporting Track:03 Providing non-financial information to reporters, analysts and asset managers; the EDINET Case

Nonfinancial Reporting Track:03 Providing non-financial information to reporters, analysts and asset managers; the EDINET Case Nonfinancial Reporting Track:03 Providing non-financial information to reporters, analysts and asset managers; the EDINET Case Nomura Research Institute, Ltd. Data Analyst Chie Mitsui Contents for today

More information

URL IO オブジェクト指向プログラミング特論 2018 年度只木進一 : 工学系研究科

URL IO オブジェクト指向プログラミング特論 2018 年度只木進一 : 工学系研究科 URL IO オブジェクト指向プログラミング特論 2018 年度只木進一 : 工学系研究科 2 ネットワークへのアクセス ネットワークへの接続 TCP:Socket 利用 UDP:DatagramSocket 利用 URL へのアクセス 3 application String Object reader / writer char stream byte device 4 階層化された IO の利点

More information

楽天株式会社楽天技術研究所 Autumn The Seasar Foundation and the others all rights reserved.

楽天株式会社楽天技術研究所 Autumn The Seasar Foundation and the others all rights reserved. 2008 Autumn Seasar の中の中 楽天株式会社楽天技術研究所 西澤無我 1 Seasar の中の中 Javassist (Java バイトコード変換器 ) の説明 S2Container ( 特に S2AOP) は静的に 動的にコンポーネントを拡張可能 実行時に Java バイトコードを生成 編集 Javassist を利用 component interceptor1 interceptor2

More information

PGroonga 2. Make PostgreSQL rich full text search system backend!

PGroonga 2. Make PostgreSQL rich full text search system backend! PGroonga 2 Make PostgreSQL rich full text search system backend! Kouhei Sutou ClearCode Inc. PGConf.ASIA 2017 2017-12-05 Targets 対象者 Want to implement full text search with PostgreSQL PostgreSQL で全文検索したい

More information

サーブレットと Android との連携. Generated by Foxit PDF Creator Foxit Software For evaluation only.

サーブレットと Android との連携. Generated by Foxit PDF Creator Foxit Software   For evaluation only. サーブレットと Android との連携 Android からサーブレットへの GET リクエスト Android からサーブレットにリクエストを出すには スレッドを使わなければなりません 枠組みは以下のようになります Android 側 * Hello JSON package jp.ac.neec.kmt.is04.takata; import の記述 public class HelloJsonActivity

More information

IPv6 関連 WG の状況 (6man, v6ops, softwire)

IPv6 関連 WG の状況 (6man, v6ops, softwire) 第 88 回 IETF 報告会 IPv6 関連 WG の状況 (6man, v6ops, softwire) 2013 年 12 月 20 日 NECアクセステクニカ株式会社川島正伸 kawashimam vx.jp.nec.com 目次 自己紹介 6man WG v6ops WG softwire WG 最後に 2001:db8:café::2 自己紹介 氏名 : 川島正伸 (Nickname:

More information

Quick Installation Manual

Quick Installation Manual Safety Light Curtain F3SG- RA Series http://www.ia.omron.com/f3sg-r Quick Installation Manual Document Title Safty Light Curtain /RE Series User's Manual Cat. No. Z352-E1 OMRON Corporation 2014-2018 All

More information

IRS16: 4 byte ASN. Version: 1.0 Date: April 22, 2008 Cisco Systems 2008 Cisco, Inc. All rights reserved. Cisco Systems Japan

IRS16: 4 byte ASN. Version: 1.0 Date: April 22, 2008 Cisco Systems 2008 Cisco, Inc. All rights reserved. Cisco Systems Japan IRS16: 4 byte ASN Version: 1.0 Date: April 22, 2008 Cisco Systems hkanemat@cisco.com 1 目次 4 byte ASN の対応状況 運用での変更点 2 4 byte ASN の対応状況 3 4 byte ASN の対応状況 IOS XR 3.4 IOS: 12.0S 12.2SR 12.2SB 12.2SX 12.5T

More information

PNRGOV/Ver11.1/ 旅客氏名表予約情報報告 (PNR01)

PNRGOV/Ver11.1/ 旅客氏名表予約情報報告 (PNR01) UNB: INTERCHANGE HEADER 項番については業務仕様書の入出力項目表の項番を参照 TAG COMP NAME PADIS EDIFACT NACCS 項番 項目名 / 設定値 特記事項 UNB INTERCHANGE HEADER C 1 M 1 S001 SYNTAX IDENTIFIER M 1 M 1 0001 Syntax identifier M a4 1 M a4 1

More information

Saki is a Japanese high school student who/ has just started to study/ in the US.//

Saki is a Japanese high school student who/ has just started to study/ in the US.// L3 gr8 or great? Part 1 Saki is a Japanese high school student who/ has just started to study/ in the US.// Recently,/ she received/ the following cellphone e-mail.// It says that/ her friends are going

More information

Web Billing User Guide

Web Billing User Guide Web Billing User Guide ( Smart Phone ) This guide describes how to use Web Billing service provided by NTT Finance. Your display on the screen may vary depending on the payment methods you have. Contents

More information

https://login.microsoftonline.com/ /oauth2 Protected API Your Client App Your Client App Your Client App Microsoft Account v2.0 endpoint Unified AuthN/Z endpoint Outlook.com (https://login.microsoftonline.com/common/oauth2/v2.0)

More information

Verify99. Axis Systems

Verify99. Axis Systems Axis Systems Axis Systems Mission Axis Systems, Inc. is a technology leader in the logic design verification market. Founded in 1996, the company offers breakthrough technologies and high-speed simulation

More information

Industrial Solar Power PoE Switch

Industrial Solar Power PoE Switch Industrial Solar Power Switch の技術や太陽光発電システムの業界をリードする統合ネットワークインストールの需要の増加のためにどこでも 惑星の 産業用太陽光発電の スイッチは現在 理想的なソリューションを提供します ゼロ炭素放出源アトス - 太陽の光 は パルス幅変調 (PWM) 充電コントローラが効果的にソーラーパネルが充電中にバッテリーバンクと同じ電圧で動作するように強制的に組み込まれています

More information

NI TB Introduction. Conventions INSTALLATION INSTRUCTIONS Wire Terminal Block for the NI PXI-2529

NI TB Introduction. Conventions INSTALLATION INSTRUCTIONS Wire Terminal Block for the NI PXI-2529 INSTALLATION INSTRUCTIONS NI TB-2636 4 32 2-Wire Terminal Block for the NI PXI-2529 Introduction This document describes how to install and connect signals to the National Instruments TB-2636 terminal

More information

Manufacturing that s good for people and good for the environment

Manufacturing that s good for people and good for the environment Manufacturing that s good for people and good for the environment ハアーモニーがめざすもの それは人に自然にやさしいモノづくり We re committed to manufacturing that s good for people and good for the environment. 経営理念 経営指針 Co rp o

More information

Infrared Data Association Trademark and Brand Guidelines

Infrared Data Association Trademark and Brand Guidelines Infrared Data Association Trademark and Brand Guidelines March 2011 1 Infrared Data Association s (IrDA) Philosophy on Trademarks and Brands IrDA's trademarks, certification marks and brands ( Marks )

More information

PCIe SSD PACC EP P3700 Intel Solid-State Drive Data Center Tool

PCIe SSD PACC EP P3700 Intel Solid-State Drive Data Center Tool Installation Guide - 日本語 PCIe SSD PACC EP P3700 Intel Solid-State Drive Data Center Tool Software Version 2.x 2015 年 4 月 富士通株式会社 1 著作権および商標 Copyright 2015 FUJITSU LIMITED 使用されているハードウェア名とソフトウェア名は 各メーカーの商標です

More information

TOOLS for MR V1.7.7 for Mac Release Notes

TOOLS for MR V1.7.7 for Mac Release Notes TOOLS for MR V1.7.7 for Mac Release Notes TOOLS for MR V1.7.7 for Mac consists of the following programs. - V1.7.4 - V1.6.4 - V1.7.5 Contents System Requirements for Software Main Revisions and Enhancements

More information

Denso Lexus GS250 TCU Teardown

Denso Lexus GS250 TCU Teardown Denso Lexus GS250 TCU Teardown FEATURES: Telematics Control Unit CDMA Cypress MB91F577BH, 32-bit, 80MHz Techno Solutions Manufacturer Denso Model Number 86741-53045 Carrier - Assembled in unknown Retail

More information

Synchronization with shared memory. AMANO, Hideharu Textbook pp.60-68

Synchronization with shared memory. AMANO, Hideharu Textbook pp.60-68 Synchronization with shared memory AMANO, Hideharu Textbook pp.60-68 Fork-join: Starting and finishing parallel processes fork Usually, these processes (threads) can share variables fork join Fork/Join

More information

Oracle Cloud で実現する DevOps

Oracle Cloud で実現する DevOps Java で創るクラウド時代のエンタープライズ開発 ~ マイクロサービス DevOps と Java の最新動向 ~ Oracle Cloud で実現する DevOps 2016 年 12 月 2 日 日本オラクル株式会社クラウド テクノロジー事業統括本部 Fusion Middleware 事業本部シニアセールスコンサルタント関屋信彦 以下の事項は 弊社の一般的な製品の方向性に関する概要を説明するものです

More information

MathWorks Products and Prices Japan September 2016

MathWorks Products and Prices Japan September 2016 MATLAB Product Family page 1 of 5 MATLAB 1 295,000 1,180,000 Parallel Computing Toolbox 145,000 580,000 Math and Optimization Symbolic Math Toolbox 145,000 580,000 Partial Differential Equation Toolbox

More information

Kazunari Okada( 岡田一成 ) Sr. Technical Marketing Manager ISO Vibration Analyst (CAT II) National Instruments Corporation Japan

Kazunari Okada( 岡田一成 ) Sr. Technical Marketing Manager ISO Vibration Analyst (CAT II) National Instruments Corporation Japan June 1 st 2018 in Tokyo The second time IIC & IVI joint workshop IIC & IVI sharing use case information Condition Monitoring and Predictive Maintenance Testbed Kazunari Okada( 岡田一成 ) Sr. Technical Marketing

More information

The Secret Life of Components

The Secret Life of Components Practical WebObjects Chapter 6 (Page 159-185): The Secret Life of Components WR WR at Csus4.net http://www.csus4.net/wr/ 目次詳細 The Hypertext Transfer Protocol Spying on HTTP The Request-Response Loop, Briefly

More information

YAS530B MS-3E Magnetic Field Sensor Type 3E

YAS530B MS-3E Magnetic Field Sensor Type 3E MS-3E Magnetic Field Sensor Type 3E Overview The is a 3-axis geomagnetic sensor device with the following circuits integrated on one chip: a buffer amplifier, an AD converter, a clock generator circuit,

More information

Operational Precaution

Operational Precaution User s Manual FieldMate R3.04 Operational Precaution Contents PART A PART B Operational Precaution: English version 和文版の操作注意事項が記載されております : Japanese version 17th Edition 1 PART A This document supplements

More information

さまざまなニーズにお応えできるラインナップ!

さまざまなニーズにお応えできるラインナップ! さまざまなニーズにお応えできるラインナップ! The Line-up that meets your various needs! 永い歴史を有する エレクトロニクス通信の沖電気 の豊富な経験と技術をベースに 一貫して船内通信分野に 多種多用途の電話機 自動 共電 バッテリーレス式及び各種の防爆型 を供給し続けております Based on a wide spectrum of experience

More information

GEO Grid の概要とその IT 技術の現状 将来について

GEO Grid の概要とその IT 技術の現状 将来について の概要とその IT 技術の現状 将来について 小島功情報技術研究部門産業技術総合研究所 kojima@ni.aist.go.jp 一部資料作成協力 & 引用 : 山本直孝 Steven Lynden, 岩男弘毅 山本浩万 児玉信介 松岡昌志 ( 順不同 産総研情報技術研究部門 ) 概要 とは 産総研地質部門 [ 旧地質調査所 ] と情報部門との分野連携プロジェクト 地質や地理 衛星データ処理など分野の研究者にサービスを提供しつつ

More information

船舶保安システムのセルフチェックリスト. Record No. Name of Ship 船名 flag 国籍 Name of Company 会社名 Date 点検日 Place 場所 Checked by 担当者名. MS-SELF-CHK-SHIP-j (2012.

船舶保安システムのセルフチェックリスト. Record No. Name of Ship 船名 flag 国籍 Name of Company 会社名 Date 点検日 Place 場所 Checked by 担当者名. MS-SELF-CHK-SHIP-j (2012. 船舶保安システムのセルフチェックリスト Record No. Name of Ship 船名 flag 国籍 Name of Company 会社名 Date 点検日 Place 場所 Checked by 担当者名 Is a copy of valid DOC and a valid SMC placed onboard the ship? 有効な DOC の写し及び SMC は備え置かれているか

More information

NUC and its Applications

NUC and its Applications 11 NUC とその応用 手のひらの上のCore NUC and its Applications The Core in the palm of your hands ネットワーク情報学部 石原秀男 School of Network and Information Hideo Ishihara Keywords: NUC, Core, Arduino, Kinect Abstract Next

More information

Private Sub 終了 XToolStripMenuItem_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles 終了 XToolStripMenuItem.

Private Sub 終了 XToolStripMenuItem_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles 終了 XToolStripMenuItem. Imports MySql.Data.MySqlClient Imports System.IO Public Class FrmMst Private Sub 終了 XToolStripMenuItem_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles 終了 XToolStripMenuItem.Click

More information

Ritsu-Mate Registration Manual (for Undergraduate Programs)

Ritsu-Mate Registration Manual (for Undergraduate Programs) Ritsu-Mate Registration Manual (for Undergraduate Programs) - Ritsumeikan University has introduced Ritsu-Mate, an online application / enrollment system that can be used to complete a part of the undergraduate

More information

OPTICAL TALK SET 光トークセット MODEL 415/430/450/450XL INSTRUCTION MANUAL 取扱説明書

OPTICAL TALK SET 光トークセット MODEL 415/430/450/450XL INSTRUCTION MANUAL 取扱説明書 OPTICAL TALK SET 光トークセット MODEL 415/430/450/450XL INSTRUCTION MANUAL 取扱説明書 HR1028-13J-11/110906 ** TABLE OF CONTENTS ** 1. GENERAL INFORMATION 1 2. SPECIFICATIONS 1 3. OPERATING INSTRUCTIONS 2 3-1. Descriptions

More information

Chapter 1 Videos Lesson 61 Thrillers are scary ~Reading~

Chapter 1 Videos Lesson 61 Thrillers are scary ~Reading~ LESSON GOAL: Can read about movies. 映画に関する文章を読めるようになろう Choose the word to match the underlined word. 下線の単語から考えて どんな映画かを言いましょう 1. The (thriller movie, sports video) I watched yesterday was scary. 2. My

More information

TS-M2M-0008v onem2m 技術仕様書サービス層 API 仕様 (CoAP 用 )

TS-M2M-0008v onem2m 技術仕様書サービス層 API 仕様 (CoAP 用 ) TS-M2M-0008v1.0.1 onem2m 技術仕様書サービス層 API 仕様 (CoAP 用 ) onem2m Technical Specification CoAP Protocol Binding 2015 年 3 月 16 日制定 一般社団法人情報通信技術委員会 THE TELECOMMUNICATION TECHNOLOGY COMMITTEE 本書は 一般社団法人情報通信技術委員会が著作権を保有しています

More information

Clinical Data Acquisition Standards Harmonization (CDASH)

Clinical Data Acquisition Standards Harmonization (CDASH) Revision History Clinical Data Acquisition Standards Harmonization (CDASH) Prepared by: CDISC CDASH Core and Domain Teams Document Number Release Date Updates Initial release Note: See 7.7 Representations

More information

HPE Insight Control サーバープロビジョニング 7.6 ビルドプランリファレンスガイド

HPE Insight Control サーバープロビジョニング 7.6 ビルドプランリファレンスガイド HPE Insight Control サーバープロビジョニング 7.6 ビルドプランリファレンスガイド HPE 部品番号 : 5200-2448 発行 : 2016 年 11 月第 1 版 1 Copyright 2012, 2016 Hewlett Packard Enterprise Development LP 本書の内容は 将来予告なしに変更されることがあります Hewlett Packard

More information

A note on quaternion, applied to attitude estimation Minoru HIGASHIGUCHI 四元数について衛星姿勢の逐次推定への応用例

A note on quaternion, applied to attitude estimation Minoru HIGASHIGUCHI 四元数について衛星姿勢の逐次推定への応用例 A note on quaternion, applied to attitude estimation Minoru HGASHGUCH Astract : We use the direction cosine matrix (DCM) descriing geometrical relation in 3 dimensional space. For computational purpose

More information

ポータブルメディア機器向けプロセッサ フォトフレーム向けメディアプロセッサ

ポータブルメディア機器向けプロセッサ フォトフレーム向けメディアプロセッサ Android ポータブルメディア機器向けプロセッサ フォトフレーム向けメディアプロセッサ Dual プロセッサ構造による 低消費電力 & 高性能 の両立 データ処理 (ARM11) と 音声 動画像の圧縮 伸張処理 (DSP あるいは専用 HW) が同時実行できる 各々の処理の切り替え等によるオーバーヘッドがない 動作周波数を低く抑えられるため 低消費電力プロセスの適用が可能となる 高度な低消費電力化の仕組み

More information

SteelEye Protection Suite for Linux

SteelEye Protection Suite for Linux SteelEye Protection Suite for Linux Postfix Recovery Kit v8.2.1 管理ガイド 2014 年 3 月 SteelEye and LifeKeeper are registered trademarks. Adaptec is a trademark of Adaptec, Inc. Adobe Acrobat is a registered

More information

2018 年 2 月 16 日 インテル株式会社

2018 年 2 月 16 日 インテル株式会社 2018 年 2 月 16 日 インテル株式会社 HPC 事業開発マネージャ 矢澤克巳 HPC Trends Exascale Computing Artificial Intelligence Workflow Convergence 2 Intel Scalable System Framework for HPC Modeling & Simulation HPC Data Analytics

More information

Appliance Edition 入門ガイド

Appliance Edition 入門ガイド [Type the document title] 1.0 2013 年 7 月 3725-69903-001/A Polycom RealPresence Capture Server - Appliance Edition 入門ガイド Polycom Document Title 1 商標情報 POLYCOM および Polycom 社製品に関連する製品名およびマークは Polycom, Inc.

More information

FUJITSU Software SystemcastWizard Professional V5.1 L30 ユーザーズガイド B7FW Z0(00) 2014 年 8 月

FUJITSU Software SystemcastWizard Professional V5.1 L30 ユーザーズガイド B7FW Z0(00) 2014 年 8 月 FUJITSU Software SystemcastWizard Professional V5.1 L30 ユーザーズガイド B7FW-0261-01Z0(00) 2014 年 8 月 本書をお読みになる前に 本製品のハイセイフティ用途での使用について 本製品は 一般事務用 パーソナル用 家庭用 通常の産業用等の一般的用途を想定して設計 製造されているものであり 原子力施設における核反応制御 航空機自動飛行制御

More information

BraindumpStudy. BraindumpStudy Exam Dumps, High Pass Rate!

BraindumpStudy.   BraindumpStudy Exam Dumps, High Pass Rate! BraindumpStudy http://www.braindumpstudy.com BraindumpStudy Exam Dumps, High Pass Rate! Exam : 200-120 日本語 (JPN) Title : CCNA Cisco Certified Network Associate CCNA (803) Vendor : Cisco Version : DEMO

More information

Project to Transfer Mission-critical System of Banks to Private Cloud

Project to Transfer Mission-critical System of Banks to Private Cloud Project to Transfer Mission-critical System of Banks to Private Cloud 倉田明憲 あらまし BCP Business Continuity Planning BCP DBMS Database Management System Abstract In post-2011-earthquake Japan, there is a heightened

More information

和英対訳版. IEC Standard Template のユーザーガイド 備考 : 英語原文掲載 URL ( 一財 ) 日本規格協会

和英対訳版. IEC Standard Template のユーザーガイド 備考 : 英語原文掲載 URL ( 一財 ) 日本規格協会 IEC Standard Template のユーザーガイド 和英対訳版 ( 一財 ) 日本規格協会 備考 : 英語原文掲載 URL http://www.iec.ch/standardsdev/resources/draftingpublications/layout_formatting/iec_t emplate/ IEC:2014 3 CONTENTS 1 Introduction... 5

More information