Intention Deduction by Demonstration for Tool-Handling Tasks


National Chiao Tung University
Institute of Computer Science and Engineering
Doctoral Dissertation

Intention Deduction by Demonstration for Tool-Handling Tasks

Student: Hoa-Yu Chan
Advisors: Prof. Hsin-Chia Fu, Prof. Kuu-young Young

June 2012

Intention Deduction by Demonstration for Tool-Handling Tasks

Student: Hoa-Yu Chan
Advisors: Prof. Hsin-Chia Fu, Prof. Kuu-young Young

A Dissertation Submitted to the Institute of Computer Science and Engineering, College of Computer Science, National Chiao Tung University, in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy in Computer Science

June 2012
Hsinchu, Taiwan, Republic of China

Intention Deduction by Demonstration for Tool-Handling Tasks

Student: Hoa-Yu Chan    Advisors: Prof. Hsin-Chia Fu, Prof. Kuu-young Young
Institute of Computer Science and Engineering, National Chiao Tung University

摘要 (Abstract)

In the near future, more and more robots can be expected to move from factory settings into home environments. It is impractical to require a dedicated program for every household task a robot may face. Researchers have therefore proposed the concept of letting the robot learn from demonstration, which relieves the user of the burden of analysis and programming. However, many methods that follow this concept must restrict the user's motions or task plan in order to deduce the user's intention. To avoid such restrictions, we propose a new approach that lets the robot deduce the demonstrator's intention from the trajectories recorded during tool-handling task execution. Tool-handling tasks are common in home environments, but analyzing the demonstrator's intention in them is not easy. Our approach is based on the concept of cross-validation: it locates the trajectory segments that correspond to delicate, skillful maneuvers and uses dynamic programming to search for the most probable intention. The proposed approach does not require predefined motions or constraints on motion speed, and it allows the operating order to be altered and redundant operations to be added during demonstration. In the experiments, the proposed approach is tested on three tasks, pouring, coffee-making, and fruit jam spreading, with the positions and numbers of the task objects varied across demonstrations to test their influence. We further analyze the influence of the task parameters to study the applicability of the approach. The experimental results show that our approach not only deduces the user's intention in these three kinds of tasks, but also lets the user demonstrate in a natural and efficient manner without the above limitations.

Intention Deduction by Demonstration for Tool-Handling Tasks

Student: Hoa-Yu Chan    Advisors: Prof. Hsin-Chia Fu, Prof. Kuu-young Young
Institute of Computer Science and Engineering, National Chiao Tung University

Abstract

In the near future, as more robots enter home-like environments, programming them for task execution becomes very demanding, if not infeasible. The concept of learning from demonstration has thus been introduced, which may remove the load of detailed analysis and programming from the user. However, many methods that follow this concept limit the motions of the user or the task plan in order to deduce the user's intentions. To avoid these limitations, in this dissertation we propose a novel approach for the robot to deduce the intention of the demonstrator from the trajectories recorded during task execution. We focus on the tool-handling task, which is common in the home environment but complicated to analyze. We apply the concept of cross-validation to locate the portions of the trajectory that correspond to delicate and skillful maneuvering, and apply an algorithm based on dynamic programming to search for the most probable intention. The proposed approach does not predefine motions or put constraints on motion speed, while allowing the event order to be altered and redundant operations to be present during demonstration. In the experiments, we apply the proposed approach to three different kinds of tasks, the pouring, coffee-making, and fruit jam tasks, with the number of objects and their locations varied during demonstrations. To further investigate its scalability and generality, we also perform intensive analysis on the parameters involved in the tasks. The results show that our approach can not only deduce the intentions of the user in the three kinds of tasks but also let the demonstrations be executed in a natural and effective manner without those limitations.

Acknowledgment

First, I thank my advisors, Prof. Kuu-young Young and Prof. Hsin-Chia Fu, for their guidance, support, and assistance over the ten years of my doctoral studies. I also thank the members of my oral defense committee, Prof. 林正中, Prof. 單智君, Prof. 蔡清池, Prof. 蘇木春, Prof. 許秋婷, and Dr. 林銘瑤, for their valuable comments during the defense, which made this dissertation more complete. I also thank my classmates 一元, 木政, 修任, and 一哲, and my fellow students in the Human and Machine Laboratory (人與機器實驗室), for their help with the research and experiments, without which this work could not have been completed. Finally, I thank my family, my parents, my grandfather and grandmother, my sister, and my pets, the cats, the turtle, and the dog 乖乖.

Table of contents

中文摘要
Abstract
Acknowledgment
Table of contents
List of Tables
List of Figures
Nomenclature
1 Introduction
  1.1 Background
  1.2 Contributions of the Dissertation
  1.3 Organization of the Dissertation
2 Preliminaries and Surveys
  2.1 Motion Recognition Type
  2.2 Motion Matching Type
  2.3 Motion Synchronization Type
3 Proposed Approach
  3.1 Intention Deduction
    3.1.1 Intention in Tool-handling Task
    3.1.2 Derivation of Optimal Motion Index
    3.1.3 Implementation of Intention Deduction
  3.2 Similar Function
    3.2.1 Reasoning for Similar Function
    3.2.2 Implementation of Similar Function
  3.3 Experimental Design for Extensibility and Robustness
4 Experiments
  4.1 System Implementation
  4.2 Pouring Task
    4.2.1 Robustness in Pouring Tasks
  4.3 Coffee-making Task
    4.3.1 Robustness in Coffee-making Tasks
  4.4 Fruit Jam Task
  4.5 Discussion
    4.5.1 Comparison with Previous Approaches
    4.5.2 Application Scenario for Breakfast Preparation
5 Conclusions
  5.1 Conclusions
  5.2 Future Research
References

List of Tables

4.1 Specifications of the tracking system Polhemus FASTRAK
4.2 Specifications of the Mitsubishi RV-2A
4.3 Denavit-Hartenberg parameters of the Mitsubishi RV-2A
4.4 Average position error between the trajectories of the human operator and the generated trajectories in the pouring tasks
4.5 Number of successes in the presence of redundant operations
4.6 The average path length and demonstration time in the pouring tasks
4.7 Average position error between the trajectories of the human operator and the generated trajectories in the coffee-making tasks

List of Figures

3.1 Conceptual diagram of the proposed approach
3.2 A pouring task: (a) the setting of the vessels, (b) pouring vessel A to vessel B, and (c) pouring vessel A to vessel C
3.3 Process for motion generation
3.4 Examples for (a) motion cutting and (b) motion generation based on the pouring task shown in Fig. 3.2
3.5 Process for MI evaluation: (a) standard MI evaluation and (b) MI evaluation with strategy
3.6 Process for MI generation
3.7 Process for optimal MI derivation
3.8 The validation of MI: (a) with MI and (b) with MI and operating order
4.1 System implementation
4.2 Tracking system Polhemus FASTRAK
4.3 Long range transmitter
4.4 Mitsubishi RV-2A type six-axis robot arm
4.5 Experimental setups for the pouring task: (a) human demonstration and (b) robot execution
4.6 The derived intentions for the eight demonstrations of the pouring task
4.7 Experimental results for the pouring 4 cups task executed by both the human operator and robot manipulator under new environmental states: (a) trajectory in X-Y plane, (b) variation of the height, and (c) variation of the tilt angle of vessel A
4.8 The errors of the generated trajectories with the different numbers of the training demonstrations in the pouring 2-4 cups tasks
4.9 The calculating time with the different numbers of demonstrations in the six pouring tasks
4.10 Experimental setups for the coffee-making task: (a) human demonstration and (b) robot execution
4.11 The derived intentions for the eight demonstrations of the coffee-making task
4.12 The derived intentions for the eight demonstrations of the coffee-making task
4.13 Experimental results for the coffee-making task 1 executed by both the human operator and robot manipulator under new environmental states: (a) trajectory in X-Y plane, (b) variation of the height, and (c) variation of the tilt angle of spoon A
4.14 Experimental results for the coffee-making task 2 executed by both the human operator and robot manipulator under new environmental states: (a) trajectory in X-Y plane, (b) variation of the height, and (c) variation of the tilt angle of spoon A
4.15 The errors of the generated trajectories with the different numbers of the training demonstrations in the coffee-making tasks
4.16 The calculating time with the different numbers of demonstrations in the two coffee-making tasks
4.17 Experimental setups for the fruit jam task: (a) human demonstration and (b) robot execution
4.18 The derived intentions for the ten demonstrations of the fruit jam task
4.19 Experimental results for the fruit jam task executed by both the human operator and robot manipulator under new environmental states: (a) trajectory in X-Y plane, (b) variation of the height, and (c) variation of the tilt angle of knife A

Nomenclature

D      delicate motion
E      the difference between a generated motion and a validating motion
G      generated motion
i      index of training motions or generated motions
j, k   index of delicate motions or move motions
M      move motion
MI     an index linking to a set of delicate motions
Q      a motion consisting of delicate motions and move motions
T      training motion
V      validating motion

Chapter 1 Introduction

1.1 Background

Due to the progress in service robotics, more robots are entering home or office environments. It can be expected that many challenging problems will emerge as they deal with these highly uncertain and varying environments, such as path planning and manipulation [1,2]. One issue of interest is how to teach the robot to perform daily tasks effectively. To relieve the human operator from detailed task analysis and program coding, researchers have proposed letting the robot learn how to execute a task by itself from observing human demonstrations [3]. However, the motions in a human demonstration are difficult to analyze, because the motions related to home or office environments may not be predefinable for robot learning. Besides, at the trajectory level, it is difficult for a human to repeat a motion with exactly the same speed and trajectory over multiple demonstrations, and the operated objects may not be fixed in the environment. Moreover, at the task level, some tasks can be accomplished with a varying operating order or with redundant motions. These pose challenges to learning from demonstration. Many researchers have proposed approaches to tackle these challenges [4-7]. Among them, Calinon proposed an approach using Gaussian Mixture Regression (GMR) and Lagrange optimization to extract the unchanged motions from multiple demonstrations [8,9]; this approach demands that the order of the operating motions remain the same during demonstrations. Dautenhahn and Nehaniv proposed an approach for the robot to learn from human demonstration by imitation [10], referred to as the correspondence problem, and later the team developed a system that can learn 2D arranging tasks [11,12]. Dillmann proposed a hierarchical structure for the robot to deal with complex tasks while the motion order can be changed [13], and later they went on to analyze human motion features for high-level tasks [14]. With both symbolic and trajectory levels of skill representation, Ogawara proposed a method that determines the essential motions from the

possible motions [15]. However, these methods may need to predefine human motions, limit the human operator to certain motion types or speeds, or limit the operating order. The robot may thus not be able to learn the task automatically, or the human operator may not be able to demonstrate the task naturally and efficiently. In contrast, infants of 7-8 months old can learn motions from regular patterns, although the underlying learning process is still unclear [16]. This motivates us to propose a method that can learn motions from demonstration without these shortcomings.

1.2 Contributions of the Dissertation

In this dissertation, we propose an approach for the robot to learn the human intention from her/his demonstration. To allow the human operator more natural and efficient manipulation during demonstration, the proposed approach (a) does not need to predefine motions, (b) does not constrain the operator to perform the task with a certain motion speed or motion type, (c) allows the order of the events to be altered, and (d) allows some redundant operations. For the motions of human manipulation, Dillmann classifies them into three different types by their goals: transport operations for moving objects, device handling for changing the internal states of objects, and tool handling for using a tool to interact with objects. We focus on the tool-handling task, which is common in daily life [14]. The motions of this kind of task can be performed continuously without stopping, because the tool can be operated on multiple objects sequentially without leaving the hand, and such tasks are complicated to analyze. When no motions are predefined, the method of cross-validation is suitable for deciding the unknown parameters of a learning system. Based on the concept of cross-validation, but with some modification, we propose an approach to identify the portions of the trajectory corresponding to the delicate and dexterous maneuvers of the demonstrator, referred to as motion features. These motion features, in some sense, exhibit the human skill in executing a certain task. The challenge is how to find the correct intentions, among all possible ones, that lead to the demonstrated trajectories. To tackle the complexity,

we apply the method of dynamic programming for the search. For demonstration, experiments based on three different kinds of tasks, the pouring, coffee-making, and fruit jam tasks, are performed. During the experiments, the locations of the operated objects and the operating sequence may vary, and the motion features derived from the demonstrated trajectories are used for task execution under different experimental settings. To further demonstrate the scalability and generality of the proposed approach, we perform intensive analysis on the parameters involved in the tasks, such as the numbers of objects and demonstrations, among others.

1.3 Organization of the Dissertation

This dissertation is divided into five chapters. In Chapter 1, the background of robot learning from demonstration and the contributions are introduced to explain why we propose such a new approach for intention deduction. In Chapter 2, previous approaches are introduced and discussed. In Chapter 3, we describe the proposed approach, which is based on the concept of cross-validation to deduce the intention from demonstrations. In Chapter 4, the experimental results are reported for performance evaluation, and a discussion of the proposed approach is given. Finally, in Chapter 5, the conclusions are presented and suggestions are stated for future research.

Chapter 2 Preliminaries and Surveys

Many researchers have proposed letting the robot learn how to execute the tool-handling task by itself from observing human demonstrations [4-7]. The tool-handling task, which involves the interactions between tools and objects [17], is the focus of this dissertation. The resultant trajectory from task execution can be mainly divided into two types of motions: delicate motion (D) for delicate maneuvering and move motion (M) between the delicate motions [15]. The delicate motion is of much interest, since it serves to achieve the goal; by contrast, the move motion is not very critical. The delicate and move motions are executed alternately. As some delicate motions are noncontact motions that do not touch the operated objects, such as pouring motions, the operated objects may not be determined directly. Besides, some tasks may still be accomplished when the order of executing the delicate motions is changed or when redundant delicate motions are added during demonstration. Therefore, in order to tackle the uncertainty in demonstration, multiple demonstrations are collected. The robot may need to recognize the delicate motions and analyze their ordering from the multiple demonstrations. Current approaches can basically be classified into three types based on whether they use pre-known human motions or certain assumptions [4].

2.1 Motion Recognition Type

The approaches of the first type are based on pattern recognition methods that classify motions before analyzing the ordering of the delicate motions [18-27]. In this type, the user needs to predefine a set of operating motions before the robot observes the human demonstrations. The user collects all possible operating motions, labels the collected motions into different classes, and uses them as training data. For example, in [18], the researchers define a set of human motion data, which consists of Power Grasp, Precision Grasp, Pour, Hand-over, Release,

OK-sign, Garbage, Start, and End, to classify human motions. Usually, the Hidden Markov Model (HMM) is used to learn and recognize the operating motions, because HMMs can process stochastic human motion data in space and time. After motion recognition, in order to analyze the relation between operating motions, the relation may be transformed into a symbolic sequence problem, such as the longest common subsequence (LCS) problem, to find the common relation in the multiple demonstrations. Finally, the recognized motions are used to generate an operating trajectory based on the motion relations for robot execution. The concept of this type is not complicated, and such a system can be implemented easily. However, it may not be able to handle all possible motions of daily life, because the operating motions need to be collected before recognition.

2.2 Motion Matching Type

The approaches of the second type segment human motions and match the common motions without predefining motions [14,15,28-35]. First, the motions of the demonstrations are segmented based on some motion features, such as differences in operating speed [34,36] or cyclic motion [37], without predefining motions. The approaches then search for the common motions of all demonstrations by motion matching, and they analyze the essential motions and the ordering of the motions. In [15], the problem of searching for the common motions between the demonstrations is transformed into a multiple sequence alignment problem to handle redundant motions. In [14], the researchers measure the similarities between all possible permutations of the motions of the subtasks to analyze the hierarchical structure of the common motions of the task. Finally, the common motions are taken as the essential motions of the task and used to generate a new operating trajectory for execution. These approaches do not need to predefine possible motions, so this type can handle many known or unknown tasks of daily life. However, the motion hints used for segmentation, such as differences in operating speed or cyclic motion, limit human motions, because only hint-following motions can be segmented correctly.

2.3 Motion Synchronization Type

The approaches of the third type find common portions from synchronized operating trajectories without motion segmentation [8,9,38-55]. First, the operating trajectories of the multiple demonstrations are synchronized as signals to avoid segmentation. Usually, synchronization methods such as Dynamic Time Warping (DTW) [56] or Continuous Profile Models (CPM) [57,58] are used to synchronize multiple signals. After synchronization, the difference of each portion of the synchronized trajectories is measured to find the common operating motions from the multiple demonstrations. For example, the synchronized trajectories can be transformed into a probability distribution by using Gaussian Mixture Models (GMM) to estimate the expected operating trajectory [9]. Finally, the common operating trajectory is used to generate a new operating trajectory for robot execution in a new environment. The approaches of this type can learn tasks without predefining motions or requiring the human to follow motion hints. However, these synchronization methods cannot handle permuted motions, so the ordering of the motions must be the same during the multiple demonstrations. The order of operating motions is thus limited, and the human may not be able to demonstrate the task naturally and efficiently.
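As a concrete illustration of the synchronization step, the following is a minimal C sketch of the basic DTW alignment cost between two one-dimensional trajectory signals; the helper name dtw_cost is hypothetical and the code is not taken from the surveyed papers, which apply the same idea to multi-dimensional trajectories before extracting the common motions.

    #include <float.h>
    #include <stdlib.h>

    /* Alignment cost between signals x (length nx) and y (length ny).
     * D[i][j] holds the best cost of matching the first i samples of x
     * with the first j samples of y. */
    double dtw_cost(const double *x, int nx, const double *y, int ny)
    {
        double *D = malloc((size_t)(nx + 1) * (ny + 1) * sizeof *D);
        if (D == NULL) return -1.0;                      /* allocation failure */

        for (int i = 0; i <= nx; ++i)
            for (int j = 0; j <= ny; ++j)
                D[i * (ny + 1) + j] = (i == 0 && j == 0) ? 0.0 : DBL_MAX;

        for (int i = 1; i <= nx; ++i)
            for (int j = 1; j <= ny; ++j) {
                double d = (x[i - 1] - y[j - 1]) * (x[i - 1] - y[j - 1]);
                double best = D[(i - 1) * (ny + 1) + j];            /* advance in x    */
                if (D[i * (ny + 1) + j - 1] < best)
                    best = D[i * (ny + 1) + j - 1];                 /* advance in y    */
                if (D[(i - 1) * (ny + 1) + j - 1] < best)
                    best = D[(i - 1) * (ny + 1) + j - 1];           /* advance in both */
                D[i * (ny + 1) + j] = (best == DBL_MAX) ? DBL_MAX : d + best;
            }

        double cost = D[nx * (ny + 1) + ny];
        free(D);
        return cost;
    }

Note how the recursion always advances monotonically through both signals from their beginnings to their ends; this is precisely why standard DTW can absorb speed differences but cannot handle a permuted ordering of the delicate motions.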

Chapter 3 Proposed Approach

In this chapter, we describe the proposed approach, which takes the intention deduction problem to be that of locating the delicate motions in the demonstrated trajectory. In Sec. 3.1, we describe the process of intention deduction. In Sec. 3.2, we explain the similar function, which is used in intention deduction. And, in Sec. 3.3, we design a series of experiments for investigating its extensibility and robustness.

3.1 Intention Deduction

In this section, we first introduce what the intention is in the tool-handling task, and then formulate it. Then, we use the concept of validation to evaluate the motion index candidates. Finally, we use dynamic programming to search for the optimal motion index among all possible candidates.

3.1.1 Intention in Tool-handling Task

Fig. 3.1 shows the conceptual diagram of the proposed approach for intention deduction from demonstration. In Fig. 3.1, the robot first observes a series of human demonstrations and records the corresponding trajectories and environmental states. From these recorded motion data, the robot searches for the possible intentions that lead to the delicate motions. The derived intentions can then be used to generate new trajectories that respond to new environmental states. Let us take the pouring task shown in Fig. 3.2 as an example. In Fig. 3.2(a), three vessels A, B, and C are arbitrarily located on the table. And, in Figs. 3.2(b)-(c), the operator pours the content from vessel A to vessels B and C, respectively, and then places vessel A back on the table. During the demonstrations, the initial locations of the vessels may vary, and so does the pouring sequence. From the recorded trajectories and corresponding locations of the vessels (environmental states), the proposed approach identifies the intention of the operator, i.e., the portions of the trajectory that

Figure 3.1: Conceptual diagram of the proposed approach.

Figure 3.2: A pouring task: (a) the setting of the vessels, (b) pouring vessel A to vessel B, and (c) pouring vessel A to vessel C.

correspond to the two pouring actions (delicate motions). With the derived intention, the robot is then able to execute the pouring task with the vessels located at various locations and possibly altered pouring sequences.

Before the discussion on the process of intention deduction, we first describe how the motion can be generated under new environmental states when the human intention has already been derived. We start with the representation of the intention I. Assume that there are N delicate motions and S objects involved in a demonstrated task. Because the intention is closely related to the delicate motions of the maneuver, I is formulated as a set of delicate motions, D_n(t), associated with the corresponding objects Obj_s:

I = {D_1(t), D_2(t), ..., D_N(t); Obj_1, Obj_2, ..., Obj_S}    (3.1)

where D_n(t) stands for the part of the demonstrated trajectory for delicate motion n, and Obj_s for the position and orientation of object s. Note that, because an object may correspond to one, several, or no delicate motion, the number of delicate motions may not be equal to that of the objects. We then introduce the motion index (MI), which serves as an index linking to I. MI is formulated as an ordered set of triples, d_j = {n_j, l_j, s_j}, which provide the starting time n_j,

Figure 3.3: Process for motion generation.

end time l_j, and number of the operated object s_j for each of the delicate motions D:

MI = {d_1, d_2, ..., d_N}    (3.2)

where MI represents I. Fig. 3.3 shows the process for motion generation. According to MI, the motion cutting module locates the delicate motions D_j in the demonstrated motion in order. To respond to the new environmental state, the motion adjustment module moves these D_j to match the new locations of the objects, and they become D_{Gj}. Finally, the motion connection module uses the move motion M_{Gj} to smoothly connect every two D_{Gj}. As its accuracy is not that critical, M_{Gj} is generated using a cubic polynomial. With both D_{Gj} and M_{Gj}, we now have a feasible trajectory Q_G corresponding to the new environmental state:

Q_G = {M_{G1}, D_{G1}, M_{G2}, D_{G2}, ..., D_{GN}, M_{G(N+1)}}    (3.3)

Fig. 3.4(a) shows an example of motion cutting based on the pouring task shown in Fig. 3.2, and Fig. 3.4(b) one of motion generation. In Fig. 3.4(a), the demonstrated trajectory during task execution is projected on the X-Y plane, where the yellow and green rectangles indicate the locations of vessels B and C. The yellow and green trajectories are the delicate motions determined by the motion cutting module according to the given MI. In Fig. 3.4(b), the yellow and green rectangles indicate the locations of vessels B and C in the new environmental state. In response to these new locations of vessels B and C, the delicate motions identified in Fig. 3.4(a) are transformed into the yellow and green trajectories by the motion adjustment module. Finally, the three move motions, shown as the red trajectories, are utilized to smoothly connect the two delicate motions.
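The following is a minimal C sketch of how the motion index of Eqs. (3.2) and (3.5) and the cubic-polynomial move motion of the motion connection module could be represented; the names MIEntry, MotionIndex, and cubic_segment are hypothetical and the code is not taken from the dissertation's implementation.

    #include <stddef.h>

    typedef struct {
        int n;   /* starting time (sample index) of the delicate motion */
        int l;   /* end time (sample index) of the delicate motion      */
        int s;   /* number (label) of the operated object               */
    } MIEntry;                 /* d_j = {n_j, l_j, s_j} in Eq. (3.5)     */

    typedef struct {
        MIEntry *d;            /* ordered entries d_1 ... d_N            */
        size_t   N;            /* number of delicate motions             */
    } MotionIndex;             /* MI = {d_1, ..., d_N} in Eq. (3.2)      */

    /* One coordinate of a move motion: a cubic polynomial from value p0
     * to value p1 over duration T, with zero velocity at both ends. */
    static double cubic_segment(double p0, double p1, double t, double T)
    {
        double u = t / T;                                /* normalized time 0..1 */
        double blend = 3.0 * u * u - 2.0 * u * u * u;    /* smooth cubic blend   */
        return p0 + (p1 - p0) * blend;
    }

Each coordinate of a move motion can be generated by such a segment between the last pose of one adjusted delicate motion and the first pose of the next, which gives a smooth connection with zero boundary velocity.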

Figure 3.4: Examples for (a) motion cutting and (b) motion generation based on the pouring task shown in Fig. 3.2.

Figure 3.5: Process for MI evaluation: (a) standard MI evaluation and (b) MI evaluation with strategy.

3.1.2 Derivation of Optimal Motion Index

From the motion generation process discussed above, we can take the intention deduction process as that of finding the proper motion index MI. To find the optimal MI among all MI candidates, we first introduce the process for MI evaluation, shown in Fig. 3.5(a). This process evaluates the fitness of the MI candidates derived from the demonstrated motion, based on the reasoning that a proper MI should lead to a generated motion very similar to the human demonstrated motion, which includes all the delicate motions. In Fig. 3.5(a), from the demonstrated motions, we select one demonstrated motion as the validating motion and the rest as the training motions. We will discuss the selection of validating and training motions later. For an MI candidate derived from the validating motion, the motion generation module, described above, generates motions based on the training motions and the environmental state corresponding to the validating motion; the generated motions, with their lengths set to be equal to that of the validating motion, are then compared with the validating motion via the motion comparison module, yielding the differences between them (marked as errors). Because the operator may perform the demonstrations at different speeds and possibly with different orders for the events involved, the corresponding delicate motions are likely to have various sampling rates, or to appear in different portions of the demonstrated trajectories. To tackle this, our strategy is to let each of the delicate motions of the validating motion be compared with every portion of the training motion, with altered sampling rates, as shown in Fig. 3.5(b). Through this comparison process, the generated motion whose delicate motions lead to the minimum difference when compared with those of the validating motion is determined as the output and sent to the motion comparison module for the following comparison. As a high search complexity is expected, we adopt an approach analogous to dynamic time warping (DTW) in execution [56]. Details of this strategy will be explained in the next section. We go on with the process for MI generation, shown in Fig. 3.6.

Figure 3.6: Process for MI generation.

Figure 3.7: Process for optimal MI derivation.

In Fig. 3.6, among all the demonstrated motions, one demonstrated motion is first selected as the validating motion, denoted as Q_V, and the rest as the training motions, Q_T, for each sequence of the process. The process is repeated until each of the demonstrated motions has served as the validating motion once. In the next step, the MI generator locates all possible MI candidates from Q_V. Because the proposed approach does not constrain the human operator to perform the task with a certain motion speed or motion type, and also allows the order of the events to be altered during demonstration, there is in fact no a priori knowledge for the selection of MI. The criterion for MI generation is thus to let an MI candidate correspond to every portion of Q_V with a duration longer than 0.3 second, as a human cannot cognize an event until 0.3 second after it happens [59]. It can be expected that there will be a huge number of MI candidates. That is why we employ the method of dynamic programming for the search of the optimal MI. With the MI evaluation process in Fig. 3.5(b) and the MI generation process in Fig. 3.6, Fig. 3.7 shows the entire process for optimal MI derivation. For the outer dotted block in Fig. 3.7, the inputs are the demonstrated motions, and each

of them serves as the validating motion once. Via the MI generation process, MI candidates along with the validating and training motions are sent into the MI evaluation process to determine which MI candidate leads to the minimum error, identified as an optimal MI candidate. As each validating motion corresponds to one optimal MI candidate, the outputs of the outer dotted block are the optimal MI candidates for each of them. Finally, the optimal MI is determined to be the one with the minimum error among all optimal MI candidates.

3.1.3 Implementation of Intention Deduction

For the mathematical formulation of this optimal MI derivation process, we start with the description of MI for a given validating motion Q_V, denoted as MI_V:

MI_V = {d_{V1}, d_{V2}, ..., d_{VN}}    (3.4)

with

d_{Vj} = {n_{Vj}, l_{Vj}, s_{Vj}}    (3.5)

where d_{Vj} indexes the delicate motion D_{Vj}, with n_{Vj}, l_{Vj}, and s_{Vj} the starting time, end time, and number of the operated object. According to MI_V, Q_V can then be expressed as the combination of a series of delicate and move motions:

Q_V = {M_{V1}, D_{V1}, M_{V2}, D_{V2}, ..., D_{VN}, M_{V(N+1)}}    (3.6)

On the other hand, with the same MI_V, the generated motion Q_G^i for each training motion Q_T^i can be formulated as

Q_G^i = {M_{G1}^i, D_{G1}^i, M_{G2}^i, D_{G2}^i, ..., D_{GN}^i, M_{G(N+1)}^i}    (3.7)

where D_{Gj}^i and M_{Gj}^i are its delicate and move motions, respectively. D_{Gj}^i can be determined via the MI evaluation process above, in which the minimization between D_{Gj}^i and D_{Vj} is dealt with by a DTW-like method:

D_{Gj}^i = similar(Q_T^i, D_{Vj})    (3.8)

In the similar function, the training motion Q_T^i is transformed to match the environment of the validating motion according to the possible operated object s_{Vj},

and the generated delicate motion D_{Gj}^i is searched from the transformed motion so as to be as similar to D_{Vj} as possible. Therefore, the search can use a DTW-like method to minimize the difference between D_{Gj}^i and D_{Vj} [56]. Details of this method will be explained in the next section. After the generated delicate motions are obtained, M_{Gj}^i is determined by the function M_G, which utilizes a cubic polynomial to smoothly connect the two delicate motions D_{G(j-1)}^i and D_{Gj}^i:

M_{Gj}^i = M_G(D_{G(j-1)}^i, D_{Gj}^i)    (3.9)

To determine the optimal motion index MI_V*, Q_V will be compared with all Q_G generated according to every MI_V. Because we are looking for an MI_V* that may induce all the necessary delicate motions, MI_V* should not induce too much deviation between the delicate motions for Q_V and Q_G, and consequently between the move motions for them. By taking E_max as the maximum difference between the delicate and move motions for Q_V and those of the Q_G generated for all the training motions corresponding to some MI_V, we determine MI_V*, among all MI_V, to be the one that leads to the smallest E_max:

MI_V* = arg min_{MI_V} E_max    (3.10)

with

E_max = sum_{j=1}^{N} E_D(D_{Vj}) + sum_{j=1}^{N+1} E_M(D_{V(j-1)}, D_{Vj})    (3.11)

where

E_D(D_V) = max_i ||D_V - D_G^i||^2    (3.12)

E_M(D_{Va}, D_{Vb}) = max_i ||M_V(D_{Va}, D_{Vb}) - M_G(D_{Ga}^i, D_{Gb}^i)||^2    (3.13)

Here, E_D computes the difference between the respective delicate motions for Q_V and those Q_G, and E_M that for the move motions, with M_V a function that outputs the move motion part between two delicate motions of the validating motion, D_{Va} and D_{Vb}. Because each demonstrated motion serves as the validating motion once, the final optimal motion index MI* for all demonstrated motions will be further chosen as that MI_V*, among those for each Q_V, with the smallest corresponding

E_max, denoted as E*. As the length L_V of each Q_V may not be the same, E* needs to be normalized before the comparison. MI* is then formulated as

MI* = arg min_{MI_V*} E*/L_V    (3.14)

The search for MI* is of high complexity, as exhibited in Eqs. (3.10)-(3.14) above. As an attempt to enhance search efficiency, we employ the method of dynamic programming [60] and let the computation of E* in Eq. (3.14) be expressed in a recursive formulation:

E* = min_{d_{Vk}} [E_R(D_{Vk}) + E_M(D_{Vk}, D_{V(N+1)})]    (3.15)

with

E_R(D_{Vk}) = min_{d_{V(k-1)}} [E_R(D_{V(k-1)}) + E_M(D_{V(k-1)}, D_{Vk})] + E_D(D_{Vk})    (3.16)

where E_R(D_{Vk}) stands for the minimum difference between the motions from the first move motion to a given delicate motion; d_{Vk} and d_{V(k-1)}, described in Eq. (3.5), index the delicate motions D_{Vk} and D_{V(k-1)}; and 1 <= k <= N. Because the number of delicate motions is not known in advance, N and k are not specific numbers. Also note that the first move motion is generated between D_{V0} and D_{V1}, and the last one between D_{VN} and D_{V(N+1)}, with D_{V0} and D_{V(N+1)} taken as the first and last point of the trajectory, respectively. In Eq. (3.15), E* is derived as the minimum over all E_R(D_{Vk}), with E_R(D_{Vk}) computed recursively via Eq. (3.16). With Eqs. (3.15) and (3.16), dynamic programming can take advantage of the table generated for E_R(D_{Vk}) to simplify the computation in deriving E*. Based on the discussions above, the intention derivation algorithm is formulated in Algorithm 1. The time complexity of this optimal MI derivation process is related to the number (R) and length (L_V) of the demonstrated motions and the number (S) of objects involved in the task. Here, the lengths of the demonstrated motions are assumed to be close. In Eqs. (3.15) and (3.16), the generation of the table for E_R(D_{Vk}) takes up most of the time consumed. The table has O(L_V^2 * S) elements, and each element involves a complexity of the order of O(R * L_V^3 * S). During the entire process, the table needs to be generated R times. The final time complexity is thus of the order of O(R^2 * L_V^5 * S^2).
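The following is a minimal C sketch of the recursion in Eqs. (3.15)-(3.16); the names Cand and optimal_mi_error and the callback interface for E_D and E_M are hypothetical, and the evaluation of those costs through the similar function and the training motions is left abstract.

    #include <float.h>
    #include <stdlib.h>

    typedef struct { int n, l, s; } Cand;      /* candidate delicate motion d_Vk      */

    typedef double (*CostD)(const Cand *d);                  /* E_D(D_Vk), Eq. (3.12)  */
    typedef double (*CostM)(const Cand *a, const Cand *b);   /* E_M(., .), Eq. (3.13)  */

    /* Candidates c[0..nc-1] are assumed sorted by end time l, so the E_R value of
     * every possible predecessor is already available when a candidate is reached.
     * start and end stand for D_V0 and D_V(N+1), the first and last trajectory points. */
    double optimal_mi_error(const Cand *c, int nc, CostD ed, CostM em,
                            const Cand *start, const Cand *end)
    {
        double *ER = malloc((size_t)nc * sizeof *ER);
        if (ER == NULL) return DBL_MAX;
        double best = DBL_MAX;

        for (int k = 0; k < nc; ++k) {
            double in = em(start, &c[k]);         /* c[k] taken as the first delicate motion */
            for (int j = 0; j < k; ++j)           /* predecessors that end before c[k] starts */
                if (c[j].l < c[k].n) {
                    double v = ER[j] + em(&c[j], &c[k]);
                    if (v < in) in = v;
                }
            ER[k] = in + ed(&c[k]);               /* Eq. (3.16)                               */
            double out = ER[k] + em(&c[k], end);  /* close with the last move motion          */
            if (out < best) best = out;           /* Eq. (3.15)                               */
        }
        free(ER);
        return best;                              /* E*                                       */
    }

Sorting the candidates by their end time lets every predecessor's E_R value be reused directly, which is the table reuse referred to above.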

The divide-and-conquer method [60] may also be an alternative for solving Eq. (3.14). However, because our proposed approach takes every portion of the trajectory of the validation demonstration as a candidate for a possible delicate motion, it is not straightforward to divide the trajectory properly. Consequently, the search for the optimal solution may demand a large number of divisions, leading to a high computational load.

Algorithm 1  Find the intention of the task through R demonstrations
Input: the demonstrated trajectories Q_i (1 <= i <= R) for the R demonstrations
Output: the optimal MI*
1: for i = 1 to R do
2:   Select Q_i among the R recorded trajectories as the validating motion Q_V and the rest as the training demonstrations Q_T
3:   Apply the method of dynamic programming, based on Eq. (3.10), to determine the optimal MI_V* for Q_V
4: end for
5: Utilize Eq. (3.14) to determine the optimal MI* for the demonstrator among the MI_V* for the R validating motions
6: return MI*

3.2 Similar Function

In this section, the reason why the similar function is used is first explained, and the implementation of the similar function, which is based on the dynamic time warping method, is then discussed.

3.2.1 Reasoning for Similar Function

Before we discuss the implementation of the similar function, the reason why we use it in the validation is explained in the following. In the process of intention deduction, it is important to decide the validating data and the

Figure 3.8: The validation of MI: (a) with MI and (b) with MI and operating order.

training data, because the fitness of each possible MI candidate is evaluated by the validation. In Fig. 3.8(a), the fitness of each MI candidate can be evaluated by comparing the difference between the generated motion and the validating motion in the motion comparison module, and the MI candidate that leads to the minimum error is the optimal MI candidate. However, the operating speed of the generated motion and the ordering of its delicate motions may differ from those of the validating motion, because the motion generation module uses the MI candidate of the training motion to generate the motion. The motion comparison module may need to use DTW to deal with the differences in motion speed [56], but DTW cannot deal with a different ordering of delicate motions. Therefore, when DTW is used in the motion comparison module, we need to align the ordering of the delicate motions of the generated motion with that of the validating motion, as shown in Fig. 3.8(b). In Fig. 3.8(b), the motion speeds of the delicate motions are not included in the operating order, because DTW can handle them. With the concept of validation, the correct operating order and the optimal MI may be searched simultaneously by validating all possible pairs of operating order and MI candidate when a single training motion is input, as shown in Fig. 3.8(b). Moreover, the MI candidates of the training motion can be evaluated more accurately by inputting multiple validating motions with each operating order. However, the number of possible operating orders of each validating motion increases factorially with the number of the MI-assigned delicate motions. A large amount of computation time is needed to evaluate all operating orders for each

MI candidate. In contrast, when multiple training motions and a single validating motion are input, the search space is much larger, because each training motion has an individual MI and an individual operating order. Therefore, we need a method to tackle this search problem. It is known that the optimal MI candidate leads to the minimum difference, and that the difference between two similar motions is smaller than that between two randomly chosen motions. Therefore, a shortcut method is proposed. The MI candidate of the validating motion is input into the motion generation module without inputting the possible operating-order candidates for the multiple training motions. The motion cutting module of the motion generation module is modified to use the similar function to minimize the difference between the delicate motion of the generated motion and that of the validating motion, as shown in Fig. 3.5(b). Note that this shortcut method may not exactly match the concept of validation. We use the similar function to search, in each training motion, for the part of the motion that is as close as possible to the MI-assigned delicate motion of the validating motion, so the difference between the generated delicate motion and the MI-assigned delicate motion leads to the minimum error. Besides, we use a DTW-like method to resample the delicate motion of the generated motion in the calculation of the similar function, so the motion comparison module can calculate the difference without conducting DTW again.

3.2.2 Implementation of Similar Function

To calculate the similar function in Eq. (3.8), we use a DTW-like method, which uses the technique of dynamic programming to minimize the error between the delicate motions of the validating motion and the training motion. In the calculation of the similar function, first, the training motion Q_T^i is transformed to match the environment of the validating motion according to the MI-assigned operated object s_{Vj}. Then, the transformed motion is resampled to P_T^i for searching the minimum difference between the MI-assigned delicate motion of the validating motion and

some portion of the training motion. After that, the minimum error between the MI-assigned delicate motion of the validating motion and some portion of the training motion can be calculated as

E_similar = min_{1 <= ϱ <= 2L_T^i - 1} H(ϱ, l_{Vj} - n_{Vj} + 1)    (3.17)

with

H(ϱ, ς) = 0, if ς = 0;
H(ϱ, ς) = ||P_T^i(ϱ) - D_{Vj}(ς)||^2 + min_{ϱ-4 <= ι <= ϱ-1} H(ι, ς-1), if ϱ >= 1 and ς >= 1;
H(ϱ, ς) = ∞, otherwise.    (3.18)

where E_similar is the minimum error between the delicate motions D_{Vj} and D_{Gj}^i, H(ϱ, ς) is a recursive function that outputs the minimum error between D_{Vj}(1..ς) and P_T^i(u..ϱ) (u is determined automatically during the process of minimization), and the other symbols are explained in the following. The calculation of the similar function differs from DTW in two parts. First, in order to measure the difference independently of the time length of the different generated motions, the generated motion is mapped and resampled to the time of the validating motion, and the difference is measured based on the time of the validating demonstration in the calculation of H(ϱ, ς). Because we want to search for the minimum error under a dynamic speed whose range is from 1/2 to 2 times the original speed of the training motion, the number of samples of P_T^i is 2L_T^i - 1, which corresponds to twice the sampling rate of Q_T^i, and the four answers of the subproblems (H(ϱ-4, ς-1) to H(ϱ-1, ς-1)) are used to solve the problem H(ϱ, ς). Second, in order to check each possible start point of P_T^i, the number of cases in which H(ϱ, ς) is set to zero is larger than in DTW, because each sample point of P_T^i is a possible start point of the similar motion. Therefore, H(ϱ, ς) is a function that outputs the minimum error between D_{Vj}(1..ς) and P_T^i(u..ϱ), and the result of the similar function can be obtained by tracking the selections made in the calculation of the minimum error E_similar. Moreover, some elements of the table H(ϱ, ς) can be shared for the different inputs of D_{Vj} to reduce the calculation time, so the time complexity of the calculation of the similar function over all possible inputs is O(R * L_V^3 * S) when the range of the dynamic speed is fixed and the demonstration times of the validating motion and the training motions are assumed to be close.
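The following is a minimal C sketch of the recursion in Eqs. (3.17)-(3.18) as reconstructed above; the names similar_error and dist are hypothetical, with dist standing for the squared distance between two 7-D samples (3-D position plus quaternion). The sketch keeps the two departures from standard DTW, namely the doubled sampling of the training motion with predecessor steps of 1 to 4, and the free start points that allow the matched portion to begin anywhere in P_T^i.

    #include <float.h>
    #include <stdlib.h>

    double similar_error(const double (*PT)[7], int nPT,   /* resampled training motion, 2L_T-1 samples */
                         const double (*DV)[7], int nDV,   /* MI-assigned delicate motion of Q_V        */
                         double (*dist)(const double *, const double *))
    {
        /* H[p][q]: minimum error matching DV[0..q-1] to a portion of PT ending at
         * PT[p-1]; q = 0 costs nothing, so every p is a free start point. */
        double *H = malloc((size_t)(nPT + 1) * (nDV + 1) * sizeof *H);
        if (H == NULL) return DBL_MAX;
        #define AT(p, q) H[(size_t)(p) * (nDV + 1) + (q)]

        for (int p = 0; p <= nPT; ++p)
            for (int q = 0; q <= nDV; ++q) {
                if (q == 0) { AT(p, q) = 0.0; continue; }           /* free start         */
                if (p == 0) { AT(p, q) = DBL_MAX; continue; }       /* nothing to match   */
                double prev = DBL_MAX;
                for (int step = 1; step <= 4 && p - step >= 0; ++step)
                    if (AT(p - step, q - 1) < prev) prev = AT(p - step, q - 1);
                AT(p, q) = (prev == DBL_MAX) ? DBL_MAX
                         : dist(PT[p - 1], DV[q - 1]) + prev;        /* Eq. (3.18)         */
            }

        double best = DBL_MAX;                                       /* Eq. (3.17)         */
        for (int p = 1; p <= nPT; ++p)
            if (AT(p, nDV) < best) best = AT(p, nDV);

        #undef AT
        free(H);
        return best;
    }

Allowing one to four steps through the doubly sampled training motion per validating sample is what bounds the matched speed between one half and twice the original speed.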

Because the similar function is a shortcut method that does not calculate the difference between the move motions of the validating motion and the generated motions, it does not minimize the total difference between the validating motion and the generated motions. Although a penalty, namely the expected error between the move motions of the validating motion and the generated motions, could be used in the calculation of the similar function, we do not use it in our experiments because this expected error is difficult to estimate precisely. Note that, in the experiments, it is observed that many identical E_M values are used when creating the E_R(D_V) table with the similar function. Therefore, if a cache is used to store the E_M values, the amount of calculation can be reduced.

3.3 Experimental Design for Extensibility and Robustness

The proposed approach is developed for general tool-handling tasks, with the appealing features of (a) no need for predefined motions, (b) no constraints on motion speed or motion type, (c) allowance for event-order altering, and (d) allowance for redundant operations during demonstration. To further investigate its extensibility and robustness, we design a series of experiments based on three different kinds of tasks: the pouring, coffee-making, and fruit jam tasks. In the pouring task, the operator is asked to hold a vessel and pour the content into other vessel(s), as described in the example shown in Fig. 3.2. The experiments are designed to evaluate the influence of the following factors: the pouring order during execution, the number of vessels to pour into, and the number of demonstrations used for MI derivation. We also analyze the time complexity during task execution, which is expected to match that predicted by the algorithm for intention derivation in Sec. 3.1.3. In the

coffee-making task, the operator uses a spoon to scoop coffee powder, sugar, or milk from the jars into the coffee cup and stir it. The number of jars is fixed for the experiments, but the operator can access the same jar(s) one or several times. In addition to the performance on this coffee-making task, the effect of the number of demonstrations on MI derivation and the time complexity during task execution are also evaluated. To further explore its generality, in the fruit jam task, the operator picks up a knife from the table, scoops the fruit jam from the jar, spreads it on the toast in a zigzag motion, and places the knife back on the table. Meanwhile, we also analyze how the presence of redundant motions affects system performance. To simulate the robot learning a task in a home environment, we assume that the robot uses a vision system to identify objects by comparing the difference between background and foreground in the multiple demonstrations, so as to obtain the positions of the objects. Moreover, the handled tool, whose position varies during the demonstration, can also be identified. We suggest that the operated objects should change positions across the multiple demonstrations, but some operated objects that are heavy or fixed may not be movable in the task. Such objects cannot be identified by the vision system. Fortunately, no matter how many operated objects are hidden in the background, they can be treated as a single special object, the background object, whose position, like the origin, does not change during demonstrations. In contrast, although we suggest that the positions of the operated objects be changed during the multiple demonstrations, we do not require that the positions of the unoperated objects stay unchanged. For example, when three cups are present in a two-cup pouring task, the unoperated cup is allowed to be moved carelessly for some reason during demonstrations. Therefore, even if some identified objects that are not operated in the demonstrations are input into the intention deduction module, our learning method can still handle them.
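As a simple illustration of the rule above, the following hypothetical C helper (not part of the dissertation's system) decides whether an identified object should be merged into the background object by checking whether its position stays within a tolerance across all demonstrations.

    #include <math.h>
    #include <stdbool.h>

    /* pos[i] is the object's 3-D position observed in demonstration i. */
    bool is_background_object(const double pos[][3], int n_demos, double tol)
    {
        for (int i = 1; i < n_demos; ++i) {
            double dx = pos[i][0] - pos[0][0];
            double dy = pos[i][1] - pos[0][1];
            double dz = pos[i][2] - pos[0][2];
            if (sqrt(dx * dx + dy * dy + dz * dz) > tol)
                return false;   /* it moved between demonstrations          */
        }
        return true;            /* treated like the fixed origin/background */
    }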

Chapter 4 Experiments

In this chapter, first, the system implementation is introduced. Then, the results of the experiments on the pouring, coffee-making, and fruit jam tasks are reported, and the robustness of the approach is evaluated. Finally, we discuss how the proposed approach performs when compared with previous ones, and an application scenario for breakfast preparation is proposed to demonstrate the practicality of this approach in daily life.

4.1 System Implementation

The proposed system is implemented for the experiments, as shown in Fig. 4.1. In Fig. 4.1, the task environment consists of the objects, including the tool, the operated objects, and the background objects. The human operator can see the task environment and operate the objects. The positions and orientations of the objects are measured by the Polhemus FASTRAK tracking system. This tracking system consists of a system electronics unit, receivers, and a power supply, which are shown in Fig. 4.2, and a long range transmitter, which is shown in Fig. 4.3. The update rate for the positions (XYZ) and orientations (ZYX Euler angles) of the receivers is 30 Hz, and the specifications of the tracking system are described in Table 4.1. After the human operator demonstrates a task multiple times, the operating trajectories are recorded as 7-dimensional sequences, which consist of positions and orientations (in the form of quaternions) in 3 and 4 dimensions, respectively. The position data are normalized by their standard deviation to balance the effects of the errors due to positions and orientations. These trajectories, together with the positions and orientations of all possibly operated objects, are then input into our learning method to deduce the intention. The executable operating trajectory for robot manipulation is generated as a 7-dimensional sequence (positions and orientations) according to the positions and orientations of the operated objects in the environment faced by the robot. In the experiments, the core of our learning method is programmed in the C language,

Figure 4.1: System implementation.

and the program is executed on a computer with an Intel E6300 CPU running at 1.86 GHz with 3.62 GB of RAM. Although this CPU has two cores, the program uses only one core when the calculation time is measured. The robot manipulator, a Mitsubishi RV-2A, is position-controlled, as shown in Fig. 4.4, with its specifications listed in Table 4.2. The Denavit-Hartenberg parameters of this robot manipulator are specified in Table 4.3 [61], and the transformation matrix of each joint is defined as

A_i^{i-1}(θ_i) =
[ cos(θ_i)   -sin(θ_i)cos(α_i)    sin(θ_i)sin(α_i)    a_i cos(θ_i) ]
[ sin(θ_i)    cos(θ_i)cos(α_i)   -cos(θ_i)sin(α_i)    a_i sin(θ_i) ]
[    0             sin(α_i)            cos(α_i)            d_i     ]
[    0                0                   0                  1     ]    (4.1)

This robot manipulator can accept position commands over the network through an Ethernet interface card, and each position command is executed within the operation control period of 7.1 ms (about 141 Hz). Therefore, the generated robot trajectory is resampled at 141 Hz for execution on the robot manipulator, and the position command (J1-J6) of each sampling point is solved by inverse kinematics [61]. In order to avoid damaging the devices, in the experiments all vessels are demonstrated with empty content.
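A minimal C sketch of the link transform of Eq. (4.1) is given below; the function name dh_transform is hypothetical, and the full forward kinematics would be the product of such matrices A_1^0 A_2^1 ... A_6^5 built from the parameters of Table 4.3.

    #include <math.h>

    /* Homogeneous transform of one link from its Denavit-Hartenberg parameters
     * (a, alpha, d, theta), following Eq. (4.1). */
    void dh_transform(double a, double alpha, double d, double theta, double T[4][4])
    {
        double ct = cos(theta), st = sin(theta);
        double ca = cos(alpha), sa = sin(alpha);

        T[0][0] = ct;  T[0][1] = -st * ca;  T[0][2] =  st * sa;  T[0][3] = a * ct;
        T[1][0] = st;  T[1][1] =  ct * ca;  T[1][2] = -ct * sa;  T[1][3] = a * st;
        T[2][0] = 0.0; T[2][1] =  sa;       T[2][2] =  ca;       T[2][3] = d;
        T[3][0] = 0.0; T[3][1] = 0.0;       T[3][2] = 0.0;       T[3][3] = 1.0;
    }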

Figure 4.2: Tracking system Polhemus FASTRAK.

Figure 4.3: Long range transmitter.

Figure 4.4: Mitsubishi RV-2A type six-axis robot arm.

Table 4.1: Specifications of the tracking system Polhemus FASTRAK.
Latency: 4 milliseconds
Interface: RS-232 with selectable baud rates up to 115.2K baud
Static accuracy: position 0.03 inches RMS; orientation 0.15 degrees RMS
Resolution: position 0.0002 inches; orientation 0.025 degrees
Range: 30 feet

Table 4.2: Specifications of the Mitsubishi RV-2A.
Degrees of freedom: 6
Maximum load capacity (rating): 2 kg
Maximum reach radius: 621 mm
Working area: J1 320 deg (-160 to +160); J2 180 deg (-45 to +135); J3 120 deg (+50 to +170); J4 320 deg (-160 to +160); J5 240 deg (-120 to +120); J6 400 deg (-200 to +200)
Maximum speed (degree/s): J1 150; J2 150; J3 180; J4 240; J5 180; J6 330
Repeat position accuracy: ±0.04 mm

Table 4.3: Denavit-Hartenberg parameters of the Mitsubishi RV-2A.
Link | a_i (m) | α_i (rad.) | d_i (m) | θ_i (rad.)
1    | 0.1     | π/2        | 0.35    | θ_1
2    |         |            |         | θ_2
3    |         | π/2        |         | θ_3
4    |         | π/2        | 0.25    | θ_4
5    |         | π/2        |         | θ_5
6    |         |            |         | θ_6

4.2 Pouring Task

We first applied the proposed approach to the pouring task shown in Fig. 4.5. The experiment is divided into two stages: (a) human demonstration and (b) robot execution. Fig. 4.5(a) shows the experimental setup for human demonstration, which includes the human operator and the Polhemus FASTRAK tracking system. There are five vessels placed randomly on the table. The human operator held a vessel (vessel A) and poured the content into the other vessels (vessels B, C, D, and E) on the table. The Polhemus FASTRAK tracking system, with a sampling rate of 30 Hz for each of the sensors, was used to measure and record the demonstrated trajectories and the positions of the objects. These trajectories were recorded as 7-dimensional sequences, which consist of positions and orientations (in the form of quaternions) in 3 and 4 dimensions, respectively, with the position normalized by its standard deviation. From these recorded trajectories, we applied the intention deduction algorithm, discussed in Sec. 3.1, to derive the intention of the operator from all possible intentions. We then moved on to the second stage of the experiment, and let the Mitsubishi RV-2A 6-DOF robot manipulator follow the derived intention to execute the pouring task under new environmental states, as shown in Fig. 4.5(b). There are two changeable parameters of the pouring tasks in the experiment, namely the poured vessels ({B,C}, {B,C,D}, and {B,C,D,E}) and the pouring order (arbitrary order or same order), so there are 6 different settings of the pouring tasks. The human operator demonstrated 18 times in each setting, for a total of 108 demonstrations. To test the results of our method, for each of the 6 pouring tasks, 8 of the demonstrations are randomly selected as training data to be input into the learning method, and those of the remaining demonstrations whose operating orders match the operating orders of the generated trajectories are selected as testing data. In each test, the positions of vessels B-E and the origin are input, to test the effect of the unoperated objects and the background object, because the robot does not know which objects are operated. These processes are executed 5 times, and the average errors, which describe the difference between the generated trajectories and

Figure 4.5: Experimental setups for the pouring task: (a) human demonstration and (b) robot execution.

the trajectories of the testing data, are calculated by DTW. Besides, we then moved on to the second stage of the experiment and let the Mitsubishi RV-2A 6-DOF robot manipulator follow the generated trajectories to execute the pouring tasks under a new environmental state, as shown in Fig. 4.5(b). Fig. 4.6 shows the derived intentions for each of the eight demonstrations of the pouring task in one test. Because the tilt-angle changes are the clear features of pouring motions, time-series graphs of the tilt angle of the trajectories are used to illustrate the results. In Fig. 4.6, delicate motions related to vessels B, C, D, and E were identified from the trajectory of vessel A, marked by the yellow, green, blue, and purple blocks, respectively. It was observed that most of the delicate motions were located at those portions with evident tilt-angle changes, indicating the pouring action. The derived intention for demonstration 3 was determined to be optimal among all. Fig. 4.7 shows one of the generated trajectories and one of the trajectories of the testing data. In Fig. 4.7, the black line is the trajectory of the testing data, and the colored line is the generated trajectory, which consists of the red segments (move motions) and the other colored segments (delicate motions for the operated objects). In the X-Y plane subfigure of Fig. 4.7, the colored rectangles (yellow, green, blue, and purple) describe the positions of the operated objects (B, C, D, and E).

Figure 4.6: The derived intentions for the eight demonstrations of the pouring task. (Each panel plots the tilt angle θ (rad.) of vessel A against time (s) for one demonstration.)

Figure 4.7: Experimental results for the pouring 4 cups task executed by both the human operator and robot manipulator under new environmental states: (a) trajectory of vessel A in the X-Y plane, (b) variation of the height of vessel A, and (c) variation of the tilt angle of vessel A.

In the X-Y plane, height, and tilt-angle subfigures of Fig. 4.7, it was observed that most of the delicate motions were located near the operated objects, at the portions with minimum height, and at the portions with maximum tilt angle, since these are the characteristic features of pouring motions. The generated trajectory and the human trajectory exhibit a certain degree of similarity, although they are not exactly the same. Nevertheless, the generated trajectory can be used to control the robot to accomplish the pouring task. Moreover, the generated trajectories of the other tasks also show the correct delicate motions and match the human operations.

Table 4.4: Average position error between the trajectories of the human operator and the generated trajectories in the pouring tasks.

    Pouring task   Order       Error (m)
    2 cups         Same        .52
                   Arbitrary   .51
    3 cups         Same        .49
                   Arbitrary   .49
    4 cups         Same        .52
                   Arbitrary   .45

Table 4.4 shows the average position errors between the trajectories of the human operator and the generated trajectories for the six combinations of the pouring task; these errors are calculated by DTW on the 3-D positions. First, it was observed that the order of the operations does not have a great effect on the results of our method. Second, it was observed that the number of operated objects does not have a great effect on the results of our method either.

Robustness in Pouring Tasks

Figure 4.8 shows the relation between the errors and the number of training demonstrations (2 to 8) in the pouring 2~4 cups tasks of Table 4.4. First, it was observed that our method may need 4-5 demonstrations to learn the goals of the basic 2~4-cup pouring tasks.

Figure 4.8: The errors of the generated trajectories with different numbers of training demonstrations in the pouring 2~4 cups tasks (s: same order, a: arbitrary order).

Second, it was observed that the number of demonstrations needed to decrease the errors of the 4-cup pouring task is larger than that for the 2- and 3-cup pouring tasks. This implies that the difficulty of learning a task increases with the number of poured cups. Third, it was observed that many of the errors of the arbitrary-order pouring tasks are smaller than those of the same-order pouring tasks. This implies that the human operator may introduce more unusual operations in the same-order pouring tasks, because he/she needs to pay more attention to the unnatural operating order during demonstration. As for the calculation time of the pouring tasks, Fig. 4.9 shows that it increases approximately quadratically with the number of demonstrations, which matches the expected time complexity.

We further evaluated how the presence of redundant operations during demonstration may affect system performance. The analysis was based on the 2-vessel pouring task. Among a group of 2-vessel demonstrations, we gradually added some demonstrations involving three or four vessels, taken as the introduction of redundant operations. With this, we attempted to find out whether the proposed approach could still successfully recognize that the demonstrations were intended for the 2-vessel pouring task, even when some proportion of the demonstrations involved redundant operations.

Figure 4.9: The calculation time with different numbers of demonstrations in the six pouring tasks (s: same order, a: arbitrary order).

Table 4.5 lists the number of successes in the tests in which the number of demonstrations involving 3 or 4 vessels was increased from 1 to 4 out of a total of 8 demonstrations. Because the redundant operations are not common motions and thus cannot be found in every training motion in Eq. (3.8), a larger proportion of redundant operations usually led to a larger E_max when deriving the optimal M_I, as shown in Eq. (3.14). When the demonstrations involving redundant operations made up half of the total demonstrations, the proposed approach could still reach a high success rate of 80%.

Table 4.5: Number of successes in the presence of redundant operations.

                          2 vessels   3 vessels   4 vessels
    number of successes

As for the operating order of the tasks, it is observed in the experiment that the path length and demonstration time of the tasks whose operating order is arbitrary are smaller than those of the tasks whose operating order is the same, as shown in Table 4.6.

Therefore, the arbitrary order of operations can decrease the path length and the demonstration time of the operations. Besides, it can decrease the probability of operation mistakes, pause motions, and redundant operations, because the operator does not need to care about the order of the operations when he/she demonstrates a complex task. From Fig. 4.9 and Table 4.6, it is clear that the calculation time increases with the demonstration time. This means that arbitrary operating orders can also decrease the calculation time in the pouring tasks. Therefore, in order to decrease the calculation time of our method, the operator should accomplish each demonstration as fast as possible and need not care about the operating order in each demonstration.

Table 4.6: The average path length and demonstration time in the pouring tasks.

    Task             Order             Path length (m)   Demonstration time (s)
    Pouring 2 cups   Same order
                     Arbitrary order
    Pouring 3 cups   Same order
                     Arbitrary order
    Pouring 4 cups   Same order
                     Arbitrary order

Coffee-making Task

In the coffee-making tasks, a spoon (spoon A) and a vessel (vessel B) are placed randomly within a 0.21 x 0.21 m^2 area on the table in each demonstration, and three vessels (vessels C, D, and E) are placed at assigned positions that are not changed during the experiment, as shown in Fig. 4.1. Spoon A is always held as a tool and is used to perform scooping and stirring motions on the vessels. Two different coffee-making tasks are designed to test the effects of repeated operations. The motions of the first coffee-making task are: using spoon A to scoop the content of vessel C into vessel B, to scoop the content of vessel D into vessel B, to scoop the content of vessel E into vessel B, and to stir the content of vessel B.

Figure 4.1: Experimental setups for the coffee-making task: (a) human demonstration and (b) robot execution.

The motions of the second coffee-making task are: using spoon A to scoop the content of vessel C into vessel B, to scoop the content of vessel C into vessel B again, to scoop the content of vessel D into vessel B, and to stir the content of vessel B. Both coffee-making tasks are demonstrated 22 times, for a total of 44 demonstrations. In the coffee-making tasks, the Polhemus FASTRAK tracking system, with a sampling rate of 3 Hz for the sensors attached to spoon A, was used to measure the demonstrated trajectories. These trajectories were recorded as 7-dimensional sequences consisting of position (3 dimensions) and orientation in quaternion form (4 dimensions), with the position normalized by its standard deviation.

In the coffee-making tasks, vessels C, D, and E never change positions in the training data. In order to recognize operated objects in the background, a single background object is created to replace all background objects, and the position of the background object is set to the origin. Therefore, in the coffee-making tasks, only two operated objects are inputted: vessel B and the background object. For each of the two coffee-making tasks, 8 of the demonstrations are randomly selected as training data for the learning method, and the remaining demonstrations are selected as testing data.

These processes are executed 5 times, and the average errors, which describe the difference between the generated trajectories and the trajectories of the testing data, are calculated by DTW. Then, we let the Mitsubishi RV-2A 6-DOF robot manipulator follow the generated trajectories to execute the coffee-making tasks under new environmental states, as shown in Fig. 4.1(b).

Figs. 4.11 and 4.12 show the derived intentions for each of the eight demonstrations of the two coffee-making tasks. Because height changes are the clearest feature of scooping motions, time-series graphs of the height of the trajectories are used to illustrate the results. Because the locations of the three vessels (vessels C, D, and E) were not changed during the experiment, they were viewed as one object fixed at the origin. The two colored blocks mark the delicate motions (green for vessels C, D, and E, and yellow for vessel B). The derived intentions for demonstration 2 (in coffee-making task 1) and demonstration 1 (in coffee-making task 2) were determined to be the optimal among all.

Figs. 4.13 and 4.14 show the generated trajectories and the trajectories of the testing data in the two coffee-making tasks. In the subfigures of Figs. 4.13 and 4.14, the black line is the trajectory of the testing data, and the colored line is the generated trajectory, which consists of red segments (move motions) and other colored segments (delicate motions for the operated objects). In each X-Y plane subfigure, the colored rectangles (yellow, green, blue, and purple) indicate the positions of the operated objects (B, C, D, and E). Because there are only two operated objects, vessel B and the background object, the delicate motions that scoop the contents of vessels C, D, and E appear as green lines in each subfigure; they all operate on the background object, but at different operated positions of the background object. Although the background object is operated in both coffee-making tasks, the difference between the operated objects of the generated trajectories can still be observed clearly in each coffee-making task. These generated trajectories and the human trajectories exhibit a certain degree of similarity, although they are not exactly the same. However, the generated trajectories can be used to control the robot to accomplish the coffee-making tasks.
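To illustrate the background-object abstraction described above, here is a minimal sketch, an assumed implementation rather than the actual code of this work, that collapses objects whose positions stay essentially fixed across the training demonstrations into a single background object placed at the origin, so that only the movable objects and the background object are passed to the intention deduction. The function name, the tolerance value, and the data layout are illustrative.

    import numpy as np

    def collapse_background_objects(demo_object_positions, tol=0.01):
        """Replace objects that never move across demonstrations by one
        background object fixed at the origin.

        demo_object_positions: list of dicts, one per demonstration,
            mapping object name -> (x, y, z) position in that demonstration.
        tol: maximum position spread (meters) for an object to be treated
            as part of the fixed background.
        """
        names = demo_object_positions[0].keys()
        fixed = set()
        for name in names:
            positions = np.array([demo[name] for demo in demo_object_positions])
            spread = positions.max(axis=0) - positions.min(axis=0)
            if np.all(spread < tol):           # object never moved between demonstrations
                fixed.add(name)

        collapsed = []
        for demo in demo_object_positions:
            objs = {name: pos for name, pos in demo.items() if name not in fixed}
            objs["background"] = (0.0, 0.0, 0.0)   # fixed objects collapse to the origin
            collapsed.append(objs)
        return collapsed

With such a reduction, the coffee-making demonstrations expose only vessel B and the background object to the learner, which is exactly the setting used in the experiments above.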

Figure 4.11: The derived intentions for the eight demonstrations of the coffee-making task 1. (Each panel plots the height (mm) of spoon A against time (s) for one demonstration.)

Figure 4.12: The derived intentions for the eight demonstrations of the coffee-making task 2. (Each panel plots the height (mm) of spoon A against time (s) for one demonstration.)

Figure 4.13: Experimental results for the coffee-making task 1 executed by both the human operator and robot manipulator under new environmental states: (a) trajectory in X-Y plane, (b) variation of the height, and (c) variation of the tilt angle of spoon A.

Table 4.7: Average position error between the trajectories of the human operator and the generated trajectories in the coffee-making tasks.

    Task            Order       Error (m)
    Coffee-making   Pattern 1   .3
                    Pattern 2

Figure 4.14: Experimental results for the coffee-making task 2 executed by both the human operator and robot manipulator under new environmental states: (a) trajectory in X-Y plane, (b) variation of the height, and (c) variation of the tilt angle of spoon A.

Figure 4.15: The errors of the generated trajectories with different numbers of training demonstrations in the two coffee-making tasks.

Table 4.7 shows the average errors between the trajectories of the testing data and the generated trajectories for the two coffee-making tasks; these errors are calculated by DTW on the 3-D positions. In Table 4.7, first, it was observed that the different operations of the two coffee-making tasks do not have a great effect on the results of our method. Second, because the working space of the coffee-making tasks is smaller than that of the pouring tasks, the error of the coffee-making tasks is smaller than that of the pouring tasks.

Robustness in Coffee-making Tasks

Figure 4.15 shows the relation between the errors and the number of training demonstrations (2 to 8) in the two coffee-making tasks of Table 4.7. First, it was observed that our method may need 4 demonstrations to learn the goals of the coffee-making tasks. Second, it was observed that the errors of the first coffee-making task are larger than those of the second, which implies that learning the first coffee-making task is more difficult than learning the second one. Because the coffee-making tasks have only two objects (vessel B and the background object) and the demonstration time of each demonstration is almost equal, we compare the expected time complexity with the measured calculation time by changing the number of demonstrations. Fig. 4.16 shows that the calculation time of the coffee-making tasks increases approximately quadratically with the number of demonstrations.

Figure 4.16: The calculation time with different numbers of demonstrations in the two coffee-making tasks.

Figure 4.17: Experimental setups for the fruit jam task: (a) human demonstration and (b) robot execution.

This matches the expected time complexity. Fig. 4.16 also shows that the difference between the calculation times of the two coffee-making tasks is not evident. This is because the number of operated objects in the two coffee-making tasks is the same and the demonstration time of each demonstration is almost equal. Therefore, although learning the first coffee-making task is more difficult than learning the second one, the difficulty of the tasks does not influence the calculation time evidently.
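The roughly quadratic growth of the calculation time can be made intuitive with a small sketch. Assuming, as a simplification rather than the exact procedure of this work, that the deduction validates every demonstration against every other demonstration, the number of pairwise comparisons grows as n(n-1)/2, which is consistent with the trends observed in Figs. 4.9 and 4.16.

    # Number of pairwise validation comparisons for n demonstrations, assuming
    # every demonstration is checked against every other one.
    def pairwise_comparisons(n):
        return n * (n - 1) // 2

    for n in range(2, 9):                      # 2 to 8 demonstrations, as in the experiments
        print(n, pairwise_comparisons(n))      # 2 -> 1, 3 -> 3, ..., 8 -> 28: roughly quadratic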
