Advances on cognitive automation at LGI2P / Ecole des Mines d'Alès

Doctoral research snapshot, July 2014
Research report RR/14-01


Foreword

This research report sums up the results of the 2014 PhD seminar of the LGI2P lab of the Alès National Superior School of Mines. This annual day-long meeting gathers presentations of the latest research results of LGI2P PhD students. This year's edition of the seminar took place on June 19th. All PhD students presented their work for the past academic year, and each presentation was followed by extensive time for very constructive questions from the audience. The aggregated abstracts of these works constitute the present research report and give a precise snapshot of the research on cognitive automation led in the lab this year. I would like to thank all lab members, among which all PhD students and their supervisors, for their professionalism and enthusiasm in helping me prepare this seminar. I would also like to thank all the researchers who came to listen to the presentations and ask questions, thus contributing to the students' thesis defense training. I wish you all an inspiring reading and hope to see you all again for next year's 2015 edition!

Christelle URTADO


Contents

First year PhD students
Hasan ABDULRAHMAN: Model of noise and color restoration
Blazo NASTOV: Contribution to model verification: operational semantics for systems engineering modeling languages

Second year PhD students
Nawel AMOKRANE: Toward a methodological approach for engineering complex systems with formal verification applied to the computerization of small and medium-sized enterprises
Mustapha BILAL: Contribution to System of Systems Engineering (SoSE)
Mirsad BULJUBASIC: Efficient local search for large scale combinatorial problems
Sami DALHOUMI: Ensemble methods for transfer learning in brain-computer interfacing
Nicolas FIORINI: Coping with imprecision during a semi-automatic conceptual indexing process
Abderrahman MOKNI: A three-level formal model for software architecture evolution
Darshan VENKATRAYAPPA: Object matching in videos: a small report


PRESENTATION DAY OF THE LGI2P PhD STUDENTS' WORK
THURSDAY, JUNE 19th, 2014
Conference room, Nîmes site of the Ecole Nationale Supérieure des Mines d'Alès

PROGRAMME OF THE DAY
9h30  Start of the seminar
9h40  1st year  Hassan ABDULRAHMAN (15 minutes / 5 minutes)
10h00 1st year  Blazo NASTOV (15 minutes / 5 minutes)
10h20 Break
10h35 2nd year  Nawel AMOKRANE (20 minutes / 10 minutes)
11h05 2nd year  Mustapha BILAL (20 minutes / 10 minutes)
11h35 2nd year  Mirsad BULJUBASIC (20 minutes / 10 minutes)
12h05 Lunch
13h45 2nd year  Sami DALHOUMI (20 minutes / 10 minutes)
14h15 2nd year  Nicolas FIORINI (20 minutes / 10 minutes)
14h45 2nd year  Abderrahman MOKNI (20 minutes / 10 minutes)
15h15 Break
15h30 2nd year  Darshan VENKATRAYAPPA (20 minutes / 10 minutes)
16h00 Synthesis by Yannick VIMONT
16h15 End of the seminar

Thank you for your participation! More information:


Model of Noise and Color Restoration

Hasan Abdulrahman 1, Marc Chaumont 2, Philippe Montesinos 1
1 EMA, LGI2P Laboratory, Parc Scientifique G. Besse, Nimes, France
{Hasan.Abdulrahman,Philippe.Montesinos}@mines-ales.fr
2 University of Nimes, Nimes, France; LIRMM Laboratory, UMR 5506 CNRS, University of Montpellier II
Marc.Chaumont@lirmm.fr

Abstract. Over the past few years, sophisticated techniques for dealing with steganography have developed rapidly, along with high-resolution digital images, and the real world mostly deals with color. The challenge of detecting the presence of hidden messages in color images leads us to search for a novel steganalysis technique for color images. Steganography is the technique of hiding secret information in multimedia content (images, text, audio) in a statistically undetectable way, whereas steganalysis is the dual technique, which detects the presence or absence of secret information. The aim of this thesis is to design and implement a steganalysis system that scans and tests color images to detect hidden information through the construction of a model of color noise: the three color channels (R, G, B) of the image are separated, and features are extracted from each channel separately by computing noise residuals using high-pass filters and co-occurrence matrices.

Keywords: Noise residuals, Color images, Steganography, Steganalysis, Features, Ensemble classification

1. Introduction
Graphic files are the most common data files on the Internet after text files. There are many different image formats, but only a few of them are well discussed with respect to steganography. In the modern information era, digital images have been widely used in a growing number of applications related to military, intelligence, surveillance, law enforcement and commercial purposes [1].

1.1 Model of noise
A noisy image can be modelled as follows:

g(x, y) = f(x, y) + η(x, y)

where f(x, y) is the original image pixel, η(x, y) is the noise term and g(x, y) is the resulting noisy pixel. There are many different models for the image noise term η(x, y): Rayleigh noise, Erlang noise, exponential noise, uniform noise and impulse noise [2] (Figure 1).

Figure 1: Different noise models.
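As a minimal illustration of this additive noise model (our own sketch using NumPy only; function and parameter names are ours and not part of the thesis work), an image array can be corrupted with uniform or impulse noise as follows:

```python
import numpy as np

def add_uniform_noise(image, low=-10.0, high=10.0, seed=0):
    """Additive model g(x, y) = f(x, y) + eta(x, y) with a uniform noise term."""
    rng = np.random.default_rng(seed)
    eta = rng.uniform(low, high, size=image.shape)
    return np.clip(image.astype(np.float64) + eta, 0, 255)

def add_impulse_noise(image, amount=0.02, seed=0):
    """Impulse (salt-and-pepper) noise: a fraction of pixels is forced to 0 or 255."""
    rng = np.random.default_rng(seed)
    noisy = image.astype(np.float64).copy()
    mask = rng.random(image.shape) < amount
    noisy[mask] = rng.choice([0.0, 255.0], size=int(mask.sum()))
    return noisy
```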

1.2 Steganography
Steganography is the art and science of hiding information by embedding messages within other, seemingly harmless messages. It works by replacing bits of useless or unused data in regular computer files (such as graphics, sound, text, HTML, or even hard disks) with bits of different, invisible information. This hidden information can be plain text, cipher text, or even images. Steganography (literally meaning "covered writing") dates back to ancient Greece, where common practices consisted of etching messages in wooden tablets and covering them with wax, or tattooing a shaved messenger's head, letting his hair grow back, then shaving it again when he arrived at his contact point [3].

Image steganography techniques can be divided into two groups: spatial domain (also known as image domain) and transform domain (also known as frequency domain). Spatial domain techniques embed messages in the intensity of the pixels directly. For transform domain techniques, on the other hand, images are first transformed and the message is then embedded in the transformed image [4].

In his 1984 landmark paper [5], Gustavus Simmons illustrated what is now widely known as steganography in terms of the prisoners' problem: Alice and Bob are arrested and held in separate cells. They want to coordinate an escape plan, but their only means of communication is by way of messages conveyed for them by Wendy, the warden. Should Alice and Bob try to exchange messages that are not completely open to Wendy, or ones that seem suspicious to her, they will be put into a high-security prison no one has ever escaped from. A block diagram of steganography is shown in Figure 2.

Figure 2: Block diagram of steganography.

1.3 Steganalysis
Steganalysis is the art and science of detecting messages hidden using steganography. The goal of image steganalysis is to discover the presence of hidden information in a given cover image. Commercial software can identify the presence of hidden information and, if possible, recover the original information. However, if the information is scattered in a random form or encrypted in a form that does not comply with existing methods, then identifying the presence of information becomes difficult and, even if it is identified, reconstructing the original information is still a challenging task [6].

2. Image Steganalysis
Algorithms for image steganalysis are primarily of two types: specific (targeted) and generic (blind). The targeted approach represents a class of image steganalysis techniques that strongly depend on the underlying steganographic algorithm used, and which have a high success rate for detecting the presence of the secret message when it is hidden with the algorithm the techniques are designed for. The blind approach represents a class of image steganalysis techniques that are independent of the underlying steganography algorithm used to hide the message, and which produce good results for detecting the presence of a secret message hidden using new and/or unconventional steganographic algorithms [7].

2.1 Targeted Image Steganalysis Algorithms
Image steganography algorithms are most often based on an embedding mechanism called Least Significant Bit (LSB) embedding. Each pixel in an image is represented as a 24-bit value, composed of 3 bytes representing the R, G and B values of the three primary colors red, green and blue [8].
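To make the LSB mechanism concrete, here is a small illustrative sketch (our own, not code from the thesis) that hides message bits in the least significant bit of one 8-bit channel and reads them back:

```python
import numpy as np

def lsb_embed(channel, bits):
    """Replace the least significant bit of the first len(bits) pixels of a channel."""
    flat = channel.flatten().astype(np.uint8)
    bits = np.asarray(bits, dtype=np.uint8)
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # clear the LSB, then set it
    return flat.reshape(channel.shape)

def lsb_extract(channel, n_bits):
    """Read back the least significant bits of the first n_bits pixels."""
    return channel.flatten()[:n_bits] & 1
```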

Images can be represented in different formats; the three most commonly used are GIF (Graphics Interchange Format), BMP (Bit Map) and JPEG (Joint Photographic Experts Group). We discuss targeted steganalysis algorithms for each of these formats.

2.1.1 Palette Image Steganalysis
Palette image steganalysis is primarily used for GIF images. The GIF format supports up to 8 bits per pixel, and the color of a pixel is referenced from a palette table of up to 256 distinct colors mapped to the 24-bit RGB color space. LSB embedding in a GIF image changes the 24-bit RGB value of a pixel, and this can bring about a change in the palette color (among the 256 distinct colors) of the pixel. The steganalysis of a GIF stego image is conducted by performing a statistical analysis of the palette table vis-à-vis the image, and a detection is made when there is an appreciable increase in entropy (a measure of the variation in the palette colors) [9].

2.1.2 Raw Image Steganalysis
Raw image steganalysis is primarily used for BMP images, which are characterized by a lossless LSB plane. Fridrich et al. [10] proposed a steganalysis technique that studies color bitmap images for LSB embedding and provides high detection rates for shorter hidden messages. This technique makes use of the property that the number of unique colors in a high-quality bitmap image is about half the number of pixels in the image; the new color palette obtained after LSB embedding is characterized by a higher number of close color pairs.

2.1.3 JPEG Image Steganalysis
JPEG is a popular cover image format used in steganography. Two well-known steganography algorithms for hiding secret messages in JPEG images are the F5 algorithm [11] and the Outguess algorithm [12].

2.2 Generic Image Steganalysis Algorithms
The generic steganalysis algorithms, usually referred to as universal or blind steganalysis algorithms, work on all known and unknown steganography algorithms. These techniques exploit the changes in certain innate features of the cover images when a message is embedded. Generic steganalysis techniques using Fisher Linear Discriminant (FLD), Support Vector Machine (SVM) and ensemble classifiers have been proposed to accurately differentiate between cover and stego images [13].

3. Aims of the thesis
Since the beginning of modern steganography at the end of the nineties, color steganography and color steganalysis have received little study; as people now deal with color images more than grayscale images, this leads us to search for a novel technique of steganalysis in color images. Our work aims to design and implement a steganalysis system that detects the presence of hidden information in color images through the construction of a model of noise, extracted by computing noise residuals from color images using filters dedicated to color, and also to propose color filters that are useful for color steganalysis. Additionally, we aim to establish a state of the art of the different steganographic techniques and evaluate their security through an efficient steganalysis.

4. Embedding Methods
To build stego images for the performance analysis of the above-mentioned steganalysis techniques, we separate each color image into its three color channels, hide information in one channel, and then merge the three channels to get the stego image (see the sketch below). We try to hide information in each color channel with different payload ratios using different steganography methods, namely:
1. S-UNIWARD steganography
2. HUGO steganography
3. LSB ± 1 steganography
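A minimal sketch of this per-channel embedding pipeline follows (our own illustration; the `embed_in_channel` callback stands for any of the three methods above and is a placeholder, not an actual implementation of S-UNIWARD or HUGO):

```python
import numpy as np

def make_stego(cover_rgb, message_bits, embed_in_channel, channel=0):
    """Split an RGB cover image, embed a message in one channel, and merge back."""
    r, g, b = (cover_rgb[..., i].copy() for i in range(3))
    channels = [r, g, b]
    # embed_in_channel(channel_array, bits) -> modified channel (e.g. LSB, HUGO, S-UNIWARD)
    channels[channel] = embed_in_channel(channels[channel], message_bits)
    return np.stack(channels, axis=-1)
```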

5. Database sets used
To analyze the performance of a steganalysis technique, we need a test set of images to experiment with. The test data needs to include both non-stego images (cover images) and stego images (with a secret message). Ensemble classification also needs a significant amount of training data for the classifiers. Therefore, data preparation was the first and a very important step in our work. We list below the details of each database set created, including the embedding payloads used to create the stego image databases and the cover image database.

1. Cover images database: ( ) color images taken from BOSSBase [14], a subset of the Dresden image database [15] and a subset of the Sam Houston State University database [16]; all images are converted into the Portable PixMap (ppm) format.
2. S-UNIWARD stego images database: created by applying the S-UNIWARD embedding method with different payloads (0.1, 0.2, 0.3, 0.4, 0.5). At the time of writing this report we have embedded 6150 color images; in the future the database will be completed to ( ) stego images.
3. HUGO stego images database: created by applying the HUGO embedding method with different payloads (0.1, 0.2, 0.3, 0.4, 0.5). At the time of writing this report we have embedded 1000 color images; in the future the database will be completed to ( ) stego images.

6. Building the rich model
Jessica Fridrich and Jan Kodovský [17] proposed a general methodology for steganalysis of digital grayscale images based on the concept of a rich model consisting of a large number of diverse submodels, where the submodels consider various types of relationships among neighboring samples of noise residuals obtained by linear and nonlinear filters with compact supports. The rich model is assembled as part of the training process and is driven by the available examples of cover and stego images.

6.1 Submodels
The individual submodels of the rich model are formed by joint distributions of neighboring samples from quantized image noise residuals obtained using linear and non-linear high-pass filters. The features are computed in the following steps.

A - Computing residuals: The submodels are formed from noise residuals. In our work we compute residuals for each channel (Red, Green and Blue) and in each direction (right, left, up, down, right-up, left-up, right diagonal and left diagonal), using high-pass filters of the following form:

R_ij = X̂_ij(N_ij) − c · X_ij    (1)

where c is the residual order, N_ij is a local neighborhood of pixel X_ij, X_ij ∉ N_ij, and X̂_ij(·) is a predictor of c · X_ij defined on N_ij. The set {X_ij} ∪ N_ij is called the support of the residual.

B - Truncation and quantization: Each submodel is formed from a quantized and truncated version of the residual:

R_ij ← trunc_T( round( R_ij / q ) )    (2)

where q > 0 is a quantization step and trunc_T truncates values to the range [−T, T]. The purpose of truncation is to allow a compact description using co-occurrence matrices with a small T, while quantization makes the residual more sensitive to embedding changes at spatial discontinuities in the image (at edges and textures). For example, if T = 2, the truncated residual satisfies −2 ≤ R_ij ≤ 2. Figure 3 shows a block of noise residuals.
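As an illustration (our own sketch, not the thesis code, which relies on the LibVision C/C++ library), a first-order horizontal residual, the quantization and truncation of equation (2), and the co-occurrence accumulation described in the next step can be computed as follows:

```python
import numpy as np

def first_order_residual(channel):
    """First-order horizontal residual: the right neighbor predicts the pixel, R = X[i, j+1] - X[i, j]."""
    x = channel.astype(np.int32)
    return x[:, 1:] - x[:, :-1]

def quantize_truncate(residual, q=1.0, T=2):
    """Equation (2): R <- trunc_T(round(R / q)), values clipped to [-T, T]."""
    return np.clip(np.round(residual / q), -T, T).astype(np.int8)

def cooccurrence(qres, T=2):
    """Horizontal co-occurrence of pairs of neighboring quantized/truncated residual samples."""
    size = 2 * T + 1
    mat = np.zeros((size, size), dtype=np.int64)
    left, right = qres[:, :-1] + T, qres[:, 1:] + T   # shift values from [-T, T] to [0, 2T]
    np.add.at(mat, (left.ravel(), right.ravel()), 1)
    return mat
```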

C - Computing co-occurrence matrices: The construction of each submodel continues with computing one or more co-occurrence matrices of neighboring samples from the truncated and quantized residual of equation (2) [18]. Figure 4 shows how a co-occurrence matrix is computed.

Figure 4: Computation of a co-occurrence matrix.

7. Work schedule
In our work I will continue my research in steganalysis to design a detector of hidden information by means of filters dedicated to color images, along the following points:
a - try different filters or filter banks to extract more features from color images quickly;
b - use the LibVision library (C and C++ under Linux), which was built in the LGI2P lab;
c - use different types of steganography methods;
d - prepare our own decoder of color images using dedicated color filters;
e - detect hidden messages with different payload ratios and compare the results;
f - optimize the length of the inputs of the ensemble classifiers and decrease the complexity of the features.

8. Conclusions and Future work
This report first gives an overview of steganalysis of color images and then lists the main existing steganalysis approaches. The main idea is to design and implement a steganalysis system that scans and tests color images for hidden information by constructing a model of noise extracted from color noise residuals computed with dedicated color filters, and then extracting a very large set of features from these residuals to feed a classifier. In future work, we will try to extend the rich model for steganalysis (SRM) to color images by extracting features from the three channels with more advanced filters, while reducing the cost and time consumed in managing a large database.

9. References
[1] P. Wayner. Disappearing Cryptography, Second Edition: Information Hiding: Steganography & Watermarking. Morgan Kaufmann, 2nd edition.
[2] C. Tomasi and R. Manduchi. Bilateral filtering for gray and color images. In Proceedings of the IEEE International Conference on Computer Vision, Bombay, India, 1998.
[3] Hidden data in PE-file. International Journal of Computer and Electrical Engineering (IJCEE), vol. 1, no. 5.
[4] Zaidoon Kh. Al-Ani et al. Journal of Computing, vol. 2, issue 3, March 2010.
[5] G. J. Simmons. The prisoners' problem and the subliminal channel. In Advances in Cryptology, 1984.
[6] N. F. Johnson and S. Jajodia. Steganalysis of images created using current steganography software. In Proc. of the Second International Workshop on Information Hiding, vol. 1525, 1998.
[7] ... IEEE Workshop on Multimedia Signal Processing.

[8] ... Lecture Notes in Computer Science, vol. 1525, Springer Verlag.
[9] Steganalysis of digital images: estimating the secret message length. Multimedia Systems Journal, special issue on Multimedia Security, vol. 9, no. 3.
[10] J. Fridrich et al. In IEEE International Conference on Multimedia and Expo (ICME), vol. 3, New York, NY, USA, July-August 2000.
[11] The F5 steganographic algorithm.
[12] Outguess universal steganography.
[13] Detecting hidden messages using higher-order statistics and support vector machines. Lecture Notes in Computer Science, vol. 2578.
[14] Patrick Bas, Tomáš Filler and Tomáš Pevný. Break our steganographic system: the ins and outs of organizing BOSS. In Information Hiding, 13th International Conference, IH 2011, T. Filler, T. Pevný, S. Craver and A. Ker, Eds., Lecture Notes in Computer Science, Springer, 2011.
[15] Thomas Gloe and Rainer Böhme. The Dresden Image Database for benchmarking digital image forensics. Journal of Digital Forensic Practice, vol. 3, no. 2-4.
[16] Sam Houston State University.
[17] J. Fridrich and J. Kodovský. ... and Applications. Cambridge, U.K.: Cambridge Univ. Press.
[18] D. Zou, Y. Q. Shi, W. Su and G. Xuan. Steganalysis based on Markov model of thresholded prediction-error image. In Proc. IEEE Int. Conf. Multimedia Expo., Toronto, Canada, July 9-12, 2006.

Contribution to Model Verification: Operational Semantics for Systems Engineering Modeling Languages

Blazo Nastov
LGI2P, Ecole des Mines d'Alès, Parc Scientifique G. Besse, Nimes, France
Blazo.Nastov@mines-ales.fr

Systems Engineering (SE) [2] is an approach for designing complex systems based on creating, manipulating and analyzing various models. Each model is related and specific to a domain (e.g. quality model, requirements model or architecture model). Classically, models are the subject of study of Model Driven Engineering (MDE) [5] and they are nowadays built by using, and conforming to, Domain Specific Modeling Languages (DSMLs). Creating a DSML for SE purposes primarily consists in defining its abstract and concrete syntaxes. An abstract syntax is given by a metamodel, while various concrete syntaxes (only graphical ones are considered here) define the representation of models, which are instances of metamodels. Proposing the abstract syntax and one concrete syntax of a DSML thus makes it operational for creating models, seen as graphical representations of a part of a modeled system. Unfortunately, created models may have an ambiguous meaning when reviewed by different practitioners. A DSML is therefore not complete without a description of its semantics, as described in [3], which also highlights four different types of semantics: denotational, given by a set of mathematical objects representing the meaning of the model; operational, describing how a valid model is interpreted as a sequence of computational steps; translational, translating the model into another language that is well understood; and finally pragmatic, providing a tool that executes the model.

The main idea of this work is to create and use such DSMLs focusing on the model verification problematic. We aim to improve model quality in terms of construction (the model is correctly built thanks to construction rules) and in terms of relevance for reaching design objectives (the model respects some of the stakeholders' requirements), considering each model separately and in interaction with the other models of the system under study, the so-called System of Interest (SOI). There are four main ways of verifying a given SOI model: 1) advice of a verification expert, 2) guided modeling, 3) model simulation and 4) formal proof of model properties. SE is a multi-disciplinary approach, so multiple verification experts are required. Guided modeling is a modeling approach that consists of guiding an expert in designing a model, proposing different construction possibilities or patterns in order to avoid some construction errors. In these two cases, the quality of the designed models cannot be guaranteed. Simulation refers to the application of computational models to the study and prediction of physical events or the behavior of engineered systems [4]. To be simulated, a model requires the description of its operational semantics. We define an operational semantics as a set of formal rules describing, on the one hand, the conditions, causes and effects of the evolution of each modeling concept and, on the other hand, the temporal hypotheses (internal or external time, physical or logical time, synchrony or asynchrony hypothesis of events, etc.) based on which a considered model can be interpreted without ambiguity. Last, formal proof of properties is an approach that consists of using formal methods to check the correctness of a given model.
The literature highlights two ways to prove model properties: either through operational semantics or through translational semantics. In both cases, a property modeling language is used to describe properties, which are afterwards proved using theorem proving or model checking mechanisms.

Our goal is to provide mechanisms for model simulation and formal proof of properties. We focus on concepts, means and tools allowing one to define and formalize an appropriate operational semantics for a DSML when creating its abstract and concrete syntaxes. Translational semantics, however, are not considered due to their classical limitations in terms of verification possibilities. There are different ways to formally describe operational semantics; for example, using first-order logic: in this case, a set of activation and deactivation equations is defined and assigned to each DSML concept, describing its behavior. Another way is through a state transition system defining the sequence of computational steps, showing how the runtime system proceeds from one state to another, as described in [3]. Nowadays, there are multiple approaches and tools for defining operational semantics for a given DSML. Unfortunately, many of them require at least some knowledge in imperative or object-oriented programming, and SE experts are not necessarily experts in programming. Ideally, the operational semantics of a dedicated DSML should be described and formalized with minimal effort from the expert, by assisting him and automating the process as much as possible. An approach is proposed in [1] supporting state-based execution (simulation) of models created with DSMLs. The approach is composed of four structural parts related to each other and of a fifth part providing semantics relying on the previous four. Modeling concepts and relationships between them are defined in the Domain Definition MetaModel (DDMM) package. The DDMM does not usually contain execution-related information; such information is defined in the State Definition MetaModel (SDMM) package. The SDMM contains various sets of states related to DDMM concepts that can evolve during execution; it is placed on top of the DDMM package. Model execution is represented as successive state changes of DDMM concepts. Such changes are provoked by stimuli. The Event Definition MetaModel (EDMM) package defines different types of stimuli (events) and their relationships with DDMM concepts and SDMM state evolution. The Trace Management MetaModel (TM3) provides monitoring of model execution through scenarios made of stimuli and traces. The last and key part is the Semantics package, describing how the running model (SDMM) evolves according to the stimuli defined in the EDMM. It can be defined either as operational semantics using an action language or as translational semantics translating the DDMM into another language using a transformation language. A minimal illustrative sketch of this state-based execution principle is given after the references below. Our initial objectives are to study and evaluate this approach on DSMLs of the field of SE in order to become able to interpret them and to verify some properties. This work is developed in collaboration with the LGI2P (Laboratoire de Génie Informatique et d'Ingénierie de Production) of the Ecole des Mines d'Alès and the LIRMM (Laboratoire d'Informatique, de Robotique et de Microélectronique de Montpellier), under the direction of V. Chapurlat, F. Pfister (LGI2P) and C. Dony (LIRMM).

References
[1] Benoît Combemale, Xavier Crégut, Marc Pantel, et al. A design pattern to build executable DSMLs and associated V&V tools. In The 19th Asia-Pacific Software Engineering Conference.
[2] ISO/IEC. Systems and software engineering - System life cycle processes. IEEE.
[3] Anneke G. Kleppe. A language description is more than a metamodel.
[4] J. T. Oden, T. Belytschko, J. Fish, T. J. R. Hughes, C. Johnson, D. Keyes, A. Laub, L. Petzold, D. Srolovitz and S. Yip. Simulation-based engineering science: revolutionizing engineering science through simulation. Report of the National Science Foundation Blue Ribbon Panel on Simulation-Based Engineering Science, February.
[5] Douglas C. Schmidt. Model-driven engineering. IEEE Computer Society, 39(2):25.
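The following sketch (our own illustration with hypothetical class and attribute names, not the tooling described in [1]) gives the flavor of such state-based execution: domain concepts (DDMM) carry runtime states (SDMM) that evolve when events (EDMM) are interpreted by semantic rules, while a trace (TM3) records the execution.

```python
from dataclasses import dataclass

@dataclass
class Task:                      # DDMM concept
    name: str
    state: str = "idle"          # SDMM runtime state

@dataclass
class Event:                     # EDMM stimulus
    kind: str
    target: str

# Semantics: rules describing how states evolve when stimuli occur
RULES = {("idle", "start"): "running", ("running", "finish"): "done"}

def execute(tasks, scenario):
    trace = []                   # TM3: execution trace
    index = {t.name: t for t in tasks}
    for event in scenario:
        task = index[event.target]
        new_state = RULES.get((task.state, event.kind))
        if new_state is not None:
            trace.append((task.name, task.state, event.kind, new_state))
            task.state = new_state
    return trace

if __name__ == "__main__":
    # Simulate a small scenario and print the resulting trace
    print(execute([Task("t1")], [Event("start", "t1"), Event("finish", "t1")]))
```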

Toward a Methodological Approach for Engineering Complex Systems with Formal Verification Applied to the Computerization of Small and Medium-Sized Enterprises

Nawel Amokrane, Vincent Chapurlat, Anne-Lise Courbis, Thomas Lambolais and Mohssine Rahhou
Team ISOE «Interoperable System & Organization Engineering»

1 Context and objectives
Our research is driven by an industrial need of an IT-service enterprise named RESULIS, whose job is to computerize all or part of Small and Medium-sized Enterprises (SMEs) and provide them with adapted original software designed around their business. In order to do that, RESULIS has to fully understand and harness the way the SME operates, but it lacks means to formally gather and validate the requirements of the various stakeholders (business experts, decision makers and end users). This is all the more true as these stakeholders usually have different cultures and vocabularies compared to project management stakeholders (requirements engineers, designers, engineers) and do not have the skills to use requirements elicitation tools or modeling languages. This induces difficulties in managing the activities of elicitation, documentation, verification and validation of the stakeholders' requirements. We consider that end users have to be involved as active actors in the requirements engineering activities. We therefore aim at supporting stakeholders in requirements authoring with simple means to autonomously provide information about the way they perform their business processes, the information and resources they use and the distribution of responsibilities in the organization. And since the quality of requirements has a critical impact on the resulting system, we have to check the quality of the produced requirements and models with a set of verification rules and techniques to ensure well-formed, non-contradictory and consistent information. The intention is to transfer verified models to developers through model transformation techniques in order to accelerate the production of mock-ups that will be validated by end users. In the scope of the thesis we focus on modeling, requirements authoring and verification objectives.

2 Approach and propositions
We believe that defining requirements when developing software that manages a business activity of an enterprise is reflected by the definition of the enterprise model, because this is what needs to be computerized. Indeed, an enterprise model formalizes all or part of the business in order to explain an existing situation or to achieve and validate a designed project [1]. We therefore use Enterprise Modeling [1] to represent, understand and engineer the structure, behavior, components and operations of the SME. We do not aim at evaluating or optimizing its business procedures. In order to manage the inherent complexity of enterprise systems due to their sociotechnical structural and behavioral characteristics, we studied a set of enterprise modeling methods, architectures and standards that have been developed and used in support of the life cycle engineering of complex and changing systems, such as the CIM Open System Architecture (CIM-OSA), developed for integration in manufacturing enterprises [2], and the Generalized Enterprise Reference Architecture and Methodology (GERAM) [3], which organizes and defines the generic concepts required to enable the creation of enterprise models. These methods have influenced the creation of standards: the standard ISO/DIS 19440 proposes constructs providing common semantics and enables the unification of models developed by different stakeholders [4]. It proposes modeling constructs structured into four enterprise modeling views: function, information, resources and organization. This enterprise model view dimension enables the modelers to filter their observations of the real world by emphasizing aspects relevant to their particular interests and context. The framework for enterprise modeling standard ISO/DIS 19439 [5] reuses part of the GERA modeling framework. It provides a unified conceptual basis for model-based enterprise engineering that enables consistency and interoperability of the various modeling methodologies and supporting tools. The framework structures the entities under consideration in terms of three dimensions: the enterprise model view, the enterprise model phase and the levels of genericity. Together with the constructs for enterprise modeling standard ISO 19440, the framework for enterprise modeling standard ISO 19439 can be considered an operational state-of-the-art framework to manage the modeling activities of an enterprise or an information system [6].

We rely on the combination of the framework for enterprise modeling standard ISO 19439 and the constructs for enterprise modeling standard ISO 19440, which we extend with a requirements modeling view [7] to support, share and save information about stakeholder or system requirements. We propose a generic conceptual model whose constructs allow modeling the way requirements relate to the other modeling views, assessing the matching level between stakeholder and system requirements, and providing justification for design decisions. This common generic conceptual model is also a basis for consistency analysis and verification between the levels of genericity and the modeling views. The verification process allows assessing the correctness of the models and their compliance to meta-models. It is carried out through the definition of modeling rules, consistency rules and completeness criteria. The modeling activities are achieved by stakeholders to whom we provide simple and intuitive modeling languages.
We studied a set of languages and approaches that can be used upstream of detailed design, namely: RDM

(Requirement Definition Model), part of the CIM-OSA modeling process [8]; the goal-oriented requirements languages KAOS [9] and GRL (Goal-Oriented Requirement Language) [10]; the scenario-oriented language UCM (Use Case Maps) [10]; and URML (Unified Requirements Modeling Language) [11], a high-level quality and functional system requirements language. We are not trying to make an inventory of all the languages proposed in the literature, but rather to identify the information that should be addressed while collecting and modeling requirements, so we identified the modeling views that are covered by these languages. We noticed that, along with concepts related to the requirements view (goal, expectation, hypothesis, functional / non-functional requirements, obstacle, etc.), these languages do cover some of the enterprise modeling views, and this shows the potential links that requirements have with enterprise modeling constructs. We also assessed the accessibility of these languages to SMEs' stakeholders based on the notation, the orientation and the basic concepts. We deduced that the studied languages are intended for experts and not for SMEs' end users. For instance, reasoning with goals for functional requirement elicitation is not natural for end users, who are more likely to describe their daily activities rather than think about the motivation behind them. Furthermore, what limits the accessibility of these languages to end users is the use of notations that require special knowledge of modeling artifacts and prior training. Accordingly, we believe that textual formulation using natural language is more appropriate for non-expert users. We propose to use languages expressing business knowledge in a subset of natural language understandable by both humans and computer systems. We defined a set of natural language boilerplates (in French, according to RESULIS' needs) on the basis of the proposed conceptual model to represent all enterprise modeling views. We also want to offer SMEs' stakeholders the freedom to choose between textual and graphical notations, so we are now working on a set of simple graphical notations. Model transformation techniques will be set up to enable switching from one notation to another. In order to guide end users in the definition of their enterprise model and their requirements regarding the new software, we propose a modeling process that comprises the four following steps: organization modeling and role definition, function and behavior modeling, information and resource modeling, and stakeholder requirements definition. A tool will support the modeling process. It will be endowed with verification mechanisms to (i) check the requirements provided by stakeholders and the conformance of the produced models to their meta-models; (ii) detect contradictory behaviors among role and process definitions, for instance situations where stakeholders intervening in the same business processes provide conflicting descriptions regarding the inputs, outputs or order of the activities; and (iii) discern non-exhaustive descriptions where, for instance, the output of an activity (that does not represent the purpose of the business process) is not used by any other activity. We use advanced techniques to ensure correct requirements writing, such as Natural Language Processing (NLP) to verify the lexical correctness of requirements and requirements boilerplates to guide the authoring activity; a minimal illustration of boilerplate-based checking is sketched below.
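As a rough illustration of boilerplate-based requirement authoring (our own sketch; the actual boilerplates of the approach are in French and richer than this English toy template), a requirement statement can be matched against a simple pattern to check that its mandatory parts are present:

```python
import re

# Hypothetical English boilerplate: "<role> shall <action> <object> [when <condition>]"
BOILERPLATE = re.compile(
    r"^(?P<role>[A-Za-z ]+) shall (?P<action>[a-z]+) (?P<object>[A-Za-z ]+?)"
    r"(?: when (?P<condition>.+))?$"
)

def check_requirement(text):
    """Return the named parts of a requirement, or None if it does not fit the boilerplate."""
    match = BOILERPLATE.match(text.strip())
    return match.groupdict() if match else None

# Example: a well-formed statement yields its role, action, object and optional condition
print(check_requirement("The warehouse manager shall validate incoming orders when stock is updated"))
```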

3 Conclusion
Fostering the collaboration between the involved stakeholders during the software requirements elicitation and validation activities involves the construction, by the stakeholders themselves, of a common understanding of the structure and behavior of the enterprise. We propose a requirements elicitation and validation process that is compliant with enterprise modeling reference frameworks. It uses a set of intuitive modeling languages to capture stakeholders' requirements as an entry point for the construction of the enterprise-specific model, and it is supported by verification mechanisms to ensure the quality of the models that will be used in the downstream development phases.

References
1. ... La modélisation systémique en entreprise. C. Braesch & A. Haurat (eds.), Hermès, Paris, 1995.
2. Berio G, Vernadat F. New developments in enterprise modelling using CIMOSA. Computers in Industry, 1999; 40.
3. Bernus P, Nemes L. The contribution of the generalised enterprise reference architecture to consensus in the area of enterprise integration. Proceedings of ICEIMT97.
4. ISO/DIS. Enterprise integration - Constructs for enterprise modelling. ISO/DIS 19440, ISO/TC 184/SC 5.
5. ISO/DIS. CIM Systems Architecture - Framework for enterprise modelling. ISO/DIS 19439, ISO/TC 184/SC 5.
6. Millet P.A. ERP. Thèse, INSA de Lyon.
7. Amokrane N, Chapurlat V, Courbis A.L., Lambolais T, Rahhou M. Modeling frameworks, methods and languages for computerizing Small and Medium-sized Enterprises: review and proposal. I-ESA, Albi, France.
8. Zelm M, Vernadat F, Kosanke K. The CIMOSA business modelling process. Computers in Industry, Amsterdam: Elsevier, 1995.
9. Darimont R, Delor E, Massonet P, Van Lamsweerde A. GRAIL/KAOS: An Environment for Goal-Driven Requirements Engineering. 20th Intl. Conf. on Software Engineering.
10. ITU-T. Recommendation Z.151: User Requirements Notation (URN) - Language Definition. Geneva, Switzerland.
11. Schneider F., Naughton H., & Berenbach B. A modeling language to support early lifecycle requirements modeling for SE. Procedia Computer Science, 2012; 8.

Contribution to System of Systems Engineering (SoSE) [Internal report - LGI2P]

Mustapha Bilal, Nicolas Daclin, and Vincent Chapurlat
LGI2P, Laboratoire de Génie Informatique et d'Ingénierie de Production, ENS Mines Alès, Parc Scientifique G. Besse, Nîmes cedex 1, France
{firstname.lastname}@mines-ales.fr

Abstract. It is agreed that there are similarities between Collaborative Networked Organizations (CNOs) and Systems of Systems (SoS). System of Systems Engineering (SoSE) can be distinguished from classical systems engineering; as in many other engineering and scientific disciplines, SoSE is required to conduct the engineering of such complex systems. The first phase of SoSE is to start by building a model. Therefore, in this report, we propose a meta-model that respects the particularities of the SoS. Moreover, this meta-model includes concepts that allow analyzing the impact of interoperability on what we call the analysis perspectives of the SoS: stability, integrity and performance. Verification and simulation approaches will be used together to permit this analysis.

Keywords: System of Systems (SoS), Collaborative Networked Organizations (CNOs), Verification, Formal proofs, Systems Engineering, System of Systems Engineering, Interoperability

1 Overview
Nowadays, most enterprises and organizations seek to cooperate and to build up their own network in order to obtain a single Collaborative Networked Organization (CNO) which is able to perform a mission that an enterprise, organization or entity alone cannot perform [1]. In this sense, the CNO is considered as a SoS in terms of the ARCON reference modeling framework [2]. Therefore, in order to conduct this kind of complex system, it is required to propose an engineering approach: System of Systems Engineering (SoSE). The subject of SoS Engineering (SoSE) versus Systems Engineering (SE) is debated in the literature. The question has been asked: "Is engineering a system of systems really any different from engineering an ordinary system?" [3]. Others, like us, believe that traditional Systems Engineering (SE) differs from SoS Engineering (SoSE). SoSE differs from SE in the selection phase of the entities that are supposed to be part of the SoS. Indeed, systems, most of which already exist, are selected according to their relevance, capacity and self-interest to fulfill the SoS mission. They are assembled in a way that respects the requirements of the SoS stakeholders and such that their interactions allow them to fulfill this mission. During this assembly, interfaces are required, whether physical (hardware), informational (models and

data exchange protocols) or organizational (rules, procedures and protocols), in order to ensure the necessary interoperability of subsystems [4]. Moreover, their behavior, decision-making autonomy and own organization should not be impacted or influenced more than necessary by risky situations or undesired effects resulting from the interactions between these subsystems. These interactions have not yet been thoroughly studied in the literature and constitute a new and challenging research topic. It is required to help and support the actors in charge of System of Systems (SoS) design to ensure the quality of the design while reducing large and time-consuming modeling and analysis efforts. This has to be done whatever the size or type of the proposed SoS, the various disciplines involved in its design, and the details available at design time about the systems considered relevant by designers to compose the SoS. This report briefly presents the work that has been done. A methodology to achieve SoS design verification has been proposed. It is based, first, on building the SoS meta-model. Second, it proposes the fusion of two complementary approaches of model verification in order to assess the impact of interoperability on the analysis perspectives of the SoS (stability, integrity and performance), so as to maximize the robustness and reliability of the proposed SoS, particularly when facing disturbances due to subsystem interactions during the execution of the SoS operational mission. A formal property specification and proof approach allows the verification of the adequacy and coherence of the SoS model with regard to stakeholders' requirements. Moreover, simulation allows the execution of the architectural model of the SoS and the identification of the impact of interoperability on the SoS analysis perspectives. The SoS meta-model is enriched with concepts and mechanisms allowing the evaluation and testing of various interoperability characteristics in the various operational scenarios resulting from the execution.

2 SoS versus CNO: similarities
Giving the definitions of SoS and CNO draws the first line of similarities. While there is no universal definition of SoS [5], there is a general consensus about its main characteristics [6][7]: operational independence, managerial independence, evolutionary development, emergent behavior, geographic distribution, connectivity and diversity. Now, if we go through the definition of the CNO, we realize that there are a number of common characteristics between CNO and SoS: a collaborative network is a network consisting of a variety of entities (e.g. organizations and people) that are largely autonomous, geographically distributed, and heterogeneous in terms of their operating environment, culture, social capital and goals, but that collaborate to better achieve common or compatible goals, thus jointly generating value, and whose interactions are supported by computer networks [1].

Looking at the life cycle draws the second line of similarities: both CNO and SoS pass through the same life cycle phases of creation, operation, evolution, and dissolution or metamorphosis [1].

3 SoSE principles
Interfacing the entities appears in the creation phase of the CNO life cycle. One of the well-known problems in CNOs is the physical integration of multiple subsystems due to the diversity of interfaces [8]. Therefore, entities are selected and involved under various conditions and constraints, particularly their interoperability, which has to be characterized prior to assembling. Indeed, this assembly establishes various interactions between the subsystems. In this context, interoperability takes on its full meaning when considering these interactions that make the subsystems able to work together. On the one hand, the interactions between subsystems are expected in order to allow the SoS to fulfill its mission. On the other hand, these interactions impose interfaces of various types: technical (e.g. software), organizational (e.g. communication rules), human/machine (e.g. touchscreens) or logical at a high level of abstraction (e.g. resource utilization). Designers' attention therefore has to be concentrated on the interfaces to design. The challenge raised here is to design the interfaces that will improve interoperability by managing the interactions without affecting the entities. For that, a modeling language adapted to SoS modeling and verification expectations is required. The modeling language must permit designing the requested interfaces and managing them without inducing huge modifications or dysfunction of each subsystem. Indeed, the modeling language must allow designers to attest that the SoS model is well constructed, well-formed and coherent with the stakeholders' requirements. Moreover, it is important to mention that the main goal of the research is to prove that interoperability has some influence over the SoS. We have found that there is a natural tension between interoperability and each characteristic of the SoS: the dynamic evolution, the heterogeneity, the autonomy and the connectivity of the SoS are strongly linked to the notion of interoperability. We assume that changing the interoperability between the entities that form the SoS induces changes in the analysis perspectives of the SoS [9]:
Performance [10]: the ability of a SoS to recover its performance objectives.
Stability [10]: the ability of a SoS to maintain its viability and to adapt to any change in its environment.
Integrity [10]: the ability of a SoS to return to a known operating mode when facing a local modification of the existing configuration.
The developed SoS model contains some traditional concepts of systems engineering. However, new concepts related 1) to the behavioral aspect of the SoS, 2) to the interactions between its entities, 3) to interoperability and 4) to verification purposes have been added [9]. The SoS model is based

on four Domain Specific Modeling Languages (DSMLs): a Requirements Modeling Language, a Physical Modeling Language, a Functional Modeling Language and a Behavioral Modeling Language. Executing the developed SoS model allows analyzing the impact of interoperability on the SoS analysis perspectives through step-by-step simulation and formal proof techniques [10].

4 Conclusion and Perspectives
This report has briefly introduced the similarities between SoS and CNO and how System of Systems Engineering has to be used to conduct such complex systems. Moreover, it has shown that changing the interoperability between SoS entities has effects on the analysis perspectives of the SoS: stability, integrity and performance. A SoS meta-model has been developed; it takes into account new concepts that will allow analyzing the impact of interoperability through step-by-step simulation and formal proof techniques. Further work has to be done to apply the simulation and verification methodology.

References
1. Camarinha-Matos, L.M., Afsarmanesh, H., Galeano, N., Molina, A.: Collaborative networked organizations - concepts and practice in manufacturing enterprises. Computers & Industrial Engineering 57(1) (August 2009)
2. Camarinha-Matos, L.M., Afsarmanesh, H.: Collaborative Networks: Reference Modeling. Springer (2008)
3. Sheard, S.: Is Systems Engineering for Systems of Systems Really Any Different? INCOSE Insight 9(1) (2006)
4. Mallek, S., Daclin, N., Chapurlat, V.: The application of interoperability requirement specification and verification to collaborative processes in industry. Computers in Industry 63(7) (September 2012)
5. Sage, A.: Processes for System Family Architecting, Design, and Integration. IEEE Systems Journal 1(1) (2007)
6. Maier, M.W.: Architecting principles for systems-of-systems. Systems Engineering 1(4) (1998)
7. Stevens Institute of Technology, Castle Point on Hudson, Hoboken, NJ: Report on System of Systems Engineering (2006)
8. Reithofer, W., Naeger, G.: Bottom-up planning approaches in enterprise modelling - the need and the state of the art. Computers in Industry 33 (1997)
9. Bilal, M., Daclin, N., Chapurlat, V.: Collaborative Networked Organizations as System of Systems: a model-based engineering approach. IFIP AICT, PRO-VE (2014)
10. Bilal, M., Daclin, N., Chapurlat, V.: System of Systems design verification: problematic, trends and opportunities. Enterprise Interoperability VI (2014)

2nd year PhD Thesis Summary: Efficient Local Search for Large Scale Combinatorial Problems

Mirsad Buljubašić 1, Michel Vasquez 1, and Haris Gavranović 2 (co-advisor)
1 Ecole des Mines d'Alès, LGI2P Research Center, Nîmes, France
{mirsad.buljubasic,michel.vasquez}@mines-ales.fr
2 International University of Sarajevo, Bosnia and Herzegovina
haris.gavranovic@gmail.com

1 Introduction
Many problems of practical and theoretical importance within the fields of Artificial Intelligence and Operations Research are of a combinatorial nature. Combinatorial problems involve finding values for discrete variables such that certain conditions are satisfied and an objective function is optimized. One of the most widely used strategies for solving combinatorial optimization problems is local search. Local search is an iterative heuristic which typically starts with any feasible solution and improves its quality iteratively; at each step, it considers only local operations to improve the cost of the solution. The aim of the thesis (started December 1st, 2012) is to develop efficient local search algorithms for a few large scale combinatorial optimization problems. The problems include the Machine Reassignment Problem (MRP), the Generalized Assignment Problem (GAP), the Bin Packing Problem (BPP), the Large Scale Energy Management Problem (LSEM) and the SNCF Rolling Stock Problem (RSP). Some of the problems concerned are real-world industrial problems proposed by companies (Google, EDF, SNCF). Here we present the work that has been done until now, emphasizing the work done in the last several months on the SNCF Rolling Stock Problem.

2 First year
In the first year of the thesis, the problems addressed were MRP, GAP, BPP and LSEM. The emphasis was on the MRP, and the algorithm developed for solving MRP has been adapted for GAP and BPP. The Machine Reassignment Problem is a problem proposed at the ROADEF/EURO

Challenge, the competition organized jointly by the French Operations Research Society (ROADEF) and the European Operations Research Society (EURO); the problem was proposed by Google. The method used to solve MRP is a multi-start local search combined with a noising strategy, and high quality results are obtained. The method was tested on the 30 instances proposed by Google and used for the challenge evaluation. Most of the numerical results obtained are proven to be optimal, near optimal, or the best known.

GAP and BPP are well-known NP-hard combinatorial optimization problems. Both problems are relaxations of MRP and, therefore, we use a local search algorithm similar to the one developed for MRP. The method has been tested on standard benchmarks from the literature. The results obtained with the adapted method are satisfactory (but not quite in the same range as the best results that can be found in the literature).

LSEM is a problem proposed at the ROADEF/EURO 2010 Challenge. The goal is to fulfil the respective demand of energy over a time horizon of several years, while optimizing the total operating cost of all machinery. The problem was posed by Electricité de France (EDF) and is a real-world industrial problem solved at EDF. The method combines constraint programming (for solving the problem of scheduling outages), a greedy construction procedure (for finding a feasible production plan) and local search (for solution improvement). High quality results are obtained on the benchmarks given by EDF. There is a lot of room for improving the method and it could be the subject of future work.

The work on those problems is mainly finished and more details can be found in the following published papers:
Michel Vasquez, Mirsad Buljubašić: Une procédure de recherche itérative en deux phases : la méthode GRASP (March 2014). Chapter in the book Métaheuristiques pour l'optimisation difficile.
Mirsad Buljubašić, Haris Gavranović: An Efficient Local Search with Noising Strategy for Google Machine Reassignment Problem. Annals of Operations Research, to appear.
Mirsad Buljubašić, Haris Gavranović: A Hybrid Approach Combining Local Search and Constraint Programming for a Large Scale Energy Management Problem. RAIRO - Operations Research 47(4) (2013).

3 SNCF Rolling Stock Problem
Most of the work in the second year has been done on the SNCF Rolling Stock Problem. All of this work has been done while participating in the ROADEF/EURO 2014 Challenge competition. The problem is defined by the French railway company SNCF.

3.1 Short description
The aim of this challenge is to find the best way to handle trains between their arrivals and departures in terminal stations. Today, this problem is shared between several departments at SNCF, so it is rather a collection of sub-problems which are solved in a sequential way. Between arrivals and departures in terminal train stations, trains never vanish; unfortunately, this aspect is often neglected in railway optimization approaches. Whereas in the past rail networks had enough capacity to handle all trains without much trouble, this is not true anymore: traffic has increased a lot in recent years, some stations have real congestion issues, and the current trend will make this even more difficult to deal with in the next few years. The problem involves temporary parking and shunting on infrastructure elements which are typically platforms, maintenance facilities, rail yards and the tracks linking them. This rolling stock unit management problem on railway sites is extremely hard for several reasons: most of the induced sub-problems are NP-hard, such as the assignment problem, the scheduling problem, the conflict problem on gates and the platform assignment problem.

3.2 Solving method
We propose a two-phase approach combining mixed integer programming (MIP) and heuristics. In the first phase, a train assignment problem (AP) is solved with a combination of a greedy heuristic and branch-and-bound; the objective is to maximize the number of assigned departures while respecting technical constraints. In the second phase, the train scheduling problem (SP), which consists of scheduling all the trains on the station's infrastructure while minimizing the number of cancelled departures, is solved using a constructive heuristic. The goal of the SP is to schedule as many assignments as possible, using the resources of the station and respecting all constraints. Local search is used to improve the obtained solutions; a generic sketch of such a local search improvement loop is given below. Several methods are used to solve the sub-problems, such as greedy algorithms, local search, tabu search, matching algorithms, branch-and-bound, depth-first search, oscillation strategies and multi-start methods.
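As an illustration of the kind of local search improvement loop used throughout this work, here is a toy sketch of a multi-start local search with a noising-style acceptance, in the spirit of the MRP method described above (it is not the challenge code, and the callback names are ours):

```python
import random

def noised_local_search(initial, neighbors, cost, restarts=10, iters=1000, noise=0.1, seed=0):
    """Multi-start local search: accept a neighbor when its noised cost improves on the current cost."""
    rng = random.Random(seed)
    best = initial(rng)
    best_cost = cost(best)
    for _ in range(restarts):
        current = initial(rng)
        current_cost = cost(current)
        for _ in range(iters):
            candidate = neighbors(current, rng)
            candidate_cost = cost(candidate)
            # Noising strategy: a small random perturbation of the comparison
            # lets the search occasionally accept slightly worse moves.
            if candidate_cost + rng.uniform(-noise, noise) < current_cost:
                current, current_cost = candidate, candidate_cost
        if current_cost < best_cost:
            best, best_cost = current, current_cost
    return best, best_cost
```

Here `initial`, `neighbors` and `cost` are problem-specific callbacks, for example a random assignment of departures, a single reassignment move, and the number of cancelled departures.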

4. R. Masson, T. Vidal, J. Michallet, P.H.V. Penna, V. Petrucci, A. Subramanian, and H. Dubedout (2012). An iterated local search heuristic for multi-capacity bin packing problems and machine reassignment.
5. Aarts, Emile and Lenstra, Jan K. (1997). Local Search in Combinatorial Optimization. John Wiley & Sons, Inc., New York, NY, USA.
6. Pisinger, David and Ropke, Stefan (2007). A general heuristic for vehicle routing problems. Computers & Operations Research 34(8).
7. Yagiura, Mutsunori, Iwasaki, Shinji, Ibaraki, Toshihide and Glover, Fred (2004). A very large-scale neighborhood search algorithm for the multi-resource generalized assignment problem. Discrete Optimization 1(1).
8. Alvim, Adriana C. F., Ribeiro, Celso C., Glover, Fred and Aloise, Dario J. (2004). A Hybrid Improvement Heuristic for the One-Dimensional Bin Packing Problem. Journal of Heuristics 10(2).

Ensemble methods for transfer learning in brain-computer interfacing

Sami DALHOUMI, Gérard DRAY, Jacky MONTMAIN
Parc Scientifique G. Besse, Nîmes, France

Introduction

A brain-computer interface (BCI) is a communication system that allows people suffering from severe neuromuscular disorders to interact with their environment without using the peripheral nervous and muscular system, by directly monitoring electrical or hemodynamic activity of the brain. A BCI can be seen as a pattern recognition system that classifies different brain activity patterns into different brain states according to their spatio-temporal characteristics [1]. The relevant signals that decode brain states may be hidden in highly noisy data or overlapped by signals from other brain states, and extracting such information is a very challenging issue. To do so, a long calibration phase is needed before every use of the BCI in order to collect enough data for feature selection and classifier training. Because calibration is time-consuming and tedious even for healthy users, several machine learning approaches have been proposed to address this issue. Among them, subject transfer and session transfer frameworks have been shown to be the most promising [2-3]. They consist of incorporating data recorded from other users and/or during other sessions into the learning process of the current user. Most existing approaches are based on the assumption that there is a common underlying brain activity pattern which they try to extract in order to build a subject-independent classification model. Although this assumption can be effective for able-bodied users, it may be too strong for disabled users, as their brain activity patterns are much more variable. This work aims to develop new transfer learning frameworks that reduce calibration time in BCI technology while maintaining good classification accuracy. These frameworks are based on the Bayesian model averaging technique, which seems well suited to transfer learning applications, especially when learning from many sources of data. We opted for ensemble strategies because they allow modeling many patterns simultaneously and relax the assumptions made in previous work.

Page 27

Contributions

In this section, we present two transfer learning frameworks for reducing calibration time in BCI technology. Both approaches are based on Bayesian model averaging, a data-dependent aggregation method that allows tuning classifier weights dynamically and adapting the ensemble to the brain signals of each user. We validated our approaches using two types of signals used in BCI technology: near-infrared spectroscopy (NIRS) signals and electroencephalography (EEG) signals.

2.1 Bayesian model averaging

Let H = {h_1, ..., h_K} be a set of hypotheses and D a training set. The probability of having class label y given a feature vector x is:

P(y | x, D) = Σ_{h ∈ H} P(y | x, h) P(h | D)    (1)

In transfer learning, since the training and test distributions are different, the hypothesis priors should incorporate information about the test set in order to adapt the ensemble to the target distribution [4]. In this case, (1) is replaced by

P(y | x, T, D) = Σ_{h ∈ H} P(y | x, h) P(h | T, D)    (2)

where T is the test set. In our application, the hypotheses h are classification models learned using data recorded from different users and/or during different sessions of the same user, and T is a labeled set recorded during the calibration phase. The goal is to find a good estimation of P(h | T, D) while keeping T as small as possible.

2.2 Graph-based transfer learning for managing brain signals variability in NIRS-based BCIs

In this approach, we model the heterogeneous NIRS data recorded from different users during different sessions by a bipartite graph. The two sets of vertices correspond to the NIRS data sets and the feature set respectively. An edge exists between a data set and a feature if the feature is an explanatory feature in that data set. The partitioning of this bipartite graph allows the creation of groups of data sets that share (approximately) the same spatial distribution

Page 28
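To make the averaging in (2) concrete, here is a minimal Python sketch of Bayesian model averaging with data-dependent weights estimated from a small calibration set. It assumes scikit-learn-style classifiers exposing predict and predict_proba with a consistent class ordering, and it uses calibration accuracy as one simple, assumed way of estimating the priors P(h | T, D); it is an illustration, not the exact estimator used in the approaches described here.

import numpy as np

def bma_predict_proba(classifiers, priors, x):
    """Bayesian model averaging: P(y|x, T, D) = sum_h P(y|x, h) * P(h|T, D).
    `classifiers` are pre-trained models; `priors` are the data-dependent
    weights P(h|T, D) estimated from the small calibration set T."""
    priors = np.asarray(priors, dtype=float)
    priors = priors / priors.sum()
    probs = np.stack([clf.predict_proba(x.reshape(1, -1))[0] for clf in classifiers])
    return probs.T @ priors   # weighted average over hypotheses

def estimate_priors_from_calibration(classifiers, X_cal, y_cal):
    """One simple (assumed) estimator of P(h|T, D): each classifier's accuracy
    on the calibration trials, normalised so the priors sum to one."""
    acc = np.array([np.mean(clf.predict(X_cal) == y_cal) for clf in classifiers])
    if acc.sum() == 0:
        return np.full(len(classifiers), 1.0 / len(classifiers))
    return acc / acc.sum()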

of explanatory features. Hypotheses are learned using these groups separately. NIRS signals recorded during a new session are classified as follows: first, we find the group of data sets sharing the most similar spatial distribution of brain activity patterns and then use the hypothesis trained on that data to predict the class labels of each trial in the new session. In real-time conditions, assuming that the spatial distribution of brain activity patterns does not vary significantly during the same session, only the first few trials (i.e., the test set T) are used to find the closest co-cluster in our support set. In the Bayesian model averaging framework given in (2), P(h | T, D) is then calculated using the "winner takes all" rule and, consequently, the prediction is determined using only one hypothesis. This approach was validated using a real NIRS data set and is accepted for publication in the proceedings of the 15th International Conference on Information Processing and Management of Uncertainty [5].

2.3 Ensemble-based transfer learning for EEG-based brain-computer interfacing

Because of the low spatial resolution of EEG signals, spatial filtering is a very important stage that needs to be performed before classification. Common spatial patterns (CSP) is the most widely used spatial filtering technique for EEG-based BCIs. It is based on the calculation of the covariance matrices of the different classes, which requires enough labeled data. Thus, reducing the duration of the calibration phase may dramatically deteriorate classification performance. In this section, we present a transfer learning framework for EEG classification that allows learning CSP filters and classifiers from other BCI users and consequently reduces calibration time for the current user. It consists of the following steps:

1. Calculate spatial filters and train the corresponding classifier for each user separately.
2. Project the small labeled set of EEG signals recorded during the calibration phase of the current user on the spatial filters of the other users.
3. Apply Bayesian model averaging to the previously learned classifiers. Classifier priors are estimated empirically using the projections of the test set of the current user.

Page 29

4. Perform leave-one-trial-out cross-validation (LOOCV) on the test set of the current user in order to check whether the transfer learning framework outperforms the traditional learning approach or not.
5. If yes, use the transfer learning framework to predict the class labels of the trials performed during the rest of the session. If not, use the traditional learning technique in order to avoid "negative transfer".

In step three, base classifier priors are estimated as follows:

(6)

where the involved term is the projection of the feature vector on the spatial filters of the corresponding user. Evaluation on a real EEG dataset (BCI competition IV dataset 2A) showed that our approach significantly outperforms traditional learning techniques when the size of the test set is small.

Acknowledgement

We would like to thank Stéphane PERREY and Gérard DEROSIERE for the valuable collaboration and fruitful scientific discussions.

References

1. Lotte, F., Congedo, M., Lécuyer, A., Lamarche, F., Arnaldi, B.: A Review of Classification Algorithms for EEG-based Brain-Computer Interfaces. Journal of Neural Engineering, vol. 4, R1--R13 (2007).
2. Tu, W., Sun, S.: A subject transfer framework for EEG classification. Neurocomputing, vol. 82, pp (2011).
3. Samek, W., Meinecke, F.C., Müller, K.R.: Transferring Subspaces Between Subjects in Brain-Computer Interfacing. IEEE Transactions on Biomedical Engineering, vol. 60, no. 8, pp (2013).
4. Gao, J., Fan, W., Jiang, J., Han, J.: Knowledge transfer via multiple model local structure mapping. Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Las Vegas, Nevada, USA (2008).
5. Dalhoumi, S., Derosiere, G., Dray, G., Montmain, J., Perrey, S.: Graph-based transfer learning for managing brain signals variability in NIRS-based BCIs. Proceedings of the 15th International Conference on Information Processing and Management of Uncertainty (2014).

Page 30
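Steps 4 and 5 amount to a guard against negative transfer: the ensemble is kept only if it beats a classifier trained on the calibration data alone. The sketch below illustrates this leave-one-trial-out comparison; ensemble_predict and train_and_predict_single are assumed callables provided by the surrounding BCI pipeline, so this is an illustrative simplification rather than the authors' implementation.

import numpy as np

def use_transfer_learning(ensemble_predict, train_and_predict_single, X_cal, y_cal):
    """Return True if the transfer ensemble matches or beats a classifier trained
    on the calibration trials alone, measured by leave-one-trial-out accuracy."""
    hits_transfer, hits_single = 0, 0
    n = len(y_cal)
    for i in range(n):
        mask = np.arange(n) != i
        x_left_out = X_cal[i]
        # prediction of the transfer ensemble on the left-out trial
        hits_transfer += ensemble_predict(x_left_out) == y_cal[i]
        # prediction of a classifier trained only on the remaining calibration trials
        hits_single += train_and_predict_single(X_cal[mask], y_cal[mask], x_left_out) == y_cal[i]
    return hits_transfer >= hits_single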

Coping with Imprecision During a Semi-automatic Conceptual Indexing Process

Nicolas Fiorini 1, Sylvie Ranwez 1, Jacky Montmain 1, and Vincent Ranwez 2
1 Centre de recherche LGI2P de l'école des mines d'Alès, Parc Scientifique Georges Besse, F Nîmes cedex 1, France {nicolas.fiorini,sylvie.ranwez,jacky.montmain}@mines-ales.fr
2 Montpellier SupAgro, UMR AGAP, F Montpellier, France {vincent.ranwez}@supagro.inra.fr

Abstract. Concept-based information retrieval is known to be a powerful and reliable process. It relies on a semantically annotated corpus, i.e. resources indexed by concepts organized within a domain ontology. The conception and enlargement of such an index is a tedious task, which is often a bottleneck due to the lack of automated solutions. In this synthesis, we first introduce a solution to assist experts during the indexing process thanks to a k-nearest neighbors approach. The idea is to let them position the new resource on a semantic map containing already indexed resources, and to propose an indexation of this new resource based on those of its neighbors. To further help users, we then introduce indicators to estimate the robustness of the indexation with respect to the indicated position and to the annotation homogeneity of nearby resources. It is possible to visually inform users of their margin of error, thereby reducing the risk of an unsatisfying annotation.

Keywords: conceptual indexing, imprecision management, visualization

1 Introduction

Over the last decade, the amount of data has been growing incessantly because of new or improved digital technologies, which give anyone the ability to create and share new content. The management of massive collections is a problem that needs to be addressed by new methods capable of handling big data. One key process is document indexing: it associates each document with metadata so that the corpus can be more easily exploited by applications such as information retrieval or recommender systems. Most of the time, annotations are made of simple words. However, ambiguous words, e.g. jaguar (car or animal), and synonyms hamper such keyword-based applications. Also, no relation is considered between the words car and vehicle, whereas their meanings are quite close. In order to overcome these problems, a widespread solution is to rely on knowledge representations such as ontologies [1]. Annotations of entities (genes, biomedical papers, etc.) using such structured vocabularies are more informative since their concepts and the relations among them tackle the

Page 31

above-mentioned limitations of keyword-based approaches [2]. However, the indexing process is hard to fully automate and time-consuming when done manually by experts. Here we describe a way of interacting with a user to assist them during the indexing process. Visualization techniques are used to accurately define the neighboring documents of the one to annotate. Once this neighborhood has been identified, the system suggests concepts for characterizing the document.

2 Related work

In most existing methods, indexing only consists in ordering previously collected concepts, as in MTI [3]. More recently, various machine learning (ML) approaches were applied to learn how relevant a concept is to a given document. They all show better results than MTI: gradient boosting [4], reflective random indexing [5] and learning-to-rank [6]. Among all indexing models, Yang [7] stated that the k-nearest neighbor (knn) approach is the only method that can scale while providing good results. This approach is based on the neighborhood of the document to annotate. Each neighbor acts like a voter for each potential annotating concept. In basic applications, the most frequent concepts in the k neighbors' annotations will be the ones proposed for annotating the new document. Huang et al. [6] present a more elaborate approach. First, the union of the concepts indexing the knns provides the set of concepts to start with. Second, the concepts are ordered thanks to the learning-to-rank [8] algorithm relying on a set of natural language features, and the top 25 concepts are returned.

3 A New Semantic Annotation Propagation Framework

The indexing process we propose consists of two steps: the building of a semantic map containing already indexed resources, and the identification of relevant concepts once the user has placed a document to be annotated on this map. Relevant concepts are then identified using a knn approach by propagating the annotations of the k neighbors of the document to be indexed. Pointing to the correct location for this document is thus a decisive action. During the first step, the construction of the semantic map presented to the user requires i) identifying a subset of relevant resources and ii) organizing them in a visual and meaningful way. The first point is obviously crucial and can be tackled thanks to information retrieval approaches. Given an input set of relevant resources, we chose to use MDS (Multi-Dimensional Scaling) to display them on a semantic map so that resource closeness on the map reflects as much as possible their semantic relatedness. During the second step, the annotation propagation starts with the selection of the set N of the k closest neighbors of the click. We make a first raw annotation A_0, which is the union of all annotations of N, so A_0 = ∪_{n_i ∈ N} Annotation(n_i). We defined an objective function which, when maximized, gives an annotation A* ⊆ A_0 that is the median of those of the elements of N, i.e.:

A* = arg max_{A ⊆ A_0} score(A),   score(A) = Σ_{n_i ∈ N} sim(A, Annotation(n_i))    (1)

Page 32

35 Where sim(a, Annotation(n i )) denotes the groupwise semantic similarity between two groups of concepts, respectively A and Annotation(n i ). This subset can not be found using a brute force approach as there are 2 A0 solutions. Therefore, the computation relies on a greedy heuristic starting from A 0 and deleting concepts one by one. The concept to be deleted at each step is the one leading to the greatest improvement of the objective function. When there is no possible improvement, the algorithm stops and returns a locally optimal A. 4 Coping with Imprecision There is one main cause that may affect the neighborhood definition, correlated with one of the advantages of the method: the user interaction. We propose to estimate the robustness of the proposed annotation with respect to the click position to help the user focusing on difficult cases while going faster on easier ones. Therefore, the user needs to approximately know the impact of a misplacement of an item on its suggested annotation. We compute an annotation stability indicator prior to display the map and visually help users by letting them know their margin of error when clicking. On a zone where this indicator is high the annotation is robust to a misplacement of a new resource because all element annotations are rather similar. Whereas if this indicator is low, the annotation variability associated to a misplaced click is high. To efficiently compute those annotation stability indicators, we first split the map into smaller elementary pieces and generate the annotation corresponding to their center. Using those pre-computed indexations we then assess the robustness of each submaps of M by identifying the number of connected elementary submaps sharing a similar annotation. 5 Evaluation Our application protocol is based on scientific paper annotation. As the documents are annotated with the MeSH ontology, we rely on this structure in our application. We use the Semantic Measures Library (SML) [9] in order to assess the groupwise semantic similarities. In order to enhance the human-machine interaction and to improve the efficiency of the indexation process, we propose to give visual hints to the end-user about the impact of a misplacement. To that aim we color the area of the map surrounding the current mouse position that will led to similar annotations. More precisely, the colored area is such that positioning the item anywhere in this area will lead to an indexation similar to the one obtained by positioning the item at the current mouse position. Figure 1 shows a representation of such zones on different parts of the same map. 6 Conclusion In this synthesis, we describe a new method inspired from knn approaches in which users play a key role by pointing a location on a map thus implicitly Page 33

(a) Cursor on a homogeneous zone   (b) Cursor on a heterogeneous zone

Fig. 1. Visual hints of position deviation impact. The cursor is surrounded by a grey area indicating positions that would lead to similar annotations.

6 Conclusion

In this synthesis, we describe a new method inspired by knn approaches in which users play a key role by pointing at a location on a map, thus implicitly defining the neighborhood of a document to annotate. In order to help the user, the system displays the homogeneity of the zone hovered over by the user. Therefore, one can easily know how careful they need to be when placing the document on the map. We plan to pursue this work by studying possible algorithmic optimizations for generating annotations, which would make this method more usable.

References

1. Haav, H., Lubi, T.: A survey of concept-based information retrieval tools on the web. Proc. 5th East-European Conf. ADBIS, vol. 2, pp ,
2. Baziz, M., Boughanem, M., Pasi, G., Prade, H.: An information retrieval driven by ontology from query to document expansion. Large Scale Semant. Access to Content (Text, Image, Video, Sound), pp ,
3. Aronson, A.R., Mork, J.G., Gay, C.W., Humphrey, S.M., Rogers, W.J.: The NLM indexing initiative's medical text indexer. Medinfo, vol. 11, no. Pt 1, pp ,
4. Delbecque, T., Zweigenbaum, P.: Using Co-Authoring and Cross-Referencing Information for MEDLINE Indexing. AMIA Annu. Symp. Proc., vol. 2010, p. 147, Jan
5. Vasuki, V., Cohen, T.: Reflective random indexing for semi-automatic indexing of the biomedical literature. J. Biomed. Inform., vol. 43, no. 5, pp , Oct
6. Huang, M., Névéol, A., Lu, Z.: Recommending MeSH terms for annotating biomedical articles. J. Am. Med. Informatics Assoc., vol. 18, no. 5, pp ,
7. Yang, Y.: An evaluation of Statistical Approaches to Text Categorization. Inf. Retr. Boston, vol. 1, no. 1-2, pp ,
8. Cao, Z., Qin, T., Liu, T., Tsai, M., Li, H.: Learning to rank: from pairwise approach to listwise approach. Proc. 24th Int. Conf. Mach. Learn., pp
9. Harispe, S., Ranwez, S., Janaqi, S., Montmain, J.: The semantic measures library and toolkit: fast computation of semantic similarity and relatedness using biomedical ontologies. Bioinformatics, vol. 30, no. 5, pp , Mar

Page 34

A three-level formal model for software architecture evolution

Abderrahman Mokni+, Marianne Huchard*, Christelle Urtado+, Sylvain Vauttier+, and Huaxi (Yulin) Zhang
+LGI2P, Ecole Nationale Supérieure des Mines d'Alès, Nîmes, France
*LIRMM, CNRS and Université de Montpellier 2, Montpellier, France
INRIA / ENS Lyon, France
{Abderrahman.Mokni, Christelle.Urtado, Sylvain.Vauttier}@mines-ales.fr, huchard@lirmm.fr, yulinz88@gmail.com

1 Introduction

Software evolution has gained a lot of interest in recent years [1]. Indeed, as software ages, it needs to evolve and be maintained to fit new user requirements. This avoids building new software from scratch and hence saves time and money. Handling evolution in large component-based software systems is complex, and evolution may lead to architecture inconsistencies and to incoherence between design and implementation. Many ADLs have been proposed to support architecture change; examples include C2SADL [2], Wright [3] and π-ADL [4]. Although most ADLs integrate architecture modification languages, handling and controlling architecture evolution over the whole software lifecycle is still an important issue. In our work, we attempt to provide a reliable solution to architecture-centric evolution that preserves consistency and coherence between architecture levels. We propose a formal model for our three-level ADL Dedal [5] that provides rigorous typing rules and evolution rules using the B specification language [6]. The remainder of this paper is organized as follows: Section 2 gives an overview of Dedal; Section 3 summarizes our contributions before Section 4 concludes and discusses future work.

2 Overview of Dedal, the three-level ADL

Dedal is a novel ADL that covers the whole lifecycle of component-based software. It proposes a three-step approach for specifying, implementing and deploying software architectures in a reuse-based process. The abstract architecture specification is the first level of software architecture descriptions. It represents the architecture as designed by the architect after analyzing the requirements of the future software. In Dedal, the architecture specification is composed of component roles and their connections. Component roles are abstract and partial component type specifications. They are identified by the architect in order to search for and select corresponding concrete components in the next step.

Page 35

The concrete architecture configuration is an implementation view of the software architecture. It results from the selection of existing component classes in component repositories. Thus, an architecture configuration lists the concrete component classes that compose a specific version of the software system. In Dedal, component classes can be either primitive or composite. Primitive component classes encapsulate executable code. Composite component classes encapsulate an inner architecture configuration (i.e. a set of connected component classes which may, in turn, be primitive or composite). A composite component class exposes a set of interfaces corresponding to the unconnected interfaces of its inner components. The instantiated architecture assembly describes the software at runtime and gathers information about its internal state. The architecture assembly results from the instantiation of an architecture configuration. It lists the instances of the component and connector classes that compose the deployed architecture at runtime and their assembly constraints (such as the maximum numbers of allowed instances).

3 Summary of ongoing research

3.1 Dedal to B formalization

Dedal is a relatively rich ADL since it proposes three levels of architecture descriptions and supports component modeling and reuse. However, the present usage of Dedal is limited since there is no formal type theory for Dedal components and hence no way to decide on component compatibility and substitutability, nor on the relations between the three abstraction levels. To tackle this issue, we proposed in [7] a formal model for Dedal that supports all its underlying concepts. The formalization is specified in B, a language based on set theory and first-order logic, with a flexible and simple expressiveness. The formal model is then enhanced with invariant constraints to set rules between Dedal concepts.

3.2 Intra-level and inter-level rules in Dedal

Intra-level rules in Dedal concern substitutability and compatibility between components of the same abstraction level (component roles, concrete component types, instances). Defining intra-level relations is necessary to set the architecture completeness property: an architecture is complete when all its required functionalities are met. This implies that all required interfaces of the architecture components must be connected to a compatible provided interface. Inter-level rules are specific to Dedal and concern relations between components at different abstraction levels, as shown in Figure 1. Defining inter-level relations is mandatory to decide on the coherence between abstraction levels.

Page 36
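As an illustration of the completeness property stated above, the following sketch checks that every required interface is connected to a compatible provided interface. The connected and compatible predicates stand for Dedal's typing rules, which are actually expressed as B invariants; this Python rendering is only a readable approximation, not the formal model itself.

def architecture_is_complete(required_itfs, provided_itfs, connected, compatible):
    # Intra-level completeness: every required interface of the architecture's
    # components must be connected to a compatible provided interface.
    # `connected` and `compatible` are assumed predicates standing in for the
    # invariants of the B specification.
    return all(any(connected(req, prov) and compatible(req, prov)
                   for prov in provided_itfs)
               for req in required_itfs)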

Fig. 1. Inter-level relations in Dedal

For instance, the conformance rule between a specification and a configuration is stated as follows: a configuration C implements a specification S if and only if all the roles of S are realized by the concrete component classes of C.

3.3 Evolution rules in Dedal

An evolution rule is an operation that makes a change in a target software architecture through the deletion, addition or substitution of one of its constituent elements (components and connections). Each rule is composed of three parts: the operation signature, preconditions and actions. Specific evolution rules are defined at each abstraction level to perform change on the corresponding formal description. These rules are triggered by the evolution manager when a change is requested. First, a sequence of rule triggers is generated to reestablish consistency in the formal description at the initial level of change. Afterwards, the evolution manager attempts to restore coherence between the other descriptions by executing the adequate evolution rules. Figure 2 presents the corresponding condition diagram of the proposed evolution process.

4 Conclusion and future work

In this paper, we give an overview of our three-level ADL Dedal and its formal model. At this stage, a set of evolution rules is proposed to handle architecture change during the three steps of the software lifecycle: specification, implementation and deployment. The rules were tested and validated on sample models using a B model checker. As future work, we aim to manage the history of architecture changes in Dedal descriptions as a way to manage software system versions. Furthermore, we are considering automating evolution by integrating Dedal and the evolution rules into an Eclipse-based platform.

Page 37
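The conformance rule and the shape of an evolution rule (signature, preconditions, actions) can be sketched in the same spirit. The realizes and substitutable predicates and the dictionary-based architecture representation below are hypothetical stand-ins for the B specification; the sketch only illustrates how preconditions guard the actions of a rule.

def configuration_implements_specification(roles, component_classes, realizes):
    # Conformance rule: every role of the specification must be realized by
    # at least one concrete component class of the configuration.
    return all(any(realizes(cls, role) for cls in component_classes) for role in roles)

def substitute_component(architecture, old_comp, new_comp, substitutable):
    # Evolution rule sketch; signature = (architecture, old_comp, new_comp).
    # Preconditions: the old component is present and the new one is
    # substitutable for it according to the intra-level rules.
    if old_comp not in architecture["components"] or not substitutable(new_comp, old_comp):
        return False
    # Actions: swap the component and rewire its connections.
    architecture["components"].remove(old_comp)
    architecture["components"].append(new_comp)
    architecture["connections"] = [
        (new_comp if a == old_comp else a, new_comp if b == old_comp else b)
        for (a, b) in architecture["connections"]]
    return True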

Fig. 2. Condition diagram of the evolution process

References

1. Mens, T., Serebrenik, A., Cleve, A., eds.: Evolving Software Systems. Springer (2014)
2. Medvidovic, N.: ADLs and dynamic architecture changes. In: Joint Proceedings of the Second International Software Architecture Workshop and International Workshop on Multiple Perspectives in Software Development (Viewpoints 96) on SIGSOFT 96 Workshops, New York, USA, ACM (1996)
3. Allen, R., Garlan, D.: A formal basis for architectural connection. ACM TOSEM 6(3) (July 1997)
4. Oquendo, F.: Pi-ADL: An architecture description language based on the higher-order typed Pi-calculus for specifying dynamic and mobile software architectures. SIGSOFT Software Engineering Notes 29(3) (May 2004)
5. Zhang, H.Y., Urtado, C., Vauttier, S.: Architecture-centric component-based development needs a three-level ADL. In: Proceedings of the 4th ECSA. Volume 6285 of LNCS, Copenhagen, Denmark, Springer (August 2010)
6. Abrial, J.R.: The B-book: Assigning Programs to Meanings. Cambridge University Press, New York, USA (1996)
7. Mokni, A., Huchard, M., Urtado, C., Vauttier, S., Zhang, H.Y.: Fostering component reuse: automating the coherence verification of multi-level architecture descriptions. Submitted to ICSEA 2014 (2014)

Page 38

OBJECT MATCHING IN VIDEOS: A SMALL REPORT

Darshan Venkatrayappa, Philippe Montesinos, Daniel Depp
Ecole des Mines d'Alès, LGI2P, Parc Scientifique Georges Besse, Nîmes, France
{Darshan.Venkatrayappa, Philippe.Montesinos, Daniel.Depp}@mines-ales.fr

Abstract. In this report, we propose a new approach for object matching in videos. Points of interest are extracted from the object using a simple color Harris detector. By applying our novel descriptor to these points we obtain point descriptors, or signatures. This novel deformation-invariant descriptor is built from rotating anisotropic half-Gaussian smoothing convolution kernels. The descriptor thus obtained has a smaller dimension than the well-known SIFT descriptor; furthermore, its dimension can be controlled by varying the angular step of the rotating filter. We achieve Euclidean invariance by computing an FFT-based correlation between two signatures, and deformation invariance is achieved using Dynamic Time Warping (DTW).

1 Introduction

Object matching has found prominence in a variety of applications such as image indexing in image databases, object detection and tracking, shape matching and image classification. In a nutshell, object matching can be defined as matching a model representing an object to an instance of that object in another image. Object matching methods are of two types: 1) direct methods and 2) feature-based methods. Lucas and Kanade [2] came up with a direct method, in which a parametric optical flow mapping is sought between two images so as to minimize the sum of squared differences between objects in the two images. In contrast, feature-based methods such as SIFT [10] and ASIFT [12] are designed to be scale and affine invariant. In this case, the most common approach to match objects across images is to find the points of interest of the object in the images; this is followed by computing descriptors of the regions surrounding these points and then matching these descriptors across images. The development of these methods has led to the genesis of new point detectors and descriptors that are invariant to changes in image transformations [1]. Using random ferns [13], the authors achieve real-time object matching in videos: they bypass the patch preprocessing step by using a Naive Bayesian classification framework, producing an algorithm that

Page 39

is simple, efficient, and robust. The only drawback of this approach is the offline training stage, which is very time-consuming. In [7], the authors come up with a linear formulation that simultaneously matches feature points and estimates a global geometric transformation in a constrained linear space. The linear scheme reduces the search space based on the lower convex hull property, so that the problem size is largely decoupled from the original hard combinatorial problem. They achieve accurate, efficient, and robust performance, with scale and rotation invariance, for object matching in videos. This report is organised as follows: Section 2 describes our point descriptor. In Section 3, we deal with affine invariant image matching. Section 4 deals with the experiments and results. The final section discusses the conclusion and future work.

2 POINT DESCRIPTOR

We use the point descriptor of [11], where the authors use an anisotropic derivative half-Gaussian filter. We have replaced the derivative filter with a smoothing filter oriented along a direction θ. The switch from a derivative to a smoothing filter is intended to reduce the size of the descriptor, thereby increasing the frame rate. This filter is described by:

g_(σ_ξ,σ_η)(x, y, θ) = C · S_y(R_θ · (x, y)^t) · e^(−(x, y) · Z · (x, y)^t)    (1)

on a considered pixel point at (x, y), with:

Z = R_θ^(−1) · diag(1/(2σ_ξ²), 1/(2σ_η²)) · R_θ

σ_ξ and σ_η control the size of the Gaussian along the two orthogonal directions, radial and axial. S_y is a sigmoid function (along the Y axis) used to smoothly cut the Gaussian kernel, R_θ is a 2D rotation matrix and C is a normalization coefficient. Depending on the application, we increment the direction parameter θ in steps of 5°, to obtain a set of half-Gaussian smoothing kernels which scan, by convolution, the surroundings of a point from 0 to 360 degrees. Convolution of a point in an image with all the kernels results in an intensity function which depends on the direction of the kernel. Illumination invariance, as proposed by the diagonal illumination model [8] (eq. 2), is achieved by normalizing channel by channel:

(R_2, G_2, B_2)^t = M · (R_1, G_1, B_1)^t + (T_R, T_G, T_B)^t    (2)

where (R_1, G_1, B_1)^t and (R_2, G_2, B_2)^t are the color inputs and outputs respectively, M is a diagonal 3x3 matrix and (T_R, T_G, T_B) represents a colour transition vector over the 3 channels.

Page 40
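To illustrate how such a signature can be computed, here is a small numpy sketch of the rotating half-Gaussian smoothing descriptor: an elongated Gaussian cut by a sigmoid, rotated in 5-degree steps and applied around a point of interest. The kernel size, sigma values and sigmoid slope are illustrative assumptions (the actual system is implemented in C/C++), and a single channel is processed here whereas the method works channel by channel on (R, G, B).

import numpy as np

def half_gaussian_kernel(theta, sigma_xi=5.0, sigma_eta=1.5, size=21):
    """Anisotropic half-Gaussian smoothing kernel oriented along theta (radians):
    an elongated Gaussian cut on one side by a smooth sigmoid, then normalised.
    Parameter values are illustrative, not the authors' settings."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # rotate image coordinates into the kernel frame
    xr = np.cos(theta) * xs + np.sin(theta) * ys
    yr = -np.sin(theta) * xs + np.cos(theta) * ys
    g = np.exp(-(xr ** 2 / (2 * sigma_xi ** 2) + yr ** 2 / (2 * sigma_eta ** 2)))
    g *= 1.0 / (1.0 + np.exp(-4.0 * xr))      # sigmoid cut: keep one half only
    return g / g.sum()

def point_signature(image, x, y, step_deg=5, size=21):
    """Signature of a point: smoothed intensities obtained by applying the oriented
    kernels around (x, y) for every direction from 0 to 360 degrees. The point is
    assumed to lie far enough from the image border; `image` is a single channel."""
    half = size // 2
    patch = image[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    thetas = np.deg2rad(np.arange(0, 360, step_deg))
    return np.array([(patch * half_gaussian_kernel(t, size=size)).sum() for t in thetas])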

3 AFFINE INVARIANT IMAGE MATCHING

The descriptor discussed in the previous section does not provide direct Euclidean or deformation invariance. Euclidean invariance is easily obtained by computing the correlation between the descriptor curves describing the two points; the phase between the two curves is given by the location of the maximum of correlation. The correlation between two curves can be obtained at a low computational cost using an FFT (FFTW3). Since angles are not preserved under deformation or projective transforms, correlation alone is insufficient for ranking the match between a point in an image and the same point seen in a second image under a change of viewpoint. In such a situation, curve deformation is needed to obtain affine-invariant correlation scores. The simplest way to transform one curve into another is to make use of the dynamic time warping (DTW) algorithm. DTW is a popular similarity measure between two temporal signals; in [9] the authors use an improved DTW for time series retrieval in pattern recognition. When warping two curves, DTW does not take into account any affine transformation. In order to obtain warping compatible with affine transformations, we need to introduce some constraints into the original DTW algorithm.

4 EXPERIMENTS AND DISCUSSION

Fig. 1: the first six color images are the output of matching using our method; the last six gray images are the output of matching using SIFT. The panel labels (frames 18, 28, 35, 55, 71 and 98) are the frame numbers for both SIFT and our method.

The software is implemented in C/C++. We use an Intel machine with 4 cores. We have tested our method on 1 video sequence. The

Page 41
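For completeness, here is a sketch of the two matching ingredients mentioned in Section 3: an FFT-based circular correlation of two signatures, whose maximum gives both a matching score and the orientation offset, and a plain dynamic time warping distance. The affine-compatible constraints discussed above are not sketched, and numpy's FFT merely stands in for FFTW3 in the real C/C++ implementation.

import numpy as np

def circular_correlation(sig_a, sig_b):
    """Circular correlation of two point signatures via the FFT: the maximum of
    the correlation gives the match score and the phase (angular offset)."""
    fa = np.fft.rfft(sig_a - sig_a.mean())
    fb = np.fft.rfft(sig_b - sig_b.mean())
    corr = np.fft.irfft(fa * np.conj(fb), n=len(sig_a))
    shift = int(np.argmax(corr))
    return corr[shift], shift       # score and orientation offset (in samples)

def dtw_distance(sig_a, sig_b):
    """Plain O(n*m) dynamic time warping distance between two signatures
    (no affine constraints; the constrained variant is not sketched here)."""
    n, m = len(sig_a), len(sig_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(sig_a[i - 1] - sig_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]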


More information

TWO APPROACHES IN SYSTEM MODELING AND THEIR ILLUSTRATIONS WITH MDA AND RM-ODP

TWO APPROACHES IN SYSTEM MODELING AND THEIR ILLUSTRATIONS WITH MDA AND RM-ODP TWO APPROACHES IN SYSTEM MODELING AND THEIR ILLUSTRATIONS WITH MDA AND RM-ODP Andrey Naumenko, Alain Wegmann Laboratory of Systemic Modeling, Swiss Federal Institute of Technology - Lausanne, EPFL-I&C-LAMS,1015

More information

Keywords Stegnography, stego-image, Diamond Encoding, DCT,stego-frame and stego video. BLOCK DIAGRAM

Keywords Stegnography, stego-image, Diamond Encoding, DCT,stego-frame and stego video. BLOCK DIAGRAM Volume 6, Issue 1, January 2016 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com Information

More information

STEGANOGRAPHY: THE ART OF COVERT COMMUNICATION

STEGANOGRAPHY: THE ART OF COVERT COMMUNICATION Journal homepage: www.mjret.in STEGANOGRAPHY: THE ART OF COVERT COMMUNICATION Sudhanshi Sharma 1, Umesh Kumar 2 Computer Engineering, Govt. Mahila Engineering College, Ajmer, India 1 sudhanshisharma91@gmail.com,

More information

Version 11

Version 11 The Big Challenges Networked and Electronic Media European Technology Platform The birth of a new sector www.nem-initiative.org Version 11 1. NEM IN THE WORLD The main objective of the Networked and Electronic

More information

Engineering Design Notes I Introduction. EE 498/499 Capstone Design Classes Klipsch School of Electrical & Computer Engineering

Engineering Design Notes I Introduction. EE 498/499 Capstone Design Classes Klipsch School of Electrical & Computer Engineering Engineering Design Notes I Introduction EE 498/499 Capstone Design Classes Klipsch School of Electrical & Computer Engineering Topics Overview Analysis vs. Design Design Stages Systems Engineering Integration

More information

Proposed Revisions to ebxml Technical. Architecture Specification v1.04

Proposed Revisions to ebxml Technical. Architecture Specification v1.04 Proposed Revisions to ebxml Technical Architecture Specification v1.04 Business Process Team 11 May 2001 (This document is the non-normative version formatted for printing, July 2001) Copyright UN/CEFACT

More information

On-line and Off-line 3D Reconstruction for Crisis Management Applications

On-line and Off-line 3D Reconstruction for Crisis Management Applications On-line and Off-line 3D Reconstruction for Crisis Management Applications Geert De Cubber Royal Military Academy, Department of Mechanical Engineering (MSTA) Av. de la Renaissance 30, 1000 Brussels geert.de.cubber@rma.ac.be

More information

A Review of Approaches for Steganography

A Review of Approaches for Steganography International Journal of Computer Science and Engineering Open Access Review Paper Volume-2, Issue-5 E-ISSN: 2347-2693 A Review of Approaches for Steganography Komal Arora 1* and Geetanjali Gandhi 2 1*,2

More information

Texture Image Segmentation using FCM

Texture Image Segmentation using FCM Proceedings of 2012 4th International Conference on Machine Learning and Computing IPCSIT vol. 25 (2012) (2012) IACSIT Press, Singapore Texture Image Segmentation using FCM Kanchan S. Deshmukh + M.G.M

More information

Digital Image Steganography Using Bit Flipping

Digital Image Steganography Using Bit Flipping BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 18, No 1 Sofia 2018 Print ISSN: 1311-9702; Online ISSN: 1314-4081 DOI: 10.2478/cait-2018-0006 Digital Image Steganography Using

More information

Work Environment and Computer Systems Development.

Work Environment and Computer Systems Development. CID-133 ISSN 1403-0721 Department of Numerical Analysis and Computer Science KTH Work Environment and Computer Systems Development. Jan Gulliksen and Bengt Sandblad CID, CENTRE FOR USER ORIENTED IT DESIGN

More information

Rich Hilliard 20 February 2011

Rich Hilliard 20 February 2011 Metamodels in 42010 Executive summary: The purpose of this note is to investigate the use of metamodels in IEEE 1471 ISO/IEC 42010. In the present draft, metamodels serve two roles: (1) to describe the

More information

Detecting Digital Image Forgeries By Multi-illuminant Estimators

Detecting Digital Image Forgeries By Multi-illuminant Estimators Research Paper Volume 2 Issue 8 April 2015 International Journal of Informative & Futuristic Research ISSN (Online): 2347-1697 Detecting Digital Image Forgeries By Multi-illuminant Estimators Paper ID

More information

Black-Box Program Specialization

Black-Box Program Specialization Published in Technical Report 17/99, Department of Software Engineering and Computer Science, University of Karlskrona/Ronneby: Proceedings of WCOP 99 Black-Box Program Specialization Ulrik Pagh Schultz

More information

Theme Identification in RDF Graphs

Theme Identification in RDF Graphs Theme Identification in RDF Graphs Hanane Ouksili PRiSM, Univ. Versailles St Quentin, UMR CNRS 8144, Versailles France hanane.ouksili@prism.uvsq.fr Abstract. An increasing number of RDF datasets is published

More information

NEURAL NETWORKS - A NEW DIMENSION IN DATA SECURITY

NEURAL NETWORKS - A NEW DIMENSION IN DATA SECURITY NEURAL NETWORKS - A NEW DIMENSION IN DATA SECURITY 1. Introduction: New possibilities of digital imaging and data hiding open wide prospects in modern imaging science, content management and secure communications.

More information

Random Image Embedded in Videos using LSB Insertion Algorithm

Random Image Embedded in Videos using LSB Insertion Algorithm Random Image Embedded in Videos using LSB Insertion Algorithm K.Parvathi Divya 1, K.Mahesh 2 Research Scholar 1, * Associate Professor 2 Department of Computer Science and Engg, Alagappa university, Karaikudi.

More information

Breaking Down the Invisible Wall

Breaking Down the Invisible Wall Breaking Down the Invisible Wall to Enrich Archival Science and Practice Kenneth Thibodeau December 8, 2016 1 Record Integrity A document has integrity if continue to be capable of delivering the message

More information

Cognitive augmented routing system and its standardisation path

Cognitive augmented routing system and its standardisation path Cognitive augmented routing system and its standardisation path ETSI Future Network Technologies Workshop Dimitri Papadimitriou, Bernard Sales Alcatel-Lucent, Bell Labs March, 2010 Self-Adaptive (top-down)

More information

Two interrelated objectives of the ARIADNE project, are the. Training for Innovation: Data and Multimedia Visualization

Two interrelated objectives of the ARIADNE project, are the. Training for Innovation: Data and Multimedia Visualization Training for Innovation: Data and Multimedia Visualization Matteo Dellepiane and Roberto Scopigno CNR-ISTI Two interrelated objectives of the ARIADNE project, are the design of new services (or the integration

More information

For many years, the creation and dissemination

For many years, the creation and dissemination Standards in Industry John R. Smith IBM The MPEG Open Access Application Format Florian Schreiner, Klaus Diepold, and Mohamed Abo El-Fotouh Technische Universität München Taehyun Kim Sungkyunkwan University

More information

Information management - Topic Maps visualization

Information management - Topic Maps visualization Information management - Topic Maps visualization Benedicte Le Grand Laboratoire d Informatique de Paris 6, Universite Pierre et Marie Curie, Paris, France Benedicte.Le-Grand@lip6.fr http://www-rp.lip6.fr/~blegrand

More information

EUROPEAN ICT PROFESSIONAL ROLE PROFILES VERSION 2 CWA 16458:2018 LOGFILE

EUROPEAN ICT PROFESSIONAL ROLE PROFILES VERSION 2 CWA 16458:2018 LOGFILE EUROPEAN ICT PROFESSIONAL ROLE PROFILES VERSION 2 CWA 16458:2018 LOGFILE Overview all ICT Profile changes in title, summary, mission and from version 1 to version 2 Versions Version 1 Version 2 Role Profile

More information

Current State of ontology in engineering systems

Current State of ontology in engineering systems Current State of ontology in engineering systems Henson Graves, henson.graves@hotmail.com, and Matthew West, matthew.west@informationjunction.co.uk This paper gives an overview of the current state of

More information

On the link between Architectural Description Models and Modelica Analyses Models

On the link between Architectural Description Models and Modelica Analyses Models On the link between Architectural Description Models and Modelica Analyses Models Damien Chapon Guillaume Bouchez Airbus France 316 Route de Bayonne 31060 Toulouse {damien.chapon,guillaume.bouchez}@airbus.com

More information