OSSEC Wazuh documentation


OSSEC Wazuh documentation, Release 0.1. Wazuh, Inc. Oct 30, 2017


Contents

1 About this documentation
2 Installation guide
    OSSEC HIDS
    Wazuh HIDS
    First steps
3 Integration with ELK Stack
    Components and architecture
    Java 8 JRE
    Logstash
    Elasticsearch
    Kibana
4 OSSEC Wazuh Reference
    Manage agents
    OSSEC Authd
    Integrator
    Agent ID reusage
5 OSSEC Wazuh RESTful API
    Installation
    Reference
    Examples
6 OSSEC Wazuh Ruleset
    Introduction
    Manual installation
    Automatic installation
    Wazuh rules
    Contribute to the ruleset
    What's next
7 OSSEC Docker container
    Docker installation
    OSSEC-ELK Container
    OSSEC HIDS Container

8 OSSEC deployment with Puppet
    Puppet master installation
    PuppetDB installation
    Puppet agents installation
    Puppet certificates
    OSSEC Puppet module
9 OSSEC for Amazon AWS
    OSSEC integration with Amazon AWS
    Use Cases
    Contribute to the ruleset
    What's next
10 OSSEC for PCI DSS
    Introduction
    Log analysis
    Rootcheck - Policy monitoring
    Rootcheck - Rootkits detection
    File Integrity Monitoring
    Active response
    ELK
    What's next

CHAPTER 1 About this documentation

Welcome to the Wazuh documentation. Here you will find instructions to install and deploy OSSEC HIDS, both the official version and our fork. Please note that this documentation is not intended to substitute for the OSSEC HIDS documentation or the reference manual, which are currently maintained by the project team members and external contributors.

The Wazuh team currently supports OSSEC enterprise users, and has decided to develop and publish additional capabilities as a way to contribute back to the Open Source community. Below is a list and description of our main projects, which have been released under the terms of the GPLv2 license.

OSSEC Wazuh Ruleset: Includes new rootchecks, decoders and rules, increasing OSSEC monitoring and detection capabilities. These have also been tagged for the PCI Data Security Standard, allowing users to monitor compliance with each of the standard's requirements. Users can contribute to this ruleset by submitting pull requests to our Github repository. Our team will continue to maintain and update it periodically.

Wazuh HIDS: Our OSSEC fork. It implements bug fixes and new features, providing extended JSON logging capabilities for easy integration with the ELK Stack and third-party log management tools. It also includes compliance support, and the modifications to OSSEC binaries needed by the OSSEC RESTful API.

Wazuh RESTful API: Used to monitor and control your OSSEC deployment, providing an interface to interact with the manager from anything that can send an HTTP request.

Pre-compiled installation packages, for both the OSSEC agent and manager: Including repositories for RedHat, CentOS, Fedora, Debian, Ubuntu and Windows.

Puppet scripts for automatic OSSEC deployment and configuration.

Docker containers to virtualize and run your OSSEC manager and an all-in-one integration with the ELK Stack.

Note: If you want to contribute to this documentation or our projects, please head over to our Github repositories.
You can also join our users mailing list, by sending an email to wazuh+subscribe@googlegroups.com, to ask questions and participate in discussions.


CHAPTER 2 Installation guide

There are two different installation options: OSSEC HIDS and Wazuh HIDS. Please read the sections below carefully to learn the differences between these two options, since the choice may be key for using other parts of this documentation.

OSSEC HIDS installers contain the latest stable version as stated at the OSSEC project Github repository. Wazuh creates and maintains OSSEC installers for the Open Source community, and you can find instructions on how to use them in this documentation section.

Wazuh HIDS is an OSSEC fork that contains additional features for the OSSEC manager, such as compliance support and extended JSON logging capabilities, which allow integration with the ELK Stack (Elasticsearch, Logstash and Kibana) and other log management tools. This installation is also ready for use with the Wazuh RESTful API.

OSSEC HIDS

OSSEC HIDS Latest Stable Release (2.8.3)

OSSEC is an Open Source Host-based Intrusion Detection System that performs log analysis, file integrity checking, policy monitoring, rootkit detection, real-time alerting and active response. It runs on most operating systems, including Linux, MacOS, Solaris, HP-UX, AIX and Windows. You can find more information in the OSSEC HIDS project documentation, or the reference manual.

Note: For the OSSEC manager, this version allows neither the integration with the ELK Stack nor the use of the Wazuh RESTful API. If you plan to use either of these two, or both, follow the Wazuh HIDS installation guide instead.

Debian packages

Apt-get repository key

If it is the first installation from the Wazuh repository, you need to import the GPG key:

$ wget -qO - | sudo apt-key add -

Debian repositories

To add your Debian repository, depending on your distribution, run one of these commands:

For Wheezy:
$ echo -e "deb wheezy main" >> /etc/apt/sources.list.d/ossec.list

For Jessie:
$ echo -e "deb jessie main" >> /etc/apt/sources.list.d/ossec.list

For Stretch:
$ echo -e "deb stretch main" >> /etc/apt/sources.list.d/ossec.list

For Sid:
$ echo -e "deb sid main" >> /etc/apt/sources.list.d/ossec.list

Ubuntu repositories

To add your Ubuntu repository, depending on your distribution, run one of these commands:

For Precise:
$ echo -e "deb precise main" >> /etc/apt/sources.list.d/ossec.list

For Trusty:
$ echo -e "deb trusty main" >> /etc/apt/sources.list.d/ossec.list

For Vivid:
$ echo -e "deb vivid main" >> /etc/apt/sources.list.d/ossec.list

For Wily:
$ echo -e "deb wily main" >> /etc/apt/sources.list.d/ossec.list

For Xenial:

$ echo -e "deb xenial main" >> /etc/apt/sources.list.d/ossec.list

For Yakkety:
$ echo -e "deb yakkety main" >> /etc/apt/sources.list.d/ossec.list

Update the repository

Type the following command to update the repository:

$ apt-get update

OSSEC manager installation

To install the OSSEC manager Debian package from our repository, run this command:

$ apt-get install ossec-hids

OSSEC agent installation

To install the OSSEC agent Debian package from our repository, run this command:

$ apt-get install ossec-hids-agent

RPM packages

Yum repository

To add the Wazuh yum repository, depending on your Linux distribution, create a file named /etc/yum.repos.d/wazuh.repo with the following content:

For Amazon Linux AMI:

[wazuh]
name = WAZUH OSSEC Repository -
baseurl =
gpgcheck = 1
gpgkey =
enabled = 1

For RHEL and CentOS (version EL5):

[wazuh]
name = WAZUH OSSEC Repository -
baseurl =
gpgcheck = 1
gpgkey =
enabled = 1

For RHEL and CentOS (versions EL6 or EL7):

[wazuh]
name = WAZUH OSSEC Repository -
baseurl =
gpgcheck = 1
gpgkey =
enabled = 1

For Fedora (versions 21, 22 or 23):

[wazuh]
name = WAZUH OSSEC Repository -
baseurl =
gpgcheck = 1
gpgkey =
enabled = 1

OSSEC manager installation

To install the OSSEC manager using the Yum package manager, run the following command:

$ yum install ossec-hids

On Fedora 23, to install the OSSEC manager with the DNF package manager, run the following command:

$ dnf install ossec-hids

OSSEC agent installation

To install the OSSEC agent using the Yum package manager, run the following command:

$ yum install ossec-hids-agent

On Fedora 23, to install the OSSEC agent with the DNF package manager, run the following command:

$ dnf install ossec-hids-agent

Note: If it is your first installation from our repository, you will need to accept our repository GPG key when prompted during the installation. This key can be found at:

Windows agent

Agent pre-compiled installer

You can find a pre-compiled version of the OSSEC agent for Windows, for both 32-bit and 64-bit architectures, at our repository. These are the MD5 and SHA1 checksums of the current version:

md5sum: 633d898d51eb49050c735abd278e08c8
sha1sum: 4ebcb31e4eccd509ae34148dd7b1b78d75b58f53

Compiling from sources

This section describes how to download and compile the OSSEC HIDS Windows agent (version 2.8.3). You can use either a CentOS or a Debian system as the compilation environment.

Source code download

Download the source code and checksum files:

$ wget
$ wget

Generate the SHA256 checksum and compare it with the downloaded one:

$ sha256sum ossec-hids-2.8.3.tar.gz
$ cat ossec-hids-2.8.3.tar.gz.sha256

The expected hash checksum, in both cases, is:

e23330d18b0d900e cdbe4f17364a c0fd005a1df7dd

Note: Both checksums need to match, meaning that the data has not been corrupted during the download. If that is not the case, please try again over a reliable connection.

Build environment on CentOS

First, you need to install MinGW and NSIS (to build the installer). Start by installing the EPEL repository:

$ wget
$ rpm -i epel-release-latest-7.noarch.rpm

After that, install MinGW gcc and the other libraries needed for the NSIS compilation:

$ yum install gcc-c++ gcc scons mingw32-gcc mingw64-gcc zlib-devel bzip2 unzip

Now, to install NSIS, follow these steps:

$ wget nsis-3.0b2-src.tar.bz2
$ wget nsis-3.0b2.zip
$ mkdir /usr/local/nsis
$ mv nsis-3.0b2-src.tar.bz2 nsis-3.0b2.zip /usr/local/nsis
$ cd /usr/local/nsis
$ tar -jxvf nsis-3.0b2-src.tar.bz2
$ unzip nsis-3.0b2.zip

Then build makensis, which will actually build the OSSEC installer package for Windows:

$ cd /usr/local/nsis/nsis-3.0b2-src/
$ scons SKIPSTUBS=all SKIPPLUGINS=all SKIPUTILS=all SKIPMISC=all NSIS_CONFIG_CONST_DATA=no PREFIX=/usr/local/nsis/nsis-3.0b2 install-compiler

$ mkdir /usr/local/nsis/nsis-3.0b2/share
$ cd /usr/local/nsis/nsis-3.0b2/share
$ ln -s /usr/local/nsis/nsis-3.0b2 nsis
$ cp ../bin/makensis /bin

Build environment on Debian

To compile the OSSEC agent on a Debian system, install these packages:

$ apt-get install gcc-mingw-w64
$ apt-get install nsis
$ apt-get install make

Compiling the agent

Extract ossec-hids and run the gen_win.sh and make.sh scripts:

$ tar -xvzf ossec-hids-2.8.3.tar.gz
$ cd ossec-hids-2.8.3/src/win32
$ ./gen_win.sh
$ cd ../win-pkg
$ sh ./make.sh

You should expect output similar to the following:

Making windows agent...
Output: "ossec-win32-agent.exe"
Install: 7 pages (448 bytes), 3 sections (3144 bytes), 586 instructions (16408 bytes), 287 strings (31800 bytes), 1 language table (346 bytes).
Uninstall: 5 pages (320 bytes), 1 section (1048 bytes), 347 instructions (9716 bytes), 181 strings (3323 bytes), 1 language table (290 bytes).
Datablock optimizer saved bytes (~7.9%).
Using zlib compression.
(EXE header, install code/data, uninstall code+data, CRC and total size figures follow.)

Now you should have the OSSEC agent installer for Windows, ossec-win32-agent.exe, ready to be used.

Installation from sources

Source code download

Download the source code and checksum files:

$ wget
$ wget checksums/2.8.3/ossec-hids-2.8.3.tar.gz.sha256

Generate the SHA256 checksum and compare it with the downloaded one:

$ sha256sum ossec-hids-2.8.3.tar.gz
$ cat ossec-hids-2.8.3.tar.gz.sha256

The expected hash checksum, in both cases, is:

e23330d18b0d900e cdbe4f17364a c0fd005a1df7dd

Note: Both checksums need to match, meaning that the data has not been corrupted during the download. If that is not the case, please try again over a reliable connection.

Build environment

Now we need to prepare the build environment, so we can compile the downloaded OSSEC source code.

On Debian-based distributions, install the build-essential package:

$ apt-get install build-essential

On RPM-based distributions, install the Development Tools group:

$ yum groupinstall "Development Tools"

Or, if you use the DNF package manager (Fedora 23), run this command:

$ dnf groupinstall "Development tools"

Note: On OS X you are required to install the Xcode command line tools, which include the GCC compiler.

Compiling OSSEC

Extract the source code and run the installation script:

$ tar xvfz ossec-hids-2.8.3.tar.gz
$ bash ossec-hids-2.8.3/install.sh

The script will now ask multiple questions, which may vary depending on your installation type:

Choose a language:

** Para instalação em português, escolha [br].
** [cn].
** Fur eine deutsche Installation wohlen Sie [de].
** Για εγκατάσταση στα Ελληνικά, επιλέξτε [el].
** For installation in English, choose [en].
** Para instalar en Español, eliga [es].
** Pour une installation en français, choisissez [fr].
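The manual compare above can be automated with sha256sum -c, which checks a file against its .sha256 companion in one step. A minimal sketch, using a stand-in file since the real tarball URL is not reproduced in this document:

```shell
# Stand-in for the downloaded tarball (substitute the real ossec-hids-2.8.3 files).
printf 'hello\n' > ossec-sample.tar.gz

# The .sha256 file format is "<hash>  <filename>"; here we generate it locally
# for the demo, but normally you download it alongside the tarball.
sha256sum ossec-sample.tar.gz > ossec-sample.tar.gz.sha256

# -c recomputes the hash and compares it against the recorded one.
if sha256sum -c ossec-sample.tar.gz.sha256 >/dev/null 2>&1; then
  echo "checksum OK"
else
  echo "checksum MISMATCH" >&2
fi
```

On a mismatch the command exits non-zero, so the same pattern works inside provisioning scripts.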

** A Magyar nyelvű telepítéshez válassza [hu].
** Per l'installazione in Italiano, scegli [it].
** [jp].
** Voor installatie in het Nederlands, kies [nl].
** Aby instalować w języku Polskim, wybierz [pl].
** [ru].
** Za instalaciju na srpskom, izaberi [sr].
** Türkçe kurulum için seçin [tr].
(en/br/cn/de/el/es/fr/hu/it/jp/nl/pl/ru/sr/tr) [en]:

Choose the installation type:

1- What kind of installation do you want (server, agent, local, hybrid or help)?

Here is a brief summary of these options:

- If you choose 'server', you will be able to analyze all the logs, create notifications and responses, and also receive logs from remote syslog machines and from systems running the 'agents' (from where traffic is sent encrypted to the server).
- If you choose 'agent' (client), you will be able to read local files (from syslog, snort, apache, etc) and forward them (encrypted) to the server for analysis.
- If you choose 'local', you will be able to do everything the server does, except receiving remote messages from the agents or external syslog devices.
- If you choose 'hybrid', you get the 'local' installation plus the 'agent' installation.

Choose the installation folder:

2- Setting up the installation environment.
- Choose where to install the OSSEC HIDS [/var/ossec]:

Enable or disable mail notifications:

3- Configuring the OSSEC HIDS.
- Do you want e-mail notification? (y/n) [y]:
- What's your e-mail address? sammy@example.com
- We found your SMTP server as: mail.example.com
- Do you want to use it? (y/n) [y]:

Enable or disable the file integrity monitoring daemon:

3.2- Do you want to run the integrity check daemon? (y/n) [y]:
- Running syscheck (integrity check daemon).

Enable or disable the rootkit and malware detection daemon:

3.3- Do you want to run the rootkit detection engine? (y/n) [y]:
- Running rootcheck (rootkit detection).

Enable or disable the active response module:

3.4- Active response allows you to execute a specific command based on the events received. For example, you can block an IP address or disable access for a specific user. More information at:

- Do you want to enable active response? (y/n) [y]:
- Active response enabled.
- By default, we can enable the host-deny and the firewall-drop responses. The first one will add a host to /etc/hosts.deny, and the second one will block the host on iptables (if Linux) or on ipfilter (if Solaris, FreeBSD or NetBSD).
- They can be used to stop SSHD brute force scans, portscans and some other forms of attacks. You can also add them to block on snort events, for example.
- Do you want to enable the firewall-drop response? (y/n) [y]:
- firewall-drop enabled (local) for levels >= 6
- Default white list for the active response:
- Do you want to add more IPs to the white list? (y/n)? [n]:

Note: If you answer yes to active response, you are enabling some basic intrusion prevention capabilities. This is generally a good thing, but it is only recommended if you know what you are doing.

Enable or disable remote syslog:

3.5- Do you want to enable remote syslog (port 514 udp)? (y/n) [y]:

After these questions are answered, the compilation process starts:

5- Installing the system
- Running the Makefile

Once completed, you will be presented with the final instructions:

- System is Debian (Ubuntu or derivative).
- Init script modified to start OSSEC HIDS during boot.
- Configuration finished properly.
- To start OSSEC HIDS: /var/ossec/bin/ossec-control start

- To stop OSSEC HIDS: /var/ossec/bin/ossec-control stop
- The configuration can be viewed or modified at /var/ossec/etc/ossec.conf

Thanks for using the OSSEC HIDS. If you have any question or suggestion, or if you find any bug, contact us at contact@ossec.net or using our public mailing list at ossec-list@ossec.net. More information can be found at

Press ENTER to finish (maybe more information below).

---

Wazuh HIDS

The Wazuh team has developed an OSSEC fork, implementing new features to improve the OSSEC manager's capabilities. These modifications do not affect OSSEC agents, meaning that if you are looking to install an agent, you just need to run a standard OSSEC installation and do not need to follow the next steps. Documentation on performing a standard OSSEC installation can be found here.

If you are installing an OSSEC manager, however, we strongly recommend using our forked OSSEC version. It provides compliance support, extended logging, and additional management features. Some of these capabilities are required for the integration with the ELK Stack and the Wazuh RESTful API.

To start with this installation, first we need to set up the compilation environment by installing development tools and compilers. On Linux this can easily be done using your distribution's package manager.

For RPM-based distributions:

$ sudo yum install make gcc git

If you want to use Authd, also install:

$ sudo yum install openssl-devel

For Debian-based distributions:

$ sudo apt-get install gcc make git libc6-dev

If you want to use Authd, also install:

$ sudo apt-get install libssl-dev

Now we are ready to clone our Github repository and compile the source code to install OSSEC:

$ cd ~
$ mkdir ossec_tmp && cd ossec_tmp
$ git clone -b stable ossec-wazuh
$ cd ossec-wazuh
$ sudo ./install.sh
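The install.sh question flow (the same one shown earlier for the source installation) can also be preseeded for unattended runs: OSSEC's installer reads an etc/preloaded-vars.conf file from the extracted source tree if it exists. A hedged sketch, with variable names taken from the OSSEC source tree's preloaded-vars.conf template and illustrative values, written to the current directory for demonstration:

```shell
# Sketch: preseed answers for install.sh. On a real install, place this file
# at <source-tree>/etc/preloaded-vars.conf before running ./install.sh.
# Values below are illustrative; adjust to your deployment.
cat > ./preloaded-vars.conf <<'EOF'
USER_LANGUAGE="en"
USER_NO_STOP="y"
USER_INSTALL_TYPE="server"
USER_DIR="/var/ossec"
USER_ENABLE_ACTIVE_RESPONSE="y"
USER_ENABLE_SYSCHECK="y"
USER_ENABLE_ROOTCHECK="y"
EOF

# Quick sanity check: count the preseeded variables.
grep -c '^USER_' ./preloaded-vars.conf
```

With this file in place the installer skips the corresponding prompts, which is convenient when building many managers or agents from automation.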

Choose "server" when asked about the installation type, and answer the rest of the questions as desired. Once installed, you can start your OSSEC manager by running:

$ sudo /var/ossec/bin/ossec-control start

Here are some useful commands to check that everything is working as expected. You should get similar output on your system:

$ ps aux | grep ossec
root   ?      S   23:01  0:00  /var/ossec/bin/ossec-execd
ossec  ?      S   23:01  0:00  /var/ossec/bin/ossec-analysisd
root   ?      S   23:01  0:00  /var/ossec/bin/ossec-logcollector
root   ?      S   23:01  0:00  /var/ossec/bin/ossec-syscheckd
ossec  ?      S   23:01  0:00  /var/ossec/bin/ossec-monitord
root   pts/0  S+  23:02  0:00  grep ossec

$ lsof /var/ossec/logs/alerts/alerts.json
COMMAND    PID  USER   FD   TYPE  DEVICE  SIZE/OFF  NODE  NAME
ossec-ana       ossec  10w  REG   202,                    /var/ossec/logs/alerts/alerts.json

$ cat /var/ossec/logs/alerts/alerts.json
{"rule":{"level":3,"comment":"Ossec server started.","sidid":502,"groups":["ossec","pci_dss"],"pci_dss":["10.6.1"]},"full_log":"ossec: Ossec started.","hostname":"vpc-agent-debian","timestamp":"2015 Nov 08 23:01:28","location":"ossec-monitord"}

First steps

In this documentation you will find the instructions to add a new agent and to configure it to report to your OSSEC/Wazuh manager. For more information on OSSEC HIDS configuration options, please go to the project documentation, or the reference manual.

Add a new agent

On your OSSEC manager, run /var/ossec/bin/manage_agents:

$ /var/ossec/bin/manage_agents

You will then be presented with the options shown below. Choose A to add an agent:

****************************************
* OSSEC HIDS v2.8 Agent manager.       *
* The following options are available: *
****************************************
   (A)dd an agent (A).
   (E)xtract key for an agent (E).
   (L)ist already added agents (L).
   (R)emove an agent (R).
   (Q)uit.
Choose your action: A,E,L,R or Q: A

You need to type a name for the agent, an IP address and an ID:

- Adding a new agent (use '\q' to return to the main menu).
  Please provide the following:
  * A name for the new agent: agent-name
  * The IP Address of the new agent:
  * An ID for the new agent[001]:

Agent information:
  ID: 001
  Name: agent-name
  IP Address:

Confirm adding it? (y/n): y

Note: The agent IP address should always match the one the agent will connect from. If unsure, you can use 'any'. You can also inspect your network traffic with tcpdump, to see the IP headers of incoming packets.

Now you have to extract the agent's key, which will be displayed on the screen. See the example below:

****************************************
* OSSEC HIDS v2.8 Agent manager.       *
* The following options are available: *
****************************************
   (A)dd an agent (A).
   (E)xtract key for an agent (E).
   (L)ist already added agents (L).
   (R)emove an agent (R).
   (Q)uit.
Choose your action: A,E,L,R or Q: e

Available agents:
   ID: 001, Name: agent-name, IP:
Provide the ID of the agent to extract the key (or '\q' to quit): 001

Agent key information for '001' is:
MDAxIFRlc3RBZ2V0biAxMTEuMTExLjExMS4xMTEgY2MxZjA1Y2UxNWQyNzEyNjdlMmE3MTRlODI0MTA1YTgxNTM5ZDliN2U2ZDQ5M

** Press ENTER to return to the main menu.

Now copy the key (the whole line ending in ==), because you'll have to import it on the agent.

Agent configuration on Linux

Your agent needs to have the IP address of your manager, in order to know where to send the data. Check your agent configuration file, which is located at /var/ossec/etc/ossec.conf, and set server-ip to the right value:

<ossec_config>
  <client>
    <server-ip>xxx.xxx.xxx.xxx</server-ip>
  </client>
</ossec_config>

Now you can run manage_agents (remember, this is on your agent system, not on the manager) and paste the previously copied key:

$ /var/ossec/bin/manage_agents

****************************************
* OSSEC HIDS v2.8 Agent manager.       *
* The following options are available: *
****************************************
   (I)mport key from the server (I).
   (Q)uit.
Choose your action: I or Q: I

* Provide the Key generated by the server.
* The best approach is to cut and paste it.
*** OBS: Do not include spaces or new lines.

Paste it here (or '\q' to quit):
MDAxIFRlc3RBZ2V0biAxMTEuMTExLjExMS4xMTEgY2MxZjA1Y2UxNWQyNzEyNjdlMmE3MTRlODI0MTA1YTgxNTM5ZDliN2U2ZDQ

Agent information:
  ID: 001
  Name: agent-name
  IP Address:

Confirm adding it? (y/n): y

Your agent has now been properly added. You can restart it by running:

$ /var/ossec/bin/ossec-control restart
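As a side note, the key printed by manage_agents is plain base64 text encoding the agent's client.keys line, "<id> <name> <ip> <secret>". A round-trip with illustrative values (never publish a real key) shows the structure:

```shell
# Illustrative values only; a real secret is a long hex string generated
# by manage_agents.
key=$(printf '%s' '001 agent-name 192.168.1.10 secret' | base64)

# Decoding recovers the four space-separated fields.
printf '%s' "$key" | base64 -d
echo
```

This is handy when debugging "duplicate key" or ID-mismatch errors, since you can decode the key on the agent and confirm the ID, name and IP match what the manager expects.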


CHAPTER 3 Integration with ELK Stack

Documentation structure

This document will guide you through the installation, configuration and integration of the ELK Stack and Wazuh HIDS (our OSSEC fork). We will make use of the expanded logging features that have been implemented for the manager, along with custom Logstash/Elasticsearch configurations, our OSSEC Wazuh Ruleset, our Wazuh RESTful API, and Kibana with hardcoded modifications.

Components and architecture

Components

Below is a brief description of the components and tools involved in the integration of our OSSEC Wazuh fork with the ELK Stack, for long-term data storage, alert indexing, management and visualization.

Wazuh HIDS: Performs log analysis, file integrity checking, policy monitoring, rootkit/malware detection and real-time alerting. The alerts are written in an extended JSON format, and stored locally on the box running as the OSSEC manager.

Logstash: A data pipeline used for processing logs and other event data from a variety of systems. Logstash will read and process the OSSEC JSON files, adding IP geolocation information and modeling the data before sending it to the Elasticsearch cluster.

Elasticsearch: The search engine used to index and store our OSSEC alerts. It can be deployed as a cluster, with multiple nodes, for better performance and data replication.

Kibana: A web framework used to explore all Elasticsearch indexes. We will use it to analyze OSSEC alerts and to create custom dashboards for different use cases, including compliance regulations like PCI DSS or benchmarks like CIS.

These components are meant to communicate with each other, so the original data generated by your systems and applications is centralized, analyzed, indexed, stored and made available to you in the Kibana interface. See below a graph describing this data flow.

Architecture

The components for the OSSEC and ELK Stack integration can be deployed all on a single host, or distributed across multiple systems. The latter type of deployment is useful for load balancing, high availability and data replication.

In most cases Elasticsearch will only be indexing OSSEC alerts, as opposed to every event processed by the system (also possible, using the archives.json output). This approach considerably reduces the performance and storage requirements, making it perfectly possible to deploy all the components on a single server. In this case, the same system would run the OSSEC manager, the Logstash server and an Elasticsearch single-node cluster, with the Kibana user interface on top of it.

In an effort to cover all possible scenarios, this guide describes both options to deploy OSSEC with the ELK Stack (distributed and single-host).

Distributed deployment with four servers

Below is our recommended deployment when using four different hosts (which includes a 3-node Elasticsearch cluster):

Host 1: OSSEC Manager + Logstash Forwarder
Host 2: Logstash Server + Elasticsearch Node 1 + Kibana
Host 3: Elasticsearch Node 2
Host 4: Elasticsearch Node 3

Requirements

Operating System: This document includes a detailed description of the steps you need to follow to install the components on both the Debian (latest stable is version 8) and CentOS (latest stable is version 7) Linux distributions.

RAM: Elasticsearch tends to use a large amount of memory for data sorting and aggregation and, according to its documentation, less than 8GB of RAM is counterproductive. For single-host deployments, considering that Elasticsearch will share resources with OSSEC, Logstash and Kibana, we recommend provisioning your server with at least 16GB of RAM (more if possible). Less than 16GB of RAM would only work for small OSSEC deployments.

OSSEC Wazuh fork: Required for the integration with the ELK Stack. You can install it by following the instructions in our documentation.

Java 8 JRE: Java 8 is required by both the Logstash server and Elasticsearch. This guide also includes a description of how to install it.

OSSEC alerts dashboard

Kibana offers interactive visualization capabilities, which we have used to put together an OSSEC alerts dashboard with visualization of alert geolocation and timeline. In addition, you will be able to see the evolution of alert levels, and charts showing aggregated information for easy analysis. Filters can also be applied, as all alert fields are indexed by the search engine. See below a screenshot of this dashboard.

PCI DSS compliance dashboard

OSSEC HIDS can be used to become compliant with PCI DSS, especially due to its intrusion detection, file integrity monitoring and policy enforcement capabilities. This dashboard makes use of the OSSEC rules mapping to the compliance controls, showing useful information to identify which systems are not fully compliant with the regulation.

Java 8 JRE

Java 8 JRE is required by the Logstash server and by the Elasticsearch engine. That is why we need to install it for both single-host and distributed deployments (in the latter case, only on those systems running the Logstash server or Elasticsearch).

Java 8 JRE for Debian

To install Java 8 JRE on Debian-based distributions, we just need to add the webupd8team Java repository to our sources and then install Java 8 via apt-get:

$ sudo add-apt-repository ppa:webupd8team/java
$ sudo apt-get update
$ sudo apt-get install oracle-java8-installer

Java 8 JRE for CentOS

To install Java 8 JRE on CentOS, download and run the Oracle Java 8 JDK RPM, following these steps:

$ cd ~
$ wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "com/otn-pub/java/jdk/8u60-b27/jdk-8u60-linux-x64.rpm"
$ sudo yum localinstall jdk-8u60-linux-x64.rpm
$ rm ~/jdk-8u60-linux-x64.rpm
$ export JAVA_HOME=/usr/java/jdk1.8.0_60/jre

Finally, to set the JAVA_HOME environment variable for all users, add this line at the end of the /etc/profile file:

export JAVA_HOME=/usr/java/jdk1.8.0_60/jre
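The /etc/profile edit above can be scripted instead of done by hand. A small sketch, writing to a sample file in the current directory rather than /etc/profile (the JDK path is the one installed by the RPM above):

```shell
# Sample stand-in; on a real system use /etc/profile (as root).
profile=./profile.sample

# Append the export line, then confirm it landed.
echo 'export JAVA_HOME=/usr/java/jdk1.8.0_60/jre' >> "$profile"
grep -c 'JAVA_HOME' "$profile"
```

Note that /etc/profile is read by login shells, so the variable takes effect for users at their next login.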

What's next

Once you have Java 8 JRE installed, you can move forward and install Logstash, Elasticsearch and Kibana:

Logstash
Elasticsearch
Kibana
OSSEC Wazuh RESTful API
OSSEC Wazuh Ruleset

Logstash

When integrating OSSEC HIDS with the ELK Stack, we use Logstash to model the OSSEC alerts output, using an Elasticsearch template that lets the indexer know how to process each alert field.

For single-host deployments we install the Logstash server directly on the same system where the OSSEC manager and Elasticsearch are running. This type of installation does not require the Logstash forwarder component, which is only necessary when the OSSEC manager is deployed on a different server from the one running the Logstash server and Elasticsearch.

Note: Remember that Java 8 JRE is required by the Logstash server. You can find instructions to install it in our documentation.

Distributed architectures

For distributed deployments, with multiple servers, this is where you need to install the Logstash components:

Elasticsearch main cluster node: Logstash server
OSSEC manager server: Logstash forwarder

Logstash installation on Debian

To install Logstash server version 2.1 on Debian-based distributions, run the following commands on your system:

$ wget -qO - | sudo apt-key add -
$ echo "deb stable main" | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update && sudo apt-get install logstash

If you have any doubt, visit the official installation guide.

Logstash forwarder

Only for distributed architectures: you need to install the Logstash forwarder on the system where you run your OSSEC manager, using the following commands:

$ wget -qO - | sudo apt-key add -
$ echo "deb stable main" | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update && sudo apt-get install logstash-forwarder

Logstash installation on CentOS

To install the Logstash server version 2.1 RPM package, start by importing the repository GPG key:

$ sudo rpm --import

Then create the file /etc/yum.repos.d/logstash.repo with the following content:

[logstash-2.1]
name=logstash repository for 2.1.x packages
baseurl=
gpgcheck=1
gpgkey=
enabled=1

And finally install the RPM package with yum:

$ sudo yum install logstash

If you have any doubt, visit the official installation guide.

Logstash forwarder

Only for distributed architectures: you need to install the Logstash forwarder on the system where you run your OSSEC manager. Start by importing the necessary GPG key:

$ sudo rpm --import

Then create a yum repository file at /etc/yum.repos.d/logstash-forwarder.repo with the following content:

[logstash-forwarder]
name=logstash-forwarder repository
baseurl=
gpgcheck=1
gpgkey=
enabled=1

And now install the RPM package with yum:

$ sudo yum install logstash-forwarder

Logstash forwarder configuration

Note: This step is only necessary when deploying the OSSEC manager and Elasticsearch on different systems. If you are using a single-host deployment, with the OSSEC manager and ELK Stack on the same box, you can skip this section.

Since we are going to use the Logstash forwarder to ship logs from our hosts to our Logstash server, we need to create an SSL certificate and key pair. The certificate is used by the Logstash forwarder to verify the identity of the Logstash server and to encrypt communications.

SSL Certificate

The SSL certificate needs to be created on your Logstash server, and then copied to your Logstash forwarder machine. See below how to create this certificate when you run your Logstash server on a Debian or a CentOS Linux distribution.

SSL Certificate on Debian

To create the SSL certificate on a Debian system, open /etc/ssl/openssl.cnf, find the [ v3_ca ] section, and add the following line below it (replacing logstash_server_ip with your Logstash server IP):

[ v3_ca ]
subjectAltName = IP: logstash_server_ip

Now generate the SSL certificate and private key, and copy the certificate to your Logstash forwarder system via scp (substituting user and logstash_forwarder_ip with their real values):

$ cd /etc/ssl/
$ sudo openssl req -config /etc/ssl/openssl.cnf -x509 -days -batch -nodes -newkey rsa:2048 -keyout /etc/logstash/logstash-forwarder.key -out /etc/logstash/logstash-forwarder.crt
$ scp /etc/logstash/logstash-forwarder.crt user@logstash_forwarder_ip:/tmp

Then log into your Logstash forwarder system, via SSH, and move the certificate to the right directory:

$ sudo mv /tmp/logstash-forwarder.crt /opt/logstash-forwarder/

SSL Certificate on CentOS

To create the SSL certificate on a CentOS system, open /etc/pki/tls/openssl.cnf, find the [ v3_ca ] section, and add the following line below it (replacing logstash_server_ip with your Logstash server IP):

[ v3_ca ]
subjectAltName = IP: logstash_server_ip

Now generate the SSL certificate and private key, and copy the certificate to your Logstash
forwarder system via scp (substituting user and logstash_forwarder_ip with their real values):

$ cd /etc/pki/tls/
$ sudo openssl req -config /etc/pki/tls/openssl.cnf -x509 -days batch -nodes -newkey rsa:2048 -keyout /etc/logstash/logstash-forwarder.key -out /etc/logstash/logstash-forwarder.crt
$ scp /etc/logstash/logstash-forwarder.crt user@logstash_forwarder_ip:/tmp
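On either distribution, you can confirm that the IP subjectAltName actually made it into the generated certificate. The snippet below is a self-contained sketch: it creates a throwaway certificate with an IP SAN in /tmp (the -addext shortcut assumes OpenSSL 1.1.1 or newer, and the IP is a demo value) and then inspects it; run the same openssl x509 inspection against /etc/logstash/logstash-forwarder.crt to check your real certificate.

```shell
# Generate a disposable certificate with an IP SAN (demo values only)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo.key -out /tmp/demo.crt \
  -subj "/CN=logstash" -addext "subjectAltName=IP:192.168.1.10"

# Inspect it: the SAN section should list the Logstash server IP
openssl x509 -in /tmp/demo.crt -noout -text | grep -A1 'Subject Alternative Name'
```

The Logstash forwarder verifies the server identity against this SAN, so a missing or wrong IP here is the usual cause of TLS handshake failures later.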

28 Then log into your Logstash forwarder system, via SSH, and move the certificate to the right directory: $ sudo mv /tmp/logstash-forwarder.crt /opt/logstash-forwarder Logstash forwarder settings Now on your Logstash forwarder system (same one where you run the OSSEC manager), open the configuration file /etc/logstash-forwarder.conf and, at the network section, modify the servers array adding your Logstash server IP address (substitute logstash_server_ip with the real value). As well don t forget to uncomment the line # A list of downstream servers listening for our messages. # logstash-forwarder will pick one at random and only switch if # the selected one appears to be dead or unresponsive "servers": [ "logstash_server_ip:5000" ], Below those lines you will find the CA configuration settings. We use ssl ca variable to specify the path to our Logstash forwarder SSL certificate # The path to your trusted ssl CA file. This is used # to authenticate your downstream server. "ssl ca": "/opt/logstash-forwarder/logstash-forwarder.crt", Once that is done, in the same file, uncomment timeout option line to increase connection reliability: # logstash-forwarder will assume the connection or server is bad and # will connect to a server chosen at random from the servers list. "timeout": 15 Finally set Logstash forwarder to read OSSEC alerts file, modify list of files configuration to look like this: # The list of files configurations "files": [ "paths": [ "/var/ossec/logs/alerts/alerts.json" ], "fields": "type": "ossec-alerts" } } ] At this point, save and exit the Logstash forwarder configuration file. 
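Putting the pieces above together, a complete /etc/logstash-forwarder.conf for this setup would look roughly like the sketch below (logstash_server_ip is a placeholder, and the comments from the stock file are stripped for brevity):

```json
{
  "network": {
    "servers": [ "logstash_server_ip:5000" ],
    "ssl ca": "/opt/logstash-forwarder/logstash-forwarder.crt",
    "timeout": 15
  },
  "files": [
    {
      "paths": [ "/var/ossec/logs/alerts/alerts.json" ],
      "fields": { "type": "ossec-alerts" }
    }
  ]
}
```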
Let s now give it permissions to read the alerts file, by adding logstash-forwarder user to the ossec group: $ sudo usermod -a -G ossec logstash-forwarder We are now done with the configuration, and just need to restart the Logstash Forwarder to apply changes: $ sudo service logstash-forwarder restart Logstash server configuration Logstash configuration is based on three different plugins: input, filter and output. You can find the plugins already preconfigured, to integrate OSSEC with ELK Stack, in our public github repository. 24 Chapter 3. Integration with ELK Stack

29 Depending on your architecture, single-host or distributed, we will configure Logstash server to read OSSEC alerts directly from OSSEC log file, or to read the incoming data (sent by Logstash forwarder) from port 5000/udp (remember to open your firewall to accept this traffic). For single-host deployments (everything running on the same box), just copy the configuration file 01-ossec-singlehost.conf to the right directory: $ sudo cp ~/ossec_tmp/ossec-wazuh/extensions/logstash/01-ossec-singlehost.conf /etc/ logstash/conf.d/ Instead, for distributed architectures, you need to copy the configuration file 01-ossec.conf $ sudo cp ~/ossec_tmp/ossec-wazuh/extensions/logstash/01-ossec.conf /etc/logstash/ conf.d/ Logstash server by default is bound to loopback address , if your Elasticsearch server is in a different host, remember to modify 01-ossec.conf or 01-ossec-singlehost.conf to set up your Elastic IP hosts => ["elasticsearch_server_ip:9200"] Note: Remember that, for both single-host and distributed deployments, we recommend to run Logstash server and Elasticsearch on the same server. This means that elasticsearch_server_ip would match your logstash_server_ip. Copy the Elasticsearch custom mapping from the extensions folder to the Logstash folder: $ sudo cp ~/ossec_tmp/ossec-wazuh/extensions/elasticsearch/elastic-ossec-template. json /etc/logstash/ And now download and install GeoLiteCity from the Maxmind website. 
This will add geolocation support for public IP addresses: $ sudo curl -O " " $ sudo gzip -d GeoLiteCity.dat.gz && sudo mv GeoLiteCity.dat /etc/logstash/ In single-host deployments, you also need to grant the logstash user access to OSSEC alerts file: $ sudo usermod -a -G ossec logstash Note: We are not going to start Logstash service yet, we need to wait until we import Wazuh template into Elasticsearch (see next guide) What s next Once you have Logstash installed and configured you can move forward with Elasticsearch and Kibana: Elasticsearch Kibana OSSEC Wazuh RESTful API OSSEC Wazuh Ruleset 3.3. Logstash 25

Elasticsearch

In this guide we will describe how to install Elasticsearch as a single-node cluster (with no shard replicas). This is usually enough to process OSSEC alerts data. For very large deployments we recommend using a multi-node cluster, which provides load balancing and data replication.

Single-host vs distributed deployments

As a reminder, for a single-host OSSEC integration with ELK Stack we run all components on the same server, which also acts as an Elasticsearch single-node cluster. On the other hand, for distributed deployments, we recommend running the Elasticsearch engine and the OSSEC manager on different systems. Please go to the components and architecture documentation for more information.

Elasticsearch installation on Debian

To install the Elasticsearch version 2.x Debian package using the official repositories, run the following commands:

$ wget -qO - | sudo apt-key add -
$ echo "deb stable main" | sudo tee -a /etc/apt/sources.list.d/elasticsearch-2.x.list
$ sudo apt-get update && sudo apt-get install elasticsearch
$ sudo update-rc.d elasticsearch defaults

If you have any doubt, visit the official installation guide.

Elasticsearch installation on CentOS

To install the Elasticsearch version 2.x RPM package, let's start by importing the repository GPG key:

$ sudo rpm --import

Then we create the /etc/yum.repos.d/elasticsearch.repo file with the following content:

[elasticsearch-2.x]
name=elasticsearch repository for 2.x packages
baseurl=
gpgcheck=1
gpgkey=
enabled=1

And we can now install the RPM package with yum:

$ sudo yum install elasticsearch

Finally, configure Elasticsearch to start automatically during bootup. If your distribution is using SysV init, you will need to run:

$ sudo chkconfig --add elasticsearch

If your distribution is using Systemd:

$ sudo /bin/systemctl daemon-reload
$ sudo /bin/systemctl enable elasticsearch.service

If you have any doubt, visit the official installation guide.

Configuration and tuning

Once the installation is completed, we can configure some basic settings by modifying /etc/elasticsearch/elasticsearch.yml. Open this file and look for the following variables, uncommenting the lines and assigning them the right values:

cluster.name: ossec
node.name: ossec_node1

The Elasticsearch server is bound by default to the loopback address; remember to modify it if necessary (or leave the loopback address for a single-node architecture):

network.host: elasticsearch_server_ip

The default number of shards is 5 and the default number of replicas is 1. If you are deploying a single-node Elastic cluster, in order to get a green status you have to set shards and replicas to 1/0:

index.number_of_shards: 1
index.number_of_replicas: 0

Elasticsearch performs poorly when memory is swapped. To disable memory swapping and lock some memory for Elastic, set the mlockall option to true and follow the next steps:

bootstrap.mlockall: true

Add the following lines at the end of the /etc/security/limits.conf file:

elasticsearch - nofile
elasticsearch - memlock unlimited

As well, open your Elasticsearch service default configuration file (/etc/default/elasticsearch on Debian, and /etc/sysconfig/elasticsearch on CentOS) and edit the following settings (please notice that ES_HEAP_SIZE should be set to half your server memory):

# ES_HEAP_SIZE - Set it to half your system RAM memory
ES_HEAP_SIZE=8g
MAX_LOCKED_MEMORY=unlimited
MAX_OPEN_FILES=65535

If your server uses Systemd, edit /usr/lib/systemd/system/elasticsearch.service and uncomment the following line:

LimitMEMLOCK=infinity

Now we are done with Elasticsearch configuration and tuning; start the service to apply the changes and Elastic will be up and running:

$ sudo /etc/init.d/elasticsearch start
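For reference, after all the edits above the uncommented lines in /etc/elasticsearch/elasticsearch.yml would look roughly like this (network.host is shown with a placeholder; use the loopback address for a single-host deployment):

```yaml
cluster.name: ossec
node.name: ossec_node1
network.host: elasticsearch_server_ip
index.number_of_shards: 1
index.number_of_replicas: 0
bootstrap.mlockall: true
```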

Elasticsearch multi-node cluster

Elasticsearch uses port 9200/tcp (by default) for API queries and ports in the range /tcp to communicate with other cluster nodes. Remember to open those ports in your firewall for this type of deployment. On the other hand, for multi-node clusters, it is recommended to have as many shards per index (index.number_of_shards) as nodes in your cluster. It is also good practice to use at least one replica (index.number_of_replicas).

Cluster health

To be sure our single-node cluster is working properly, wait a couple of minutes and check that Elasticsearch is running:

$ curl -XGET localhost:9200

Expected result:

{
  "name": "node1",
  "cluster_name": "ossec",
  "version": {
    "number": "2.1.1",
    "build_hash": "40e2c53a6b6c2972b3d13846e450e66f4375bd71",
    "build_timestamp": " T13:05:55Z",
    "build_snapshot": false,
    "lucene_version": "5.3.1"
  },
  "tagline": "You Know, for Search"
}

Elasticsearch cluster health status:

$ curl -XGET '

Expected result:

{
  "cluster_name": "ossec",
  "status": "green",
  "timed_out": false,
  "number_of_nodes": 2,
  "number_of_data_nodes": 2,
  "active_primary_shards": 281,
  "active_shards": 562,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 0,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number":
}
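The health check can also be scripted. The sketch below extracts the status field with sed from a canned response so it runs stand-alone; on a live node you would pipe `curl -s localhost:9200/_cluster/health` into the same filter:

```shell
# Canned cluster-health response (stand-in for the curl output above)
response='{"cluster_name":"ossec","status":"green","timed_out":false}'

# Pull out the "status" value: green, yellow or red
status=$(echo "$response" | sed -n 's/.*"status":"\([a-z]*\)".*/\1/p')
echo "cluster status: $status"
```

Anything other than green on a properly tuned single-node cluster usually means the shard/replica settings above were not applied before the first index was created.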

OSSEC alerts template

It's time to integrate the OSSEC Wazuh custom mapping. It's an Elasticsearch template that has already pre-mapped all possible OSSEC alert fields, as they are generated by the OSSEC Wazuh fork JSON output. This way the indexer will automatically know how to process the data, which will be displayed with user-friendly names on your Kibana interface. Add the template via a curl request to the Elastic API:

$ cd ~/ossec_tmp/ossec-wazuh/extensions/elasticsearch/ && curl -XPUT "localhost:9200/_template/ossec/" -d "@elastic-ossec-template.json"

If everything was okay, the API response should be:

{"acknowledged":true}

To make sure it has actually been added successfully, you can check the template using the Elasticsearch API:

$ curl -XGET

Start Logstash-Server

Now that we have inserted our custom Elasticsearch template, containing about 72 OSSEC fields, we can start the Logstash server:

$ sudo service logstash start

What's next

Once you have Elasticsearch installed and configured you can move forward with Kibana:

Kibana
OSSEC Wazuh RESTful API
OSSEC Wazuh Ruleset

Kibana

This is your last step in the process of setting up your ELK cluster. In this section you will find the instructions to install Kibana, version 4.5, and to configure it to provide a centralized OSSEC alerts dashboard. In addition you will find dashboards for the CIS security benchmark and the PCI DSS compliance regulation. Furthermore, the documentation also includes extra steps to secure your Kibana interface with a username and password, using the Nginx web server.

Kibana installation on Debian

To install the Kibana version 4.5 Debian package using the official repositories, run the following commands:

$ wget -qO - | sudo apt-key add -
$ echo "deb stable main" | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update && sudo apt-get install kibana

Configure Kibana to automatically start during bootup. If your distribution is using the System V version of init, run the following command:

$ sudo update-rc.d kibana defaults

If your distribution is using systemd, run the following commands instead:

$ sudo /bin/systemctl daemon-reload
$ sudo /bin/systemctl enable kibana.service

Kibana installation on CentOS

To install the Kibana version 4.5 RPM package, let's start by importing the repository GPG key:

$ sudo rpm --import

Then we create the /etc/yum.repos.d/kibana.repo file with the following content:

[kibana-4.5]
name=kibana repository for 4.5.x packages
baseurl=
gpgcheck=1
gpgkey=
enabled=1

And we can now install the RPM package with yum:

$ sudo yum install kibana

Finally, configure Kibana to automatically start during bootup. If your distribution is using SysV init, you will need to run:

$ sudo chkconfig --add kibana

If your distribution is using Systemd:

$ sudo /bin/systemctl daemon-reload
$ sudo /bin/systemctl enable kibana.service

Kibana on low memory systems

Kibana 4.3, based on Node (V8), uses a lazy and greedy garbage collector, with a default heap limit of about 1.5 GB. On low-memory systems (below 2 GB of RAM) Kibana may not run properly. The Kibana developers included a fix, but later decided to remove this patch. If your host's total RAM is below 2 GB, from Wazuh we recommend limiting the NodeJS max RAM space. To do so, open the file /opt/kibana/bin/kibana and add the following line:

NODE_OPTIONS="${NODE_OPTIONS:=--max-old-space-size=250}"

Change the 250 value according to your needs.

Kibana configuration

Kibana is bound by default to address (listening on all addresses), uses port 5601 by default, and tries to connect to Elasticsearch using the URL in elasticsearch.url. If you need to change any of these settings, open the /opt/kibana/config/kibana.yml configuration file and set up the following variables:

# Kibana is served by a back end server. This controls which port to use.
server.port: 80

# The host to bind the server to.
server.host: " "

# The Elasticsearch instance to use for all your queries.
elasticsearch.url: "

Note: Please note that the IP address we use in the elasticsearch.url variable needs to match the one we used for network.bind_host and network.host when we configured the Elasticsearch component.

Now we can start Kibana:

$ sudo service kibana start

OSSEC alerts index

To create the OSSEC alerts index, access your Kibana interface. Kibana will ask you to Configure an index pattern; set it up following these steps:

- Check "Index contains time-based events".
- Insert Index name or pattern: ossec-*
- On the "Time-field name" list, select the option.
- Click on the "Create" button.
- You should see the fields list with about ~72 fields.
- Go to the "Discover" tab on the top bar.

Note: Kibana will search the Elasticsearch index name pattern ossec-yyyy.mm.dd. You need to have at least one OSSEC alert before you set up the index pattern on Kibana; otherwise it won't find any index on Elasticsearch. If you want to generate one, you could for example run sudo -s and miss the password on purpose several times.

OSSEC Dashboards

Custom dashboards for OSSEC alerts, GeoIP maps, file integrity, alert evolution, PCI DSS controls and the CIS benchmark. Import the custom dashboards. Access the Kibana web interface on your browser and navigate to Objects:

- Click at the top bar on "Settings".
- Click on "Objects".
- Then click the button "Import" - Select the file ~/ossec_tmp/ossec-wazuh/extensions/kibana/kibana-ossecwazuh- dashboards.json - Optional: You can download the Dashboards JSON File directly from the repository `here< kibana-ossecwazuh-dashboards.json>`_ Kibana 31

Refresh the Kibana page and you should be able to load your imported Dashboards.

Note: Some Dashboard visualizations require time and specific alerts to work. Please don't worry if some visualizations do not display data immediately after the import.

Nginx secure proxy

We are going to use the Nginx web server to build a secure proxy to our Kibana web interface; we will establish a secure connection with SSL certificates and HTTP authentication. To install Nginx on Debian systems, update your repositories and install Nginx and apache2-utils (for htpasswd):

$ sudo apt-get update
$ sudo apt-get install nginx apache2-utils

To install Nginx on CentOS systems, run the following commands:

$ sudo yum install epel-release
$ sudo yum install nginx httpd-tools
$ sudo systemctl start nginx

Nginx configuration

Create and edit the Kibana configuration file for Nginx:

- On CentOS: /etc/nginx/conf.d/kibana.conf
- On Debian: /etc/nginx/sites-available/default

Copy and paste the following configuration:

server {
    listen 80 default_server;   # Listen on IPv4
    listen [::]:80;             # Listen on IPv6
    return ;
}

server {
    listen *:443;
    listen [::]:443;
    ssl on;
    ssl_certificate /etc/pki/tls/certs/kibana-access.crt;
    ssl_certificate_key /etc/pki/tls/private/kibana-access.key;
    server_name "Server Name";
    access_log /var/log/nginx/kibana.access.log;
    error_log /var/log/nginx/kibana.error.log;

    location ~ (/|/app/kibana|/bundles/|/kibana4|/status|/plugins) {
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/conf.d/kibana.htpasswd;
        proxy_pass ;
    }
}

On CentOS we also need to edit /etc/nginx/nginx.conf, including the following line inside the server block:

include /etc/nginx/conf.d/*.conf;

SSL Certificate

Now we can create the SSL certificate to encrypt our connection via HTTPS. This can be done by following the next steps:

$ cd ~
$ sudo openssl genrsa -des3 -out server.key 1024

Enter a password for the certificate and continue:

$ sudo openssl req -new -key server.key -out server.csr

Enter the password again, fill in the certificate information, and continue:

$ sudo cp server.key server.key.org
$ sudo openssl rsa -in server.key.org -out kibana-access.key
$ sudo openssl x509 -req -days 365 -in server.csr -signkey server.key -out kibana-access.crt
$ sudo mkdir -p /etc/pki/tls/certs
$ sudo cp kibana-access.crt /etc/pki/tls/certs/
$ sudo mkdir -p /etc/pki/tls/private/
$ sudo cp kibana-access.key /etc/pki/tls/private/

Password authentication

To generate your .htpasswd file, run this command, replacing kibanaadmin with your own username:

$ sudo htpasswd -c /etc/nginx/conf.d/kibana.htpasswd kibanaadmin

Now restart the Nginx service:

$ sudo service nginx restart

Try to access the Kibana web interface via HTTPS. It will ask for the username and password you just created.

Note: If you are running SELinux in enforcing mode, you might need to do some additional configuration in order to allow connections to :5601.

What's next

Now you have finished your ELK cluster installation. We recommend you go to your OSSEC Wazuh manager and install the OSSEC Wazuh RESTful API and OSSEC Wazuh Ruleset modules:

OSSEC Wazuh RESTful API
OSSEC Wazuh Ruleset


CHAPTER 4

OSSEC Wazuh Reference

This section is intended to extend the official OSSEC manual.

Manage agents

New in version v

Introduction

We have introduced new features into the manage_agents OSSEC binary to prevent adding two agents with the same IP address. manage_agents will not allow us to add an agent if its IP address is already assigned to another agent; in that case it will generate a log entry and warn us about it.

Forcing insertion

In case you want to overwrite an existing agent, we have created a way to force the agent registration: the option [-d <seconds>] will remove the old agent if it has been disconnected for at least <seconds>. Using a value of 0 will replace the agent in any case.

Usage example

Add a new agent called MyNewAgent; in case the IP already exists, replace it if it has been disconnected for the last 3600 seconds:

/var/ossec/bin/manage_agents -a " " -n "MyNewAgent" -d 3600

See also: For a complete description of every manage_agents option, please read OSSEC documentation: manage_agents.
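The forced-insertion behaviour described above can be summarized in pseudocode (a sketch of the decision flow for illustration, not the actual OSSEC source):

```
on add_agent(name, ip, seconds):          # seconds comes from -d
    if ip is already assigned to old_agent:
        if seconds == 0 or old_agent.disconnected_for() >= seconds:
            backup(old_agent)             # existing data is backed up first
            remove(old_agent)
            add(name, ip)
        else:
            log_and_warn("IP already assigned to another agent")
    else:
        add(name, ip)
```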

Data backup

Before OSSEC removes an agent by force, it will back up the old agent's data in /backup/agents, in a new folder named with the agent's name and IP, and the current timestamp. The saved data is the following:

Agent's operating system.
Version of the agent.
Timestamp when it was added.
Syscheck database.
Rootcheck database.

See also: There is a compile option that allows a new agent to inherit the ID of the agent that was removed by forcing insertion. To learn more about this, please read Agent ID reusage.

OSSEC Authd

New in version v1.1.

ossec-authd is an automatic agent registration tool: it will automatically add an agent to the manager and provide a new key to the agent. The ossec-authd tool is now password protected, increasing security in the agent registration process. The OSSEC manager looks for a defined password in the file /var/ossec/etc/authd.pass. If a password isn't found, a random one is generated and shown on the console. Duplicated IPs are no longer allowed, so if there's an attempt to add two agents with the same IP, ossec-authd will fail and report it through an alert.

Configuration

On server-side

New options:

-i Register the agent with the client's IP instead of any.
-f <seconds> Remove old agents with the same IP if they have not connected for <seconds>. It only makes sense together with option -i.
-P Enable shared password authentication.

Option -f forces the insertion on IP collision. This means that if OSSEC finds another agent with the same IP, but it has not connected since a specified time, that agent will be deleted automatically and the new agent will be added. To force insertion always (regardless of the time of the last agent connection), use -f 0.

See also: For a complete description of every option, please read OSSEC documentation: ossec-authd.

On client-side

New options:

-P <password> Use the specified password instead of searching for it in authd.pass.

If a password is provided neither in the file nor on the console, the client will connect to the server without a password (insecure mode).

See also: For a complete description of every option, please read OSSEC documentation: agent-auth.

Data backup

Before OSSEC removes an agent by force, it will back up the old agent's data in /var/ossec/backup/agents/<id> <name> <ip> <delete timestamp>, a new folder named with the agent's name and IP, and the current timestamp. The saved data is the following:

Agent's operating system.
Version of the agent.
Timestamp when it was added.
Syscheck database.
Rootcheck database.

See also: There is a compile option that allows a new agent to inherit the ID of the agent that was removed by forcing insertion. To learn more about this, please read Agent ID reusage.

42 <integration> <name> </name> <hook_url> </hook_url> <api_key> </api_key> <!-- Optional filters --> <rule_id> </rule_id> <level> </level> <group> </group> <event_location> </event_location> </integration> Basic configuration <name> Name of the service. Allowed values: slack pagerduty <hook_url> The URL provided by Slack when the integration was enabled. Mandatory for Slack. <api_key> The key that you retrieved from the PagerDuty API. Mandatory for PagerDuty. Note: You must restart OSSEC after changing the configuration. Integrating with Slack <integration> <name>slack</name> <hook_url> </integration> Integrating with PagerDuty <integration> <name>pagerduty</name> <api_key>mykey</api_key> </integration> 38 Chapter 4. OSSEC Wazuh Reference
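As an illustration, the Slack example can be combined with the optional filters from the configuration skeleton above. The block below is a sketch (the hook URL is a placeholder, and level 10 is an arbitrary choice) that would push only alerts of level 10 or above:

```xml
<integration>
  <name>slack</name>
  <hook_url>https://hooks.slack.com/services/...</hook_url>
  <level>10</level>
</integration>
```

As with any configuration change, restart OSSEC after editing ossec.conf for the filter to take effect.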

Optional filters

<level> Filter rules by level: push only alerts with the specified level or above.
<rule_id> Filter by rule ID.
<group> Filter rules by category. OS_Regex syntax.
<event_location> Filter rules by the location where they originated. OS_Regex syntax.

Agent ID reusage

New in version v

When OSSEC adds a new agent, it assigns it a unique ID and creates a shared key which will be used to encrypt messages between agent and server. All this information is stored in the file etc/client.keys. Information about the agent's ID and key is not removed by default when removing agents; instead, OSSEC comments out the corresponding line in the file. This behavior can potentially make client.keys grow if agents are re-added frequently with forcing. In order to solve this issue, there is an optional feature, ID reusage, that can be enabled as a compile option:

make TARGET=server REUSE_ID=yes (...)

Note: This option affects only managers.

When enabled, deleting agents will remove the corresponding key from client.keys. Every time manage_agents or ossec-authd removes an agent to add another with the same IP, the new agent will get the ID of the former, and the key in client.keys will be overwritten. This feature doesn't affect the backup: the old agent's data will still be backed up.

See also: OSSEC Authd manual_manage_agents


CHAPTER 5

OSSEC Wazuh RESTful API

Introduction

The OSSEC Wazuh RESTful API provides a new mechanism to manage OSSEC Wazuh. The goal is to manage your OSSEC deployment remotely (e.g. through a web browser), or to control OSSEC from external systems. Everyday actions like adding an agent, restarting OSSEC, or checking the configuration are now simpler using the Wazuh RESTful API.

OSSEC Wazuh RESTful API capabilities:

Agents management
Manager control & overview
Rootcheck control
Syscheck control
Statistical information
HTTPS and user authentication
Error handling

Documentation sections

Installation

Pre-requisites

In order to install and run the API you will need some packages; the following steps will guide you through installing them:

Wazuh HIDS
NodeJS server (v0.10.x) with Express module (4.0.x)
Python 2.6 or newer

The OSSEC Wazuh RESTful API requires you to have previously installed our OSSEC fork as your manager. You can download and install it following these instructions. The API will operate on port 55000/tcp by default, and the NodeJS service will be protected with HTTP authentication and encrypted with HTTPS.

NodeJS

Most distributions contain a version of NodeJS in their default repositories, but we prefer to use the repositories maintained by NodeSource because they have more recent versions. Follow the official guide to install it. Usually, the following commands are enough:

Debian and Ubuntu based Linux distributions:

$ curl -sL | sudo -E bash -
$ sudo apt-get install -y nodejs

Red Hat, CentOS and Fedora:

$ curl --silent --location | bash -
$ yum -y install nodejs

Python packages

The API needs Python 2.6 or newer to perform some tasks. Also, you need to install the Python package xmljson:

$ sudo pip install xmljson

In case you need the pip tool, you can install it following these steps:

Debian and Ubuntu based Linux distributions:

$ sudo apt-get install python-pip

Red Hat, CentOS and Fedora:

$ sudo yum install python-pip
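Before moving on, it may help to confirm the Python prerequisite is met. This is an illustrative sketch (the python3/python lookup is an assumption about which interpreter your system exposes; NodeJS can be checked similarly with `node --version` or `nodejs --version`):

```shell
# Locate a Python interpreter and check it is at least version 2.6
PY=$(command -v python3 || command -v python)
"$PY" -c 'import sys; sys.exit(0 if sys.version_info >= (2, 6) else 1)' \
  && echo "python OK"
```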

RESTful API

Proceed to download the API and copy the API folder to the OSSEC folder:

$ cd ~
$ wget -O wazuh-api tar.gz
$ tar -xvf wazuh-api-*.tar.gz
$ sudo mkdir -p /var/ossec/api && sudo cp -r wazuh-api-*/* /var/ossec/api

Once you have installed NodeJS, NPM and the API, you must install the NodeJS modules:

$ sudo -s
$ cd /var/ossec/api
$ npm install

Configuration

You can configure some parameters using the file api/config.js:

// Port
// TCP Port used by the API.
config.port = "55000";
// Security
// Use HTTP protocol over TLS/SSL
config.https = "yes";
// Use HTTP authentication
config.basic_auth = "yes";
// In case the API runs behind a proxy server, turn this feature to "yes".
config.behindproxyserver = "no";
// Cross-origin resource sharing
config.cors = "yes";
// Paths
config.ossec_path = "/var/ossec";
config.log_path = "/var/ossec/logs/api.log";
config.api_path = __dirname;
// Logs
// Values for API log: disabled, info, warning, error, debug (each level includes the previous level).
config.logs = "info";
config.logs_tag = "WazuhAPI";

Basic Authentication

By default you can access the API with user foo and password bar. We recommend generating new credentials. This can be done very easily by following these steps. First, please make sure that you have the htpasswd tool installed. On Debian, update your repositories and install the apache2-utils package:

$ sudo apt-get update
$ sudo apt-get install apache2-utils

On CentOS, install the package by running:

$ sudo yum install httpd-tools

Then, run htpasswd with your desired username:

$ cd /var/ossec/api/ssl
$ sudo htpasswd -c htpasswd username

SSL Certificate

At this point you will create certificates to use the API; in case you prefer to use the out-of-the-box certificates, skip this section. Follow the next steps to generate them (the OpenSSL package is required):

$ cd /var/ossec/api/ssl
$ sudo openssl genrsa -des3 -out server.key 1024
$ sudo openssl req -new -key server.key -out server.csr

The password must be entered every time you run the server. If you don't want to enter the password every time, you can remove it by running these commands:

$ sudo cp server.key server.key.org
$ sudo openssl rsa -in server.key.org -out server.key

Now generate your self-signed certificate:

$ sudo openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt

And remove temporary files:

$ sudo rm server.csr
$ sudo rm server.key.org

Running API

There are two ways to run the API: as a service or in the background.

Service

We recommend running the API as a service. In order to install the service, execute the following script:

$ sudo /var/ossec/api/scripts/install_daemon.sh

Then, check whether the API is running:

Systemd systems: systemctl status wazuh-api
SysVinit systems: service wazuh-api status

The available options are: start, stop, status and restart.

Background

In order to run the API in the background, execute the following command:

$ /bin/node /var/ossec/api/app.js &

API logs will be saved at /var/ossec/logs/api.log.

Note: Sometimes the NodeJS binary is called nodejs, or it is located in /usr/bin/; if the API does not start, please check this.

Reference

This API reference is organized by resources:

Agents
Manager
Rootcheck
Syscheck

A Request List with all available requests is also provided. Before starting to use the API, you must keep in mind:

The base URL for each request is:

All responses are in JSON format with the following structure:

error: 0 if everything was fine, and an error code otherwise.
data: data requested, or empty if error is different from 0.
message: error description, or empty if error is equal to 0.

Examples:

* Response without errors:
{ "error": "0", "data": "...", "message": "" }

* Response with errors:
{ "error": "NOT 0", "data": "", "message": "... " }

All responses have an HTTP status code: 2xx (success), 4xx (client error), 5xx (server error), etc. Find some Examples of how to use this API with curl, PowerShell and Python.

Request List

Agents

DELETE /agents/:agent_id
GET /agents
GET /agents/:agent_id
GET /agents/:agent_id/key

50 POST /agents PUT /agents/:agent_id/restart PUT /agents/:agent_name Manager GET /manager/configuration GET /manager/configuration/test GET /manager/stats GET /manager/stats/hourly GET /manager/stats/weekly GET /manager/status PUT /manager/restart PUT /manager/start PUT /manager/stop Rootcheck DELETE /rootcheck DELETE /rootcheck/:agent_id GET /rootcheck/:agent_id GET /rootcheck/:agent_id/last_scan PUT /rootcheck PUT /rootcheck/:agent_id Syscheck DELETE /syscheck DELETE /syscheck/:agent_id GET /syscheck/:agent_id/files/changed GET /syscheck/:agent_id/last_scan PUT /syscheck PUT /syscheck/:agent_id Agents List GET /agents Returns a list with the available agents. Parameters: N/A 46 Chapter 5. OSSEC Wazuh RESTful API

51 Query: status: Status of the agents to return. Possible values: Active, Disconnected or Never connected. Example Request: GET Example Response: } "error": "0", "data": [ "id": "001", "name": "Host1", "ip": "any", "status": "Never connected" }, "id": "002", "name": "Host2", "ip": " ", "status": "Never connected" } ], "message": "" Info GET /agents/:agent_id Returns the information of an agent. Parameters: Query: agent_id N/A Example Request: GET Example Response: "error": "0", "data": "id": "000", "name": "LinMV", "ip": " ", "status": "Active", "os": "Linux LinMV amd64 #1 SMP Debian ckt11-1 ( ) x86_ 64", "version": "OSSEC HIDS v2.8", "lastkeepalive": "Not available", "syschecktime": "Tue Feb 23 10:57: ", "syscheckendtime": "Tue Feb 23 11:02: ", "rootchecktime": "Tue Feb 23 11:03: ", 47 "rootcheckendtime": "Tue Feb 23 10:33: " }, "message": "" }
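Responses such as the ones above are easy to check programmatically by looking at the error field. The sketch below parses a canned response so it runs stand-alone; against a live API you would pipe the output of curl (e.g. `curl -sku user:pass https://localhost:55000/agents`, with the credentials and host being placeholders) into the same filter:

```shell
# Canned API response (stand-in for real curl output)
response='{"error":"0","data":"...","message":""}'

# "error" is 0 on success, an error code otherwise
err=$(echo "$response" | sed -n 's/.*"error":"\([0-9]*\)".*/\1/p')
[ "$err" = "0" ] && echo "request succeeded"
```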

Key

GET /agents/:agent_id/key

Returns the key for an agent.

Parameters:

agent_id

Query: N/A

Example Request:

GET /agents/001/key

Example Response:

{
    "error": "0",
    "data": "MDAxIEhvc3QxIGFueSBkMDZlYjRkNTk4MzU2YjAwYWQzNzcxZTdiMDJiMmRiZDhkM2ZhNjA3ZGU0NGU4YTQyZGVkYTJjMGY0N...",
    "message": ""
}

Restart

PUT /agents/:agent_id/restart

Restarts the agent.

Parameters:

agent_id

Query: N/A

Example Request:

PUT /agents/000/restart

Example Response:

{
    "error": "0",
    "data": "Restarting agent",
    "message": ""
}
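The sample response suggests the key is the base64 encoding of the agent's client.keys line ("id name ip secret"). Decoding the truncated sample above supports that reading; treat this as an assumption worth verifying against your own deployment:

```python
# Sketch: decoding the (truncated) sample key from the response above.
# Assumption: the key is base64 of "id name ip secret", as in client.keys.
import base64

key = ("MDAxIEhvc3QxIGFueSBkMDZlYjRkNTk4MzU2YjAwYWQzNzcxZTdiMDJiMmRiZDhk"
       "M2ZhNjA3ZGU0NGU4YTQyZGVkYTJjMGY0N")
safe = key[: len(key) // 4 * 4]          # drop the trailing partial base64 group
fields = base64.b64decode(safe).decode("ascii").split(" ")
print(fields[:3])  # ['001', 'Host1', 'any']
```

The truncation handling (safe) is only needed here because the documented sample itself is cut off; a real key decodes directly.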

Add

PUT /agents/:agent_name

Adds a new agent with name :agent_name. This agent will use ANY as IP.

Parameters:

agent_name

Query: N/A

Example Request:

PUT /agents/NewHost

Example Response:

{
    "error": 0,
    "data": {
        "id": "002",
        "message": "Agent added"
    },
    "message": ""
}

POST /agents

Adds a new agent.

Parameters:

name: Agent name.
ip: (optional) IP (...), IP/MASK (.../24) or ANY. If you do not include this parameter, the API will get the IP automatically. If you are behind a proxy, you must set the option config.behindproxyserver to yes in config.js.

Query: N/A

Example Request:

POST /agents
Body:
    name: HostWindows
    ip: ...

Example Response:

{
    "error": 0,
    "data": {
        "id": "003",
        "message": "Agent added"
    },
    "message": ""
}

Remove

DELETE /agents/:agent_id

Removes an agent. Internally it uses manage_agents with the option -r <id>. You must restart OSSEC after removing an agent.

Parameters:

agent_id

Query: N/A

Example Request:

DELETE /agents/002

Example Response:

{
    "error": "0",
    "data": "Agent removed",
    "message": ""
}

Manager

Start

PUT /manager/start

Starts the OSSEC Manager processes.

Parameters: N/A

Query: N/A

Example Request:

PUT /manager/start

Example Response:

{
    "error": "0",
    "data": [
        { "daemon": "ossec-maild", "status": "running" },
        { "daemon": "ossec-execd", "status": "running" },
        { "daemon": "ossec-analysisd", "status": "running" },
        { "daemon": "ossec-logcollector", "status": "running" },
        { "daemon": "ossec-remoted", "status": "running" },
        { "daemon": "ossec-syscheckd", "status": "running" },
        { "daemon": "ossec-monitord", "status": "running" }
    ],
    "message": ""
}

Stop

PUT /manager/stop

Stops the OSSEC Manager processes.

Parameters: N/A

Query: N/A

Example Request:

PUT /manager/stop

Example Response:

{
    "error": "0",
    "data": [
        { "daemon": "ossec-monitord", "status": "killed" },
        { "daemon": "ossec-logcollector", "status": "killed" },
        { "daemon": "ossec-remoted", "status": "killed" },
        { "daemon": "ossec-syscheckd", "status": "killed" },
        { "daemon": "ossec-analysisd", "status": "killed" },
        { "daemon": "ossec-maild", "status": "stopped" },
        { "daemon": "ossec-execd", "status": "killed" }
    ],
    "message": ""
}

Restart

PUT /manager/restart

Restarts the OSSEC Manager processes.

Parameters: N/A

Query: N/A

Example Request:

PUT /manager/restart

Example Response:

{
    "error": "0",
    "data": [
        { "daemon": "ossec-maild", "status": "running" },
        { "daemon": "ossec-execd", "status": "running" },
        { "daemon": "ossec-analysisd", "status": "running" },
        { "daemon": "ossec-logcollector", "status": "running" },
        { "daemon": "ossec-remoted", "status": "running" },
        { "daemon": "ossec-syscheckd", "status": "running" },
        { "daemon": "ossec-monitord", "status": "running" }
    ],
    "message": ""
}

Status

GET /manager/status

Returns the OSSEC Manager processes that are running.

Parameters: N/A

Query: N/A

Example Request:

GET /manager/status

Example Response:

{
    "error": "0",
    "data": [
        { "daemon": "ossec-monitord", "status": "running" },
        { "daemon": "ossec-logcollector", "status": "running" },
        { "daemon": "ossec-remoted", "status": "running" },
        { "daemon": "ossec-syscheckd", "status": "running" },
        { "daemon": "ossec-analysisd", "status": "running" },
        { "daemon": "ossec-maild", "status": "stopped" },
        { "daemon": "ossec-execd", "status": "running" }
    ],
    "message": ""
}

Configuration

GET /manager/configuration

Returns ossec.conf in JSON format.

Parameters: N/A

Query:

Section: Indicates the ossec.conf section: global, rules, syscheck, rootcheck, remote, alerts, command, activeresponse, localfile.
Field: Indicates a section child; e.g., fields for the rules section are: include, decoder_dir, etc.

Example Request:

GET /manager/configuration

Example Response:

{
    "error": "0",
    "data": [
        { "$t": "rules_config.xml" },
        { "$t": "pam_rules.xml" },
        { "$t": "..._rules.xml" }
    ],
    "message": ""
}

GET /manager/configuration/test

Tests the OSSEC Manager configuration.

Parameters: N/A

Query: N/A

Example Request:

GET /manager/configuration/test

* In this example, the second line of ossec.conf has been changed from <global> to <globaaaal>.

Example Response:

{
    "error": 82,
    "data": "",
    "message": "[\"2016/02/23 12:30:57 ossec-testrule(1226): ERROR: Error reading XML file '/var/ossec/etc/ossec.conf': XMLERR: Element 'globaaaal' not closed. (line 6).\", \"2016/02/23 12:30:57 ossec-testrule(1202): ERROR: Configuration error at '/var/ossec/etc/ossec.conf'. Exiting.\"]"
}

Stats

GET /manager/stats

Returns OSSEC statistical information for the current date.

Parameters: N/A

Query:

date: Date for which to get the statistical information. Format: YYYYMMDD

Example Request:

GET /manager/stats

Example Response:

{
    "error": "0",
    "data": [
        {
            "hour": 10,
            "firewall": 0,
            "alerts": [
                { "times": 2, "sigid": 600, "level": 0 },
                { "times": 2, "sigid": 1002, "level": 2 },
                { "times": 8, "sigid": 530, "level": 0 },
                { "times": 1, "sigid": 535, "level": 1 },
                { "times": 1, "sigid": 502, "level": 3 },
                { "times": 1, "sigid": 515, "level": 0 }
            ],
            "totalalerts": 15,
            "syscheck": 1126,
            "events": 1144
        },
        {
            "hour": 11,
            "firewall": 0,
            "alerts": [
                { "...": "..." }
            ],
            "totalalerts": 432,
            "syscheck": 1146,
            "events": 1607
        }
    ],
    "message": ""
}
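The per-hour entries in the data array can be aggregated client-side. A sketch using the sample values from the response above:

```python
# Sketch: aggregating the per-hour stats entries into day totals.
# Values are taken from the sample response above.
data = [
    {"hour": 10, "firewall": 0, "totalalerts": 15, "syscheck": 1126, "events": 1144},
    {"hour": 11, "firewall": 0, "totalalerts": 432, "syscheck": 1146, "events": 1607},
]

total_alerts = sum(h["totalalerts"] for h in data)
total_events = sum(h["events"] for h in data)
print(total_alerts, total_events)  # 447 2751
```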

GET /manager/stats/hourly

Returns OSSEC statistical information per hour. Each item in the averages field represents the average number of alerts for that hour of the day.

Parameters: N/A

Query: N/A

Example Request:

GET /manager/stats/hourly

Example Response:

{
    "error": "0",
    "response": {
        "averages": [974, 1291, 886, 784, 1013, 843, 880, 872, 805, 681, 1094, 868,
                     609, 659, 1455, 1382, 1465, 2092, 1475, 1879, 1548, 1854, 1849, 1020],
        "interactions": 20
    },
    "message": null
}

GET /manager/stats/weekly

Returns OSSEC statistical information per week. Each item in the hours field represents the average number of alerts for that hour and week day.

Parameters: N/A

Query: N/A

Example Request:

GET /manager/stats/weekly

Example Response:

{
    "error": "0",
    "data": {
        "Mon": {
            "hours": [948, 838, 711, 1091, 589, 574, 888, 665, 522, 428, 593, 638,
                      446, 757, 401, 443, 1439, 1114, 648, 1047, 629, 483, 2641, 649],
            "interactions": 0
        },
        "...": "...",
        "Sun": {
            "hours": [1066, 1684, 901, 652, 1078, 1236, 1052, 920, 803, 686, 391,
                      800, 736, 558, 418, 703, 591, ...
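The hourly and weekly averages arrays can be examined client-side, for instance to find the busiest hour of the day. A sketch using the hourly sample shown earlier:

```python
# Sketch: locating the busiest hour from the "averages" array returned
# by GET /manager/stats/hourly. Values come from the sample response;
# one entry per hour, index 0 = midnight.
averages = [974, 1291, 886, 784, 1013, 843, 880, 872, 805, 681, 1094,
            868, 609, 659, 1455, 1382, 1465, 2092, 1475, 1879, 1548,
            1854, 1849, 1020]

busiest = max(range(len(averages)), key=lambda h: averages[h])
print(busiest, averages[busiest])  # 17 2092
```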

Rootcheck

Database

GET /rootcheck/:agent_id

Returns the rootcheck database of an agent.

Parameters:

agent_id

Query: N/A

Example Request:

GET /rootcheck/000

Example Response:

{
    "error": "0",
    "data": [
        {
            "status": "outstanding",
            "readday": "2016 Feb 23 12:52:58",
            "oldday": "2016 Feb 22 19:41:05",
            "event": "(null)System Audit: CIS - Testing against the CIS Debian Linux Benchmark v1.0. File: /etc/debian_version. Reference: .../index.php/cis_debianlinux."
        },
        {
            "status": "outstanding",
            "readday": "2016 Feb 23 12:52:58",
            "oldday": "2016 Feb 22 19:41:05",
            "event": "(null)System Audit: CIS - Debian Linux - Robust partition scheme - /tmp is not on its own partition {CIS: 1.4 Debian Linux}. File: /etc/fstab. Reference: ..."
        },
        {
            "status": "outstanding",
            "readday": "2016 Feb 23 12:52:58",
            "oldday": "2016 Feb 22 19:41:05",
            "event": "(null)System Audit: CIS - Debian Linux - Robust partition scheme - /opt is not on its own partition {CIS: 1.4 Debian Linux}. File: /opt. Reference: ..."
        },
        {
            "status": "outstanding",
            "readday": "2016 Feb 23 12:52:58",
            "oldday": "2016 Feb 22 19:41:05",
            "event": "(null)System Audit: CIS - Debian Linux - Robust partition scheme - /var is not on its own partition {CIS: 1.4 Debian Linux}. File: /etc/fstab. Reference: ..."
        },
        {
            "status": "outstanding",
            "readday": "2016 Feb 23 12:52:58",
            "oldday": "2016 Feb 22 19:41:05",
            "event": "(null)System Audit: CIS - Debian Linux - Disable standard boot services - Web server Enabled {CIS: 4.13 Debian Linux} {PCI_DSS: 2.2.2}. File: /etc/init.d/apache2. Reference: ...
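Rootcheck events embed compliance tags directly in the event text. Assuming the ruleset's usual {KEY: value} tag form (the braces are lost in some renderings of the samples above), a sketch of extracting those tags:

```python
# Sketch: pulling compliance tags (CIS, PCI_DSS) out of a rootcheck
# event string. Assumes the {KEY: value} tag form used by the ruleset.
import re

event = ("System Audit: CIS - Debian Linux - Disable standard boot services "
         "- Web server Enabled {CIS: 4.13 Debian Linux} {PCI_DSS: 2.2.2}. "
         "File: /etc/init.d/apache2.")

tags = dict(re.findall(r"\{(\w+): ([^}]+)\}", event))
print(tags["PCI_DSS"])  # 2.2.2
```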

Last scan

GET /rootcheck/:agent_id/last_scan

Returns the timestamp of the last rootcheck scan.

Parameters:

agent_id

Query: N/A

Example Request:

GET /rootcheck/000/last_scan

Example Response:

{
    "error": "0",
    "data": {
        "rootchecktime": "Tue Feb 23 15:54:...",
        "rootcheckendtime": "Tue Feb 23 15:58:..."
    },
    "message": ""
}

Run

PUT /rootcheck

Runs syscheck/rootcheck on all agents. This request behaves the same as PUT /syscheck, because OSSEC launches both processes at once.

Parameters: N/A

Query: N/A

Example Request:

PUT /rootcheck

Example Response:

{
    "error": "0",
    "data": "Restarting Syscheck/Rootcheck on all agents",
    "message": ""
}

PUT /rootcheck/:agent_id

Runs syscheck/rootcheck on an agent. This request behaves the same as PUT /syscheck/:agent_id, because OSSEC launches both processes at once.

Parameters:

agent_id

Query: N/A

Example Request:

PUT /rootcheck/000

Example Response:

{
    "error": "0",
    "data": "Restarting Syscheck/Rootcheck on agent",
    "message": ""
}

Clear Database

DELETE /rootcheck

Clears the rootcheck database for all agents.

Parameters: N/A

Query: N/A

Example Request:

DELETE /rootcheck

Example Response:

{
    "error": "0",
    "data": "Policy and auditing database updated",
    "message": ""
}

DELETE /rootcheck/:agent_id

Clears the rootcheck database for an agent.

Parameters:

agent_id

Query: N/A

Example Request:

DELETE /rootcheck/000

Example Response:

{
    "error": "0",
    "data": "Policy and auditing database updated",
    "message": ""
}

Syscheck

Database

GET /syscheck/:agent_id/files/changed

Returns the changed files for an agent. If a filename is specified, it returns the changes for that file.

Parameters:

agent_id
filename

Query: N/A

Example Request:

GET /syscheck/000/files/changed

Example Response:

{
    "error": "0",
    "data": [
        {
            "date": "2016 Feb 23 15:42:46",
            "file": "/home/test/passwords.txt",
            "changes": 0,
            "attrs": {
                "event": "added",
                "size": "2",
                "mode": 33188,
                "perm": "rw-r--r--",
                "uid": "0",
                "gid": "0",
                "md5": "60b725f10c9c85c70d97880dfe8191b3",
                "sha1": "3f786850e387550fdab836ed7e6dc881de23001b"
            }
        },
        {
            "date": "2016 Feb 23 15:53:41",
            "file": "/home/test/passwords.txt",
            "changes": 0,
            "attrs": {
                "event": "modified",
                "size": "53",
                "mode": 33279,
                "perm": "rwxrwxrwx",
                ...
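Comparing two syscheck events for the same file shows which attributes changed. A sketch over the documented attrs shape; the second md5 value is illustrative, not taken from the sample:

```python
# Sketch: diffing two syscheck events for the same file to see which
# attributes changed, using the documented "attrs" shape. The second
# md5 value is illustrative.
added = {"event": "added", "size": "2", "perm": "rw-r--r--",
         "md5": "60b725f10c9c85c70d97880dfe8191b3"}
modified = {"event": "modified", "size": "53", "perm": "rwxrwxrwx",
            "md5": "..."}

changed = [k for k in ("size", "perm", "md5") if added[k] != modified[k]]
print(changed)  # ['size', 'perm', 'md5']
```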

Last scan

GET /syscheck/:agent_id/last_scan

Returns the timestamp of the last syscheck scan.

Parameters:

agent_id

Query: N/A

Example Request:

GET /syscheck/000/last_scan

Example Response:

{
    "error": "0",
    "data": {
        "syschecktime": "Tue Feb 23 15:37:...",
        "syscheckendtime": "Tue Feb 23 15:42:..."
    },
    "message": ""
}

Run

PUT /syscheck

Runs syscheck/rootcheck on all agents. This request behaves the same as PUT /rootcheck, because OSSEC launches both processes at once.

Parameters: N/A

Query: N/A

Example Request:

PUT /syscheck

Example Response:

{
    "error": "0",
    "data": "Restarting Syscheck/Rootcheck on all agents",
    "message": ""
}

PUT /syscheck/:agent_id

Runs syscheck/rootcheck on an agent. This request behaves the same as PUT /rootcheck/:agent_id, because OSSEC launches both processes at once.

Parameters:

agent_id

Query: N/A

Example Request:

PUT /syscheck/000

Example Response:

{
    "error": "0",
    "data": "Restarting Syscheck/Rootcheck on agent",
    "message": ""
}

Clear Database

DELETE /syscheck

Clears the syscheck database for all agents.

Parameters: N/A

Query: N/A

Example Request:

DELETE /syscheck

Example Response:

{
    "error": "0",
    "data": "Integrity check database updated",
    "message": ""
}

DELETE /syscheck/:agent_id

Clears the syscheck database for an agent.

Parameters:

agent_id

Query: N/A

Example Request:

DELETE /syscheck/000

Example Response:

{
    "error": "0",
    "data": "Integrity check database updated",
    "message": ""
}

Examples

cURL

curl is a command-line tool for transferring data using various protocols, and it can be used to interact with this API. It is pre-installed on many Linux and Mac systems. Some examples:

GET

$ curl -u foo:bar -k ...
{"error":"0","data":"ossec-api","message":"wazuh.com"}

PUT

$ curl -u foo:bar -k -X PUT ...
{"error":0,"data":{"id":"004","message":"Agent added"},"message":""}

POST

$ curl -u foo:bar -k -X POST -d 'name=newhost&ip=...' .../agents
{"error":0,"data":{"id":"004","message":"Agent added"},"message":""}

DELETE

$ curl -u foo:bar -k -X DELETE ...
{"error":"0","data":"Policy and auditing database updated","message":""}

Python

It is very easy to interact with the API using Python:

Code:

#!/usr/bin/env python
import json
import requests  # Install requests: pip install requests

# Configuration
base_url = '...'  # your API URL
auth = requests.auth.HTTPBasicAuth('foo', 'bar')
verify = False
requests.packages.urllib3.disable_warnings()

# Request
url = '{0}{1}'.format(base_url, "/agents/000")
r = requests.get(url, auth=auth, params=None, verify=verify)

print(json.dumps(r.json(), indent=4, sort_keys=True))
print("Status: {0}".format(r.status_code))

Output:

{
    "error": "0",
    "message": "",
    "data": {
        "id": "000",
        "ip": "...",
        "lastkeepalive": "Not available",
        "name": "LinMV",
        "os": "Linux LinMV ... amd64 #1 SMP Debian ... x86_64",
        "rootcheckendtime": "Unknown",
        "rootchecktime": "Unknown",
        "status": "Active",
        "syscheckendtime": "Unknown",
        "syschecktime": "Unknown",
        "version": "OSSEC HIDS v2.8"
    }
}
Status: 200

Full example in wazuh-api/examples/api-client.py.

PowerShell

The Invoke-RestMethod cmdlet sends requests to the API and handles the response easily. This cmdlet was introduced in Windows PowerShell 3.0.

Code:

function Ignore-SelfSignedCerts {
    add-type @"
    using System.Net;
    using System.Security.Cryptography.X509Certificates;
    public class PolicyCert : ICertificatePolicy {
        public PolicyCert() {}
        public bool CheckValidationResult(
            ServicePoint sPoint, X509Certificate cert,
            WebRequest wRequest, int certProb) {
            return true;
        }
    }
"@
    [System.Net.ServicePointManager]::CertificatePolicy = new-object PolicyCert
}

# Configuration
$base_url = "..."  # your API URL
$username = "foo"
$password = "bar"
$base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $username, $password)))
Ignore-SelfSignedCerts

# Request
$url = $base_url + "/syscheck/000/last_scan"
$method = "get"
try {
    $r = Invoke-RestMethod -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo)} -Method $method -Uri $url
} catch {
    $r = $_.Exception
}
Write-Output $r

Output:

error data                                                                  message
----- ----                                                                  -------
0     @{syschecktime=Tue Feb 24 09:55:...; syscheckendtime=Wed Feb 24 10:00:...}

Full example in wazuh-api/examples/api-client.ps1.

What's next

Once you have your OSSEC RESTful API running, we recommend you check our OSSEC Wazuh ruleset:

OSSEC Wazuh Ruleset installation guide


CHAPTER 6

OSSEC Wazuh Ruleset

Introduction

This documentation explains how to install, update, and contribute to the OSSEC HIDS Ruleset maintained by Wazuh. These rules are used by the system to detect attacks, intrusions, software misuse, configuration problems, application errors, malware, rootkits, system anomalies or security policy violations.

OSSEC provides an out-of-the-box set of rules that we update by modifying them or including new ones, in order to increase OSSEC detection capabilities. In the ruleset repository you will find:

OSSEC out-of-the-box rule/rootcheck updates and compliance mapping: We update and maintain the out-of-the-box rules provided by OSSEC, both to eliminate false positives and to increase their accuracy. In addition, we map them to PCI DSS compliance controls, making it easy to identify when an alert is related to a compliance requirement.

New rules/rootchecks: OSSEC's default number of rules and decoders is limited. For this reason, we centralize, test and maintain decoders and rules submitted by Open Source contributors. We also create new rules and rootchecks periodically and add them to this repository so they can be used by the user community. Some examples are the new rules for Netscaler and Puppet.

Resources

Visit our repository to view the rules in detail at Github: OSSEC Wazuh Ruleset

Find a complete description of the available rules: OSSEC Wazuh Ruleset Summary

Rule and Rootcheck example

Log analysis rule for Netscaler with PCI DSS compliance mapping:

<rule id="80102" level="10" frequency="6">
    <if_matched_sid>80101</if_matched_sid>
    <same_source_ip />

    <description>Netscaler: Multiple AAA failed to login the user</description>
    <group>authentication_failures,netscaler-aaa,pci_dss_10.2.4,pci_dss_10.2.5,pci_dss_11.4,</group>
</rule>

Rootcheck rule for SSH Server with mapping to the CIS security benchmark and PCI DSS compliance:

[CIS - Debian Linux - SSH Configuration - Empty passwords permitted {CIS: 2.3 Debian Linux} {PCI_DSS: 4.1}] [any] [...]
f:/etc/ssh/sshd_config -> !r:^# && r:^PermitEmptyPasswords\.+yes;

Manual installation

Log analysis rules

In the Github repository you will find two different kinds of rules under the ossec-rules/rules-decoders/ directory:

Updated out-of-the-box rules

These rules can be found under the ossec-rules/rules-decoders/ossec directory, and you can manually install them following these steps:

- Copy "ossec-rules/rules-decoders/ossec/decoders/*_decoders.xml" to "/var/ossec/etc/ossec_decoders/".
- Copy all files "ossec-rules/rules-decoders/ossec/rules/*rules*.xml" to "/var/ossec/rules/", except for "local_rules.xml".
- Restart your OSSEC manager.

If you do not use the OSSEC Wazuh fork, after the above steps copy the decoders ossec/decoders/compatibility/*_decoders.xml to /var/ossec/etc/ossec_decoders/.

New log analysis rules

These rules are located at ossec-rules/rules-decoders/software (where "software" is the name of your log message source) and can be installed manually following the next steps. Copy the new rule files into the OSSEC directories and add the new rules file to the ossec.conf configuration file:

- Copy "software_decoders.xml" to "/var/ossec/etc/wazuh_decoders/".
- Copy "software_rules.xml" to "/var/ossec/rules/".
- Add "<include>software_rules.xml</include>" to "/var/ossec/etc/ossec.conf" before the tag "</rules>".
- If there are additional instructions to install these rules and decoders, you will find them in an instructions.md file in the same directory.
- Restart your OSSEC manager.

Decoder paths

Configure decoder paths by adding the following lines after the tag <rules> in /var/ossec/etc/ossec.conf:

<decoder_dir>etc/ossec_decoders</decoder_dir>
<decoder>etc/local_decoder.xml</decoder>
<decoder_dir>etc/wazuh_decoders</decoder_dir>

If you do not use the OSSEC Wazuh fork, you must move the file decoder.xml to the directory etc/ossec_decoders. Also, if you do not use local_decoder.xml, remove that line from ossec.conf. Remember that local_decoder.xml cannot be empty.

Rootcheck rules

Rootchecks can be found in the ossec-rules/rootcheck/ directory. There you will see both updated out-of-the-box OSSEC rootchecks and new ones. To install a rootcheck file, go to your OSSEC manager and copy the .txt file to /var/ossec/etc/shared/. Then modify /var/ossec/etc/ossec.conf by adding the path to the .txt file inside the <rootcheck> section. Examples:

- <rootkit_files>/var/ossec/etc/shared/rootkit_files.txt</rootkit_files>
- <system_audit>/var/ossec/etc/shared/cis_rhel5_linux_rcl.txt</system_audit>
- <windows_malware>/var/ossec/etc/shared/win_malware_rcl.txt</windows_malware>
- <windows_audit>/var/ossec/etc/shared/win_audit_rcl.txt</windows_audit>
- <windows_apps>/var/ossec/etc/shared/win_applications_rcl.txt</windows_apps>

Automatic installation

Run the ossec_ruleset.py script to update the OSSEC Wazuh Ruleset with no need to manually change OSSEC internal files.

Getting the script:

$ sudo mkdir -p /var/ossec/update/ruleset && cd /var/ossec/update/ruleset
$ sudo wget ...

Running the script:

$ sudo chmod +x /var/ossec/update/ruleset/ossec_ruleset.py
$ sudo /var/ossec/update/ruleset/ossec_ruleset.py --help

Usage examples

Update decoders/rules/rootchecks:

./ossec_ruleset.py

Update and prompt a menu to activate new rules & rootchecks:

./ossec_ruleset.py -a

Restore a backup:

./ossec_ruleset.py --backups list

All script options:

Select ruleset:
    -r, --rules          Update rules.
    -c, --rootchecks     Update rootchecks.
    *If neither -r nor -c is indicated, both rules and rootchecks will be updated.

Activate:
    -a, --activate       Prompt an interactive menu to select the rules and rootchecks to activate.
    -A, --activate-file  Use a configuration file to select the rules and rootchecks to activate.
    *If neither -a nor -A is indicated, NEW rules and rootchecks will NOT be activated.

Restart:
    -s, --restart        Restart OSSEC when required.
    -S, --no-restart     Do not restart OSSEC when required.

Backups:
    -b, --backups        Restore backups. Use 'list' to show the available backups.

Additional params:
    -f, --force-update   Force an update of all rules and rootchecks. By default, only new/changed rules and rootchecks are updated.
    -d, --directory      Use the ruleset specified at 'directory'. The directory structure should be the same as the ossec-rules repository.

Configuration file syntax using option -A:

# Commented line
rules:rule_name
rootchecks:rootcheck_name

Configure weekly updates

Keep your OSSEC Wazuh Ruleset installation up to date by running ossec_ruleset.py weekly via a crontab job. Run sudo crontab -e and, at the end of the file, add the following line:

... root cd /var/ossec/update/ruleset && ./ossec_ruleset.py -s

Wazuh rules

All Wazuh rules can be automatically installed by running wazuh/ossec-rules/ossec_ruleset.py -r, but some of these rules require additional manual steps. This section describes the new rules developed by Wazuh and, where necessary, the manual steps to be performed.

Netscaler

NetScaler is a network appliance (or hardware device) manufactured by Citrix, whose primary role is to provide Layer 4 load balancing. It also supports firewall, proxy and VPN functions.

Puppet

Puppet is an open-source configuration management utility. After installing the Puppet rules (automatically or manually) you need to perform the next manual step. This is because some rules need to read the output of a command. Copy the code below to /var/ossec/etc/shared/agent.conf on your OSSEC manager to allow OSSEC to execute this command and read its output:

<agent_config>
    <localfile>
        <log_format>full_command</log_format>
        <command>timestamp_puppet=`cat /var/lib/puppet/state/last_run_summary.yaml | grep last_run | cut -d: -f 2 | tr -d '[[:space:]]'`;timestamp_current_date=$(date +"%s");diff_min=$((($timestamp_current_date-$timestamp_puppet)/60));if [ "$diff_min" -le "30" ];then echo "Puppet: OK. It runs in the last 30 minutes";else puppet_date=`date ...`;echo "Puppet: KO. Last run: $puppet_date";fi</command>
        <frequency>2100</frequency>
    </localfile>
</agent_config>

You must also configure the logcollector option in every agent to accept remote commands from the manager. To do this, add the following lines to /var/ossec/etc/internal_options.conf:

# Logcollector - If it should accept remote commands from the manager
logcollector.remote_commands=1

Serv-U

FTP server software (FTP, FTPS, SFTP, web & mobile) for secure file transfer and file sharing on Windows & Linux.

Amazon

Before installing our Amazon rules, you need to follow the steps below in order to enable logging through the AWS API and download the JSON data files. A detailed description of each step can be found further below.

1. Turn on CloudTrail.
2. Create a user with permission over S3.
3. Install the AWS CLI on your OSSEC manager.
4. Configure the previous user's credentials with the AWS CLI on your OSSEC manager.
5. Run a Python script to download the JSON data in gzipped log files and convert it into a flat file.
6.
Install the Wazuh Amazon rules.

1. Turn on CloudTrail

In this section you will learn how to create a trail for your AWS account. Trails can be created using the AWS CloudTrail console or the AWS Command Line Interface (AWS CLI). Both methods follow the same steps, but we will focus on the first one: Turn on CloudTrail.

By default, when creating a trail in one region in the CloudTrail console, it will apply to all regions. Create a new Amazon S3 bucket for storing your log files, or specify an existing bucket where you want your log files to be stored. By default, log files from all AWS regions in your account will be stored in the bucket you specify.

S3 bucket names are shared across all Amazon users, so don't worry if you get the error "Bucket already exists. Select a different bucket name.", even if you have not created any bucket before.

From now on, all your actions in the Amazon AWS console will be logged. You can search logs manually inside CloudTrail/API activity history. Also, notice that every 7 minutes a .json file will be stored in your bucket.

2. Create a user with permission over S3

Sign in to the AWS Management Console and open the IAM console at ... In the navigation panel, choose Users and then choose Create New Users. Type the user names for the users you would like to create. You can create up to five users at one time.

Note: User names can only use a combination of alphanumeric characters and these characters: plus (+), equal (=), comma (,), period (.), at (@), and hyphen (-). Names must be unique within an account.

The users require access to the API, and for this they must have access keys. To generate access keys for the new users, select Generate an access key for each user and choose Create.

(Optional) To view the users' access keys (access key IDs and secret access keys), choose Show User Security Credentials. To save the access keys, choose Download Credentials and then save the file to a safe location on your computer.
Warning: This is your only opportunity to view or download the secret access keys, and you must provide this information to your users before they can use the AWS Console. If you don't download and save them now, you will need to create new access keys for the users later. Save the new user's access key ID and secret access key in a safe and secure place. You will not have access to the secret access keys again after this step.

Give the user(s) permission to manage security policies: press Attach Policy and select the AmazonS3FullAccess policy.

3. Install the AWS CLI on your OSSEC manager

To download and process the Amazon AWS logs that are already archived in the S3 bucket, we need to install the AWS CLI on your system and configure it for use with AWS.

The AWS CLI comes pre-installed on the Amazon Linux AMI. Run $ sudo yum update after connecting to the instance to get the latest version of the package available via yum. If you need a more recent version of the AWS CLI than the one available in the Amazon updates repository, uninstall the package with $ sudo yum remove aws-cli and then install it using pip as follows.

Prerequisites for installing the AWS CLI using pip:

Windows, Linux, OS X, or Unix
Python 2 version ... or Python 3 version 3.3+
pip

If you don't have Python installed, install version 2.7 or 3.4 using one of the following methods.

Check if Python is already installed:

$ python --version

If Python 2.7 or later is not installed, install it with your distribution's package manager. The command and package name vary:

On Debian derivatives such as Ubuntu, use APT:

$ sudo apt-get install python2.7

On Red Hat and derivatives, use yum:

$ sudo yum install python27

Open a command prompt or shell and run the following command to verify that Python has been installed correctly:

$ python --version
Python ...

To install pip on Linux

Download the installation script from pypa.io:

$ curl -O ...

Run the script with Python:

$ sudo python get-pip.py

Now that we have Python and pip installed, use pip to install the AWS CLI:

$ sudo pip install awscli

Note: If you installed a new version of Python alongside an older version that came with your distribution, or updated pip to the latest version, you may get a "command not found" error when trying to invoke pip with sudo. We can work around this issue by using which pip to locate the executable, and then invoking it directly via an absolute path when installing the AWS CLI:

$ which pip
/usr/local/bin/pip
$ sudo /usr/local/bin/pip install awscli

To upgrade an existing AWS CLI installation, use the --upgrade option:

$ sudo pip install --upgrade awscli

4. Configure user credentials with the AWS CLI

To configure the user credentials, type:

$ sudo aws configure

This command is interactive, prompting you to enter additional information. Enter each of your access keys in turn and press Enter. The region name is not necessary: press Enter, and press Enter once again to skip the output format setting. The result should be something like this:

AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: ENTER
Default output format [None]: ENTER

5. Run a Python script to download the JSON data

To download the JSON file from the S3 bucket and convert it into a flat file to be used with OSSEC, we use a Python script written by Xavier, with minor modifications by Wazuh. The script is located in our repository at wazuh/ossec-rules/tools/amazon/getawslog.py. The command to use this script is:

$ ./getawslog.py -b s3bucketname -d -j -D -l /var/log/amazon/amazon.log

where s3bucketname is the name of the bucket created when CloudTrail was activated and /var/log/amazon/amazon.log is the path where the log is stored after being converted by the script.

Note: In case you don't want to use an existing folder, the folder where the log is stored needs to be created manually before starting the script.

CloudTrail delivers log files to your S3 bucket approximately every 5 minutes. CloudTrail does not deliver log files if no API calls are made on your account, so you can run the script every 5 minutes or more by adding a crontab job to your system.
Note: If, after executing getawslog.py for the first time, the result is:

Traceback (most recent call last):
  File "/root/script/getawslog.py", line 16, in <module>
    import boto
ImportError: No module named boto

then install the missing boto module with this command:

$ sudo pip install boto

Run vi /etc/crontab and, at the end of the file, add the following line:

*/5 * * * * root python path_to_script/getawslog.py -b s3bucketname -d -j -D -l /var/log/amazon/amazon.log

Note: This script downloads and deletes the files from your S3 bucket, but you can always review the last 7 days of logs through CloudTrail.

6. Install the Wazuh Amazon rules

To install the Wazuh Amazon rules, follow either the Automatic installation section or the Manual installation section in this guide.

Contribute to the ruleset

If you have created new rules, decoders or rootchecks and you would like to contribute to our repository, please fork our Github repository and submit a pull request. If you are not familiar with Github, you can also share them through our users mailing list, to which you can subscribe by sending an email to wazuh+subscribe@googlegroups.com. Also, do not hesitate to request new rules or rootchecks that you would like to see running in OSSEC, and our team will do our best to make it happen.

Note: In our repository you will find that most of the rules contain one or more groups called pci_dss_x. This is the PCI DSS control related to the rule. We have produced a document that can help you tag each rule with its corresponding PCI requirement: ...

What's next

Once you have your ruleset up to date, we encourage you to move forward and try out the ELK integration or the RESTful API; check them out at:

ELK Stack integration guide
OSSEC Wazuh RESTful API installation Guide


CHAPTER 7

OSSEC Docker container

Docker installation

Docker requires a 64-bit installation regardless of your CentOS or Debian version. Also, your kernel must be 3.10 at minimum. To check your current kernel version, open a terminal and use uname -r to display it:

$ uname -r
...el7.x86_64

Note: These Docker containers are based on xetus-oss dockerfiles, which can be found at xetus-oss/docker-ossec-server. We created our own fork, which we test and maintain. Thank you Terence Kent for your contribution to the community.

Run the Docker installation script:

$ curl -sSL ... | sh

If you would like to use Docker as a non-root user, you should now consider adding your user to the docker group with something like:

$ sudo usermod -aG docker your-user

Note: Remember that you will have to log out and back in for this to take effect!

OSSEC-ELK Container

These Docker container source files can be found in our ossec-wazuh Github repository. The container includes both an OSSEC manager and an Elasticsearch single-node cluster, with Logstash and Kibana. You can find more information on how these components work together in our documentation.

To install the ossec-elk container, run this command:

$ docker run -d -p 55000:55000 -p 1514:1514/udp -p 1515:1515 -p 514:514/udp -p 5601:5601 -v /somepath/elasticsearch:/var/lib/elasticsearch -v /somepath/ossec_mnt:/var/ossec/data --name ossec wazuh/ossec-elk

The /var/ossec/data directory allows the container to be replaced without configuration or data loss: logs, etc, stats, rules, and queue (all OSSEC files). In addition to those directories, the bin/.process_list file is symlinked to process_list in the data volume.

Other available configuration parameters are:

AUTO_ENROLLMENT_ENABLED: Specifies whether or not to enable auto-enrollment via ossec-authd. Defaults to true.
AUTHD_OPTIONS: Options passed to ossec-authd, other than -p and -g. No default.
SYSLOG_FORWARDING_ENABLED: Specifies whether Syslog forwarding is enabled or not. Defaults to false.
SYSLOG_FORWARDING_SERVER_IP: The IP address for the Syslog server. No default.
SYSLOG_FORWARDING_SERVER_PORT: The destination port for Syslog messages. Default is 514.
SYSLOG_FORWARDING_FORMAT: The Syslog message format to use. Default is default.

Note: All SYSLOG configuration variables are only applicable to the first-time setup. Once the container's data volume has been initialized, all the configuration options for OSSEC can be changed.
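The port, volume and environment options above all end up as flags on the `docker run` command line; a small helper (hypothetical, the function name is our own) shows the mapping:

```python
def docker_run_args(image, name, ports=(), volumes=(), env=None):
    """Build a `docker run` argument list from ports, volumes and -e options."""
    args = ["docker", "run", "-d", "--name", name]
    for port in ports:
        args += ["-p", port]
    for volume in volumes:
        args += ["-v", volume]
    for key, value in sorted((env or {}).items()):
        args += ["-e", "%s=%s" % (key, value)]
    args.append(image)  # image name always goes last
    return args

cmd = docker_run_args(
    "wazuh/ossec-elk", "ossec",
    ports=["55000:55000", "1514:1514/udp", "1515:1515", "514:514/udp", "5601:5601"],
    volumes=["/somepath/ossec_mnt:/var/ossec/data"],
    env={"AUTO_ENROLLMENT_ENABLED": "true"},
)
print(" ".join(cmd))
```

Passing such a list to subprocess avoids shell-quoting problems when the commands grow long.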
To add an agent, use the following command:

$ docker exec -it ossec /var/ossec/bin/manage_agents

Note: You can also use agent auto-enrollment with ossec-authd.

Then restart your OSSEC manager:

$ docker exec -it ossec /var/ossec/bin/ossec-control restart

Access to Kibana 4.5: if you get an error the first time you log in to Kibana, move to a different menu, return to Discover, and it should work properly.

Note: Some dashboard visualizations require time and specific alerts to work. Please don't worry if some visualizations do not display data immediately after the import.

OSSEC HIDS Container

These Docker container source files can be found in our ossec-server Github repository. To install it, run this command:

$ docker run --name ossec-server -d -p 1514:1514/udp -p 1515:1515 \
    -e SYSLOG_FORWARDING_ENABLED=true -e SYSLOG_FORWARDING_SERVER_IP=X.X.X.X \
    -v /somepath/ossec_mnt:/var/ossec/data wazuh/docker-ossec

The /var/ossec/data directory allows the container to be replaced without configuration or data loss: logs, etc, stats, rules, and queue. In addition to those directories, the bin/.process_list file is symlinked to process_list in the data volume.

Other available configuration parameters are:

AUTO_ENROLLMENT_ENABLED: Specifies whether or not to enable auto-enrollment via ossec-authd. Defaults to true.
AUTHD_OPTIONS: Options passed to ossec-authd, other than -p and -g. No default.
SYSLOG_FORWARDING_ENABLED: Specifies whether Syslog forwarding is enabled or not. Defaults to false.
SYSLOG_FORWARDING_SERVER_IP: The IP address for the Syslog server. No default.
SYSLOG_FORWARDING_SERVER_PORT: The destination port for Syslog messages. Default is 514.
SYSLOG_FORWARDING_FORMAT: The Syslog message format to use. Default is default.
SMTP_ENABLED: Whether or not to enable SMTP notifications. Defaults to true if ALERTS_TO_EMAIL is specified, otherwise defaults to false.
SMTP_RELAY_HOST: The relay host for SMTP messages, required for SMTP notifications. This host must support non-authenticated SMTP. No default.
ALERTS_FROM_EMAIL: The email address the alerts should come from. Defaults to ossec@$hostname.
ALERTS_TO_EMAIL: The destination email address for SMTP notifications, required for SMTP notifications. No default.

Note: All SMTP and SYSLOG configuration variables are only applicable for the first-time setup. Once the container's data volume has been initialized, all the configuration options for OSSEC can be changed.

Once the system starts up, you can execute the standard OSSEC commands using docker.
For example, to list active agents:

$ docker exec -ti ossec-server /var/ossec/bin/list_agents -a
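The SMTP_ENABLED default described above is conditional on ALERTS_TO_EMAIL. A sketch of how that resolution could work (our own illustration, not the container's actual entrypoint code):

```python
def smtp_enabled(env):
    """SMTP_ENABLED defaults to true only when ALERTS_TO_EMAIL is set;
    an explicit value always wins over the default."""
    value = env.get("SMTP_ENABLED")
    if value is not None:
        return value.lower() == "true"
    return "ALERTS_TO_EMAIL" in env

print(smtp_enabled({"ALERTS_TO_EMAIL": "ops@example.com"}))  # True
print(smtp_enabled({}))                                      # False
print(smtp_enabled({"SMTP_ENABLED": "false",
                    "ALERTS_TO_EMAIL": "ops@example.com"}))  # False
```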


CHAPTER 8 OSSEC deployment with Puppet

Puppet master installation

Before we get started with Puppet, check the following network requirements:

Private network DNS: Forward and reverse DNS must be configured, and every server must have a unique hostname. If you do not have DNS configured, you must use your hosts file for name resolution. We will assume that you will use your private network for communication within your infrastructure.

Firewall open ports: The Puppet master must be reachable on port 8140.

Installation on CentOS

Install your Yum repository and the puppetserver package for your Enterprise Linux distribution. For example, for EL7:

$ sudo rpm -ivh
$ sudo yum install puppetserver

Installation on Debian

To install your Puppet master on Debian/Ubuntu systems, we first need to add our distribution repository. This can be done by downloading and installing a package named puppetlabs-release-distribution.deb, where distribution needs to be substituted by your distribution codename (e.g. wheezy, jessie, trusty, utopic). See below the commands to install the Puppet master package:

$ wget
$ sudo dpkg -i puppetlabs-release-pc1-trusty.deb
$ sudo apt-get update && sudo apt-get install puppetserver

Memory Allocation

By default, Puppet Server is configured to use 2 GB of RAM. However, if you want to experiment with Puppet Server on a VM, you can safely allocate as little as 512 MB of memory. To change the Puppet Server memory allocation, edit the init config file:

/etc/sysconfig/puppetserver (RHEL)
/etc/default/puppetserver (Debian)

Replace 2g with the amount of memory you want to allocate to Puppet Server. For example, to allocate 1 GB of memory, use JAVA_ARGS="-Xms1g -Xmx1g"; for 512 MB, use JAVA_ARGS="-Xms512m -Xmx512m".

Configuration

Configure /etc/puppetlabs/puppet/puppet.conf, adding the dns_alt_names line to the [main] section and replacing puppet.example.com with your own FQDN:

[main]
dns_alt_names = puppet,puppet.example.com

Note: If found in the configuration file, remove the line templatedir=$confdir/templates, which has been deprecated.

Then, restart your Puppet master to apply the changes:

$ sudo service puppetserver start

PuppetDB installation

After configuring your Puppet master to run on Apache with Passenger, the next step is to add PuppetDB so that you can take advantage of exported resources, as well as have a central storage place for Puppet facts and catalogs.

Installation on CentOS

$ sudo rpm -Uvh 64/pgdg-centos noarch.rpm
$ yum install puppetdb-terminus.noarch puppetdb postgresql94-server postgresql94 postgresql94-contrib.x86_64
$ sudo /usr/pgsql-9.4/bin/postgresql94-setup initdb
$ systemctl start postgresql-9.4
$ systemctl enable postgresql-9.4

Installation on Debian

$ sudo echo "deb trusty-pgdg main" >> /etc/apt/sources.list.d/pgdg.list
$ wget --quiet -O - | sudo apt-key add -

$ sudo apt-get update
$ sudo apt-get install puppetdb-terminus puppetdb postgresql-9.4 postgresql-contrib-9.4

Configuration

The next step is to edit pg_hba.conf and change the METHOD to md5 in the following two lines:

/var/lib/pgsql/9.4/data/pg_hba.conf (CentOS)

# IPv4 local connections:
host    all    all    127.0.0.1/32    md5
# IPv6 local connections:
host    all    all    ::1/128         md5

Create a PostgreSQL user and database:

# su - postgres
$ createuser -DRSP puppetdb
$ createdb -O puppetdb puppetdb

The user is created so that it cannot create databases (-D) or roles (-R) and doesn't have superuser privileges (-S). It'll prompt for a password (-P). Let's assume a password of yourpassword has been used. The database is created and owned (-O) by the puppetdb user.

Test the database access and create the pg_trgm extension:

# psql -h localhost -p 5432 -U puppetdb -W puppetdb
Password for user puppetdb:
psql (8.4.13)
Type "help" for help.

puppetdb=> CREATE EXTENSION pg_trgm;
puppetdb=> \q

Configure /etc/puppetlabs/puppetdb/conf.d/database.ini:

[database]
classname = org.postgresql.Driver
subprotocol = postgresql
subname = //127.0.0.1:5432/puppetdb
username = puppetdb
password = yourpassword
log-slow-statements = 10

Create /etc/puppetlabs/puppet/puppetdb.conf:

[main]
server_urls =

Create /etc/puppetlabs/puppet/routes.yaml:

---
master:
  facts:
    terminus: puppetdb
    cache: yaml

Finally, update /etc/puppetlabs/puppet/puppet.conf:

[master]
storeconfigs = true
storeconfigs_backend = puppetdb

Once all steps are completed, restart your Puppet master and run puppet agent --test:

$ puppet agent --test

Now PuppetDB is working.

Puppet agents installation

In this section we assume you have already installed the APT and Yum Puppet repositories.

Installation on CentOS

$ sudo yum install puppet
$ sudo puppet resource package puppet ensure=latest

Installation on Debian

$ sudo apt-get update
$ sudo apt-get install puppet
$ sudo puppet resource package puppet ensure=latest

Configuration

Add the server value to the [main] section of the node's /etc/puppet/puppet.conf file, replacing puppet.example.com with your Puppet master's FQDN:

[main]
server = puppet.example.com

Restart the Puppet service:

$ service puppet restart

Puppet certificates

Run the Puppet agent to generate a certificate for the Puppet master to sign:

$ sudo puppet agent -t

Log in to your Puppet master and list the certificates that need approval:

$ sudo puppet cert list

It should output a list with your node's hostname. Approve the certificate, replacing hostname.example.com with your agent node's name:

$ sudo puppet cert sign hostname.example.com

Back on the Puppet agent node, run the Puppet agent again:

$ sudo puppet agent -t

Note: Remember that the private network DNS is a prerequisite for correct certificate signing.

OSSEC Puppet module

Note: This Puppet module was authored by Nicolas Zin, and updated by Jonathan Gazeley and Michael Porter. Wazuh has forked it with the purpose of maintaining it. Thank you to the authors for the contribution.

Download and install the OSSEC module from Puppet Forge:

$ sudo puppet module install wazuh-ossec
Notice: Preparing to install into /etc/puppet/modules ...
Notice: Downloading from ...
Notice: Installing -- do not interrupt ...
/etc/puppet/modules
- wazuh-ossec (v2.0.1)
- jfryman-selinux (v0.2.5)
- puppetlabs-apt (v2.2.0)
- puppetlabs-concat (v1.2.4)
- puppetlabs-stdlib (v4.9.0)
- stahnma-epel (v1.1.1)

This module installs and configures the OSSEC HIDS agent and manager. The manager is configured by installing the ossec::server class, and optionally using:

ossec::command: to define an active-response command (like firewall-drop.sh).
ossec::activeresponse: to link rules to active-response commands.
ossec::addlog: to define additional log files to monitor.

Example

Here is an example of a manifest ossec.pp:

OSSEC manager:

node "server.yourhost.com" {
  class { 'ossec::server':
    mailserver_ip => 'localhost',

    ossec_emailto  => 'admin@yourhost.com',
    use_mysql      => true,
    mysql_hostname => 'X.X.X.X',
    mysql_name     => 'ossec',
    mysql_password => 'yourpassword',
    mysql_username => 'ossec',
  }

  ossec::command { 'firewallblock':
    command_name       => 'firewall-drop',
    command_executable => 'firewall-drop.sh',
    command_expect     => 'srcip',
  }

  ossec::activeresponse { 'blockwebattack':
    command_name          => 'firewall-drop',
    ar_level              => 9,
    ar_rules_id           => [31153,31151],
    ar_repeated_offenders => '30,60,120',
  }

  ossec::addlog { 'monitorlogfile':
    logfile => '/var/log/secure',
    logtype => 'syslog',
  }

  class { '::mysql::server':
    root_password           => 'yourpassword',
    remove_default_accounts => true,
  }

  mysql::db { 'ossec':
    user     => 'ossec',
    password => 'yourpassword',
    host     => 'localhost',
    grant    => ['ALL'],
    sql      => '/var/ossec/contrib/sqlschema/mysql.schema',
  }
}

OSSEC agent:

node "client.yourhost.com" {
  class { 'ossec::client':
    ossec_server_ip => 'X.X.X.X',
  }
}

Reference

OSSEC manager class

class ossec::server

$mailserver_ip: SMTP mail server.
$ossec_emailfrom: Email from address.
$ossec_emailto: Email to address.
$ossec_active_response (default: true): Enable/disable active response (both on manager and agent).
$ossec_server_port (default: 1514): Port used for communication between manager and agents.
$ossec_global_host_information_level (default: 8): Alerting level for the events generated by the host change monitor (from 0 to 16).
$ossec_global_stat_level (default: 8): Alerting level for the events generated by the statistical analysis (from 0 to 16).
$ossec_email_alert_level (default: 7): Threshold (from 0 to 16) used to sort the alerts sent by email. Some alerts circumvent this threshold (when they have the alert_email option).
$ossec_emailnotification (default: yes): Whether to send email notifications.
$ossec_prefilter (default: false): Command to run to prevent prelinking from creating false positives. This option can potentially impact performance negatively. The configured command will be run for each and every file checked.
$local_decoder_template (default: ossec/local_decoder.xml.erb).
$local_rules_template (default: ossec/local_rules.xml.erb).
$manage_repo (default: true): Install OSSEC through the Wazuh repositories.
$manage_epel_repo (default: true): Install the EPEL repo and inotify-tools.
$manage_paths (default: [{'path' => '/etc,/usr/bin,/usr/sbin', 'report_changes' => 'no', 'realtime' => 'no'}, {'path' => '/bin,/sbin', 'report_changes' => 'yes', 'realtime' => 'yes'}]): Follow the instructions below.
$ossec_white_list: Allow white-listing of IP addresses.
$manage_client_keys (default: true): Manage the client keys option.
$ossec_auto_ignore (default: yes): Specifies whether syscheck will ignore files that change too often (after the third change).
$use_mysql (default: false): Set to true to enable database integration for alerts and other outputs.
$mariadb (default: false): Set to true to use MariaDB instead of MySQL.
$mysql_hostname: MySQL hostname.
$mysql_name: MySQL database name.
$mysql_password: MySQL password.
$mysql_username: MySQL username.
$syslog_output (default: false).
$syslog_output_server (default: undef).
$syslog_output_format (default: undef).
$ossec_extra_rules_config: To use it, after enabling the Wazuh ruleset (either manually or via the automated script), take a look at the changes made to the ossec.conf file. You will need to put these same changes into the $ossec_extra_rules_config array parameter when calling the ossec::server class.

$ossec_email_maxperhour (default: 12): Global configuration for the maximum number of emails per hour.
$ossec_email_idsname (default: undef).
$server_package_version (default: installed): client.pp and server.pp were modified to accept package versions as a parameter.
$ossec_service_provider (default: $::ossec::params::ossec_service_provider): Set the service provider to Redhat on RedHat systems.
$ossec_rootcheck_frequency (default: 36000): Frequency at which the rootcheck is executed (in seconds).
$ossec_rootcheck_checkports (default: true): Look for the presence of hidden ports.
$ossec_rootcheck_checkfiles (default: true): Scan the whole filesystem looking for unusual files and permission problems.
$ossec_conf_template (default: ossec/10_ossec.conf.erb): Allows using a custom ossec.conf in the manager.

Consequently, if you add or remove any of the Wazuh rules later on, you'll need to ensure you add/remove the appropriate bits in the $ossec_extra_rules_config array parameter as well.

function ossec::email_alert

$alert_email: Email address to send alerts to.
$alert_group (default: false): Array of names of rule groups.

Note: No email will be sent below the global $ossec_email_alert_level.

function ossec::command

$command_name: Human-readable name for ossec::activeresponse usage.
$command_executable: Name of the executable. OSSEC comes preloaded with disable-account.sh, host-deny.sh, ipfw.sh, pf.sh, route-null.sh, firewall-drop.sh, ipfw_mac.sh, ossec-tweeter.sh, restart-ossec.sh.
$command_expect (default: srcip).
$timeout_allowed (default: true).

function ossec::activeresponse

$command_name.
$ar_location (default: local): It can be set to local, server, defined-agent, all.
$ar_level (default: 7): Can take values between 0 and 16.
$ar_rules_id (default: []): List of rule IDs.
$ar_timeout (default: 300): Usually active response blocks for a certain amount of time.
$ar_repeated_offenders (default: empty): A comma-separated list of increasing timeouts in minutes for repeat offenders. There can be a maximum of 5 entries.
function ossec::addlog

$log_name.
$agent_log (default: false).

$logfile: /path/to/log/file.
$logtype (default: syslog): The OSSEC log_format of the file.

OSSEC agent class

$ossec_server_ip: IP of the server.
$ossec_server_hostname: Hostname of the server.
$ossec_server_port (default: 1514): Port used for communication between manager and agents.
$ossec_active_response (default: true): Allows active response on this host.
$ossec_emailnotification (default: yes): Whether to send email notifications or not.
$ossec_prefilter (default: false): Command to run to prevent prelinking from creating false positives. This option can potentially impact performance negatively. The configured command will be run for each and every file checked.
$selinux (default: false): Whether to install a SELinux policy to allow rotation of OSSEC logs.
$agent_name (default: $::hostname).
$agent_ip_address (default: $::ipaddress).
$manage_repo (default: true): Install OSSEC through the Wazuh repositories.
$manage_epel_repo (default: true): Install the EPEL repo and inotify-tools.
$ossec_scanpaths (default: []): Agents can be Linux or Windows, so there are no ossec_scanpaths by default.
$manage_client_keys (default: true): Manage the client keys option.
$ar_repeated_offenders (default: empty): A comma-separated list of increasing timeouts in minutes for repeat offenders. There can be a maximum of 5 entries.
$service_has_status (default: $::ossec::params::service_has_status): Allows a configurable service_has_status, defaulting to params.
$agent_package_version (default: installed): client.pp and server.pp were modified to accept package versions as a parameter.
$agent_package_name (default: $::ossec::params::agent_package): Override the package for client installation.
$agent_service_name (default: $::ossec::params::agent_service): Override the service for client installation.
$ossec_service_provider (default: $::ossec::params::ossec_service_provider): Set the service provider to Redhat on RedHat systems.
$ossec_conf_template (default: ossec/10_ossec_agent.conf.erb): Allows using a custom ossec.conf in the agent.

function ossec::addlog

$log_name.
$agent_log (default: false).
$logfile: /path/to/log/file.
$logtype (default: syslog): The OSSEC log_format of the file.

ossec_scanpaths configuration

Leaving this unconfigured will result in OSSEC using the module defaults. By default, it will monitor /etc, /usr/bin, /usr/sbin, /bin and /sbin on the OSSEC server, with real-time monitoring disabled and report_changes enabled. To overwrite the defaults or add new paths to scan, you can use Hiera. To tell OSSEC to enable real-time monitoring of the default paths:

ossec::server::ossec_scanpaths:
  - path: /etc
    report_changes: no
    realtime: no
  - path: /usr/bin
    report_changes: no
    realtime: no
  - path: /usr/sbin
    report_changes: no
    realtime: no
  - path: /bin
    report_changes: yes
    realtime: yes
  - path: /sbin
    report_changes: yes
    realtime: yes

Note: Configuring the ossec_scanpaths variable will overwrite the defaults, i.e. if you want to add a new directory to monitor, you must also add the above default paths to be monitored.
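The note above means additions do not merge with the defaults, so any tooling that builds the Hiera data must re-include the default entries itself. A small sketch (default values taken from the $manage_paths parameter described earlier; the helper is our own):

```python
DEFAULT_SCANPATHS = [
    {"path": "/etc,/usr/bin,/usr/sbin", "report_changes": "no", "realtime": "no"},
    {"path": "/bin,/sbin", "report_changes": "yes", "realtime": "yes"},
]

def scanpaths_with(extra):
    """ossec_scanpaths overwrites the defaults, so a custom value must
    carry the default entries alongside the new ones."""
    return DEFAULT_SCANPATHS + list(extra)

paths = scanpaths_with([{"path": "/var/www", "report_changes": "yes", "realtime": "yes"}])
print([entry["path"] for entry in paths])
# ['/etc,/usr/bin,/usr/sbin', '/bin,/sbin', '/var/www']
```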

CHAPTER 9 OSSEC for Amazon AWS

This section provides instructions to integrate OSSEC with Amazon AWS. It also explains different use cases as examples of how the rules developed by Wazuh can be used to alert on specific events. In our Github repository there are rules for the IAM, EC2 and VPC services.

The diagram below explains how a log message generated by an AWS event flows until it arrives at the OSSEC agent. Once the agent reads the message, it sends it to the OSSEC manager, which performs the analysis using the rules. When a rule matches, an alert is triggered (if the level is high enough).

1. CloudTrail is a web service that records AWS API calls for your account and delivers log files. This means that, when an AWS event occurs, CloudTrail generates the log message. Using CloudTrail we can get more visibility into AWS user activity, tracking changes made to AWS resources.

2. Once an event takes place, CloudTrail delivers the log message to Amazon S3, writing it to a log file. S3 allows log files to be stored durably and inexpensively.

3. The script getawslog.py downloads the log files from Amazon S3 into the OSSEC agent, uncompressing them and appending new data to a local plain-text file.

This diagram makes it easier to understand the integration process described below.

OSSEC integration with Amazon AWS

Prior to the installation of the OSSEC rules for Amazon AWS, follow the steps below in order to enable the AWS API to generate log messages and store them as JSON data files in an Amazon S3 bucket. A detailed description of each of the steps can be found further below.

1. Turn on CloudTrail.
2. Create a user with permission to access S3.
3. Install Python Boto in your OSSEC agent.
4. Configure the previous user's credentials with the AWS CLI in your OSSEC agent.
5. Run the script getawslog.py to download the log JSON files and convert them into flat files.
6. Install the Wazuh Amazon rules.

Turn on CloudTrail

Create a trail for your AWS account. Trails can be created using the AWS CloudTrail console or the AWS Command Line Interface (AWS CLI). Both methods follow the same steps. In this case we will be focusing on the first one: Turn on CloudTrail. Note that, by default, when creating a trail in one region in the CloudTrail console, it will apply to all regions.

Warning: Please do not enable the Enable log file validation parameter; it is not supported by the provided Python script.

Create a new Amazon S3 bucket or specify an existing bucket to store all your log files. By default, log files from all AWS regions in your account will be stored in the selected bucket.

Note: When naming a new bucket, if you get the error Bucket already exists. Select a different bucket name., try a different name, since the one you have selected is already in use by another Amazon AWS user.

From now on, all the events in your Amazon AWS account will be logged. You can search log messages manually inside CloudTrail/API activity history. Note that every 7 minutes a JSON file containing new log messages will be stored in your bucket.

Create a user with permission to access S3

Sign in to the AWS Management Console and open the IAM console. In the navigation panel, choose Users and then choose Create New Users. Type the user names for the users you would like to create.

Note: User names can only use a combination of alphanumeric characters and these characters: plus (+), equal (=), comma (,), period (.), at (@), and hyphen (-). Names must be unique within an account.

The users require access to the API. For this, they must have access keys.
To generate access keys for new users, select Generate an access key for each user and choose Create.

Warning: This is your only opportunity to view or download the secret access keys, and you must provide this information to your users before they can use the AWS console. If you don't download and save them now, you will need to create new access keys for the users later. You will not have access to the secret access keys again after this step.

Give the user(s) access to this specific S3 bucket (based on the AWS post Writing IAM Policies: How to grant access to an Amazon S3 bucket). Under the IAM console, select Users, go to the Permissions tab and, in the Inline Policies section, select the Create User Policy button. Click the Custom Policy option and push the Select button. In the next page enter a policy name, e.g. ossec-cloudtrail-s3-access, and for Policy Document use the example provided below:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::yourbucketname"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": ["arn:aws:s3:::yourbucketname/*"]
    }
  ]
}

Install Python Boto in your OSSEC agent

To download and process the Amazon AWS logs that are already archived in the S3 bucket, we need to install Python Boto in the OSSEC agent and configure it to enable the connection with AWS S3.

Prerequisites for Python Boto installation using pip:

Windows, Linux, OS X, or Unix
Python 2 version 2.7+ or Python 3 version 3.3+
pip

Check if Python is already installed:

$ python --version

If Python 2.7 or later is not installed, install it with your distribution's package manager as shown below.

On Debian derivatives such as Ubuntu, use APT:

$ sudo apt-get install python2.7

On Red Hat and derivatives, use yum:

$ sudo yum install python27

Open a command prompt or shell and run the following command to verify that Python has been installed correctly:

$ python --version
Python
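With Python available, the bucket policy shown earlier can also be generated programmatically. This sketch (the function name is ours, and the Version string is the standard IAM policy-language version) rebuilds the same two-statement document for any bucket name:

```python
import json

def s3_access_policy(bucket):
    """Build the ListBucket + GetObject/DeleteObject policy for a bucket."""
    arn = "arn:aws:s3:::%s" % bucket
    return {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow",
             "Action": ["s3:ListBucket"],
             "Resource": [arn]},
            {"Effect": "Allow",
             "Action": ["s3:GetObject", "s3:DeleteObject"],
             "Resource": [arn + "/*"]},
        ],
    }

print(json.dumps(s3_access_policy("yourbucketname"), indent=2))
```

Note how ListBucket applies to the bucket ARN itself, while object-level actions apply to the /* wildcard below it.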

To install pip on Linux, download the installation script from pypa.io:

$ curl -O

Run the script with Python:

$ sudo python get-pip.py

Now that Python and pip are installed, use pip to install Boto:

$ sudo pip install boto

Configure user credentials with Python Boto

To configure the user credentials you need to create a file called /etc/boto.cfg looking like:

[Credentials]
aws_access_key_id = <your_access_key_here>
aws_secret_access_key = <your_secret_key_here>

Run the Python script to download the JSON data

We use a Python script to download JSON files from the S3 bucket and convert them into flat files that can be used with OSSEC. This script was written by Xavier and contains minor modifications done by Wazuh. It is located in our repository at wazuh/ossec-rules/tools/amazon/getawslog.py. Run the following command to use this script:

$ ./getawslog.py -b s3bucketname -d -j -D -l /path-with-write-permission/amazon.log

Where s3bucketname is the name of the bucket created when CloudTrail was activated (see the first step in this section: Turn on CloudTrail) and /path-with-write-permission/amazon.log is the path where the flat log file is stored once it has been converted by the script.

Note: In case you don't want to use an existing folder, create it manually before running the script.

CloudTrail delivers log files to your S3 bucket approximately every 7 minutes, and does not deliver log files if no API calls are made on your account. Run the script as a crontab job, and note that running it more frequently than once every 7 minutes would be useless. Run crontab -e and, at the end of the file, add the following line:

*/5 * * * * /usr/bin/flock -n /tmp/cron.lock -c "python path_to_script/getawslog.py -b s3bucketname -d -j -D -l /path-with-write-permission/amazon.log"

Note: This script downloads and deletes the files from your S3 bucket. However, you can always review the log messages generated during the last 7 days through CloudTrail.
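CloudTrail stores events wrapped in a {"Records": [...]} object, and the conversion the script performs is essentially unwrapping that array into one JSON object per line so OSSEC can read the file. A simplified sketch of that transformation (our own illustration, not getawslog.py's exact code):

```python
import json

def flatten_cloudtrail(raw):
    """Unwrap a CloudTrail {"Records": [...]} document into one JSON
    object per line, the flat format appended to amazon.log."""
    records = json.loads(raw).get("Records", [])
    return "\n".join(json.dumps(record, sort_keys=True) for record in records)

sample = '{"Records": [{"eventName": "CreateUser"}, {"eventName": "ConsoleLogin"}]}'
print(flatten_cloudtrail(sample))
```

Each output line is a self-contained JSON event, which is what the OSSEC JSON decoders expect to parse.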

Install Wazuh Amazon rules

To install the Wazuh Amazon rules, follow either the Automatic installation section or the Manual installation section in this guide.

Use Cases

Our rules focus on providing the desired visibility within the Amazon AWS platform. The following describes some use cases for the IAM, EC2 and VPC services. The structure followed is always the same: you will see the definition of the rule that matches the log message generated by the AWS event. You can check how this log message flows in the diagram at the beginning of this section. Also, in each of the examples, you will see a screenshot of how Kibana shows the corresponding alert. Remember that an alert is triggered when the log message matches a specific rule, if its level is high enough.

IAM use cases

AWS Identity and Access Management (IAM) enables you to securely control access to AWS services and resources for your users. Using IAM, you can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources. Below are some use cases using some of the Wazuh rules built for IAM.

Create user account

When we create a new user account in IAM, an AWS event is generated. As per the diagram at the beginning of this section, the log message flows until the OSSEC agent gets the log file and sends it to the OSSEC manager. The latter analyzes the log file and finds that the log message generated by this event matches the rule with id number 80861. Due to this match, an alert is generated and Kibana will show it as seen below.

Definition of rule 80861:

<rule id="80861" level="2">
  <if_sid>80860</if_sid>
  <action>createuser</action>
  <description>amazon-iam: User created</description>
  <group>amazon,pci_dss_10.2.5,</group>
</rule>

Kibana will show this alert.

Create user account without permissions

If the user that is creating a new user account doesn't have permission to create new users, then the log message generated will match rule id 80862 and Kibana will show the alert as follows:

Definition of rule 80862:

<rule id="80862" level="2">
  <if_sid>80861</if_sid>
  <match>"errorcode":"accessdenied"</match>
  <description>amazon-iam: User creation denied</description>
  <group>amazon,pci_dss_10.2.4,pci_dss_10.2.5,</group>
</rule>

Kibana will show this alert.

User login failed

When a user tries to log in with an invalid password, a new event, and therefore a new log message, will be generated. This log message, once analyzed by the OSSEC manager, will match rule id 80802, generating an alert that will be shown in Kibana as follows:

Definition of rule 80802:

<rule id="80802" level="2">
  <if_sid>80801</if_sid>
  <match>'consolelogin': u'failure'</match>
  <description>amazon-signin: User Login failed</description>
  <group>amazon,authentication_failed,pci_dss_10.2.4,pci_dss_10.2.5,</group>
</rule>

Kibana will show this alert.

Possible break-in attempt

When there are more than 4 failed login attempts in less than 360 seconds, rule id 80803 will apply and an alert will be generated:

Definition of rule 80803:

<rule id="80803" level="10" frequency="4" timeframe="360">
  <if_matched_sid>80802</if_matched_sid>
  <description>Possible breakin attempt (high number of login attempts).</description>
  <group>amazon,authentication_failures,pci_dss_11.4,pci_dss_10.2.4,pci_dss_10.2.5,</group>
</rule>

Kibana will show this alert.

Login success

After a successful login, rule id 80801 will match the log message generated by this event and a new alert will be shown in Kibana:

Definition of rule 80801:

<rule id="80801" level="2">
  <if_sid>80800</if_sid>
  <action>consolelogin</action>
  <description>amazon-signin: User Login Success</description>
  <group>amazon,authentication_success,pci_dss_10.2.5,</group>
</rule>

Kibana will show this alert.

The Kibana dashboards will show:

Pie Chart
Stacked Groups
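Rule 80803 above uses the frequency and timeframe attributes to correlate repeated matches of rule 80802. A toy illustration of that sliding-window logic (our own simplification, not OSSEC's actual correlation engine):

```python
def breakin_detected(login_failure_times, frequency=4, timeframe=360):
    """Fire when `frequency` failed logins (rule 80802 matches) fall
    within `timeframe` seconds, mimicking rule 80803."""
    times = sorted(login_failure_times)
    for i in range(len(times) - frequency + 1):
        # Window of `frequency` consecutive failures, in seconds
        if times[i + frequency - 1] - times[i] <= timeframe:
            return True
    return False

print(breakin_detected([0, 50, 100, 150]))    # True: 4 failures in 150 s
print(breakin_detected([0, 400, 800, 1200]))  # False: spread over 20 min
```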

EC2 use cases

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers. Amazon EC2's simple web service interface allows you to obtain and configure capacity with minimal friction. It provides you with complete control of your computing resources and lets you run on Amazon's proven computing environment. Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change. Amazon EC2 changes the economics of computing by allowing you to pay only for capacity that you actually use. Amazon EC2 provides developers the tools to build failure-resilient applications and isolate themselves from common failure scenarios. Below are some use cases using some of the Wazuh rules built for EC2.

Run a new instance in EC2

When a user runs a new instance in EC2, an AWS event is generated. As per the diagram at the beginning of this section, the log message flows until the OSSEC agent gets the log file and sends it to the OSSEC manager. The latter analyzes the log file and finds that the log message generated by this event matches the rule with id number 80301. Due to this match, an alert is generated and Kibana will show it as seen below:

Definition of rule 80301:

<rule id="80301" level="2">
  <if_sid>80300</if_sid>
  <action>runinstances</action>
  <description>amazon-ec2: Run instance</description>
  <group>amazon,pci_dss_10.6.1,</group>
</rule>

Kibana will show this alert.

When a user without permissions tries to run an instance, the log message will match the rule below and an alert will be generated as seen below:

Definition of rule 80302:

<rule id="80302" level="2">
  <if_sid>80301</if_sid>
  <match>"errorCode":"Client.UnauthorizedOperation"</match>
  <description>Amazon-ec2: Run instance unauthorized</description>
  <group>amazon,pci_dss_10.6.1,</group>
</rule>

Kibana will show this alert.

Start instances in EC2

When an instance in EC2 is started, the log message will match rule 80305 and an alert will be generated as shown below:

Definition of rule 80305:

<rule id="80305" level="2">
  <if_sid>80300</if_sid>
  <action>StartInstances</action>
  <description>Amazon-ec2: Instance started</description>
  <group>amazon,pci_dss_10.6.1,</group>
</rule>

Kibana will show this alert.

If a user without permissions to start instances tries to start one, rule 80306 will apply and an alert will be generated as shown below:

Definition of rule 80306:

<rule id="80306" level="5">
  <if_sid>80305</if_sid>
  <match>"errorCode":"Client.UnauthorizedOperation"</match>
  <description>Amazon-ec2: Start instance unauthorized</description>
  <group>amazon,pci_dss_10.6.1,</group>
</rule>

Kibana will show this alert.

Stop instances in EC2

When an instance in EC2 is stopped, rule 80308 will apply and an alert will be generated as shown below:

Definition of rule 80308:

<rule id="80308" level="2">
  <if_sid>80300</if_sid>
  <action>StopInstances</action>
  <description>Amazon-ec2: Instance stopped</description>
  <group>amazon,pci_dss_10.6.1,</group>
</rule>

Kibana will show this alert.

If a user without permissions to stop instances tries to stop one, rule 80309 will apply and an alert will be generated as shown below:

Definition of rule 80309:

<rule id="80309" level="5">
  <if_sid>80308</if_sid>
  <action>StopInstances</action>
  <match>"errorCode":"Client.UnauthorizedOperation"</match>
  <description>Amazon-ec2: Stop instance unauthorized</description>
  <group>amazon,pci_dss_10.6.1,</group>
</rule>

Kibana will show this alert.

Create Security Groups in EC2

When a new security group is created, rule 80404 will match the log message generated by this event and an alert will be shown as follows:

Definition of rule 80404:

<rule id="80404" level="2">
  <if_sid>80300</if_sid>
  <action>CreateSecurityGroup</action>
  <description>Amazon-ec2: Create Security Group</description>
  <group>amazon,pci_dss_10.6.1,</group>
</rule>

Kibana will show this alert.

Allocate a new Elastic IP address

If a new Elastic IP address is allocated, rule 80411 will apply:

Definition of rule 80411:

<rule id="80411" level="2">
  <if_sid>80300</if_sid>
  <action>AllocateAddress</action>
  <description>Amazon-ec2: Allocate Address</description>
  <group>amazon,</group>
</rule>

Kibana will show this alert.

Associate a new Elastic IP address

If an Elastic IP address is associated, rule 80446 will apply, generating the corresponding alert:

Definition of rule 80446:

<rule id="80446" level="2">
  <if_sid>80300</if_sid>
  <action>AssociateAddress</action>
  <description>Amazon-ec2: Associate Address</description>
  <group>amazon,pci_dss_10.6.1,</group>
</rule>

Kibana will show this alert. The Kibana dashboards will show, among other visualizations, a pie chart and stacked groups.
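The EC2 use cases above all follow the same pattern: a parent rule (80300) matches EC2 CloudTrail events, a child rule narrows on the event name, and a higher-level child escalates when the event carries an UnauthorizedOperation error code. The sketch below illustrates that if_sid-style chaining in plain Python; the field names follow the CloudTrail JSON format, but the matching internals of ossec-analysisd are simplified assumptions made for illustration only.

```python
import json

# Simplified illustration of the if_sid chaining used by the EC2 rules.
# Real matching is done by ossec-analysisd against decoded fields.
CHILD_RULES = {
    "RunInstances": 80301,
    "StartInstances": 80305,
    "StopInstances": 80308,
    "CreateSecurityGroup": 80404,
    "AllocateAddress": 80411,
    "AssociateAddress": 80446,
}
# Unauthorized-operation escalations shown in this section.
UNAUTHORIZED = {80305: 80306, 80308: 80309}

def match(raw_event):
    event = json.loads(raw_event)
    if event.get("eventSource") != "ec2.amazonaws.com":
        return None  # parent rule (80300) did not match
    rule_id = CHILD_RULES.get(event.get("eventName"))
    if rule_id and event.get("errorCode") == "Client.UnauthorizedOperation":
        rule_id = UNAUTHORIZED.get(rule_id, rule_id)
    return rule_id

evt = ('{"eventSource": "ec2.amazonaws.com", "eventName": "StopInstances",'
       ' "errorCode": "Client.UnauthorizedOperation"}')
print(match(evt))  # -> 80309
```

An event with no errorCode falls through to the plain child rule, which is how a successful StopInstances call ends up as rule 80308 instead.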

VPC Use cases

Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the Amazon Web Services (AWS) Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways.

Create VPC

If a VPC is created, rule 81000 will apply and an alert will be generated as shown below:

Definition of rule 81000:

<rule id="81000" level="2">
  <if_sid>80300</if_sid>
  <action>CreateVpc</action>
  <description>Amazon-vpc: Vpc Created</description>
  <group>amazon,pci_dss_10.6.1,</group>
</rule>

Kibana will show this alert.

If the user doesn't have permissions, rule 81001 will apply:

Definition of rule 81001:

<rule id="81001" level="5">
  <if_sid>81000</if_sid>
  <match>"errorCode":"Client.UnauthorizedOperation"</match>
  <description>Amazon-vpc: Vpc Created Unauthorized Operation</description>
  <group>amazon,pci_dss_10.6.1,</group>
</rule>

Kibana will show this alert.

Contribute to the ruleset

If you have created new rules, decoders or rootchecks and would like to contribute to our repository, please fork our Github repository and submit a pull request. If you are not familiar with Github, you can also share them through our users mailing list, to which you can subscribe by sending an email to wazuh+subscribe@googlegroups.com. As well, do not hesitate to request new rules or rootchecks that you would like to see running in OSSEC, and our team will do its best to make it happen.

Note: In our repository you will find that most of the rules contain one or more groups called pci_dss_X. This is the PCI DSS control related to the rule. We have produced a document that can help you tag each rule with its corresponding PCI requirement.

What's next

Once you have your rules for Amazon AWS up to date, we encourage you to move forward and try out the ELK integration or the RESTful API:

ELK Stack integration guide
OSSEC Wazuh RESTful API installation guide


CHAPTER 10

OSSEC for PCI DSS

Introduction

The Payment Card Industry Data Security Standard (PCI DSS) is a proprietary information security standard for organizations that handle branded credit cards from the major card schemes, including Visa, MasterCard, American Express, Discover, and JCB. The standard was created to increase controls around cardholder data to reduce credit card fraud.

OSSEC helps to implement PCI DSS by performing log analysis, file integrity checking, policy monitoring, intrusion detection, real-time alerting and active response. This guide (pdf, excel) explains how these capabilities help with each of the standard requirements. The following sections elaborate on some specific use cases, explaining how to use OSSEC capabilities to meet the standard requirements.

Log analysis

Here we will use OSSEC log collection and analysis capabilities to meet the following PCI DSS controls:

10.2.4 Invalid logical access attempts
10.2.5 Use of and changes to identification and authentication mechanisms including but not limited to creation of new accounts and escalation of privileges and all changes, additions, or deletions to accounts with root or administrative privileges

These controls require us to log invalid logical access attempts, multiple invalid login attempts (possible brute-force attacks), privilege escalations, changes in accounts, etc. To achieve this, we have added PCI DSS tags to OSSEC log analysis rules, mapping them to the corresponding requirement. This way, it is easy to analyze and visualize our PCI DSS related alerts.

The syntax used for rule tagging is pci_dss_ followed by the number of the requirement. In this case those would be pci_dss_10.2.4 and pci_dss_10.2.5. See below examples of OSSEC rules tagged for PCI requirements 10.2.4 and 10.2.5:

<!-- apache: access attempt -->
<rule id="30105" level="5">
  <if_sid>30101</if_sid>
  <match>denied by server configuration</match>
  <description>Attempt to access forbidden file or directory.</description>
  <group>access_denied,pci_dss_6.5.8,pci_dss_10.2.4,</group>
</rule>

<!-- syslog-sudo: elevation of privileges -->
<rule id="5401" level="5">
  <if_sid>5400</if_sid>
  <match>incorrect password attempt</match>
  <description>Failed attempt to run sudo</description>
  <group>pci_dss_10.2.4,pci_dss_10.2.5,</group>
</rule>

<rule id="5402" level="3">
  <if_sid>5400</if_sid>
  <regex> ; USER=root ; COMMAND=| ; USER=root ; TSID=\S+ ; COMMAND=</regex>
  <description>Successful sudo to ROOT executed</description>
  <group>pci_dss_10.2.5,pci_dss_10.2.2,</group>
</rule>

<!-- ssh: identification and authentication mechanisms -->
<rule id="5712" level="10" frequency="6" timeframe="120" ignore="60">
  <if_matched_sid>5710</if_matched_sid>
  <description>SSHD brute force trying to get access to </description>
  <description>the system.</description>
  <same_source_ip />
  <group>authentication_failures,pci_dss_11.4,pci_dss_10.2.4,pci_dss_10.2.5,</group>
</rule>

<rule id="5720" level="10" frequency="6">
  <if_matched_sid>5716</if_matched_sid>
  <same_source_ip />
  <description>Multiple SSHD authentication failures.</description>
  <group>authentication_failures,pci_dss_10.2.4,pci_dss_10.2.5,pci_dss_11.4,</group>
</rule>

Use cases

In this scenario, we try to open the file cardholder_data.txt. Since our current user doesn't have read access to the file, we run sudo to elevate privileges. Using the sudo log analysis decoder and rules, OSSEC will generate an alert for this particular action. Since we have JSON output enabled, we can see the alert in both alerts.log and alerts.json. Using the rule tags we can also see which PCI DSS requirements are specifically related to this alert.

Kibana displays information in an organized way, allowing filtering by different types of alert fields, including compliance controls. We have also developed some specific dashboards to display the PCI DSS related alerts.
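Because alerts.json holds one JSON document per alert, the PCI DSS tags are easy to query outside of Kibana as well. The following is a minimal sketch of such a filter; the rule.groups field layout used here is an assumption for illustration, so inspect your own alerts.json to confirm the exact schema of your version.

```python
import json

# Hedged sketch: yield alerts.json entries tagged with a given PCI DSS
# requirement. The "rule"/"groups" field names are assumed, not taken
# from a schema specification.
def pci_alerts(lines, requirement):
    tag = "pci_dss_" + requirement
    for line in lines:
        alert = json.loads(line)
        if tag in alert.get("rule", {}).get("groups", []):
            yield alert

sample = [
    '{"rule": {"sidid": 5402, "groups": ["pci_dss_10.2.5", "pci_dss_10.2.2"]}}',
    '{"rule": {"sidid": 1002, "groups": ["syslog"]}}',
]
matches = list(pci_alerts(sample, "10.2.5"))
print(len(matches))  # -> 1
```

In practice the lines would come from iterating over an open /var/ossec/logs/alerts/alerts.json file instead of an in-memory list.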

Rootcheck - Policy monitoring

The OSSEC rootcheck module can be used to enforce and monitor your security policy. This is the process of verifying that all systems conform to a set of pre-defined rules covering configuration settings and approved application usage. There are several PCI DSS requirements to verify that systems are properly hardened. An example would be:

2.2 Develop configuration standards for all system components. Assure that these standards address all known security vulnerabilities and are consistent with industry-accepted system hardening standards. Sources of industry-accepted system hardening standards may include, but are not limited to: Center for Internet Security (CIS), International Organization for Standardization (ISO), SysAdmin Audit Network Security (SANS) Institute, National Institute of Standards and Technology (NIST).

OSSEC includes out-of-the-box CIS baselines for Debian and RedHat, and other baselines can be created for other systems or applications, just by adding the corresponding rootcheck file:

<rootcheck>
  <system_audit>/var/ossec/etc/shared/cis_debian_linux_rcl.txt</system_audit>
  <system_audit>/var/ossec/etc/shared/cis_rhel_linux_rcl.txt</system_audit>
  <system_audit>/var/ossec/etc/shared/cis_rhel5_linux_rcl.txt</system_audit>
</rootcheck>

Other PCI DSS requirements ask us to check that applications (especially network services) are configured in a secure way. One example is the following control:

2.2.4 Configure system security parameters to prevent misuse.

The following are good examples of rootcheck rules developed to check the configuration of SSH services:

[SSH Configuration - Protocol version 1 enabled {PCI_DSS: 2.2.4}] [any] f:/etc/ssh/sshd_config -> !r:^# && r:Protocol\.+1;
[SSH Configuration - Root login allowed {PCI_DSS: 2.2.4}] [any] f:/etc/ssh/sshd_config -> !r:^# && r:PermitRootLogin\.+yes;
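The logic of the two checks above — flag a line when it is not a comment (!r:^#) and it matches the pattern (r:...) — can be rendered roughly in Python as follows. The rootcheck engine uses an OSSEC-specific regex dialect, so the patterns below are approximations written for illustration, not the engine's own syntax.

```python
import re

# Rough Python rendering of the two sshd_config rootcheck rules.
CHECKS = {
    "SSH Configuration - Protocol version 1 enabled": r"Protocol\s+1\b",
    "SSH Configuration - Root login allowed": r"PermitRootLogin\s+yes",
}

def audit_sshd(config_text):
    findings = []
    for line in config_text.splitlines():
        if line.lstrip().startswith("#"):  # !r:^#  -> skip commented lines
            continue
        for name, pattern in CHECKS.items():
            if re.search(pattern, line):
                findings.append(name)
    return findings

sample = "# PermitRootLogin yes\nPermitRootLogin yes\nProtocol 2\n"
print(audit_sshd(sample))  # -> ['SSH Configuration - Root login allowed']
```

Note how the commented-out PermitRootLogin line is ignored, exactly as the !r:^# negation intends.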

In our OSSEC Wazuh fork, the rootcheck rules use the {PCI_DSS: X.Y.Z} syntax in the rootcheck name, meaning that all rootchecks already carry the PCI DSS requirement tag.

Use cases

In order to check the security parameters of SSH (and meet requirement 2.2.4), we have developed the system_audit_ssh rootcheck. In our example, when OSSEC runs the rootcheck scan, it is able to detect some errors in the SSH configuration. Kibana shows the full information about the alert.
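Since the requirement tag is embedded in the rootcheck name itself, it can be recovered from an alert title with a one-line pattern. This is a sketch; the exact alert text format may vary slightly between versions.

```python
import re

# Pull the PCI DSS requirement number out of a {PCI_DSS: X.Y.Z} tag
# embedded in a rootcheck name.
def rootcheck_pci(title):
    m = re.search(r"\{PCI_DSS:\s*([\d.]+)\}", title)
    return m.group(1) if m else None

title = "SSH Configuration - Root login allowed {PCI_DSS: 2.2.4}"
print(rootcheck_pci(title))  # -> 2.2.4
```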

Rootcheck - Rootkits detection

Rootkit and trojan detection is performed using two files: rootkit_files.txt and rootkit_trojans.txt. Some additional tests are also performed to detect kernel-level rootkits. You can use these capabilities by adding the files to ossec.conf:

<rootcheck>
  <rootkit_files>/var/ossec/etc/shared/rootkit_files.txt</rootkit_files>
  <rootkit_trojans>/var/ossec/etc/shared/rootkit_trojans.txt</rootkit_trojans>
</rootcheck>

These are the options available for the rootcheck component:

rootkit_files: Contains the Unix-based application level rootkit signatures.
rootkit_trojans: Contains the Unix-based application level trojan signatures.
check_files: Enable or disable the rootkit checks. Default yes.
check_trojans: Enable or disable the trojan checks. Default yes.
check_dev: Check for suspicious files in the /dev filesystem. Default yes.
check_sys: Scan the whole system for anomaly detection. Default yes.
check_pids: Check processes. Default yes.
check_ports: Check all ports. Default yes.
check_if: Check interfaces. Default yes.

Rootcheck helps to meet PCI DSS requirement 11.4, related to intrusions, trojans and malware in general:

11.4 Use intrusion-detection and/or intrusion-prevention techniques to detect and/or prevent intrusions into the network. Keep all intrusion-detection and prevention engines, baselines, and signatures up to date. Intrusion detection and/or intrusion prevention techniques (such as IDS/IPS) compare the traffic coming into the network with known signatures and/or behaviors of thousands of compromise types (hacker tools, Trojans, and other malware), and send alerts and/or stop the attempt as it happens.

Use cases

OSSEC performs several tests to detect rootkits; one of them checks for hidden files in /dev. The /dev directory should only contain device-specific files such as the primary IDE hard disk (/dev/hda), the kernel random number generators (/dev/random and /dev/urandom), etc. Any additional files, outside of the expected device-specific files, should be inspected because many rootkits use /dev as a storage partition to hide files.

In the following example we have created the file .hid, which is detected by OSSEC, generating the corresponding alert.

[root@manager /]# ls -a /dev | grep '^\.'
.  ..  .hid
[root@manager /]# tail -n 25 /var/ossec/logs/alerts/alerts.log
Rule: 502 (level 3) -> 'Ossec server started.'
ossec: Ossec started.

** Alert : mail - ossec,rootcheck
2016 Jan 29 16:52:42 manager->rootcheck
Rule: 510 (level 7) -> 'Host-based anomaly detection event (rootcheck).'
File '/dev/.hid' present on /dev. Possible hidden file.

File Integrity Monitoring

File Integrity Monitoring (syscheck) is performed by comparing the cryptographic checksum of a known good file against the checksum of the same file after it has been modified. The OSSEC agent scans the system at an interval you specify, and sends the checksums of the monitored files and registry keys (for Windows systems) to the OSSEC server. The server stores the checksums and looks for modifications by comparing the newly received checksums against the historical checksum values of that file or registry key. An alert is sent if anything changes.

Syscheck can be used to meet PCI DSS requirement 11.5:

11.5 Deploy a change-detection mechanism (for example, file-integrity monitoring tools) to alert personnel to unauthorized modification (including changes, additions, and deletions) of critical system files, configuration files, or content files; and configure the software to perform critical file comparisons at least weekly.

Use cases

In this example, we have configured OSSEC to detect changes in the file /home/credit_cards:

<syscheck>
  <directories check_all="yes">/home/credit_cards</directories>
</syscheck>

So, when we modify the file, OSSEC generates an alert. As you can see, syscheck alerts are tagged with requirement 11.5.
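The compare step at the heart of syscheck can be sketched in a few lines: hash the monitored file, compare against the last stored checksum, and raise an alert on change. This is a minimal illustration only; the real agent tracks several hashes along with size, ownership, and permissions, and the alert wording below is hypothetical.

```python
import hashlib

# Minimal sketch of syscheck's checksum comparison.
baseline = {}  # path -> last known checksum

def check_file(path, content):
    digest = hashlib.sha1(content).hexdigest()
    previous = baseline.get(path)
    baseline[path] = digest
    if previous is not None and previous != digest:
        return f"Integrity checksum changed for: '{path}'"
    return None  # first scan or unchanged

check_file("/home/credit_cards", b"4111-1111-1111-1111\n")  # first scan
print(check_file("/home/credit_cards", b"tampered\n"))       # modified
```

The first scan only records the baseline; only a subsequent scan that sees a different checksum produces the change message.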


Active response

Although active response is not explicitly discussed in PCI DSS, it is worth mentioning that automated remediation of security violations and threats is a powerful tool that reduces risk. Active response allows a scripted action to be performed whenever a rule matches in your OSSEC ruleset. Remedial actions could include a firewall block/drop, traffic shaping or throttling, account lockout, etc.

ELK

OSSEC Wazuh integration with the ELK Stack comes with out-of-the-box dashboards for PCI DSS compliance and CIS benchmarking. You can do forensic and historical analysis of the alerts and store your data for several years, in a reliable and scalable platform. The following requirements can be met with a combination of OSSEC + ELK Stack:

10.5 Secure audit trails so they cannot be altered
10.6.1 Review the following at least daily: all security events, logs of all critical system components, etc.
10.7 Retain audit trail history for at least one year, with a minimum of three months immediately available for analysis

What's next

Once you know how OSSEC can help with PCI DSS, we encourage you to move forward and try out the ELK integration or the OSSEC Wazuh ruleset:

ELK Stack integration guide
OSSEC Wazuh Ruleset


McAfee Endpoint Security Threat Prevention Installation Guide - Linux McAfee Endpoint Security 10.5.1 - Threat Prevention Installation Guide - Linux COPYRIGHT Copyright 2018 McAfee, LLC TRADEMARK ATTRIBUTIONS McAfee and the McAfee logo, McAfee Active Protection, epolicy

More information

PKI Quick Installation Guide. for PacketFence version 7.4.0

PKI Quick Installation Guide. for PacketFence version 7.4.0 PKI Quick Installation Guide for PacketFence version 7.4.0 PKI Quick Installation Guide by Inverse Inc. Version 7.4.0 - Jan 2018 Copyright 2015 Inverse inc. Permission is granted to copy, distribute and/or

More information

The build2 Toolchain Installation and Upgrade

The build2 Toolchain Installation and Upgrade The build2 Toolchain Installation and Upgrade Copyright 2014-2019 Code Synthesis Ltd Permission is granted to copy, distribute and/or modify this document under the terms of the MIT License This revision

More information

Harbor Registry. VMware VMware Inc. All rights reserved.

Harbor Registry. VMware VMware Inc. All rights reserved. Harbor Registry VMware 2017 VMware Inc. All rights reserved. VMware Harbor Registry Cloud Foundry Agenda 1 Container Image Basics 2 Project Harbor Introduction 3 Consistency of Images 4 Security 5 Image

More information

EDB Postgres Enterprise Manager Installation Guide Version 7

EDB Postgres Enterprise Manager Installation Guide Version 7 EDB Postgres Enterprise Manager Installation Guide Version 7 June 1, 2017 EDB Postgres Enterprise Manager Installation Guide by EnterpriseDB Corporation Copyright 2013-2017 EnterpriseDB Corporation. All

More information

Installation and setup guide of 1.1 demonstrator

Installation and setup guide of 1.1 demonstrator Installation and setup guide of 1.1 demonstrator version 2.0, last modified: 2015-09-23 This document explains how to set up the INAETICS demonstrator. For this, we use a Vagrant-based setup that boots

More information

Deploying Rubrik Datos IO to Protect MongoDB Database on GCP

Deploying Rubrik Datos IO to Protect MongoDB Database on GCP DEPLOYMENT GUIDE Deploying Rubrik Datos IO to Protect MongoDB Database on GCP TABLE OF CONTENTS INTRODUCTION... 1 OBJECTIVES... 1 COSTS... 2 BEFORE YOU BEGIN... 2 PROVISIONING YOUR INFRASTRUCTURE FOR THE

More information

Installing Connector on Linux

Installing Connector on Linux CHAPTER 3 Revised: July 15, 2010 Overview This chapter provides a step-by-step guide to installing the Linux Connector on x86 and x86-64 servers running either Red Hat Enterprise Linux version 5 or Cent

More information

Quick Setup Guide. NetBrain Integrated Edition 7.0. Distributed Deployment

Quick Setup Guide. NetBrain Integrated Edition 7.0. Distributed Deployment NetBrain Integrated Edition 7.0 Quick Setup Guide Distributed Deployment Version 7.0b1 Last Updated 2017-11-08 Copyright 2004-2017 NetBrain Technologies, Inc. All rights reserved. Contents 1. System Overview...

More information

NGFW Security Management Center

NGFW Security Management Center NGFW Security Management Center Release Notes 6.4.5 Revision A Contents About this release on page 2 System requirements on page 2 Build version on page 3 Compatibility on page 4 New features on page 5

More information

RDO container registry Documentation

RDO container registry Documentation RDO container registry Documentation Release 0.0.1.dev28 Red Hat Jun 08, 2018 Contents 1 Table of Contents 3 1.1 About the registry............................................ 3 1.2 Installing the registry...........................................

More information

OSSEC Documentation. Release Jeremy Rossi

OSSEC Documentation. Release Jeremy Rossi OSSEC Documentation Release 2.7.1 Jeremy Rossi Jul 21, 2017 Contents 1 Manual & FAQ 3 1.1 Manual.................................................. 3 1.2 Frequently asked questions........................................

More information

MariaDB ColumnStore C++ API Building Documentation

MariaDB ColumnStore C++ API Building Documentation MariaDB ColumnStore C++ API Building Documentation Release 1.1.3-acf32cc MariaDB Corporation Feb 22, 2018 CONTENTS 1 Licensing 1 1.1 Documentation Content......................................... 1 1.2

More information

Installing Design Room ONE

Installing Design Room ONE Installing Design Room ONE Design Room ONE consists of two components: 1. The Design Room ONE web server This is a Node JS server which uses a Mongo database. 2. The Design Room ONE Integration plugin

More information

How to force automatic removal of deleted files in nextcloud

How to force automatic removal of deleted files in nextcloud How to force automatic removal of deleted files in nextcloud Nextcloud will get rid of files that have been deleted for 30 days. However in reality these files will remain on the server until such a time

More information

Hiptest on-premises - Installation guide

Hiptest on-premises - Installation guide on-premises - Installation guide Owner: Version: 1.5.1 Released: 2018-06-19 Author: Contributors: Module: enterprise ID: Link: Summary This guide details the installation and administration of Enterprise

More information

Spacewalk. Installation Guide for CentOS 6.4

Spacewalk. Installation Guide for CentOS 6.4 Spacewalk Installation Guide for CentOS 6.4 Contents Spacewalk Overview... 3 Spacewalk Project Architecture... 3 System Prerequisites... 3 Installation... 4 Spacewalk Components... 4 Prerequisites Install

More information

Running Blockchain in Docker Containers Prerequisites Sign up for a LinuxONE Community Cloud trial account Deploy a virtual server instance

Running Blockchain in Docker Containers Prerequisites Sign up for a LinuxONE Community Cloud trial account Deploy a virtual server instance Running Blockchain in Docker Containers The following instructions can be used to install the current hyperledger fabric, and run Docker and blockchain code in IBM LinuxONE Community Cloud instances. This

More information

ovirt and Docker Integration

ovirt and Docker Integration ovirt and Docker Integration October 2014 Federico Simoncelli Principal Software Engineer Red Hat 1 Agenda Deploying an Application (Old-Fashion and Docker) Ecosystem: Kubernetes and Project Atomic Current

More information

Spacewalk. Installation Guide RHEL 5.9

Spacewalk. Installation Guide RHEL 5.9 Spacewalk Installation Guide RHEL 5.9 Contents Spacewalk Overview... 3 Spacewalk Project Architecture... 3 System Prerequisites... 3 Installation... 4 Spacewalk Components... 4 Prerequisites Install for

More information

Some Ubuntu Practice...

Some Ubuntu Practice... Some Ubuntu Practice... SANOG 10 August 29 New Delhi, India 1. Get used to using sudo 2. Create an inst account 3. Learn how to install software 4. Install gcc and make 5. Learn how to control services

More information

Linux Kung Fu. Stephen James UBNetDef, Spring 2017

Linux Kung Fu. Stephen James UBNetDef, Spring 2017 Linux Kung Fu Stephen James UBNetDef, Spring 2017 Introduction What is Linux? What is the difference between a client and a server? What is Linux? Linux generally refers to a group of Unix-like free and

More information

Red Hat JBoss Developer Studio 11.3

Red Hat JBoss Developer Studio 11.3 Red Hat JBoss Developer Studio 11.3 Installation Guide Installing Red Hat JBoss Developer Studio Last Updated: 2018-05-01 Red Hat JBoss Developer Studio 11.3 Installation Guide Installing Red Hat JBoss

More information

NGFW Security Management Center

NGFW Security Management Center NGFW Security Management Center Release Notes 6.3.2 Revision A Contents About this release on page 2 System requirements on page 2 Build version on page 3 Compatibility on page 5 New features on page 5

More information

NGFW Security Management Center

NGFW Security Management Center NGFW Security Management Center Release Notes 6.4.8 Revision A Contents About this release on page 2 System requirements on page 2 Build version on page 3 Compatibility on page 5 New features on page 5

More information

Installation 1. Installing DPS. Date of Publish:

Installation 1. Installing DPS. Date of Publish: 1 Installing DPS Date of Publish: 2018-05-18 http://docs.hortonworks.com Contents DPS Platform support requirements...3 Installation overview...4 Installation prerequisites...5 Setting up the local repository

More information

CounterACT Macintosh/Linux Property Scanner Plugin

CounterACT Macintosh/Linux Property Scanner Plugin CounterACT Macintosh/Linux Property Scanner Plugin Version 7.0.1 and Above Table of Contents About the Macintosh/Linux Property Scanner Plugin... 4 Requirements... 4 Supported Operating Systems... 4 Accessing

More information

Dell EMC ME4 Series vsphere Client Plug-in

Dell EMC ME4 Series vsphere Client Plug-in Dell EMC ME4 Series vsphere Client Plug-in User's Guide Regulatory Model: E09J, E10J, E11J Regulatory Type: E09J001, E10J001, E11J001 Notes, cautions, and warnings NOTE: A NOTE indicates important information

More information

Intellicus Cluster and Load Balancing- Linux. Version: 18.1

Intellicus Cluster and Load Balancing- Linux. Version: 18.1 Intellicus Cluster and Load Balancing- Linux Version: 18.1 1 Copyright 2018 Intellicus Technologies This document and its content is copyrighted material of Intellicus Technologies. The content may not

More information

This document provides instructions for upgrading a DC/OS cluster.

This document provides instructions for upgrading a DC/OS cluster. Upgrading ENTERPRISE This document provides instructions for upgrading a DC/OS cluster. If this upgrade is performed on a supported OS with all prerequisites fulfilled, this upgrade should preserve the

More information

Travis Cardwell Technical Meeting

Travis Cardwell Technical Meeting .. Introduction to Docker Travis Cardwell Tokyo Linux Users Group 2014-01-18 Technical Meeting Presentation Motivation OS-level virtualization is becoming accessible Docker makes it very easy to experiment

More information

Red Hat Enterprise Linux 7 Getting Started with Cockpit

Red Hat Enterprise Linux 7 Getting Started with Cockpit Red Hat Enterprise Linux 7 Getting Started with Cockpit Getting Started with Cockpit Red Hat Enterprise Linux Documentation Team Red Hat Enterprise Linux 7 Getting Started with Cockpit Getting Started

More information

NGFW Security Management Center

NGFW Security Management Center NGFW Security Management Center Release Notes 6.4.3 Revision A Contents About this release on page 2 System requirements on page 2 Build version on page 3 Compatibility on page 4 New features on page 5

More information

Containers: Exploits, Surprises, And Security

Containers: Exploits, Surprises, And Security Containers: Exploits, Surprises, And Security with Elissa Shevinsky COO at SoHo Token Labs Editor of Lean Out #RVASec @ElissaBeth on twitter @Elissa_is_offmessage on Instagram this was Silicon Valley in

More information

Image Management Service. User Guide. Issue 08. Date

Image Management Service. User Guide. Issue 08. Date Issue 08 Date 2017-02-08 Contents Contents 1 Overview... 5 1.1 Concept... 5 1.1.1 What Is Image Management Service?... 5 1.1.2 OSs for Public Images Supported by IMS... 6 1.1.3 Image Format and OS Types

More information

Red Hat Development Suite 2.1

Red Hat Development Suite 2.1 Red Hat Development Suite 2.1 Installation Guide Installing Red Hat Development Suite Last Updated: 2017-12-06 Red Hat Development Suite 2.1 Installation Guide Installing Red Hat Development Suite Petra

More information

Git Command Line Tool Is Not Installed

Git Command Line Tool Is Not Installed Git Command Line Tool Is Not Installed Make Sure It Is Accessible On Y Error: "git" command line tool is not installed: make sure it is accessible on y I have installed git tool. even in git bash its showing

More information

Red Hat Ceph Storage 3

Red Hat Ceph Storage 3 Red Hat Ceph Storage 3 Monitoring Ceph for Red Hat Enterprise Linux with Nagios Monitoring Ceph for Red Hat Enterprise Linux with Nagios Core. Last Updated: 2018-06-21 Red Hat Ceph Storage 3 Monitoring

More information

SQL Server on Linux and Containers

SQL Server on Linux and Containers http://aka.ms/bobwardms https://github.com/microsoft/sqllinuxlabs SQL Server on Linux and Containers A Brave New World Speaker Name Principal Architect Microsoft bobward@microsoft.com @bobwardms linkedin.com/in/bobwardms

More information

Hortonworks Cybersecurity Platform

Hortonworks Cybersecurity Platform 1 Hortonworks Cybersecurity Platform Date of Publish: 2018-07-30 http://docs.hortonworks.com Contents Preparing to Upgrade...3 Back up Your Configuration...3 Stop All Metron Services...3 Upgrade Metron...4

More information

TrinityCore Documentation

TrinityCore Documentation TrinityCore Documentation Release TrinityCore Developers February 21, 2016 Contents 1 Compiling TrinityCore 3 1.1 Requirements............................................... 3 1.2 Build Environment............................................

More information

Contents. Crave Masternode Setup Guides. Single / Multiple Local Masternode(s) Single Masternode using a VPS. Multiple Masternodes using a VPS

Contents. Crave Masternode Setup Guides. Single / Multiple Local Masternode(s) Single Masternode using a VPS. Multiple Masternodes using a VPS Contents Crave Masternode Setup Guides Single / Multiple Local Masternode(s) 1 Requirements...1 2 Preparing Masternodes...1 3 Preparing Controller Wallet...2 4 Masternode Configuration...3 5 Starting Masternodes...3

More information

Introduction. What is Linux? What is the difference between a client and a server?

Introduction. What is Linux? What is the difference between a client and a server? Linux Kung Fu Introduction What is Linux? What is the difference between a client and a server? What is Linux? Linux generally refers to a group of Unix-like free and open-source operating system distributions

More information

Installing Design Room ONE

Installing Design Room ONE Installing Design Room ONE Design Room ONE consists of two components: 1. The Design Room ONE web server This is a Node JS server which uses a Mongo database. 2. The Design Room ONE Integration plugin

More information

The instructions in this document are applicable to personal computers running the following Operating Systems:

The instructions in this document are applicable to personal computers running the following Operating Systems: Preliminary Notes The instructions in this document are applicable to personal computers running the following Operating Systems: Microsoft Windows from version 7 up to 10 Apple Mac OS X from versions

More information

GIT. A free and open source distributed version control system. User Guide. January, Department of Computer Science and Engineering

GIT. A free and open source distributed version control system. User Guide. January, Department of Computer Science and Engineering GIT A free and open source distributed version control system User Guide January, 2018 Department of Computer Science and Engineering Indian Institute of Technology, Kharagpur Table of Contents What is

More information

Installation Manual InfraManage.NET Installation Instructions for Ubuntu

Installation Manual InfraManage.NET Installation Instructions for Ubuntu Installation Manual InfraManage.NET Installation Instructions for Ubuntu Copyright 1996 2017 Timothy Ste. Marie Version 7.5.72SQL InfraManage.NET Installing InfraManage.NET Page 1 of 78 Table of Contents

More information

Dockerfile Best Practices

Dockerfile Best Practices Dockerfile Best Practices OpenRheinRuhr 2015 November 07th, 2015 1 Dockerfile Best Practices Outline About Dockerfile Best Practices Building Images This work is licensed under the Creative Commons Attribution-ShareAlike

More information

Bitnami MariaDB for Huawei Enterprise Cloud

Bitnami MariaDB for Huawei Enterprise Cloud Bitnami MariaDB for Huawei Enterprise Cloud First steps with the Bitnami MariaDB Stack Welcome to your new Bitnami application running on Huawei Enterprise Cloud! Here are a few questions (and answers!)

More information