Adding a Comprehensive Wazuh SIEM and Network Intrusion Detection System (NIDS) to the Proxmox Lab

In this module, we will walk through the process of setting up a comprehensive Wazuh SIEM, including a NIDS and some HIDS agents, in our Proxmox home lab.


By 0xBEN

This page is part of the larger series of converting an old laptop into a bare metal home lab server. Click the link to be taken back to the original post.

Proxmox VE 7: Converting a Laptop into a Bare Metal Server
In this post, we will take a look at a detailed process of setting up a Proxmox home lab on a bare metal server.




Reviewing Some Networking Concepts

Router and Switch with Default Configurations

In this scenario, there is a router with a very simple, default configuration: it uses the private IP address range 172.16.1.0/24, the DHCP server is enabled, and no VLANs are configured.





Router with VLANs Configured, Default Switch Configurations

In this scenario, we have configured some VLANs in the router. That way, the router and switch can work together to route a packet whenever the router receives an Ethernet frame tagged with a VLAN ID. However, none of the switch ports have been configured with VLAN ID tags.





Router with VLANs and Configured Switch Ports

In this scenario, the switch tags Ethernet frames with a VLAN ID on any configured switch ports. When a frame carries a VLAN ID tag, the switch forwards it only out of ports configured with the matching VLAN ID and destination MAC address.





Router with VLANs, Tagged Switched Ports, and Port Mirroring

Port mirroring is where we configure a switch to send a copy of every Ethernet frame to a designated port on the switch. This is a common configuration with Intrusion Detection Systems when you want to monitor all traffic on a network.





Understanding the Proxmox Networking

  • vmbr0 is the switch where all of your VMs and containers connected to your home network will be.
    • If you have a home router that supports 802.1Q, you could apply some VLAN segmentation to vmbr0 to divide your VMs and containers into further subnets if desired.
  • vmbr1 is the switch where all of the security and vulnerable infrastructure will be attached for further research. In our lab environment, we have already added some VLANs to vmbr1, because pfSense supports 802.1Q.
  • The NIDS is connected to both vmbr0 and vmbr1. Both switches are configured such that the ports where the IDS is plugged in are mirror ports: every other port sends a copy of every Ethernet frame to the IDS.
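For reference, the bridges in this lab are Open vSwitch bridges (the SPAN ports we create later rely on ovs-vsctl). A sketch of what such a bridge might look like in /etc/network/interfaces on the Proxmox host; the bridge stanza keys follow Proxmox's OVS examples, and the commented uplink port is hypothetical, so adapt it to your own host:

```
auto vmbr1
iface vmbr1 inet manual
    ovs_type OVSBridge
    # Hypothetical: uncomment to attach a physical uplink port to this bridge
    # ovs_ports eno2
```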




Order of Operations

  1. Configure the Wazuh Indexer container
    • This is the database where event data will be stored
    • Any alerts that are picked up by Wazuh will be shipped here
  2. Configure the Wazuh Dashboards container
    • Wazuh Dashboards serves three purposes:
      • A web server that displays dashboards about alerts data
      • A Wazuh API client that can control certain features in Wazuh
      • A Wazuh Indexer API client that queries the database
  3. Configure the Wazuh Manager container
    • This is the SIEM that will collect the logs from any agents
    • Agents are running on endpoints on our network
    • The agents are HIDS which will forward event data to the SIEM
    • You can also forward syslog to Wazuh for processing if you cannot install the agent on a host
  4. Configure the OwlH Manager container
    • OwlH Manager serves three purposes:
      • Keeping Suricata rules up to date
      • Pushing configurations to any registered NIDS node(s)
      • Keeping services running on the NIDS node(s)
  5. Configure the OwlH NIDS node container
    • This is the network intrusion detection system
    • It runs the following applications to generate network event data
      • Suricata compares packets against a set of rules for anomalies
      • Zeek adds lots of metadata about packets
      • Wazuh agent sends alert data to the SIEM
  6. Install Wazuh Agents on Servers and Desktops
    • These are the endpoints to be monitored
    • They can be configured to ingest any log and send it to the Wazuh Manager
    • The Wazuh Manager will receive the logs and attempt to decode and parse them for alertable events




Desired End State

  • OwlH Manager
    • Downloaded and configured ruleset to be pushed to node(s)
    • Interfaces defined for monitoring and auto-start
    • Joined the OwlH NIDS node to the OwlH Manager
    • Pushed configurations to the OwlH NIDS node
  • OwlH NIDS
    • Installed Suricata and Zeek
    • Configured network interfaces
    • Capturing mirrored packets and analyzing them
    • Wazuh agent is installed and shipping alerts to Wazuh Manager
  • Wazuh Manager
    • Wazuh software installed and running
    • Accepting inputs from agents
    • Analyzing and sending to Wazuh Indexer
  • Wazuh Indexer
    • OwlH NIDS templates installed
    • Receiving inputs from Wazuh Manager
  • Wazuh Dashboards
    • OwlH dashboards installed
    • Successfully connecting to Wazuh Indexer and Dashboards APIs
  • Wazuh Agents
    • Wazuh agent installed on any server or workstation to be monitored
    • As long as the endpoint can establish a TCP/IP connection with the Wazuh Manager, it can ship the logs




DHCP IP Reservations Strongly Recommended

I would strongly advise giving each of your containers a DHCP reservation in your home router, so that your NIDS/SIEM is not disrupted by IP address changes.





Stage Your Containers

Log into Proxmox and create five Linux Containers for your infrastructure.

  1. Wazuh Indexer
    • Hostname: wazuh-indexer-1
    • Debian 11 LXC
    • Memory: 4 GiB (1 GiB swap) – 8 GiB recommended
    • 2 CPU cores – 4 CPU cores recommended
    • 25 GB disk (good enough for a lab)
    • Set your network (and VLAN) as desired
    • Set your DNS domain and servers as desired
  2. Wazuh Dashboards
    • Hostname: wazuh-dashboards
    • Debian 11 LXC
    • Memory: 1 GiB (512 MiB swap)
    • 2 CPU cores
    • 10 GB storage
    • Set your network (and VLAN) as desired
    • Set your DNS domain and servers as desired
  3. Wazuh Manager
    • Hostname: wazuh-manager
    • Debian 11 LXC
    • Memory: 1 GiB (512 MiB swap)
    • 2 CPU cores
    • 10 GB storage
    • Set your network (and VLAN) as desired
    • Set your DNS domain and servers as desired
  4. OwlH Manager
    • Hostname: owlh-manager
    • Debian 11 LXC
    • Memory: 1 GiB (512 MiB swap)
    • 2 CPU cores
    • 10 GB storage
    • Set your network (and VLAN) as desired
    • Set your DNS domain and servers as desired
  5. OwlH Node
    • Hostname: owlh-node
    • Debian 11 LXC
    • Memory: 4 GiB (1 GiB swap)
    • 4 CPU cores
    • 50 GB storage (good enough for a lab)
    • Set your network (and VLAN) as desired
    • Set your DNS domain and servers as desired
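If you prefer the CLI over the web UI, containers like these can also be created with pct on the Proxmox host. A hedged sketch for the Wazuh Indexer container; the VMID, template filename, and storage names are hypothetical, so substitute your own:

```
pct create 201 local:vztmpl/debian-11-standard_11.7-1_amd64.tar.zst \
  --hostname wazuh-indexer-1 \
  --memory 4096 --swap 1024 \
  --cores 2 \
  --rootfs local-lvm:25 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --unprivileged 1
```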





DHCP Reservations

WARNING!

Please give your containers DHCP reservations!

Things will break if hosts lose their DHCP lease.

After you create the containers, do the following:

  • Make a note of
    • Each container's hostname
    • Each container's MAC address
  • Log into your home router (or DHCP server)
    • Assign a static DHCP reservation to each host's MAC address
    • Use the hostname of the container
    • Assign the reservations to the correct VLAN (where applicable)
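To collect the MAC addresses, you can run pct config <CTID> on the Proxmox host for each container and pull the hwaddr field out of the net0 line. A minimal sketch of the extraction; the sample line below is hypothetical output, not from a real host:

```shell
# Hypothetical net0 line from `pct config <CTID>` on the Proxmox host
sample='net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=7A:1B:0C:2D:3E:4F,ip=dhcp,type=veth'

# Extract the MAC address to use for the DHCP reservation
mac=$(echo "$sample" | grep -Eo '([0-9A-F]{2}:){5}[0-9A-F]{2}')
echo "$mac"
```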




Wazuh Components

Since Wazuh 4.3, the default database that stores the alerts from Wazuh Manager is the Wazuh Indexer.

  • The Wazuh Indexer is a fork of the OpenSearch Indexer.
  • The Wazuh Dashboards is a fork of the OpenSearch Dashboards.
  • OpenSearch is based on a fork of Elasticsearch from several years ago and has evolved into its own product, but it looks and acts very similar to Elasticsearch.

In this section, we are going to set up the Wazuh core infrastructure with the aid of some installation scripts provided by the Wazuh team.





Wazuh Indexer Container

Log into the Wazuh Indexer container and complete these steps.

Run all commands as root!





Update and Download Dependencies and Installation Files

apt clean && apt update && apt upgrade -y
apt install -y curl dnsutils net-tools sudo gnupg
cd /tmp
curl -sO https://packages.wazuh.com/4.3/wazuh-install.sh
curl -sO https://packages.wazuh.com/4.3/config.yml




Modify Installation Variables in config.yml

This file sets all of the installation variables. Pay careful attention and replace the following placeholders with the correct values for your Linux containers.

  • <wazuh-indexer-hostname>
  • <wazuh-indexer-ip>
  • <wazuh-manager-hostname>
  • <wazuh-manager-ip>
  • <wazuh-dashboards-hostname>
  • <wazuh-dashboards-ip>

For example, I've named my Wazuh Indexer wazuh-indexer-1 and its IP address is 10.148.148.6. Set your configuration accordingly.

You are telling the Wazuh Indexer how to communicate with the other services running on the other containers.

nodes:
  # Wazuh indexer nodes
  indexer:
    - name: <wazuh-indexer-hostname>
      ip: <wazuh-indexer-ip>

  # Wazuh server nodes
  # Use node_type only with more than one Wazuh manager
  server:
    - name: <wazuh-manager-hostname>
      ip: <wazuh-manager-ip>

  # Wazuh dashboard node
  dashboard:
    - name: <wazuh-dashboards-hostname>
      ip: <wazuh-dashboards-ip>
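For concreteness, here is what the nodes section might look like once filled in, using the wazuh-indexer-1 / 10.148.148.6 example from above and the hostnames chosen earlier; the manager and dashboards IP addresses are hypothetical placeholders for your own values:

```yaml
nodes:
  indexer:
    - name: wazuh-indexer-1
      ip: 10.148.148.6

  server:
    - name: wazuh-manager
      ip: 10.148.148.7

  dashboard:
    - name: wazuh-dashboards
      ip: 10.148.148.8
```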




Create Configuration Files and Certificates

cd /tmp
bash wazuh-install.sh --generate-config-files




Run the Installation Script

Replace <wazuh-indexer-hostname> with the hostname of your Linux container.

cd /tmp
bash wazuh-install.sh --wazuh-indexer <wazuh-indexer-hostname>
Example: --wazuh-indexer wazuh-indexer-1




Copy the wazuh-install-files.tar File

In the previous steps, we generated an archive called wazuh-install-files.tar. Copy this file to ALL servers that you created beforehand. You can use the scp utility or a Python web server; there are many options, and the choice is yours.

This is just an example of the SCP syntax. If you do not allow root SSH login on those instances, you can use other means of transferring the file to those containers.

The files should be placed in the /tmp directory on the target hosts.

scp /tmp/wazuh-install-files.tar root@wazuh-dashboards-container-ip:/tmp/wazuh-install-files.tar
scp /tmp/wazuh-install-files.tar root@wazuh-manager-container-ip:/tmp/wazuh-install-files.tar
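If SCP is not an option, the Python web server route mentioned above works too. A self-contained sketch of the pattern, demonstrated here against 127.0.0.1; in the lab you would run the server on the indexer and replace 127.0.0.1 with the indexer's IP:

```shell
# Serve the directory holding the archive over HTTP (on the indexer this would be /tmp)
mkdir -p /tmp/serve-demo && cd /tmp/serve-demo
echo 'dummy archive' > wazuh-install-files.tar
python3 -m http.server 8123 >/dev/null 2>&1 &
server_pid=$!
sleep 1

# On each target container, fetch the archive into /tmp
mkdir -p /tmp/fetch-demo && cd /tmp/fetch-demo
curl -sO http://127.0.0.1:8123/wazuh-install-files.tar

# Stop the temporary web server
kill "$server_pid"
```

Remember to stop the web server once the copies are done; it serves the directory to anyone who can reach that port.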




Wazuh Dashboards Container

Log into the Wazuh Dashboards container and complete these steps.

Run all commands as root!





Update and Download Dependencies and Installation Files

apt clean && apt update && apt upgrade -y
apt install -y curl dnsutils net-tools sudo gnupg
cd /tmp
ls wazuh-install-files.tar
curl -sO https://packages.wazuh.com/4.3/wazuh-install.sh




Run the Installation Script

Replace <wazuh-dashboards-hostname> with the hostname of your Linux container.

cd /tmp
bash wazuh-install.sh --wazuh-dashboard <wazuh-dashboards-hostname>
Example: --wazuh-dashboard wazuh-dashboards

Once the installation finishes, you will see the following:

  • URL of the Dashboards web interface
  • Dashboards username
  • Dashboards password




Wazuh Manager Container

Log into the Wazuh Manager container and complete these steps.

Run all commands as root!





Update and Download Dependencies and Installation Files

apt clean && apt update && apt upgrade -y
apt install -y curl dnsutils net-tools sudo gnupg
cd /tmp
ls wazuh-install-files.tar
curl -sO https://packages.wazuh.com/4.3/wazuh-install.sh




Run the Installation Script

Replace <wazuh-manager-hostname> with the hostname of your Linux container.

cd /tmp
bash wazuh-install.sh --wazuh-server <wazuh-manager-hostname>
Example: --wazuh-server wazuh-manager




Rotate Wazuh Manager Logs to Save Disk Space

nano /var/ossec/etc/ossec.conf

Add the line <rotate_interval>1d</rotate_interval> to the <global> section as shown below:

<ossec_config>
  <global>
    <rotate_interval>1d</rotate_interval>

Press CTRL + X, then y, then Enter to save your changes. Restart the Wazuh manager: systemctl restart wazuh-manager.





Delete Stale Logs to Save Disk Space

Since this is a lab environment, I'm not too worried about log retention or shipping logs off to cold storage. I'm just going to create a cron job (crontab -e) to delete logs older than 30 days.

# Run every day at 0400
# Find directories older than 30 days and recursively delete
0 4 * * * find /var/ossec/logs/alerts -type d -mtime +30 -exec rm -rf {} \; > /dev/null 2>&1
0 4 * * * find /var/ossec/logs/archives -type d -mtime +30 -exec rm -rf {} \; > /dev/null 2>&1
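Before trusting a recursive rm -rf in cron, it is worth previewing what the find expression will match by swapping -exec rm -rf for -print. A small self-contained demo of the matching logic; the paths here are throwaway demo paths, not the real Wazuh log directories:

```shell
# Create a directory whose modification time is 40 days in the past
mkdir -p /tmp/logdemo/alerts/old-batch
touch -d '40 days ago' /tmp/logdemo/alerts/old-batch

# Preview: directories older than 30 days that the cron job would delete
found=$(find /tmp/logdemo/alerts -mindepth 1 -type d -mtime +30 -print)
echo "$found"
```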




OwlH Components

OwlH Manager Server Container

Log into the OwlH Manager container and complete these steps.

Run all commands as root!





Update and Download Dependencies and Installation Files

apt clean && apt update && apt upgrade -y
apt install -y curl dnsutils net-tools sudo gnupg libpcap0.8
wget http://repo.owlh.net/current-debian/owlhinstaller.tar.gz
mkdir /tmp/owlhinstaller/
tar -C /tmp/owlhinstaller -xvf owlhinstaller.tar.gz
cd /tmp/owlhinstaller/




Edit OwlH Manager Installation Variables in config.json

Note: only the sections to be configured will be displayed. Notice here that the action is set to install and the target is set to owlhmaster and owlhui, indicating that we are installing these services.

You will see other code in the config.json file. Just leave it alone and ensure you set the correct options as described here.

...
...
...

"action": "install",

"target": [
    "owlhmaster",
    "owlhui"
],

...
...
...




Run the OwlH Manager Installer

./owlhinstaller




Install the Web Server Component

wget http://repo.owlh.net/current-debian/services/owlhui-httpd.sh
bash owlhui-httpd.sh




Update the IP Address of the Web Server

nano /var/www/owlh/conf/ui.conf

{
    "master":{
        "ip":"owlh-manager-server-ip-here",
        "port":"50001",
        "baseurl":"/owlh"
    }
}
Replace owlh-manager-server-ip-here with your OwlH manager container's IP address




OwlH Node Container

Log into the OwlH Node container and complete these steps.

Run all commands as root!





Define Network Interfaces on the Container in Proxmox

We are going to add three network interfaces to the NIDS node. One interface will be plugged into vmbr0 and will be used for management, such as SSH. Another interface will be plugged into vmbr0 and will be used to receive packets on a SPAN port. The third interface will be plugged into vmbr1 and will also be used to receive packets on a SPAN port.

  • mgmt: The management interface where you will log into the server
  • sniff-prod (no DHCP reservation needed): This interface is going to be plugged in, but not configured with an IP address
  • sniff-sec (no DHCP reservation needed): This interface is going to be plugged in, but not configured with an IP address




Update and Download Dependencies and Installation Files

apt clean && apt update && apt upgrade -y
apt install -y curl dnsutils net-tools sudo gnupg libpcap0.8
wget http://repo.owlh.net/current-debian/owlhinstaller.tar.gz
mkdir /tmp/owlhinstaller/
tar -C /tmp/owlhinstaller -xvf owlhinstaller.tar.gz
cd /tmp/owlhinstaller/




Edit OwlH Node Installation Variables in config.json

Note: only the sections to be configured will be displayed
Make sure there is no trailing comma after "owlhnode"

You will see other code in the config.json file. Just leave it alone and ensure you set the correct options as described here.

...
...
...

"action": "install",
"repourl":"http://repo.owlh.net/current-debian/",
"target": [
    "owlhnode"
],

...
...
...




Run the OwlH Node Installer Script

./owlhinstaller




Configure Daemons

cp /usr/local/owlh/src/owlhnode/conf/service/owlhnode.service /etc/systemd/system
systemctl daemon-reload
systemctl enable owlhnode
systemctl start owlhnode




Register the Node with the OwlH Manager

Log into the OwlH Manager container at https://owlh-manager-container-ip. The default credentials are:

  • Username: admin
  • Password: admin

Click Nodes > Add NIDS

  • Node Name: Display Name
  • Node IP: OwlH Node management IP address
  • Node user: admin
  • Node Port: 50002
  • Node password: admin




Install Suricata on the OwlH Node

SSH into the OwlH node and run these commands:

apt install -y suricata
apt-mark hold suricata
mkdir -p /var/lib/suricata/rules
touch /var/lib/suricata/rules/suricata.rules




Configure Suricata on the Node

nano /etc/suricata/suricata.yaml




Define the Networks to Monitor

The default scope will cover all RFC1918 networks. This can be altered to define only LANs that are in scope for better performance.

This is the default scope and should work fine for a lab environment:

HOME_NET: "[192.168.0.0/16,10.0.0.0/8,172.16.0.0/12]"

This is an example of a customized scope. It is not required; use it only if you'd like to set very specific subnets for monitoring.

HOME_NET: "[172.16.1.0/24,10.148.148.0/24,10.67.67.0/24,10.107.107.0/24]"




Enable eve.json Output

- eve-log:
      enabled: yes
      filetype: regular #regular|syslog|unix_dgram|unix_stream|redis
      filename: eve.json
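eve.json is newline-delimited JSON, one event per line, so it is easy to spot-check from the shell. A sketch using a single hypothetical alert record; once Suricata is running, point the script at the real eve.json (commonly under /var/log/suricata/):

```shell
# Write a minimal, hypothetical eve.json line for demonstration
echo '{"timestamp":"2023-01-01T00:00:00.000000+0000","event_type":"alert","src_ip":"10.0.0.5","dest_ip":"10.0.0.9"}' > /tmp/eve-demo.json

# Print the source IP of every alert event in the file
src=$(python3 -c "
import json
for line in open('/tmp/eve-demo.json'):
    event = json.loads(line)
    if event.get('event_type') == 'alert':
        print(event['src_ip'])
")
echo "$src"
```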




Define Packet Capture Interfaces

Ensure your interface name(s) match your node’s interfaces. The default settings under the sniff-prod interface can stay the same.

Notice that you are configuring the sniff-prod and sniff-sec interfaces. First, change eth0 to sniff-prod. Next, go down the list, change the second interface from default to sniff-sec, and add cluster-id: 98.

<removed by author for brevity, nothing to change> means you do not need to change anything below this line. Just make sure you have both interfaces and two different cluster IDs (99 and 98).

af-packet:
  - interface: sniff-prod
    <removed by author for brevity, nothing to change>

  - interface: sniff-sec
    cluster-id: 98
    <removed by author for brevity, nothing to change>




Define a Fake Rules File

This is just a placeholder for OwlH to manage the rules from the OwlH Manager server.

default-rule-path: /var/lib/suricata/rules




Create a Suricata Ruleset in the OwlH Manager

The ruleset in Suricata is the collection of rules that will be pushed to any NIDS node(s) in order to detect network anomalies. Once you download individual rulesets, you will put them into a collection and push them to your NIDS node.

Select Rule Sources

Log into the OwlH Manager at https://owlh-manager-ip-address





Download Rules

Click Open Rules > Manage Rulesets Sources > Add new ruleset source.
Under Select Ruleset Type, choose Defaults.


Choose a ruleset – free or paid. Some rulesets will require additional information – such as the Suricata version.

This process must be done one ruleset at a time; you cannot select multiple rulesets at once.

Click Open Rules again and repeat this process of downloading Suricata rules until you've downloaded all of your desired rules.





Make a Ruleset

Click Open Rules again. Create a new ruleset and give it a name, description, and check the boxes to choose any source(s) to map to the ruleset.





Set the Ruleset as Default

Click the star icon to make the ruleset the default for any NIDS deployments.





Define Packet Capture Interfaces in Service Configuration

Click Nodes > Node services configuration (on your desired node) > Choose Suricata > Add Suricata

You have to do this for each interface, as there is no multi-select. So, finish adding the first interface. Then, repeat this process to add the next interface.

Using sniff-sec as an example here

Click Add > Click the edit button

  • Description: sniff-sec
  • Ruleset: Choose your ruleset
  • Interface: Choose your interface
  • Configuration file: /etc/suricata/suricata.yaml
  • Click Save

Click the Sync Ruleset button and click the Start Service button. It might throw an error, but refresh the page and wait a few moments.

Again, do this for every interface you intend to capture on





Verify the Suricata Process(es) Started

Log onto the OwlH Node container and run this command:

ps aux | grep -i suricata | grep -v grep

You should see something similar to this:

root        2230  0.0 18.2 1688308 766312 ?      Rsl  23:45   0:37 /usr/bin/suricata -D -c /etc/suricata/suricata.yaml -i sniff-prod --pidfile /var/run/suricata/2b725740-e8bd-3dd0-18ac-e4e455409932-pidfile.pid -S /etc/suricata/rules/The-Rulez.rules

root        2294  0.0 17.8 1221432 748164 ?      Rsl  23:45   0:37 /usr/bin/suricata -D -c /etc/suricata/suricata.yaml -i sniff-sec --pidfile /var/run/suricata/48f13efb-74c8-2578-a7dc-d19eae40002e-pidfile.pid -S /etc/suricata/rules/The-Rulez.rules




Install Zeek on the OwlH Node

Log into the OwlH Node and run these commands:

echo 'deb http://download.opensuse.org/repositories/security:/zeek/Debian_11/ /' > /etc/apt/sources.list.d/zeek.list
curl -fsSL https://download.opensuse.org/repositories/security:zeek/Debian_11/Release.key | gpg --dearmor > /etc/apt/trusted.gpg.d/security_zeek.gpg
apt update && apt install -y zeek

Create the file /opt/zeek/share/zeek/site/owlh.zeek and add this content:

redef record DNS::Info += {
    bro_engine:    string    &default="DNS"    &log;
};
redef record Conn::Info += {
    bro_engine:    string    &default="CONN"    &log;
};
redef record Weird::Info += {
    bro_engine:    string    &default="WEIRD"    &log;
};
redef record SSL::Info += {
    bro_engine:    string    &default="SSL"    &log;
};
redef record SSH::Info += {
    bro_engine:    string    &default="SSH"    &log;
};

Add these lines to the local.zeek file to ensure the following happens when Zeek runs:

  1. Output all Zeek logs in JSON format
  2. Load the owlh.zeek file
echo '@load policy/tuning/json-logs.zeek' >> /opt/zeek/share/zeek/site/local.zeek
echo '@load owlh.zeek' >> /opt/zeek/share/zeek/site/local.zeek




Configure Zeek on the OwlH Manager

Log into the OwlH Manager at https://owlh-manager-ip-address

Click Nodes > See node files. Then, click main.conf. Find every instance of /usr/local/zeek and change it to /opt/zeek.

Now, click Nodes > Select your node > Node services configuration > Zeek. Then, click the button to enable Zeek management.

Click node.cfg. This is where you configure the interfaces and instance type. You are editing this file remotely from the web browser. Delete everything from the configuration file and add these lines instead.

Be sure to replace owlh-node-container-ip with your OwlH node's IP address.

[logger]
type=logger
host=localhost

[manager]
type=manager
host=owlh-node-container-ip

[proxy-1]
type=proxy
host=localhost

[worker-1]
type=worker
host=localhost
interface=sniff-prod

[worker-2]
type=worker
host=localhost
interface=sniff-sec

Click Save and click Deploy. You should get a visual confirmation that Zeek has started. You can also verify by running this command on the OwlH Node:

ps -eo pid,command --sort=pid | grep zeek | grep -v grep




Add Cron Jobs to Trim Stale Zeek Logs

Run these commands on the OwlH Node container. I am trimming logs older than 30 days. You can adjust your timeframe as required for your environment.

crontab -e

# Run every day at 0400
# Find directories older than 30 days and recursively delete
0 4 * * * find /opt/zeek/logs -type d -mtime +30 -exec rm -rf {} \; > /dev/null 2>&1





Install the Wazuh Agent to Send NIDS Alerts to the Wazuh Server

Install Wazuh Agent on the OwlH Node Container

Run these commands on the OwlH Node container:

curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | apt-key add -
echo "deb https://packages.wazuh.com/4.x/apt/ stable main" | tee -a /etc/apt/sources.list.d/wazuh.list
apt update
apt install -y wazuh-agent
systemctl enable wazuh-agent




Configure the Wazuh agent from the OwlH Manager

Log into the OwlH Manager at https://owlh-manager-ip-address

Click Nodes > Node services configuration > Wazuh

Click Edit ossec.conf file. Change the Manager_IP to the Wazuh Manager container IP address

Also, disable the Wazuh agent queue buffer. The NIDS is going to create a LOT of traffic, and if the Wazuh agent tries to buffer it, it's going to choke the flow of NIDS events to the SIEM.

<client_buffer>
    <!-- Agent buffer options -->
    <disabled>yes</disabled>

Click Save. Click Add file. Add /var/log/owlh/alerts.json. Click on the Run Wazuh icon to start the Wazuh agent on the OwlH node.

You can confirm the Wazuh agent is running by logging into the OwlH Node container and running this command:

systemctl status wazuh-agent




Add OwlH Dashboards, Visualizations, and Templates to Wazuh Dashboards

We've added a Wazuh agent to our NIDS node and now we need to tell Wazuh how to ship the OwlH logs to the Wazuh Indexer. Then, we tell the Wazuh Indexer how to store the events in the database. Finally, we add some dashboards to Wazuh Dashboards to visualize our NIDS events.

SSH into the Wazuh Manager Server

Run these commands:

cd /tmp
mkdir /tmp/owlhfilebeat
wget http://repo.owlh.net/elastic/owlh-filebeat-7.4-files.tar.gz
tar -C /tmp/owlhfilebeat -xf owlh-filebeat-7.4-files.tar.gz




Upload OwlH Visualizations and Dashboards to Wazuh Dashboards

Still running this on the Wazuh Manager container
Be sure to replace <wazuh-dashboards-container-ip> with your Wazuh Dashboards container's IP address

curl -k -u admin:admin -X POST "https://<wazuh-dashboards-container-ip>:443/api/saved_objects/_import" -H "kbn-xsrf: true" --form file=@/tmp/owlhfilebeat/owlh-kibana-objects-20191030.ndjson




Upload the OwlH Document Templates to Wazuh Indexer

Still running this on the Wazuh Manager container
Be sure to replace <wazuh-indexer-container-ip> with your Wazuh Indexer container's IP address

curl -k -u admin:admin -X PUT -H 'Content-Type: application/json' 'https://<wazuh-indexer-container-ip>:9200/_template/owlh' -d@/tmp/owlhfilebeat/owlh-template.json




Install the OwlH Filebeat Module on the Wazuh Manager Server

Still running this on the Wazuh Manager container. Filebeat is used to ship the OwlH data from Wazuh Manager to Wazuh Indexer.
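The exact commands for this step are not shown above. Based on the owlh-filebeat-7.4-files.tar.gz archive extracted earlier, the step likely amounts to copying the OwlH module directory into Filebeat's module path; treat the path below as an assumption and inspect the extracted archive's contents first:

```
# Assumption: the extracted archive contains an owlh/ Filebeat module directory
ls /tmp/owlhfilebeat
cp -r /tmp/owlhfilebeat/owlh /usr/share/filebeat/module/
```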





Edit the OwlH Filebeat Alerts Configuration

nano /usr/share/filebeat/module/wazuh/alerts/config/alerts.yml

Ensure the configuration file contains these lines. There will be other configurations in this file; ignore them and make sure matching lines are present as shown here.

    fields:
      index_prefix: {{ .index_prefix }}
    type: log
    paths:
    {{ range $i, $path := .paths }}
     - {{$path}}
    {{ end }}
    exclude_lines: ["bro_engine"]




Modify the Filebeat Configuration

Still running this on the Wazuh Manager container. We need to tell Filebeat to ship our OwlH data now.

# Make a backup of the current configuration
cp /etc/filebeat/filebeat.yml /etc/filebeat/filebeat.yml.bak

nano /etc/filebeat/filebeat.yml

Ensure the following lines are in the configuration file. Be sure to replace wazuh-indexer-ip-here with your Wazuh Indexer container's IP address. Make sure your configuration file matches what's shown here.

# Wazuh - Filebeat configuration file
output.elasticsearch:
  hosts: wazuh-indexer-ip-here
  protocol: https
  username: "admin"
  password: "admin"
  ssl.certificate_authorities:
    - /etc/filebeat/certs/root-ca.pem
  ssl.certificate: "/etc/filebeat/certs/filebeat.pem"
  ssl.key: "/etc/filebeat/certs/filebeat.key"
setup.template.json.enabled: true
setup.template.json.path: '/etc/filebeat/wazuh-template.json'
setup.template.json.name: 'wazuh'
setup.ilm.overwrite: true
setup.ilm.enabled: false

filebeat.modules:
  - module: wazuh
    alerts:
      enabled: true
    archives:
      enabled: false
  - module: owlh
    events:
      enabled: true

# OwlH pipeline sync
filebeat.overwrite_pipelines: true




Unpack Filebeat Certificates and Restart the Service

Still running this on the Wazuh Manager container.

cd /etc/filebeat/certs/
tar -xf certs.tar   

# Restart filebeat
systemctl restart filebeat

# Ensure it is running
journalctl -f -u filebeat




Mirror Traffic to the Sniff Interfaces

Explaining the Open vSwitch SPAN Ports

You are creating the SPAN ports on the virtual switches in Proxmox. Recall that the OwlH node has three network interfaces.

  • net0 gets an IP address from the router. We can log into the server on this interface.
  • net1 does not get an IP address. This is one of the packet capture interfaces.
  • net2 does not get an IP address. This is one of the packet capture interfaces.

If you open a shell on the Proxmox server, you can see the interfaces assigned to your OwlH node container. Here is an example from my Proxmox server, where my OwlH node container has the ID of 208.

Interfaces assigned to CT 208 – my OwlH Node container

The interfaces are shown in order here:

  • veth208i0 is my mgmt interface
  • veth208i1 is my sniff-prod interface
  • veth208i2 is my sniff-sec interface

I want to mirror all traffic from every port on the production switch to veth208i1 and all traffic from every port on the security switch to veth208i2.

Run these commands on the Proxmox host.
Be sure to replace <CTID> with your OwlH Node container ID.





Production Switch

ovs-vsctl -- --id=@p get port veth<CTID>i1 -- --id=@m create mirror name=owlhProd select-all=true output-port=@p -- set bridge vmbr0 mirrors=@m

  • ovs-vsctl is the Open vSwitch control program
  • --id=@p get port veth<CTID>i1
    • Store the switch port of this interface in @p
    • @p is a variable we can reference for later
  • --id=@m create mirror name=owlhProd select-all=true output-port=@p
    • Create a SPAN port called owlhProd and store it in variable @m
    • Select all interfaces on the switch
    • Mirror them to output-port @p (the variable from above)
  • set bridge vmbr0 mirrors=@m
    • Add the new mirror configuration to the vmbr0 switch




Security Switch

ovs-vsctl -- --id=@p get port veth<CTID>i2 -- --id=@m create mirror name=owlhSec select-all=true output-port=@p -- set bridge vmbr1 mirrors=@m

  • ovs-vsctl is the Open vSwitch control program
  • --id=@p get port veth<CTID>i2
    • Store the switch port of this interface in @p
    • @p is a variable we can reference for later
  • --id=@m create mirror name=owlhSec select-all=true output-port=@p
    • Create a SPAN port called owlhSec and store it in variable @m
    • Select all interfaces on the switch
    • Mirror them to output-port @p (the variable from above)
  • set bridge vmbr1 mirrors=@m
    • Add the new mirror configuration to the vmbr1 switch
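If you need to inspect or undo the mirror configuration later (for example, after recreating the container, which changes the veth interface names), ovs-vsctl can list and clear mirrors:

```
# List the configured mirrors
ovs-vsctl list mirror

# Remove all mirrors from a bridge, then re-run the create command above
ovs-vsctl clear bridge vmbr0 mirrors
ovs-vsctl clear bridge vmbr1 mirrors
```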




Optional: Adding a STAP Interface to the NIDS

What is a STAP Interface and When Would It Be Used?

What is it?

The STAP interface is a software TAP interface. It is a socket that binds to a physical interface and acts as a means to receive packets from other hosts on the network.





When is it Used?

It is used to receive packets from other hosts on the network where port mirroring is not an option.





OwlH Client

The STAP interface works in a client-server relationship. The server/daemon is running on the OwlH NIDS node. The client is running on a networked host from which packets will be sent to the STAP interface.

Documentation on installing the OwlH Client can be found here: https://documentation.owlh.net/en/0.17.0/main/OwlHSTAP.html#what-is-owlh-client

Currently only Linux and Windows hosts are supported





STAP Diagram

I have created a diagram that will hopefully help you visualize the purpose of a STAP interface. In the diagram there are two networks.

On 192.168.1.0/24 there is a switch that is configured with a SPAN port. On 172.16.10.0/24 there are some VMs and we want to forward their packets to the NIDS.





Power off the OwlH Node Container

A few of the next steps require the container to be off to load some drivers.





Load the "Dummy" Driver on the Proxmox Host

Run these commands on a shell on the Proxmox server itself. These are containers. They do not have their own kernel and must utilize the host’s kernel. That’s why containers are so lightweight.

# Load the driver now
modprobe dummy

# Load at boot
echo 'dummy' >> /etc/modules-load.d/modules.conf




Allow the OwlH Node Container to Search for Kernel Modules on the Host

Still running this on the Proxmox server.
Be sure to replace <container-ID> with your OwlH Node container's ID number.

# This includes the host's modules directory as a mountpoint on the CT
# We also do not want to backup this mountpoint
pct set <container-ID> --mp0 /usr/lib/modules/$(uname -r),mp=/lib/modules/$(uname -r),ro=1,backup=0
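You can confirm the bind mount landed in the container configuration (hedged sketch; 100 is a placeholder for your container's ID):

```shell
# The mp0 line should reference the host's /usr/lib/modules directory
MP_LINE=$(pct config 100 2>/dev/null | grep '^mp0:' || echo "mp0 entry not found -- check the container ID")
echo "$MP_LINE"
```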




Power on the OwlH Node Container

The OwlH node should now be ready to load required drivers from the host. Turn it back on.





Create an init.d Script to Bring up the STAP Interface

Run these commands as root on the OwlH Node container.

touch /opt/owlhinterface.sh
nano /opt/owlhinterface.sh

/opt/owlhinterface.sh

#!/bin/bash
#
### BEGIN INIT INFO
# Provides:          owlhinterface
# Required-Start:    $local_fs $network
# Required-Stop:     $local_fs
# Default-Start:     3
# Default-Stop:      0 1 6
# Short-Description: Create and cleanup OwlH STAP interface
# Description:       Create and cleanup OwlH STAP interface
### END INIT INFO

PATH=/bin:/usr/bin:/sbin:/usr/sbin
DESC="OwlH Interface Script"
NAME=owlhinterface-script
SCRIPTNAME=/etc/init.d/"$NAME"
RED='\e[0;31m'
NO_COLOR='\e[0m'

case "$1" in
start)
    modprobe -v dummy numdummies=0
    ip link add owlh type dummy
    ip link set owlh mtu 65535
    ip link set owlh up
    ;;
stop)
    ip link delete owlh
    ;;
restart)
    ip link delete owlh 2>/dev/null
    modprobe -v dummy numdummies=0
    ip link add owlh type dummy
    ip link set owlh mtu 65535
    ip link set owlh up
    ;;
*)
    echo -e "${RED}This script only supports start, stop, and restart actions.${NO_COLOR}"
    exit 2
    ;;
esac
exit 0

Exit and save the script.

# Make the script executable and link it as a startup service
chmod +x /opt/owlhinterface.sh
ln -s /opt/owlhinterface.sh /etc/init.d/owlhinterface

# Update runlevel directories
update-rc.d owlhinterface defaults

# Reload available daemons
systemctl daemon-reload

# The service will run at boot
# You can also invoke it manually, e.g.:
service owlhinterface start   # or stop, restart
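Once the service has run, the STAP interface should exist and be up; a quick check (exact output varies by host):

```shell
# Show the owlh dummy interface state; expect UP with MTU 65535
OWLH_STATE=$(ip -br link show owlh 2>&1 || echo "owlh interface not present -- did the service start?")
echo "$OWLH_STATE"
```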




Register the STAP Interface with OwlH Manager

Log into the OwlH Manager server.

Choose Nodes > Node services configuration > Traffic Management - STAP. Add Socket → Network.

  • Give it a name (e.g., owlh interface)
  • Default port is fine
  • Default cert is fine
  • Forward to owlh
  • Click Add

Start the STAP service





Add the Interface to the Suricata Configuration File

Click Nodes > See node files

Edit /etc/suricata/suricata.yaml

Note that I am adding the owlh interface in addition to the sniff-prod and sniff-sec interfaces

af-packet:
  - interface: sniff-prod
    cluster-id: 99
    <removed for brevity>

  - interface: sniff-sec
    cluster-id: 98
    <removed for brevity>
    
  - interface: owlh
    #threads: auto
    cluster-id: 97
    cluster-type: cluster_flow
    defrag: yes
    #rollover: yes
    #use-mmap: yes
    #mmap-locked: yes
    tpacket-v3: yes
    ring-size: 2048
    block-size: 409600
    #block-timeout: 10
    #use-emergency-flush: yes
    #checksum-checks: kernel
    #bpf-filter: port 80 or udp
    #copy-mode: ips
    #copy-iface: eth1

Click Save
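If you also have shell access on the OwlH node, Suricata can validate the edited configuration before the service restarts (a sketch; the path assumes the default install location referenced above):

```shell
# Test-load the configuration; a healthy config reports that it
# was successfully loaded before exiting
SURI_TEST=$(suricata -T -c /etc/suricata/suricata.yaml 2>&1 || echo "suricata test failed or binary not found")
echo "$SURI_TEST"
```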





Add the STAP Interface to the Suricata Service Configuration

Click Nodes > Node services configuration > Suricata > Add Suricata. Name it stap.

Click Save. Click the sync ruleset button and click the start service button.





Add the Interface to the Zeek Configuration

Click Nodes > Node services configuration > Zeek. Click node.cfg.

Note that I am adding the owlh interface in addition to the sniff-prod and sniff-sec interfaces

...removed by author for brevity...

[worker-1]
type=worker
host=localhost
interface=sniff-prod

[worker-2]
type=worker
host=localhost
interface=sniff-sec

[worker-3]
type=worker
host=localhost
interface=owlh

Click Save.
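With shell access on the OwlH node, `zeekctl` can validate node.cfg before the workers restart (hedged; assumes Zeek's standard tooling is on the PATH):

```shell
# Validate the Zeek configuration, including the new worker stanza
ZEEK_CHECK=$(zeekctl check 2>&1 || echo "zeekctl unavailable or check failed")
echo "$ZEEK_CHECK"
# If the check passes, apply it with: zeekctl deploy
```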





Installing Wazuh Agents on Endpoints (HIDS)

The Wazuh agent is a host intrusion detection system (HIDS). The purpose of Wazuh agents is to monitor endpoints for security configuration issues and integrity issues with the file system, operating system, and much more.

Prerequisites

  • Compatible host
  • Host needs to be able to communicate with the Wazuh server
    • May need to open routes and/or firewall ports
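By default, Wazuh agents send events to the manager on 1514/tcp and enroll on 1515/tcp. A hedged sketch of opening those ports with ufw (substitute your own firewall tooling as needed):

```shell
# Allow inbound agent traffic on the Wazuh manager
# 1514/tcp = agent events, 1515/tcp = agent enrollment
FW_RESULT=$( { ufw allow 1514/tcp && ufw allow 1515/tcp; } 2>&1 || echo "ufw unavailable -- open 1514/tcp and 1515/tcp in your firewall of choice" )
echo "$FW_RESULT"
```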




How to Install

Refer to the official documentation for installing endpoint agents on your servers, desktops, etc.

Wazuh agent - Installation guide · Wazuh documentation
User manual, installation and configuration guides. Learn how to get the most out of the Wazuh platform.




Viewing the SIEM Dashboards

  1. Log into the Wazuh Dashboards web server – https://wazuh-dashboards-container-ip
  2. Credentials are provided after installing Wazuh Dashboards
Wazuh dashboard - Components · Wazuh documentation
User manual, installation and configuration guides. Learn how to get the most out of the Wazuh platform.
Searching for alerts using the Wazuh app for Kibana · Wazuh · The Open Source Security Platform
Learn how you can use the search tools provided on the Wazuh app for Kibana, thanks to its integration with the Elastic Stack.
For an older version, but still a good walkthrough




Important: Define an Index Management Policy

ATTENTION!

You REALLY want to do this now as opposed to later.

  • Save your disk space
  • Reduce stressful troubleshooting hours
  • Trim your indices and improve performance

Do it now! Please.

Wazuh Index Management Policy
In this post, I show how to manage your Wazuh Indexer indices in order to improve performance and manage disk space consumed by indices.




Troubleshooting the SIEM

Changing Default Passwords

OwlH Manager Admin Password

  1. Log into the OwlH Manager server
  2. Click the user profile in the top-right
  3. Change the password


Wazuh Infrastructure Admin Password

Change the Wazuh indexer passwords - Securing Wazuh
User manual, installation and configuration guides. Learn how to get the most out of the Wazuh platform.




Alerts Stopped Registering in Wazuh Dashboards

In my past experience, this has almost always been due to hitting the maximum number of shards or running out of disk space.

If you haven't already done so, consider looking into an Index Management Policy.

Wazuh Index Management Policy
In this post, I show how to manage your Wazuh Indexer indices in order to improve performance and manage disk space consumed by indices.

I would recommend inspecting things in the following order:

  1. Make sure the Wazuh Manager service is running
  2. Make sure the Filebeat service is running on the Wazuh Manager server
    • Check Filebeat logs
      • If you see logs on hitting the shard limit
        • Consider adding another Wazuh Indexer node (see below)
        • Clean up old indices with an index management policy
  3. Make sure the Wazuh Indexer service is running
    • Check Wazuh Indexer logs
    • Make sure you have enough disk space available
      • If your disk is 95% full, Wazuh Indexer will prevent additional writes
        • Consider adding more disk space and/or another Wazuh Indexer node
        • Clean up old indices with an index management policy




Wazuh Dashboards Keeps Prompting to Select a Tenant when Logging into the Web Portal

Resolution: Disable multi-tenancy, as it does not apply to Wazuh.

  1. SSH into the Wazuh Dashboards container and edit the /etc/wazuh-dashboard/opensearch_dashboards.yml file
  2. Ensure this line is present and matches:
opensearch_security.multitenancy.enabled: false




Extending Session Timeout in Wazuh Dashboards

  1. SSH into the Wazuh Dashboards container and edit the /etc/wazuh-dashboard/opensearch_dashboards.yml file
  2. Ensure these lines are present and/or match
opensearch_security.cookie.ttl: 86400000
opensearch_security.session.ttl: 86400000
opensearch_security.session.keepalive: true
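Changes to opensearch_dashboards.yml only take effect after the dashboard service restarts (hedged; assumes the systemd unit is named wazuh-dashboard):

```shell
# Restart the dashboard so the new TTL settings are picked up
RESTART_RESULT=$(systemctl restart wazuh-dashboard 2>&1 && echo "wazuh-dashboard restarted" || echo "could not restart wazuh-dashboard -- restart it manually")
echo "$RESTART_RESULT"
```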




Extending Wazuh's Capabilities

The folks over at OpenSecure have done a really fantastic job at creating content that showcases Wazuh's capabilities and ways to extend it with various integrations. I wholeheartedly recommend taking a look.

OpenSecure
Focusing on Open Source cybersecurity products that provide a robust and scalable solution that can be customized to integrate with any network.

Also, have a look at some of the additional Wazuh content I've written. If I included everything here, the guide would quickly grow out of scope.

Wazuh - 0xBEN
A blog about experiences in cybersecurity, information security, technology, and roasting coffee at home.




Follow-Up Post: Wazuh Indexer Cluster

Adding this here as an afterthought. I had been running my SIEM for quite some time – adding Wazuh agents to the lab – and it was growing.

My single Wazuh Indexer node was getting hammered with data and running into stability issues. So, I decided it would be a good time to expand my single node backend to a multi-node cluster. Here's how I did it:

Wazuh: Upgrading Elasticsearch to a Multi-Node Cluster
In this post, I show you how to horizontally scale your Elasticsearch single-node setup to a multi-node cluster.




Next Step: Create a Kali Linux VM

Create a Kali Linux VM in Proxmox
In this module, we will look at the process of creating a Kali Linux VM using the command line in Proxmox
