Adding a Comprehensive Wazuh SIEM and Network Intrusion Detection System (NIDS) to the Proxmox Lab

In this module, we will take a look at the process of setting up a comprehensive Wazuh SIEM, including a NIDS and some HIDS agents, in our Proxmox home lab.

This page is part of the larger series on converting an old laptop into a bare metal home lab server. Click the link to be taken back to the original post.

Proxmox VE 8: Converting a Laptop into a Bare Metal Server
In this post, we will take a look at an in-detail process of setting up a Proxmox home lab on a bare metal server.
ℹ️
Please note that this step is COMPLETELY OPTIONAL. If you'd rather not work through the SIEM setup right now, you can continue on to the next step and come back here later.





Reviewing Some Networking Concepts

Router and Switch with Default Configurations

In this scenario, there is a router with a very simple, default configuration. The router has the default private IP address range of 172.16.1.0/24. The DHCP server is enabled and no VLANs are configured.





Router with VLANs Configured, Default Switch Configurations

In this scenario, we have configured some VLANs on the router. That way, the router and switch can work together to route packets when the router receives an Ethernet frame tagged with a VLAN ID. However, none of the switch ports have been configured with VLAN ID tags.





Router with VLANs and Configured Switch Ports

In this scenario, the switch tags Ethernet frames with a VLAN ID on any configured switch ports. That way, if an Ethernet frame contains a VLAN ID tag, the switch checks its configured ports for the matching VLAN ID and MAC address.





Router with VLANs, Tagged Switched Ports, and Port Mirroring

Port mirroring is where we configure a switch to send a copy of every Ethernet frame to another port on the switch. This is a common configuration with Intrusion Detection Systems when you want to monitor all traffic on a network.





Understanding the Proxmox Networking

  • VMBR0 is the switch to which all of the VMs and containers connected to your home network will be attached.
    • If you have a home router that supports 802.1Q, you could apply some VLAN segmentation to VMBR0 to divide your VMs and containers into further subnets if desired
  • VMBR1 is the switch to which all of the security and vulnerable infrastructure will be attached for further research. In our lab environment, we have already added some VLANs to VMBR1, because pfSense supports 802.1Q.
  • The NIDS is connected to both VMBR0 and VMBR1. Both switches are configured such that the ports where the IDS is plugged in are mirror ports, and every other port will send a copy of every Ethernet frame to the IDS.





Order of Operations

  1. Configure the Wazuh Indexer container
    • This is the database where event data will be stored
    • Any alerts that are picked up by Wazuh will be shipped here
  2. Configure the Wazuh Manager container
    • This is the SIEM that will collect the logs from any agents
    • Agents are running on endpoints on our network
    • The agents are HIDS which will forward event data to the SIEM
    • You can also forward syslog to Wazuh for processing if you cannot install the agent on a host
  3. Configure the Wazuh Dashboards container
    • Wazuh Dashboards serves three purposes:
      • A web server that displays dashboards about alerts data
      • A Wazuh API client that can control certain features in Wazuh
      • A Wazuh Indexer API client that queries the database
  4. Configure the OwlH Manager container
    • OwlH Manager serves three purposes:
      • Keeping Suricata rules up to date
      • Pushing configurations to any registered NIDS node(s)
      • Keeping services running on the NIDS node(s)
  5. Configure the OwlH NIDS node container
    • This is the network intrusion detection system
    • It runs the following applications to generate network event data
      • Suricata compares packets against a set of rules for anomalies
      • Zeek adds lots of metadata about packets
      • Wazuh agent sends alert data to the SIEM
  6. Install Wazuh Agents on Servers and Desktops
    • These are the endpoints to be monitored
    • They can be configured to ingest any log and send to the Wazuh Manager
    • The Wazuh Manager will receive the logs and attempt to decode them and parse them for an alertable event





Desired End State

  • OwlH Manager
    • Downloaded and configured ruleset to be pushed to node(s)
    • Interfaces defined for monitoring and auto-start
    • Joined the OwlH NIDS node to the OwlH Manager
    • Pushed configurations to the OwlH NIDS node
  • OwlH NIDS
    • Installed Suricata and Zeek
    • Configured network interfaces
    • Capturing mirrored packets and analyzing them
    • Wazuh agent is installed and shipping alerts to Wazuh Manager
  • Wazuh Manager
    • Wazuh software installed and running
    • Accepting inputs from agents
    • Analyzing and sending to Wazuh Indexer
  • Wazuh Indexer
    • OwlH NIDS templates installed
    • Receiving inputs from Wazuh Manager
  • Wazuh Dashboards
    • OwlH dashboards installed
    • Successfully connecting to Wazuh Indexer and Dashboards APIs
  • Wazuh Agents
    • Wazuh agent installed on any server or workstation to be monitored
    • As long as the endpoint can establish a TCP/IP connection with the Wazuh Manager, it can ship the logs





Stage Your Containers

Log into Proxmox and create five Linux Containers for your infrastructure.

  1. Wazuh Indexer
    • Hostname: wazuh-indexer
    • Debian 11 LXC
    • Memory: 4 GiB (1 GiB swap) – 8 GiB recommended
    • 2 CPU cores – 4 CPU cores recommended
    • 25 GB disk (good enough for a lab)
    • Set your network (and VLAN) as desired
    • Set your DNS domain and servers as desired
  2. Wazuh Dashboards
    • Hostname: wazuh-dashboards
    • Debian 11 LXC
    • Memory: 512 MiB (512 MiB swap)
    • 2 CPU cores
    • 10 GB storage
    • Set your network (and VLAN) as desired
    • Set your DNS domain and servers as desired
  3. Wazuh Manager
    • Hostname: wazuh-manager
    • Debian 11 LXC
    • Memory: 1 GiB (512 MiB swap)
    • 2 CPU cores
    • 10 GB storage
    • Set your network (and VLAN) as desired
    • Set your DNS domain and servers as desired
  4. OwlH Manager
    • Hostname: owlh-manager
    • Debian 11 LXC
    • Memory: 512 MiB (512 MiB swap)
    • 1 CPU core
    • 10 GB storage
    • Set your network (and VLAN) as desired
    • Set your DNS domain and servers as desired
  5. OwlH Node
    • Hostname: owlh-node
    • Debian 11 LXC
    • Memory: 4 GiB (1 GiB swap)
    • 4 CPU cores
    • 50 GB storage (good enough for a lab)
    • Set your network (and VLAN) as desired
    • Set your DNS domain and servers as desired





DHCP Reservations

⚠️
Please give your containers DHCP reservations!

After you create the containers, do the following (a command sketch for finding each container's MAC address follows this list):

  • Make a note of
    • Each container's hostname
    • Each container's MAC address
  • Log into your home router (or DHCP server)
    • Assign a static DHCP reservation to the host's MAC address
    • Use the hostname of the container
    • Assign the reservations to the correct VLAN (where applicable)
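If you prefer the command line, a quick way to pull the MAC address of each container is to read its configuration from the Proxmox host. This is only a sketch; <CTID> is a placeholder for each container's ID.

# Run on the Proxmox host; prints the netX line(s), including hwaddr=<MAC>
pct config <CTID> | grep ^net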





Wazuh Components

Since Wazuh 4.3, the default database that stores the alerts from Wazuh Manager is the Wazuh Indexer.

  • The Wazuh Indexer is a fork of the OpenSearch Indexer.
  • The Wazuh Dashboards is a fork of the OpenSearch Dashboards.
  • OpenSearch is based on a fork of Elasticsearch from several years ago and has morphed into its own product, but it looks and acts very similarly to Elasticsearch.

In this section, we are going to set up the Wazuh core infrastructure with the aid of some installation scripts provided by the Wazuh team.





Wazuh Indexer Container

Run all commands as root!

Log into the Wazuh Indexer container and complete these steps.





Update and Download Dependencies and Installation Files

apt clean && apt update && apt upgrade -y
apt install -y curl dnsutils net-tools sudo gnupg
cd /tmp
curl -sO https://packages.wazuh.com/4.3/wazuh-install.sh
curl -sO https://packages.wazuh.com/4.3/config.yml





Modify Installation Variables in config.yml

This file will set all of the installation variables. Pay careful attention and replace the following templates with the correct values of your Linux Containers.

  • <wazuh-indexer-hostname>
  • <wazuh-indexer-ip>
  • <wazuh-manager-hostname>
  • <wazuh-manager-ip>
  • <wazuh-dashboards-hostname>
  • <wazuh-dashboards-ip>

For example, I've named my Wazuh Indexer wazuh-indexer-1 and its IP address is 10.148.148.6. Set your configuration accordingly.

You are telling the Wazuh Indexer how to communicate with the other services running on the other containers.

nodes:
  # Wazuh indexer nodes
  indexer:
    - name: <wazuh-indexer-hostname>
      ip: <wazuh-indexer-ip>

  # Wazuh server nodes
  # Use node_type only with more than one Wazuh manager
  server:
    - name: <wazuh-manager-hostname>
      ip: <wazuh-manager-ip>

  # Wazuh dashboard node
  dashboard:
    - name: <wazuh-dashboards-hostname>
      ip: <wazuh-dashboards-ip>





Create Configuration Files and Certificates

cd /tmp
bash wazuh-install.sh --generate-config-files --ignore-check





Run the Installation Script

Replace <wazuh-indexer-hostname> with the hostname of your Linux container.

cd /tmp
bash wazuh-install.sh --wazuh-indexer <wazuh-indexer-hostname> --ignore-check
bash wazuh-install.sh --start-cluster --ignore-check

Example: --wazuh-indexer wazuh-indexer --ignore-check
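As a quick sanity check, you can query the indexer API once the cluster has started. This is only a hedged example; it assumes the default admin user, the password generated by the install script (found in wazuh-passwords.txt inside wazuh-install-files.tar), and the indexer listening on its default port of 9200.

# Should return a JSON banner with the cluster name and version
curl -k -u admin https://<wazuh-indexer-ip>:9200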





Copy the wazuh-install-files.tar File

⚠️
In the previous steps, we generated an archive called wazuh-install-files.tar. Copy this file to ALL servers that you created beforehand. You can use the scp utility or a Python web server. There are many options, the choice is yours.
ℹ️
This is just an example of the SCP syntax. If you do not allow root SSH login on those instances, you can use other means of transferring the file to those containers.

The files should be placed in the /tmp directory on the target hosts.

scp /tmp/wazuh-install-files.tar root@wazuh-dashboards-container-ip:/tmp/wazuh-install-files.tar
scp /tmp/wazuh-install-files.tar root@wazuh-manager-container-ip:/tmp/wazuh-install-files.tar
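If root SCP is not an option, a throwaway Python web server on the Wazuh Indexer container works just as well. A minimal sketch, assuming port 8000 is free and reachable from the other containers:

# On the Wazuh Indexer container
cd /tmp && python3 -m http.server 8000

# On the Wazuh Manager and Wazuh Dashboards containers
cd /tmp && curl -sO http://<wazuh-indexer-ip>:8000/wazuh-install-files.tar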





Prevent Unplanned Upgrades

You should plan to upgrade your Wazuh infrastructure in such a way that maintains the availability and integrity of your SIEM. Unplanned upgrades can cause incompatibilities and lead to time-consuming restorations and/or reinstallations.

⚠️
Please note that if you install a newer version of the wazuh-indexer package later using apt install wazuh-indexer, you will have to re-hold the package using apt-mark hold wazuh-indexer
apt-mark hold wazuh-indexer





Wazuh Manager Container

Run all commands as root!

Log into the Wazuh Manager container and complete these steps.





Update and Download Dependencies and Installation Files

apt clean && apt update && apt upgrade -y
apt install -y curl dnsutils net-tools sudo gnupg
cd /tmp
ls wazuh-install-files.tar
curl -sO https://packages.wazuh.com/4.3/wazuh-install.sh





Run the Installation Script

Replace <wazuh-manager-hostname> with the hostname of your Linux container.

cd /tmp
bash wazuh-install.sh --wazuh-server <wazuh-manager-hostname> --ignore-check

Example: --wazuh-server wazuh-manager
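Before moving on, it is worth confirming the manager service came up cleanly. A quick check:

systemctl status wazuh-manager --no-pager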





Rotate Wazuh Manager Logs to Save Disk Space

nano /var/ossec/etc/ossec.conf

Add the line <rotate_interval>1d</rotate_interval> to the <global> section as shown below:

<ossec_config>
  <global>
    <rotate_interval>1d</rotate_interval>

Press CTRL + X, then y, then Enter to save your changes. Restart the Wazuh manager: systemctl restart wazuh-manager.





Delete Stale Logs to Save Disk Space

Since this is a lab environment, I'm not too worried about log retention or shipping them off to cold storage. I'm just going to create a cron job to delete logs older than 30 days.

# Run every day at 0400
# Find directories older than 30 days and recursively delete
0 4 * * * find /var/ossec/logs/alerts -type d -mtime +30 -exec rm -rf {} \; > /dev/null 2>&1
0 4 * * * find /var/ossec/logs/archives -type d -mtime +30 -exec rm -rf {} \; > /dev/null 2>&1





Prevent Unplanned Upgrades

You should plan the upgrades of your Wazuh manager and Wazuh agents. Having agents that are higher versions than your Wazuh manager can lead to compatibility issues.

⚠️
Please note that if you install a newer version of the wazuh-manager package later using apt install wazuh-manager, you will have to re-hold the package using apt-mark hold wazuh-manager
apt-mark hold wazuh-manager





Wazuh Dashboards Container

Run all commands as root!

Log into the Wazuh Dashboards container and complete these steps.





Update and Download Dependencies and Installation Files

apt clean && apt update && apt upgrade -y
apt install -y curl dnsutils net-tools sudo gnupg
cd /tmp
ls wazuh-install-files.tar
curl -sO https://packages.wazuh.com/4.3/wazuh-install.sh





Run the Installation Script

Replace <wazuh-dashboards-hostname> with the hostname of your Linux container.

cd /tmp
bash wazuh-install.sh --wazuh-dashboard <wazuh-dashboards-hostname> --ignore-check

Example: --wazuh-dashboard wazuh-dashboards

Once the installation finishes, you will see the following (a sketch for retrieving these credentials later appears after this list):

  • URL of the Dashboards web interface
  • Dashboards username
  • Dashboards password
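If you need to retrieve these credentials again later, they are stored in a wazuh-passwords.txt file inside the installation archive. A hedged example, assuming the archive is still sitting in /tmp:

# Print the generated passwords without unpacking the whole archive
tar -O -xvf /tmp/wazuh-install-files.tar wazuh-install-files/wazuh-passwords.txt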





Prevent Unplanned Upgrades

Again, plan your Wazuh infrastructure upgrades. Putting the packages on hold prevents unplanned upgrades, which can lead to loss of data and lengthy restoration of service.

⚠️
Please note that if you install a newer version of the wazuh-dashboard package later using apt install wazuh-dashboard, you will have to re-hold the package using apt-mark hold wazuh-dashboard
apt-mark hold wazuh-dashboard





OwlH Components

OwlH Manager Container

Run all commands as root!

Log into the OwlH Manager container and complete these steps.





Update and Download Dependencies and Installation Files

apt clean && apt update && apt upgrade -y
apt install -y curl dnsutils net-tools sudo gnupg libpcap0.8
cd /tmp
wget http://repo.owlh.net/current-debian/owlhinstaller.tar.gz
mkdir /tmp/owlhinstaller/
tar -C /tmp/owlhinstaller -xvf owlhinstaller.tar.gz
cd /tmp/owlhinstaller/





Edit OwlH Manager Installation Variables in config.json

ℹ️
Only the sections to be configured will be displayed

Notice here the action is set to install and the target is set to owlhmaster and owlhui; indicating we are installing these services.

You will see other code in the config.json file. Just leave it alone and ensure you set the correct options as described here.

...
...
...

"action": "install",

"target": [
    "owlhmaster",
    "owlhui"
],

...
...
...

Run the installer

./owlhinstaller





Install the Web Server Component

wget http://repo.owlh.net/current-debian/services/owlhui-httpd.sh
bash owlhui-httpd.sh





Update the IP Address of the Web Server

nano /var/www/owlh/conf/ui.conf

{
    "master":{
        "ip":"owlh-manager-server-ip-here",
        "port":"50001",
        "baseurl":"/owlh"
    }
}

Replace owlh-manager-server-ip-here with your OwlH manager container's IP address

systemctl restart owlhmaster
systemctl restart apache2
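To confirm the manager API and web UI came back up, you can check that the expected ports are listening. A small sketch, assuming the default OwlH master port of 50001 and Apache serving the UI on 443:

ss -tlnp | grep -E ':443|:50001'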





OwlH Node Container

Run all commands as root!

Log into the OwlH Node container and complete these steps.





Define Network Interfaces on the Container in Proxmox

We are going to add three network interfaces to the NIDS node. One interface will be plugged into vmbr0 and will be used for management – such as SSH. Another interface will be plugged into vmbr0 and will be used to receive packets on a SPAN port. The third interface will be plugged into vmbr1 and will be used to receive packets on a SPAN port. (A CLI sketch for adding these interfaces follows the descriptions below.)

mgmt (The management interface where you will log into the server)

sniff-prod (no DHCP reservation needed)

This interface is going to be plugged in, but not configured with an IP address.

sniff-sec (no DHCP reservation needed)

This interface is going to be plugged in, but not configured with an IP address.
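You can add these interfaces in the Proxmox web UI, or from a shell on the Proxmox host. The following is only a sketch: it assumes net0 is already your mgmt interface, that sniff-prod sits on vmbr0 and sniff-sec on vmbr1, and that <CTID> is your OwlH Node container ID.

# Run on the Proxmox host
pct set <CTID> -net1 name=sniff-prod,bridge=vmbr0
pct set <CTID> -net2 name=sniff-sec,bridge=vmbr1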





Bring Sniff Interfaces up

nano /etc/network/interfaces

Add these interface configurations to the bottom of the file.

auto sniff-prod
iface sniff-prod inet manual

auto sniff-sec
iface sniff-sec inet manual

Restart the networking daemon. This will kill your SSH session.

systemctl restart networking
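Once you log back in, a quick check that both sniffing interfaces came up (no IP addresses are expected on them):

ip -br link show | grep -E 'sniff-(prod|sec)'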





Install OwlH Node and Configure Daemons

apt clean && apt update && apt upgrade -y
apt install -y curl dnsutils net-tools sudo gnupg libpcap0.8
cd /tmp
wget http://repo.owlh.net/current-debian/owlhinstaller.tar.gz
mkdir /tmp/owlhinstaller/
tar -C /tmp/owlhinstaller -xvf owlhinstaller.tar.gz
cd /tmp/owlhinstaller/



Edit the config.json File

ℹ️
Only the sections to be configured will be displayed
Make sure there is no trailing comma after "owlhnode"

You will see other code in the config.json file. Just leave it alone and ensure you set the correct options as described here.

nano ./config.json
...
...
...

"action": "install",
"repourl":"http://repo.owlh.net/current-debian/",
"target": [
    "owlhnode"
],

...
...
...





Run the Installer and Configure the Daemon

./owlhinstaller

cp /usr/local/owlh/src/owlhnode/conf/service/owlhnode.service /etc/systemd/system
systemctl daemon-reload
systemctl enable owlhnode
systemctl start owlhnode
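You can confirm the node daemon is running and listening on its API port (50002 by default, per the registration step below):

systemctl status owlhnode --no-pager
ss -tlnp | grep ':50002'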





Register the Node with the OwlH Manager

Log into the OwlH Manager container at https://owlh-manager-container-ip. The default credentials are:

  • Username: admin
  • Password: admin

Click Nodes > Add NIDS

  • Node Name: Display Name
  • Node IP: OwlH Node management IP address
  • Node user: admin
  • Node Port: 50002
  • Node password: admin





Install Suricata on the OwlH Node

⚠️
Run these commands as root on the OwlH Node
apt install -y suricata
systemctl disable --now suricata
mkdir -p /var/lib/suricata/rules
touch /var/lib/suricata/rules/suricata.rules

We disable the Suricata service, as OwlH will be managing it for us

sed -i.bak -e 's/\/var\/run\/suricata/\/var\/run/g' /usr/local/owlh/src/owlhnode/conf/main.conf

Change all instances of '/var/run/suricata' to '/var/run' to match Suricata config

systemctl restart owlhnode.service

Restart the 'owlhnode' service to adopt the updated 'main.conf'





Configure Suricata on the OwlH Node

nano /etc/suricata/suricata.yaml
- eve-log:
      enabled: yes
      filetype: regular #regular|syslog|unix_dgram|unix_stream|redis
      filename: eve.json

Ensure eve.json output is enabled

af-packet:
  - interface: sniff-prod
    <removed by author for brevity, nothing to change>

  - interface: sniff-sec
    cluster-id: 98
    ...
    ...
    <removed by author for brevity, nothing to change>

Note the 'sniff-prod' and 'sniff-sec' interfaces

default-rule-path: /var/lib/suricata/rules

Point to a fake rules file, as OwlH will be managing the rules for Suricata

We are now finished editing the suricata.yaml file. Please close the file and save your changes.
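Before handing control over to OwlH, you can have Suricata validate the configuration. A hedged example; the test run parses the config and rules, then exits without capturing packets:

suricata -T -c /etc/suricata/suricata.yaml -v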





Create a Suricata Ruleset in the OwlH Manager

The ruleset in Suricata is the collection of rules that will be pushed to any NIDS node(s) in order to detect network anomalies. Once you download individual rulesets, you will put them into a collection and push them to your NIDS node.

Select Rule Sources

Log into the OwlH Manager at https://owlh-manager-ip-address





Download Rules

Click Open Rules > Manage Rulesets Sources > Add new ruleset source.
Under Select Ruleset Type, choose Defaults.

ℹ️
Some rulesets will require additional information – such as the Suricata version.
⚠️
This process must be done one at a time for each ruleset source. You cannot select multiple rulesets at the same time.

Repeat this process as many times as needed. Choose a ruleset – free or paid – then click Open Rules again and download the next one, until you've downloaded all of your desired rules.





Make a Ruleset

Click Open Rules again. Create a new ruleset and give it a name, description, and check the boxes to choose any source(s) to map to the ruleset.





Set the Ruleset as Default

Click the star icon to make the ruleset the default for any NIDS deployments.





Define Packet Capture Interfaces in Service Configuration

Click Nodes > Node services configuration (on your desired node) > Choose Suricata > Add Suricata

ℹ️
You have to do this for each interface, as there is no multi-select

Finish adding the first interface. Then, repeat this process to add the next interface.

Using sniff-sec as an example here

Click Add > Click the edit button

  • Description: sniff-sec
  • Ruleset: Choose your ruleset
  • Interface: Choose your interface
  • Configuration file: /etc/suricata/suricata.yaml
  • Click Save

Click the Sync Ruleset button and click the Start Service button. It might throw an error, but refresh the page and wait a few moments.

ℹ️
Again, do this for every interface you intend to capture on





Verify the Suricata Process(es) Started

Log onto the OwlH Node container and run this command:

ps aux | grep -i suricata | grep -v grep

You should see something similar to this:

root        2230  0.0 18.2 1688308 766312 ?      Rsl  23:45   0:37 /usr/bin/suricata -D -c /etc/suricata/suricata.yaml -i sniff-prod --pidfile /var/run/2b725740-e8bd-3dd0-18ac-e4e455409932-pidfile.pid -S /etc/suricata/rules/The-Rulez.rules

root        2294  0.0 17.8 1221432 748164 ?      Rsl  23:45   0:37 /usr/bin/suricata -D -c /etc/suricata/suricata.yaml -i sniff-sec --pidfile /var/run/48f13efb-74c8-2578-a7dc-d19eae40002e-pidfile.pid -S /etc/suricata/rules/The-Rulez.rules





Install Zeek on the OwlH Node

Log into the OwlH Node and run these commands:

echo 'deb http://download.opensuse.org/repositories/security:/zeek/Debian_11/ /' > /etc/apt/sources.list.d/zeek.list
curl -fsSL https://download.opensuse.org/repositories/security:zeek/Debian_11/Release.key | gpg --dearmor > /etc/apt/trusted.gpg.d/security_zeek.gpg
apt update && apt install -y zeek

Create the file /opt/zeek/share/zeek/site/owlh.zeek and add this content:

redef record DNS::Info += {
    bro_engine:    string    &default="DNS"    &log;
};
redef record Conn::Info += {
    bro_engine:    string    &default="CONN"    &log;
};
redef record Weird::Info += {
    bro_engine:    string    &default="WEIRD"    &log;
};
redef record SSL::Info += {
    bro_engine:    string    &default="SSL"    &log;
};
redef record SSH::Info += {
    bro_engine:    string    &default="SSH"    &log;
};

These definitions tell Zeek to add the field bro_engine: TYPE to the respective logs as they're analyzed. So, if Zeek is logging DNS as JSON, it appends bro_engine: DNS to the event and logs it. This bro_engine field will be used as a filter string later on in Filebeat.

Add these lines to the local.zeek file to ensure the following happens when Zeek runs:

  1. Output all Zeek logs in JSON format
  2. Load the owlh.zeek file
echo '@load policy/tuning/json-logs.zeek' >> /opt/zeek/share/zeek/site/local.zeek
echo '@load owlh.zeek' >> /opt/zeek/share/zeek/site/local.zeek
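A quick way to confirm Zeek landed where OwlH expects it and that the site policy parses is to invoke the binary directly. A sketch, assuming the /opt/zeek install prefix used above:

/opt/zeek/bin/zeek --version

# Parse-only check of the custom policy; exits after parsing
/opt/zeek/bin/zeek -a /opt/zeek/share/zeek/site/owlh.zeek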





Configure Zeek on the OwlH Manager

Log into the OwlH Manager at https://owlh-manager-ip-address

Click Nodes > See node files. Then, click main.conf. Find every instance of /usr/local/zeek and change it to /opt/zeek.

Now, click Nodes > Select your node > Node services configuration > Zeek. Then, click the button to enable Zeek management.

Click node.cfg. This is where you configure the interfaces and instance type. You are editing this file remotely from the web browser. Delete everything from the configuration file and add these lines instead.

ℹ️
Be sure to replace owlh-node-container-ip with your OwlH node's IP address
[logger]
type=logger
host=localhost

[manager]
type=manager
host=owlh-node-container-ip

[proxy-1]
type=proxy
host=localhost

[worker-1]
type=worker
host=localhost
interface=sniff-prod

[worker-2]
type=worker
host=localhost
interface=sniff-sec

Click Save and click Deploy. You should get a visual confirmation that Zeek has started. You can also verify by running this command on the OwlH Node:

ps -eo pid,command --sort=pid | grep zeek | grep -v grep





Add Cron Jobs to Trim Stale Zeek Logs

Run these commands on the OwlH Node container. I am trimming logs older than 30 days. You can adjust your timeframe as required for your environment.

crontab -e

# Run every day at 0400
# Find directories older than 30 days and recursively delete
0 4 * * * find /opt/zeek/logs -type d -mtime +30 -exec rm -rf {} \; > /dev/null 2>&1





Install the Wazuh Agent to Send NIDS Alerts to the Wazuh Server

Install Wazuh Agent on the OwlH Node Container

Run these commands on the OwlH Node container:

curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | apt-key add -
echo "deb https://packages.wazuh.com/4.x/apt/ stable main" | tee -a /etc/apt/sources.list.d/wazuh.list
apt update
apt install wazuh-agent
systemctl enable wazuh-agent





Prevent Unplanned Upgrades of the Wazuh Agent

⚠️
Please note that if you install a newer version of the wazuh-agent package later using apt install wazuh-agent, you will have to re-hold the package using apt-mark hold wazuh-agent
apt-mark hold wazuh-agent





Configure the Wazuh agent from the OwlH Manager

Log into the OwlH Manager at https://owlh-manager-ip-address

Click Nodes > See Node Files

Edit main.conf and replace every instance of /var/ossec/bin/ossec-control with /var/ossec/bin/wazuh-control.

Click Nodes > Node services configuration > Wazuh

Click Edit ossec.conf file. Change the Manager_IP to the Wazuh Manager container IP address.

ℹ️
We are going to disable the Wazuh agent queue buffer. This is because the NIDS is going to create A LOT of traffic and if the Wazuh Agent tries to buffer it, it's going to halt NIDS traffic to the SIEM.
<client_buffer>
    <!-- Agent buffer options -->
    <disabled>yes</disabled>

Click Save. Click Add file. Add /var/log/owlh/alerts.json. Click on the Run Wazuh icon to start the Wazuh agent on the OwlH node.

You can confirm the Wazuh agent is running by logging into the OwlH Node container and running this command:

systemctl status wazuh-agent
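From the Wazuh Manager side, you can also list connected agents to confirm the OwlH node has registered and is active. A hedged example using the manager's bundled control utility:

# Run on the Wazuh Manager container
/var/ossec/bin/agent_control -lc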





Add OwlH Dashboards, Visualizations, and Templates to Wazuh Dashboards

We've added a Wazuh agent to our NIDS node, and now we need to tell Wazuh how to ship the OwlH logs to the Wazuh Indexer. Then, we tell the Wazuh Indexer how to store the events in the database. Finally, we add some dashboards to Wazuh Dashboards to visualize our NIDS events.

SSH into the Wazuh Manager Server

Run these commands:

cd /tmp
mkdir /tmp/owlhfilebeat
wget http://repo.owlh.net/elastic/owlh-filebeat-7.4-files.tar.gz
tar -C /tmp/owlhfilebeat -xf owlh-filebeat-7.4-files.tar.gz





Upload OwlH Visualizations and Dashboards to Wazuh Dashboards

⚠️
We are still running this on the Wazuh Manager container
Be sure to replace <wazuh-dashboards-container-ip> with your Wazuh Dashboards container's IP address
curl -k -u admin -X POST "https://<wazuh-dashboards-container-ip>:443/api/saved_objects/_import" -H "osd-xsrf: true" --form file=@/tmp/owlhfilebeat/owlh-kibana-objects-20191030.ndjson

When prompted, enter the password for the Wazuh Dashboards admin user that was created when you ran the install script.





Upload the OwlH Document Templates to Wazuh Indexer

⚠️
We are still running this on the Wazuh Manager container
Be sure to replace <wazuh-indexer-container-ip> with your Wazuh Indexer container's IP address
curl -k -u admin -X PUT -H 'Content-Type: application/json' 'https://<wazuh-indexer-container-ip>:9200/_template/owlh' -d@/tmp/owlhfilebeat/owlh-template.json

When prompted, enter the password for the admin user that was created when you ran the install script.





Install the OwlH Filebeat Module on the Wazuh Manager Server

⚠️
We are still running this on the Wazuh Manager container

Filebeat is used to ship the OwlH data from the Wazuh Manager to the Wazuh Indexer. So, when the Wazuh agent running on the OwlH node ships NIDS logs to the Wazuh Manager server, the OwlH Filebeat module reads those events and ships them to the Wazuh Indexer.

cd /tmp/owlhfilebeat
tar -C /usr/share/filebeat/module/ -xvf owlh-filebeat-7.4-module.tar.gz





Edit the Wazuh Filebeat Alerts Configuration

nano /usr/share/filebeat/module/wazuh/alerts/config/alerts.yml

There will be other configurations in this file. Ignore them and make sure you add matching lines as shown here.
    fields:
      index_prefix: {{ .index_prefix }}
    type: log
    paths:
    {{ range $i, $path := .paths }}
     - {{$path}}
    {{ end }}
    exclude_lines: ["bro_engine"]

We're adding the exclude_lines: ["bro_engine"] directive to the YAML configuration. This tells the wazuh Filebeat module to ignore any logs where the bro_engine string is present. We want to do this because it is the job of the owlh Filebeat module to ingest the bro_engine logs.





Modify the Filebeat Configuration

⚠️
We are still running this on the Wazuh Manager container

We need to tell Filebeat to ship our OwlH data now, so we add the owlh Filebeat module to the configuration and enable it.

# Make a backup of the current configuration
cp /etc/filebeat/filebeat.yml /etc/filebeat/filebeat.yml.bak

nano /etc/filebeat/filebeat.yml

Ensure the following lines are in the configuration file. Be sure to replace wazuh-indexer-ip-here with your Wazuh Indexer container's IP address. Make sure your configuration file matches what's shown here.

# Wazuh - Filebeat configuration file
output.elasticsearch:
  protocol: https
  username: ${username}
  password: ${password}
  ssl.certificate_authorities:
    - /etc/filebeat/certs/root-ca.pem
  ssl.certificate: "/etc/filebeat/certs/wazuh-manager.pem"
  ssl.key: "/etc/filebeat/certs/wazuh-manager-key.pem"
setup.template.json.enabled: true
setup.template.json.path: '/etc/filebeat/wazuh-template.json'
setup.template.json.name: 'wazuh'
setup.ilm.overwrite: true
setup.ilm.enabled: false

filebeat.modules:
  - module: wazuh
    alerts:
      enabled: true
    archives:
      enabled: false
  - module: owlh
    events:
      enabled: true

logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644

logging.metrics.enabled: false

seccomp:
  default_action: allow
  syscalls:
  - action: allow
    names:
    - rseq

output.elasticsearch.hosts:
  - wazuh-indexer-ip-here

# OwlH pipeline sync
filebeat.overwrite_pipelines: true





Restart Filebeat

⚠️
We are still running this on the Wazuh Manager container
# Restart filebeat
systemctl restart filebeat

# Ensure it is running
systemctl status filebeat

# Ensure good connectivity
filebeat test output





Mirror Traffic to the Sniff Interfaces

Explaining the Open vSwitch SPAN Ports

You are creating the SPAN ports on the virtual switches in Proxmox. Recall that the OwlH node has three network interfaces.

  • net0 gets an IP address from the router. We can log into the server on this interface.
  • net1 does not get an IP address. This is one of the packet capture interfaces.
  • net2 does not get an IP address. This is one of the packet capture interfaces.

If you open a shell on the Proxmox server, you can see the interfaces assigned to your OwlH node container. Here is an example from my Proxmox server, where my OwlH node container has the ID of 208.

Interfaces assigned to CT 208 – my OwlH Node container
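In lieu of a screenshot, the same information can be pulled from a Proxmox shell. A small sketch (the grep pattern is just my container ID; substitute yours):

ip -br link show | grep veth208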

The interfaces are shown in order here:

  • veth208i0 is my mgmt interface
  • veth208i1 is my sniff-prod interface
  • veth208i2 is my sniff-sec interface

I want to mirror all traffic from every port on the production switch to veth208i1 and all traffic from every port on the security switch to veth208i2.

Run these commands on the Proxmox host.
Be sure to replace <CTID> with your OwlH Node container ID.





Production Switch

ovs-vsctl -- --id=@p get port veth<CTID>i1 -- --id=@m create mirror name=owlhProd select-all=true output-port=@p -- set bridge vmbr0 mirrors=@m

  • ovs-vsctl is the Open vSwitch control program
  • id=@p get port veth<CTID>i1
    • Store the switch port of this interface in @p
    • @p is a variable we can reference for later
  • id=@m create mirror name=owlhProd select-all=true output-port=@p
    • Create a SPAN port called owlhProd and store it in variable @m
    • Select all interfaces on the switch
    • Mirror them to output-port @p (the variable from above)
  • set bridge vmbr0 mirrors=@m
    • Add the new mirror configuration to the vmbr0 switch





Security Switch

ovs-vsctl -- --id=@p get port veth<CTID>i2 -- --id=@m create mirror name=owlhSec select-all=true output-port=@p -- set bridge vmbr1 mirrors=@m

  • ovs-vsctl is the Open vSwitch control program
  • id=@p get port veth<CTID>i2
    • Store the switch port of this interface in @p
    • @p is a variable we can reference for later
  • id=@m create mirror name=owlhSec select-all=true output-port=@p
    • Create a SPAN port called owlhSec and store it in variable @m
    • Select all interfaces on the switch
    • Mirror them to output-port @p (the variable from above)
  • set bridge vmbr1 mirrors=@m
    • Add the new mirror configuration to the vmbr1 switch
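After running both commands, you can confirm that Open vSwitch knows about the two mirrors:

# Should list the owlhProd and owlhSec mirrors along with their output ports
ovs-vsctl list Mirror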



Persist Reboots

Port mirroring on Open vSwitch does not persist across reboots.

You can't just create the port mirroring once, set it, and forget it. You'll have to implement a script of some sort – Bash, PowerShell, Python, etc. – so that the following is accomplished:

  • Recreate the port mirroring at reboots
  • Check the port mirroring at regular intervals to make sure it hasn't stopped for any reason

Cron Jobs

⚠️
Substitute <CTID> with your NIDS Linux Container's ID in Proxmox! If you've followed this guide in the configuration of your NIDS, it has one interface on each switch. veth###i1 is the interface on the production switch, veth###i2 is the interface on the security switch.
# Run every minute (no need for redundant @reboot job)
# Check to Ensure that Port Mirroring is configured
# The || condition on the right is triggered if the command on the left fails
# Effectively, attempt to start the NIDS container and create the port mirror if the mirror config doesn't exist

# Production Switch
* * * * * ( ovs-vsctl get Mirror owlhProd name 2>/dev/null 1>/dev/null ) || ( pct start <CTID> 2>/dev/null ; ovs-vsctl -- --id=@p get port veth<CTID>i1 -- --id=@m create mirror name=owlhProd select-all=true output-port=@p -- set bridge vmbr0 mirrors=@m )

# Security Switch
* * * * * ( ovs-vsctl get Mirror owlhSec name 2>/dev/null 1>/dev/null ) || ( pct start <CTID> 2>/dev/null ; ovs-vsctl -- --id=@p get port veth<CTID>i2 -- --id=@m create mirror name=owlhSec select-all=true output-port=@p -- set bridge vmbr1 mirrors=@m )





Optional: Adding a STAP Interface to the NIDS

What is a STAP Interface and When Would It Be Used?

What is it?

The STAP interface is a software TAP interface. It is a socket that binds to a physical interface and acts as a means to receive packets from other hosts on the network.





When is it Used?

It is used to receive packets from other hosts on the network where port mirroring is not an option.





OwlH Client

The STAP interface works in a client-server relationship. The server/daemon is running on the OwlH NIDS node. The client is running on a networked host from which packets will be sent to the STAP interface.

Documentation on installing the OwlH Client can be found here:

https://documentation.owlh.net/en/0.17.0/main/OwlHSTAP.html#what-is-owlh-client

Currently only Linux and Windows hosts are supported





STAP Diagram

I have created a diagram that will hopefully help you visualize the purpose of a STAP interface. In the diagram there are two networks.

On 192.168.1.0/24 there is a switch that is configured with a SPAN port. On 172.16.10.0/24 there are some VMs and we want to forward their packets to the NIDS.





Power off the OwlH Node Container

A few of the next steps require the container to be off to load some drivers.





Load the "Dummy" Driver on the Proxmox Host

ℹ️
Run these commands on a shell on the Proxmox server itself.

These are containers. They do not have their own kernel and must utilize the host’s kernel. That’s why containers are so lightweight.

# Load the driver now
modprobe dummy

# Load at boot
echo 'dummy' >> /etc/modules-load.d/modules.conf





Allow the OwlH Node Container to Search for Kernel Modules on the Host

⚠️
Still running this on the Proxmox server.
Be sure to replace <container-ID> with your OwlH Node container's ID number.
# This includes the host's modules directory as a mountpoint on the CT
# We also do not want to backup this mountpoint
pct set <container-ID> --mp0 /usr/lib/modules/$(uname -r),mp=/lib/modules/$(uname -r),ro=1,backup=0





Power on the OwlH Node Container

The OwlH node should now be ready to load required drivers from the host. Turn it back on.
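Once the container is back up, you can check that the host's module tree is visible and that the dummy driver resolves before wiring up the init script. A hedged sketch, run as root inside the OwlH Node container:

# The mount point added with pct set should be populated
ls /lib/modules/$(uname -r) | head

# Dry run: show what modprobe would load without actually loading it
modprobe -n -v dummy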





Create an init.d Script to Bring up the STAP Interface

Run these commands as root on the OwlH Node container.

touch /opt/owlhinterface.sh
nano /opt/owlhinterface.sh

/opt/owlhinterface.sh

#!/bin/bash
#
### BEGIN INIT INFO
# Provides:          owlhinterface
# Required-Start:    $local_fs $network
# Required-Stop:     $local_fs
# Default-Start:     3
# Default-Stop:      0 1 6
# Short-Description: Create and cleanup OwlH STAP interface
# Description:       Create and cleanup OwlH STAP interface
### END INIT INFO

PATH=/bin:/usr/bin:/sbin:/usr/sbin
DESC="OwlH Interface Script"
NAME=owlhinterface-script
SCRIPTNAME=/etc/init.d/"$NAME"
RED='\e[0;31m'
NO_COLOR='\e[0m'

case "$1" in
start)
    modprobe -v dummy numdummies=0
    ip link add owlh type dummy
    ip link set owlh mtu 65535
    ip link set owlh up
    ;;
stop)
    ip link delete owlh
    ;;
restart)
    ip link delete owlh 2>/dev/null
    modprobe -v dummy numdummies=0
    ip link add owlh type dummy
    ip link set owlh mtu 65535
    ip link set owlh up
    ;;
*)
    echo -e "${RED}This script only supports start, stop, and restart actions.${NO_COLOR}"
    exit 2
    ;;
esac
exit 0

Exit and save the script.

# Link the script as a startup service
ln -s /opt/owlhinterface.sh /etc/init.d/owlhinterface

# Update runlevel directories
update-rc.d owlhinterface defaults

# Reload available daemons
systemctl daemon-reload

# Service will run at boot
# Can also manually run start and stop calls
service owlhinterface start/stop/restart
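After running service owlhinterface start, the dummy capture interface should exist and be up. A quick check:

ip -br link show owlh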





Register the STAP Interface with OwlH Manager

Log into the OwlH Manager server.

Choose Nodes > Node services configuration > Traffic Management - STAP. Add Socket → Network.

  • Give it a name (e.g., owlh interface)
  • Default port is fine
  • Default cert is fine
  • Forward to owlh
  • Click Add

Start the STAP service





Add the Interface to the Suricata Configuration File

Click Nodes > See node files

Edit /etc/suricata/suricata.yaml

⚠️
Note that I am adding the owlh interface in addition to the sniff-prod and sniff-sec interfaces
af-packet:
  - interface: sniff-prod
    cluster-id: 99
    <removed for brevity>

  - interface: sniff-sec
    cluster-id: 98
    <removed for brevity>
    
  - interface: owlh
    #threads: auto
    cluster-id: 97
    cluster-type: cluster_flow
    defrag: yes
    #rollover: yes
    #use-mmap: yes
    #mmap-locked: yes
    tpacket-v3: yes
    ring-size: 2048
    block-size: 409600
    #block-timeout: 10
    #use-emergency-flush: yes
    #checksum-checks: kernel
    #bpf-filter: port 80 or udp
    #copy-mode: ips
    #copy-iface: eth1

Click Save





Add the STAP Interface to the Suricata Service Configuration

Click Nodes > Node services configuration > Suricata > Add Suricata. Name it stap.

Click Save. Click the sync ruleset button and click the start service button.





Add the STAP Interface to the Zeek Node Configuration

Click Nodes > Node services configuration > Zeek. Click node.cfg.

⚠️
Note that I am adding the owlh interface in addition to the sniff-prod and sniff-sec interfaces
...removed by author for brevity...

[worker-1]
type=worker
host=localhost
interface=sniff-prod

[worker-2]
type=worker
host=localhost
interface=sniff-sec

[worker-3]
type=worker
host=localhost
interface=owlh

Click Save.





Installing Wazuh Agents on Endpoints (HIDS)

The Wazuh agent is a host intrusion detection system (HIDS). The purpose of Wazuh agents is to monitor endpoints for security configuration issues and integrity issues with the file system, operating system, and much more.

Prerequisites

  • Compatible host
  • Host needs to be able to communicate with the Wazuh server
    • May need to open routes and/or firewall ports

How to Install

Refer to the official documentation for installing endpoint agents on your servers, desktops, etc.

Wazuh agent - Installation guide · Wazuh documentation
User manual, installation and configuration guides. Learn how to get the most out of the Wazuh platform.
⚠️
After installing, please run apt-mark hold wazuh-agent — or the equivalent for the host operating system — to prevent unplanned upgrades.

Please note that if you install a newer version of the wazuh-agent package later using apt install wazuh-agent, you will have to re-hold the package using apt-mark hold wazuh-agent





Viewing the SIEM Dashboards

  1. Log into the Wazuh Dashboards web server – https://wazuh-dashboards-container-ip
  2. Credentials are provided after installing Wazuh Dashboards
Wazuh dashboard - Components · Wazuh documentation
User manual, installation and configuration guides. Learn how to get the most out of the Wazuh platform.
Searching for alerts using the Wazuh app for Kibana · Wazuh · The Open Source Security Platform
Learn how you can use the search tools provided on the Wazuh app for Kibana, thanks to its integration with the Elastic Stack.

For an older version, but still a good walkthrough





Important: Define an Index Management Policy

DON'T SKIP THIS PART

You REALLY want to do this now as opposed to later.

  • Save your disk space
  • Reduce stressful troubleshooting hours
  • Trim your indices and improve performance

Do it now! Please.

Wazuh Index Management Policy
In this post, I show how to manage your Wazuh Indexer indices in order to improve performance and manage disk space consumed by indices.





Troubleshooting the SIEM

Changing Default Passwords

OwlH Manager Admin Password

  1. Log into the OwlH Manager server
  2. Click the user profile in the top-right
  3. Change the password



Wazuh Infrastructure Admin Password

Change the Wazuh indexer passwords - Securing Wazuh
User manual, installation and configuration guides. Learn how to get the most out of the Wazuh platform.





Alerts Stopped Registering in Wazuh Dashboards

In my past experience, this has almost always been due to hitting the maximum number of shards or running out of disk space.

If you haven't already done so, consider looking into an Index Management Policy.

Wazuh Index Management Policy
In this post, I show how to manage your Wazuh Indexer indices in order to improve performance and manage disk space consumed by indices.

I would recommend inspecting things in the following order (some example checks follow this list):

  1. Make sure the Wazuh Manager service is running
  2. Make sure the Filebeat service is running on the Wazuh Manager server
    • Check Filebeat logs
      • If you see logs on hitting the shard limit
        • Consider adding another Wazuh Indexer node (see below)
        • Clean up old indices with an index management policy
  3. Make sure the Wazuh Indexer service is running
    • Check Wazuh Indexer logs
    • Make sure you have enough disk space available
      • If your disk is 95% full, Wazuh Indexer will prevent additional writes
        • Consider adding more disk space and/or another Wazuh Indexer node
        • Clean up old indices with an index management policy
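Some hedged example checks for the items above; they assume the default admin user, the indexer listening on port 9200, and the stock log locations from this guide:

# On the Wazuh Manager: service and Filebeat health
systemctl status wazuh-manager filebeat --no-pager
filebeat test output
tail -n 50 /var/log/filebeat/filebeat

# On the Wazuh Indexer: cluster health, shard counts, and disk usage
curl -k -u admin 'https://<wazuh-indexer-ip>:9200/_cluster/health?pretty'
df -h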





Wazuh Dashboards Keeps Prompting to Select a Tenant when Logging into the Web Portal

Resolution: Disable multi-tenancy, as it does not apply to Wazuh

  1. SSH into the Wazuh Dashboards container and edit the /etc/wazuh-dashboard/opensearch_dashboards.yml file
  2. Ensure this line is present
opensearch_security.multitenancy.enabled: false
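After saving the change, restart the dashboard service so it takes effect (service name assumed from a default Wazuh Dashboards package install):

systemctl restart wazuh-dashboard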





Extending Session Timeout in Wazuh Dashboards

Modify the Wazuh Dashb... | 0xBEN - Notes & Cheat Sheets
This procedure applies to Wazuh 4.3+ , as previous versions used references to opendistro in the con…





Follow-Up Activities

A Quick Sanity Check

As you've witnessed, there are a lot of parts to a SIEM setup, especially if you want to have full network AND host coverage. By now, you should have a baseline configuration that consists of:

  1. Wazuh Indexer to store logs being sent by Filebeat on Wazuh Manager
  2. Wazuh Dashboard to allow search and visualization of these logs, as well as integration with Wazuh Manager using the Wazuh plugin and API client
  3. Wazuh Manager to receive, process, and transmit inbound network and host logs to Wazuh Indexer
  4. OwlH Manager to centrally manage your OwlH NIDS node(s) services and configurations
  5. OwlH Node to receive packets via SPAN port from both switches in the lab environment and process them with Suricata and Zeek





Exploring the OwlH Integration

Want a deeper dive on the OwlH integration and how all the pieces fit together? In this post, I provide a deeper analysis of how the various parts that make up the OwlH Manager and OwlH Node fit together. I also provide a diagram to hopefully help visualize things better.

Wazuh: Exploring the OwlH Integration
In this post, I explore the OwlH integration with Wazuh and the convenience of the centralized NIDS configuration management it offers.





Extending Wazuh's Capabilities

The folks over at OpenSecure have done a really fantastic job at creating content that showcases Wazuh's capabilities and ways to extend it with various integrations. I wholeheartedly recommend taking a look.

OpenSecure
Focusing on Open Source cybersecurity products that provide a robust and scalable solution that can be customized to integrate with any network.

Also, have a look at some of the additional Wazuh content I've written. If I included everything here, the guide would quickly grow out of scope.

Wazuh - 0xBEN
A blog about experiences in cybersecurity, information security, technology, and roasting coffee at home.





Next Step: Create a Kali Linux VM

Create a Kali Linux VM in Proxmox
In this module, we will look at the process of creating a Kali Linux VM using the command line in Proxmox