Reviewing Some Networking Concepts
Router and Switch with Default Configurations
In this scenario, there is a router with a very simple, default configuration. The router has the default private IP address range of 172.16.1.0/24. The DHCP server is enabled and no VLANs are configured.
Router with VLANs Configured, Default Switch Configurations
In this scenario, we have configured some VLANs in the router. That way, the router and switch can work harmoniously: the router can route packets if it receives an Ethernet frame that was tagged with a VLAN ID. However, none of the switch ports have been configured with VLAN ID tags.
Router with VLANs and Configured Switch Ports
In this scenario, the switch tags the Ethernet frames with a VLAN tag for any configured switch ports. That way, if an Ethernet frame contains a VLAN ID tag, the switch checks the configured ports for the correct VLAN ID and MAC address.
Router with VLANs, Tagged Switched Ports, and Port Mirroring
Port mirroring is where we configure a switch to send a copy of every Ethernet frame to another port on the switch. This is a common configuration with Intrusion Detection Systems when you want to monitor all traffic on a network.
Understanding Proxmox Networking
- VMBR0 is the switch where all of your VMs and containers connected to your home network will be.
  - If you have a home router that supports 802.1q, then you could apply some VLAN segmentation to VMBR0 to divide your VMs and containers into further subnets if desired.
- VMBR1 is the switch where all of the security and vulnerable infrastructure will be attached for further research. In our lab environment, we have already added some VLANs to VMBR1, because pfSense supports 802.1q.
- The NIDS is connected to both VMBR0 and VMBR1. Both switches are configured such that the ports where the IDS is plugged in are mirror ports, and every other port will send a copy of every Ethernet frame to the IDS.
Order of Operations
- Configure the Wazuh Indexer container
  - This is the database where event data will be stored
  - Any alerts that are picked up by Wazuh will be shipped here
- Configure the Wazuh Manager container
  - This is the SIEM that will collect the logs from any agents
    - Agents are running on endpoints on our network
    - The agents are HIDS which will forward event data to the SIEM
  - You can also forward syslog to Wazuh for processing if you cannot install the agent on a host
- Configure the Wazuh Dashboards container
  - Wazuh Dashboards serves three purposes:
    - A web server that displays dashboards about alerts data
    - A Wazuh API client that can control certain features in Wazuh
    - A Wazuh Indexer API client that queries the database
- Configure the OwlH Manager container
  - OwlH Manager serves three purposes:
    - Keeping Suricata rules up to date
    - Pushing configurations to any registered NIDS node(s)
    - Keeping services running on the NIDS node(s)
- Configure the OwlH NIDS node container
  - This is the network intrusion detection system
  - It runs the following applications to generate network event data:
    - Suricata compares packets against a set of rules for anomalies
    - Zeek adds lots of metadata about packets
    - The Wazuh agent sends alert data to the SIEM
- Install Wazuh Agents on Servers and Desktops
  - These are the endpoints to be monitored
  - They can be configured to ingest any log and send it to the Wazuh Manager
  - The Wazuh Manager will receive the logs and attempt to decode and parse them for alertable events
Desired End State
- OwlH Manager
  - Downloaded and configured ruleset to be pushed to node(s)
  - Interfaces defined for monitoring and auto-start
  - Joined the OwlH NIDS node to the OwlH Manager
  - Pushed configurations to the OwlH NIDS node
- OwlH NIDS
  - Installed Suricata and Zeek
  - Configured network interfaces
  - Capturing mirrored packets and analyzing them
  - Wazuh agent is installed and shipping alerts to the Wazuh Manager
- Wazuh Manager
  - Wazuh software installed and running
  - Accepting inputs from agents
  - Analyzing and sending to the Wazuh Indexer
- Wazuh Indexer
  - OwlH NIDS templates installed
  - Receiving inputs from the Wazuh Manager
- Wazuh Dashboards
  - OwlH dashboards installed
  - Successfully connecting to the Wazuh Indexer and Wazuh Manager APIs
- Wazuh Agents
  - Wazuh agent installed on any server or workstation to be monitored
  - As long as the endpoint can establish a TCP/IP connection with the Wazuh Manager, it can ship its logs
Stage Your Containers
Log into Proxmox and create five Linux Containers for your infrastructure.
- Wazuh Indexer
  - Hostname: wazuh-indexer
  - Debian 11 LXC
  - Memory: 4 GiB (1 GiB swap) – 8 GiB recommended
  - 2 CPU cores – 4 CPU cores recommended
  - 25 GB disk (good enough for a lab)
  - Set your network (and VLAN) as desired
  - Set your DNS domain and servers as desired
- Wazuh Dashboards
  - Hostname: wazuh-dashboards
  - Debian 11 LXC
  - Memory: 512 MiB (512 MiB swap)
  - 2 CPU cores
  - 10 GB storage
  - Set your network (and VLAN) as desired
  - Set your DNS domain and servers as desired
- Wazuh Manager
  - Hostname: wazuh-manager
  - Debian 11 LXC
  - Memory: 1 GiB (512 MiB swap)
  - 2 CPU cores
  - 10 GB storage
  - Set your network (and VLAN) as desired
  - Set your DNS domain and servers as desired
- OwlH Manager
  - Hostname: owlh-manager
  - Debian 11 LXC
  - Memory: 512 MiB (512 MiB swap)
  - 1 CPU core
  - 10 GB storage
  - Set your network (and VLAN) as desired
  - Set your DNS domain and servers as desired
- OwlH Node
  - Hostname: owlh-node
  - Debian 11 LXC
  - Memory: 4 GiB (1 GiB swap)
  - 4 CPU cores
  - 50 GB storage (good enough for a lab)
  - Set your network (and VLAN) as desired
  - Set your DNS domain and servers as desired
DHCP Reservations
After you create the containers, do the following:
- Make a note of:
  - Each container's hostname
  - Each container's MAC address
- Log into your home router (or DHCP server)
- Assign a static DHCP reservation to each host's MAC address
  - Use the hostname of the container
  - Assign the reservations to the correct VLAN (where applicable)
Wazuh Components
Since Wazuh 4.3, the default database that stores the alerts from Wazuh Manager is the Wazuh Indexer.
- The Wazuh Indexer is a fork of OpenSearch.
- Wazuh Dashboards is a fork of OpenSearch Dashboards.
- OpenSearch is based on a fork of Elasticsearch from several years ago and has morphed into its own product, but it looks and acts very similarly to Elasticsearch.
In this section, we are going to set up the core Wazuh infrastructure with the aid of some installation scripts provided by the Wazuh team.
Wazuh Indexer Container
Log into the Wazuh Indexer container and complete these steps.
Update and Download Dependencies and Installation Files
apt clean && apt update && apt upgrade -y
apt install -y curl dnsutils net-tools sudo gnupg
cd /tmp
curl -sO https://packages.wazuh.com/4.3/wazuh-install.sh
curl -sO https://packages.wazuh.com/4.3/config.yml
Modify Installation Variables in config.yml
This file sets all of the installation variables. Pay careful attention and replace the following placeholders with the correct values for your Linux Containers:
<wazuh-indexer-hostname>
<wazuh-indexer-ip>
<wazuh-manager-hostname>
<wazuh-manager-ip>
<wazuh-dashboards-hostname>
<wazuh-dashboards-ip>
For example, I've named my Wazuh Indexer wazuh-indexer-1 and its IP address is 10.148.148.6. Set your configuration accordingly.
You are telling the Wazuh Indexer how to communicate with the other services running on the other containers.
nodes:
  # Wazuh indexer nodes
  indexer:
    - name: <wazuh-indexer-hostname>
      ip: <wazuh-indexer-ip>
  # Wazuh server nodes
  # Use node_type only with more than one Wazuh manager
  server:
    - name: <wazuh-manager-hostname>
      ip: <wazuh-manager-ip>
  # Wazuh dashboard node
  dashboard:
    - name: <wazuh-dashboards-hostname>
      ip: <wazuh-dashboards-ip>
Create Configuration Files and Certificates
cd /tmp
bash wazuh-install.sh --generate-config-files --ignore-check
Run the Installation Script
Replace <wazuh-indexer-hostname> with the hostname of your Linux container.
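The exact command isn't reproduced above; here is a hedged sketch based on the flags documented for the Wazuh 4.3 installation assistant (verify with bash wazuh-install.sh --help if your version differs):

bash wazuh-install.sh --wazuh-indexer <wazuh-indexer-hostname> --ignore-check
# Once all indexer nodes are installed, initialize the cluster
bash wazuh-install.sh --start-cluster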
Copy the wazuh-install-files.tar File
The configuration step above produced an archive called wazuh-install-files.tar. Copy this file to ALL servers that you created beforehand. You can use the scp utility or a Python web server. There are many options; the choice is yours. The files should be placed in the /tmp directory on the target hosts.
scp /tmp/wazuh-install-files.tar root@wazuh-dashboards-container-ip:/tmp/wazuh-install-files.tar
scp /tmp/wazuh-install-files.tar root@wazuh-manager-container-ip:/tmp/wazuh-install-files.tar
Prevent Unplanned Upgrades
You should plan to upgrade your Wazuh infrastructure in such a way that maintains the availability and integrity of your SIEM. Unplanned upgrades can cause incompatibilities and lead to time-consuming restorations and/or reinstallations.
Please note that if you install a newer version of the wazuh-indexer package later using apt install wazuh-indexer, you will have to re-hold the package using apt-mark hold wazuh-indexer:
apt-mark hold wazuh-indexer
Wazuh Manager Container
Log into the Wazuh Manager container and complete these steps.
Update and Download Dependencies and Installation Files
apt clean && apt update && apt upgrade -y
apt install -y curl dnsutils net-tools sudo gnupg
cd /tmp
ls wazuh-install-files.tar
curl -sO https://packages.wazuh.com/4.3/wazuh-install.sh
Run the Installation Script
Replace <wazuh-manager-hostname> with the hostname of your Linux container.
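As above, the exact command isn't reproduced here; a hedged sketch based on the documented 4.3 flags:

bash wazuh-install.sh --wazuh-server <wazuh-manager-hostname> --ignore-check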
Rotate Wazuh Manager Logs to Save Disk Space
nano /var/ossec/etc/ossec.conf
Add the line <rotate_interval>1d</rotate_interval> to the <global> section as shown below:
<ossec_config>
  <global>
    <rotate_interval>1d</rotate_interval>
    ...
  </global>
  ...
</ossec_config>
Press CTRL + X, then y, then Enter to save your changes. Restart the Wazuh manager: systemctl restart wazuh-manager.
Delete Stale Logs to Save Disk Space
Since this is a lab environment, I'm not too worried about log retention or shipping logs off to cold storage. I'm just going to create a cron job to delete logs older than 30 days.
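The cron entries themselves aren't shown above; here is a minimal sketch, assuming the default Wazuh log locations under /var/ossec/logs (adjust the paths and retention for your environment). Open the root crontab first:

crontab -e

# Hypothetical entries: remove rotated Wazuh manager logs older than 30 days
0 1 * * * find /var/ossec/logs/alerts -type f -name '*.gz' -mtime +30 -delete
0 1 * * * find /var/ossec/logs/archives -type f -name '*.gz' -mtime +30 -delete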
If prompted to choose an editor, choose nano or vim, whichever suits your comfort level; nano is the more beginner-friendly of the two.
When finished — assuming you're using nano — press CTRL + x and then y to save and exit the crontab.
Prevent Unplanned Upgrades
You should plan the upgrades of your Wazuh manager and Wazuh agents. Having agents that are higher versions than your Wazuh manager can lead to compatibility issues.
Please note that if you install a newer version of the wazuh-manager package later using apt install wazuh-manager, you will have to re-hold the package using apt-mark hold wazuh-manager:
apt-mark hold wazuh-manager
Wazuh Dashboards Container
Log into the Wazuh Dashboards container and complete these steps.
Update and Download Dependencies and Installation Files
apt clean && apt update && apt upgrade -y
apt install -y curl dnsutils net-tools sudo gnupg
cd /tmp
ls wazuh-install-files.tar
curl -sO https://packages.wazuh.com/4.3/wazuh-install.sh
Run the Installation Script
Replace <wazuh-dashboards-hostname> with the hostname of your Linux container.
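As before, a hedged sketch of the install command based on the documented 4.3 flags:

bash wazuh-install.sh --wazuh-dashboard <wazuh-dashboards-hostname> --ignore-check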
Once the installation finishes, you will see the following:
- URL of the Dashboards web interface
- Dashboards username
- Dashboards password
Prevent Unplanned Upgrades
Again, plan your Wazuh infrastructure upgrades. Putting the packages on hold prevents unplanned upgrades, which can lead to loss of data and lengthy restoration of service.
Please note that if you install a newer version of the wazuh-dashboard package later using apt install wazuh-dashboard, you will have to re-hold the package using apt-mark hold wazuh-dashboard:
apt-mark hold wazuh-dashboard
OwlH Components
OwlH Manager Container
Log into the OwlH Manager container and complete these steps.
Update and Download Dependencies and Installation Files
apt clean && apt update && apt upgrade -y
apt install -y curl dnsutils net-tools sudo gnupg libpcap0.8
cd /tmp
wget http://repo.owlh.net/current-debian/owlhinstaller.tar.gz
mkdir /tmp/owlhinstaller/
tar -C /tmp/owlhinstaller -xvf owlhinstaller.tar.gz
cd /tmp/owlhinstaller/
Edit OwlH Manager Installation Variables in config.json
Notice here the action is set to install and the target is set to owlhmaster and owlhui, indicating we are installing these services. You will see other code in the config.json file. Just leave it alone and ensure you set the correct options as described here.

nano ./config.json
...
...
...
"action": "install",
"target": [
"owlhmaster",
"owlhui"
],
...
...
...
Run the Installer
./owlhinstaller
Install the Web Server Component
wget http://repo.owlh.net/current-debian/services/owlhui-httpd.sh
bash owlhui-httpd.sh
Update the IP Address of the Web Server
nano /var/www/owlh/conf/ui.conf

Update the IP address in this file to match your OwlH Manager container's IP address, then restart the services:

systemctl restart owlhmaster
systemctl restart apache2
OwlH Node Container
Log into the OwlH Node container and complete these steps.
Define Network Interfaces on the Container in Proxmox
We are going to add three network interfaces to the NIDS node. One interface will be plugged into vmbr0 and this interface will be used for management – such as SSH. Another interface will be plugged into vmbr0 and will be used to receive packets on a SPAN port. The third interface will be plugged into vmbr1 and will be used to receive packets on a SPAN port.

- mgmt (the management interface where you will log into the server)
- sniff-prod (no DHCP reservation needed)
- sniff-sec (no DHCP reservation needed)
Bring the Sniff Interfaces Up
nano /etc/network/interfaces
Add these interface configurations to the bottom of the file.
auto sniff-prod
iface sniff-prod inet manual
auto sniff-sec
iface sniff-sec inet manual
Restart the networking daemon. This will kill your SSH session.
systemctl restart networking
Install OwlH Node and Configure Daemons
apt clean && apt update && apt upgrade -y
apt install -y curl dnsutils net-tools sudo gnupg libpcap0.8
cd /tmp
wget http://repo.owlh.net/current-debian/owlhinstaller.tar.gz
mkdir /tmp/owlhinstaller/
tar -C /tmp/owlhinstaller -xvf owlhinstaller.tar.gz
cd /tmp/owlhinstaller/
Edit the config.json File

Notice that this time the target is set to "owlhnode". You will see other code in the config.json file. Just leave it alone and ensure you set the correct options as described here.
nano ./config.json
...
...
...
"action": "install",
"repourl":"http://repo.owlh.net/current-debian/",
"target": [
"owlhnode"
],
...
...
...
Run the Installer and Configure the Daemon
./owlhinstaller
cp /usr/local/owlh/src/owlhnode/conf/service/owlhnode.service /etc/systemd/system
systemctl daemon-reload
systemctl enable owlhnode
systemctl start owlhnode
Register the Node with the OwlH Manager
Log into the OwlH Manager container at https://owlh-manager-container-ip. The default credentials are:
- Username: admin
- Password: admin
Click Nodes > Add NIDS
- Node Name: Display Name
- Node IP: OwlH Node management IP address
- Node user: admin
- Node Port: 50002
- Node password: admin
Install Suricata on the OwlH Node
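The installation commands aren't reproduced above. As a minimal sketch, the Suricata package from the Debian 11 repositories should be sufficient for a lab (OwlH also documents its own installation methods, so treat this as one option):

apt update
apt install -y suricata
# Confirm the installed version
suricata -V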
Configure Suricata on the OwlH Node
nano /etc/suricata/suricata.yaml
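The specific edits aren't reproduced here, but judging from the af-packet snippet shown later in this guide, the capture interfaces are defined along these lines (a sketch; the cluster IDs are arbitrary but must be unique per interface):

af-packet:
  - interface: sniff-prod
    cluster-id: 99
    cluster-type: cluster_flow
    defrag: yes
  - interface: sniff-sec
    cluster-id: 98
    cluster-type: cluster_flow
    defrag: yes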
We are now finished editing the suricata.yaml file. Please close the file and save your changes.
Add a Cron Job to Trim Suricata Logs
crontab -e
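The cron entry itself isn't shown; a minimal sketch, assuming Suricata logs to /var/log/suricata and a 30-day retention:

# Hypothetical entry: delete Suricata logs older than 30 days
30 1 * * * find /var/log/suricata -type f -mtime +30 -delete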
When finished — assuming you're using nano — press CTRL + x and then y to save and exit the crontab.
Create a Suricata Ruleset in the OwlH Manager
The ruleset in Suricata is the collection of rules that will be pushed to any NIDS node(s) in order to detect network anomalies. Once you download individual rulesets, you will put them into a collection and push them to your NIDS node.
Select Rule Sources
Log into the OwlH Manager at https://owlh-manager-ip-address
Download Rules
Click Open Rules > Manage Rulesets Sources > Add new ruleset source.
Under Select Ruleset Type, choose Defaults.
Repeat the process as many times as needed. Choose a ruleset – free or paid. Click Open Rules again and repeat this process of downloading Suricata rules until you've downloaded all of your desired rules.
Make a Ruleset
Click Open Rules again. Create a new ruleset and give it a name, description, and check the boxes to choose any source(s) to map to the ruleset.
Set the Ruleset as Default
Click the star icon to make the ruleset the default for any NIDS deployments.
Define Packet Capture Interfaces in Service Configuration
Click Nodes > Node services configuration (on your desired node) > Choose Suricata > Add Suricata
Finish adding the first interface. Then, repeat this process to add the next interface.
Click Add > Click the edit button
- Description: sniff-sec
- Ruleset: Choose your ruleset
- Interface: Choose your interface
- Configuration file: /etc/suricata/suricata.yaml
- Click Save
Click the Sync Ruleset button and click the Start Service button. It might throw an error, but refresh the page and wait a few moments.
Verify the Suricata Process(es) Started
Log onto the OwlH Node container and run this command:
ps aux | grep -i suricata | grep -v grep
You should see something similar to this:
root 2230 0.0 18.2 1688308 766312 ? Rsl 23:45 0:37 /usr/bin/suricata -D -c /etc/suricata/suricata.yaml -i sniff-prod --pidfile /var/run/2b725740-e8bd-3dd0-18ac-e4e455409932-pidfile.pid -S /etc/suricata/rules/The-Rulez.rules
root 2294 0.0 17.8 1221432 748164 ? Rsl 23:45 0:37 /usr/bin/suricata -D -c /etc/suricata/suricata.yaml -i sniff-sec --pidfile /var/run/48f13efb-74c8-2578-a7dc-d19eae40002e-pidfile.pid -S /etc/suricata/rules/The-Rulez.rules
Install Zeek on the OwlH Node
Log into the OwlH Node and run these commands:
echo 'deb http://download.opensuse.org/repositories/security:/zeek/Debian_11/ /' > /etc/apt/sources.list.d/zeek.list
curl -fsSL https://download.opensuse.org/repositories/security:zeek/Debian_11/Release.key | gpg --dearmor > /etc/apt/trusted.gpg.d/security_zeek.gpg
apt update && apt install -y zeek
Create the file /opt/zeek/share/zeek/site/owlh.zeek and add this content:
redef record DNS::Info += {
bro_engine: string &default="DNS" &log;
};
redef record Conn::Info += {
bro_engine: string &default="CONN" &log;
};
redef record Weird::Info += {
bro_engine: string &default="WEIRD" &log;
};
redef record SSL::Info += {
bro_engine: string &default="SSL" &log;
};
redef record SSH::Info += {
bro_engine: string &default="SSH" &log;
};
These definitions tell Zeek to add the string bro_engine: TYPE to their respective logs as they're analyzed. So, if Zeek is logging DNS as JSON, it appends the string bro_engine: DNS to the event and logs it. This bro_engine field will be used as a filter string later on in Filebeat.
Add these lines to the local.zeek file to ensure the following happens when Zeek runs:

- Output all Zeek logs in JSON format
- Load the owlh.zeek file
echo '@load policy/tuning/json-logs.zeek' >> /opt/zeek/share/zeek/site/local.zeek
echo '@load owlh.zeek' >> /opt/zeek/share/zeek/site/local.zeek
Configure Zeek on the OwlH Manager
Log into the OwlH Manager at https://owlh-manager-ip-address
Click Nodes > See node files. Then, click main.conf. Find every instance of /usr/local/zeek and change it to /opt/zeek.
Now, click Nodes > Select your node > Node services configuration > Zeek. Then, click the button to enable Zeek management.
Click node.cfg. This is where you configure the interfaces and instance type. You are editing this file remotely from the web browser. Delete everything from the configuration file and add these lines instead. Be sure to replace owlh-node-container-ip with your OwlH node's IP address.

[logger]
type=logger
host=localhost
[manager]
type=manager
host=owlh-node-container-ip
[proxy-1]
type=proxy
host=localhost
[worker-1]
type=worker
host=localhost
interface=sniff-prod
[worker-2]
type=worker
host=localhost
interface=sniff-sec
Click Save and click Deploy. You should get a visual confirmation that Zeek has started. You can also verify by running this command on the OwlH Node:
ps -eo pid,command --sort=pid | grep zeek | grep -v grep
Add Cron Jobs to Trim Stale Zeek Logs
Run these commands on the OwlH Node container. I am trimming logs older than 30 days. You can adjust your timeframe as required for your environment.
crontab -e
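The cron entries aren't shown; a minimal sketch, assuming Zeek rotates logs into dated directories under /opt/zeek/logs:

# Hypothetical entry: remove rotated Zeek log directories older than 30 days
0 2 * * * find /opt/zeek/logs -mindepth 1 -maxdepth 1 -type d -name '20??-??-??' -mtime +30 -exec rm -rf {} +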
When finished — assuming you're using nano — press CTRL + x and then y to save and exit the crontab.
Install the Wazuh Agent to Send NIDS Alerts to the Wazuh Server
Install Wazuh Agent on the OwlH Node Container
Run these commands on the OwlH Node container:
curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | apt-key add -
echo "deb https://packages.wazuh.com/4.x/apt/ stable main" | tee -a /etc/apt/sources.list.d/wazuh.list
apt update
apt install wazuh-agent
systemctl enable wazuh-agent
Prevent Unplanned Upgrades of the Wazuh Agent
Please note that if you install a newer version of the wazuh-agent package later using apt install wazuh-agent, you will have to re-hold the package using apt-mark hold wazuh-agent:
apt-mark hold wazuh-agent
Configure the Wazuh agent from the OwlH Manager
Log into the OwlH Manager at https://owlh-manager-ip-address
Click Nodes > See Node Files
Edit main.conf and replace every instance of /var/ossec/bin/ossec-control with /var/ossec/bin/wazuh-control.
Click Nodes > Node services configuration > Wazuh
Click Edit ossec.conf file. Change the Manager_IP to the Wazuh Manager container IP address.
Also make sure the agent buffer is disabled in the <client_buffer> section, as shown:

<client_buffer>
  <!-- Agent buffer options -->
  <disabled>yes</disabled>
</client_buffer>
Click Save. Click Add file. Add /var/log/owlh/alerts.json. Then click the Run Wazuh icon to start the Wazuh agent on the OwlH node.
You can confirm the Wazuh agent is running by logging into the OwlH Node container and running this command:
systemctl status wazuh-agent
Add OwlH Dashboards, Visualizations, and Templates to Wazuh Dashboards
We've added a Wazuh agent to our NIDS node and now we need to tell Wazuh how to ship the OwlH logs to the Wazuh Indexer. Then, we tell the Wazuh Indexer how to store the events in the database. Finally, we add some dashboards to Wazuh Dashboards to visualize our NIDS events.
SSH into the Wazuh Manager Server
Run these commands:
cd /tmp
mkdir /tmp/owlhfilebeat
wget http://repo.owlh.net/elastic/owlh-filebeat-7.4-files.tar.gz
tar -C /tmp/owlhfilebeat -xf owlh-filebeat-7.4-files.tar.gz
Upload OwlH Visualizations and Dashboards to Wazuh Dashboards
Be sure to replace <wazuh-dashboards-container-ip> with your Wazuh Dashboards container's IP address.

curl -k -u admin -X POST "https://<wazuh-dashboards-container-ip>:443/api/saved_objects/_import" -H "osd-xsrf: true" --form file=@/tmp/owlhfilebeat/owlh-kibana-objects-20191030.ndjson
When prompted, enter the password for the Wazuh Dashboards admin user that was created when you ran the install script.
Upload the OwlH Document Templates to Wazuh Indexer
Be sure to replace <wazuh-indexer-container-ip> with your Wazuh Indexer container's IP address.

curl -k -u admin -X PUT -H 'Content-Type: application/json' 'https://<wazuh-indexer-container-ip>:9200/_template/owlh' -d@/tmp/owlhfilebeat/owlh-template.json
When prompted, enter the password for the admin user that was created when you ran the Wazuh install script.
Install the OwlH Filebeat Module on the Wazuh Manager Server
Filebeat is used to ship the OwlH data from Wazuh Manager to Wazuh Indexer. So, when the Wazuh Agent running on the OwlH node ships NIDS logs to the Wazuh Manager server, any logs generated by Wazuh will be read by the OwlH Filebeat module and shipped into Wazuh Indexer.
cd /tmp/owlhfilebeat
tar -C /usr/share/filebeat/module/ -xvf owlh-filebeat-7.4-module.tar.gz
Edit the Wazuh Filebeat Alerts Configuration
nano /usr/share/filebeat/module/wazuh/alerts/config/alerts.yml
fields:
  index_prefix: {{ .index_prefix }}
type: log
paths:
{{ range $i, $path := .paths }}
  - {{$path}}
{{ end }}
exclude_lines: ["bro_engine"]
We're adding the exclude_lines: ["bro_engine"] directive to the YAML configuration. This tells the wazuh Filebeat module to ignore any logs where the bro_engine string is present. We want to do this because it is the job of the owlh Filebeat module to ingest the bro_engine logs.
Modify the Filebeat Configuration
We need to tell Filebeat to ship our OwlH data now, so we add the owlh Filebeat module to the configuration and enable it.
# Make a backup of the current configuration
cp /etc/filebeat/filebeat.yml /etc/filebeat/filebeat.yml.bak
nano /etc/filebeat/filebeat.yml
Ensure the following lines are in the configuration file. Be sure to replace wazuh-indexer-ip-here with your Wazuh Indexer container's IP address. Make sure your configuration file matches what's shown here.
# Wazuh - Filebeat configuration file
output.elasticsearch:
  protocol: https
  username: ${username}
  password: ${password}
  ssl.certificate_authorities:
    - /etc/filebeat/certs/root-ca.pem
  ssl.certificate: "/etc/filebeat/certs/wazuh-manager.pem"
  ssl.key: "/etc/filebeat/certs/wazuh-manager-key.pem"
setup.template.json.enabled: true
setup.template.json.path: '/etc/filebeat/wazuh-template.json'
setup.template.json.name: 'wazuh'
setup.ilm.overwrite: true
setup.ilm.enabled: false
filebeat.modules:
  - module: wazuh
    alerts:
      enabled: true
    archives:
      enabled: false
  - module: owlh
    events:
      enabled: true
logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644
logging.metrics.enabled: false
seccomp:
  default_action: allow
  syscalls:
    - action: allow
      names:
        - rseq
output.elasticsearch.hosts:
  - wazuh-indexer-ip-here
# OwlH pipeline sync
filebeat.overwrite_pipelines: true
Restart Filebeat
# Restart filebeat
systemctl restart filebeat
# Ensure it is running
systemctl status filebeat
# Ensure good connectivity
filebeat test output
Mirror Traffic to the Sniff Interfaces
Explaining the Open vSwitch SPAN Ports
You are creating the SPAN ports on the virtual switches in Proxmox. Recall that the OwlH node has three network interfaces.
- net0 gets an IP address from the router. We can log into the server on this interface.
- net1 does not get an IP address. This is one of the packet capture interfaces.
- net2 does not get an IP address. This is one of the packet capture interfaces.
If you open a shell on the Proxmox server, you can see the interfaces assigned to your OwlH node container. Here is an example from my Proxmox server, where my OwlH node container has the ID of 208.
The interfaces are shown in order here:

- veth208i0 is my mgmt interface
- veth208i1 is my sniff-prod interface
- veth208i2 is my sniff-sec interface

I want to mirror all traffic from every port on the production switch to veth208i1 and all traffic from every port on the security switch to veth208i2.
Be sure to replace <CTID> with your OwlH Node container ID.

Production Switch
ovs-vsctl -- --id=@p get port veth<CTID>i1 -- --id=@m create mirror name=owlhProd select-all=true output-port=@p -- set bridge vmbr0 mirrors=@m
- ovs-vsctl is the Open vSwitch control program
- id=@p get port veth<CTID>i1
  - Store the switch port of this interface in @p
  - @p is a variable we can reference later
- id=@m create mirror name=owlhProd select-all=true output-port=@p
  - Create a SPAN port called owlhProd and store it in variable @m
  - Select all interfaces on the switch
  - Mirror them to output-port @p (the variable from above)
- set bridge vmbr0 mirrors=@m
  - Add the new mirror configuration to the vmbr0 switch
Security Switch
ovs-vsctl -- --id=@p get port veth<CTID>i2 -- --id=@m create mirror name=owlhSec select-all=true output-port=@p -- set bridge vmbr1 mirrors=@m
- ovs-vsctl is the Open vSwitch control program
- id=@p get port veth<CTID>i2
  - Store the switch port of this interface in @p
  - @p is a variable we can reference later
- id=@m create mirror name=owlhSec select-all=true output-port=@p
  - Create a SPAN port called owlhSec and store it in variable @m
  - Select all interfaces on the switch
  - Mirror them to output-port @p (the variable from above)
- set bridge vmbr1 mirrors=@m
  - Add the new mirror configuration to the vmbr1 switch
Persist Reboots
You can't just create the port mirroring once, set it, and forget it. You'll have to implement a script of some sort – Bash, PowerShell, Python, etc. – so that the following is accomplished:
- Recreate the port mirroring at reboots
- Check the port mirroring at regular intervals to make sure it hasn't stopped for any reason
Cron Jobs
Be sure to replace <CTID> with your NIDS Linux Container's ID in Proxmox! If you've followed this guide in the configuration of your NIDS, it has one interface on each switch: veth###i1 is the interface on the production switch, veth###i2 is the interface on the security switch.

crontab -e
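The crontab content isn't shown above. Here is a hedged sketch for the Proxmox host's root crontab (ovs-vsctl lives on the host, not in the container); it recreates the mirrors at boot, and you could wrap the same commands in a script that first checks ovs-vsctl list mirror to cover the interval-based check:

# Hypothetical entries: recreate the SPAN ports after the host and containers come up
@reboot sleep 120 && ovs-vsctl -- --id=@p get port veth<CTID>i1 -- --id=@m create mirror name=owlhProd select-all=true output-port=@p -- set bridge vmbr0 mirrors=@m
@reboot sleep 120 && ovs-vsctl -- --id=@p get port veth<CTID>i2 -- --id=@m create mirror name=owlhSec select-all=true output-port=@p -- set bridge vmbr1 mirrors=@m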
When finished — assuming you're using nano — press CTRL + x and then y to save and exit the crontab.
Optional: Adding a STAP Interface to the NIDS
What is a STAP Interface and When Would It Be Used?
What is it?
The STAP interface is a software TAP interface. It is a socket that binds to a physical interface and acts as a means to receive packets from other hosts on the network.
When is it Used?
It is used to receive packets from other hosts on the network where port mirroring is not an option.
OwlH Client
The STAP interface works in a client-server relationship. The server/daemon is running on the OwlH NIDS node. The client is running on a networked host from which packets will be sent to the STAP interface.
Documentation on installing the OwlH Client can be found here:
https://documentation.owlh.net/en/0.17.0/main/OwlHSTAP.html#what-is-owlh-client
Currently, only Linux and Windows hosts are supported.
STAP Diagram
I have created a diagram that will hopefully help you visualize the purpose of a STAP interface. In the diagram there are two networks.
On 192.168.1.0/24, there is a switch that is configured with a SPAN port. On 172.16.10.0/24, there are some VMs and we want to forward their packets to the NIDS.
Power off the OwlH Node Container
A few of the next steps require the container to be off to load some drivers.
Load the "Dummy" Driver on the Proxmox Host
These are containers. They do not have their own kernel and must utilize the host’s kernel. That’s why containers are so lightweight.
# Load the driver now
modprobe dummy
# Load at boot
echo 'dummy' >> /etc/modules-load.d/modules.conf
Allow the OwlH Node Container to Search for Kernel Modules on the Host
Be sure to replace <container-ID> with your OwlH Node container's ID number.

# This includes the host's modules directory as a mountpoint on the CT
# We also do not want to backup this mountpoint
pct set <container-ID> --mp0 /usr/lib/modules/$(uname -r),mp=/lib/modules/$(uname -r),ro=1,backup=0
Power on the OwlH Node Container
The OwlH node should now be ready to load required drivers from the host. Turn it back on.
Create an init.d Script to Bring up the STAP Interface
Run these commands as root on the OwlH Node container.
touch /opt/owlhinterface.sh
# The init script must be executable
chmod +x /opt/owlhinterface.sh
nano /opt/owlhinterface.sh

Add this content to /opt/owlhinterface.sh:
#!/bin/bash
#
### BEGIN INIT INFO
# Provides: owlhinterface
# Required-Start: $local_fs $network
# Required-Stop: $local_fs
# Default-Start: 3
# Default-Stop: 0 1 6
# Short-Description: Create and cleanup OwlH STAP interface
# Description: Create and cleanup OwlH STAP interface
### END INIT INFO
PATH=/bin:/usr/bin:/sbin:/usr/sbin
DESC="OwlH Interface Script"
NAME=owlhinterface-script
SCRIPTNAME=/etc/init.d/"$NAME"
RED='\e[0;31m'
NO_COLOR='\e[0m'
case "$1" in
  start)
    modprobe -v dummy numdummies=0
    ip link add owlh type dummy
    ip link set owlh mtu 65535
    ip link set owlh up
    ;;
  stop)
    ip link delete owlh
    ;;
  restart)
    ip link delete owlh 2>/dev/null
    modprobe -v dummy numdummies=0
    ip link add owlh type dummy
    ip link set owlh mtu 65535
    ip link set owlh up
    ;;
  *)
    echo -e "${RED}This script only supports start, stop, and restart actions.${NO_COLOR}"
    exit 2
    ;;
esac

exit 0
Exit and save the script.
# Link the script as a startup service
ln -s /opt/owlhinterface.sh /etc/init.d/owlhinterface
# Update runlevel directories
update-rc.d owlhinterface defaults
# Reload available daemons
systemctl daemon-reload
# Service will run at boot
# Can also manually run start call
service owlhinterface start
Register the STAP Interface with OwlH Manager
Log into the OwlH Manager server.
Choose Nodes > Node services configuration > Traffic Management - STAP. Add Socket → Network.
- Give it a name (e.g., owlh interface)
- Default port is fine
- Default cert is fine
- Forward to owlh
- Click Add
Start the STAP service
Add the Interface to the Suricata Configuration File
Click Nodes > See node files
Edit /etc/suricata/suricata.yaml
Add the owlh interface in addition to the sniff-prod and sniff-sec interfaces.

af-packet:
  - interface: sniff-prod
    cluster-id: 99
    <removed for brevity>
  - interface: sniff-sec
    cluster-id: 98
    <removed for brevity>
  - interface: owlh
    #threads: auto
    cluster-id: 97
    cluster-type: cluster_flow
    defrag: yes
    #rollover: yes
    #use-mmap: yes
    #mmap-locked: yes
    tpacket-v3: yes
    ring-size: 2048
    block-size: 409600
    #block-timeout: 10
    #use-emergency-flush: yes
    #checksum-checks: kernel
    #bpf-filter: port 80 or udp
    #copy-mode: ips
    #copy-iface: eth1
Click Save
Add the STAP Interface to the Suricata Service Configuration
Click Nodes > Node services configuration > Suricata > Add Suricata. Name it stap
.
Click Save. Click the sync ruleset button and click the start service button.
Click Nodes > Node services configuration > Zeek. Click node.cfg.
Add the owlh interface in addition to the sniff-prod and sniff-sec interfaces.

...removed by author for brevity...
[worker-1]
type=worker
host=localhost
interface=sniff-prod
[worker-2]
type=worker
host=localhost
interface=sniff-sec
[worker-3]
type=worker
host=localhost
interface=owlh
Click Save.
Installing Wazuh Agents on Endpoints (HIDS)
The Wazuh agent is a host intrusion detection system (HIDS). The purpose of Wazuh agents is to monitor endpoints for security configuration issues and integrity issues with the file system, operating system, and much more.
Prerequisites
- Compatible host
- Host needs to be able to communicate with the Wazuh server
- May need to open routes and/or firewall ports
How to Install
Refer to the official documentation for installing endpoint agents on your servers, desktops, etc.
After installing, run apt-mark hold wazuh-agent — or the equivalent for the host operating system — to prevent unplanned upgrades. Please note that if you install a newer version of the wazuh-agent package later using apt install wazuh-agent, you will have to re-hold the package using apt-mark hold wazuh-agent.
Viewing the SIEM Dashboards
- Log into the Wazuh Dashboards web server – https://wazuh-dashboards-container-ip
- Credentials are provided after installing Wazuh Dashboards
Important: Define an Index Management Policy
You REALLY want to do this now as opposed to later.
- Save your disk space
- Reduce stressful troubleshooting hours
- Trim your indices and improve performance
Do it now! Please.
Troubleshooting the SIEM
Changing Default Passwords
OwlH Manager Admin Password
- Log into the OwlH Manager server
- Click the user profile in the top-right
- Change the password
Wazuh Infrastructure Admin Password
Alerts Stopped Registering in Wazuh Dashboards
In my past experience, this has almost always been due to hitting the maximum number of shards or running out of disk space.
If you haven't already done so, consider looking into an Index Management Policy.
I would recommend inspecting things in the following order:

- Make sure the Wazuh Manager service is running
- Make sure the Filebeat service is running on the Wazuh Manager server
  - Check Filebeat logs
    - If you see logs about hitting the shard limit:
      - Consider adding another Wazuh Indexer node (see below)
      - Clean up old indices with an index management policy
- Make sure the Wazuh Indexer service is running
  - Check Wazuh Indexer logs
- Make sure you have enough disk space available
  - If your disk is 95% full, Wazuh Indexer will prevent additional writes
    - Consider adding more disk space and/or another Wazuh Indexer node
    - Clean up old indices with an index management policy
Wazuh Dashboards Keeps Prompting to Select a Tenant when Logging into the Web Portal
Resolution: disable multi-tenancy, as it does not apply to Wazuh.

- SSH into the Wazuh Dashboards container and edit the /etc/wazuh-dashboard/opensearch_dashboards.yml file
- Ensure this line is present and/or matches:

opensearch_security.multitenancy.enabled: false
Extending Session Timeout in Wazuh Dashboards
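The original steps aren't reproduced here. As a hedged sketch, the OpenSearch Dashboards security plugin reads session lifetimes from the same /etc/wazuh-dashboard/opensearch_dashboards.yml file, so something like the following (values in milliseconds; assumed setting names, verify against your plugin version), followed by a restart of the wazuh-dashboard service, should extend the timeout:

# Hypothetical values: 1-hour session, extended on activity
opensearch_security.session.ttl: 3600000
opensearch_security.session.keepalive: true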
Follow-Up Activities
A Quick Sanity Check
As you've witnessed, there are a lot of parts to a SIEM setup, especially if you want to have full network AND host coverage. By now, you should have a baseline configuration that consists of:
- Wazuh Indexer to store logs being sent by Filebeat on Wazuh Manager
- Wazuh Dashboard to allow search and visualization of these logs, as well as integration with Wazuh Manager using the Wazuh plugin and API client
- Wazuh Manager to receive, process, and transmit inbound network and host logs to Wazuh Indexer
- OwlH Manager to centrally manage your OwlH NIDS node(s) services and configurations
- OwlH Node to receive packets via SPAN port from both switches in the lab environment and process them with Suricata and Zeek
Exploring the OwlH Integration
Want a deeper dive on the OwlH integration and how all the pieces fit together? In this post, I provide a deeper analysis of how the various parts that make up the OwlH Manager and OwlH Node fit together. I also provide a diagram to hopefully help visualize things better.
Extending Wazuh's Capabilities
The folks over at OpenSecure have done a really fantastic job at creating content that showcases Wazuh's capabilities and ways to extend it with various integrations. I wholeheartedly recommend taking a look.
Also, have a look at some of the additional Wazuh content I've written. If I included everything here, the guide would quickly grow out of scope.