WARNING: This project is currently in an early stage of development. Not all components have been ported to this repository, and the features are not yet stable enough for production use.
Sentinel Kit is a comprehensive Docker stack designed to provide Digital Forensics and Incident Response (DFIR) and Security Operations Center (SOC) capabilities with unparalleled deployment simplicity.
Ideal for situational monitoring or rapid security incident response, this integrated platform enables collection, analysis, detection, and immediate response to threats.
Sentinel Kit is an all-in-one toolkit that covers the entire security incident lifecycle:
- Log Collection & Parsing (SIEM Lite): Uses Fluent Bit for data ingestion and Elasticsearch for storage and indexing.
- Advanced Analysis & Triage: Planned integration of Sigma rules for log-based detection and YARA for suspicious file triage (via upload mechanisms).
- Detection and Response (EDR): A dedicated agent, integrated into the ecosystem, provides real-time detection and response functionalities. This optional agent can also act as a collection element, forwarding logs from your workstations to the sentinel-kit server.
- Secure Uploads: Provides a dedicated SFTP server for uploading evidence, logs, or suspicious files.
- Comprehensive Visualization: Monitoring dashboards via Kibana and Grafana/Prometheus.
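Since the Sigma integration is still planned, here is a minimal example of the kind of rule such an integration would consume (illustrative only — the title, logsource, and field values are assumptions, not rules shipped with the kit):

```yaml
# Illustrative Sigma rule, not shipped with Sentinel Kit.
title: Suspicious Reconnaissance via whoami
status: experimental
logsource:
  category: process_creation
  product: windows
detection:
  selection:
    Image|endswith: '\whoami.exe'
  condition: selection
level: medium
```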
This project is designed to be deployed in minutes using Docker Compose.
- Docker
- Docker Compose (or Docker Engine with the Compose plugin)
- Minimum 8 GB of RAM (essential for Elasticsearch)
Clone the Repository:

```
git clone <repository-url>
cd sentinel-kit
```
Set the following DNS entries (in your hosts file if you are running it locally):

```
# OS hosts file
127.0.0.1 sentinel-kit.local
127.0.0.1 backend.sentinel-kit.local
127.0.0.1 phpmyadmin.sentinel-kit.local
127.0.0.1 kibana.sentinel-kit.local
127.0.0.1 grafana.sentinel-kit.local
```
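Adding these entries can also be scripted. A sketch (for safety it writes to a scratch copy by default; point `HOSTS_FILE` at `/etc/hosts` and run with root privileges to apply it for real):

```shell
# Append the Sentinel Kit hostnames to a hosts file, skipping entries
# that are already present. Defaults to a scratch copy for safety.
HOSTS_FILE="${HOSTS_FILE:-./hosts.sentinel-kit}"
touch "$HOSTS_FILE"
for name in sentinel-kit.local backend.sentinel-kit.local \
            phpmyadmin.sentinel-kit.local kibana.sentinel-kit.local \
            grafana.sentinel-kit.local; do
  grep -q " $name$" "$HOSTS_FILE" || echo "127.0.0.1 $name" >> "$HOSTS_FILE"
done
```

The `grep` guard makes the snippet idempotent, so re-running it does not duplicate entries.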
Launch the Stack:

```
docker-compose up -d
```
Startup may take several minutes, especially the first time, as Elasticsearch initializes and the backend installs its dependencies.
Check Status:

```
docker-compose ps
```

All services should be in the `Up` status.
Once the stack is running, you can access the interfaces via the default ports exposed by the Caddy service:
| Service | Role | Default Access |
|---|---|---|
| Web Interface (Admin frontend) | Access to the admin application | https://sentinel-kit.local |
| Web API | Used for client-server communication and admin actions performed through the web interface | https://backend.sentinel-kit.local |
| Kibana | Exploration and visualization of Elastic logs | http://kibana.sentinel-kit.local |
| Grafana | Monitoring dashboards | http://grafana.sentinel-kit.local |
| phpMyAdmin | MySQL database management | http://phpmyadmin.sentinel-kit.local |
| SFTP Server | Secure file/evidence upload | Port 2222 |
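For example, evidence can be pushed through the SFTP service. A session sketch (`localhost` assumes a local deployment, and the credentials are the defaults from the table below):

```
$ sftp -P 2222 sentinel-kit_ftp_user@localhost
sftp> put evidence.zip
sftp> bye
```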
| Tool | Username | Password |
|---|---|---|
| Grafana | sentinel-kit_grafana_admin | sentinel-kit_grafana_password |
| MySQL (DB) | sentinel-kit_user | sentinel-kit_passwd |
| SFTP | sentinel-kit_ftp_user | sentinel-kit_ftp_passwd |
All of these credentials can be edited in the `.env` file.
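For instance, the defaults map to environment variables along these lines (the key names here are illustrative — check the `.env` shipped with the repository for the real ones):

```
# Illustrative key names only; see the repository's .env for the real keys.
GRAFANA_ADMIN_USER=sentinel-kit_grafana_admin
GRAFANA_ADMIN_PASSWORD=change-me
MYSQL_USER=sentinel-kit_user
MYSQL_PASSWORD=change-me
SFTP_USER=sentinel-kit_ftp_user
SFTP_PASSWORD=change-me
```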
The architecture is modular and relies on the interconnection of several services via the sentinel-kit-network network.

Main configurations are located in the config/ folder: (edit these elements only if you know what you are doing 😊)
- `config/caddy_server`: Reverse proxy that serves the front-end and back-end web applications
- `config/certificates`: TLS certificate chains for the Elastic stack, Caddy, and the backend JWT
- `config/docker-config`: Server stack configuration (Dockerfiles, entrypoints, ...)
- `config/fluentbit_server`: Fluent Bit configuration files (inputs, filters, outputs to Elasticsearch)
- `config/grafana`: Grafana initial setup (datasources and dashboards)
- `config/prometheus/prometheus.yml`: Prometheus monitoring targets configuration
- `config/sigma_ruleset`: Sigma rules applied to logs ingested into Elasticsearch
- `config/yara_ruleset`: YARA rules used on the `data/yara_triage_data` folder or by the sentinel-kit_datamonitor agent
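As an orientation aid, a Fluent Bit Elasticsearch output of the kind living in `config/fluentbit_server` looks roughly like this (host, port, and index name are illustrative, not the shipped values):

```
# Illustrative Fluent Bit output section; not the shipped configuration.
[OUTPUT]
    Name    es
    Match   *
    Host    elasticsearch
    Port    9200
    Index   sentinel-kit-logs
    tls     On
```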
Persistent data is located in the `data/` folder:
- `data/caddy_logs`: Stores the Caddy server access and error logs
- `data/ftp_data`: Stores files uploaded to the SFTP server
- `data/grafana`: Contains a persistent Grafana profile, if you want to make your own dashboards and customizations
- `data/kibana`: Kibana user customizations
- `data/log_ingest_data`: Drop folder for forwarding logs if you don't want to use the Fluent Bit HTTP forwarder
- `data/mysql_data`: Contains the persistent web backend database
- `data/yara_triage_data`: Any file placed in this folder is automatically scanned
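The triage drop folder can be exercised with a stand-in file (sketch; paths are relative to the repository root, and the stack's watcher must be running for an actual scan to happen):

```shell
# Place a stand-in sample into the watched triage folder; any file
# appearing here is picked up for automatic YARA scanning.
mkdir -p data/yara_triage_data
printf 'stand-in sample\n' > sample.bin
cp sample.bin data/yara_triage_data/
```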
To stop and remove the containers, networks, and volumes created by Docker Compose:

```
docker-compose down -v
```

If you want to erase all user data:

- remove the contents of every folder inside `data/`
- remove the contents of `caddy_server`, `elasticsearch`, and `jwt` inside `config/certificates/`
- remove the contents of `config/grafana`
- finally, rebuild the stack with the following command:

```
docker-compose up --build --force-recreate
```
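The manual cleanup steps can be collected into a small helper (hypothetical script, not shipped with the repository; it clears folder contents but keeps the folders themselves):

```shell
# reset_kit: wipe Sentinel Kit user data under the given checkout root.
# Clears the contents of the data/ subfolders, the generated certificate
# folders, and the Grafana state, mirroring the manual steps above.
reset_kit() {
  root="$1"
  for d in "$root"/data/*/ \
           "$root"/config/certificates/caddy_server \
           "$root"/config/certificates/elasticsearch \
           "$root"/config/certificates/jwt \
           "$root"/config/grafana; do
    # Skip patterns that matched nothing and folders that do not exist.
    [ -d "$d" ] || continue
    rm -rf "${d%/}"/*
  done
}
# Usage: reset_kit /path/to/sentinel-kit
```

After running it, rebuild with `docker-compose up --build --force-recreate` as above.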