Software Revisions
Check Point R80.30 (JHF 76)
Elasticsearch 7.4.0
Logstash 7.4.0
Logstash Plugins
logstash-input-udp
logstash-filter-kv
logstash-filter-mutate
logstash-output-elasticsearch
Kibana 7.4.0
NGINX
Arch Linux
Elasticsearch
What is Elasticsearch? From Elastic's website: Elasticsearch is a distributed, RESTful search and analytics engine capable of solving a growing number of use cases. As the heart of the Elastic Stack, it centrally stores your data so you can discover the expected and uncover the unexpected.
Elasticsearch provides an API-driven solution for indexing, storing, and querying large quantities of data. It can be used as a standalone or clustered solution. Elasticsearch also offers additional deployment options that provide real-time search integration with other database and storage platforms.
In Arch Linux, Elasticsearch configuration files are stored under /etc/elasticsearch. This guide uses the default settings in the elasticsearch.yml file, which are reasonable and result in Elasticsearch listening on TCP port 9200 on localhost. The Java configuration is stored in the jvm.options file, which is used to adjust the memory allocated to the Java process running Elasticsearch.
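Once the service is started, the listener and cluster state can be verified against the REST API (a quick sketch assuming the default localhost:9200 listener):

curl http://localhost:9200/
curl 'http://localhost:9200/_cluster/health?pretty'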
Kibana
What is Kibana? From Elastic's website: Kibana lets you visualize your Elasticsearch data and navigate the Elastic Stack…
Kibana is a customizable web interface that interacts with Elasticsearch to build dashboards and visualizations, and to search stored data. In Arch Linux, the configuration folder is /etc/kibana. The kibana.yml file has a useful set of defaults that I recommend validating:
server.port: 5601
  - This is the service port on which the server will listen for inbound HTTP(S) connections.
server.host: "localhost"
  - This is the address the Kibana server binds to for this particular instance. Since this document uses only a single local instance, the default was kept.
server.basePath: "/kibana"
  - This is the URL path used when running behind a web service or reverse proxy. In this example, requests are sent to http://host/kibana.
server.rewriteBasePath: true
  - This tells the Kibana process to rewrite requests that are prefixed with server.basePath itself, rather than relying on the reverse proxy to strip the prefix.
The default template documents the additional settings available for Kibana; they are also documented at https://www.elastic.co/guide/en/kibana/master/settings.html.
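Taken together, the relevant portion of the kibana.yml file used in this guide looks like the following.

Snippet from /etc/kibana/kibana.yml
server.port: 5601
server.host: "localhost"
server.basePath: "/kibana"
server.rewriteBasePath: true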
Logstash
What is Logstash?
From Elastic’s website: Logstash is an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite “stash.” (Ours is Elasticsearch, naturally.)
Logstash provides a method for receiving content, manipulating it, and forwarding it to a backend system for additional use. Logstash can manipulate these streams into formats acceptable for storage, indexing, or further processing.
In Arch Linux, the configuration folder is /etc/logstash. The Arch Linux package starts the logstash process and reads the configuration files under /etc/logstash/conf.d. This allows logstash to use multiple input sources, filters, and outputs. For example, logstash could be configured as a syslog service that outputs specific content to an Elasticsearch cluster on the network, while another configuration file listens on port 5044 for separate data to export into a different Elasticsearch cluster.
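As an illustration of that second scenario, a file such as /etc/logstash/conf.d/beats.conf (the file name, input, and cluster address below are hypothetical) could add another input and route its events to a separate destination:

Snippet from a hypothetical /etc/logstash/conf.d/beats.conf
input {
  beats {
    port => 5044
    type => "beats"
  }
}
output {
  if [type] == "beats" {
    elasticsearch {
      # Hypothetical second Elasticsearch cluster
      hosts => ["http://es2.example.local:9200"]
    }
  }
}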
Logstash Plugins
Depending on the Linux distribution, several Logstash plugins might not be available from the distribution's package repository. Adding plugins may require using git to import them from the Logstash plugin repositories, which are available online at https://github.com/logstash-plugins/; the installation process is documented on that site.
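Plugins can also typically be installed and verified with the bundled logstash-plugin utility (a sketch assuming Logstash is installed under /usr/share/logstash, the path used later in this document):

/usr/share/logstash/bin/logstash-plugin install logstash-filter-kv
/usr/share/logstash/bin/logstash-plugin list --verbose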
NGINX
NGINX is a web and reverse proxy server with additional capabilities such as load balancing and web application firewalling. The configuration for nginx is located in the /etc/nginx directory. Since NGINX supports a wide array of configurations, only the configuration used to allow access to Kibana is documented below. It accepts connections to /kibana and forwards them to the localhost:5601 service. For additional NGINX configuration options, see https://docs.nginx.com.
Snippet from /etc/nginx/nginx.conf
http {
    include mime.types;
    sendfile on;
    keepalive_timeout 65;

    ## This provides a reference to the Kibana service listening on localhost:5601
    upstream kibana {
        server 127.0.0.1:5601;
        keepalive 15;
    }

    server {
        listen 80;
        server_name _;

        location / {
            root /usr/share/nginx/html;
            index index.html index.htm;
        }

        location /kibana {
            proxy_pass http://kibana;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
}
Check Point Log Exporter
Check Point Log Exporter documentation can be found in SK article 122323 on Support Center. The configuration used in this document is very simple: the logstash configuration file expects the syslog messages to use the Splunk format, which delimits fields with the | character. This gives Logstash's KV filter an easy delimiter for splitting the log fields.
[Expert@vMgmt01:0]# cp_log_export add name lab01 target-server 10.99.99.11 target-port 514 protocol udp format splunk read-mode semi-unified
Export settings for lab01 has been added successfully
To apply the changes run: cp_log_export restart name lab01
[Expert@vMgmt01:0]#
Logstash Configuration
There is a single configuration file defined for Logstash in this example. Logstash will attempt to load every configuration file in the configuration directory, and each file can define its own set of inputs, filters, and outputs. The location of this directory might differ depending on the distribution. In this example, there are three primary sections: input, filter, and output.
The input configuration tells the logstash process which plugins to run to receive content from external sources. Our example uses the UDP input plugin (logstash-input-udp), configured to act as a syslog service.
Logstash has many other options for input types and content codecs. Additional information can be found by accessing the logstash Input plugin documentation at https://www.elastic.co/guide/en/logstash/current/input-plugins.html.
input {
  udp {
    port => 514
    codec => plain
    type => "syslog"
  }
}
Logstash filters match, label, edit, and make decisions on content before it passes into Elasticsearch (ES). The filter section is where the patterns and labels are defined. In our example, logstash processes the syslog data through two separate filters.
The kv filter is used to automatically assign labels to the received logs. Since the messages sent via log_exporter are in a <key>=<value> format, the kv filter was chosen. This provides a simpler mechanism than using the grok or dissect plugins for assigning those values.
filter {
  if [type] == "syslog" {
    kv {
      allow_duplicate_values => false
      recursive => false
      field_split => "\|"
    }
    mutate {
      # Fields beginning with an underscore are not supported by ES/Kibana, so rename them.
      rename => { "__nsons" => "nsons" }
      rename => { "__p_dport" => "p_dport" }
      rename => { "__pos" => "pos" }

      # Not necessary, just a field name preference.
      rename => { "originsicname" => "sicname" }

      # Example of removing specific fields
      # remove_field => [ "connection_luuid", "loguid" ]

      # String substitution
      # Strip the ^CN= and the ,O=.*$ portions of the field.
      gsub => [
        "sicname", "CN\\=", "",
        "sicname", ",O\\=.*$", ""
      ]
    }
  }
}
The mutate plugin (logstash-filter-mutate) is used to manipulate the data as needed. In the configuration above, the originsicname field is renamed to sicname. Additional fields can be dropped using the remove_field option. The configuration can be validated using the -t parameter when launching logstash; configuration errors will be displayed along with the line numbers where they occur.
/usr/share/logstash/bin/logstash -t -f /etc/logstash/conf.d/checkpoint.yml
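The output section is the remaining piece of the three-section layout and tells Logstash where to send the processed events. A minimal sketch using the logstash-output-elasticsearch plugin, pointed at the local Elasticsearch listener described earlier (the index name below is illustrative and must match the index pattern later created in Kibana):

output {
  if [type] == "syslog" {
    elasticsearch {
      # Local Elasticsearch instance from earlier in this guide
      hosts => ["http://127.0.0.1:9200"]
      # Illustrative daily index name; must align with the Kibana index pattern
      index => "logstash-checkpoint-%{+YYYY.MM.dd}"
    }
  }
}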
Kibana - Basic Configuration
The kibana.yml settings shown earlier are the only Kibana configuration used in this document, so there are no additional configuration steps necessary. The NGINX configuration from the earlier section will pass requests for http://hostname/kibana to the http://localhost:5601 service.
After opening the management page, Kibana needs to be configured to use the data contained within Elasticsearch.
Go to Management -> Index Patterns
Select Create index pattern and add a pattern that matches the logstash indices (for example, logstash-*), then click Next step. (The available indices can be verified directly against Elasticsearch, as shown after these steps.)
The next step will ask for the Time Filter field name. Select the @timestamp default and click Create index pattern.
Once this step has been completed, Kibana can present the available data from Elasticsearch.
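If no indices appear when defining the pattern, the Logstash-created indices can be confirmed directly against Elasticsearch using the _cat API (a sketch assuming the default local listener):

curl 'http://localhost:9200/_cat/indices/logstash-*?v'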