Collecting Elasticsearch log data with Filebeat
You can use Filebeat to monitor the Elasticsearch log files, collect log events, and ship them to the monitoring cluster. Your recent logs are visible on the Monitoring page in Kibana.
Verify that Elasticsearch is running and that the monitoring cluster is ready to receive data from Filebeat.
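For example, one quick sanity check (a sketch only, assuming default HTTP ports, no TLS, and the monitoring host name used later in this guide; add credentials with -u if security is enabled) is to ask each cluster for its health:

curl "http://localhost:9200/_cluster/health?pretty"    # production cluster
curl "http://es-mon-1:9200/_cluster/health?pretty"     # monitoring cluster

Both clusters should report a green or yellow status before you continue.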
In production environments, we strongly recommend using a separate cluster (referred to as the monitoring cluster) to store the data. Using a separate monitoring cluster prevents production cluster outages from impacting your ability to access your monitoring data. It also prevents monitoring activities from impacting the performance of your production cluster. See Monitoring in a production environment.
Enable the collection of monitoring data on your cluster.
Set xpack.monitoring.collection.enabled to true on the production cluster. By default, it is disabled (false). You can use the following APIs to review and change this setting:
GET _cluster/settings

PUT _cluster/settings
{
  "persistent": {
    "xpack.monitoring.collection.enabled": true
  }
}
If Elasticsearch security features are enabled, you must have monitor cluster privileges to view the cluster settings and manage cluster privileges to change them. For more information, see Monitoring settings and Cluster update settings.
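As an illustration only (the role name below is hypothetical), a role that grants both cluster privileges could be created with the create role API and then assigned to the user who manages this setting:

PUT _security/role/monitoring_settings_manager
{
  "cluster": ["monitor", "manage"]
}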
Identify which logs you want to monitor.
The Filebeat Elasticsearch module can handle audit logs, deprecation logs, gc logs, server logs, and slow logs. For more information about the location of your Elasticsearch logs, see the path.logs setting.
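If you are unsure where each node writes its logs, one way to check (a sketch, not part of the original steps) is to filter the node settings for path.logs:

GET _nodes/settings?filter_path=nodes.*.settings.path.logs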
If there are both structured (*.json) and unstructured (plain text) versions of the logs, you must use the structured logs. Otherwise, they might not appear in the appropriate context in Kibana.

Install Filebeat on the Elasticsearch nodes that contain logs that you want to monitor.
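For example, on a Debian-based host with the Elastic package repository already configured, the installation might look like the following; use the package manager or archive appropriate for your platform:

sudo apt-get update && sudo apt-get install filebeat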
Identify where to send the log data.
For example, specify Elasticsearch output information for your monitoring cluster in the Filebeat configuration file (filebeat.yml):

output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["http://es-mon-1:9200", "http://es-mon-2:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"
In this example, the data is stored on a monitoring cluster with nodes es-mon-1 and es-mon-2.

If you configured the monitoring cluster to use encrypted communications, you must access it via HTTPS. For example, use a hosts setting like https://es-mon-1:9200.

The Elasticsearch monitoring features use ingest pipelines; therefore, the cluster that stores the monitoring data must have at least one ingest node.
If Elasticsearch security features are enabled on the monitoring cluster, you must provide a valid user ID and password so that Filebeat can send metrics successfully.
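For example, a minimal sketch of an output section for a secured monitoring cluster; remote_monitoring_user is only one possible choice of user, and MONITORING_PASSWORD is a key you would add to the Filebeat keystore (the keystore commands are shown in the Kibana step below):

output.elasticsearch:
  hosts: ["https://es-mon-1:9200", "https://es-mon-2:9200"]
  username: "remote_monitoring_user"    # any user with the privileges Filebeat needs
  password: "${MONITORING_PASSWORD}"    # resolved from the Filebeat keystore at runtime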
For more information about these configuration options, see Configure the Elasticsearch output.
Optional: Identify where to visualize the data.
Filebeat provides example Kibana dashboards, visualizations, and searches. To load the dashboards into the appropriate Kibana instance, specify the setup.kibana information in the Filebeat configuration file (filebeat.yml) on each node:

setup.kibana:
  host: "localhost:5601"
  #username: "my_kibana_user"
  #password: "YOUR_PASSWORD"
In production environments, we strongly recommend using a dedicated Kibana instance for your monitoring cluster.
If security features are enabled, you must provide a valid user ID and password so that Filebeat can connect to Kibana:
- Create a user on the monitoring cluster that has the kibana_admin built-in role or equivalent privileges.
- Add the username and password settings to the Kibana information in the Filebeat configuration file. The example shows a hard-coded password, but you should store sensitive values in the secrets keystore, as in the sketch below.
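A minimal sketch, assuming a user named my_kibana_user and a keystore key named KIBANA_PASSWORD:

filebeat keystore create
filebeat keystore add KIBANA_PASSWORD

setup.kibana:
  host: "localhost:5601"
  username: "my_kibana_user"
  password: "${KIBANA_PASSWORD}"    # resolved from the Filebeat keystore at runtime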
Enable the Elasticsearch module and set up the initial Filebeat environment on each node.
For example:
filebeat modules enable elasticsearch
filebeat setup -e
For more information, see Elasticsearch module.
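To confirm the module is active, you can list the modules; elasticsearch should appear in the Enabled section of the output:

filebeat modules list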
Configure the Elasticsearch module in Filebeat on each node.
If the logs that you want to monitor aren't in the default location, set the appropriate path variables in the modules.d/elasticsearch.yml file. See Configure the Elasticsearch module.

If there are JSON logs, configure the var.paths settings to point to them instead of the plain text logs, as in the sketch below.
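For example, a sketch of a modules.d/elasticsearch.yml that points two filesets at JSON logs in a non-default directory; the directory and file name patterns below are assumptions for illustration only:

- module: elasticsearch
  server:
    enabled: true
    var.paths:
      - /opt/elasticsearch/logs/*_server.json
  audit:
    enabled: true
    var.paths:
      - /opt/elasticsearch/logs/*_audit.json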
Start Filebeat on each node.

Depending on how you've installed Filebeat, you might see errors related to file ownership or permissions when you try to run Filebeat modules. See Config file ownership and permissions.
Check whether the appropriate indices exist on the monitoring cluster.
For example, use the cat indices command to verify that there are new filebeat-* indices.

If you want to use the Monitoring UI in Kibana, there must also be .monitoring-* indices. Those indices are generated when you collect metrics about Elastic Stack products. For example, see Collecting monitoring data with Metricbeat.
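One way to check for both sets of indices, in Kibana Dev Tools or with curl against the monitoring cluster:

GET _cat/indices/filebeat-*?v
GET _cat/indices/.monitoring-*?v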