Collect Kubernetes Events
In addition to collecting logs with LogConfig, Loggie can configure any source/sink/interceptor through CRDs. In essence, Loggie is a data stream that supports multiple pipelines, with built-in capabilities such as queuing and retries, data processing, configuration delivery, and monitoring and alerting, which reduces the development cost of similar requirements. Collecting Kubernetes Events is a good example of this.
Kubernetes Events are events generated by Kubernetes' own components and by controllers. We can use `kubectl describe` to view the events associated with a resource. Collecting and recording these events helps us trace back, troubleshoot, audit, and summarize problems, and better understand the internal state of a Kubernetes cluster.
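For example, `kubectl describe pod <pod-name>` prints the recent events of a Pod at the end of its output, and `kubectl get events -n <namespace>` lists the events recorded in a namespace (the resource and namespace names here are placeholders).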
Preparation
As with Loggie Aggregator, we can deploy a separate aggregator cluster or reuse an existing one.
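The cluster name configured when deploying the aggregator is what the selector of a ClusterLogConfig must reference later. A minimal sketch of the relevant Helm values is shown below; the exact field paths depend on your chart version and deployment, so treat them as an assumption:

```yaml
# Hypothetical excerpt of the aggregator's Helm values:
# the cluster name set here is matched by `selector.cluster`
# in the ClusterLogConfig below.
config:
  loggie:
    discovery:
      enabled: true
      kubernetes:
        cluster: aggregator
```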
Configuration Example
Configure the kubeEvent source and use a `type: cluster` selector to deliver the configuration to the aggregator cluster.
Config
```yaml
apiVersion: loggie.io/v1beta1
kind: ClusterLogConfig
metadata:
  name: kubeevent
spec:
  selector:
    type: cluster
    cluster: aggregator
  pipeline:
    sources: |
      - type: kubeEvent
        name: event
    sinkRef: dev
```
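Here `sinkRef: dev` refers to a Sink CR by name. A minimal sketch of such a Sink, assuming the built-in dev sink that simply prints events for debugging, might look like this:

```yaml
# Hypothetical Sink CR named "dev"; replace the spec with your
# real sink (e.g. elasticsearch) in production.
apiVersion: loggie.io/v1beta1
kind: Sink
metadata:
  name: dev
spec:
  sink: |
    type: dev
    printEvents: true
```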
By default, whether the events are sent to Elasticsearch or to any other sink, the output looks similar to the following:
event
```json
{
  "body": "{\"metadata\":{\"name\":\"loggie-aggregator.16c277f8fc4ff0d0\",\"namespace\":\"loggie-aggregator\",\"uid\":\"084cea27-cd4a-4ce4-97ef-12e70f37880e\",\"resourceVersion\":\"2975193\",\"creationTimestamp\":\"2021-12-20T12:58:45Z\",\"managedFields\":[{\"manager\":\"kube-controller-manager\",\"operation\":\"Update\",\"apiVersion\":\"v1\",\"time\":\"2021-12-20T12:58:45Z\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":{\"f:count\":{},\"f:firstTimestamp\":{},\"f:involvedObject\":{\"f:apiVersion\":{},\"f:kind\":{},\"f:name\":{},\"f:namespace\":{},\"f:resourceVersion\":{},\"f:uid\":{}},\"f:lastTimestamp\":{},\"f:message\":{},\"f:reason\":{},\"f:source\":{\"f:component\":{}},\"f:type\":{}}}]},\"involvedObject\":{\"kind\":\"DaemonSet\",\"namespace\":\"loggie-aggregator\",\"name\":\"loggie-aggregator\",\"uid\":\"7cdf4792-815d-4eba-8a81-d60131ad1fc4\",\"apiVersion\":\"apps/v1\",\"resourceVersion\":\"2975170\"},\"reason\":\"SuccessfulCreate\",\"message\":\"Created pod: loggie-aggregator-pbkjk\",\"source\":{\"component\":\"daemonset-controller\"},\"firstTimestamp\":\"2021-12-20T12:58:45Z\",\"lastTimestamp\":\"2021-12-20T12:58:45Z\",\"count\":1,\"type\":\"Normal\",\"eventTime\":null,\"reportingComponent\":\"\",\"reportingInstance\":\"\"}",
  "systemPipelineName": "default/kubeevent/",
  "systemSourceName": "event"
}
```
To facilitate analysis and display, we can add some interceptors to JSON-decode the collected event data. A configuration example follows; for details, please refer to Log Segmentation.
Config
interceptor
```yaml
apiVersion: loggie.io/v1beta1
kind: Interceptor
metadata:
  name: jsondecode
spec:
  interceptors: |
    - type: normalize
      name: json
      processors:
        - jsonDecode: ~
        - drop:
            targets: ["body"]
```
clusterLogConfig
```yaml
apiVersion: loggie.io/v1beta1
kind: ClusterLogConfig
metadata:
  name: kubeevent
spec:
  selector:
    type: cluster
    cluster: aggregator
  pipeline:
    sources: |
      - type: kubeEvent
        name: event
    interceptorRef: jsondecode
    sinkRef: dev
```
After the jsonDecode processor in the normalize interceptor runs, the data looks as follows:
event
```json
{
  "metadata": {
    "name": "loggie-aggregator.16c277f8fc4ff0d0",
    "namespace": "loggie-aggregator",
    "uid": "084cea27-cd4a-4ce4-97ef-12e70f37880e",
    "resourceVersion": "2975193",
    "creationTimestamp": "2021-12-20T12:58:45Z",
    "managedFields": [
      {
        "fieldsType": "FieldsV1",
        "fieldsV1": {
          "f:type": {},
          "f:count": {},
          "f:firstTimestamp": {},
          "f:involvedObject": {
            "f:apiVersion": {},
            "f:kind": {},
            "f:name": {},
            "f:namespace": {},
            "f:resourceVersion": {},
            "f:uid": {}
          },
          "f:lastTimestamp": {},
          "f:message": {},
          "f:reason": {},
          "f:source": {
            "f:component": {}
          }
        },
        "manager": "kube-controller-manager",
        "operation": "Update",
        "apiVersion": "v1",
        "time": "2021-12-20T12:58:45Z"
      }
    ]
  },
  "reportingComponent": "",
  "type": "Normal",
  "message": "Created pod: loggie-aggregator-pbkjk",
  "reason": "SuccessfulCreate",
  "reportingInstance": "",
  "source": {
    "component": "daemonset-controller"
  },
  "count": 1,
  "lastTimestamp": "2021-12-20T12:58:45Z",
  "firstTimestamp": "2021-12-20T12:58:45Z",
  "eventTime": null,
  "involvedObject": {
    "kind": "DaemonSet",
    "namespace": "loggie-aggregator",
    "name": "loggie-aggregator",
    "uid": "7cdf4792-815d-4eba-8a81-d60131ad1fc4",
    "apiVersion": "apps/v1",
    "resourceVersion": "2975170"
  }
}
```
If there are too many fields in the data, or the format does not meet your requirements, you can configure additional normalize interceptor processors to adjust it.
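For example, here is a minimal sketch that reuses the drop processor from the Interceptor above to also remove the verbose metadata field after decoding; whether to keep metadata is up to you:

```yaml
apiVersion: loggie.io/v1beta1
kind: Interceptor
metadata:
  name: jsondecode
spec:
  interceptors: |
    - type: normalize
      name: json
      processors:
        - jsonDecode: ~
        # drop the raw body and, in this sketch, the decoded metadata field
        - drop:
            targets: ["body", "metadata"]
```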