Alerting Message (Node Level)
Alert messages record detailed information about alerts triggered by alert rules, including monitoring targets, alert policies, recent notifications, and comments.
Prerequisites
You have created a node-level alert policy and received alert notifications for it. If you have not, refer to Alert Policy (Node Level) to create one first.
Hands-on Lab
Task 1: View Alert Message
- Log in to the console with an account granted the `platform-admin` role.
- Click Platform in the top-left corner and select Clusters Management.
- Select a cluster from the list and enter it (if the multi-cluster feature is not enabled, you will go directly to the Overview page).
- Navigate to Alerting Messages under Monitoring & Alerting to see alert messages in the list. In the example of Alert Policy (Node Level), you set one node as the monitoring target, and its memory utilization is higher than the threshold of `50%`, so an alert message appears for it.
- Click the alert message to enter its detail page. In Alerting Detail, you can see a graph of the node's memory utilization over time. It has remained continuously above the `50%` threshold set in the alert rule, which is why the alert was triggered.
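The trigger condition above can be sketched as code: an alert fires only when the metric stays above the threshold for the rule's whole wait window, not on a single spike. This is a minimal illustrative model, not KubeSphere's actual evaluation logic; the function name, threshold, and sample count are assumptions.

```python
# Minimal sketch of a threshold-with-duration alert rule (illustrative only;
# not KubeSphere's implementation). `samples` are periodic memory-utilization
# readings in percent; the alert fires once `consecutive` successive samples
# all exceed `threshold`.

def alert_triggered(samples, threshold=50.0, consecutive=3):
    """Return True if `consecutive` successive samples all exceed `threshold`."""
    run = 0
    for value in samples:
        run = run + 1 if value > threshold else 0
        if run >= consecutive:
            return True
    return False

# Utilization stays above 50% for three successive scrapes -> alert fires.
print(alert_triggered([48.0, 62.1, 71.5, 68.9]))  # True
# A single spike does not satisfy the duration condition -> no alert.
print(alert_triggered([48.0, 62.1, 49.5, 51.0]))  # False
```

This mirrors what the detail graph shows: the alert corresponds to a sustained breach of the threshold, so brief spikes do not generate messages.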
Task 2: View Alert Policy
Switch to Alerting Policy to view the alert policy corresponding to this alert message. You can see the triggering rule set in the example of Alert Policy (Node Level).
Task 3: View Recent Notification
- Switch to Recent Notification. You can see that 3 notifications have been received, because the notification rule was set with a repetition period of `Alert once every 5 minutes` and a retransmission limit of `Resend up to 3 times`.
- Log in to your email account to view the alert notification emails sent by the KubeSphere mail server. You should have received a total of 3 emails.
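The notification count above follows from the repetition settings. The sketch below models one plausible reading of those settings (one email per period while the alert stays active, capped at the configured total); the function name and semantics are assumptions for illustration, not KubeSphere's exact scheduling logic.

```python
# Illustrative model of the notification schedule (assumed semantics, not
# KubeSphere's implementation): while the alert stays active, one email is
# sent per `period_min`-minute period, never exceeding `max_total` in all.

def notifications_sent(active_minutes, period_min=5, max_total=3):
    """Count notifications for an alert active for `active_minutes`."""
    by_time = active_minutes // period_min + 1  # first firing + repeats
    return min(by_time, max_total)

# Alert active for 20+ minutes, 5-minute period, capped at 3 -> 3 emails.
print(notifications_sent(20))  # 3
# Alert active for only 6 minutes -> 2 emails, cap not reached.
print(notifications_sent(6))   # 2
```

Under this model, an alert that keeps firing produces exactly the 3 emails observed in Recent Notification, because the retransmission cap is hit before the alert resolves.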
Task 4: Add Comment
Click Comment to add a comment to the current alert message. For example, since the node's memory utilization is higher than the threshold set in the alert rule, you can add a comment such as: The node needs to be tainted so that new pods are not allowed to be scheduled to it.