Observability
Monitoring with Grafana
Out of the box, you get monitoring via Prometheus and Grafana.
You may want to push some load onto the deployed applications in order to see some metrics.
Run the following command to produce load.
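A simple way to generate that load is the same curl loop used later in the Custom Metrics section (this assumes the minishift-based routing used throughout this tutorial; adjust the host if your ingress differs):
while true; do curl istio-ingressgateway-istio-system.$(minishift ip).nip.io/customer; sleep .5; done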
open "$(minishift openshift service grafana -u -n istio-system)/d/TSEY6jLmk/istio-galley-dashboard?refresh=5s&orgId=1"
or
firefox "$(minishift openshift service grafana -u -n istio-system)/d/TSEY6jLmk/istio-galley-dashboard?refresh=5s&orgId=1"
open "$(minishift openshift service grafana -u -n istio-system)/d/UbsSZTDik/istio-workload-dashboard?orgId=1&refresh=10s"
or
firefox "$(minishift openshift service grafana -u -n istio-system)/d/UbsSZTDik/istio-workload-dashboard?orgId=1&refresh=10s"
to check the "Workload of the services"
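If neither open nor firefox is available on your machine, the -u flag on its own just prints the Grafana route URL, which you can paste into any browser:
minishift openshift service grafana -u -n istio-system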
Prometheus
Explore Prometheus:
open "$(minishift openshift service prometheus -u -n istio-system)/graph?g0.range_input=1m&g0.stacked=1&g0.expr=&g0.tab=0"
or
firefox "$(minishift openshift service prometheus -u -n istio-system)/graph?g0.range_input=1m&g0.stacked=1&g0.expr=&g0.tab=0"
Custom Metrics
Istio also allows you to specify custom metrics, which can be seen in the Prometheus dashboard.
Add the custom metric and rule. First, make sure you are in the "istio-tutorial" directory, then run:
kubectl create -f istiofiles/recommendation_requestcount.yml -n istio-system
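To confirm that the objects defined in that file were created, you can ask for them by file, a generic kubectl pattern (run from the same "istio-tutorial" directory):
kubectl get -f istiofiles/recommendation_requestcount.yml -n istio-system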
Then run several requests through the system
while true; do curl istio-ingressgateway-istio-system.$(minishift ip).nip.io/customer; sleep .5; done
In the Prometheus dashboard, enter the following query:
istio_requests_total{destination_service="recommendation.tutorial.svc.cluster.local"}
and select Execute
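If you prefer the command line over the dashboard, the same query can be sent to Prometheus' standard HTTP API (/api/v1/query), building the URL the same way as in the commands above:
curl -sG "$(minishift openshift service prometheus -u -n istio-system)/api/v1/query" --data-urlencode 'query=istio_requests_total{destination_service="recommendation.tutorial.svc.cluster.local"}'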
You may have to refresh the browser for the Prometheus graph to update. You may also wish to set the interval to 5m (5 minutes), as seen in the screenshot above.
Container Memory
Istio also lets you see the RSS memory consumed by the containers.
In the Prometheus dashboard, enter the following query:
container_memory_rss{namespace="tutorial",container_name=~"customer|preference|recommendation"}
and select Execute
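To get a single series per container rather than one per pod, the same metric can be aggregated with a standard PromQL sum:
sum by (container_name) (container_memory_rss{namespace="tutorial",container_name=~"customer|preference|recommendation"})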
Tracing
Distributed tracing involves propagating the tracing context from service to service, usually by copying certain incoming HTTP headers onto outbound requests. For services that embed an OpenTracing framework instrumentation such as opentracing-spring-cloud, this can happen transparently. For services that do not embed OpenTracing libraries, the context propagation needs to be done manually, as sketched below.
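As a rough illustration only (the target host, port, and shell variables below are hypothetical placeholders; the header names come from Istio's distributed tracing documentation), a service without OpenTracing support would have to copy headers such as these from each incoming request onto every outgoing call:
curl http://recommendation:8080/ \
  -H "x-request-id: ${INCOMING_X_REQUEST_ID}" \
  -H "x-b3-traceid: ${INCOMING_X_B3_TRACEID}" \
  -H "x-b3-spanid: ${INCOMING_X_B3_SPANID}" \
  -H "x-b3-parentspanid: ${INCOMING_X_B3_PARENTSPANID}" \
  -H "x-b3-sampled: ${INCOMING_X_B3_SAMPLED}"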
As OpenTracing is "just" an instrumentation library, a concrete tracer is required in order to actually capture the tracing data and report it to a remote server. Our customer and preference services ship with Jaeger as the concrete tracer. The Istio platform automatically sends collected tracing data to Jaeger, so we are able to see a trace involving all three services, even though our recommendation service is not aware of OpenTracing or Jaeger at all.
Our customer and preference services use the TracerResolver facility from OpenTracing, so that the concrete tracer can be loaded automatically without our code having a hard dependency on Jaeger. Given that the Jaeger tracer can be configured via environment variables, we don’t need to do anything in order to get a properly configured Jaeger tracer ready and registered with OpenTracing. That said, there are cases where it’s appropriate to manually configure a tracer. Refer to the Jaeger documentation for more information on how to do that.
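For example (the environment variable names come from the Jaeger client documentation; the deployment name, namespace, and values here are illustrative assumptions), sampling could be tuned by setting environment variables on a deployment:
kubectl set env deployment/customer -n tutorial JAEGER_SERVICE_NAME=customer JAEGER_SAMPLER_TYPE=const JAEGER_SAMPLER_PARAM=1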
Let’s open the Jaeger console, select customer from the list of services, and click Find Traces:
minishift openshift service tracing -n istio-system --in-browser
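If --in-browser cannot launch a browser in your environment, the -u flag prints the Jaeger URL instead, mirroring the Grafana and Prometheus commands above, so you can open it manually:
minishift openshift service tracing -u -n istio-system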