Configuring your driver for ArangoDB access
In this chapter you’ll learn how to configure a driver for accessing an ArangoDB deployment in Kubernetes.
The exact configuration steps are specific to each driver.
Database endpoint(s)
The endpoint(s) (or URLs) to communicate with is the most important parameter you need to configure in your driver.
Finding the right endpoints depends on whether your client application is running in the same Kubernetes cluster as the ArangoDB deployment or not.
Client application in same Kubernetes cluster
If your client application is running in the same Kubernetes cluster as the ArangoDB deployment, you should configure your driver to use the following endpoint:
https://<deployment-name>.<namespace>.svc:8529
Only if your deployment has set spec.tls.caSecretName to None should you use http instead of https.
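The endpoint above can also be assembled programmatically. The sketch below is illustrative (the helper name is our own, not part of any driver API); it assumes the default port 8529 and picks the scheme based on whether TLS is enabled:

```python
def in_cluster_endpoint(deployment: str, namespace: str, tls_enabled: bool = True) -> str:
    """Build the in-cluster endpoint for an ArangoDB deployment.

    Pass tls_enabled=False only when spec.tls.caSecretName is set to
    None in the ArangoDeployment.
    """
    scheme = "https" if tls_enabled else "http"
    return f"{scheme}://{deployment}.{namespace}.svc:8529"

print(in_cluster_endpoint("example-simple-cluster", "default"))
# → https://example-simple-cluster.default.svc:8529
```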
Client application outside Kubernetes cluster
If your client application is running outside the Kubernetes cluster in which the ArangoDB deployment is running, your driver endpoint depends on the external-access configuration of your ArangoDB deployment.
If the external-access of the ArangoDB deployment is of type LoadBalancer, then use the IP address of that LoadBalancer like this:
https://<load-balancer-ip>:8529
If the external-access of the ArangoDB deployment is of type NodePort, then use the IP address(es) of the Nodes of the Kubernetes cluster, combined with the NodePort that is used by the external-access service.
For example:
https://<kubernetes-node-1-ip>:30123
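The two external-access cases can be sketched as small helpers. These function names are our own, purely for illustration; a NodePort service typically yields one endpoint per node:

```python
def load_balancer_endpoint(lb_ip: str) -> str:
    """Endpoint when the external-access service is of type LoadBalancer."""
    return f"https://{lb_ip}:8529"

def node_port_endpoints(node_ips, node_port: int):
    """One endpoint per Kubernetes node when the service is of type NodePort."""
    return [f"https://{ip}:{node_port}" for ip in node_ips]

print(load_balancer_endpoint("192.168.10.208"))
print(node_port_endpoints(["10.0.0.1", "10.0.0.2"], 30123))
```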
You can find the type of external-access by inspecting the external-access Service. To do so, run the following command:
kubectl get service -n <namespace-of-deployment> <deployment-name>-ea
The output looks like this:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
example-simple-cluster-ea LoadBalancer 10.106.175.38 192.168.10.208 8529:31890/TCP 1s app=arangodb,arango_deployment=example-simple-cluster,role=coordinator
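As a sketch, the TYPE and EXTERNAL-IP columns of that output can be parsed to derive the endpoint. The helper below is illustrative only and assumes the default column layout shown above; it handles just the LoadBalancer case, since a NodePort endpoint also needs the node IPs, which this output does not contain:

```python
def endpoint_from_service_row(row: str) -> str:
    """Derive a driver endpoint from one data row of `kubectl get service`.

    Assumes the default column order: NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) ...
    """
    fields = row.split()
    svc_type, external_ip = fields[1], fields[3]
    if svc_type != "LoadBalancer":
        raise ValueError(f"unhandled service type: {svc_type}")
    return f"https://{external_ip}:8529"

row = ("example-simple-cluster-ea  LoadBalancer  10.106.175.38  "
       "192.168.10.208  8529:31890/TCP  1s  app=arangodb")
print(endpoint_from_service_row(row))
# → https://192.168.10.208:8529
```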
In this case the external-access is of type LoadBalancer with a load-balancer IP address of 192.168.10.208. This results in an endpoint of https://192.168.10.208:8529.
TLS settings
As mentioned before, the ArangoDB deployment managed by the ArangoDB operator will use a secure (TLS) connection unless you set spec.tls.caSecretName to None in your ArangoDeployment.
When using a secure connection, you can choose whether to verify the server certificates provided by the ArangoDB servers.
If you want to verify these certificates, configure your driver with the CA certificate stored in a Kubernetes Secret in the same namespace as the ArangoDeployment.
The name of this Secret is stored in the spec.tls.caSecretName setting of the ArangoDeployment. If you don’t set this setting explicitly, it will be set automatically.
Then fetch the CA secret using the following command (or use a Kubernetes client library to fetch it):
kubectl get secret -n <namespace> <secret-name> --template='{{index .data "ca.crt"}}' | base64 -D > ca.crt
(On Linux, use base64 -d; -D is the BSD/macOS spelling of the decode flag.)
This results in a file called ca.crt containing a PEM-encoded x509 CA certificate.
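If you fetch the Secret through a Kubernetes client library instead of kubectl, you have to base64-decode the ca.crt field yourself, and can then point your TLS layer at the resulting file. A minimal sketch (the helper names are our own; the PEM payload below is a dummy placeholder):

```python
import base64
import ssl

def ca_pem_from_secret(secret_data: dict) -> str:
    """Decode the base64-encoded `ca.crt` field of the Secret's data map."""
    return base64.b64decode(secret_data["ca.crt"]).decode("ascii")

def tls_context(ca_path: str) -> ssl.SSLContext:
    """Build an SSL context that verifies servers against the CA file."""
    return ssl.create_default_context(cafile=ca_path)

# Round-trip a dummy PEM payload the way a client library would see it.
pem = "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n"
data = {"ca.crt": base64.b64encode(pem.encode("ascii")).decode("ascii")}
print(ca_pem_from_secret(data) == pem)
# → True
```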
Query requests
For most client requests made by a driver, it does not matter if there is any kind of load-balancer between your client application and the ArangoDB deployment.
Note that even a simple Service of type ClusterIP already behaves as a load-balancer.
The exception to this is cursor-related requests made to an ArangoDB Cluster deployment. The coordinator that handles an initial query request (one that results in a Cursor) saves some in-memory state if the result of the query is too big to be transferred back in the response to the initial request.
Follow-up requests have to be made to fetch the remaining data. These follow-up requests must be handled by the same coordinator to which the initial request was made.
As soon as there is a load-balancer between your client application and the ArangoDB cluster, it is uncertain which coordinator will actually handle the follow-up request.
To resolve this uncertainty, make sure to run your client application in the same Kubernetes cluster and synchronize your endpoints before making the initial query request. The driver then uses the internal DNS names of all coordinators, so a follow-up request can be sent to exactly the same coordinator.
If your client application is running outside the Kubernetes cluster, this is much harder to solve. The easiest way to work around it is to make sure that the query results are small enough. When that is not feasible, you can also resolve it by exposing the internal DNS names of your Kubernetes cluster to your client application, provided the resulting IP addresses are routable from your client application. To expose the internal DNS names of your Kubernetes cluster, you can use CoreDNS.
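To make the coordinator-affinity requirement concrete: a cursor is created with POST /_api/cursor, and follow-up batches are classically fetched via PUT /_api/cursor/&lt;cursor-id&gt;. The driver-agnostic sketch below (the class is purely illustrative, not part of any driver API) shows the essential idea of pinning follow-up requests to the endpoint that created the cursor:

```python
class PinnedCursor:
    """Illustrative sketch: remember which coordinator endpoint created a
    cursor and direct every follow-up batch request to that same endpoint."""

    def __init__(self, endpoint: str, cursor_id: str):
        # The endpoint that handled the initial POST /_api/cursor request.
        self.endpoint = endpoint.rstrip("/")
        self.cursor_id = cursor_id

    def next_batch_url(self) -> str:
        # Follow-up batches must go to the coordinator that created the cursor.
        return f"{self.endpoint}/_api/cursor/{self.cursor_id}"

cur = PinnedCursor("https://example-simple-cluster.default.svc:8529", "12345")
print(cur.next_batch_url())
# → https://example-simple-cluster.default.svc:8529/_api/cursor/12345
```

With a load-balanced endpoint this pinning is impossible, which is why the text above recommends using the per-coordinator DNS names.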