Configuring your driver for ArangoDB access
In this chapter you’ll learn how to configure a driver for accessing an ArangoDB deployment in Kubernetes.
The exact methods to configure a driver are specific to that driver.
Database endpoint(s)
The endpoint(s) (or URLs) to communicate with is the most important parameter you need to configure in your driver.
Finding the right endpoints depends on whether your client application is running in the same Kubernetes cluster as the ArangoDB deployment or not.
Client application in same Kubernetes cluster
If your client application is running in the same Kubernetes cluster as the ArangoDB deployment, you should configure your driver to use the following endpoint:
https://<deployment-name>.<namespace>.svc:8529
Only if your deployment has set spec.tls.caSecretName to None should you use http instead of https.
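For example, with the Go driver (github.com/arangodb/go-driver) the connection could be set up as sketched below; the deployment name example-simple-cluster, the default namespace, and the root credentials are assumptions you need to replace with your own values:

package main

import (
	driver "github.com/arangodb/go-driver"
	"github.com/arangodb/go-driver/http"
)

func main() {
	// Connect through the internal Service name of the deployment.
	// Server certificate verification is covered in the TLS settings section below.
	conn, err := http.NewConnection(http.ConnectionConfig{
		Endpoints: []string{"https://example-simple-cluster.default.svc:8529"},
	})
	if err != nil {
		panic(err)
	}
	client, err := driver.NewClient(driver.ClientConfig{
		Connection:     conn,
		Authentication: driver.BasicAuthentication("root", "my-password"), // assumed credentials
	})
	if err != nil {
		panic(err)
	}
	_ = client // use the client to open databases, collections, etc.
}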
Client application outside Kubernetes cluster
If your client application is running outside the Kubernetes cluster in which the ArangoDB deployment is running, your driver endpoint depends on the external-access configuration of your ArangoDB deployment.
If the external-access of the ArangoDB deployment is of type LoadBalancer, then use the IP address of that LoadBalancer like this:
https://<load-balancer-ip>:8529
If the external-access of the ArangoDB deployment is of type NodePort, then use the IP address(es) of the Nodes of the Kubernetes cluster, combined with the NodePort that is used by the external-access service.
For example:
https://<kubernetes-node-1-ip>:30123
You can find the type of external-access by inspecting the external-access Service. To do so, run the following command:
kubectl get service -n <namespace-of-deployment> <deployment-name>-ea
The output looks like this:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
example-simple-cluster-ea LoadBalancer 10.106.175.38 192.168.10.208 8529:31890/TCP 1s app=arangodb,arango_deployment=example-simple-cluster,role=coordinator
In this case the external-access is of type LoadBalancer with a load-balancer IP address of 192.168.10.208. This results in an endpoint of https://192.168.10.208:8529.
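If you use the Go driver as in the sketch above, only the Endpoints value of the connection changes; the address below is the load-balancer IP from this example:

	// Same setup as the in-cluster example above, but with external endpoints.
	conn, err := http.NewConnection(http.ConnectionConfig{
		// For a LoadBalancer service use its IP; for a NodePort service list
		// one https://<node-ip>:<node-port> endpoint per Kubernetes node.
		Endpoints: []string{"https://192.168.10.208:8529"},
	})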
TLS settings
As mentioned before, the ArangoDB deployment managed by the ArangoDB operator will use a secure (TLS) connection unless you set spec.tls.caSecretName to None in your ArangoDeployment.
When using a secure connection, you can choose whether or not to verify the server certificates provided by the ArangoDB servers.
If you want to verify these certificates, configure your driver with the CA certificate found in a Kubernetes Secret in the same namespace as the ArangoDeployment.
The name of this Secret is stored in the spec.tls.caSecretName setting of the ArangoDeployment. If you don’t set this setting explicitly, it will be set automatically.
Then fetch the CA secret using the following command (or use a Kubernetes client library to fetch it):
kubectl get secret -n <namespace> <secret-name> --template='{{index .data "ca.crt"}}' | base64 --decode > ca.crt
This results in a file called ca.crt containing a PEM-encoded x509 CA certificate.
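The ca.crt file can then be used to verify the server certificates in your driver. A minimal sketch with the Go driver, assuming the same endpoint and credentials as in the earlier example:

package main

import (
	"crypto/tls"
	"crypto/x509"
	"os"

	driver "github.com/arangodb/go-driver"
	"github.com/arangodb/go-driver/http"
)

func main() {
	// Load the CA certificate fetched from the Kubernetes Secret.
	caCert, err := os.ReadFile("ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caCert) {
		panic("failed to parse ca.crt")
	}
	// Verify the server certificates of the ArangoDB servers against this CA.
	conn, err := http.NewConnection(http.ConnectionConfig{
		Endpoints: []string{"https://example-simple-cluster.default.svc:8529"},
		TLSConfig: &tls.Config{RootCAs: pool},
	})
	if err != nil {
		panic(err)
	}
	client, err := driver.NewClient(driver.ClientConfig{
		Connection:     conn,
		Authentication: driver.BasicAuthentication("root", "my-password"), // assumed credentials
	})
	if err != nil {
		panic(err)
	}
	_ = client
}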
Query requests
For most client requests made by a driver, it does not matter if there is any kind of load-balancer between your client application and the ArangoDB deployment.
Note that even a simple Service of type ClusterIP already behaves as a load-balancer.
The exception to this is cursor-related requests made to an ArangoDB Cluster deployment. The Coordinator that handles an initial query request (that results in a Cursor) keeps some in-memory state if the result of the query is too big to be transferred back in the response of the initial request.
Follow-up requests have to be made to fetch the remaining data. These follow-up requests must be handled by the same Coordinator to which the initial request was made. As soon as there is a load-balancer between your client application and the ArangoDB cluster, it is uncertain which Coordinator will receive the follow-up request.
ArangoDB will transparently forward any mismatched requests to the correct Coordinator, so the requests can be answered correctly without any additional configuration. However, this incurs a small latency penalty due to the extra request across the internal network.
To prevent this uncertainty client-side, make sure to run your client application in the same Kubernetes cluster and synchronize your endpoints before making the initial query request. The driver will then use the internal DNS names of all Coordinators, so a follow-up request can be sent to exactly the same Coordinator.
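With the Go driver, synchronizing the endpoints before the query could look like the sketch below; SynchronizeEndpoints is part of that driver, while the database name, query, and collection are placeholders:

import (
	"context"

	driver "github.com/arangodb/go-driver"
)

// runQuery synchronizes the endpoints before issuing a query that creates a
// cursor, so follow-up (cursor) requests can reach the same Coordinator.
func runQuery(ctx context.Context, client driver.Client) error {
	// Let the driver discover the internal DNS names of all Coordinators.
	if err := client.SynchronizeEndpoints(ctx); err != nil {
		return err
	}
	db, err := client.Database(ctx, "mydb")
	if err != nil {
		return err
	}
	cursor, err := db.Query(ctx, "FOR d IN mycollection RETURN d", nil)
	if err != nil {
		return err
	}
	defer cursor.Close()
	for cursor.HasMore() {
		var doc map[string]interface{}
		if _, err := cursor.ReadDocument(ctx, &doc); err != nil {
			return err
		}
		// process doc ...
	}
	return nil
}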
If your client application is running outside the Kubernetes cluster, the easiest way to work around this is to make sure that the query results are small enough to be returned in a single request. When that is not feasible, you can also resolve this by exposing the internal DNS names of your Kubernetes cluster to your client application, provided the resulting IP addresses are routable from your client application. To expose the internal DNS names of your Kubernetes cluster, you can use CoreDNS.