Ingress Configuration

The Argo CD API server runs both a gRPC server (used by the CLI) and an HTTP/HTTPS server (used by the UI). Both protocols are exposed by the argocd-server service object on the following ports:

  • 443 - gRPC/HTTPS
  • 80 - HTTP (redirects to HTTPS)

There are several ways Ingress can be configured.

Ambassador

The Ambassador Edge Stack can be used as a Kubernetes ingress controller with automatic TLS termination and routing capabilities for both the CLI and the UI.

The API server should be run with TLS disabled. Edit the argocd-server deployment to add the --insecure flag to the argocd-server command, or simply set server.insecure: "true" in the argocd-cmd-params-cm ConfigMap as described here. Because the argocd CLI includes the port number in the request host header, two Mappings are required. Note: Disabling TLS is not required if you are using grpc-web.
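As a concrete sketch, the ConfigMap route might look like the minimal fragment below (assuming the default argocd namespace; the argocd-server pod must be restarted for the change to take effect):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
data:
  # Run the API server without TLS, since the ingress layer terminates it
  server.insecure: "true"
```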

Option 1: Mapping CRD for Host-based Routing

```yaml
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: argocd-server-ui
  namespace: argocd
spec:
  host: argocd.example.com
  prefix: /
  service: https://argocd-server:443
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: argocd-server-cli
  namespace: argocd
spec:
  # NOTE: the port must be ignored if you have strip_matching_host_port enabled on envoy
  host: argocd.example.com:443
  prefix: /
  service: argocd-server:80
  regex_headers:
    Content-Type: "^application/grpc.*$"
  grpc: true
```

Login with the argocd CLI:

```bash
argocd login <host>
```

Option 2: Mapping CRD for Path-based Routing

The API server must be configured to be available under a non-root path (e.g. /argo-cd). Edit the argocd-server deployment to add the --rootpath=/argo-cd flag to the argocd-server command.

```yaml
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: argocd-server
  namespace: argocd
spec:
  prefix: /argo-cd
  rewrite: /argo-cd
  service: https://argocd-server:443
```

Example argocd-cmd-params-cm ConfigMap:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
  labels:
    app.kubernetes.io/name: argocd-cmd-params-cm
    app.kubernetes.io/part-of: argocd
data:
  ## Server properties
  # Value for base href in index.html. Used if Argo CD is running behind reverse proxy under subpath different from / (default "/")
  server.basehref: "/argo-cd"
  # Used if Argo CD is running behind reverse proxy under subpath different from /
  server.rootpath: "/argo-cd"
```

Login with the argocd CLI using the extra --grpc-web-root-path flag for non-root paths.

```bash
argocd login <host>:<port> --grpc-web-root-path /argo-cd
```

Contour

The Contour ingress controller can terminate TLS ingress traffic at the edge.

The Argo CD API server should be run with TLS disabled. Edit the argocd-server Deployment to add the --insecure flag to the argocd-server container command, or simply set server.insecure: "true" in the argocd-cmd-params-cm ConfigMap as described here.

It is also possible to provide an internal-only ingress path and an external-only ingress path by deploying two instances of Contour: one behind a private-subnet LoadBalancer service and one behind a public-subnet LoadBalancer service. The private Contour deployment will pick up Ingresses annotated with kubernetes.io/ingress.class: contour-internal and the public Contour deployment will pick up Ingresses annotated with kubernetes.io/ingress.class: contour-external.

This provides the opportunity to deploy the Argo CD UI privately but still allow for SSO callbacks to succeed.

Private Argo CD UI with Multiple Ingress Objects and BYO Certificate

Since Contour Ingress supports only a single protocol per Ingress object, define three Ingress objects: one for private HTTP/HTTPS, one for private gRPC, and one for public HTTPS SSO callbacks.

Internal HTTP/HTTPS Ingress:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-http
  annotations:
    kubernetes.io/ingress.class: contour-internal
    ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  rules:
  - host: internal.path.to.argocd.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              name: http
  tls:
  - hosts:
    - internal.path.to.argocd.io
    secretName: your-certificate-name
```

Internal gRPC Ingress:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-grpc
  annotations:
    kubernetes.io/ingress.class: contour-internal
spec:
  rules:
  - host: grpc-internal.path.to.argocd.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              name: https
  tls:
  - hosts:
    - grpc-internal.path.to.argocd.io
    secretName: your-certificate-name
```

External HTTPS SSO Callback Ingress:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-external-callback-http
  annotations:
    kubernetes.io/ingress.class: contour-external
    ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  rules:
  - host: external.path.to.argocd.io
    http:
      paths:
      - path: /api/dex/callback
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              name: http
  tls:
  - hosts:
    - external.path.to.argocd.io
    secretName: your-certificate-name
```

The argocd-server Service needs to be annotated with projectcontour.io/upstream-protocol.h2c: "https,443" to wire up the gRPC protocol proxying.
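For reference, the annotated Service metadata might look like this (a sketch showing only the relevant fields):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: argocd-server
  namespace: argocd
  annotations:
    # Tells Contour to proxy h2c (gRPC) to the https/443 upstream
    projectcontour.io/upstream-protocol.h2c: "https,443"
```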

The API server should then be run with TLS disabled. Edit the argocd-server deployment to add the --insecure flag to the argocd-server command, or simply set server.insecure: "true" in the argocd-cmd-params-cm ConfigMap as described here.

Contour HTTPProxy CRD:

Using a Contour HTTPProxy CRD allows you to use the same hostname for the gRPC and REST APIs.

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: argocd-server
  namespace: argocd
spec:
  ingressClassName: contour
  virtualhost:
    fqdn: path.to.argocd.io
    tls:
      secretName: wildcard-tls
  routes:
    - conditions:
        - prefix: /
        - header:
            name: Content-Type
            contains: application/grpc
      services:
        - name: argocd-server
          port: 80
          protocol: h2c # allows for unencrypted http2 connections
      timeoutPolicy:
        response: 1h
        idle: 600s
        idleConnection: 600s
    - conditions:
        - prefix: /
      services:
        - name: argocd-server
          port: 80
```

kubernetes/ingress-nginx

Option 1: SSL-Passthrough

Argo CD serves multiple protocols (gRPC/HTTPS) on the same port (443). This presents a challenge when attempting to define a single nginx Ingress object and rule for the argocd-server service, since the nginx.ingress.kubernetes.io/backend-protocol annotation accepts only a single value for the backend protocol (e.g. HTTP, HTTPS, GRPC, GRPCS).

In order to expose the Argo CD API server with a single ingress rule and hostname, the nginx.ingress.kubernetes.io/ssl-passthrough annotation must be used to passthrough TLS connections and terminate TLS at the Argo CD API server.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-ingress
  namespace: argocd
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  ingressClassName: nginx
  rules:
  - host: argocd.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              name: https
```

The above rule terminates TLS at the Argo CD API server, which detects the protocol being used, and responds appropriately. Note that the nginx.ingress.kubernetes.io/ssl-passthrough annotation requires that the --enable-ssl-passthrough flag be added to the command line arguments to nginx-ingress-controller.
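If your controller does not already run with that flag, the relevant fragment of the ingress-nginx controller Deployment would look roughly like this (a sketch; the deployment name, container name, and existing arguments vary by installation):

```yaml
spec:
  template:
    spec:
      containers:
      - name: controller
        args:
        - /nginx-ingress-controller
        # ...existing arguments...
        - --enable-ssl-passthrough
```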

SSL-Passthrough with cert-manager and Let’s Encrypt

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-ingress
  namespace: argocd
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    # If you encounter a redirect loop or are getting a 307 response code
    # then you need to force the nginx ingress to connect to the backend using HTTPS.
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  rules:
  - host: argocd.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              name: https
  tls:
  - hosts:
    - argocd.example.com
    secretName: argocd-server-tls # as expected by argocd-server
```

Option 2: SSL Termination at Ingress Controller

An alternative approach is to perform the SSL termination at the Ingress. Since an ingress-nginx Ingress supports only a single protocol per Ingress object, two Ingress objects need to be defined using the nginx.ingress.kubernetes.io/backend-protocol annotation, one for HTTP/HTTPS and the other for gRPC.

Each ingress will be for a different domain (argocd.example.com and grpc.argocd.example.com). This requires that the Ingress resources use different TLS secretNames to avoid unexpected behavior.

HTTP/HTTPS Ingress:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-http-ingress
  namespace: argocd
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              name: http
    host: argocd.example.com
  tls:
  - hosts:
    - argocd.example.com
    secretName: argocd-ingress-http
```

gRPC Ingress:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-grpc-ingress
  namespace: argocd
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              name: https
    host: grpc.argocd.example.com
  tls:
  - hosts:
    - grpc.argocd.example.com
    secretName: argocd-ingress-grpc
```

The API server should then be run with TLS disabled. Edit the argocd-server deployment to add the --insecure flag to the argocd-server command, or simply set server.insecure: "true" in the argocd-cmd-params-cm ConfigMap as described here.

The obvious disadvantage of this approach is that it requires two separate hostnames for the API server: one for gRPC and the other for HTTP/HTTPS. However, it allows TLS termination to happen at the ingress controller.

Traefik (v3.0)

Traefik can be used as an edge router and provide TLS termination within the same deployment.

It currently has an advantage over NGINX in that it can terminate both TCP and HTTP connections on the same port meaning you do not require multiple hosts or paths.

The API server should be run with TLS disabled. Edit the argocd-server deployment to add the --insecure flag to the argocd-server command or set server.insecure: "true" in the argocd-cmd-params-cm ConfigMap as described here.

IngressRoute CRD

```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: argocd-server
  namespace: argocd
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`argocd.example.com`)
      priority: 10
      services:
        - name: argocd-server
          port: 80
    - kind: Rule
      match: Host(`argocd.example.com`) && Header(`Content-Type`, `application/grpc`)
      priority: 11
      services:
        - name: argocd-server
          port: 80
          scheme: h2c
  tls:
    certResolver: default
```

AWS Application Load Balancers (ALBs) And Classic ELB (HTTP Mode)

AWS ALBs can be used as an L7 Load Balancer for both UI and gRPC traffic, whereas Classic ELBs and NLBs can be used as L4 Load Balancers for both.

When using an ALB, you'll want to create a second service for argocd-server. This is necessary because we need to tell the ALB to send the gRPC traffic to a different target group than the UI traffic, since the backend protocol is HTTP2 instead of HTTP1.

```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-protocol-version: HTTP2 # This tells AWS to send traffic from the ALB using HTTP2. Can use GRPC as well if you want to leverage GRPC-specific features
  labels:
    app: argogrpc
  name: argogrpc
  namespace: argocd
spec:
  ports:
  - name: "443"
    port: 443
    protocol: TCP
    targetPort: 8080
  selector:
    app.kubernetes.io/name: argocd-server
  sessionAffinity: None
  type: NodePort
```

Once we create this service, we can configure the Ingress to conditionally route all application/grpc traffic to the new HTTP2 backend, using the alb.ingress.kubernetes.io/conditions annotation, as seen below. Note: The value after the . in the condition annotation must be the same name as the service that you want traffic to route to, and will be applied on any path with a matching serviceName.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-protocol: HTTPS
    # Use this annotation (which must match a service name) to route traffic to HTTP2 backends.
    alb.ingress.kubernetes.io/conditions.argogrpc: |
      [{"field":"http-header","httpHeaderConfig":{"httpHeaderName": "Content-Type", "values":["application/grpc"]}}]
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
  name: argocd
  namespace: argocd
spec:
  rules:
  - host: argocd.argoproj.io
    http:
      paths:
      - path: /
        backend:
          service:
            name: argogrpc
            port:
              number: 443
        pathType: Prefix
      - path: /
        backend:
          service:
            name: argocd-server
            port:
              number: 443
        pathType: Prefix
  tls:
  - hosts:
    - argocd.argoproj.io
```

Istio

You can put Argo CD behind Istio using the following configuration. Here we will both serve Argo CD behind Istio and use a subpath on Istio.

First, we need to make sure that Argo CD can run under a subpath (i.e. /argocd). For this, we use install.yaml from the Argo CD project as-is:

```bash
curl -kLs -o install.yaml https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```

Save the following file as kustomization.yml:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ./install.yaml
patches:
- path: ./patch.yml
```

And the following lines as patch.yml:

```yaml
# Use --insecure so Ingress can send traffic with HTTP
# --basehref /argocd is the subpath like https://IP/argocd
# env was added because of the https://github.com/argoproj/argo-cd/issues/3572 error
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-server
spec:
  template:
    spec:
      containers:
      - args:
        - /usr/local/bin/argocd-server
        - --staticassets
        - /shared/app
        - --redis
        - argocd-redis:6379
        - --insecure
        - --basehref
        - /argocd
        - --rootpath
        - /argocd
        name: argocd-server
        env:
        - name: ARGOCD_MAX_CONCURRENT_LOGIN_REQUESTS_COUNT
          value: "0"
```

After that, install Argo CD (only the three YAML files defined above should be in the current directory):

```bash
kubectl apply -k ./ -n argocd --wait=true
```

Be sure to create the secret for Istio (in our case the secret name is argocd-server-tls in the argocd namespace). After that, we create the Istio resources:
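Creating that secret might look like this (a sketch; the certificate and key file names are placeholders for your own files):

```bash
# Create the TLS secret referenced by the Gateway's credentialName
kubectl -n argocd create secret tls argocd-server-tls \
  --cert=cert-file.crt --key=key-file.key
```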

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: argocd-gateway
  namespace: argocd
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
    tls:
      httpsRedirect: true
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "*"
    tls:
      credentialName: argocd-server-tls
      maxProtocolVersion: TLSV1_3
      minProtocolVersion: TLSV1_2
      mode: SIMPLE
      cipherSuites:
      - ECDHE-ECDSA-AES128-GCM-SHA256
      - ECDHE-RSA-AES128-GCM-SHA256
      - ECDHE-ECDSA-AES128-SHA
      - AES128-GCM-SHA256
      - AES128-SHA
      - ECDHE-ECDSA-AES256-GCM-SHA384
      - ECDHE-RSA-AES256-GCM-SHA384
      - ECDHE-ECDSA-AES256-SHA
      - AES256-GCM-SHA384
      - AES256-SHA
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: argocd-virtualservice
  namespace: argocd
spec:
  hosts:
  - "*"
  gateways:
  - argocd-gateway
  http:
  - match:
    - uri:
        prefix: /argocd
    route:
    - destination:
        host: argocd-server
        port:
          number: 80
```

And now we can browse http://{{ IP }}/argocd (it will be redirected to https://{{ IP }}/argocd).

Google Cloud load balancers with Kubernetes Ingress

You can make use of the integration of GKE with Google Cloud to deploy Load Balancers using just Kubernetes objects.

For this we will need these five objects:

  • A Service
  • A BackendConfig
  • A FrontendConfig
  • A secret with your SSL certificate
  • An Ingress for GKE

If you need details on all the options available for these Google integrations, check the Google docs on configuring Ingress features.

Disable internal TLS

First, to avoid internal redirection loops from HTTP to HTTPS, the API server should be run with TLS disabled.

Edit the argocd-server deployment to add the --insecure flag to the argocd-server command, or simply set server.insecure: "true" in the argocd-cmd-params-cm ConfigMap as described here.

Creating a service

Now you need an externally accessible service. This is practically the same as the internal service Argo CD has, but with Google Cloud annotations. Note that this service is annotated to use a Network Endpoint Group (NEG) to allow your load balancer to send traffic directly to your pods without using kube-proxy, so remove the neg annotation if that’s not what you want.

The service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: argocd-server
  namespace: argocd
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
    cloud.google.com/backend-config: '{"ports": {"http":"argocd-backend-config"}}'
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app.kubernetes.io/name: argocd-server
```

Creating a BackendConfig

The previous service references a BackendConfig called argocd-backend-config, so let's deploy it using this YAML:

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: argocd-backend-config
  namespace: argocd
spec:
  healthCheck:
    checkIntervalSec: 30
    timeoutSec: 5
    healthyThreshold: 1
    unhealthyThreshold: 2
    type: HTTP
    requestPath: /healthz
    port: 8080
```

It uses the same health check as the pods.

Creating a FrontendConfig

Now we can deploy a frontend config with an HTTP to HTTPS redirect:

```yaml
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: argocd-frontend-config
  namespace: argocd
spec:
  redirectToHttps:
    enabled: true
```

Note

The next two steps (the certificate secret and the Ingress) assume that you manage the certificate yourself and have the certificate and key files for it. If your certificate is Google-managed, adapt the next two steps using the guide to use a Google-managed SSL certificate.


Creating a certificate secret

We now need to create a secret with the SSL certificate we want in our load balancer. It's as easy as executing this command in the directory where your certificate files are stored:

```bash
kubectl -n argocd create secret tls secret-yourdomain-com \
  --cert cert-file.crt --key key-file.key
```

Creating an Ingress

And finally, to top it all, our Ingress. Note the references to our frontend config, the service, and the certificate secret.


Note

For GKE clusters running versions earlier than 1.21.3-gke.1600, the only supported value for the pathType field is ImplementationSpecific. Check your GKE cluster's version and use the appropriate YAML.


If you use a version earlier than 1.21.3-gke.1600, use the following Ingress resource:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd
  namespace: argocd
  annotations:
    networking.gke.io/v1beta1.FrontendConfig: argocd-frontend-config
spec:
  tls:
  - secretName: secret-example-com
  rules:
  - host: argocd.example.com
    http:
      paths:
      - pathType: ImplementationSpecific
        path: "/*" # "*" is needed. Without this, the UI Javascript and CSS will not load properly
        backend:
          service:
            name: argocd-server
            port:
              number: 80
```

If you use version 1.21.3-gke.1600 or later, use the following Ingress resource:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd
  namespace: argocd
  annotations:
    networking.gke.io/v1beta1.FrontendConfig: argocd-frontend-config
spec:
  tls:
  - secretName: secret-example-com
  rules:
  - host: argocd.example.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: argocd-server
            port:
              number: 80
```

As you may already know, it can take some minutes for the load balancer to deploy and become ready to accept connections. Once it's ready, get the public IP address of your load balancer, go to your DNS server (Google or third party), and point your domain or subdomain (e.g. argocd.example.com) to that IP address.

You can get that IP address by describing the Ingress object:

```bash
kubectl -n argocd describe ingresses argocd | grep Address
```

Once the DNS change has propagated, you're ready to use Argo CD with your Google Cloud load balancer.

Authenticating through multiple layers of authenticating reverse proxies

Argo CD endpoints may be protected by one or more reverse proxy layers. In that case, you can provide additional headers through the argocd CLI --header parameter to authenticate through those layers.

```bash
$ argocd login <host>:<port> --header 'x-token1:foo' --header 'x-token2:bar' # can be repeated multiple times
$ argocd login <host>:<port> --header 'x-token1:foo,x-token2:bar' # headers can also be comma separated
```

Argo CD Server and UI Root Path (v1.5.3)

Argo CD server and UI can be configured to be available under a non-root path (e.g. /argo-cd). To do this, add the --rootpath flag into the argocd-server deployment command:

```yaml
spec:
  template:
    spec:
      name: argocd-server
      containers:
      - command:
        - /argocd-server
        - --repo-server
        - argocd-repo-server:8081
        - --rootpath
        - /argo-cd
```

NOTE: The --rootpath flag changes both the API server and UI base URL. Example nginx.conf:

```nginx
worker_processes 1;

events { worker_connections 1024; }

http {
    sendfile on;

    server {
        listen 443;

        location /argo-cd/ {
            proxy_pass https://localhost:8080/argo-cd/;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
            # buffering should be disabled for api/v1/stream/applications to support chunked response
            proxy_buffering off;
        }
    }
}
```

The --grpc-web-root-path flag is used to provide the non-root path (e.g. /argo-cd):

```bash
$ argocd login <host>:<port> --grpc-web-root-path /argo-cd
```

UI Base Path

If the Argo CD UI is available under a non-root path (e.g. /argo-cd instead of /) then the UI path should be configured in the API server. To configure the UI path add the --basehref flag into the argocd-server deployment command:

```yaml
spec:
  template:
    spec:
      name: argocd-server
      containers:
      - command:
        - /argocd-server
        - --repo-server
        - argocd-repo-server:8081
        - --basehref
        - /argo-cd
```

NOTE: The --basehref flag only changes the UI base URL. The API server will keep using the / path, so you need to add a URL rewrite rule to the proxy config. Example nginx.conf with URL rewrite:

```nginx
worker_processes 1;

events { worker_connections 1024; }

http {
    sendfile on;

    server {
        listen 443;

        location /argo-cd {
            rewrite /argo-cd/(.*) /$1 break;
            proxy_pass https://localhost:8080;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
            # buffering should be disabled for api/v1/stream/applications to support chunked response
            proxy_buffering off;
        }
    }
}
```