client-go Example

There are several ways to access a Kubernetes cluster:

| Method | Characteristics | Supported by |
| ------ | --------------- | ------------ |
| Kubernetes dashboard | Operated directly through a web UI; simple and direct, but offers little customization | Official |
| kubectl | Command-line tool with the most complete functionality, but relatively complex; well suited for further wrapping and customization, with the best version compatibility | Official |
| client-go | Client package extracted from the Kubernetes codebase; easy to use, but be careful to match the Kubernetes API version | Official |
| client-python | Python client, from kubernetes-incubator | Official |
| Java client | Part of fabric8; a Java client for Kubernetes | Red Hat |

Below, based on client-go, we customize the procedure for upgrading a Deployment's image: the Deployment name, the application container name, and the new image name are passed on the command line. The code and usage instructions are available at https://github.com/rootsongjc/kubernetes-client-go-sample


The code is as follows:

```go
package main

import (
	"flag"
	"fmt"
	"os"
	"path/filepath"

	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	var kubeconfig *string
	if home := homeDir(); home != "" {
		kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
	} else {
		kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
	}
	deploymentName := flag.String("deployment", "", "deployment name")
	imageName := flag.String("image", "", "new image name")
	appName := flag.String("app", "app", "application name")
	flag.Parse()
	if *deploymentName == "" {
		fmt.Println("You must specify the deployment name.")
		os.Exit(1)
	}
	if *imageName == "" {
		fmt.Println("You must specify the new image name.")
		os.Exit(1)
	}
	// use the current context in kubeconfig
	config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		panic(err.Error())
	}
	// create the clientset
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err.Error())
	}
	// fetch the deployment; distinguish "not found" and API status errors
	// from other failures before giving up
	deployment, err := clientset.AppsV1beta1().Deployments("default").Get(*deploymentName, metav1.GetOptions{})
	if errors.IsNotFound(err) {
		fmt.Printf("Deployment not found\n")
		os.Exit(1)
	} else if statusError, isStatus := err.(*errors.StatusError); isStatus {
		fmt.Printf("Error getting deployment: %v\n", statusError.ErrStatus.Message)
		os.Exit(1)
	} else if err != nil {
		panic(err.Error())
	}
	fmt.Printf("Found deployment\n")
	fmt.Println("name ->", deployment.GetName())
	// find the application container in the pod template and swap its image
	containers := deployment.Spec.Template.Spec.Containers
	found := false
	for i := range containers {
		if containers[i].Name == *appName {
			found = true
			fmt.Println("Old image ->", containers[i].Image)
			fmt.Println("New image ->", *imageName)
			containers[i].Image = *imageName
		}
	}
	if !found {
		fmt.Println("The application container does not exist in the deployment's pods.")
		os.Exit(1)
	}
	if _, err := clientset.AppsV1beta1().Deployments("default").Update(deployment); err != nil {
		panic(err.Error())
	}
}

func homeDir() string {
	if h := os.Getenv("HOME"); h != "" {
		return h
	}
	return os.Getenv("USERPROFILE") // windows
}
```

We authenticate to the Kubernetes cluster using a kubeconfig file, which is located at `$HOME/.kube/config` by default.

After compiling, the binary can be run outside the Kubernetes cluster, on any machine that can reach the API server.

Build and run

```bash
$ go get github.com/rootsongjc/kubernetes-client-go-sample
$ cd $GOPATH/src/github.com/rootsongjc/kubernetes-client-go-sample
$ make
$ ./update-deployment-image -h
Usage of ./update-deployment-image:
  -alsologtostderr
        log to standard error as well as files
  -app string
        application name (default "app")
  -deployment string
        deployment name
  -image string
        new image name
  -kubeconfig string
        (optional) absolute path to the kubeconfig file (default "/Users/jimmy/.kube/config")
  -log_backtrace_at value
        when logging hits line file:N, emit a stack trace
  -log_dir string
        If non-empty, write log files in this directory
  -logtostderr
        log to standard error instead of files
  -stderrthreshold value
        logs at or above this threshold go to stderr
  -v value
        log level for V logs
  -vmodule value
        comma-separated list of pattern=N settings for file-filtered logging
```

Updating with a nonexistent image

```bash
$ ./update-deployment-image -deployment filebeat-test -image harbor-001.jimmysong.io/library/analytics-docker-test:Build_9
Found deployment
name -> filebeat-test
Old image -> harbor-001.jimmysong.io/library/analytics-docker-test:Build_8
New image -> harbor-001.jimmysong.io/library/analytics-docker-test:Build_9
```

Check the Deployment's events.

```bash
$ kubectl describe deployment filebeat-test
Name:                   filebeat-test
Namespace:              default
CreationTimestamp:      Fri, 19 May 2017 15:12:28 +0800
Labels:                 k8s-app=filebeat-test
Selector:               k8s-app=filebeat-test
Replicas:               2 updated | 3 total | 2 available | 2 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
Conditions:
  Type          Status  Reason
  ----          ------  ------
  Available     True    MinimumReplicasAvailable
  Progressing   True    ReplicaSetUpdated
OldReplicaSets:  filebeat-test-2365467882 (2/2 replicas created)
NewReplicaSet:   filebeat-test-2470325483 (2/2 replicas created)
Events:
  FirstSeen  LastSeen  Count  From                     SubObjectPath  Type    Reason             Message
  ---------  --------  -----  ----                     -------------  ------  ------             -------
  2h         1m        3      {deployment-controller }                Normal  ScalingReplicaSet  Scaled down replica set filebeat-test-2365467882 to 2
  1m         1m        1      {deployment-controller }                Normal  ScalingReplicaSet  Scaled up replica set filebeat-test-2470325483 to 1
  1m         1m        1      {deployment-controller }                Normal  ScalingReplicaSet  Scaled up replica set filebeat-test-2470325483 to 2
```

As you can see, the old ReplicaSet has been scaled down from 3 replicas to 2, the 2 replicas running the new configuration are unavailable, and only 2 replicas are currently available.
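These counts follow from the RollingUpdate strategy shown in the describe output above (1 max unavailable, 1 max surge). A minimal sketch of the arithmetic, assuming absolute values rather than percentages (the helper name is ours, not part of client-go):

```go
package main

import "fmt"

// rollingUpdateBounds computes the replica-count limits the Deployment
// controller enforces during a rolling update, given absolute
// maxUnavailable and maxSurge values.
func rollingUpdateBounds(replicas, maxUnavailable, maxSurge int) (minAvailable, maxTotal int) {
	// at least replicas-maxUnavailable pods must stay available,
	// and at most replicas+maxSurge pods may exist at once
	return replicas - maxUnavailable, replicas + maxSurge
}

func main() {
	minAvail, maxTotal := rollingUpdateBounds(3, 1, 1)
	fmt.Println("min available:", minAvail) // 2: old RS may be scaled down to 2
	fmt.Println("max total:", maxTotal)     // 4: room to surge one new pod
}
```

With 3 desired replicas this allows exactly the state observed above: 2 old pods kept available while 2 new (broken) pods are created.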

This is because the image we specified does not exist. Check the status of the Deployment's pods.

```bash
$ kubectl get pods -l k8s-app=filebeat-test
NAME                             READY     STATUS             RESTARTS   AGE
filebeat-test-2365467882-4zwx8   2/2       Running            0          33d
filebeat-test-2365467882-rqskl   2/2       Running            0          33d
filebeat-test-2470325483-6vjbw   1/2       ImagePullBackOff   0          4m
filebeat-test-2470325483-gc14k   1/2       ImagePullBackOff   0          4m
```

We can see that two pods are stuck in ImagePullBackOff: they cannot pull the new image.

Rolling back to the original image

Set the image back to the original one.

```bash
$ ./update-deployment-image -deployment filebeat-test -image harbor-001.jimmysong.io/library/analytics-docker-test:Build_8
Found deployment
name -> filebeat-test
Old image -> harbor-001.jimmysong.io/library/analytics-docker-test:Build_9
New image -> harbor-001.jimmysong.io/library/analytics-docker-test:Build_8
```
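Rolling back is just another image update with the old tag. When scripting such swaps, a small tag-replacement helper can be convenient; this is a hypothetical sketch that assumes the image reference carries no digest:

```go
package main

import (
	"fmt"
	"strings"
)

// replaceTag swaps the tag on an image reference like
// "registry.example.com/library/app:Build_9". The final ":" is treated
// as the tag separator only if it appears after the last "/", so
// registry ports such as "localhost:5000/app" are left intact.
func replaceTag(image, newTag string) string {
	if i := strings.LastIndex(image, ":"); i > strings.LastIndex(image, "/") {
		return image[:i+1] + newTag
	}
	return image + ":" + newTag
}

func main() {
	old := "harbor-001.jimmysong.io/library/analytics-docker-test:Build_9"
	fmt.Println(replaceTag(old, "Build_8"))
	// harbor-001.jimmysong.io/library/analytics-docker-test:Build_8
}
```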

Now check the Deployment's status again.

```bash
$ kubectl describe deployment filebeat-test
Name:                   filebeat-test
Namespace:              default
CreationTimestamp:      Fri, 19 May 2017 15:12:28 +0800
Labels:                 k8s-app=filebeat-test
Selector:               k8s-app=filebeat-test
Replicas:               3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
Conditions:
  Type          Status  Reason
  ----          ------  ------
  Available     True    MinimumReplicasAvailable
  Progressing   True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   filebeat-test-2365467882 (3/3 replicas created)
Events:
  FirstSeen  LastSeen  Count  From                     SubObjectPath  Type    Reason             Message
  ---------  --------  -----  ----                     -------------  ------  ------             -------
  2h         8m        3      {deployment-controller }                Normal  ScalingReplicaSet  Scaled down replica set filebeat-test-2365467882 to 2
  8m         8m        1      {deployment-controller }                Normal  ScalingReplicaSet  Scaled up replica set filebeat-test-2470325483 to 1
  8m         8m        1      {deployment-controller }                Normal  ScalingReplicaSet  Scaled up replica set filebeat-test-2470325483 to 2
  2h         1m        3      {deployment-controller }                Normal  ScalingReplicaSet  Scaled up replica set filebeat-test-2365467882 to 3
  1m         1m        1      {deployment-controller }                Normal  ScalingReplicaSet  Scaled down replica set filebeat-test-2470325483 to 0
```

You can see that the number of available replicas is back to 3.

In practice, while running this command it is more intuitive to watch the Deployment's status in the Kubernetes dashboard, which also makes troubleshooting easier.

Troubleshooting with the Kubernetes dashboard

This is the dashboard's biggest strength: it is simple, direct, and efficient.