Containerized Deployment

This article describes how to deploy the Linkis services in a Kind-based Kubernetes environment, providing a more lightweight way to learn, try out, and debug Linkis.

  • kind GitHub: https://github.com/kubernetes-sigs/kind
  • kind website: https://kind.sigs.k8s.io/

Component versions used in this article:

  • kind 0.14.0
  • docker 20.10.17
  • centos 7.6
  • helm 3.x

Notes:

  1. Make sure the component versions match the dependencies listed above.
  2. kind simulates cluster nodes with Docker containers, so after the host machine is rebooted the containers change and the scheduler stops working. This is a limitation of kind and is described in detail in its official documentation.
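A practical consequence of the second point (this remediation is not from the original text): after a host reboot, the simplest recovery is to delete the stale kind cluster and redeploy from scratch using the steps below.

```bash
# List existing kind clusters
kind get clusters
# Delete the stale cluster; <cluster-name> is a placeholder for the name printed above
kind delete cluster --name <cluster-name>
```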
Install Docker (CentOS 7, using the Aliyun mirror):

```bash
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sudo sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
sudo yum makecache fast
sudo yum -y install docker-ce
systemctl start docker
systemctl enable docker
```
Configure Docker registry mirrors by editing /etc/docker/daemon.json:

```json
{
  "registry-mirrors": ["http://hub-mirror.c.163.com"],
  "insecure-registries": ["https://registry.mydomain.com", "http://hub-mirror.c.163.com"]
}
```
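After changing daemon.json, restart Docker so the new registry configuration takes effect:

```bash
sudo systemctl daemon-reload
sudo systemctl restart docker
# Confirm that the mirror configuration was picked up
docker info | grep -A 1 "Registry Mirrors"
```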
Install kind. Download the binary for your platform from https://github.com/kubernetes-sigs/kind/releases, then make it executable and move it onto the PATH:

```bash
chmod +x ./kind-linux-amd64
mv ./kind-linux-amd64 /usr/bin/kind
```
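The component versions listed in the prerequisites can then be double-checked (helm is assumed to be installed already, since its installation is not covered in this article):

```bash
docker --version
kind version
helm version --short
```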

Version used: a package built from the dev-1.3.1 branch.

  • apache-linkis-1.3.1-bin.tar.gz
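The release package has to be unpacked before the deployment scripts can be used; a minimal sketch, assuming the tarball sits in the current directory (the name of the extracted directory may differ from the archive name):

```bash
tar -zxvf apache-linkis-1.3.1-bin.tar.gz
```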
Create the directory that will hold additional libraries:

```bash
mkdir -p /opt/data/common/extendlib
```

Copy the MySQL JDBC driver into /opt/data/common/extendlib:

```bash
curl https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.28/mysql-connector-java-8.0.28.jar \
  -o /opt/data/common/extendlib/mysql-connector-java-8.0.28.jar
```
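A quick check that the driver landed in the right place:

```bash
ls -lh /opt/data/common/extendlib/
```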
Deploy Linkis to kind with the install script:

```bash
# Clean up any previous deployment
./bin/install-linkis-to-kubernetes.sh reset
# Pull the images, using ghcr.dockerproxy.com as the image proxy
./bin/install-linkis-to-kubernetes.sh pull -mghcr.dockerproxy.com
# Install Linkis into the kind cluster
./bin/install-linkis-to-kubernetes.sh install -l -mghcr.dockerproxy.com
# Check that the pods come up
kubectl get pods -A
```
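It can take several minutes for all pods to become Ready. As a convenience (not part of the original steps), kubectl can block until everything is up:

```bash
# Wait up to 10 minutes for every pod in every namespace to report Ready
kubectl wait --for=condition=Ready pods --all --all-namespaces --timeout=600s
```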

Containerized Deployment - Figure 1

Prepare the Spark engine and expose the services outside the cluster:

```bash
# Prepare the environment needed by the Spark engine
./helm/scripts/prepare-for-spark.sh
# Start the proxy that makes the services reachable from outside the cluster
./helm/scripts/remote-proxy.sh start
```

Once the proxy is running, the services are available at the following addresses (10.0.2.101 is the host IP in this example):

  1. linkis-web: http://10.0.2.101:8088/
  2. eureka: http://10.0.2.101:20303/
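Optionally, reachability can be verified from a terminal before opening a browser (adjust the IP to your own host):

```bash
curl -sS -o /dev/null -w "%{http_code}\n" http://10.0.2.101:8088/
curl -sS -o /dev/null -w "%{http_code}\n" http://10.0.2.101:20303/
```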

Containerized Deployment - Figure 2

Run a few test jobs from inside the engineconnmanager container:

```bash
# Enter the cg-engineconnmanager container
./helm/scripts/login-pod.sh cg-engineconnmanager

# Shell test
sh ./bin/linkis-cli -engineType shell-1 -codeType shell -code "echo "hello" " -submitUser hadoop -proxyUser hadoop
# Hive test
sh ./bin/linkis-cli -engineType hive-2.3.3 -codeType hql -code "show tables" -submitUser hadoop -proxyUser hadoop
# Spark test
sh ./bin/linkis-cli -engineType spark-2.4.3 -codeType sql -code "show tables" -submitUser hadoop -proxyUser hadoop
# Python test
sh ./bin/linkis-cli -engineType python-python2 -codeType python -code "print(\"hello\")" -submitUser hadoop -proxyUser hadoop -confMap python.version=python
```

Containerized Deployment - Figure 3