
Kubernetes Deployment

Prerequisites

Usage

Configuration

  • Configure the external.pulsar section of values.yaml with the information of your Apache Pulsar cluster
  • Configure the external.flink section of values.yaml with the information of your Apache Flink cluster, as sketched below
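
As a rough sketch, the relevant part of values.yaml could look like the following. The external.pulsar keys match the configuration table later on this page; the exact external.flink keys are not listed there, so check the chart's values.yaml for the precise names:

external:
  pulsar:
    serviceUrl: localhost:6650   # Pulsar broker service URL
    adminUrl: localhost:8080     # Pulsar admin (HTTP) URL
  # flink: fill in the address of your Apache Flink cluster here,
  # using the keys defined in the chart's values.yaml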

Installation

If a namespace named inlong does not exist, create it with the following command:

kubectl create namespace inlong

Install the chart from the docker/kubernetes directory:

helm upgrade inlong --install -n inlong ./
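
To confirm that the release exists and the pods are coming up, you can use standard Helm and kubectl commands (nothing chart-specific):

helm list -n inlong
kubectl get pods -n inlong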

Access the InLong Dashboard

If the ingress.enabled field in values.yaml is set to true, you can access http://${ingress.host}/dashboard directly in a browser. Otherwise, if the dashboard.service.type field is set to ClusterIP, run the following commands to forward the port:

export DASHBOARD_POD_NAME=$(kubectl get pods -l "component=dashboard" -o jsonpath="{.items[0].metadata.name}" -n inlong)
export DASHBOARD_CONTAINER_PORT=$(kubectl get pod $DASHBOARD_POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}" -n inlong)
kubectl port-forward $DASHBOARD_POD_NAME 80:$DASHBOARD_CONTAINER_PORT --address='0.0.0.0' -n inlong

Then you can access http://127.0.0.1:80 to open the InLong Dashboard. The default login account is:

User: admin
Password: inlong
Note

If you see the error unable to do port forwarding: socat not found, you need to install socat first.
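
For example, on Debian or Ubuntu nodes socat can typically be installed with the system package manager (adjust for your distribution):

sudo apt-get install -y socat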

If the dashboard.service.type field is set to NodePort, run the following commands:

export DASHBOARD_NODE_IP=$(kubectl get nodes -o jsonpath="{.items[0].status.addresses[0].address}" -n inlong)
export DASHBOARD_NODE_PORT=$(kubectl get svc inlong-dashboard -o jsonpath="{.spec.ports[0].nodePort}" -n inlong)

Then you can access http://$DASHBOARD_NODE_IP:$DASHBOARD_NODE_PORT to open the InLong Dashboard.
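
If you want to check reachability from the command line first, plain curl works; an HTTP status code such as 200 indicates the dashboard is serving:

curl -s -o /dev/null -w "%{http_code}\n" http://$DASHBOARD_NODE_IP:$DASHBOARD_NODE_PORT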

If the dashboard.service.type field is set to LoadBalancer, run the following command:

export DASHBOARD_SERVICE_IP=$(kubectl get svc inlong-dashboard --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}" -n inlong)

Then you can access http://$DASHBOARD_SERVICE_IP:30080 to open the InLong Dashboard.

Note

This may take some time; you can run kubectl get svc inlong-dashboard -n inlong -w to watch its status.

The default username is admin and the default password is inlong; use them to log in to the InLong Dashboard.

Configuration

The configuration items live in the values.yaml file. The following table lists all configurable items and their default values:

| Parameter | Default | Description |
| --- | --- | --- |
| timezone | Asia/Shanghai | World time and date for cities in all time zones |
| images.pullPolicy | IfNotPresent | Image pull policy. One of Always, Never, IfNotPresent |
| images.<component>.repository | | Docker image repository for the component |
| images.<component>.tag | latest | Docker image tag for the component |
| <component>.component | | Component name |
| <component>.replicas | 1 | Replicas is the desired number of replicas of the given Template |
| <component>.podManagementPolicy | OrderedReady | PodManagementPolicy controls how pods are created during initial scale up, when replacing pods on nodes, or when scaling down |
| <component>.annotations | {} | The annotations field can be used to attach arbitrary non-identifying metadata to objects |
| <component>.tolerations | [] | Tolerations are applied to pods, and allow (but do not require) the pods to schedule onto nodes with matching taints |
| <component>.nodeSelector | {} | You can add the nodeSelector field to your Pod specification and specify the node labels you want the target node to have |
| <component>.affinity | {} | Node affinity is conceptually similar to nodeSelector, allowing you to constrain which nodes your Pod can be scheduled on based on node labels |
| <component>.terminationGracePeriodSeconds | 30 | Optional duration in seconds the pod needs to terminate gracefully |
| <component>.resources | {} | Optionally specify how much of each resource a container needs |
| <component>.port(s) | | The port(s) for each component service |
| <component>.env | {} | Environment variables for each component container |
| <component>.probe.<liveness\|readiness>.enabled | true | Turn the liveness or readiness probe on or off |
| <component>.probe.<liveness\|readiness>.failureThreshold | 10 | Minimum consecutive failures for the probe to be considered failed |
| <component>.probe.<liveness\|readiness>.initialDelaySeconds | 10 | Delay before the probe is initiated |
| <component>.probe.<liveness\|readiness>.periodSeconds | 30 | How often to perform the probe |
| <component>.volumes.name | | Volume name |
| <component>.volumes.size | 10Gi | Volume size |
| <component>.service.annotations | {} | The annotations field may need to be set when service.type is LoadBalancer |
| <component>.service.type | ClusterIP | The type field determines how the service is exposed. Valid options are ClusterIP, NodePort, LoadBalancer and ExternalName |
| <component>.service.clusterIP | nil | ClusterIP is the IP address of the service and is usually assigned randomly by the master |
| <component>.service.nodePort | nil | NodePort is the port on each node on which this service is exposed when the service type is NodePort |
| <component>.service.loadBalancerIP | nil | LoadBalancer will get created with the IP specified in this field when the service type is LoadBalancer |
| <component>.service.externalName | nil | ExternalName is the external reference that kubedns or equivalent will return as a CNAME record for this service; requires the service type to be ExternalName |
| <component>.service.externalIPs | [] | ExternalIPs is a list of IP addresses for which nodes in the cluster will also accept traffic for this service |
| external.mysql.enabled | false | Whether to use an external MySQL; if false, InLong uses the built-in MySQL by default |
| external.mysql.hostname | localhost | External MySQL hostname |
| external.mysql.port | 3306 | External MySQL port |
| external.mysql.username | root | External MySQL username |
| external.mysql.password | password | External MySQL password |
| external.pulsar.serviceUrl | localhost:6650 | External Pulsar service URL |
| external.pulsar.adminUrl | localhost:8080 | External Pulsar admin URL |

The optional components are: agent, audit, dashboard, dataproxy, manager, tubemq-manager, tubemq-master, tubemq-broker, zookeeper, mysql.
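
Any of the values above can be overridden either by editing values.yaml or by passing --set at install time. For example, to expose the dashboard through a NodePort service using the dashboard.service.type parameter from the table:

helm upgrade inlong --install -n inlong ./ --set dashboard.service.type=NodePort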

Uninstall

Uninstall the chart with the following command:

helm uninstall inlong -n inlong

The command above removes all Kubernetes components associated with the chart, except for the PVCs.

If the PVCs are no longer needed, delete them with the following command; this removes all data.

kubectl delete pvc -n inlong --all

Note: deleting the PVCs also deletes all data. Be careful before doing so.
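
If you want to review which PVCs will be removed before running the command above, list them first:

kubectl get pvc -n inlong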

Development

Before developing, you need a Kubernetes cluster with Helm installed. If you don't have one, that's fine: kind is recommended. It runs a local Kubernetes cluster inside Docker containers, so starting and stopping Kubernetes nodes takes very little time.

Quick start with kind

You can install kind by following the instructions in its Quick Start guide. Once kind is installed, you can create a Kubernetes cluster with the kind.yml configuration file:

kind create cluster --config kind.yml
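
The repository ships its own kind.yml. Purely as an illustration of the format (not the contents of that file), a minimal kind configuration with one control-plane node and one worker, named to match the kind-inlong-cluster context used below, might look like this:

# illustrative kind cluster config, not the repository's kind.yml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: inlong-cluster
nodes:
  - role: control-plane
  - role: worker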

You can specify a particular Docker image with --image, for example kind create cluster --image=.... Using a different image changes the Kubernetes version of the cluster. To find an image suitable for your version, check the kind release notes.

Then you can interact with the cluster using the following command:

kubectl cluster-info --context kind-inlong-cluster

Now you have a Kubernetes cluster that you can use for local development!

Install Helm

Please follow the installation guide to install it.

Install the chart

Create the namespace and install the chart with the following commands:

kubectl create namespace inlong
helm upgrade inlong --install -n inlong ./

This will take some time. Use the following command to check whether all pods start up normally:

watch kubectl get pods -n inlong -o wide

Development and debugging

Follow the template debugging guide to debug the chart you are developing.

In addition, you can save the rendered templates with the following command:

helm template ./ --output-dir ./result

Then you can inspect the rendered templates in the result folder.
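
A related quick check during development is linting the chart with a standard Helm command:

helm lint ./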

Troubleshooting

We have done our best to make these charts as correct as possible, but occasionally situations beyond our control occur. We have collected tips and tricks for troubleshooting common issues. Please check them before filing an issue, and feel free to open a Pull Request.