Version: 1.5.0

Kubernetes

Prerequisites

  • Kubernetes 1.10+
  • Helm 3.0+
  • InLong Helm Chart
  • A dynamic provisioner for the PersistentVolumes (production environment)
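
A quick way to check these prerequisites on an existing cluster is to verify the client and server versions and confirm that a default StorageClass (backed by a dynamic provisioner) is available, for example:

kubectl version
helm version
kubectl get storageclass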

Usage

Install

If the namespace named inlong does not exist, create it first by running:

kubectl create namespace inlong

To install the chart in the namespace named inlong, run:

helm upgrade inlong --install -n inlong ./

Access InLong Dashboard

If ingress.enabled in values.yaml is set to true, you can simply access http://${ingress.host}/dashboard in your browser.
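
For example, the ingress can be enabled at install time by overriding the two values mentioned above (the hostname below is only a placeholder):

helm upgrade inlong --install -n inlong \
  --set ingress.enabled=true \
  --set ingress.host=inlong.example.com \
  ./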

Otherwise, when dashboard.service.type is set to ClusterIP, you need to run the following port-forward commands:

export DASHBOARD_POD_NAME=$(kubectl get pods -l "app.kubernetes.io/name=inlong-dashboard,app.kubernetes.io/instance=inlong" -o jsonpath="{.items[0].metadata.name}" -n inlong)
export DASHBOARD_CONTAINER_PORT=$(kubectl get pod $DASHBOARD_POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}" -n inlong)
kubectl port-forward $DASHBOARD_POD_NAME 8181:$DASHBOARD_CONTAINER_PORT -n inlong

And then access http://127.0.0.1:8181

Tip: If the error unable to do port forwarding: socat not found appears, you need to install socat first.
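
On most Linux nodes, socat is available from the system package manager, for example:

# Debian/Ubuntu
sudo apt-get install -y socat
# CentOS/RHEL
sudo yum install -y socat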

Or when dashboard.service.type is set to NodePort, you need to execute the following commands:

export DASHBOARD_NODE_IP=$(kubectl get nodes -o jsonpath="{.items[0].status.addresses[0].address}" -n inlong)
export DASHBOARD_NODE_PORT=$(kubectl get svc inlong-dashboard -o jsonpath="{.spec.ports[0].nodePort}" -n inlong)

And then access http://$DASHBOARD_NODE_IP:$DASHBOARD_NODE_PORT

When dashboard.service.type is set to LoadBalancer, you need to execute the following command:

export DASHBOARD_SERVICE_IP=$(kubectl get svc inlong-dashboard --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}" -n inlong)

And then access http://$DASHBOARD_SERVICE_IP:30080

NOTE: It may take a few minutes for the LoadBalancer IP to be available. You can check the status by running kubectl get svc inlong-dashboard -n inlong -w

The default username is admin and the default password is inlong. Use them to log in to the InLong Dashboard.

Configuration

The configuration file is values.yaml, and the following table lists the configurable parameters of InLong and their default values.

Parameter | Default | Description
--- | --- | ---
timezone | Asia/Shanghai | World time and date for cities in all time zones
images.pullPolicy | IfNotPresent | Image pull policy. One of Always, Never, IfNotPresent
images.<component>.repository | | Docker image repository for the component
images.<component>.tag | latest | Docker image tag for the component
<component>.component | | Component name
<component>.replicas | 1 | Replicas is the desired number of replicas of the given Template
<component>.podManagementPolicy | OrderedReady | PodManagementPolicy controls how pods are created during initial scale up, when replacing pods on nodes, or when scaling down
<component>.annotations | {} | The annotations field can be used to attach arbitrary non-identifying metadata to objects
<component>.tolerations | [] | Tolerations are applied to pods, and allow (but do not require) the pods to schedule onto nodes with matching taints
<component>.nodeSelector | {} | You can add the nodeSelector field to your Pod specification and specify the node labels you want the target node to have
<component>.affinity | {} | Node affinity is conceptually similar to nodeSelector, allowing you to constrain which nodes your Pod can be scheduled on based on node labels
<component>.terminationGracePeriodSeconds | 30 | Optional duration in seconds the pod needs to terminate gracefully
<component>.resources | {} | Optionally specify how much of each resource a container needs
<component>.port(s) | | The port(s) for each component service
<component>.env | {} | Environment variables for each component container
<component>.probe.<liveness\|readiness>.enabled | true | Turn the liveness or readiness probe on or off
<component>.probe.<liveness\|readiness>.failureThreshold | 10 | Minimum consecutive failures for the probe to be considered failed
<component>.probe.<liveness\|readiness>.initialDelaySeconds | 10 | Delay before the probe is initiated
<component>.probe.<liveness\|readiness>.periodSeconds | 30 | How often to perform the probe
<component>.volumes.name | | Volume name
<component>.volumes.size | 10Gi | Volume size
<component>.service.annotations | {} | The annotations field may need to be set when service.type is LoadBalancer
<component>.service.type | ClusterIP | The type field determines how the service is exposed. Valid options are ClusterIP, NodePort, LoadBalancer and ExternalName
<component>.service.clusterIP | nil | ClusterIP is the IP address of the service and is usually assigned randomly by the master
<component>.service.nodePort | nil | NodePort is the port on each node on which this service is exposed when the service type is NodePort
<component>.service.loadBalancerIP | nil | LoadBalancer will get created with the IP specified in this field when the service type is LoadBalancer
<component>.service.externalName | nil | ExternalName is the external reference that kubedns or equivalent will return as a CNAME record for this service; requires the service type to be ExternalName
<component>.service.externalIPs | [] | ExternalIPs is a list of IP addresses for which nodes in the cluster will also accept traffic for this service
external.mysql.enabled | false | Whether to use an external MySQL; if false, InLong uses the built-in MySQL by default
external.mysql.hostname | localhost | External MySQL hostname
external.mysql.port | 3306 | External MySQL port
external.mysql.username | root | External MySQL username
external.mysql.password | password | External MySQL password
external.pulsar.enabled | false | Whether to use an external Pulsar; if false, InLong uses the built-in TubeMQ by default
external.pulsar.serviceUrl | localhost:6650 | External Pulsar service URL
external.pulsar.adminUrl | localhost:8080 | External Pulsar admin URL

The optional components include agent, audit, dashboard, dataproxy, manager, tubemq-manager, tubemq-master, tubemq-broker, zookeeper and mysql.
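
As an illustration, an installation that points InLong at an existing MySQL and Pulsar could override the external.* parameters from the table above (the hostnames here are placeholders for your own endpoints):

helm upgrade inlong --install -n inlong \
  --set external.mysql.enabled=true \
  --set external.mysql.hostname=mysql.example.internal \
  --set external.mysql.port=3306 \
  --set external.mysql.username=root \
  --set external.mysql.password=password \
  --set external.pulsar.enabled=true \
  --set external.pulsar.serviceUrl=pulsar.example.internal:6650 \
  --set external.pulsar.adminUrl=pulsar.example.internal:8080 \
  ./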

Uninstall

To uninstall the release, try:

helm uninstall inlong -n inlong

The above command removes all the Kubernetes components associated with the chart (except the PVCs) and deletes the release. If persistent volume claims were used, you can delete them all, but doing so removes all data:

kubectl delete pvc -n inlong --all

Note: Deleting the PVCs also deletes all data. Please be cautious before doing it.
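
Afterwards, you can verify that no claims remain:

kubectl get pvc -n inlong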

Development

A Kubernetes cluster with Helm is required before development. If you don't have one, kind is recommended: it runs a local Kubernetes cluster inside Docker containers, so it takes very little time to bring the Kubernetes nodes up and down.

Quick start with kind

You can install kind by following the Quick Start section of their official documentation.

After installing kind, you can create a Kubernetes cluster with the provided kind.yml by running:

kind create cluster --config kind.yml

To specify another image, use the --image flag, e.g. kind create cluster --image=..... Using a different image allows you to change the Kubernetes version of the created cluster. To find images suitable for a given release, check the release notes for your kind version (check with kind version), where you'll find a complete listing of the images created for that kind release.
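
For example, pinning the node image (the tag below is only an illustration; pick one from your kind release notes) looks like:

kind create cluster --config kind.yml --image kindest/node:v1.25.3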

After creating the cluster, you can interact with it by running:

kubectl cluster-info --context kind-inlong-cluster

Now, you have a running Kubernetes cluster for local development.
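
When the local cluster is no longer needed, it can be deleted with kind. Assuming the cluster name defined in kind.yml is inlong-cluster (as the kind-inlong-cluster context above suggests), the command would be:

kind delete cluster --name inlong-cluster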

Install Helm

Please follow the installation guide in the official documentation to install Helm.

Install the chart

To create the namespace and install the chart, run:

kubectl create namespace inlong
helm upgrade inlong --install -n inlong ./

It may take a few minutes. Confirm the pods are up:

watch kubectl get pods -n inlong -o wide
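
You can also confirm that the services have been created and see how each one is exposed:

kubectl get svc -n inlong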

Develop and debug

Follow the template debugging guide in the official documentation to debug your chart.

In addition, you can save the rendered templates by running:

helm template ./ --output-dir ./result

Then, you can check the rendered templates in the result directory.
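
Running helm lint before committing changes is another quick way to catch template and values errors:

helm lint ./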

Troubleshooting

We've done our best to make these charts as seamless as possible, but occasionally there are circumstances beyond our control. We've collected tips and tricks for troubleshooting common issues. Please examine these first before raising an issue, and feel free to make a Pull Request!