Add deployment and service files for Jitsu stack

Sam Kerr requested to merge jitsu-deployment into main

Addresses gitlab-org/gitlab#369024 (closed)

This MR adds manifest files for deploying Jitsu in a Kubernetes cluster, following the steps from the Scale with Jitsu page.

Specifically, these manifests will create:

  • A Deployment of Jitsu Server with 3 replicas
  • A Deployment of Jitsu Configurator with 1 replica
  • A Deployment of Redis for Jitsu with 1 replica
  • A Deployment of Redis for Jitsu user recognition with 1 replica
  • Several Service objects to expose all the relevant ports between the different services

Testing this MR

Getting the cluster set up

To quickly apply all manifest files:

cd jitsu/manifests && find . -name "*.yaml" | xargs -I^ kubectl apply -f ^
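
If the manifests sit directly in jitsu/manifests (no subdirectories), kubectl can also apply the whole directory in one shot, which avoids the find pipeline:

kubectl apply -f jitsu/manifests/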

To quickly delete all applied manifests:

cd jitsu/manifests && find . -name "*.yaml" | xargs -I^ kubectl delete -f ^
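
After applying, you can confirm everything came up before moving on. The Deployment names below assume the manifests call them jitsu-server and jitsu-configurator; adjust if yours differ:

kubectl get pods
kubectl rollout status deployment/jitsu-server
kubectl rollout status deployment/jitsu-configurator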

Exposing ports for access outside the cluster

Jitsu Configurator

To interact with the Configurator, you need to expose its port to the outside world. You can use a Kubernetes IDE to do this, or you can use the port-forward command:

kubectl port-forward service/jitsu-configurator 7000:7000
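
Before opening a browser, you can sanity-check the tunnel; any HTTP response here means the forward is working:

curl -I http://localhost:7000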

Jitsu Server

Now we need to ensure that you can reach the Jitsu Server as well, not just the Configurator. The servers listen on port 8001 on one IP, while the Jitsu Configurator listens on port 7000 on a different IP. As we did above, forward port 8001 so it can be accessed from outside the cluster.

kubectl port-forward service/jitsu-server 8001:8001
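
A quick way to confirm you are reaching the Server (and not the Configurator) is to request the tracking library, which the Server serves at /s/lib.js:

curl -I http://localhost:8001/s/lib.js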

Testing that Jitsu is working and configured correctly

The next step in testing this MR is checking that you can successfully log in to Jitsu, configure it to connect to ClickHouse (or another destination), and receive data.

Log in to Jitsu

Assuming you have port-forwarded the jitsu-configurator service on port 7000, go to http://localhost:7000 (or wherever your cluster is hosted, if not localhost) and you should see a sign-up screen. Fill out the prompts and remember the username and password. Once registered, you should be able to log in successfully.

Connect to ClickHouse

This step assumes that you have a ClickHouse instance running at http://clickhouse:8123 inside your cluster.

Add a destination in Jitsu if you are not prompted to do so automatically. Fill out the form with the destination http://clickhouse:8123 and the database jitsu as shown below. Then click Test Connection. You should get a Successfully connected! message at the top of the screen. If you get an error, look at the logs in the jitsu-configurator pod to see what went wrong.

[screenshot: ClickHouse destination configuration form]
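
To pull those logs, assuming the Deployment is named jitsu-configurator:

kubectl logs deployment/jitsu-configurator --tail=100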

Send data to Jitsu

This step lets you test that your Jitsu instance can receive events by sending it a small event.

First you need to find out what your reporting token looks like. You can find it under the API Keys screen:

[screenshot: API Keys screen showing the JS key]

Note that we want the JS Key, which looks like js.c1xo1xee7cgblai5klwqxv.4mj0bbo195o7ko77tcseev or similar.

Save this file as test.html locally:

<html>
  <head>
    <script src="http://localhost:8001/s/lib.js"
            data-key="<YOUR TOKEN>"
            data-init-only="false"
            defer></script>
    <script>window.jitsu = window.jitsu || (function(){(window.jitsuQ = window.jitsuQ || []).push(arguments);})</script>
  </head>
  <body></body>
</html>

You'll note that in the HTML file above, we used port 8001 rather than 7000. This is so that we are retrieving the JS file from the Jitsu Server, not the Jitsu Configurator.

Open test.html locally and ensure that the lib.js file was successfully retrieved. You can verify this in the Network tab of Chrome DevTools.
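
Alternatively, you can skip the browser and POST an event directly to the Server. The endpoint and payload below are based on my reading of Jitsu's event API and may vary by version, so treat this as a sketch rather than the canonical call:

curl -X POST "http://localhost:8001/api/v1/event?token=<YOUR TOKEN>" \
    -H "Content-Type: application/json" \
    -d '{"event_type": "test_event"}'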

Once that is done, you should see the event in Jitsu:

[screenshot: test event appearing in Jitsu]

Future Work

Ingress

This MR deploys 3 replicas of Jitsu Server, but it does not incorporate a load-balancing Ingress. One will need to be added to ensure that load is distributed evenly across the replicas.
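
As a rough sketch of what that could look like (the hostname is a placeholder, and the cluster needs an ingress controller installed for this to serve traffic):

kubectl create ingress jitsu-server --rule="jitsu.example.com/*=jitsu-server:8001"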

Resource limits

These manifests include no resource limits on any containers. This means they could potentially consume more resources than intended and starve other containers. This should be investigated and addressed before a production deployment.
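
For quick experimentation before baking values into the manifests, limits can be set in place; the numbers below are arbitrary placeholders, not tuned recommendations:

kubectl set resources deployment/jitsu-server --requests=cpu=250m,memory=256Mi --limits=cpu=500m,memory=512Mi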

Hardcoded credentials

These manifests use a hardcoded admin token for communication between the Jitsu Server and Jitsu Configurator containers. This should be managed differently in a production environment, most likely using a Kubernetes Secret object.
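
As a sketch of that approach, the token could be created as a Secret (the value below is a placeholder) and then referenced from the container env via secretKeyRef instead of being hardcoded:

kubectl create secret generic jitsu-admin-token --from-literal=token="<RANDOM TOKEN>"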
