Implement Watch API for Kubernetes API calls
Release notes
The cluster UI integration on the environment pages allows developers and application operators to get a quick glance at the status of their currently running applications without leaving GitLab. Until now, the status was fetched in a one-off request when the UI loaded, which made it hard to track deployment progress: it required a page refresh. This version of GitLab upgrades the underlying connection to use the Kubernetes Watch API, providing near real-time updates of the cluster state in the GitLab UI. See the documentation for how to configure the cluster UI.
Proposal
Use Watch API instead of polling on the frontend for real-time data updates on the Kubernetes dashboard.
For more details, see https://kubernetes.io/docs/reference/using-api/api-concepts/#efficient-detection-of-changes.
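For context: with `watch=1`, the API server keeps the connection open and streams one JSON-encoded watch event per line. Each event carries a `type` (`ADDED`, `MODIFIED`, `DELETED`, or `BOOKMARK`/`ERROR`) and the affected `object`. A stream of namespace events might look like this (illustrative values):

```json
{"type":"ADDED","object":{"kind":"Namespace","apiVersion":"v1","metadata":{"name":"default","resourceVersion":"12345"}}}
{"type":"MODIFIED","object":{"kind":"Namespace","apiVersion":"v1","metadata":{"name":"default","resourceVersion":"12346"}}}
```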
Example usage (not production-ready code):
fetch('<k8s-proxy-base-url>/api/v1/namespaces?watch=1', {
  credentials: 'include',
  headers: {
    'X-Csrf-Token': document.head.querySelector('meta[name="csrf-token"]').content,
    'GitLab-Agent-Id': '<agent-id>',
  },
}).then((response) => {
  const stream = response.body.getReader()
  const utf8Decoder = new TextDecoder('utf-8')
  let buffer = ''

  // wait for an update and prepare to read it
  return stream.read().then(function onIncomingStream({ done, value }) {
    if (done) {
      console.log('Watch request terminated')
      return
    }
    buffer += utf8Decoder.decode(value)
    const remainingBuffer = findLine(buffer, (line) => {
      try {
        const event = JSON.parse(line)
        const pod = event.object
        console.log('PROCESSING EVENT: ', event.type, pod.metadata.name)
      } catch (error) {
        console.log('Error while parsing', line, '\n', error)
      }
    })
    buffer = remainingBuffer

    // continue waiting & reading the stream of updates from the server
    return stream.read().then(onIncomingStream)
  })
})
function findLine(buffer, fn) {
  const newLineIndex = buffer.indexOf('\n')
  // if the buffer doesn't contain a new line, do nothing
  if (newLineIndex === -1) {
    return buffer
  }
  // found a new line! execute the callback on the chunk before it
  const chunk = buffer.slice(0, newLineIndex)
  const newBuffer = buffer.slice(newLineIndex + 1)
  fn(chunk)
  // there could be more lines, so check again
  return findLine(newBuffer, fn)
}
Things we need to consider to make this production-ready:
- `buffer += utf8Decoder.decode(value)` is not going to work if the stream delivered only part of a UTF-8-encoded code point, i.e. "half a character".
- This code needs to handle disconnects. That likely means resuming from the last seen resource version, not "from the beginning". Older resource versions may not be available (garbage-collected by etcd), and sometimes that happens right after a disconnect, i.e. it's impossible to just continue where the connection dropped. Informers handle this with list+watch, maintaining an internal list of objects so they can generate events reliably even across disconnects.
- Investigate whether JSON values can have embedded newline characters (maybe the Go JSON encoder encodes them as `\n`, but it doesn't have to; it's not a requirement). That would throw a spanner in the works.
- We need to make it work with the cluster-client library.
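For the first point, `TextDecoder` already supports incremental decoding: passing `{ stream: true }` to `decode()` makes it buffer an incomplete trailing code point until the next chunk arrives. A minimal standalone sketch (not tied to the fetch example above):

```javascript
// A multi-byte character ('é' is 2 bytes in UTF-8) split across two chunks,
// as can happen when reading arbitrary chunks from a network stream.
const bytes = new TextEncoder().encode('été')
const chunk1 = bytes.slice(0, 4) // ends mid-code-point
const chunk2 = bytes.slice(4)

// Without { stream: true }, decoding chunk1 alone would emit a replacement
// character for the dangling byte; with it, the decoder waits for more input.
const utf8Decoder = new TextDecoder('utf-8')
let buffer = ''
buffer += utf8Decoder.decode(chunk1, { stream: true })
buffer += utf8Decoder.decode(chunk2, { stream: true })
console.log(buffer) // 'été'
```

In the example above, this amounts to calling `utf8Decoder.decode(value, { stream: true })` instead of `utf8Decoder.decode(value)`.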