My personal Support Engineer starter kit - gets you going for 80% of tickets you encounter!
Decided to share the setup that helped me survive my first few months as a Support Engineer! Hopefully this helps anyone who joins in the future. :)
1. GitLab and GitLab Runner test instances
One of the most important items in my toolkit is an Omnibus GitLab instance plus a GitLab Runner test instance. I set these up using dev-resources; you can see my setup in my Terraform configuration file.
When tackling customer tickets, having an Omnibus instance with a registered Runner lets you quickly:
- Test GitLab CI configuration
- Reproduce problems reported by customers
- View configuration, files, etc. and compare them against information provided by customers
- Easily explore the internal workings of GitLab
- Perform some action and run `sudo gitlab-ctl tail` to see what requests are triggered
- Try executing Ruby/Rails code in `sudo gitlab-rails console` and see how it affects data
- Modify Rails code for debugging, fun and profit
- Try out integrations, such as LDAP, JIRA, webhooks, etc.
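As a sketch of the "perform an action and tail the logs" workflow: on a real Omnibus instance you'd run `sudo gitlab-ctl tail` in one terminal while reproducing the issue in another, then search the output for the matching request. The snippet below simulates the search step against a fabricated `production_json.log` entry (the field names mirror GitLab's JSON request logs, but the sample lines and correlation IDs are made up for illustration):

```shell
# On a real instance you would stream logs while reproducing the issue:
#   sudo gitlab-ctl tail gitlab-rails    # tail only the Rails logs
# Here we simulate filtering that output with fabricated log entries.
log=$(mktemp)
cat > "$log" <<'EOF'
{"method":"GET","path":"/api/v4/projects","status":200,"correlation_id":"abc123"}
{"method":"POST","path":"/api/v4/projects/1/pipeline","status":201,"correlation_id":"def456"}
EOF

# Pull out the request matching a correlation ID (e.g. one shown on an error page)
match=$(grep '"correlation_id":"def456"' "$log")
echo "$match"
rm -f "$log"
```

The same `grep` approach works on the real log files under `/var/log/gitlab/` when tailing is too noisy.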
In my experience, having these two test instances lets you work through 80% of customer tickets that come through the queues.
2. Have a copy of GitLab source code available offline for reference
It often helps to have a copy of the GitLab software source code for reference when either following a request's code execution path, an application stack trace or searching for error strings. The primary ones to get started with are:
- GitLab Rails
- Omnibus GitLab (notably the Chef recipes behind most `gitlab-ctl` commands and the `gitlab.rb` template)
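With local clones on hand, a recursive `grep` is often the quickest way to trace an error string back to its source. The paths and error message below are stand-ins (a tiny fabricated tree rather than a real GitLab clone), just to show the mechanics:

```shell
# With real clones you'd search something like:
#   grep -rnI "some error string" ~/src/gitlab ~/src/omnibus-gitlab
# Simulated here with a tiny stand-in source tree:
src=$(mktemp -d)
mkdir -p "$src/lib"
cat > "$src/lib/example.rb" <<'EOF'
raise "Commit could not be found" unless commit
EOF

# -r recurse, -n show line numbers, -I skip binary files
hits=$(grep -rnI "Commit could not be found" "$src")
echo "$hits"
rm -rf "$src"
```

From the matching file and line you can jump straight into the surrounding code path or its history.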
You can easily check out a specific version of the code by tag, so you can be sure you're looking at the same code the customer is running. For example:
git checkout v11.10.8-ee
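GitLab tags follow the `v<version>-ee` scheme shown above. To illustrate the mechanics without a full clone, the snippet below builds a throwaway repo, tags a commit, and checks that tag out (the repo and tag names are invented for the example):

```shell
# Stand-in repo to illustrate checking out by tag. In a real clone you'd run:
#   git fetch --tags && git checkout v11.10.8-ee
repo=$(mktemp -d)
cd "$repo"
git init -q .
git -c user.email=a@b -c user.name=ci commit -q --allow-empty -m "v1 release"
git tag v1.0.0
git -c user.email=a@b -c user.name=ci commit -q --allow-empty -m "newer work"

# Check out the tagged version (detached HEAD, matching what the customer runs)
git checkout -q v1.0.0
current=$(git log -1 --format=%s)
echo "$current"
```

Checking out a tag leaves you in a detached HEAD state, which is fine for read-only reference.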
Another helpful tip is to look at `git blame` and the commit history for specific files to see when the file or specific lines were last changed. This can point you to the issue or MR the commit belonged to, or help you determine whether a recent change is to blame for a customer problem. Browsing the file on GitLab.com is particularly helpful for this!
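To make the blame/history workflow concrete, here's a throwaway repo where one line changes across two commits (the file name, setting, and commit messages are invented for the example):

```shell
# Stand-in repo: a config line changes between two commits
repo=$(mktemp -d)
cd "$repo"
git init -q .
printf 'timeout = 30\n' > settings.rb
git add settings.rb
git -c user.email=a@b -c user.name=ci commit -q -m "Add default timeout"
printf 'timeout = 60\n' > settings.rb
git -c user.email=a@b -c user.name=ci commit -q -am "Raise timeout to 60s"

# Which commit last touched line 1, and why?
git blame -L1,1 settings.rb
last_change=$(git log -1 --format=%s -- settings.rb)
echo "$last_change"
```

On a real clone, the commit hash from `git blame` leads you to the MR (and usually the issue) that explains why the line changed.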
3. Check the handbook and docs!
- GitLab Support Handbook, notably the workflows page
- GitLab Support page, notably our Statement of Support to see what's in scope and what's out of scope
- Product stages, groups, and categories page, to see which product group is responsible for a certain area of GitLab
4. Search is your friend
You can usually find what you need using the web search engine of your choice and searching:
site:gitlab.com <search/error string>
site:docs.gitlab.com <search/error string>
Don't forget that Zendesk is also a rich source of information, particularly for edge cases which few users encounter. Search for error strings to view past tickets submitted (and hopefully solved!). The Zendesk search documentation is helpful for when you need to filter your search results, say to a particular organisation or tag.
5. Try something a little more advanced
- Cody's gitlabsos utility gathers logs and system information quickly. Particularly helpful when you suspect a problem's root cause is at the OS level (NFS mounts, CPU/IO/memory issues, etc.). Try it out on your Omnibus test instance to get a feel for what it does and look at the script to see what commands it runs.
- Adam's Support Tool Kit gives you an easy way to spin up local VMs running any specific version of GitLab, and the Quality Team's GitLab Environment Toolkit lets you deploy GitLab Omnibus at scale. Both are particularly helpful when you need to quickly test something on older versions of GitLab.
P.S.: Some time ago I recall some support folks (Lyle comes to mind) talking about having a support blog. I decided to experiment with using the issue tracker as a substitute for a blog as a boring solution. I've created a support-blog label that we can use to tag and filter for posts like these. Give it a try or leave some feedback on this approach!